Message boards : Number crunching : Discussion on increasing the default run time
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1677 Credit: 17,759,776 RAC: 22,869 |
Tasks finally completed.

rb_03_31_20049_19874__t000__3_C1_SAVE_ALL_OUT_IGNORE_THE_REST_904837_1472_0
Run time 15 hours 36 min 43 sec
CPU time 15 hours 29 min 42 sec
Validate state Valid
Credit 605.87

<core_client_version>7.6.33</core_client_version>
<![CDATA[
<stderr_txt>
command: projects/boinc.bakerlab.org_rosetta/rosetta_4.15_windows_x86_64.exe -run:protocol jd2_scripting @flags_rb_03_31_20049_19874__t000__3_C1_robetta -silent_gz -mute all -out:file:silent default.out -in:file:boinc_wu_zip input_rb_03_31_20049_19874__t000__3_C1_robetta.zip -nstruct 10000 -cpu_run_time 57600 -watchdog -boinc:max_nstruct 600 -checkpoint_interval 120 -database minirosetta_database -in::file::zip minirosetta_database.zip -boinc::watchdog -run::rng mt19937 -constant_seed -jran 3920361
Starting watchdog...
Watchdog active.
======================================================
DONE :: 13 starting structures 55782.3 cpu seconds
This process generated 13 decoys from 13 attempts
======================================================
BOINC :: WS_max 1.75866e+09
BOINC :: Watchdog shutting down...
19:06:24 (8204): called boinc_finish(0)
</stderr_txt>
]]>

rb_03_31_20031_19865__t000__0_C1_SAVE_ALL_OUT_IGNORE_THE_REST_904757_832_0
Run time 15 hours 45 min 21 sec
CPU time 15 hours 40 min 23 sec
Validate state Valid
Credit 1,215.60

<core_client_version>7.6.22</core_client_version>
<![CDATA[
<stderr_txt>
command: projects/boinc.bakerlab.org_rosetta/rosetta_4.15_windows_intelx86.exe -run:protocol jd2_scripting @flags_rb_03_31_20031_19865__t000__0_C1_robetta -silent_gz -mute all -out:file:silent default.out -in:file:boinc_wu_zip input_rb_03_31_20031_19865__t000__0_C1_robetta.zip -nstruct 10000 -cpu_run_time 57600 -watchdog -boinc:max_nstruct 600 -checkpoint_interval 120 -database minirosetta_database -in::file::zip minirosetta_database.zip -boinc::watchdog -run::rng mt19937 -constant_seed -jran 1163139
Starting watchdog...
Watchdog active.
======================================================
DONE :: 31 starting structures 56423.5 cpu seconds
This process generated 31 decoys from 31 attempts
======================================================
BOINC :: WS_max 9.19749e+08
BOINC :: Watchdog shutting down...
19:36:44 (4984): called boinc_finish(0)
</stderr_txt>
]]>
Grant Darwin NT |
rzlatic Send message Joined: 20 Nov 07 Posts: 3 Credit: 327,897 RAC: 0 |
rzlatic, it appears you are seeing tasks running more than 4 hours past the runtime preference, then ended by the watchdog on a Linux system running the i686 application. Please see the discussion here.
indeed, tasks generating the problem were starting with "12v1n" and were run by i686-pc-linux (as seen here: https://imgur.com/WaDeO14). I created a "cc_config.xml" config file in /var/lib/boinc/ with the suggested settings, restarted the boinc client, and there seem to be no i686-pc-linux (32-bit) tasks now. We'll see how it goes. Thanks, great community and support. |
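For reference, one common way to stop a 64-bit Linux client from fetching the 32-bit (i686) application is the `<no_alt_platform>` option in `cc_config.xml`. The thread does not show the exact settings rzlatic used, so treat the following as a sketch, not his actual file:

```xml
<!-- /var/lib/boinc/cc_config.xml (assumed path from the post above) -->
<cc_config>
  <options>
    <!-- Don't request applications built for alternate platforms,
         e.g. the i686 app on an x86_64 host -->
    <no_alt_platform>1</no_alt_platform>
  </options>
</cc_config>
```

After saving the file, restart the client or run `boinccmd --read_cc_config` so the change takes effect; already-downloaded i686 tasks are unaffected.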
MeeeK Send message Joined: 7 Feb 16 Posts: 31 Credit: 19,737,304 RAC: 0 |
Hi, Did something change in the last few days or weeks? I lost 7,000 points per day on average. Dropped from 34,000 a day to 27,000 and below. That's a huge gap. I changed nothing on my systems. They are running 24/7. |
Bryn Mawr Send message Joined: 26 Dec 18 Posts: 390 Credit: 12,073,013 RAC: 4,827 |
Hi, I can’t see that far back in your results log, but you’re losing quite a few credits through missed deadlines - you might find it better to reduce your buffers to maybe 0.5 days + 0.5 days or even less. |
Sid Celery Send message Joined: 11 Feb 08 Posts: 2121 Credit: 41,179,074 RAC: 11,480 |
Did something change in the last few days or weeks? To be fair, cancelled tasks that have passed deadline are only deleted if they're unstarted - running tasks are allowed to continue, so no credit loss. But it's certainly true that the buffer is too large to meet deadline so should be significantly cut back. Because even if the next host who receives them runs them within deadline they'll be so long after the batch was released they'll be no good to the project. They're effectively being made useless at the point of download. And the runtime is unnecessarily low so the project isn't getting full value for them either tbh But I'd rather point to the credits awarded here. Since the new version, credits have been a mess. Anyone seeking credits here should know they're going to be disappointed. My point being, I don't think lower credits are indicative of a problem with the tasks themselves |
Bryn Mawr Send message Joined: 26 Dec 18 Posts: 390 Credit: 12,073,013 RAC: 4,827 |
Did something change in the last few days or weeks? As well as the hundreds of error tasks there were quite a few invalid tasks that had run but been discarded and given no credits because they were received too late. |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1677 Credit: 17,759,776 RAC: 22,869 |
From the "Thank you" thread: To accommodate this, the "watchdog" timeout has been extended from the normal 4 hours to 10 hours.
A big change. And given the gaps between checkpoints even on my fairly powerful system -40min and more- and reports from others of even longer periods of no checkpointing (on Windows systems, which don't have the Linux i686 application issue), I would hope the programmers are going to look very hard into increasing the number of points where a Task can checkpoint. Otherwise even powerful systems that run more than just Rosetta will struggle to complete a Task due to resource share settings switching between projects, and less powerful Rosetta-only systems will struggle to reach a checkpoint before there is a need for BOINC to suspend computation. And those that aren't on for long periods of time, or have heavy non-BOINC use while crunching, will have no chance of completing a Task if it has to start from the last checkpoint after an interruption.
With an 8 hour (now up to 18 hours) Runtime, losing 5min here or there isn't a big issue (annoying, but not a big issue). But it's another matter to lose 40min, or in many cases 2hrs and more, if a Rosetta Task gets interrupted (and unless you have a massive RAM to cores/threads ratio, "Leave non-GPU tasks in memory while suspended" isn't an option (what is the default "Page/swap file: use at most %"?)).
Yes, most Tasks won't run for the Target time + 10hrs. But those that do will drive people away if they spend hours doing work, only to lose it all & have to start again. Over & over again. Grant Darwin NT |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
@Grant, you appear to be one of the lucky few that received some of those WUs from two weeks ago when the default runtime was bumped to 16 hours. So the WU thought it was running normally. This explains why the watchdog did not step in. As posted today by admin, the timeout used by the watchdog has been extended from the normal 4 hours to 10 hours. So, on new WUs, where the watchdog is set to 10 hours, that means a given WU may run 10 CPU hours beyond the runtime preference before the watchdog will step in and end the WU. This is because of the extremely challenging protein models under study now. Rosetta Moderator: Mod.Sense |
Admin Project administrator Send message Joined: 1 Jul 05 Posts: 4805 Credit: 0 RAC: 0 |
Extending the watchdog has no impact on the checkpointing issue. These are separate issues. Extending the watchdog will prevent errors for jobs that have a longer than usual run time per model, such as a 2000 residue protein, for protocols that do not have checkpoints of appropriate intervals. Although somewhat rare, these type of jobs do exist from Robetta. We should definitely try to address the checkpointing issue by adding more checkpoints to the various protocols but this will take development time. |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1677 Credit: 17,759,776 RAC: 22,869 |
Extending the watchdog has no impact on the checkpointing issue. These are separate issues. Extending the watchdog will prevent errors for jobs that have a longer than usual run time per model, such as a 2000 residue protein, for protocols that do not have checkpoints of appropriate intervals. Although somewhat rare, these types of jobs do exist from Robetta. We should definitely try to address the checkpointing issue by adding more checkpoints to the various protocols but this will take development time.
Thank you for the response. Given the increase in the Watchdog time, I think that addressing the checkpoint issue should get bumped up the list of things that need to be done. As I said, losing a few minutes' work is annoying, but it's not a major issue. But to lose 2, 4, 8, 12, 16 hours of processing? That will drive people away.
If Rosetta is using a grace period on work not returned by the deadline, it may be worth considering increasing it to allow for the increased Watchdog time (and please, if possible, see that Tasks aren't resent till the grace period extension has passed, ie don't send them till they're needed. Reduce people's error count, reduce the network bandwidth used, reduce the Scheduler load - not a big reduction, but every little bit counts as the number of systems crunching grows). Grant Darwin NT |
Sid Celery Send message Joined: 11 Feb 08 Posts: 2121 Credit: 41,179,074 RAC: 11,480 |
From the "Thank you" thread: To accommodate this, the "watchdog" timeout has been extended from the normal 4 hours to 10 hours. A big change.
I think this is a strong point, so +1. People who aren't running 24/7, or who run multiple projects with project switching at the default interval and don't hold tasks in memory, are going to have significant issues. I do run 24/7 and I'd already changed my "switch between tasks every xx minutes" from the default 60 to 999, but even this might not be enough. Might be worth adding another 9. |
MeeeK Send message Joined: 7 Feb 16 Posts: 31 Credit: 19,737,304 RAC: 0 |
I am running the same settings since I bought these CPUs. Saved a day plus an additional 2 days. Never had this many problems. I saved 3 days of work to have some reserve in case of problems with my ISP, so the computers can do their work offline until the internet connection is back. There are 150 deadline tasks in my stats right now. Never seen that many before. Think there was something wrong with too-short deadlines, but they all have not been started, so no lost work. My two Ryzen 3600s dropped from 17,000 points each to approximately 12,000 and 13,000. That's not normal. Will check the tasks later. Maybe I can find a problem. |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
You will correct me if I'm mistaken, but doesn't the BOINC Manager wait until a WU reaches a checkpoint before suspending it to work on another project? This change was made about a decade ago, because a task that hasn't reached a checkpoint, especially if tasks are not kept "in memory", will lose what it has been working on. True for all BOINC projects. Picture a machine with 4 CPUs, running 3 BOINC projects, it could easily have 8 different WUs that have been worked on during the day. And even if WUs are kept "in memory" when suspended, you are going to lose progress on all 8 of those WUs when you turn off the machine. So, I don't believe that "switch between tasks every xx minutes" setting really has much effect anymore. In fact I thought it used to include the phrase "at most" every xx minutes. Note: I tend to place "in memory" in quotes, because memory used by inactive threads is always swapped out if there is an active thread that needs memory. So long as you have a swap file of sufficient size, I highly recommend checking the box for "Leave non-GPU tasks in memory while suspended". Rosetta Moderator: Mod.Sense |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1677 Credit: 17,759,776 RAC: 22,869 |
I am running the same settings since i bought these CPUs.
You have as many Tasks that are Errors because they missed the deadline as are in your Valid list. That is a problem. Try instead:
Store at least 1 days of work
Store up to an additional 0.02 days of work
That will result in more than a day's work until deadlines start to settle down. Then you could bump up "Store at least xx days of work" to 1.5 if you feel you need more, without having most of the work you get being Errors due to missed deadlines, which is what is happening at present. Grant Darwin NT |
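The buffer settings Grant refers to are BOINC's standard computing preferences. Besides the Manager's UI, they can be set locally with a `global_prefs_override.xml` file in the BOINC data directory; the sketch below maps his suggested values onto the corresponding elements (values are his suggestion, not project-mandated settings):

```xml
<!-- global_prefs_override.xml - local override of web preferences (sketch) -->
<global_preferences>
  <!-- "Store at least X days of work" -->
  <work_buf_min_days>1.0</work_buf_min_days>
  <!-- "Store up to an additional X days of work" -->
  <work_buf_additional_days>0.02</work_buf_additional_days>
</global_preferences>
```

The client picks the file up after a restart or a `boinccmd --read_global_prefs_override`. With 3-day deadlines, the sum of the two values should stay well under the deadline or tasks will time out before they start.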
MeeeK Send message Joined: 7 Feb 16 Posts: 31 Credit: 19,737,304 RAC: 0 |
I am running the same settings since i bought these CPUs.You have as many Tasks that are an Error because they miss the deadline as are in your Valid list. That is a problem.
that isn't the problem at all. All the WUs cancelled because of the deadline have not been started, so my CPUs didn't waste a second of workload on them. There have been 6 or 7 WUs that had an error while being worked on and were aborted. But these 6 or 7 tasks don't make me lose so many points.
But I just had an idea right at this moment. Guess it's because of the way average points are calculated: I have 150 jobs finished with points, that would be my 34,000 points, BUT I also have 150 deadline tasks with 0 points. So it's 300 WUs with XXX thousand points on average. Don't know the exact numbers atm. Did somebody else have problems with deadlines? I didn't read all the posts. |
Mod.Sense Volunteer moderator Send message Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0 |
Did somebody else have problems with deadlines? I didn't read all the posts.
There have been several issues lately that have caused machines to load more work than can be processed within the 3-day deadlines that now rule the day.
Rosetta Moderator: Mod.Sense |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1677 Credit: 17,759,776 RAC: 22,869 |
that isn't the problem at all.
But it is a problem. You did waste Rosetta's bandwidth & server resources by downloading them, having them sit there doing nothing for days, then having them time out & having to be sent out again to another system that will process them. The reason for the shorter deadlines is that the Project wants (needs) the results back sooner. If you are not going to process them, then why download them? If you set your cache to a more realistic value then everyone benefits.
Did somebody else have problems with deadlines? I didn't read all the posts.
Many other people have problems with deadlines, but none as bad as yours. Grant Darwin NT |
MeeeK Send message Joined: 7 Feb 16 Posts: 31 Credit: 19,737,304 RAC: 0 |
Did somebody else have problems with deadlines? I didn't read all the posts.
I think that might have caused my "problems". I've now changed the settings to 2 days and the WU runtime to 6 hours. Will check over the next few days if there is a change. Do you think I should upgrade the RAM to 32GB because of higher usage? I have two 12-thread Ryzen 5 3600s with 16GB each. That has always been enough so far. |
Admin Project administrator Send message Joined: 1 Jul 05 Posts: 4805 Credit: 0 RAC: 0 |
Another recent issue has been that a batch of jobs has been finishing earlier than expected. i.e. ~3 hours. We've hopefully addressed this issue for future batches of similar runs (cyclic peptide jobs). |
MeeeK Send message Joined: 7 Feb 16 Posts: 31 Credit: 19,737,304 RAC: 0 |
I think because of that, my clients downloaded too many WUs. The clients are downstairs in the basement. I don't watch them every day, so I didn't notice that. |
©2024 University of Washington
https://www.bakerlab.org