Message boards : Number crunching : Default Run Time
Author | Message |
---|---|
Mike Gelvin Send message Joined: 7 Oct 05 Posts: 65 Credit: 10,612,039 RAC: 0 |
The web page for the Default Run Time entry states: "Target CPU run time (not selected defaults to 4 hours)". I have never chosen a run time, allowing the project to choose what might be best for me, so I assume the default is in play here (4 hours). However, it actually appears to be set at 3 hours across all my machines. Is this an error? |
David E K Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 1 Jul 05 Posts: 1480 Credit: 4,334,829 RAC: 0 |
It does look like an error on the web page. I believe Rhiju may have switched the default to 3 hours. I will change the page to reflect that. |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
So THAT's why so many people's WUs run for about 10,000 seconds! I had assumed it was just common (REALLY REALLY common) to hit that point and calculate that the next model would take more than an hour. Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/ |
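A minimal sketch of the stopping logic implied above, under the assumption that the client starts another model only if it is likely to fit within the target CPU run time (the function names and numbers are illustrative, not from the Rosetta@home source):

```python
# Illustrative sketch, not actual Rosetta@home code: decide whether a work
# unit should start another model once the target CPU run time is near.

def should_start_next_model(elapsed_cpu_s, target_cpu_s, avg_model_s):
    """Start another model only if it is expected to finish within the target."""
    return elapsed_cpu_s + avg_model_s <= target_cpu_s

# With a 3-hour (10,800 s) target and models averaging about an hour each,
# a work unit often stops near 10,000 s because one more model would overshoot.
print(should_start_next_model(elapsed_cpu_s=9_900,
                              target_cpu_s=10_800,
                              avg_model_s=3_600))   # False -> report the WU now
```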
neil.hunter14 Send message Joined: 9 May 06 Posts: 10 Credit: 278,867 RAC: 0 |
I don't quite follow the logic of being able to change the CPU Run Time. I set mine at 12 hours yesterday, and sure enough, the WU ran for about that length of time. Now I have it set to 2 hours, and the model runs for two hours. My questions are: Do I get more credit for longer run times? If the CPU Run Time is too short, am I wasting part of the model, which will never then be computed? Should I leave it at the 4-hour default? Surely a slower PC will take longer to compute, so the amount of number crunching my PC can do in an hour might take 4 hours on an older P3 machine. What is the reason for being able to change the run time? Neil. |
anders n Send message Joined: 19 Sep 05 Posts: 403 Credit: 537,991 RAC: 0 |
I don't quite follow the logic of being able to change the CPU Run-Time. I set mine at 12 hours yesterday, and sure enough, the WU ran for about that length of time. Now I have it set to 2 hours. And the model runs for two hours. This was created to keep crunchers on modems happy: fewer MB to download. It also helps the server by reducing the number of uploads/downloads per computer. The project considers an 8-hour setting best for them, IF it works fine on your computers. Anders n |
tralala Send message Joined: 8 Apr 06 Posts: 376 Credit: 581,806 RAC: 0 |
My question is: Do I get more credit for longer run-times? If the CPU Run Time is too short, am I wasting part of the model, that will never then be computed? You get more credit for longer run times, but the run-time preference does not affect your credit rate at all: for 12 hours you get 6 times the credit you get for 2 hours (on the same machine). Each WU computes as many models as it can, each from a unique starting position, within the run-time preference. Since there is an effectively unlimited number of possible starting positions, it makes no difference to the scientific output whether you do 10 models each in 6 different WUs or 60 models in one WU. The only difference is bandwidth consumption. The shorter run times are available for those who like shorter WUs and as a safety net against failing WUs (with the watchdog now implemented, this is probably no longer of much importance). |
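A rough sketch of the arithmetic behind this (the time per model and credit rate below are made-up numbers, not the real credit formula): the run-time preference changes how many models one WU produces, not the credit earned per CPU-hour, so total credit simply scales with total run time.

```python
# Assumed numbers for illustration only: ~20 minutes per model and a flat
# hypothetical credit rate. The point is the proportionality, not the values.

def models_completed(run_time_hours, hours_per_model=1/3):
    return int(run_time_hours / hours_per_model)

def total_credit(run_time_hours, credit_per_hour=10.0):
    return run_time_hours * credit_per_hour

print(models_completed(2),  total_credit(2))    # 6 models,  20.0 credits
print(models_completed(12), total_credit(12))   # 36 models, 120.0 credits (6x)
```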
dcdc Send message Joined: 3 Nov 05 Posts: 1832 Credit: 119,874,845 RAC: 926 |
Without wanting to drag this thread off-topic, regarding the cache-size/FSB/CPU-core etc. effects on Rosetta: is it possible to run the same job with the same seed on two computers to get a comparison? If so, how is this done? |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
Yes! Rosetta has a property that can be set to establish the seed. Unfortunately I don't recall the property name or the file that contains it. |
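The exact property name isn't given in this thread, so the sketch below only illustrates the general idea (the seed values and function names are assumptions): with a fixed seed, two machines walk through the same sequence of random moves, so their models, and therefore their run times, can be compared directly.

```python
# Illustrative only: a fixed seed makes the sequence of "random" moves
# reproducible, which is what allows an apples-to-apples benchmark.

import random

def run_model(seed, n_moves=5):
    rng = random.Random(seed)              # same seed -> same move sequence
    return [rng.random() for _ in range(n_moves)]

print(run_model(42) == run_model(42))      # True: identical trajectories
print(run_model(42) == run_model(43))      # False: different starting points
```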
Mike Gelvin Send message Joined: 7 Oct 05 Posts: 65 Credit: 10,612,039 RAC: 0 |
Without wanting to drag this thread off-topic, regarding the cache-size/FSB/CPU core etc... effects on rosetta: I'm still trying to understand what this means. It appears that subsequent models are not "whole new attempts". If a model gets generated and has a terrible energy (I'm not sure what that means), then why continue looking in that neighborhood? Wouldn't it be better either to look in a whole new place each time (using a previous analogy), or, if the first attempt is not as good as some "X", not to try any more near here? This "X" could be fed in with the work unit as feedback from other models that have started to "zero in" on the answer. Not sure I'm making sense here, just some questions. |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
I'm still trying to understand what this means. It appears that it means that subsequent models are not "whole new attempts". If a model gets generated and has a terrible energy (not sure what that means)... then why continue looking in that neighborhood? Wouldn't either looking in a whole new place each time (using a previous analogy)... or if the first attempt is not as good as some "X" then don't try anymore near here. This "X" could be fed with the workunit and be a feedback from other models that have started to "zero in" on the answer. Not sure I'm making sense here, just some questions. Basically, each new model run IS a new start. It gets a different random number and takes a new perspective on the protein. The moderator was addressing the question about running through an identical WU with an identical random number, because they wanted an accurate, repeatable benchmark to measure. But this isn't what happens by default; your model runs will each be different. As for "don't try anymore near here": I believe that sort of logic is built into the algorithm as it makes each model. But there are cases where what starts out looking like a really terrible model suddenly "drops into a deep well" and looks really good. It's like the landscape-and-elevation analogy Dr. Baker uses: you climb a huge volcano, and it looks worse and worse every step of the way... then suddenly you drop into the crater and find it's lower (in energy, i.e. better) than the base of the volcano where you started. In other words, it was worth the climb to discover it! If they could find a rule to identify accurately, ahead of time, when it will be worth the climb and when it's a waste of time, they would build that logic into the program. This is the sort of thing they're working towards all the time. The hope of the whole project is that rules of this sort can be devised which allow the same answer to be discovered with fewer and fewer model runs over time. As for your idea of immediate feedback used as guidance for future model runs, I believe to some extent they do that at the server level as they devise some of the new WUs, but I haven't seen much detail on it. |
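A minimal sketch of the "worth the climb" idea in that analogy, written as a generic Metropolis-style acceptance rule (the move set, temperature, and energy values here are toy assumptions, not Rosetta's actual search): occasionally accepting an uphill, worse-energy move is what lets the search cross a ridge and drop into a deeper well it would otherwise never reach.

```python
# Toy Metropolis acceptance: always take downhill moves, sometimes take
# uphill ones, so the search can escape local minima (climb the volcano rim).

import math
import random

def metropolis_accept(delta_energy, temperature, rng):
    if delta_energy <= 0:
        return True                                   # downhill: always accept
    return rng.random() < math.exp(-delta_energy / temperature)  # uphill: sometimes

rng = random.Random(0)
energy = 0.0
for _ in range(1000):
    move = rng.uniform(-1.0, 1.0)                     # toy random move
    if metropolis_accept(move, temperature=0.5, rng=rng):
        energy += move
print(f"final toy energy: {energy:.2f}")
```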