Tell us your thoughts on granting credit for large protein, long-running tasks

Message boards : Number crunching : Tell us your thoughts on granting credit for large protein, long-running tasks



Sid Celery

Joined: 11 Feb 08
Posts: 2114
Credit: 41,105,271
RAC: 21,658
Message 95018 - Posted: 20 Apr 2020, 22:13:28 UTC - in response to Message 94952.  

The answer is +25% credit compensation.

I've simply plugged the data (a 4x memory requirement) into my equations for the 3700X and 3950X in the 'The most efficient cruncher rig possible' thread, which amortize the cost of power and capital expenditure against produced RAC, and solved for the RAC correction needed to arrive at the same RAC/$/5 years. I assumed 0.3W/GB of power consumption for DDR4 RAM.

Although, you could rephrase the same question another way: based on supply and demand, if the great majority of volunteers have an insufficient amount of memory, how do I incentivize them to purchase more? If you put it that way, adding +50% may not be that far-fetched.
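
(For concreteness, here is a minimal sketch of the kind of amortization calculation being described. The hardware prices, power draws, baseline RAC and electricity rate below are illustrative assumptions, not figures from the thread; only the 0.3W/GB DDR4 number comes from the post above.)

    # A sketch of the RAC-per-dollar amortization described above. All
    # hardware prices, power draws, baseline RAC and the electricity rate
    # are illustrative assumptions; only 0.3 W/GB for DDR4 is from the post.

    HOURS_5Y = 5 * 365 * 24      # amortization window, in hours
    KWH_PRICE = 0.15             # assumed electricity price, $/kWh
    W_PER_GB = 0.3               # DDR4 power draw from the post, W/GB

    def cost_per_rac(capex_usd, cpu_watts, ram_gb, rac):
        """5-year cost (capital + energy) divided by RAC produced."""
        watts = cpu_watts + ram_gb * W_PER_GB
        energy_cost = watts / 1000 * HOURS_5Y * KWH_PRICE
        return (capex_usd + energy_cost) / rac

    # Hypothetical 3700X-class rig: 16GB baseline vs 64GB (4x) for big WUs.
    base = cost_per_rac(capex_usd=800, cpu_watts=90, ram_gb=16, rac=10_000)
    big = cost_per_rac(capex_usd=950, cpu_watts=90, ram_gb=64, rac=10_000)

    # Credit multiplier needed so the 4x-RAM rig matches the baseline RAC/$:
    print(f"required credit bonus: {big / base - 1:+.1%}")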

Interesting. To throw a few more left-field ideas out there...

1) No change to whatever credits are normally awarded, but award some variation of additional new "Bad Mutha" badges
2) Set a cookie that allows a discount at approved hardware suppliers on purchase of CPU/RAM with some kind of commission also going back to Rosetta/Bakerlab/IPD

I know this isn't a commercial project, but there must be some way of monetising this stuff for the benefit of both researchers and volunteers.

...don't shoot the messenger
ID: 95018
Tomcat雄猫

Joined: 20 Dec 14
Posts: 180
Credit: 5,386,173
RAC: 0
Message 95029 - Posted: 21 Apr 2020, 5:19:28 UTC - in response to Message 94913.  
Last modified: 21 Apr 2020, 5:22:26 UTC

Personally, I think the best option would be to try to balance these tasks to grant about as many credits per hour as "normal" tasks. Since, for me, Rosetta Mini has consistently returned 2-6X the credits of regular Rosetta tasks and has been way more consistent (I don't know about others), maybe balance the large tasks to match the payout of Rosetta Mini. I don't think there should be a bonus unless one can directly control whether these tasks come in or not (as in, having a toggle in the Rosetta preference panel for big bad proteins). If such a toggle were added, methinks a 10% bonus is reasonable.

Methinks it would be slightly more beneficial if one were granted a bonus for longer run times: say, a 2% bonus for tasks that ran above the default runtime and a 5% bonus for tasks that ran for over a day. Since the watchdog has been extended, maybe grant an additional 0-10% compensatory bonus for tasks that ran significantly over the set runtime?
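
(A toy illustration of that tiered schedule. The 2%, 5% and 0-10% figures mirror the post; the 8-hour default runtime and the linear ramp for the compensation part are assumptions.)

    def runtime_bonus(hours_run, default_hours=8.0):
        bonus = 0.0
        if hours_run > default_hours:
            bonus += 0.02                 # ran past the default runtime
        if hours_run > 24.0:
            bonus += 0.05                 # ran for over a day
        # 0-10% compensation, scaling with how far past the set runtime
        overrun = max(0.0, hours_run / default_hours - 1.0)
        bonus += min(0.10, 0.10 * overrun)
        return bonus

    print(f"{runtime_bonus(6.0):.2f}")    # 0.00 - within the default runtime
    print(f"{runtime_bonus(12.0):.2f}")   # 0.07 - 2% tier plus half the ramp
    print(f"{runtime_bonus(30.0):.2f}")   # 0.17 - all tiers plus the full 10%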
ID: 95029
teacup_DPC

Joined: 3 Apr 20
Posts: 6
Credit: 2,744,282
RAC: 0
Message 95057 - Posted: 21 Apr 2020, 18:43:18 UTC - in response to Message 94951.  
Last modified: 21 Apr 2020, 19:13:37 UTC

Message 94951 - Posted: 19 Apr 2020, 23:16:02 UTC - in response to Message 94937.

I think the new longer tasks should get more than the credit awarded for running 4 1GB tasks. The idea is that they can't be run by everyone, which by definition means older and slower PCs, machines that should be encouraged to be replaced or updated, as over time they simply won't be able to keep up. How much more depends on the priority the project places on these new workunits: if they are just new workunits, then only a minimal amount of credit above what 4 1GB tasks would get; on the other hand, if the new tasks are a higher than normal priority, then a higher credit should be given to encourage people to crunch them instead.


I think your remark is valid if we start from the assumption that systems able to run 4GB tasks without any issue, while also doing other things, are rare. To be honest, when writing my reply I was not sure about that. That is why I wrote this sentence:
Where exactly to position the credits between 1x1GB and 4x1GB depends on the availability of processor cores and memory in the clients capable of the 4GB jobs; you can judge that better than I.
Another thing I am not sure about is whether 4GB WUs will take more time to solve. It seems logical, since more data needs to be moved around. This would imply there is not only a lower limit on memory size, but on core performance as well. If so, where does the threshold lie?


That said, I expect you have a point. Let's look at one extreme of the spectrum: the Ryzen 9 3950X, 16 cores/32 threads. If you want to harvest a 4GB job for each thread, more than 4x32 = 128GB of memory is needed. Systems like this will be a minority, I expect. But beyond this extreme, consider an average desktop, 8 years old or younger: quite a few systems will have 16GB, a majority will have 8GB of RAM, and a minority will have 32GB or more. All those systems are able to run at least one 4GB WU without much effort, besides doing other things, assuming their processor cores can cope with the load.

So back to the question: how common are those systems? Perhaps Mod.Sense can shed some light on this?
ID: 95057
Ged

Joined: 17 Apr 06
Posts: 2
Credit: 1,034,115
RAC: 0
Message 95061 - Posted: 21 Apr 2020, 19:07:35 UTC

For me, personally, I'm not driven by the credits granted for running work units; it's about contributing to the science, either by running work units which model a particular behaviour or by sheer crunching of data for further treatment or research candidate selection/rejection.

I'd rather see application development *and* testing effort be expended on producing efficient and effective code. I'd also like to see more realistic operational criteria being assigned to work units, so as not to 'waste' computing effort (and electricity) by having my machines swamped with often spuriously defined deadlines, maybe by including some operational acceptance testing rather than just functional tests.

That's my 10c's worth ;-)

Ged
ID: 95061
Tomcat雄猫

Joined: 20 Dec 14
Posts: 180
Credit: 5,386,173
RAC: 0
Message 95063 - Posted: 21 Apr 2020, 19:19:02 UTC - in response to Message 95029.  

Personally, I think the best option would be to try to balance these tasks to grant about as many credits per hour as "normal" tasks. Since, for me, Rosetta Mini has consistently returned 2-6X the credits of regular Rosetta tasks and has been way more consistent (I don't know about others), maybe balance the large tasks to match the payout of Rosetta Mini. I don't think there should be a bonus unless one can directly control whether these tasks come in or not (as in, having a toggle in the Rosetta preference panel for big bad proteins). If such a toggle were added, methinks a 10% bonus is reasonable.

Methinks it would be slightly more beneficial if one were granted a bonus for longer run times: say, a 2% bonus for tasks that ran above the default runtime and a 5% bonus for tasks that ran for over a day. Since the watchdog has been extended, maybe grant an additional 0-10% compensatory bonus for tasks that ran significantly over the set runtime?


Welp, my problem with Rosetta Mini generating way more credits than regular Rosetta tasks has been fixed (apparently my computer was generating more than the max allowed credit? Oops).

I still hold the belief that no bonus is necessary for this task, unless there is a setting in the Rosetta preference panel that specifically toggles "big bad proteins". Even if such tasks carried quite a large bonus, I doubt anyone would upgrade their computer just for these tasks.
ID: 95063
Grant (SSSF)

Joined: 28 Mar 20
Posts: 1670
Credit: 17,504,454
RAC: 24,396
Message 95076 - Posted: 21 Apr 2020, 23:03:23 UTC - in response to Message 95063.  
Last modified: 21 Apr 2020, 23:03:57 UTC

I still hold the belief that no bonus is necessary for this task, unless there is a setting in the Rosetta preference panel that specifically toggles "big bad proteins". Even if such tasks carried quite a large bonus, I doubt anyone would upgrade their computer just for these tasks.
The proposal for more Credit isn't to get people to upgrade their computers; it's just so that the people who do process them don't lose out on Credit, because they will be unable to process as many as 3 other Tasks at the same time as one of the very large RAM requirement Tasks is running (actually, under certain circumstances, some people may not be able to process as many as 5 other Tasks till the very large RAM requirement Task is done, or at the very least during various stages of its progress).
Grant
Darwin NT
ID: 95076
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 95082 - Posted: 21 Apr 2020, 23:25:31 UTC - in response to Message 95061.  

For me, personally, I'm not driven by the credits granted for running work units; it's about contributing to the science, either by running work units which model a particular behaviour or by sheer crunching of data for further treatment or research candidate selection/rejection.

I'd rather see application development *and* testing effort be expended on producing efficient and effective code. I'd also like to see more realistic operational criteria being assigned to work units, so as not to 'waste' computing effort (and electricity) by having my machines swamped with often spuriously defined deadlines, maybe by including some operational acceptance testing rather than just functional tests.

That's my 10c's worth ;-)

Ged


Ged, I just wanted to clarify: are you basically suggesting that you'd like to see some way to control the deadline of the work you receive? Or have a way to only be assigned WUs that have 8-day deadlines? Or are you referring to cases where the BOINC Manager gets tricked into requesting more R@h work than is required to fill your work cache and complete before the 3-day deadlines?
Rosetta Moderator: Mod.Sense
ID: 95082
Tomcat雄猫

Joined: 20 Dec 14
Posts: 180
Credit: 5,386,173
RAC: 0
Message 95092 - Posted: 22 Apr 2020, 0:34:52 UTC - in response to Message 95076.  
Last modified: 22 Apr 2020, 0:41:25 UTC

I still hold the belief that no bonus is necessary for this task, unless there is a setting in the Rosetta preference panel that specifically toggles "big bad proteins". Even if such tasks carried quite a large bonus, I doubt anyone would upgrade their computer just for these tasks.
The proposal for more Credit isn't to get people to upgrade their computers; it's just so that the people who do process them don't lose out on Credit, because they will be unable to process as many as 3 other Tasks at the same time as one of the very large RAM requirement Tasks is running (actually, under certain circumstances, some people may not be able to process as many as 5 other Tasks till the very large RAM requirement Task is done, or at the very least during various stages of its progress).


Ah, that's a really good point; I didn't think of it that way. If that is the case, I think it might be fair to give quite a large bonus for these tasks. Since we don't know how much more RAM these tasks take on average, it's still hard to tell exactly how big a bonus they should be given, unless there is a way to take the RAM usage into account when calculating credits.

For example, it might be fair to give a big bad protein task that ate up twice the RAM of a regular Rosetta task a 100% credit bonus.

We need someone with an economics background to weigh in; this seems like something that involves opportunity costs and whatnot.
ID: 95092
Grant (SSSF)

Joined: 28 Mar 20
Posts: 1670
Credit: 17,504,454
RAC: 24,396
Message 95094 - Posted: 22 Apr 2020, 0:51:22 UTC - in response to Message 95092.  
Last modified: 22 Apr 2020, 0:51:46 UTC

We need someone with an economics background to weigh in
*shudder*
Things are messy enough as it is.
Economists are like lawyers: ask 20 different ones something and you'll get 20 different answers (or the answer you want if you pay for it).
*another shudder*
Grant
Darwin NT
ID: 95094
Tomcat雄猫

Joined: 20 Dec 14
Posts: 180
Credit: 5,386,173
RAC: 0
Message 95095 - Posted: 22 Apr 2020, 0:56:22 UTC - in response to Message 95094.  
Last modified: 22 Apr 2020, 0:58:18 UTC

We need someone with an economics background to weigh in
*shudder*
Things are messy enough as it is.
Economists are like lawyers: ask 20 different ones something and you'll get 20 different answers (or the answer you want if you pay for it).
*another shudder*

*sigh* I guess there is a reason economics is called the "dismal science". I still think the goal is to balance these tasks so that running them doesn't affect one's RAC by much. Since these tasks can take up to 4GB instead of the regular 1GB (is it?), maybe a 150% bonus is fair?
ID: 95095
Grant (SSSF)

Joined: 28 Mar 20
Posts: 1670
Credit: 17,504,454
RAC: 24,396
Message 95097 - Posted: 22 Apr 2020, 1:11:54 UTC - in response to Message 95095.  

*sigh* I guess there is a reason economics is called the "dismal science". I still think the goal is to balance these tasks so that running them doesn't affect one's RAC by much. Since these tasks can take up to 4GB instead of the regular 1GB (is it?), maybe a 150% bonus is fair?
Since it's reducing their output by 75%, triple the going rate would be fair (e.g. 4 Tasks at 100 each is 400; losing 3 of them is 300 Credit lost. One Task at 3x pays 300, so the total is still less than before, but the shortfall is nowhere near as large).
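
(Spelling that arithmetic out as a quick sketch; the 100-credit going rate and the 4-core host losing 3 normal Tasks are just the example figures.)

    normal_rate = 100                # credits per normal Task
    baseline = 4 * normal_rate       # 4 concurrent normal Tasks -> 400

    for multiplier in (1, 2, 3, 4):
        big_pay = normal_rate * multiplier   # only the big Task is running
        print(f"{multiplier}x rate: {big_pay} credits vs {baseline} baseline "
              f"({big_pay - baseline:+d})")
    # 3x pays 300 vs 400, still 100 short; 4x would be full break-even.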
Of course, this would give a boost in Credit to 1- or 2-core systems with lots of RAM, or those huge multi-core/thread systems with extreme amounts of RAM. If more people are affected negatively by the new Tasks than benefit (which I suspect will be the case), then the higher level of Credit should be paid. Otherwise, still give a bonus for those Tasks, but not as large a one.

And the impact will depend on just how much of the total processing time the Tasks require that much RAM for. If it's 5% or 10% of a run, then the impact of lost output won't be significant at all. But if it's 25% or more of the time, then that's a big hit for those affected.
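
(A time-weighted sketch of that point: the lost output scales with the fraction of the run spent at the peak RAM footprint. The 4-core host losing 3 Tasks at peak is carried over from the example above.)

    def output_lost(displaced_tasks, peak_fraction, cores=4):
        """Fraction of the host's normal output lost to one big WU."""
        return displaced_tasks / cores * peak_fraction

    for frac in (0.05, 0.10, 0.25):
        print(f"peak RAM for {frac:.0%} of the run: "
              f"{output_lost(3, frac):.1%} of output lost")
    # 5% -> 3.8%, 10% -> 7.5%, 25% -> 18.8% of normal output lost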
Grant
Darwin NT
ID: 95097
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 95101 - Posted: 22 Apr 2020, 1:37:52 UTC - in response to Message 95097.  
Last modified: 22 Apr 2020, 1:38:38 UTC

Admin posted that the average model in their lab actually used under 2GB. The 4GB is the maximum the WU is ALLOWED to use.

It is like asking how much water a Tesla gigafactory is going to use by looking at the size of the pipe that goes into the factory. The system must be sized large enough to run the sprinkler system in case of fire; it doesn't mean the maximum flow is used all of the time.
Rosetta Moderator: Mod.Sense
ID: 95101
Tomcat雄猫

Joined: 20 Dec 14
Posts: 180
Credit: 5,386,173
RAC: 0
Message 95106 - Posted: 22 Apr 2020, 2:01:39 UTC - in response to Message 95101.  

Admin posted that the average model in their lab actually used under 2GB. The 4GB is the maximum the WU is ALLOWED to use.

It is like asking how much water a Tesla gigafactory is going to use by looking at the size of the pipe that goes into the factory. The system must be sized large enough to run the sprinkler system in case of fire; it doesn't mean the maximum flow is used all of the time.


Hmm, how much do regular Rosetta tasks use on average?
ID: 95106
Grant (SSSF)

Joined: 28 Mar 20
Posts: 1670
Credit: 17,504,454
RAC: 24,396
Message 95107 - Posted: 22 Apr 2020, 2:23:07 UTC - in response to Message 95106.  

Hmm, how much do regular Rosetta tasks use on average?
I've seen from 80MB to 1.5GB. Often around 400MB-800MB.

And I recently found out about wuprop.boinc-af.org. You need to check out the graphs for the full story, and it looks like there are already plenty of Tasks hitting (or at least requesting) 2GB of RAM (I don't know what the source is: used or requested RAM?).

Grant
Darwin NT
ID: 95107
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 95108 - Posted: 22 Apr 2020, 2:35:37 UTC - in response to Message 95107.  
Last modified: 22 Apr 2020, 2:36:18 UTC

Grant's link didn't work for me; try this one

Very cool! A BOINC project to collect & process project stats.
Rosetta Moderator: Mod.Sense
ID: 95108
Tomcat雄猫

Joined: 20 Dec 14
Posts: 180
Credit: 5,386,173
RAC: 0
Message 95109 - Posted: 22 Apr 2020, 2:39:00 UTC - in response to Message 95107.  
Last modified: 22 Apr 2020, 2:40:58 UTC

Hmm, how much do regular Rosetta tasks use on average?
I've seen from 80MB to 1.5GB. Often around 400MB-800MB.

And I recently found out about wuprop.boinc-af.org. You need to check out the graphs for the full story, and it looks like there are already plenty of Tasks hitting (or at least requesting) 2GB of RAM (I don't know what the source is: used or requested RAM?).

Thanks. If that is the case, I'll assume big bad proteins would require close to 2X the RAM of a regular protein on average, then.

Assuming that these tasks generate a similar amount of credits per core per hour, it would seem fair to give these big bad proteins a 100% bonus. However, that is a worst-case estimate, made under the assumption that the user is running at maximum RAM usage and each big bad protein will take up two slots. Some users may be able to run as many BBPs as regular proteins (best-case scenario, no bonus needed). I'll assume that from worst to best case, users on Rosetta follow a normal distribution, so I guess a 50% bonus should be a good starting point.
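
(A sketch of that estimate; the scenario spread below is illustrative, not measured.)

    # Worst case: a big WU at ~2x RAM takes two slots, so its credit
    # needs doubling (100% bonus). Best case: nothing is displaced (0%).
    scenarios = {
        "worst (takes two slots)": 1.00,
        "typical": 0.50,
        "best (displaces nothing)": 0.00,
    }

    # With hosts spread symmetrically between the extremes, the mean
    # required bonus lands on the midpoint:
    expected = sum(scenarios.values()) / len(scenarios)
    print(f"starting-point bonus: {expected:.0%}")   # 50%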
ID: 95109
bkil

Joined: 11 Jan 20
Posts: 97
Credit: 4,433,288
RAC: 0
Message 95118 - Posted: 22 Apr 2020, 6:37:28 UTC - in response to Message 95063.  
Last modified: 22 Apr 2020, 6:38:49 UTC

Folding@home offered various bonuses during its lifetime. I think they had a beta bonus for completing WUs that were not correctly calibrated yet or might crash, a big bonus for upper-end requirement outliers, bigadv for tasks requiring lots of cores, lots of runtime and lots of RAM, and a quick return bonus for completing short deadlines and running 24/7.


Over time, this resulted in people running 24/7, upgrading their boxes, and generally better-equipped computers joining.

On the flip side, this caused people with less than top-notch hardware to leave, or not join in the first place, because they didn't feel their contribution was competitive or valuable.

On the contrary, volunteer computing works best if as many people join as possible (up to a certain level of energy efficiency, say hardware up to 10 years old); every little counts. We only have a few top-notch 32+ core machines with beefy GPUs around the world, but if we contributed every phone, tablet and low- to mid-end office machine, typically with 2-4 cores, our computing capacity could increase by orders of magnitude. (I.e., we have way less than a million hosts, and there exist billions of personal computing devices in the world.)

ID: 95118
Sid Celery

Joined: 11 Feb 08
Posts: 2114
Credit: 41,105,271
RAC: 21,658
Message 95120 - Posted: 22 Apr 2020, 8:36:55 UTC - in response to Message 95094.  
Last modified: 22 Apr 2020, 8:47:58 UTC

We need someone with an economics background to weigh in
*shudder*
Things are messy enough as it is.
Economists are like lawyers: ask 20 different ones something and you'll get 20 different answers (or the answer you want if you pay for it).
*another shudder*

*tut*
I already solved this:
1) No change to whatever credits are normally awarded, but award some variation of additional new "Bad Mutha" badges
2) Set a cookie that allows a discount at approved hardware suppliers on purchase of CPU/RAM with some kind of commission also going back to Rosetta/Bakerlab/IPD

1 is for people who don't care about credits
2 is for those who want 'paying' in some sense

And my earlier one is proving more apt the more messages I read. Someone always wants to create a pretence of 'rationality' about credits, while suggesting something that's little more than a different version of 'random'.
Pick a number.
If you get more than 3 complaints about it, increase it further.
If you get 3 or fewer, keep it as it is.


It's (barely) funny coz it's true
ID: 95120
Sid Celery

Joined: 11 Feb 08
Posts: 2114
Credit: 41,105,271
RAC: 21,658
Message 95121 - Posted: 22 Apr 2020, 8:42:17 UTC - in response to Message 95107.  

Hmm, how much do regular Rosetta tasks use on average?
I've seen from 80MB to 1.5GB. Often around 400MB-800MB.

When we were getting all those 1.5GB tasks and memory use was pushing against my 16GB of RAM, I spotted one at 2.333GB.
Just the once though, and that was a couple of weeks ago now.
ID: 95121
Grant (SSSF)

Joined: 28 Mar 20
Posts: 1670
Credit: 17,504,454
RAC: 24,396
Message 95123 - Posted: 22 Apr 2020, 9:26:18 UTC - in response to Message 95118.  

We only have a few top-notch 32+ core machines with beefy GPUs around the world,
Try hundreds of thousands, at the least.

but if we contributed every phone, tablet and low- to mid-end office machine, typically with 2-4 cores, our computing capacity could increase by orders of magnitude. (I.e., we have way less than a million hosts, and there exist billions of personal computing devices in the world.)
For as many of those devices as there are, many are of such low capability that they are of no use to many projects.
And for those that are useful, frequent use for their intended purpose means they often can't contribute much during those periods, compared to more capable systems.

And you need to keep in mind that efficiency isn't actually about low peak or maximum power use; it is about the energy used over time to complete a task.
It's no good having a device use 1W if it takes a month to produce a result when something that uses 1kW can produce the same result in a matter of seconds. Yes, its instantaneous power consumption is a lot higher, but it uses less energy to do the same work. And the fact that it can do so much more work over the same period of time makes it even more useful to a project.
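
(A quick energy = power x time check; the one-month and one-minute durations below are illustrative stand-ins for the figures in the post.)

    def energy_kwh(watts, seconds):
        return watts * seconds / 3.6e6   # 1 kWh = 3.6 MJ

    slow = energy_kwh(1, 30 * 24 * 3600)   # 1 W for 30 days  ~ 0.72 kWh
    fast = energy_kwh(1000, 60)            # 1 kW for 60 s    ~ 0.017 kWh

    print(f"slow: {slow:.2f} kWh  fast: {fast:.3f} kWh  "
          f"-> the fast machine uses {slow / fast:.0f}x less energy")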
Grant
Darwin NT
ID: 95123

Message boards : Number crunching : Tell us your thoughts on granting credit for large protein, long-running tasks


