Message boards : Number crunching : OpenCL
sgaboinc · Joined: 2 Apr 14 · Posts: 282 · Credit: 208,966 · RAC: 0
would rosetta ever do it? :o lol. but i'd guess some volunteers would have issues with the power budget: gpus are extremely power hungry, and some run in excess of 200 W. but if single precision is all that is needed, and it is possible to vectorize the compute, the higher-range Nvidia and AMD gpus easily deliver 2-20 Tflops of computational power. at an average of, say, 5 Tflops, it would take just 200 hosts to reach a combined 1 petaflops of compute power lol
[VENETO] boboviz · Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,623,704 · RAC: 8,387
sgaboinc · Joined: 2 Apr 14 · Posts: 282 · Credit: 208,966 · RAC: 0
i'd think we'd need to wait, as GPUs are certainly expensive both in cost and in their very high power consumption. but for vectorized compute they do deliver supercomputer performance on desktops and servers
[VENETO] boboviz · Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,623,704 · RAC: 8,387
> i'd think we'd need to wait

Oh, well, the first post in R@H about OpenCL was at the beginning of 2009, over 10 years ago. The developers, 5 or 6 years ago, tried to put R@H on a GPU, but got only a small improvement. A lot of things have changed during these years, so I don't know if they will try again in their lab. R@H runs a lot of heterogeneous simulations on very different proteins, so it's very difficult to port all this complexity to a GPU. I think there is a simpler way: create a "specialized" app that does only one kind of work (for example, "ab initio") and start with an OpenCL C++ app on CPU. When this app is stable and debugged, try to port it to the GPU. Are they interested? Do they have the knowledge to do that? I don't know.
sgaboinc · Joined: 2 Apr 14 · Posts: 282 · Credit: 208,966 · RAC: 0
well, literally, i'm not too sure if boinc has any constraints, but if not, my guess is that there can be different apps for different purposes. i doubt all kinds of apps can benefit from a gpu, but some will! i'd think the hard part is again on the server side, as they'd need to associate different jobs with different binaries. for that matter, they could have a monolithic version of rosetta with opencl extensions, and only those volunteers who want to run tasks on their gpu would use those binaries.

my guess is that for volunteers wishing to crunch on gpu, the motivation is partly that credit accumulation would need to be much higher, or that the jobs would be much shorter. a decent gpu easily consumes 150-200 watts or more, which is easily 2-5 times more power hungry than crunching on cpu with 8-16 concurrent threads. using a gpu would also bring some rather troublesome driver dependencies, and volunteers would need to set things up appropriately so that the gpu can be linked / loaded at run time. but gpu computing isn't new: folding at home has been there, done that https://foldingathome.org/

nevertheless, with moore's law breaking down, my suspicion is that OpenCL may literally become the *next big thing*, i.e. OpenCL will become so pervasive that it is *expected* for any compute-intensive job. actually this is a *bad thing*, as power consumption increases dramatically and cpus become linearly, or worse exponentially, more expensive at high core counts (this is exactly what is happening today). but yes, OpenCL vector compute gives you the Tflops to Pflops supercomputing power that is not achievable by trying to scale transistors down any further, at the cost of much, much higher power consumption.

tutorials about OpenCL abound on the internet, and the kernel language is very much a subset of C with some restrictions: https://handsonopencl.github.io/

back then, and even today, GPUs are expensive hardware, but things have improved considerably, especially with the breakneck speed of features creeping into the newer OpenCL and OpenGL versions. the other thing is, i'm not sure if the engineers and scientists at intel, amd etc. can push moore's law further: scale further down and run CPUs at, say, 0.1 V peak. if you can run a cpu at 0.1 V and drive 100 amps into it, that is a mere 10 watts.
[VENETO] boboviz · Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,623,704 · RAC: 8,387
> i'd think the hard part is again on the server side, as they'd need to associate different jobs with different binaries

I don't think so. With the new versions of the BOINC server, you can manage different apps very easily.

> they could have a monolithic version of rosetta with opencl extensions, and only those volunteers who want to run tasks on their gpu would use those binaries

I repeat: starting with a simple, unique, specialized app is the easy way. After that they can add, if possible, OpenCL extensions to the monolithic code.

> my guess is that for volunteers wishing to crunch on gpu, the motivation is partly that credit accumulation would need to be much higher, or that the jobs would be much shorter

Or, for example, simulating bigger and more complex proteins that cannot be simulated on a CPU. Faster = more science.

> using a gpu would also bring some rather troublesome driver dependencies, and volunteers would need to set things up appropriately so that the gpu can be linked / loaded at run time

Rosetta volunteers are smarter than you think. And the latest version of the BOINC client has fewer problems with GPUs than in the past.

> back then, and even today, GPUs are expensive hardware, but things have improved considerably, especially with the breakneck speed of features creeping into the newer OpenCL and OpenGL versions

Not very expensive. With 150 euros/dollars you get a GPU with 5 Tflops single precision. Not bad.
sgaboinc · Joined: 2 Apr 14 · Posts: 282 · Credit: 208,966 · RAC: 0
true, there are lots of used high-end gpus dumped on the market after the bitcoin fallout; it's a blessing in disguise of sorts. but energy costs aren't low: high-end gpus normally consume > 150 watts. i'd consider it if, say, i'm able to use something like solar to supplement the power, but otherwise it is costly, burning fossil fuel. the other consideration would be whether crunching time can be made much shorter
[VENETO] boboviz · Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,623,704 · RAC: 8,387
> but energy costs aren't low: high-end gpus normally consume > 150 watts. i'd consider it if, say, i'm able to use something like solar to supplement the power, but otherwise it is costly, burning fossil fuel.

But you can also crunch with mid-range gpus, which consume 150 watts or less. For example, the upcoming AMD RX 5500 seems fine for this kind of work. But we are writing about dreams: there is NO SSEx support and NO native 64-bit support for Windows, let alone GPU support.
[VENETO] boboviz · Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,623,704 · RAC: 8,387
> I think there is a simpler way: create a "specialized" app that does only one kind of work (for example, "ab initio") and start with an OpenCL C++ app on CPU.

A lot of steps have been taken in this direction: PoCL 1.4, Clang support for OpenCL C++, etc.
[VENETO] boboviz · Joined: 1 Dec 05 · Posts: 1994 · Credit: 9,623,704 · RAC: 8,387
Some days ago, the Khronos Group released SYCL 1.2.1 Revision 6, with a lot of improvements (like TensorFlow support, a CUDA back-end, etc.) and bugfixes. This is great for writing C++ code for GPUs.
©2024 University of Washington
https://www.bakerlab.org