Message boards : Number crunching : OT: A closer look at Folding@home on the GPU
BennyRop · Joined: 17 Dec 05 · Posts: 555 · Credit: 140,800 · RAC: 0
If you've been following the F@H GPU saga for the past few years, you'll remember that it started with nVidia cards - which were considered the most powerful at the time. They eventually abandoned nVidia development and went with ATI, because cards like the X1800 are set up with enough 32-bit-precision shader engines to make the job relatively easy. Mr. Houston also mentioned that nVidia uses a latency-hiding technique that caused problems for the GPU code. When nVidia switches approaches, we'll likely see the latest nVidia cards being supported. If Intel can switch from speed-at-all-costs to multiple cores at reasonable speeds with higher instructions per clock - following AMD's lead - then there's hope for nVidia following ATI's lead. And yes, I, too, am an nVidiot, not a fanATIc. :)
The_Bad_Penguin · Joined: 5 Jun 06 · Posts: 2751 · Credit: 4,271,025 · RAC: 0
BennyRop wrote: "If you've been following the F@H GPU saga for the past few years, you'll remember that it started with nVidia cards..."
This may be an interesting read: AMD, Intel are come-back kids with X86 vectorisation
darkclown · Joined: 28 Sep 06 · Posts: 3 · Credit: 222,345 · RAC: 0
The next generation of nVidia cards, which will be DirectX 10 cards, should be much better in terms of performance. From what I understand, Microsoft has been fairly heavy-handed in dictating tight ranges for acceptable results on various functions in order to be DX10 compliant/certified. The 8800, I believe, will have 128 shaders.
BennyRop · Joined: 17 Dec 05 · Posts: 555 · Credit: 140,800 · RAC: 0
One model of the new G80s was listed as having 96 shaders, and the top-end model was listed as having 128 shaders in the German review I read.
FluffyChicken · Joined: 1 Nov 05 · Posts: 1260 · Credit: 369,635 · RAC: 0
I said most, not all :-)
Team mauisun.org
The_Bad_Penguin · Joined: 5 Jun 06 · Posts: 2751 · Credit: 4,271,025 · RAC: 0
From the article linked above ("AMD, Intel are come-back kids with X86 vectorisation"): "In summary, whether for gaming physics, black hole research or financial simulations, a combination of multi-core and vector processing will bring PCs close to the teraflop performance, and most probably cross the teraflop peak speed barrier by 2010 - whether Intel decides to introduce the feature earlier in Nehalem, or wait till Gesher in that year. In a sense, both CPU and GPU approaches can be combined anyway, as they don't exclude each other. At the end, that CPU vector unit could become a core of its on-chip high-end GPU too, couldn't it?"
Tarx · Joined: 2 Apr 06 · Posts: 42 · Credit: 103,468 · RAC: 0
By the way, I just noticed that Folding@home now supports multiple ATI graphics cards crunching away on the same system.
MintabiePete · Joined: 5 Nov 05 · Posts: 30 · Credit: 418,959 · RAC: 0
BennyRop wrote: "If you've been following the F@H GPU saga for the past few years, you'll remember that it started with nVidia cards..."
Good one mate, I like your enthusiasm. Guess I'm a fanATIc and that's fantAsTIc :)
FluffyChicken · Joined: 1 Nov 05 · Posts: 1260 · Credit: 369,635 · RAC: 0
BennyRop wrote: "If you've been following the F@H GPU saga for the past few years, you'll remember that it started with nVidia cards..."
Now ATI is called AMD (or is it the 'new AMD'? ;-) http://ati.amd.com/ - you need to get them both in the name.
Team mauisun.org
River~~ · Joined: 15 Dec 05 · Posts: 761 · Credit: 285,578 · RAC: 0
Like:
guess I'm a fanATIc AMD that's fantAsTIc
HTH ;)
R~~
Christoph Jansen · Joined: 6 Jun 06 · Posts: 248 · Credit: 267,153 · RAC: 0
Hi all,

just a thing that came to my mind when browsing current graphics card prices: it looks very much like GPU crunching is concentrating on ATI-based solutions, which is also picked up in this thread. How does that fit with the problems that quite a few users run into when using ATI cards with BOINC screensavers and the graphical frontend in general on several projects? There might be a train pulling out of the station and leaving BOINCers behind, at least to some degree. Is the problem rare enough to simply neglect, or are the people reporting it just the tip of an iceberg, most of which is concealed by users quitting BOINC without further comment?

This is just intended as a cautious question, as I do not have the slightest notion of the problems associated with trying to reconfigure BOINC to cope with that, or of the value of doing so compared to the effort.

[Note] Of course I do not think this is a train of thought only I have had; I am sure the programmers themselves are quite aware of all this. It is just that it has not been mentioned in this thread before and I wanted to throw it into the arena.
Seventh Serenity · Joined: 30 Nov 05 · Posts: 18 · Credit: 87,811 · RAC: 0
I've just upgraded my GPU from an ATI X850XT to an nVidia 7800 GTX - mostly because of SM3, HDR and the better driver support. I just don't like leaving it idle with its 24 shaders. I seriously wish there was a project that could make use of it.
"In the beginning the universe was created. This made a lot of people very angry and is widely considered as a bad move." - The Hitchhiker's Guide to the Galaxy
Tarx · Joined: 2 Apr 06 · Posts: 42 · Credit: 103,468 · RAC: 0
Folding@home has now added the X1600/X1650 and X1800 series graphics cards to the supported list (it already had the X1900/X1950). Support for the NVIDIA 8000 series is not yet certain, but most expect it will come in the future. Existing NVIDIA 7000 series (and lower) and ATI X850 series (and lower) cards will never be supported. For the ATI X850 and lower this is due to lack of capability; for the NVIDIA 7000 series and earlier it is due to serious bottlenecks for this type of computation, even though the alpha version did run on NVIDIA cards (it was just way too slow). And no, it didn't have much to do with the number of shaders - it was issues like branching, cache coherency, etc.
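As a rough sketch of the branching problem Tarx mentions: GPUs of that era execute groups of shader threads in lockstep, so when a data-dependent branch splits a group, the hardware effectively walks both paths with some lanes masked off and pays the cost of the sum of the two. The small C++ simulation below illustrates the effect on the CPU; the lane count and all names are invented for this example and are not taken from the F@H GPU core.

```cpp
#include <array>
#include <cstdio>

// Simulate one lockstep group ("warp") of LANES shader threads executing a
// data-dependent branch. All lane values here are invented for the sketch.
constexpr int LANES = 16;

int main() {
    std::array<float, LANES> x{};
    for (int i = 0; i < LANES; ++i) x[i] = (i % 2) ? 1.0f : -1.0f;

    std::array<float, LANES> result{};
    int lane_steps = 0;

    // Pass 1: lanes where the condition is true; the others are masked off,
    // but in lockstep hardware the cycles are still spent on this path.
    for (int i = 0; i < LANES; ++i) {
        if (x[i] > 0.0f) result[i] = x[i] * 2.0f;   // "then" path
        ++lane_steps;
    }
    // Pass 2: lanes where the condition is false.
    for (int i = 0; i < LANES; ++i) {
        if (!(x[i] > 0.0f)) result[i] = -x[i];      // "else" path
        ++lane_steps;
    }

    // With half the lanes on each side, the group pays for both paths:
    std::printf("lane-steps spent: %d (vs %d if the branch were uniform)\n",
                lane_steps, LANES);
    return 0;
}
```

When the workload branches heavily on per-element data, this doubling (or worse) adds up, which is one reason shader count alone doesn't predict folding performance.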
Paydirt · Joined: 10 Aug 06 · Posts: 127 · Credit: 960,607 · RAC: 0
For F@H, CPU crunching and GPU crunching are fairly different. Yes, GPUs are much more powerful, but they cannot yet crunch work units as complex as the CPUs do. So the CPUs can crunch a wider variety of things while the GPUs do more of the grunt work. This is how things stand at present. I think that BOINC (David Anderson of Berkeley) realizes that the PS3, Xbox 360, and GPUs offer a whole new level of crunching power per dollar of equipment, and that BOINC code will eventually be written to utilize it. It's also possible that the Gates Foundation or Paul Allen might support such a programming project if BOINC cannot foot the bill or programmers are unwilling to donate their efforts...?