Message boards : Number crunching : Could GPUs mean the end of DC as we know it?
Ethan (Volunteer moderator) · Joined: 22 Aug 05 · Posts: 286 · Credit: 9,304,700 · RAC: 0
I doubt additional CPU power will go unused anytime soon :) Current calculations are designed to be as 'simple' as possible in order to run in a decent amount of time. In many cases assumptions have to be made, or the calculation would be millions or billions of times larger.

An example from my college days: we simulated the interactions between stars while researching galaxy formation. Since each star gravitationally interacted with every other star, there were ~N^2 calculations per time step, where N is the number of stars in the simulation. I think we were able to get away with several thousand stars and get results in a couple of days, but real galaxies have hundreds of billions of stars (~10^8 calculations per time step versus ~10^22). The size of your time slices has an impact as well: do you calculate forces on objects a minute, a day, a year, or a century at a time? When galaxies take many millions of years to form, the time slice is yet another simplification that has to be made. The same is true, in reverse, of things that happen very quickly.

With the exception of looking for primes or star dust in gel, I can't think of any other DC projects that work without making trade-offs on the accuracy of the results in order to get them in a reasonable amount of time. That's why the help of everyone participating in R@H is so useful: it allows the project to get better results in a shorter period of time.
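For concreteness, here is a minimal sketch of the direct-summation step described above (generic C++, not the actual simulation code; the struct, constants, and star count are all illustrative). The nested loop is where the ~N^2 per-step cost comes from, and dt is the time-slice trade-off:

```cpp
// Minimal direct-summation N-body step; illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

struct Body { double x, y, z, vx, vy, vz, mass; };

// One time step: the nested loop makes this ~N^2 work per step.
void step(std::vector<Body>& stars, double dt) {
    const double G = 1.0;       // gravitational constant in sim units
    const double soft = 1e-3;   // softening term avoids divide-by-zero
    for (auto& a : stars) {
        double ax = 0, ay = 0, az = 0;
        for (const auto& b : stars) {
            if (&a == &b) continue;
            double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            double r2 = dx*dx + dy*dy + dz*dz + soft*soft;
            double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            ax += G * b.mass * dx * inv_r3;
            ay += G * b.mass * dy * inv_r3;
            az += G * b.mass * dz * inv_r3;
        }
        a.vx += ax * dt; a.vy += ay * dt; a.vz += az * dt;
    }
    for (auto& a : stars) { a.x += a.vx*dt; a.y += a.vy*dt; a.z += a.vz*dt; }
}

int main() {
    // 10^4 stars -> ~10^8 pair interactions per step;
    // 10^11 stars (a real galaxy) -> ~10^22 per step.
    std::vector<Body> stars;
    for (int i = 0; i < 10000; ++i)
        stars.push_back(Body{i * 0.01, 0, 0, 0, 0, 0, 1.0});
    step(stars, 0.01);  // dt is the "time slice" trade-off
    std::printf("stepped %zu bodies\n", stars.size());
}
```

Doubling N quadruples the work of each step, which is why the simplifications Ethan describes are unavoidable.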
The_Bad_Penguin · Joined: 5 Jun 06 · Posts: 2751 · Credit: 4,271,025 · RAC: 0
Parallel processing machine 100X faster than current PCs

Researchers at the University of Maryland have come up with a desktop parallel computing system they say is 100 times faster than current PCs, and the kicker is, they want you to name it. That's right: researchers are inviting the public to propose names for the prototype, which they say uses a circuit board about the size of a license plate on which they have mounted 64 parallel processors. To control those processors, they have developed the crucial parallel computer organization that allows the processors to work together and makes programming practical and simple for software developers, said Uzi Vishkin and the University of Maryland James Clark School of Engineering researchers who developed the machine.

Parallel processing on a massive scale, based on interconnecting numerous chips, has been used for years to create supercomputers. However, its application to desktop systems has been a challenge because of severe programming complexities. The Clark School team found a way to use single-chip parallel processing technology to change that. Vishkin presented his computer last week at Microsoft's Workshop on Many-Core Computing...
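To see why "severe programming complexities" is the sticking point: even a trivial data-parallel sum forces the programmer to handle work partitioning and synchronization by hand. A minimal sketch using generic C++11 threading, not Vishkin's actual programming model:

```cpp
// Data-parallel sum across hardware threads; generic C++11,
// unrelated to the Maryland prototype's own programming model.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                    // fallback if count is unknown
    std::vector<double> partial(n, 0.0);  // one slot per thread: no data races
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = (t + 1 == n) ? data.size() : lo + chunk;
        workers.emplace_back([&, t, lo, hi] {
            partial[t] = std::accumulate(data.begin() + lo,
                                         data.begin() + hi, 0.0);
        });
    }
    for (auto& w : workers) w.join();     // explicit synchronization point
    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("sum = %f\n", total);
}
```

The partitioning, the per-thread buffers, and the join are exactly the bookkeeping a "practical and simple" parallel organization would try to take off the programmer's plate.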
AgnosticPope · Joined: 16 Dec 05 · Posts: 18 · Credit: 148,821 · RAC: 0
The problem at the moment is getting enough people interested and "in the know" about it.

I work at a "major corporation" that has "more than 50,000 employees," each of whom has a fairly substantial laptop or desktop computer. By corporate policy, we are all absolutely prohibited from running "third party" applications on our computers. So, if anybody has good contacts with the Boards of Directors of any Fortune 500 companies, most of which would be in a similar situation but most of which encourage employees to undertake "charity work" using some company-supplied resources, how about getting said Boards of Directors to allow or even encourage employees to run BOINC on their work computers? If you want more "bang for your buck" from time spent advocating for some cause, that is the path I would recommend.

== Bill
GeneM · Joined: 4 Aug 06 · Posts: 7 · Credit: 1,112,726 · RAC: 0
The problem at the moment is getting enough people interested and "in the know" about it.

If memory serves, IBM has for years encouraged its employees to connect their company computers to one of the projects on the World Community Grid.
Greg_BE · Joined: 30 May 06 · Posts: 5691 · Credit: 5,859,226 · RAC: 0
The problem at the moment is getting enough people interested and "in the know" about it.

If only we could get Microsoft on board, not to mention a lot of the other high-tech companies in the Seattle/Bellevue area.
FoldingSolutions · Joined: 2 Apr 06 · Posts: 129 · Credit: 3,506,690 · RAC: 0
Surely supercomputers such as the IBM Blue Gene must have downtime when they're not doing anything. Couldn't they be used for DC (having overcome the programming difficulties, of course)? Think of the RAC on that :o
Greg_BE · Joined: 30 May 06 · Posts: 5691 · Credit: 5,859,226 · RAC: 0
Surely supercomputers such as the IBM Blue Gene must have downtime when they're not doing anything. Couldn't they be used for DC?

Oh help! That just blows my mind thinking about it. Gees... how many work units could you run simultaneously on that machine? And the number of decoys made on it in our standard 4-8 hour runtime?! Blue Gene (280 TFLOPS in its current configuration of 65,536 compute nodes, i.e., 2^16 nodes) versus Cray's baddest baby (the XT4, with 320 cabinets, 96 cores per cabinet, and roughly 1 TFLOP per cabinet, so 320 TFLOPS in total) running R@H... how does that add up? That's 600 TFLOPS of compute power between them at maximum configuration. And Cray uses my favorite chip company's AMD Opteron 64-bit. But can BOINC/R@H be adapted to work with Fortran and C++?
AgnosticPope · Joined: 16 Dec 05 · Posts: 18 · Credit: 148,821 · RAC: 0
Teraflops are so yesterday. The new upper limit is 3 petaflops, according to this BBC article: "By comparison, the standard one-petaflop Blue Gene/P comes with 294,912 processors connected by a high-speed optical network."

The real question is whether the BOINC software could be adapted to snuggle into arrays of Cell processors. The Roadrunner architecture seems promising, as the BOINC part could run on the standard processors so that only the computationally intensive code would need to be cellularized.

== Bill
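A minimal sketch of that split, with entirely hypothetical function names (this is not real BOINC or Cell code): the coordinator loop stays in portable C++ on the standard processors, and only the hot kernel would need a Cell-specific (or GPU-specific) port:

```cpp
// Hypothetical structure only; none of this is actual BOINC or Cell API.
#include <cstdio>

// The computationally intensive part: the only piece that would need a
// Cell/GPU port. Here it is just a stand-in loop for the science code.
double compute_kernel(int work_unit) {
    double acc = 0.0;
    for (int i = 0; i < 1000000; ++i)
        acc += (work_unit + i) % 7;  // placeholder workload
    return acc;
}

int main() {
    // Coordinator loop: fetch work, run the kernel, report. This
    // bookkeeping stays in portable code on the standard processors.
    for (int wu = 0; wu < 3; ++wu) {
        double result = compute_kernel(wu);  // the offloadable hot spot
        std::printf("work unit %d -> %f\n", wu, result);  // "report"
    }
}
```

Keeping the kernel behind a single function boundary like this is what would let a project cellularize only the expensive code while leaving the rest untouched.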
AgnosticPope · Joined: 16 Dec 05 · Posts: 18 · Credit: 148,821 · RAC: 0
Speaking of petaflops, any of these teraflop+ machines would make a large dent in the Rosetta processing, since the home page says: "TeraFLOPS estimate: 51.857". So all of us folks together are only managing to contribute a measly 52 teraflops, more or less. Do you think one of those supercomputer owners would run BOINC for us in the computer's spare time?

== Bill
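Back of the envelope, assuming the 51.857 TFLOPS figure above and perfect scaling (which no real port would achieve), here is how the machines mentioned in this thread would compare:

```cpp
// Rough speedup ratios vs. the project's ~52 TFLOPS; assumes perfect
// scaling, which no real port achieves. Figures are from this thread.
#include <cstdio>

int main() {
    const double rosetta_tflops = 51.857;  // from the R@H home page
    // XT4, XT4 + Blue Gene/L, Blue Gene/P, Roadrunner-class
    const double machines[] = {320.0, 600.0, 1000.0, 3000.0};
    for (double m : machines)
        std::printf("%7.0f TFLOPS ~ %4.1fx the current volunteer pool\n",
                    m, m / rosetta_tflops);
}
```

So a one-petaflop machine would be roughly a 19x multiplier on today's volunteer pool: about a month of our crunching in a day and a half, if it scaled perfectly.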
Greg_BE · Joined: 30 May 06 · Posts: 5691 · Credit: 5,859,226 · RAC: 0
Speaking of petaflops, any of these teraflop+ machines would make a large dent in the Rosetta processing, since the home page says: "TeraFLOPS estimate: 51.857"

Heck, get Dr. Baker to ask them. lol. Cray is in Seattle, so he could just drive over from campus, have a chat with them, and call up some of Cray's clients. A machine like that would clean up in one cycle what takes us, what, a month or more to create the same amount of data?