Is there a GPU version of the app in the works?

koschi
Joined: 17 Mar 05
Posts: 86
Credit: 1686887555
RAC: 829515

I'm setting the nice value

I'm setting the nice value manually when I use my workstation...
I have a small script that I call which automatically switches the acemd process to the lowest priority.

[...]

Hehe, now that I checked, I found that in BOINC Manager 6.4.2 a nice value of 10 seems to be the default for the GPU process. Anyway, it doesn't have much impact on the overall calculation speed. When I checked about a month ago, the speed difference was less than 2% when running the GPU feeder with a priority of 0 or 19...
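For reference, here is a minimal sketch of the kind of renice helper koschi describes. Only the process name acemd is taken from the post; the use of pgrep and renice, the function name renice_lowest and the default nice value of 19 are illustrative assumptions, not koschi's actual script:

# Minimal sketch of a "lowest priority" helper, assuming a Linux host
# with pgrep and renice available. Only the process name "acemd" comes
# from the post above; everything else is illustrative.
import subprocess

def renice_lowest(process_name="acemd", niceness=19):
    """Set the nice value of all matching processes to 19 (lowest priority)."""
    try:
        pids = subprocess.check_output(["pgrep", process_name], text=True).split()
    except subprocess.CalledProcessError:
        print(f"No running '{process_name}' process found.")
        return
    for pid in pids:
        # renice -n 19 -p PID: lower the scheduling priority of that PID
        subprocess.run(["renice", "-n", str(niceness), "-p", pid], check=False)
        print(f"Set nice value {niceness} for PID {pid}")

if __name__ == "__main__":
    renice_lowest()

On Linux, 19 is the lowest (nicest) scheduling priority an ordinary user can assign to their own processes.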

ML1
Joined: 20 Feb 05
Posts: 347
Credit: 86563414
RAC: 823

RE: ... the participant

Message 87131 in response to message 87128

Quote:
... the participant population is frozen (leaving participants balanced by gains). I was doing a little research and found I was x of 1.5 million ... well, 2-3 years ago we had about that many participants.


Do not underestimate the proportion of the population that is indifferent towards computing. By contrast, those on these forums are very highly motivated about computing and/or the science.

... Perhaps that is why the computer world is so vulnerable to market manipulation...

Quote:
I just like the idea I can add processing capability to existing systems with relatively inexpensive GPU cards and better, can incrementally add processing speed by replacing older cards without having to junk the whole computer.


That certainly changes the equations for the BOINC-farm people!

Happy fast crunchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)

MarkJ
Joined: 28 Feb 08
Posts: 437
Credit: 139002861
RAC: 0

A quote from the NVIDIA web

A quote from the NVIDIA web site...

"Distributing computing applications such as Folding@home, Einstein@home, GPUGRID and SETI@home have also seen performance improve by orders of magnitude through NVIDIA CUDA technology. Recently Adobe Creative Suite 4 became the latest application to speed up performance and enhance features by moving processing to the GPU."

I guess Bernd is well advanced on the GPU app then :)

Click here for the full article.

MarkJ
Joined: 28 Feb 08
Posts: 437
Credit: 139002861
RAC: 0

RE: A quote from the NVIDIA

Message 87133 in response to message 87132

Quote:

A quote from the NVIDIA web site...

"Distributing computing applications such as Folding@home, Einstein@home, GPUGRID and SETI@home have also seen performance improve by orders of magnitude through NVIDIA CUDA technology. Recently Adobe Creative Suite 4 became the latest application to speed up performance and enhance features by moving processing to the GPU."

I guess Bernd is well advanced on the GPU app then :)

Click here for the full article.

The GPUGRID home page also has a link to an NVIDIA press release, dated 17 December, about Einstein having a GPU app...

Einstein@Home
NVIDIA CUDA technology will soon be powering the third most widely used BOINC project, Einstein@Home, which uses distributed computing to search for spinning neutron stars (also called pulsars) using data from gravitational wave detectors.

“We expect that porting Einstein@Home to GPUs will increase the throughput of our computing by an order of magnitude,” said Bruce Allen, director of the Max Planck Institute for Gravitational Physics and Einstein@Home Leader for the LIGO Scientific Collaboration. “This would permit deeper and more sensitive searches for continuous-wave sources of gravitational waves.”

Click here for the press release.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2955296552
RAC: 719878

RE: RE: "Distributing

Message 87134 in response to message 87133

Quote:
Quote:
"Distributing computing applications such as Folding@home, Einstein@home, GPUGRID and SETI@home have also seen performance improve by orders of magnitude through NVIDIA CUDA technology."

Einstein@Home
NVIDIA CUDA technology will soon be powering the third most widely used BOINC project, Einstein@Home, which uses distributed computing to search for spinning neutron stars (also called pulsars) using data from gravitational wave detectors.

“We expect that porting Einstein@Home to GPUs will increase the throughput of our computing by an order of magnitude,” said Bruce Allen, director of the Max Planck Institute for Gravitational Physics and Einstein@Home Leader for the LIGO Scientific Collaboration. “This would permit deeper and more sensitive searches for continuous-wave sources of gravitational waves.”

Click here for the press release.


At least the 17 December release (the second one quoted) says CUDA will "soon" be powering Einstein, and says that Einstein "expects" increased throughput by "an" order of magnitude. The full release offers some substantiation for the speed claim in a couple of footnotes citing SETI benchmarking runs.

The 18 December release is pure hype. Claiming that Einstein [has] seen ... orders of magnitude, for a Beta driver release with no benchmarking data, is irresponsible.

ulenz
Joined: 22 Jan 05
Posts: 27
Credit: 17897764
RAC: 0

I tested the GPU-client for

I tested the GPU client for Folding@home while also running the CPU client of SETI@home on the same PC. The results were disappointing:

The power consumption of the PC increased dramatically. Running it 24/7 would be an expensive pleasure.
The fans of the CPU, GPU and power supply unit had to run at full speed to keep the system cool, so the PC became as noisy as a Sun SPARC server. Not the right choice for your office or home.

Therefore I don't think these GPU clients will be a real success in the near future. Our existing GPU systems were not designed for distributed computing but for PC games and some workstation tasks. Running at 100% 24/7 is just a different job.

Intel Q9300 quad-core, 2500 MHz, 4096 MB RAM, GeForce 9800 GT, Vista Ultimate 64-bit, Ubuntu 10.10 64-bit
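To put a rough number on that "expensive pleasure", here is a back-of-the-envelope sketch. The 200 W figure is borrowed from Winterknight's post further down the thread; the 0.20 per kWh electricity price and 24/7 operation are assumptions for illustration only:

# Rough yearly electricity cost of 24/7 crunching on one extra device.
# Both the 200 W figure (from Winterknight's post below) and the
# 0.20-per-kWh price are assumptions for illustration only.
def yearly_cost(extra_watts=200.0, price_per_kwh=0.20, hours_per_day=24):
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

print(f"Extra cost: about {yearly_cost():.0f} per year")  # ~350 at these assumptions

At those assumptions the extra card alone adds roughly 1750 kWh, or about 350 (in whatever currency the kWh price is quoted in), per year.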

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 722700165
RAC: 1154419

RE: Our existing

Message 87136 in response to message 87135

Quote:
Our existing GPU systems were not designed for distributed computing but for PC games and some workstation tasks. Running at 100% 24/7 is just a different job.

I don't quite agree here. CUDA and similar APIs are now aggressively and enthusiastically advertised by video card vendors (see the NVIDIA press release as an example), so we users have a right to expect (e.g. in terms of warranty) current hardware to be designed for the load levels caused by DC on GPUs.

Whether it's a user-friendly experience is another issue, of course. But think about the success of DC on the PS3, which IIRC also consumes around 200 W or more under full load (at least the early versions did). That didn't stop people from doing DC on their PS3s either.

Maybe many users will use GPU apps only a few hours per day instead of 24/7, realizing that the increased performance still gives an overall boost to their contribution. That would still make GPU apps worthwhile.

CU
Bikeman

Winterknight
Joined: 4 Jun 05
Posts: 1445
Credit: 375901498
RAC: 131604

I have to agree with ulenz.

I have to agree with ulenz, in that I don't think graphics cards are good enough for long periods of crunching BOINC projects.
Plus, despite claims of 10x (or more) performance, the best I have been able to verify on SETI and SETI Beta at a similar AR is about four times faster.

But to get that, one core of the CPU has to be taken off crunching so that it can feed the GPU(s).
And there is the serious question of power consumption. My quad uses just over 100 W when on but idle, and approximately 150 W at 100% load. But a high-end graphics card uses over 200 W when loaded. So for the same cost as a high-end graphics card one could build a new quad, with minimal components, and use less power.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 722700165
RAC: 1154419

Even given these figures (4

Even given these figures (4-fold performance for an additional 200 W), the work done per watt-hour would still be better with the GPU than without it, right?

CU
Bikeman
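For what it is worth, here is a back-of-the-envelope check of that ratio, a sketch only. The 150 W loaded quad and the roughly 200 W extra for the GPU are Winterknight's figures; whether the "4-fold" refers to the whole host (Bikeman's reading) or to a single core with one core reserved as a feeder (Winterknight's measurement) is left open, so both cases are computed:

# Back-of-the-envelope work-per-watt-hour comparison, using figures from
# the thread (150 W loaded quad, ~200 W extra for the GPU, "4x" speed-up).
# Whether the 4x refers to the whole host or to a single core is an open
# question in the thread, so both cases are shown.
CPU_WATTS = 150.0          # quad at 100% load (Winterknight's figure)
GPU_EXTRA_WATTS = 200.0    # additional draw of a high-end card (same post)
cpu_rate = 1.0             # arbitrary unit: work rate of the fully loaded quad

# Case A: the GPU delivers 4x the *whole host's* output (Bikeman's reading).
rate_a = 4.0 * cpu_rate
# Case B: the GPU delivers 4x *one core's* output, and one of four cores
# is taken off crunching to feed it (Winterknight's description).
rate_b = (3.0 / 4.0) * cpu_rate + 4.0 * (cpu_rate / 4.0)

baseline = cpu_rate / CPU_WATTS
for label, rate in [("4x whole host", rate_a), ("4x one core", rate_b)]:
    efficiency = rate / (CPU_WATTS + GPU_EXTRA_WATTS)
    print(f"{label}: {efficiency / baseline:.2f}x the CPU-only work per Wh")

Under the whole-host reading the GPU improves the work done per watt-hour by roughly 70%; under the per-core reading it comes out about 25% worse, so the answer hinges on which baseline the 4x was measured against.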

Klimax
Joined: 27 Apr 07
Posts: 87
Credit: 1370205
RAC: 0

Given size of memory on cards

Given the size of the memory on the cards, I think all the data should fit there and no CPU core would be needed to feed it, so where is the limitation?
