RE: I understand that even
Quote:
I understand that even for Folding@Home, the workunits crunched by the GPU beta clients are different from those for the other platforms. But they did manage to do visualization and GPU processing at the same time now, so that you can still use your PC's video capabilities while crunching, which should improve acceptance.

That's quite amazing. I've been told that this is impossible.

Actually running a second Application (and Workunits) on the same project is quite possible with BOINC, though I don't know how many projects actually do this (I could imagine Leiden Classical). Erik Korpela is visiting the AEI this week; he told us that SETI@home will run Astropulse as a second Application some time soon. We're currently looking into implementing it; it might become an option for Einstein@home, too. This way we could actually run a "stream computing" search in parallel.
BM
RE: RE: I understand that
At least for the ATI variant. It seems to be a recent change, though, after Folding@Home's GPU client switched from a DirectX-driven API to the "CAL" abstraction layer:
http://folding.stanford.edu/English/FAQ-ATI2#ntoc23
CU
Bikeman
RE: RE: RE: I
I see. The information I got apparently referred only to CUDA / NVidia.
BM
RE: RE: RE: RE: I
Apparently it works for NVidia as well; see this FAQ entry http://folding.stanford.edu/English/FAQ-ATI2#ntoc10 which refers explicitly to both ATI and NVidia visualizations.
CU
Bikeman
RE: There is no standard
Quote:
There is no standard for GPU computing (yet). Picking one particular model: how many Einstein@home participants actually have an NVidia Quadro card that they want to use for crunching?

You don't need the expensive Quadro cards. Any NVidia card built around a G80, G92, G94 or G200 chip supports CUDA (in slightly different versions, though). Even the onboard graphics chips (GeForce 8200 IGPs) should support CUDA, but I'm not sure about that (it wouldn't make much sense anyway, I think). So there are about 70-80 million CUDA-enabled GPUs out there (according to NVidia presentations).
Quote:
As far as I understand the Folding@home application is based on Brook or some similar higher-level language; the Einstein@home application is (currently) not. Our "Fstat engine" could be thought of as an FFT for narrow frequency bands. It's actually possible to use standard FFT implementations to calculate it, but in the current framework this would be rather inefficient.

You don't have to use an FFT. But you do have to recode and optimize your application for CUDA: e.g., you should keep a certain number of stream processors busy at a time (parallelism), avoid too much branching, ...
CUDA is not a high-level language with a fixed set of functions, but rather a C interface with some GPU-specific synchronisation routines. I'm not saying it's easy to program, but it should be possible to run any algorithm on a GPU. The question is whether it makes sense (can the algorithm be parallelized enough, and is it computation-bound rather than I/O-bound?).
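To make that concrete, here is a minimal made-up sketch (not Einstein@home or Folding@home code) of what "C plus GPU-specific synchronisation routines" looks like: a block-wise sum where every thread handles one element and __syncthreads() coordinates the threads of a block. The names sum_blocks and BLOCK are illustrative only.

// Minimal CUDA sketch: block-wise reduction. Compile with nvcc.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define BLOCK 256  // threads per block; a multiple of the warp size keeps the SMs busy

// Each block reduces BLOCK input values to one partial sum.
__global__ void sum_blocks(const float *in, float *block_sums, int n)
{
    __shared__ float cache[BLOCK];           // fast on-chip memory shared by the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                         // GPU-specific barrier: wait for all threads of the block

    // Tree reduction in shared memory; the branching stays uniform for most steps.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        block_sums[blockIdx.x] = cache[0];   // one result per block
}

int main()
{
    const int n = 1 << 20;
    const int blocks = (n + BLOCK - 1) / BLOCK;

    float *h_in = (float *)malloc(n * sizeof(float));
    float *h_partial = (float *)malloc(blocks * sizeof(float));
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_partial;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_partial, blocks * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    sum_blocks<<<blocks, BLOCK>>>(d_in, d_partial, n);

    cudaMemcpy(h_partial, d_partial, blocks * sizeof(float), cudaMemcpyDeviceToHost);

    float total = 0.0f;
    for (int i = 0; i < blocks; ++i) total += h_partial[i];
    printf("sum = %f (expected %d)\n", total, n);

    cudaFree(d_in); cudaFree(d_partial);
    free(h_in); free(h_partial);
    return 0;
}

The point is exactly the one made above: the kernel is plain C, but you only win if thousands of threads are kept busy and the per-thread work is compute-bound.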
Apparently, I'm now running
Apparently, I'm now running Einstein on BOINC on a quad-core and Folding on the GPU of a GeForce 7500 with CUDA-enabled drivers (there are a lot of GPUs supporting CUDA, even from the 7200 on). And I can watch movies and play 3D games (though Folding works slower) and run Einstein (again, Folding runs slower, because it needs one CPU to feed the GPU with data, and it works faster when BOINC is paused).
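As a rough illustration of the "one CPU to feed the GPU" point (a made-up sketch, not Folding@Home's actual code; the kernel name process_chunk and the chunk size are invented), this is the shape of a host loop that ties up a CPU core: the host thread prepares each chunk, does synchronous copies to and from the card, and launches the kernel, so it is busy or blocked instead of crunching a BOINC workunit.

// Host-side CUDA sketch of a GPU client's feeding loop. Compile with nvcc.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void process_chunk(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i];       // stand-in for the real per-element work
}

int main()
{
    const int chunk = 1 << 18;             // elements per work chunk (illustrative)
    const int n_chunks = 100;

    float *h_buf = (float *)malloc(chunk * sizeof(float));
    float *d_buf;
    cudaMalloc(&d_buf, chunk * sizeof(float));

    for (int c = 0; c < n_chunks; ++c) {
        for (int i = 0; i < chunk; ++i)    // the CPU prepares the next chunk
            h_buf[i] = (float)(c + i);

        // Both copies are synchronous: the CPU core sits here feeding and
        // waiting for the GPU rather than running another workunit.
        cudaMemcpy(d_buf, h_buf, chunk * sizeof(float), cudaMemcpyHostToDevice);
        process_chunk<<<(chunk + 255) / 256, 256>>>(d_buf, chunk);
        cudaMemcpy(h_buf, d_buf, chunk * sizeof(float), cudaMemcpyDeviceToHost);
    }

    printf("done, last value = %f\n", h_buf[chunk - 1]);
    cudaFree(d_buf);
    free(h_buf);
    return 0;
}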
The difference between Einstein and Folding is in the computation model. Folding chose the SMP model as its basic platform, which makes it easier to scale work between CPUs, or kernels on GPUs, or even different machines in a cluster. Besides, it relies on standard SMP libraries that are common on *nix OSes, though not common on Windows and new for GPUs. But it works. I can even see a 3D model of what I'm working on. The only confusing factor is that the GPU core is still working on beta workunits that are produced only to test the core and to compare results between GPU WUs and SMP WUs.
I think we should not break any computation models now, at least until S5R4 ends. We can parallelize our work between CPU cores, and that's enough for now. BOINC is the more stable platform, as I see it. We should watch what happens with Folding (will it be useful?) and only then think about a new programming model.