Modern graphics cards contain GPUs with programmable vertex and fragment processors, which can in effect be used as a set of parallel processors (see www.gpgpu.org for papers on the subject).
Could E@H make use of this parallelism? Or even use the GPU as just a single separate processor?
High-level programming languages, such as Cg developed by NVIDIA (www.nvidia.com), are now available for porting code from regular CPU programs to GPU vertex and fragment programs. These programs can be invoked through OpenGL calls, which are supported on a wide range of operating systems.
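For a concrete picture, here is a minimal sketch of what loading and binding a Cg fragment program from C/OpenGL might look like, assuming the standard Cg runtime headers. The kernel, entry-point name, and setup function are purely illustrative, not E@H code:

/* Sketch: invoking a Cg fragment program from C/OpenGL.
 * Assumes the Cg runtime (Cg/cg.h, Cg/cgGL.h) is installed. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* A trivial Cg "kernel": squares each element of an input texture. */
static const char *kernel_src =
    "float4 square(float2 coords : TEXCOORD0,        \n"
    "              uniform samplerRECT data) : COLOR \n"
    "{                                               \n"
    "    float4 v = texRECT(data, coords);           \n"
    "    return v * v;                               \n"
    "}                                               \n";

void setup_fragment_program(void)
{
    CGcontext ctx = cgCreateContext();
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, kernel_src,
                                     profile, "square", NULL);
    cgGLLoadProgram(prog);
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
    /* Drawing a screen-sized quad after this runs "square" once per
     * pixel, i.e. once per element of the input array. */
}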
I realise that not all computational algorithms can be recast in a form suitable for the GPU's streaming capabilities, but surely any increase in processing power would be a bonus.
Steve
The SETI@home project was working on this feature together with NVIDIA, but progress didn't look good.
They got the FFT to run on the NVIDIA chip, but not very fast: only about 1/4 the speed of the CPU. The problem seems to be the slow speed of moving data from GPU memory back to main memory.
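To make the bottleneck concrete, the readback step boils down to something like the following sketch; the dimensions and output buffer are illustrative:

/* Sketch of the readback step: after a compute pass the result
 * lives in GPU memory and must cross back to the host. */
#include <GL/gl.h>

void read_back_results(int width, int height, float *out)
{
    /* Pull the RGBA float framebuffer contents into main memory.
     * This stalls the pipeline and moves width*height*4 floats
     * over the relatively slow GPU-to-host path. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, out);
}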
So to use the GPU efficiently they would need to do ALL of the analysis (not just the FFT) on the GPU, but that will require better high-level-language support.
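One way to keep intermediate results on the card between passes, sketched here as an assumption rather than anything SETI@home actually did, is to copy each pass's output from the framebuffer straight into a texture and feed that texture to the next pass, so data only returns to main memory once at the very end:

/* Sketch: GPU-to-GPU copy of a pass's output into a texture,
 * avoiding a host round trip between passes. Sizes and the
 * rectangle-texture target are illustrative. */
#include <GL/gl.h>

void capture_pass_output(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    /* Framebuffer -> texture copy stays entirely on the card. */
    glCopyTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0,
                        0, 0,   /* destination offset in texture */
                        0, 0,   /* source origin in framebuffer  */
                        width, height);
}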
Possibly the 'Brook' project (a GPU streaming language being developed at Stanford) will help here in the future.
Greetings from Bremen/Germany
Jens Seidler (TheBigJens)