What performance is expected of the GPU applications? Comparable with the MilkyWay GPU app or not?
I have no idea of the MilkyWay "performance".
We have some highly experimental (and non-optimal) ABP1 CUDA apps that need more work to fit into the BOINC framework; they have shown a speedup of 1.5 (GeForce) to 3.5 (Tesla) compared to the standard ABP1 (Linux) app. The first CUDA release will probably show a similar speedup, but we'll keep working on optimization once we have it working with BOINC at all.
BM
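For a rough sense of what a 1.5x to 3.5x speedup means in practice, here is a small illustrative calculation. The 8-hour CPU runtime is a made-up placeholder, not a real ABP1 figure; only the speedup factors come from the post above.

```python
# Illustrative only: estimate GPU task times from the quoted speedups.
# cpu_hours is a hypothetical placeholder, not a measured ABP1 runtime.

cpu_hours = 8.0                              # assumed runtime of one ABP1 task on CPU
speedups = {"GeForce": 1.5, "Tesla": 3.5}    # figures quoted in the post above

for gpu, factor in speedups.items():
    gpu_hours = cpu_hours / factor           # projected runtime of one task on GPU
    tasks_per_day = 24.0 / gpu_hours         # projected daily throughput
    print(f"{gpu}: ~{gpu_hours:.1f} h per task, ~{tasks_per_day:.1f} tasks/day")
```

Under these assumptions a GeForce would finish a task in about 5.3 hours and a Tesla in about 2.3 hours; real numbers depend on the actual ABP1 task length.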
Hi,
please please make a beta version available ASAP, I need a pretext for buying a new GPU :)
Maybe measure performance in CS/hour? :) For example, how many CS can be "calculated" on a GTX260/280/285...?
please please make a beta version available ASAP, I need a pretext for buying a new GPU :)
Oh heck. Just buy it. Life is too short. Say you have to do benchmarking and profiling in advance of the app release. Cooling studies, vector optimisations, pfoffle metric adjustments, that sort of thing .... :-)
Quote:
why do I even bother crunching on the CPU any longer?
Because you love us? :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Plenty of willing testers here when you get it fitted to the BOINC framework.
Oh heck. Just buy it. Life is too short. Say you have to do benchmarking and profiling in advance of the app release. Cooling studies, vector optimisations, pfoffle metric adjustments, that sort of thing .... :-)
Cheers, Mike.
No, no, it must be something sensible, e.g. recharging my flux capacitor...
As for the other GPU projects out there: SETI -- no Linux (stock) app; GPUGRID -- OK, this is an alternative, but I don't want to join another project.
I think he meant CL ... :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Perhaps an explanation of what the acronyms stand for helps. :-)
OpenGL = Open Graphics Library
OpenCL = Open Computing Language
OpenGL is a standard specification defining a cross-language, cross-platform API for writing applications that produce 2D and 3D computer graphics.
OpenCL is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors.
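To make the OpenCL description above concrete: an OpenCL program expresses work as a kernel that runs once per work-item, and a host program launches it over a range of indices. The sketch below only mimics that model in plain Python; it makes no real OpenCL API calls, and the function names (`kernel_square`, `enqueue`) are invented for illustration.

```python
# Toy imitation of OpenCL's data-parallel model, NOT the real API.
# In OpenCL the kernel would be C-like source compiled for the device;
# here an ordinary Python function stands in for it.

def kernel_square(global_id, src, dst):
    """'Kernel': executed once per work-item, indexed by its global id."""
    dst[global_id] = src[global_id] * src[global_id]

def enqueue(kernel, global_size, *buffers):
    """'Host' side: launch one kernel instance per work-item."""
    for gid in range(global_size):
        kernel(gid, *buffers)

src = [1.0, 2.0, 3.0, 4.0]
dst = [0.0] * len(src)
enqueue(kernel_square, len(src), src, dst)
print(dst)   # [1.0, 4.0, 9.0, 16.0]
```

The point of the model is that the kernel body is written for a single index, and the runtime (here a trivial loop, on a GPU thousands of parallel threads) supplies the indices.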
It has taken me four long years to reach 400K on Einstein@Home.
I reached 400K in two weeks over at MW running on the GPU.
I thought: wow, why do I even bother crunching on the CPU any longer?
So I have been trying to reassess what to do. It all boils down to
which projects interest you the most.
Einstein@Home is still the most interesting project out there.
Find ET? Meh! What a waste of CPU power.
This did not answer the questions, just some thoughts.
Bill
My GTX 260 gets ~13,000 credits/day on GPUgrid.
http://boincstats.com/stats/user_graph.php?pr=ps3grid&id=16674
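Relating this back to the earlier question about credits per hour: a per-day rate converts directly, using the approximate GTX 260 figure quoted above.

```python
# Convert the quoted daily credit rate to an hourly one (approximate).
credits_per_day = 13000
credits_per_hour = credits_per_day / 24
print(round(credits_per_hour))   # prints 542
```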
So I am eagerly awaiting the E@H GPU beta app.
Michael
Team Linux Users Everywhere