I haven't been keeping up with this subject so I apologise if this has been asked before. Is there an ATI equivalent of NVidia's CUDA? If so, what, where and when? If not, why not? It must be in an FAQ somewhere but the internet is . . . big. Thanks.
Part of the reason there is a CUDA/NVidia one and not an ATI one is that NVidia put programmers on the BOINC code to help develop the CUDA application. The other video cards would have to be done on the side by the BOINC programmers, who are already busy trying to get other things done.
Folding seems to have gotten it done; they are using both NVidia and ATI cards. I do not know whether the ATI one is a CUDA app, though, as I do not have one of those cards.
Folding@Home made their own applications for ATI; they do not have official support from ATI, more like one ATI developer who likes to help them out in his free time.
BOINC and plenty of its projects got an offer of hardware from ATI, but ATI couldn't provide support in porting over the applications and ironing out the bugs; the projects would have to do that themselves. Nvidia did offer to port over the applications and help iron out the bugs, so that's why they were chosen first.
I think NVidia is seriously trying to get a foot in the door when it comes to High Performance Computing. BOINC projects would just be showcases and testing grounds for their technology aimed at research in industry and academia. AMD might miss a chance there.
CU
Bikeman
The "Extreme CUDA processing?" thread appears in most BOINC projects with exactly the same wording. You may be right, in my Einstein mailbox I found an advertisement for a Tesla minisupercomputer produced by nVidia.
Tullio
I too would bring them over to Einstein if there were a way to do it while keeping the CPUs and the video cards crunching full time. I do not want a CPU crunching at only 10% just so I can crunch with a video card.
The problem at the moment is that the task has to be updated in videoRAM, and this doesn't happen by magic: the CPU is used for that. So the present solution is to dedicate one of your CPUs to working only with the GPU. That CPU or core is only partially used; the rest of it is free.
This is done because when the task in videoRAM has to be written back to disk and exchanged for another task, you want that to happen as quickly as possible. The GPU can only run at full whack or not at all; there's no way yet to gradually increase its use. So if the GPU has to wait for the CPU to free up enough resources to send the data from disk to videoRAM, there's a good chance the task will time out and exit with an error.
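To illustrate why the CPU stays involved, here is a minimal CUDA sketch (my own illustration, not code from any BOINC project): allocating videoRAM, uploading the task data, queueing the kernel, and downloading the results are all calls executed on a CPU core.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Stand-in for the real science kernel: just scales every element.
__global__ void scale(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);                              // CPU call: reserve videoRAM
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice); // CPU drives the upload

    scale<<<(n + 255) / 256, 256>>>(dev, n);              // CPU queues the kernel

    // CPU drives the download (and implicitly waits for the kernel to finish).
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    printf("first element: %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}

None of those cudaMalloc/cudaMemcpy calls run on the GPU; while they execute, and while the CPU waits for the GPU, a slice of a core is busy, which is why reserving part of a core for the GPU application makes sense.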
Not a problem, AFAIK. I run GPUGRID (CUDA), SETI (CUDA) and FOLDING@HOME (GPU) all together on my AMD dual-core (X2) 4200+ computers with 9600GSO GPUs. I am also attached to several CPU applications, including EINSTEIN, COSMOLOGY and QMC, on these computers at the same time, and BOINC always selects two of these CPU applications to run along with a GPUGRID or SETI GPU application.
The BOINC Tasks window displays the fraction of CPU used for the GPU applications, and it is generally about 0.05, so the remaining 0.95 of the CPU core seems to be available for use by a CPU application. Since FOLDING is not a BOINC project, it always grabs 50% of the GPU time and some of a CPU core, but both BOINC and FOLDING seem to have no problem running together.
Note that SETI (CUDA) hangs up on some WUs, so it requires a lot of babysitting, while GPUGRID (CUDA) seems to be a more mature application, similar to FOLDING@HOME (GPU). I hope that EINSTEIN (CUDA), if it appears, does not have the kind of rough edges that SETI (CUDA) does. If you want to try a BOINC CUDA application, I recommend you start with GPUGRID (CUDA).
In summary, there may be some overhead penalty in running GPU and CPU applications on the same CPU core, but it does not seem to be large, and the GPU does not capture a CPU core, IMHO.
The number displayed in the BOINC Manager window is an estimate; it ranges from 0.02 to 0.05 and is not at all accurate.
You have to look in the Task Manager or system monitor to see the actual CPU usage of the applications. SaH on my Q9300 was 1 to 3%, while GPU Grid is running at a solid 22% ... and this is of the system as a whole ... meaning that, in essence, one whole CPU core is taken up managing the GPU processing ...
On my i7 it is a little better, in that each GPU task (I have three running) takes 7%, meaning I lose 21% of CPU power to run three GPU Grid tasks ... if this could be lowered, the total system throughput could be increased ...
A new GPU Grid application has been promised, though the best of them that I have seen had a load on the i7 of 3-4%, which is also fairly high ... several of us have made suggestions as to how this might be changed, but we are still waiting ...
Thanks, Paul. You are quite right. My task manager shows GPU Grid hogging almost 40% of one 4200+ core. Too bad it could not be as efficient as FAH, which runs closer to 1%.
Well, version 6.62 is running at under 1% on all systems. I have not gotten to the GPU Grid boards yet, so I am not sure if they made the switch, though all the tasks I have gotten on the i7 are now 6.62 tasks. The change of application means that, though the CPU usage is where I want it, some are unhappy because the GPU efficiency may have been lowered by as much as 17% ...
The debate was raging last night ... :)
Instead of hours, the CPU time on GPU Grid tasks is now down to about 10 min per task ... much more tolerable ...
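For what it's worth, a known way for a CUDA application to shed that kind of CPU load is to switch from the runtime's default synchronisation (which can busy-poll the GPU from a CPU core) to blocking synchronisation. Whether that is what the 6.62 application actually changed is purely my assumption; the sketch below only shows the mechanism.

#include <cstdio>
#include <cuda_runtime.h>

// Stand-in kernel that keeps the GPU busy for a while.
__global__ void busyKernel(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 10000; ++k)
            x[i] = x[i] * 1.000001f + 0.000001f;
}

int main()
{
    // Must be set before the CUDA context is created. With this flag the
    // calling CPU thread sleeps during synchronisation instead of spinning.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    const int n = 1 << 20;
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemset(dev, 0, n * sizeof(float));

    busyKernel<<<(n + 255) / 256, 256>>>(dev, n);
    cudaDeviceSynchronize(); // CPU thread now blocks here rather than busy-waiting

    printf("done\n");
    cudaFree(dev);
    return 0;
}

With the default flags, that same cudaDeviceSynchronize() can show up as ~100% of one core in the Task Manager even though the CPU is doing no useful work, which matches the numbers reported above.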
Even though the GPU task is running at less than 1%, is there any significant dropoff in CPU efficiency in terms of increased time to finish the CPU task? I would think we are talking maybe 20% slower(?)
Additionally, would an increase in L1 or L2 cache help with the overload problem discussed earlier? I have a pair of Q6600 quads with some nifty cache numbers, so I wonder if those who have lots of L2 cache might be at an advantage here.