I don't know why you got a value as low as 10.5K. Perhaps there is more variation in how much the GPU can save, and perhaps that was a task with little or no GPU contention. I'm only guessing.
Indeed, this is curious. Not all WUs are alike; there is some data dependency in the WUs' runtimes, but this is the most noticeable difference I've ever seen. I suggest waiting to see whether it validates, and then checking whether the wingman also spent a less-than-average runtime on it.
My humble suggestion is contention, yes, but which aspect of the GPU? GPU thread time per se? Memory on the graphics card? GPU/CPU bandwidth? Which option is selected for "Suspend GPU work while computer is in use"? .....
Those all vary widely. Then throw in a first-person shooter and it all becomes moot. :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
It wouldn't even need to be a game, would it? I'd imagine an animated 3D screensaver kicking in while he's sleeping would have some effect ...?
Cheers,
Gary.
The funny thing is that we have here an outlier (the 10k sec job) on the fast end. I can understand outliers on the slow end, but if playing a first-person shooter accelerates ABP1 jobs we are sending the wrong message to kids :-).
CU
Bikeman
With my young lads - "Online Call of Duty 4" or some such - the GPU temp can go up 20+ degrees real quick, real easy. :-)
I think it's polygon-rate related? [ thus scene complexity and rate of change thereof ]
Cheers, Mike.
@ Gary Roberts
My system is an Einstein-dedicated one. It has a quad-core CPU and two 9800GX2 cards (four GPUs in total), so every CPU core has one GPU.
One 9800GX2 is in a PCIe x16 slot and one in a PCIe x4 slot. There was a difference in run time earlier: the GPU in the PCIe x4 slot took 15.5k s and the one in the PCIe x16 slot took 14.3k s. Now I don't get times above 14.4k s for units claiming 90 cr, so there is some kind of speed-up.
100 cr claimed: 15.2k s
70 cr claimed: 10.7k s
92 cr claimed: 14k s
67 cr claimed: 10.1k s
All units were granted 250 cr.
The run times seem more consistent now. The last transition between the CUDA apps also gave me some weird CPU numbers.
In conclusion, I have to get a better mainboard with at least two PCIe x16 slots ;)
...And hey, don't tell me that I can leave the project if I don't like it. This is the most antisocial attitude I've heard of. So if you don't like to share the resources of this planet with others, maybe it's time for you to leave it!
I agree with that!!
No one told anyone to leave the project. XJR-Maniac was just quite rude without reason.
If you don't want Einstein CUDA tasks, deselect them in your preferences. Nothing easier than that.
Yes, I too am unsure how that deduction was made from Gary's post. :-)
In any case, the primary requirement for CUDA to yield significant benefit is that the problem must lend itself to massive parallelism ( ideally thousands of threads, plus other restrictions ). This is a basic reason ( along with issues like compiler technology ) for the variable success seen across apps.
The development here at E@H is quite cautious, with a considerable user pool feeding back via beta testing. CUDA is no exception. While not always successful ( a failure outcome is within the definition of testing ), one hopes to be able to generalise productively beyond the test participants. One can opt out of CUDA if it doesn't fly well enough; in fact that is likely to be a common response from those whose hardware is unsuitable for optimal CUDA use. Alas, as Oliver pointed out, without changing BOINC code ( not under E@H control ) a default opt-out setting was/is not available.
Cheers, Mike.
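To make the "thousands of threads" point concrete, here is a minimal toy CUDA sketch (purely illustrative, and nothing like the actual E@H application code): an element-wise operation over a million independent values maps naturally onto thousands of lightweight GPU threads, which is exactly the shape of problem where CUDA pays off. Workloads that cannot be split up this way see far less benefit.

```cuda
// Toy illustration only (not E@H code): an element-wise operation that
// maps naturally onto thousands of lightweight GPU threads.
#include <cstdio>
#include <vector>

__global__ void scale(const float* in, float* out, float factor, int n)
{
    // One thread per array element; the launch below covers ~1M elements.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main()
{
    const int n = 1 << 20;                       // ~1 million elements
    std::vector<float> host(n, 1.0f);

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_in, d_out, 2.0f, n);
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);           // 2.0 if all went well

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```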
My understanding is that you can opt out of CUDA work for E@H. The problem is how to do it without losing already-downloaded WUs.
Here is the approach I am taking:
#1. Disallow downloading of new E@H tasks (WUs).
#2. When all E@H tasks have been completed and uploaded, change the E@H preferences to disallow CUDA tasks.
#3. Allow new E@H tasks again.
You may ask why I am taking these steps. Mainly because E@H does not make efficient use of the GPU. I can see this simply by watching the Elapsed Time and the "To Completion" time on the Tasks tab: SETI@Home decrements the "To Completion" time by about 10~15 seconds for every second of CPU time, while E@H only decrements it by about 1~2 seconds for every second of Elapsed time.
I look forward to running E@H tasks on the GPU in the future, but currently it is not the best use of my hardware. My GPU is an NVIDIA GeForce 8800 GTS 512 and the mobo is an Intel D975XBX2 with an Intel Core 2 Quad at 2.40 GHz.
I'm not at all contesting your statement about the degree to which ABP1 CUDA currently uses the GPU, but you arrive at this conclusion for the wrong reason.
The rate at which the "time to completion" is diminishing is NOT a good indicator of app efficiency. When BOINC downloads a new workunit, it tries to predict its runtime, based on information embedded in the workunit itself and on statistics BOINC gathered from earlier WUs for the same project.
So the rate at which the "time to completion" changes for a job in execution is just a measure of how good this runtime prediction was. If BOINC made a good guess, the value will diminish at a rate of 1:1 and in the end the predicted runtime will turn out to be about right.
If the initial guess was way too high, BOINC will sense during the execution of the task that the progress (in %, as shown in the BOINC Manager) is running ahead of the prediction, and it will slowly correct the time to completion downwards.
If the initial guess was way too low, you will even see the "time to completion" going UP instead of down for some time during the execution of the task.
CU
Bikeman
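To illustrate the point, here is a deliberately over-simplified sketch (not the real BOINC client code, just an assumed toy model) of how a client might blend its initial runtime guess with the progress the task itself reports. When the guess was far too high, the displayed remaining time falls much faster than one second per elapsed second, regardless of how efficiently the app uses the GPU.

```cuda
// Simplified illustration only, NOT the actual BOINC client code.
// It just shows why the "to completion" field tracks prediction quality:
// the displayed value is nudged from the original guess towards the
// value extrapolated from the task's own measured progress.
#include <cstdio>

double remaining_estimate(double predicted_runtime,  // initial guess from WU info + past stats
                          double elapsed,            // seconds the task has been running
                          double fraction_done)      // progress reported by the app (0..1)
{
    if (fraction_done <= 0.0)
        return predicted_runtime;                    // nothing measured yet, keep the guess

    // What the task's own progress suggests is left.
    double extrapolated = elapsed * (1.0 - fraction_done) / fraction_done;

    // What the original prediction suggests is left.
    double from_guess = predicted_runtime - elapsed;
    if (from_guess < 0.0) from_guess = 0.0;

    // Blend: early on trust the guess, later trust the measured progress.
    double w = fraction_done;
    return (1.0 - w) * from_guess + w * extrapolated;
}

int main()
{
    // Guess was far too high (15000 s) for a task that really needs ~5000 s:
    // the displayed remaining time drops by ~3 s per elapsed second.
    for (double t = 500.0; t <= 2500.0; t += 500.0)
        printf("elapsed %5.0f s -> remaining %7.0f s\n",
               t, remaining_estimate(15000.0, t, t / 5000.0));
    return 0;
}
```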
@ Gary Roberts
My system is an Einstein-dedicated one. It has a quad-core CPU and two 9800GX2 cards (four GPUs in total), so every CPU core has one GPU.
As your computers are hidden, it wasn't possible for me to see what hardware you had. You didn't comment on how many cards or that they were duals so I just made the (wrong) assumption of a single GPU. I did warn that I don't have any NVIDIA experience and that I was just guessing :-).
So just ignore my comments about possible contention.
Cheers,
Gary.
Bikeman,
Thanks for the information. You may be right about the reason for the slow decrementing of the To Completion time; it just seemed a good indication to me, since the BOINC client is the same for both E@H and S@H. In any case, I'm flushing all the E@H tasks and will "disallow" GPU tasks for E@H until this gets resolved. I also hope that by then the E@H CUDA/GPU tasks will be shorter than 3 hours (a limit which the people at S@H appear to be using) so that E@H plays better with others on the GPU.
Tom
Our tests indicate that the upcoming ABP2 GPU tasks typically take ~0.6 hours per WU. This is still "just" a factor of 2-3 faster than the ABP2 CPU version, but after the ABP2 release we are going to concentrate on improving the GPU code.
Cheers,
Oliver
Einstein@Home Project