Quote:
I have opted in for the Beta and it seems these tasks have a much higher priority than standard units. That's no problem, as long as the Beta (besides testing) crunches real data (not dummies). Is that so?
It is definitely real data, so 100% of the computing time goes to real science. By flagging the app "beta" we make sure that the second result (the "wingman's" result) used for validation is generated by a non-beta application, which gives us the extra safety to let beta apps crunch real data.
Quote:
I have also noticed that when running a single WU on a Tesla K20, the GPU usage seems to fluctuate from 0% to >90% every few seconds. Not sure if that is normal behavior.
Depends on the sampling interval. There are small time intervals when the CPU is processing stuff, and if the sampling interval of the NVIDIA tool is short enough to fit into one of them, this will result in a 0% reading.
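For illustration, here is a minimal sketch of how such a utilization reading can be taken yourself, assuming the NVML Python bindings (pynvml / nvidia-ml-py) are installed; the 2-second interval and device index 0 are just example values, not anything taken from the Einstein@Home apps. With a short enough interval a sample can land entirely inside a CPU-only phase and show 0% even though the task is progressing normally.

import time
import pynvml  # NVML Python bindings (assumption: installed via "pip install nvidia-ml-py")

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust the index on multi-GPU hosts

try:
    while True:
        # Utilization as reported by NVML over its own short internal sampling window.
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU {util.gpu:3d}%   MEM {util.memory:3d}%")
        time.sleep(2)  # 2-second polling interval, as used in the posts here
finally:
    pynvml.nvmlShutdown()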
Quote:
It is definitely real data, so 100% of the computing time goes to real science. By flagging the app "beta" we make sure that the second result (the "wingman's" result) used for validation is generated by a non-beta application, which gives us the extra safety to let beta apps crunch real data.
Thanks for the clarification, this is the perfect combination.
Quote:
Depends on the sampling interval. There are small time intervals when the CPU is processing stuff, and if the sampling interval of the NVIDIA tool is short enough to fit into one of them, this will result in a 0% reading.
Sampling interval is set to 2 seconds, and the fluctuation occurs almost every other (or every few) ticks. I have checked the standard non-beta units and they show similar behavior, though the load seems somewhat more stable there. If you wish (and think this is not OK and worth checking), I can create some graphs.
Quote:
If you wish (and think this is not OK and worth checking), I can create some graphs.
I suppose this is quite normal; my HD 4000 showed it as well. It's part of the reason you get higher throughput with 2 concurrent WUs on almost any GPU here.
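For anyone who wants to try 2 concurrent WUs per GPU: this can be set project-wide via the Einstein@Home web preferences (GPU utilization factor) or per host with a BOINC app_config.xml. Below is only a sketch - the app name einsteinbinary_BRP6 and the 0.5/0.2 values are assumptions for illustration, so check the app names in your own client_state.xml before using it. The file goes into the Einstein@Home project directory, followed by "Read config files" in the BOINC manager.

<app_config>
  <app>
    <name>einsteinbinary_BRP6</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

With gpu_usage set to 0.5 the client schedules two of these tasks on each GPU at once; cpu_usage only tells the scheduler how much CPU to reserve per GPU task.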
HBM wrote:
Not sure yet. As I said, the GW search needs most of our attention at the moment. For those participating in the beta test this should make no difference (except you will drop in the stats because everyone else will be catching up ;-) )
... except for the Intel GPU guys, who stopped receiving betas after the first few of them last week. Any particular reason for this? Is it just that the full WUs take so long on those GPUs?
Some preliminary run times (in seconds) for the Beta run, for comparison:
GPU: NVIDIA GeForce GTX 660 Ti
CPU: Intel Core i7-860
2 x BRP6 v1.39: 17,300 - 18,400
2 x BRP6 v1.52: 11,200 - 11,400
GPU: NVIDIA Tesla K20c (ECC off, GPU clock 758 MHz)
CPU: 2 x Intel Xeon E5-2650 v3
2 x BRP6 v1.39: 11,200 - 12,700
2 x BRP6 v1.52: 6,600 - 9,200
Here you can see that the variance with the Beta is much higher. I don't know why yet.
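Taking the midpoints of those ranges, a quick back-of-the-envelope calculation (just a sketch using the numbers quoted above) puts the v1.39 to v1.52 run-time reduction at roughly a third on both hosts:

# Rough speedup estimate from the run-time ranges posted above (all values in seconds).
ranges = {
    "GTX 660 Ti": {"v1.39": (17300, 18400), "v1.52": (11200, 11400)},
    "Tesla K20c": {"v1.39": (11200, 12700), "v1.52": (6600, 9200)},
}

for gpu, r in ranges.items():
    old = sum(r["v1.39"]) / 2  # midpoint of the v1.39 range
    new = sum(r["v1.52"]) / 2  # midpoint of the v1.52 range
    print(f"{gpu}: about {100 * (old - new) / old:.0f}% less run time with v1.52")
    # prints roughly 37% for the GTX 660 Ti and 34% for the K20c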
Here are some graphs, FYI.
Tool used: HWiNFO64 (data retrieved via NVML), sampling interval 2 seconds
GPU: Tesla K20c (ECC off, 758 MHz)
[Graphs: single WU, v1.39 and v1.52]
-----
Here it is in graph form - this is on the Tesla K20c.
-----
HOST: Intel Core i3-4130 + AMD Radeon R9 280 (CPU running 1 GW unit):
BRP6 v1.41: 21,600 sec (5 GPU tasks running simultaneously)
BRP6 v1.52: 13,000 sec (5 GPU tasks running simultaneously)
Improvement: 40%!
-----
I have updated the above graph with more data and other GPUs.
-----
I only ran 2 so far.
About 15% faster on a GTX 640 (roughly 25k seconds vs. 30k seconds), but the CPU time was reduced to a third (~2k seconds instead of ~6k).
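Just as a rough sketch with those rounded numbers, it is worth separating run-time reduction from throughput gain, since the two percentages differ slightly:

# GTX 640, approximate per-WU run times from the post above (seconds).
old_s, new_s = 30_000, 25_000  # v1.39 vs v1.52 (rounded)

time_reduction = (old_s - new_s) / old_s   # fraction less wall-clock time per WU
throughput_gain = old_s / new_s - 1        # fraction more WUs per day

print(f"run time:   {time_reduction:.0%} shorter")   # about 17%
print(f"throughput: {throughput_gain:.0%} higher")   # about 20%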