Bernd Machenschalk wrote:
Indeed, the new tasks should earn you 20k. Fixed, also for previously generated workunits.
Bernd Machenschalk wrote:
maeax wrote:
Bernd, this could easily mean previously generated but not yet completed, submitted, and/or validated workunits, especially since, if you look at your task list, they still have the 1-4k credit assigned to them and not 20k.

In short, all tasks granted credit from now on will get you 20k, regardless of how old the workunits are. The credit of tasks that were already granted credit will not be changed; that would be too much effort. But tasks that are pending, or newly replicated from older workunits, will get the new credit.
BM
_________________________________________________________________________
Yes, I'm starting to see the 20k tasks now. Many thanks :)
Bernd Machenschalk wrote:
In short, all tasks granted credit from now on will get you 20k [...]

Thank you for the clarification!
Proud member of the Old Farts Association
Thanks Bernd and team, the new O3AS WUs seem to be running nicely.
A combination of me upgrading my Fedora installation, the latest 565.77 release of the Nvidia Linux drivers becoming available for it [https://rpmfusion.org/Howto/NVIDIA], and the new low-frequency O3ASHF1 WUs having started all came together to make me test E@H under these new circumstances.
Confirming what others have previously reported, and to add a further data point: the newer WUs take approx. 2.34x longer on my 2080 Super. The doubling of credit from 10,000 to 20,000 therefore almost balances that out.
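Whether the doubled credit really offsets the longer runtimes is simple arithmetic; a quick sketch using the figures quoted above (the 2.34x runtime factor is this post's own measurement, not an official number):

```python
# Does doubling the credit offset WUs taking ~2.34x longer to run?
old_credit, new_credit = 10_000, 20_000
runtime_factor = 2.34  # measured on a 2080 Super, per the post above

# Credit earned per hour, relative to the old WUs:
rate_ratio = (new_credit / old_credit) / runtime_factor
print(f"relative credit rate: {rate_ratio:.2f}")  # ~0.85, i.e. almost balanced
```

So the new WUs pay out roughly 85% of the old credit rate on this card, which matches the "almost balances that out" observation.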
David Crick wrote:
A combination of me upgrading my Fedora installation [...]
This is exactly what I also see on my cards: tasks take quite a bit longer, some as much as 3x!
Also, TThrottle has cut my GPU contribution from ~2/3 to ~1/4 (about 25%), since the GPU heats up more with the new tasks, so watch out for cooling as well!
non-profit org. Play4Life in Zagreb, Croatia, EU
It looks like the problem with old GPUs (OpenCL 1.2 only, without OpenCL 2.0 support) in the O3AS project still has not been solved?
Some users (including me) wrote about this a year and two years ago, for all the previous GW apps: the E@H GW application launches and seems to work correctly, but it is EXTREMELY slow. And by extremely, I mean hundreds or even a few thousand times slower than could be expected from a comparison of the hardware's technical characteristics.
For example, an AMD RX 570 (GCN 4.0 microarchitecture, supports OpenCL 2.0) completes one task in about 1.5 hours, given sufficient CPU support to avoid GPU starvation (in the statistics of my computers the execution time is usually about twice that, but only because they process 2 GW tasks in parallel, or sometimes 1 GW task + 2 BRP7/MeerKAT tasks).
Whereas on, for example, an AMD HD 7850 / HD 7870 / HD 7950 / R7 265 / R9 270 / R9 280 etc. (Pitcairn and Tahiti chips, GCN 1.0 microarchitecture, OpenCL 1.2 only), progress is extremely slow: less than 1% per full day of computation, i.e. a few thousand times slower!
The application does not appear to freeze or hang: the logs show the progress of calculations and checkpoints being saved, and hardware monitoring shows low (20-40%, plus reduced operating frequencies) but non-zero GPU usage, together with a full load on 1 CPU core/thread.
Example of the log (stderr.txt) after >=1 day of computation:
https://pastebin.com/6x7dm8Cg
And it keeps adding a few dots to the log every hour and writes checkpoints at regular intervals, whereas a full task, as I understand it, corresponds to 9000 such dots.
So everything looks like normal, correct work, except that it is moving abnormally slowly.
Yet at the hardware level, in terms of theoretical FP performance, these GPUs differ by less than a factor of 2:
AMD RX 570 = 2048 shader processors @ 1.3 GHz = 5.3 TFLOPS theoretical peak FP32 performance
HD 7870 = 1280 shader processors @ 1.1 GHz = 2.8 TFLOPS theoretical peak FP32 performance
5.3 / 2.8 ~= 1.89x
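For reference, those peak numbers follow from the usual shader-count x clock x 2 formula (2 FP32 ops per clock per shader via fused multiply-add, the standard convention for GCN); a minimal sketch with the specs quoted above:

```python
# Theoretical peak FP32 = shaders * clock (GHz) * 2 ops per FMA, in TFLOPS
def peak_fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000.0

rx570 = peak_fp32_tflops(2048, 1.3)    # ~5.3 TFLOPS
hd7870 = peak_fp32_tflops(1280, 1.1)   # ~2.8 TFLOPS
print(f"RX 570 / HD 7870 = {rx570 / hd7870:.2f}x")  # ~1.89x
```

So on raw arithmetic throughput alone the HD 7870 should be roughly half as fast, not thousands of times slower.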
The hardware architecture is also very close: these are different iterations/sub-versions of the same AMD GCN architecture. The VRAM is almost the same as well (GDDR5 on a 256-bit bus; only the memory clock speeds differ, by a few dozen percent).
The only significant difference I am aware of is that the newer GCN versions support OpenCL 2.0, whereas the first iteration supported only OpenCL 1.2.
The OS and driver versions used to compare the RX 570 and HD 7870 are also exactly the same.
I also tried using these and other similarly outdated but still well-functioning GPUs (in fact, even faster than some latest-generation iGPUs) to crunch BRP7/MeerKAT. That works without visible problems and is quite fast for such old GPUs, BUT it produces a huge number of validation errors once the results are sent back to the server. Moreover, this is inconsistent and unstable: in one period only 1-3% of tasks fail validation, then it suddenly exceeds 50%, then drops back to a few percent for a few days, then jumps to 30%, and so on. And this despite nothing changing on my side at all; the computer was not even restarted in between.
P.S.
The same GPUs worked without any issues, and with near-100% validation rates, on a few other GPU DC projects when there was work for them: FGRPB1G here at E@H and OPNG1 at WCG (OpenPandemics, molecular dynamics for medical research). But none of these have had tasks to process for at least half a year or more.
Historically those old GCN 1.0 GPUs have not been able to process the work from the GW application; probably the app uses some required feature that's not supported by the GPU.
The app is actually hung. It's not just processing slowly; it's not processing at all. When BOINC gets no feedback about progress, it advances the progress counter at a minimal rate, in increments of 0.001%, but that doesn't mean the app is doing anything; it's just default BOINC behavior.
Each line showing only ".c" seems to indicate as much. If it were doing anything, you would see lines with longer output, like "........c" or ".......................c". You can see this in your RX570 results.
Stick to the BRP7 app for your old GPUs, since they seem to work there.
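The heuristic above (long dot runs = real work, bare ".c" lines = stalled) can be checked mechanically against a saved stderr.txt. A hypothetical sketch, assuming the ~9000-dots-per-task figure quoted earlier and a simple dots-per-line threshold of my own choosing:

```python
import re

def estimate_progress(stderr_text: str, dots_per_task: int = 9000) -> float:
    """Estimate task progress (%) by counting progress dots in the app's log."""
    dots = sum(len(run) for run in re.findall(r"\.+", stderr_text))
    return 100.0 * dots / dots_per_task

def looks_stalled(stderr_text: str, min_dots_per_line: int = 3) -> bool:
    """A log whose dot lines never exceed a few dots is likely a hung task."""
    runs = re.findall(r"\.+", stderr_text)
    return bool(runs) and max(len(run) for run in runs) < min_dots_per_line

# A healthy log shows long dot runs; the stalled GCN 1.0 logs show bare ".c".
healthy = "........c\n.......................c\n"
stalled = ".c\n.c\n.c\n"
print(f"healthy progress: {estimate_progress(healthy):.2f}%, stalled: {looks_stalled(healthy)}")
print(f"stalled progress: {estimate_progress(stalled):.2f}%, stalled: {looks_stalled(stalled)}")
```

This is only a diagnostic aid for eyeballing logs like the pastebin one above, not anything the BOINC client itself does.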
_________________________________________________________________________