There was something in the validator that stopped working a couple of years ago, but went unnoticed until these tasks triggered it.
BM
_________________________________________________________________________
Does that potentially cause some uncertainty in the results from O3ASHF?
_________________________________________________________________________
The problem was already present in O3ASHF, but according to the validator logs it was never triggered. Also, it would lead to results being falsely rejected as invalid, not to results being falsely accepted as valid. A possible loss of credit and computing power, but not of a detection.
BM
_________________________________________________________________________
glad to hear it :)
Just need the validator to catch up now. Lots of tasks in the validator queue.
_________________________________________________________________________
@Bernd Now that the validator has been fixed, will you continue to send both Bu and BuB tasks?
_________________________________________________________________________
I'd like to know if that could be handled as well. It might be possible to restrict GPUs with 3 GB or more to Bu tasks and 2 GB GPUs to BuB, and thereby add to the overall output, as was the original intention.
I have 2GB GPUs doing BRP7 and I'd prefer for them to be doing O3AS.
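Just to illustrate what I mean, here's a rough sketch of that routing idea (Python, purely illustrative; the thresholds and names are made up by me, not anything from the actual Einstein@Home scheduler):

    # Hypothetical sketch of routing O3AS work by GPU memory.
    # The thresholds (BU_MIN_RAM_MB, BUB_MIN_RAM_MB) are assumptions,
    # not values taken from the project.

    BU_MIN_RAM_MB = 3 * 1024    # assume Bu tasks need roughly 3 GB of VRAM
    BUB_MIN_RAM_MB = 2 * 1024   # assume BuB tasks fit in roughly 2 GB of VRAM

    def pick_o3as_flavour(gpu_ram_mb):
        """Return which O3AS task flavour a GPU could be sent, if any."""
        if gpu_ram_mb >= BU_MIN_RAM_MB:
            return "Bu"    # 3 GB and above get the larger units
        if gpu_ram_mb >= BUB_MIN_RAM_MB:
            return "BuB"   # 2 GB cards still contribute via the smaller units
        return None        # not enough VRAM for either flavour

    for ram_mb in (1024, 2048, 3072, 8192):
        print(ram_mb, "MB ->", pick_o3as_flavour(ram_mb))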
I've been running a test machine which is now out of work. The results at the moment are:

Total          55
In Progress     0
Pending         4
Valid          44
Invalid         6
Error           0
Inconclusive    1
I don't know if there are still more repeat validations to be done, but if some more invalids get 'fixed' then I would be happy for all my hosts doing BRP7 to switch to O3AS. After all, BRP7 goes through periods of high invalid rates, which is quite frustrating.
Cheers,
Gary.
_________________________________________________________________________
While I'm having a bit of a gripe about BRP7 validation rates, there's a point that has been bothering me ever since Bernd commented, quite some time ago, that part of the BRP7 validation variability was due to "noisy data" from time to time, if I remember correctly. In other words, if the signal-to-noise ratio is low, it's likely that different OSes, compute libraries, drivers and so on may end up with different 'toplists', leading to validation inconsistencies where some results will win and others will lose.
In those cases, who is to say that the 'closer agreeing' pair of results was the best choice to save as the 'canonical' result? Why should the result deemed 'not quite close enough' get no credit at all for the work done?
It would be very nice if the validation process were smart enough to recognise when that type of situation had arisen and, instead of outright rejecting the 'not quite close enough' result, still award credit for the work done while ignoring it for the purposes of selecting the canonical result. There would be no change to the final outcome as far as the database is concerned, but at least some credit for the unlucky machine that got excluded through no fault of its own.
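To make the idea concrete, here's a rough sketch (Python, purely illustrative; the tolerances, the toplist_distance metric and the 'credit only' verdict are my own inventions, not the project's actual validator logic):

    # Illustrative sketch only -- not the real BOINC / Einstein@Home validator.
    # Idea: when per-result 'toplists' disagree slightly (e.g. noisy data),
    # still pick the closest-agreeing pair for the canonical result, but grant
    # credit to a near-miss result instead of flagging it invalid.

    from itertools import combinations

    STRICT_TOL = 1e-2   # assumed tolerance for full "valid" agreement
    CREDIT_TOL = 1e-1   # assumed looser tolerance for "credit, but not canonical"

    def toplist_distance(a, b):
        """Mean absolute difference between two equally long candidate toplists."""
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    def validate(results):
        """results: result_id -> toplist; needs at least two results."""
        # The closest-agreeing pair defines the canonical result.
        best_pair = min(combinations(results, 2),
                        key=lambda p: toplist_distance(results[p[0]], results[p[1]]))
        canonical = best_pair[0]
        verdicts = {}
        for rid, toplist in results.items():
            d = toplist_distance(toplist, results[canonical])
            if d <= STRICT_TOL:
                verdicts[rid] = "valid"
            elif d <= CREDIT_TOL:
                verdicts[rid] = "credit only"   # work acknowledged, not used as canonical
            else:
                verdicts[rid] = "invalid"
        return canonical, verdicts

    # Two results agree closely; a third is a near miss and gets credit only.
    results = {
        101: [10.000, 9.500, 9.100],
        102: [10.001, 9.501, 9.101],
        103: [10.050, 9.450, 9.150],
    }
    print(validate(results))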
Cheers,
Gary.
_________________________________________________________________________
I don't really care about any BuB invalids being flipped to valid and credited, but I don't think I ever saw any corrections. Just as well to let sleeping dogs lie, in my opinion.
Re: BRP7, probably not the best thread for it, being the O3AS thread and all. I'll post a response from petri about it in the BRP7 thread for you.
_________________________________________________________________________
+1
_________________________________________________________________________
move BRP7 discussion here pls: https://einsteinathome.org/goto/comment/232693
_________________________________________________________________________