All WUs on RTX 2070 Error Out

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

There are reports of a high number of 2080 Ti cards failing on the NVIDIA forums. No definite numbers on that. Hope it's not true.

CElliott
Joined: 9 Feb 05
Posts: 28
Credit: 1048923710
RAC: 1869970

My system aborted 1140 WUs (1.20 Gamma-ray pulsar binary search #1 on GPUs, mostly LATeah1034Ls) starting at 12:47 AM EST.  The RTX 2070 hit a 104V and aborted both it and a 1034L it was processing at the same time.  Then at 12:58 AM EST the GTX 1070 finished two WUs correctly but proceeded to flush the rest of the WUs in the system at a rate of about 4-5 a minute.  Interestingly, the RTX 2070 continued processing 2 WUs every 28 minutes until 03:12:58 AM EST.  The GTX 1070 had aborted all the WUs by 03:07:55 AM EST, so there was no work left to process.  E@H will not send any more work, saying, "reached daily quota of 22 tasks."  Interestingly enough, when I opened the S@H floodgates to allow work from it, BOINC/S@H proceeded to successfully process one WU on each GPU about every five minutes, WITH NO REBOOTS OR RESTARTS OF BOINC.  It just worked.

I flushed all the 104* WUs for several days to avoid just this problem, but it took a while, so I stopped doing it.  Maybe I should return to that practice.

I use the computer to heat the house in the winter, so my workroom is simulating Nome, Alaska.

Why is it that the time to process an E@H work unit almost exactly doubled on October 31?

archae86
Joined: 6 Dec 05
Posts: 3161
Credit: 7274195057
RAC: 1861798

CElliott wrote:
Why is it that the time to process an E@H work unit almost exactly doubled on October 31?

It did not, on my systems. There was a transition very close to October 31 from issuing what I call high-pay work units to low-pay work units, but while the difference in processing time between the two types is substantial, it is nowhere near double.

So if you actually saw doubling, there was something about your system specifically, not about Einstein. The easiest way to produce a true doubling would be a transition from processing one work unit at a time per GPU to two at a time (a configuration I generally term 1X versus 2X): at 2X each task takes roughly twice as long in elapsed time, even though the GPU completes about the same number of tasks per hour. This behavior is most easily controlled by altering the GPU utilization factor in the project preferences section of the Einstein@Home website.  However, you can also change it with a configuration file.
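For the configuration-file route, the file in question is BOINC's app_config.xml, which lives in the Einstein project directory under your BOINC data folder (typically projects/einstein.phys.uwm.edu). Here is a minimal sketch for running 2X on the Gamma-ray pulsar binary search; I am assuming the application name hsgamma_FGRPB1G, so check client_state.xml or the task properties on your own machine for the exact name:

<app_config>
  <app>
    <!-- application name as reported in client_state.xml -->
    <name>hsgamma_FGRPB1G</name>
    <gpu_versions>
      <!-- 0.5 of a GPU per task, so two tasks share one GPU (2X) -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve one CPU core per GPU task -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

After saving the file, tell the client to pick it up with Options -> Read config files in the BOINC Manager, or restart the client. Keep in mind that app_config.xml and the website's GPU utilization factor are separate mechanisms, and the file overrides the website value for the apps it names, so pick one method and stay with it to avoid confusion.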

DanNeely
Joined: 4 Sep 05
Posts: 1364
Credit: 3562358667
RAC: 0

On my 1080 there's about a 50% difference between fast and slow WUs: roughly 20 vs. 30 minutes when run in batches of 3.
