My GTX 1060s have very reliably crunched these things 2 at a time in ~30m. I recently noticed the run time has dropped to ~24m. The shorter-running work seems to be validating just fine, so there is no problem, but I wonder what has changed: different data, an improved app, or did I hit a magic setting while OCing one card? The other 1060 hasn't experienced this speedup yet.
Well, the box in question has gone back to ~30m runs, so I conclude it was not the computer or a different app; it must have been an anomaly in the data. I wonder why.
It looks like my GTX 1060 at stock speed (1923 MHz) is doing two work units at a time in 28 minutes 28 seconds, for whatever that is worth. That is on Win7 64-bit with the 385.28 drivers.
Hi Betreger,
Take a look at the task IDs for the tasks that you have run recently and they will look something like this "LATeah0040L_412.0_0_0.0_1044160_0".
We recently finished the LATeah0039L series and have moved onto the LATeah0040L series and the first bunch of workunits for the new series seems to have gone very quickly. I noted something similar with the LATeah0039L series where, as soon as the next set of numbers in the task ID after the LATeah0039L went above about 1000 the runtimes got longer by about a minute for me (2 at a time on a GTX 1080). The example task ID I posted above has the number 412 in it, and it is still going about a minute faster for me than the last tasks in the LATeah0039L series. I suspect run times will be back to what we saw before switching over to LATeah0040L as soon as that number goes above 1000 again.
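If you want to spot this pattern in your own results, a minimal sketch like the following can pull the series name and the first number out of a task ID. The field names and the ~1000 threshold are assumptions taken from the observations above, not anything documented by the project:

```python
# Hypothetical sketch: split an Einstein@Home task ID such as
# "LATeah0040L_412.0_0_0.0_1044160_0" into its underscore-separated fields.
# "series" and "sequence" are assumed names; the meaning of the remaining
# fields is not documented here.
def parse_task_id(task_id: str) -> dict:
    series, rest = task_id.split("_", 1)
    fields = rest.split("_")
    return {
        "series": series,            # e.g. "LATeah0040L"
        "sequence": float(fields[0]),  # the number observed to matter (~1000 cutoff)
        "fields": fields,
    }

info = parse_task_id("LATeah0040L_412.0_0_0.0_1044160_0")
# Tasks early in a series (sequence below ~1000) were the ones seen running faster.
early_in_series = info["sequence"] < 1000
```

Running this on recent task IDs and grouping run times by series and sequence should make the boundary effect visible.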
Something else that I have noted is that the speedup seems to occur in the last 10% of the runtime for the task. Where the last group of tasks in the LATeah0039L series were staying at 89.997% for about two minutes, these new tasks in the LATeah0040L series are staying there for less than a minute for me.
I don't know the details of what is going on, or what the rest of the numbers in the task ID (other than the last digit) mean, though. Perhaps someone with more knowledge about this than myself can go into more detail :)
Regards,
Kellen
This happens regularly at the boundary between sets of data. The first small fraction of the next set has faster run times, which rapidly rise to near the typical value. If you look carefully you can see a continued rise after that, but most of the deviation from typical takes place in a rather small fraction of the first work distributed.
The most recent such transition I see in my own returned work was when WUs starting LATeah0038L, with the next field reading higher than 1000, transitioned to WUs starting LATeah0039L. The earliest-distributed work had that next field in the one- and two-digit range. If your system otherwise has very consistent run times, the effect is easy to see. But most of us don't notice most of the time, and there is a steady stream of posts from people noticing for the first time and asking what has changed. Usually the correct answer is "nothing".
[Edit: I was typing when Kellen Shenton posted. I'll leave mine as written]
Thanx for the replies. All seems to be as it was: one GTX 1060 a bit under 29m (that was the one that got the fast ones), the other a bit over 30m as always. I got excited hoping for an improved app.