In April I got my first GTX1060. Running two tasks at a time on this app, it consistently took about 29 minutes through yesterday. Today I noticed run times have dropped to about 23 minutes. The work seems to be validating, so that is good, but I have to ask: what has changed?
That application is my sole diet on three machines. I've not spotted a recent shift.
Compared to most previous Einstein GPU applications in my experience (which is all on Windows with Nvidia hardware), this one has an unusually strong dependence on CPU support from the host system. Perhaps you have had a change affecting that?
For a few tasks, I noticed similar behavior. From time to time, it seems, the raw data adds some variation to the crunching process.
However, I'd be interested to know whether a single task can be repeated on the same system for testing purposes - for example, if I suspect an older GPU driver allows shorter crunching times than the newest one. Because of the raw-data variation, any influence of the system or driver is difficult to pin down unless the same task is crunched again on an otherwise unchanged system.
AFAIK nothing has changed on that box.
This looks like a normal variation in runtime due to the data and parameters used. We recently (within the last two days) started a new dataset. It is possible to extract a task and rerun it to do timing tests with different driver versions. There are some experts on the forums here who surely can do that, as I'm not sure I can find the time.
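For anyone who does have the time, here is a minimal sketch of a timing harness, assuming the task's slot directory (app binary plus all input files) has already been copied out and the exact command line the BOINC client used has been captured. The directory name, binary name, and arguments below are placeholders, not the real ones.

```python
#!/usr/bin/env python3
"""Rough timing harness for re-running one extracted task.

Everything here is an assumption about your local setup: WORK_DIR is a copy
of the task's slot directory (app binary plus all input files), APP_BINARY
and APP_ARGS are placeholders for the real app file name and the command
line the BOINC client used (visible in the client's event log / slot dir).
"""
import shutil
import subprocess
import time
from pathlib import Path

WORK_DIR = Path("task_copy")      # copied slot directory (hypothetical path)
APP_BINARY = "einstein_app.exe"   # placeholder: the science app's real file name
APP_ARGS = []                     # placeholder: the real command-line arguments
RUNS = 3                          # repeat a few times to average out noise

def run_once(run_dir: Path) -> float:
    """Run the task once in a fresh copy of the work dir; return elapsed seconds."""
    if run_dir.exists():
        shutil.rmtree(run_dir)                      # start from a clean copy
    shutil.copytree(WORK_DIR, run_dir)
    cmd = [str((run_dir / APP_BINARY).resolve())] + APP_ARGS
    start = time.monotonic()
    subprocess.run(cmd, cwd=run_dir, check=True)    # raises if the app fails
    return time.monotonic() - start

if __name__ == "__main__":
    times = [run_once(Path(f"run_{i}")) for i in range(RUNS)]
    print("elapsed (s):", [f"{t:.0f}" for t in times],
          "mean:", f"{sum(times) / len(times):.0f}")
```

Run it once per driver version on an otherwise idle machine and compare the means; the repeat runs are there to smooth out normal system noise.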
Ah, that could very well explain it. That box is crunching data downloaded yesterday; the other GTX1060 is a day behind because it splits its time with another project.
My earlier answer was therefore unreliable: I recently increased my requested work queue depth, and I have not yet completed any tasks with an ID starting LATeah0036L... instead of the preceding LATeah0035L...
I've suspended the remaining unstarted 35L units, so I should soon have an idea whether my systems see a substantial elapsed-time difference at this boundary.
But perhaps this is not the boundary in question.
I've now completed a fair number of LATeah0036L tasks. An informal review suggests to me that tasks distributed at the very beginning, with the value after the LATeah0036L as low as 4 and on up through the high two digits, have appreciably shorter completion times than the general run of late LATeah0035L tasks on the same machine under the same conditions. However, the value of this part of the task ID rose quickly in the early period of 36L task distribution, soon reaching values over 500. By then the time advantage had shrunk considerably, though a modest advantage remained. As in just a couple of days of distribution we have already hit 788 in this field, I imagine any performance advantage from this source will be rather brief.
This field had hit about 1200 by the end of 35L distribution.
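For anyone who wants to check this more carefully than my informal review, here is a rough Python sketch of the kind of grouping I did by hand. The input file, its format, and the task-name pattern are all assumptions on my part, so adjust to taste.

```python
#!/usr/bin/env python3
"""Group elapsed times by the numeric field after the data-file name.

Assumes a hypothetical tab-separated file times.tsv with two columns,
task_name and elapsed_seconds (e.g. copied from the project's Tasks page).
The task-name pattern is also an assumption based on names that begin
like LATeah0036L_788 ...; adjust the regex if yours differ.
"""
import csv
import re
import statistics
from collections import defaultdict

NAME_RE = re.compile(r"^(LATeah\d+L)_(\d+)")   # dataset name, then the field of interest

buckets = defaultdict(list)                    # (dataset, field rounded to 100s) -> run times
with open("times.tsv", newline="") as f:
    for task_name, elapsed in csv.reader(f, delimiter="\t"):
        m = NAME_RE.match(task_name)
        if m:
            dataset, field = m.group(1), int(m.group(2))
            buckets[(dataset, field // 100 * 100)].append(float(elapsed))

for (dataset, lo), times in sorted(buckets.items()):
    print(f"{dataset} {lo:4d}-{lo + 99:4d}: n={len(times):3d}  "
          f"median={statistics.median(times):7.0f} s")
```

If the trend I described is real, the median run time for the low 36L buckets should come out visibly below the late 35L buckets.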
It was fun while it lasted.
I've noticed the opposite. My times have increased on my 7950. Nothing on that box has changed.
EDIT: Grrr, damn technology! Not sure what happened. No driver crash or anything unusual, but a reboot fixed whatever was the matter.