I am running Binary Radio Pulsar Search on my Ulefone smartphone with an 8-core ARM64 CPU, and there are plenty of them. I am also running SETI@home tasks on the same CPU, Einstein@home tasks on the Windows 10 PC with its GTX 1050 Ti, SETI@home on my Linux HP laptop, and GPUGRID (both GPU and CPU) on my main Linux box with its GTX 750 Ti.
Tullio
That is also my experience with staggered start times on a W10/NVIDIA host.
The new data file looks like it will benefit from staggered start times.
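For anyone who wants to induce such a stagger deliberately, one simple approach is to suspend one of the two tasks sharing the GPU for a minute or two and then resume it. The sketch below is only an illustration of that idea, assuming the BOINC command-line tool boinccmd is installed and authorized to talk to the local client; PROJECT_URL and TASK_NAME are placeholders to be replaced with the values shown by boinccmd --get_tasks. A suspended GPU task usually restarts from its last checkpoint when resumed, so keep the pause modest.

```python
#!/usr/bin/env python3
"""Induce a start-time offset between two GPU tasks sharing one card.

Sketch only: suspends one named task briefly, then resumes it, so the
two concurrent tasks stop reaching their follow-up stages together.
PROJECT_URL and TASK_NAME are placeholders; take the real values from
'boinccmd --get_tasks' on your host.
"""
import subprocess
import time

PROJECT_URL = "https://einsteinathome.org/"       # placeholder
TASK_NAME = "LATeah0104M_example_task_name_0"     # placeholder
OFFSET_SECONDS = 120                              # size of the stagger to induce


def boinccmd(*args: str) -> None:
    # Talks to the local BOINC client; the client must allow RPC from this user.
    subprocess.run(["boinccmd", *args], check=True)


if __name__ == "__main__":
    # Pause one of the two running tasks ...
    boinccmd("--task", PROJECT_URL, TASK_NAME, "suspend")
    time.sleep(OFFSET_SECONDS)
    # ... then let it continue, now offset from its partner.
    boinccmd("--task", PROJECT_URL, TASK_NAME, "resume")
```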
archae86 wrote: ... I suspect taking the trouble to induce offset is unusually unfruitful at the moment.
Yes, you are probably quite correct about that, since I made my observations at a time when there was a significant follow-up stage, and I haven't repeated them for the recently completed data files that showed no observable follow-up.
However, the current data file LATeah0104M.dat gives rise to tasks where the follow-up stage is back again, so I'm going to continue to encourage an 'offset' whenever I notice the lack of one.
Old habits die hard! :-).
Cheers,
Gary.
Gary Roberts wrote: However, the current data file LATeah0104M.dat gives rise to tasks where the follow-up stage is back again ...
Agreed. My comment was remarkably ill-timed, probably.
I promoted a few WU computations from the new file to run out-of-order on two of my three machines. Elapsed time appeared to be only about two-thirds of that for the immediately preceding work.
I only timed the follow-up stage on a single WU. It was running 2X on a GTX 1070, in a quad-core host that also has a 1060 running 2X. The follow-up stage took about 45 elapsed seconds. The GPU temperature did not drop, so primary work on the "other task", plus some GPU work on the task in question, added up to enough to keep the GPU as busy as it had been before. I suspect that, had the other task also been in follow-up, the GPU temperature would have taken a big drop (as it did on past data files).
We may expect to see follow-up elapsed time and compute characteristics vary systematically within a data file (and task elapsed time too, by a lesser percentage). I think this was true in the older days, before data files behaving like LATeah1019L (and a dozen or so preceding it) got us used to much more uniformity, a lack of appreciable follow-up, and long total elapsed times.
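The temperature observation suggests a hands-off way to spot the case where both 2X tasks land in follow-up together: poll the card and log any spell where the load sags. A minimal sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH; the 60% threshold and 10-second poll interval are arbitrary illustrations, not figures measured in this thread.

```python
#!/usr/bin/env python3
"""Log moments when GPU load sags (e.g. both 2X tasks in follow-up at once).

Sketch only: assumes nvidia-smi is available; threshold and poll interval
are arbitrary starting points.
"""
import subprocess
import time

THRESHOLD_PCT = 60   # assumed definition of "no longer busy"; tune for your card
POLL_SECONDS = 10
GPU_INDEX = 0        # first card reported by nvidia-smi


def gpu_sample(index: int = GPU_INDEX) -> tuple[int, int]:
    # Query temperature and utilization; one CSV line per installed GPU.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    temp, util = (int(x) for x in out.strip().splitlines()[index].split(","))
    return temp, util


if __name__ == "__main__":
    while True:
        temp, util = gpu_sample()
        if util < THRESHOLD_PCT:
            print(f"{time.strftime('%H:%M:%S')}  load sag: {util}% util, {temp} C")
        time.sleep(POLL_SECONDS)
```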
archae86 wrote: Agreed. My comment was remarkably ill-timed, probably.
Not at all! Your comments, as usual, are extremely well-timed! :-).
Quite often I don't immediately see the blindingly obvious, even when it's right there in front of me.
Until you made the comment, it hadn't occurred to me that the lack of a significant period between 89.997% and 100% would probably have removed a lot of the benefit of having 'offset' tasks. So, thanks for 'setting me straight'! :-).
So, please don't stop making such comments. I always find them extremely useful - as were the other points you have made, as well.
Cheers,
Gary.
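To put a rough number on that: the most an offset can recover is the stretch where both tasks would otherwise sit in follow-up together, so the possible saving is bounded by the follow-up duration itself. A back-of-the-envelope sketch, using the single 45-second follow-up timing reported above and a purely hypothetical 20-minute total elapsed time:

```python
# Back-of-the-envelope bound on what offsetting two 2X tasks can save.
follow_up_s = 45       # the one follow-up timing reported earlier in the thread
total_s = 20 * 60      # hypothetical total elapsed time per task; not from the thread

# Offsetting can at best avoid the interval where both tasks are in
# follow-up at the same time, so the saving per task is bounded by follow_up_s.
max_saving_pct = 100 * follow_up_s / total_s
print(f"offset benefit is at most about {max_saving_pct:.1f}% per task")

# With no appreciable follow-up stage (follow_up_s close to zero),
# the benefit vanishes -- which is the point made in the exchange above.
```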
Looks like there's another shortage now.
Richie_9 wrote: Looks like there's another shortage now.
Agreed. My most recent standard production task (not a resend) arrived about three hours ago.
Looks like it was just a file boundary. For my hosts there was about a four hour hole in availability between the last delivery of work from LATeah0104N and my first from LATeah0104O.
Shortage going on again.