During Dec 2011 - Feb 2012 the number of active hosts almost doubled, for no apparent reason. The increase was not visible in the number of active or total participants. Since about mid-March 2012 these additional active hosts have almost vanished, and our total computing power, which peaked above 600 TFLOPS, dropped below 500 TFLOPS. My guess is that one or two computing clusters used Einstein@Home for burn-in, but I'm not sure. It could also be that some college machines were running BOINC almost exclusively during the holidays, or something similar. Whatever happened, these additional computers took a good deal of tasks with them when they vanished.
Anyway, an interesting number to watch on the server status page is the number of unsent tasks for S6LV1. At currently 80,000 it's pretty high, but it has been dropping for the past three weeks and should continue to do so. The lower this number, the faster "paired" tasks are sent out, and the shorter the wait for validation of your "pending" tasks should be.
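To see why, a back-of-the-envelope model helps. This is only a toy calculation in Python; it assumes tasks leave the pool roughly in the order they were generated, and the dispatch rate below is made up for illustration, not a real project figure:

    # Toy model of how the unsent-task pool relates to the "pending" wait,
    # assuming a roughly FIFO pool and a steady dispatch rate.
    unsent_tasks = 80_000    # size of the unsent S6LV1 pool (from the post)
    dispatch_rate = 20_000   # tasks sent per day -- an assumed number

    # Little's law: average time a task waits in the pool is
    # pool size divided by throughput.
    wait_days = unsent_tasks / dispatch_rate
    print(f"~{wait_days:.1f} days until a 'paired' task is sent")

    # Shrink the pool and the wait shrinks proportionally:
    print(f"~{40_000 / dispatch_rate:.1f} days with a 40,000-task pool")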
Whatever happened, these additional computers took a good deal of tasks with them when they vanished.
A while back, SETI had more than its usual trouble keeping heavy users with long queues supplied with work, and judging by comments posted on the forums, at least some of them took Einstein work to keep busy. They generally preferred SETI, so they may have let their Einstein work expire once SETI availability improved.
In some commonly used BOINC client variants, the automatic fallback-project function (implemented by setting a project's resource share to zero in the preferences) has an awkward characteristic: once the preferred project drops to zero jobs in queue, the client attempts to load the full requested queue length from the fallback project. That could tempt even the virtuous not to work off the full backlog when the preferred project comes back online.
I consider this binging-on-the-fallback-project behavior a bug, but I believe the scheduler developer has posted that he implemented it in response to a specific request.
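For illustration, the binge looks roughly like this. This is a simplified Python sketch of the behavior as described above, not the actual BOINC client code; all names and numbers here are made up:

    class Project:
        def __init__(self, name, queued_days=0.0):
            self.name = name
            self.queued_days = queued_days   # work on hand, in days

        def request_work(self, days):
            print(f"requesting {days:.1f} days of work from {self.name}")
            self.queued_days += days

    def fetch_work(preferred, fallback, requested_queue_days):
        if preferred.queued_days > 0:
            # Normal case: top up the cache from the preferred project only.
            shortfall = requested_queue_days - preferred.queued_days
            preferred.request_work(max(shortfall, 0.0))
        else:
            # Preferred project is dry: the client asks the fallback
            # project for the FULL requested queue length at once -- the
            # "binge". When the preferred project comes back online, the
            # host sits on days of fallback work it may let expire.
            fallback.request_work(requested_queue_days)

    seti = Project("SETI")          # preferred project, currently dry
    einstein = Project("Einstein")  # fallback project (resource share = 0)
    fetch_work(seti, einstein, requested_queue_days=10.0)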
My suggestion doesn't match the active/total trends described above well enough to be the whole answer, but I suspect it was part of the transient.