Keep in mind that the estimated time to completion is just that, an estimate; what matters is the actual time needed to complete a task.
If you're up for a more technical explanation, here goes.
Different projects work in different ways depending on the age of the server software they run.
Einstein is running an old version of the server software that only provides an estimate of the work needed to complete a task. It is then up to your local BOINC client to convert this into an estimated time, using a CPU benchmark as a starting point. BOINC then uses something called a "duration correction factor" (DCF) to adjust the estimate up or down as tasks complete. If a task takes much longer than estimated, the DCF is changed to immediately reflect the longer time for future tasks. If a task completes in a shorter time than estimated, only a small change to the DCF is made.
So, for example, if a task is estimated at 10 hours but runs for 20 hours, the DCF will be changed so that all tasks with the same estimated work will now show 20 hours as the new estimated time to completion. If it were the other way around, I would expect the estimate to end up at 19 hours or so and then continue down as more tasks complete.
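To make that concrete, here is a minimal sketch in Python of an asymmetric DCF update of the kind described above. It illustrates the behaviour only; it is not the actual BOINC client code, the real rules and constants differ, and the function name update_dcf is made up for this example.

```python
def update_dcf(dcf, estimated_hours, actual_hours, decay=0.1):
    """Asymmetric duration correction factor update (simplified sketch).

    An underestimate raises the DCF immediately; an overestimate
    only nudges the DCF down a little each time a task completes.
    """
    ratio = actual_hours / estimated_hours    # how far off the raw estimate was
    if ratio > dcf:
        return ratio                          # task ran long: jump straight up
    return dcf + decay * (ratio - dcf)        # task ran short: decay slowly

# The 10-hour example above: one 20-hour task doubles the DCF,
# so future tasks with a 10-hour raw estimate display 20 hours.
dcf = update_dcf(1.0, estimated_hours=10, actual_hours=20)
print(dcf)  # 2.0

# The other way around: a task showing 20 hours (10h raw x DCF 2.0)
# that finishes in 10 hours only steps the DCF down to 1.9,
# so the displayed estimate drops to about 19 hours.
dcf = update_dcf(2.0, estimated_hours=10, actual_hours=10)
print(dcf)  # 1.9
```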
Now, if you run more than one type of task for Einstein, there will still only be one DCF applied to all estimates, and therefore you could end up in a scenario where FGRP4 is underestimated and needs a bigger DCF while GPU tasks are overestimated and need a smaller one. This will usually lead to a situation where estimates go up and down a lot as different types of task complete, as the sketch below illustrates.
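Reusing the hypothetical update_dcf sketch from above with made-up ratios (CPU tasks running at twice their raw estimate, GPU tasks at half), a single shared DCF never settles:

```python
# Hypothetical numbers, not measured Einstein runtimes.
dcf = 1.0
for task_type in ["cpu", "gpu", "cpu", "gpu", "cpu", "gpu"]:
    actual = 2.0 if task_type == "cpu" else 0.5  # hours; raw estimate is 1 hour
    dcf = update_dcf(dcf, estimated_hours=1.0, actual_hours=actual)
    print(f"after a {task_type} task: DCF = {dcf:.2f}")
# The DCF jumps to 2.00 after every CPU task and drifts down to 1.85
# after every GPU task, so the displayed estimates for both task types
# keep swinging instead of converging.
```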
Newer server software versions handle the estimation of the time needed to run a task on the server side and keep separate estimates for every type of task. They tend to be more accurate once enough tasks have been completed.
The estimated time to complete a task is used to calculate how much work you have in your cache and whether it's time to ask for more work, according to what you have set in your preferences for how much work to keep on hand.
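In simplified terms, the decision looks something like the sketch below. Again, this is an illustration rather than the client's actual work-fetch logic, and should_request_work is a made-up name:

```python
def should_request_work(raw_estimates_hours, dcf, cache_days):
    """Ask for more work only if the corrected estimates for the tasks
    already in the cache add up to less than the buffer set in preferences."""
    buffered_hours = sum(h * dcf for h in raw_estimates_hours)
    return buffered_hours < cache_days * 24

# Three tasks with 5-hour raw estimates and a 1-day cache setting:
print(should_request_work([5, 5, 5], dcf=1.0, cache_days=1))  # True: 15h < 24h
# After the DCF has been driven up, the same cache looks "full",
# matching the "work queue full" behaviour described later in this thread.
print(should_request_work([5, 5, 5], dcf=2.0, cache_days=1))  # False: 30h > 24h
```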
As Gary said in his first reply, the run times for FGRP4 work on your i7 950 are way longer than they should be, so something seems to be wrong. Right now there are two tasks from the start of the month showing more normal times; has anything changed since then?
If you ignore the estimates for a while, do your completion times on tasks from other projects match those of comparable systems on those projects?
... everything that has been suggested would affect WUs from all the projects and not just this one. I don't see any great differences between run times and CPU times for WUs from other projects, nor do I see any great variance in the actual run time against the initial estimate, and E@H is the only project where estimated times get changed whenever a work unit completes with a longer run time than the current estimate.
It does come down again if WUs take less time than the new estimate, but very slowly. Say the estimate was set at 20 hours and the next WU finishes in the expected time; the new estimate might be 19 hours. There doesn't appear to be any weighting applied against a rogue WU that takes a long time when most WUs are completing in around the originally estimated time.
This modification of the estimated time must have some impact on the work manager and its requests for new WUs. Since noticing this issue I have seen all of my projects getting fewer WUs, as the work manager is not requesting them, usually giving the "work queue full" message. I can only ascribe this to the massive increase in estimated time that I reported.
The section I have underlined is actually a very good description of the way BOINC runtime estimates would be expected to behave for this project, and for very few others. It's because Einstein still uses a very old version of the BOINC server software. Most other projects use BOINC server software which manages Runtime Estimation on the server: Einstein uses "The old system" described in the first section of that paper. One problem is that any fast-running tasks for other Einstein applications will drive estimates downwards, and then a slow task from the application which is causing problems will drive the estimate back up for all tasks.
If it's just one task type which is running a long way outside its expected runtimes, it might be best to exclude that type in preferences until you've completed your planned hardware changes.
[Edit at preview stage: I see Holmis has been thinking along the same lines while I've been typing - we do that sometimes. But I'll post anyway so you have a choice of explanations.]
Thanks for the explanation. That helps me to understand what is happening. The batch where I noticed the problem on this machine had an estimated time of just under 5 hours, and the first two tasks in the batch had already run more than twice the estimate when I posted. I didn't see the finish times of these tasks, but some of the subsequent tasks were running in a shorter time, and these may be the ones you have seen.
As far as I'm aware, nothing has changed to cause this, but it may well be that my system is struggling, although I see similar effects with FGRP4 on other machines with a much higher spec. All this system does is run WUs from a variety of projects. I'm hoping the problems go away when I replace the CPU and motherboard in another system and put that system's old motherboard and CPU into this one.
Maybe some downtime on all my machines will help, as I am off to the Caribbean for a week.
Once again, thanks to you and everybody else who has offered advice. It was all gratefully received.
George