I've finally received a pair of these >= 800Hz jobs
I've finally received a pair of these >= 800Hz jobs. They completed in about 76,000 seconds, far less than the 110,000 - 120,000 seconds that would be normal for this machine. So, there's definitely something strange here.
Dual Pentium III 866
RE: I've finally received a pair of these >= 800Hz jobs
My timing always sucks... I am only up to 779... :-(
RE: RE: I've finally received a pair of these >= 800Hz jobs
Brian, here's a look at my Mobile AMD64 3700 laptop's WUs running Windows and the work done so far:
RE: RE: RE: I've finally received a pair of these >= 800Hz jobs
Yeah yeah... rub it in... You got the credit boost from going above 799 and then the performance boost by going to 4.26... :-P on you too...
Well, to be honest, I hadn't looked at the credits for the 800s
Well, to be honest, I hadn't looked at the credits for the 800s until you mentioned it.
Calculating credit/hour from the benchmark says this host should get 14.16/hour.
After 407 Rosetta WUs (recent app), this host got 12.79/hour avg.
After 125 stock 5.27 SETI WUs it got 15.21/hour avg.
With 4 WUs >800 under app 4.15 it got 28.65/hour, and with >800 and 4.26 it yields 33.1125/hour. WOW
OK, now in fairness/full disclosure: the other project I've recently run is BOINC SIMAP, and this host is getting an avg. 22.45/hour there.
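
To put numbers on that, here is a minimal sketch assuming the classic BOINC benchmark-based claimed-credit formula (credits per day = 100 x the average of the Whetstone and Dhrystone scores, in GIPS); the benchmark values in it are made-up placeholders, not this host's actual scores.

    # Sketch of benchmark-based claimed credit, assuming the classic
    # BOINC formula; the benchmark values here are hypothetical.
    SECONDS_PER_DAY = 86400
    COBBLESTONE_FACTOR = 100  # credits per day for a 1 GIPS reference host

    def claimed_credit(cpu_seconds, whetstone, dhrystone):
        """Benchmark-based claimed credit for one task (benchmarks in ops/sec)."""
        avg = (whetstone + dhrystone) / 2
        return cpu_seconds / SECONDS_PER_DAY * avg / 1e9 * COBBLESTONE_FACTOR

    # A host averaging ~3.4 GIPS over the two benchmarks claims about
    # 14.16 credits/hour, in line with the figure quoted above.
    print(round(claimed_credit(3600, 2.2e9, 4.6e9), 2))  # 14.17

By that baseline, any batch paying well above the benchmark rate, like the >800 numbers above, is being over-credited.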
The data files currently on Einstein@home of 800Hz and above
The data files currently on Einstein@home of 800Hz and above (h1_0800.0_S5R2* / l1_0800.0_S5R2*) are wrong. While we are generating the correct ones, we have stopped generating workunits for 800Hz and above.
We intend to let the few thousand WUs that point to the wrong files and are already in the database simply run out. The ones on the boundary (that use 0799.5 files as well as 0800.0 ones) will error out right at the beginning when trying to read the files ("error in SFT sequence"), with no CPU time wasted. The ones above 800Hz that are already in the database will run shorter than the assigned credit would suggest, because the run-time, and thus the credit, was estimated based on correct data files. If we simply cancelled these workunits, people who have already completed such a task would get no credit at all for it, so I decided to be rather too generous and let them run.
The current WU generator will only generate new WUs below 800Hz. There are ~300,000 left to be generated, which should be work for the project for about a week in total. During that time we will generate correct data files and set up a new WU generator for the work of 800Hz and above.
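
For illustration only, a minimal sketch of the kind of frequency cutoff described here, keyed to the h1_*/l1_* file naming above; the function and its name are hypothetical, not the project's actual generator code.

    # Hypothetical sketch of the <800Hz cutoff described above; the real
    # workunit generator is not public, so this is illustrative only.
    import re

    CUTOFF_HZ = 800.0

    def below_cutoff(sft_filename):
        """True for data files like h1_0799.5_S5R2 that sit below the boundary."""
        m = re.match(r"[hl]1_(\d{4}\.\d)_S5R2", sft_filename)
        if not m:
            raise ValueError("unexpected SFT file name: " + sft_filename)
        return float(m.group(1)) < CUTOFF_HZ

    print(below_cutoff("h1_0799.5_S5R2"))  # True
    print(below_cutoff("l1_0800.0_S5R2"))  # False: excluded until fixed files exist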
So the second half run of S5R3 (currently internally called S5R3b) should start early next week. The new Tasks will run as long as estimated and thus will get the same credit we currently give to the ones with the same base-frequency (but wrong data files).
Brian, we are considering your proposal to extend the deadline for these new WUs.
BM
Current app will handle this new WU?
Current app will handle this new WU?
RE: Current app will handle this new WU?
The new workunits will reference the same Apps. No change there.
BM
RE: Brian, we are considering your proposal
Thanks... The speed increase from 4.26 is definitely appreciated; at 10-20% faster, depending on hardware, it only shaves a couple of days off run times, but that should still reduce the incidence of tasks missing the deadline. I guess it will all depend on how long the new results take...
Anyway, as for the boundary tasks, do you know if all of those have already been distributed? Since they fail very quickly, any host that gets them will likely be driven down to a quota of only 1 task/day...
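
For context on that quota worry: BOINC servers of this era drop a host's daily result quota toward a floor of 1 as it returns errors, and double it back up to the project cap as it returns valid results. A rough sketch, with the exact step-down on error being an assumption:

    # Rough sketch of the BOINC daily-quota backoff; the exact server-side
    # rule varies by version, so the decrement on error is an assumption,
    # not Einstein@home's actual code.
    PROJECT_MAX_QUOTA = 32  # hypothetical per-project daily cap

    def update_daily_quota(quota, result_ok):
        if result_ok:
            return min(quota * 2, PROJECT_MAX_QUOTA)  # valid result: double, capped
        return max(quota - 1, 1)  # errored result: step down, floor of 1/day

    # A host that keeps drawing boundary tasks errors each one out
    # immediately and slides toward the 1/day floor:
    quota = PROJECT_MAX_QUOTA
    for _ in range(40):
        quota = update_daily_quota(quota, result_ok=False)
    print(quota)  # 1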
RE: Anyway, as for the boundary tasks
You're right, I cancelled the workunits, which means that no new tasks should be generated for them. For the few dozen tasks that have already been generated for these in the DB I'm afraid I won't be able to do anything (without risking DB inconsistencies).
BM