I have crunched two WUs with the 4.35 application on an Intel Core 2 @ 2.13 GHz. Computing time here is about 7 to 15% longer than with the official 4.21.
Hi!
All the results I can see on this website are tagged as crunched (at least partly) by 4.21. Are the new results not submitted yet?
The results under Linux on my boxes seem to be more or less equal in crunching time to what the official app did.
EDIT:
E.g. these results:
http://einsteinathome.org/task/86004191
http://einsteinathome.org/task/85609037
both for 473 credits, taking 117,000 and 118,000 seconds, respectively, on an AMD Athlon XP.
CU
BRM
... All the results I can see on this website are tagged as crunched (at least partly) by 4.21. Are the new results not submitted yet?
Anything that was downloaded and in the cache at the time of the transition from 4.21 to 4.35 will be "branded" 4.21 and will report as 4.21 even though fully crunched with 4.35. As new work is downloaded, it will be "branded" as 4.35. So eventually this "temporary confusion" will disappear :).
EDIT: But I can see why you are asking about Juergen Mell's results list. The credit is essentially constant at 396.8, but there are significant differences in crunching times not related to an app change. It almost looks like his machine is running at varying speed - possible thermal throttling?
.... The time is back down to 94.1K and should drop a little more with the next result which will be fully 4.35 crunched. Unfortunately it will still be a little slower than 90.4K secs.
Well I'm happy to say that I was wrong with this statement. The first result fully on 4.35 has come in at 89.9K so just slightly faster on Athlon XP hardware.
... All the results I can see on this website are tagged as crunched (at least partly) by 4.21. Are the new results not submitted yet?
One result was just reported; the other shows 4.21 when it really was 4.35.
Quote:
...But I can see why you are asking about Juergen Mell's results list. The credit is essentially constant at 396.8, but there are significant differences in crunching times not related to an app change. It almost looks like his machine is running at varying speed - possible thermal throttling?
Thermal throttling should not normally occur. The processor temperature is reported as 46 °C, and what we have had as summer here in Germany does not really deserve the name...
I have checked BOINC's benchmarks in the logs: the floating point MIPS range from 1608 to 1880 and the integer MIPS from 3904 to 5528, and the minimum and maximum values for floating point and integer MIPS do not fall on the same day.
Anyway, I have no idea why the different computing times always yield the same number of cobblestones. But there is one other strange point: I am running Seti and Einstein each for 50% of total computing time, which means that one core is running Seti and the other is running Einstein most of the time. That should lead to the same number of cobblestones for both projects, but over the last week the points are 65% : 35% in favor of Einstein. There is not much pending credit for Seti and there were always tasks available for both projects. Any ideas?
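If the benchmark figures really are swinging that much from run to run, one quick way to see the spread is to pull every benchmark line out of the saved BOINC message log. A minimal sketch in Python, assuming the usual stdoutdae.txt log file and the usual wording of the benchmark lines (both may differ on a given setup):
# Sketch: extract Whetstone/Dhrystone benchmark figures from a saved BOINC
# message log and report their range. The log file name (stdoutdae.txt) and
# the exact wording of the benchmark lines are assumptions and may vary
# between BOINC versions and platforms.
import re
FP_RE  = re.compile(r"(\d+)\s+floating point MIPS \(Whetstone\) per CPU")
INT_RE = re.compile(r"(\d+)\s+integer MIPS \(Dhrystone\) per CPU")
def benchmark_ranges(log_text):
    fp   = [int(x) for x in FP_RE.findall(log_text)]
    ints = [int(x) for x in INT_RE.findall(log_text)]
    if not fp or not ints:
        raise ValueError("no benchmark lines found - check the log wording")
    return (min(fp), max(fp)), (min(ints), max(ints))
with open("stdoutdae.txt") as f:
    (fp_lo, fp_hi), (i_lo, i_hi) = benchmark_ranges(f.read())
print(f"floating point MIPS: {fp_lo}..{fp_hi}, integer MIPS: {i_lo}..{i_hi}")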
.... The time is back down to 94.1K and should drop a little more with the next result which will be fully 4.35 crunched. Unfortunately it will still be a little slower than 90.4K secs.
Well I'm happy to say that I was wrong with this statement. The first result fully on 4.35 has come in at 89.9K so just slightly faster on Athlon XP hardware.
So maybe it's just Intel C2D/Q boxes that see a speed loss with the new beta? My host has now completed 12 WUs fully with 4.35 and is consistently 2200 s slower than with the official app. :-(
There are 10^11 stars in the galaxy. That used to be a huge number. But it's only a hundred billion. It's less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers. - Richard Feynman
... I have checked BOINC's benchmarks in the logs: the floating point MIPS range from 1608 to 1880 and the integer MIPS from 3904 to 5528, and the minimum and maximum values for floating point and integer MIPS do not fall on the same day.
If you are getting that much variation in benchmarks I would regard that as pretty convincing evidence of something causing a speed variation on your machine.
Quote:
Anyway, I have no idea why the different computing times always yield the same number of cobblestones.
This is quite normal. All credits are set serverside so it doesn't matter what sort of variations there are at the client end. You will always get the same credit for a result from the same frequency series irrespective of how long it takes to crunch.
Quote:
... but over the last week the points are 65% : 35% in favor of Einstein. There is not much pending credit for Seti and there were always tasks available for both projects. Any ideas?
The fact that you have a 50/50 resource share setting is not a guarantee of a 50/50 allocation of CPU time in the short term - and a week is definitely still the short term, particularly with EAH :). A better metric to use would be credit per unit time - say credit per hour, for example. Just take the credits earned for a result and divide by the number of CPU hours used. This should be relatively constant for EAH (irrespective of the particular frequency data being crunched) but could vary quite a bit between different WUs at Seti. I'm not really familiar with it, but I believe there are some nasty AR range values at Seti that give a lousy credit return. Also, if you use the "chicken soup" app over at Seti you will get a much better return. I'm presuming that you are not using an optimised app for Seti. If you were, your Seti credits would be killing EAH at the moment :).
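To spell out that credit-per-hour calculation with numbers already in this thread (the Athlon XP results above were 473 credits for roughly 117,000 CPU seconds), here is a minimal sketch in Python:
# Credit per CPU hour = credits awarded / (CPU seconds / 3600).
# Numbers taken from the Athlon XP results quoted earlier in the thread.
def credit_per_hour(credits, cpu_seconds):
    return credits / (cpu_seconds / 3600.0)
print(round(credit_per_hour(473.0, 117000), 1))   # -> 14.6 credits per hour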
Just finished a WU using 4.31 for the first third and 4.35 for the rest. Validated with 424.68 credits. Started a new WU with 4.35.
Tullio
I've processed 5 results crunched fully with 4.35 and one mixed 4.21/4.35 (judging by the stderr output) on a Pentium III-S 1.4 GHz. All have validated.
Processing times are about the same, at around 38 hours. On the whole it looks like 4.35 is about 3% faster, but I suspect that may just be differences in the work itself (or just me wanting it to be that way); I can see at least one case where 4.21 has been about 1% faster. I'm not really sure I'm comparing apples to apples.
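As an aside, that kind of stderr check is easy to script. A rough sketch that just counts "4.xx"-style version strings in a task's saved stderr text; it assumes the version numbers appear literally in that output, which is how the poster seems to have spotted the mixed result, but the exact wording is not guaranteed:
# Count '4.xx'-style application version strings in a task's stderr output.
# Assumes the version numbers appear literally in the text; the exact
# wording of Einstein@Home's stderr is not guaranteed here.
from collections import Counter
import re
def app_versions(stderr_text):
    return Counter(re.findall(r"\b4\.\d{2}\b", stderr_text))
# Hypothetical fragment for illustration only:
sample = "started with app version 4.21\nresumed with app version 4.35\n"
print(app_versions(sample))   # -> Counter({'4.21': 1, '4.35': 1})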
Please ignore any further beta test results from me. My host has developed some sort of disk issue and I've had to hit reset a couple of times. I notice that at least one result http://einsteinathome.org/task/86148376 has failed trying to resume from a checkpoint, but I strongly suspect it's not Einstein that's at fault. I'm no longer running 4.35 on this host.
I notice that at least one result has failed trying to resume from a checkpoint, but I strongly suspect it's not Einstein that's at fault. I'm no longer running 4.35 on this host.
I wouldn't be so sure about that. I just noticed my Einstein result had been stuck at a checkpoint since 13:20 this afternoon. So for almost 12 hours it's been doing nothing - showing as running, but actually stuck.
Exiting BOINC and restarting it fixed it for now. Here's hoping it doesn't happen again, as I'm nearing the deadline on this one.
Oh well, never mind. It went on for a couple of checkpoints before crashing with exit code 99. Here's the result.
2007-08-12 01:09:18 [Einstein@Home] [checkpoint_debug] result h1_0548.40_S5R2__455_S5R2c_1 checkpointed
2007-08-12 01:10:20 [Einstein@Home] [checkpoint_debug] result h1_0548.40_S5R2__455_S5R2c_1 checkpointed
2007-08-12 01:11:22 [Einstein@Home] [checkpoint_debug] result h1_0548.40_S5R2__455_S5R2c_1 checkpointed
2007-08-12 01:12:22 [Einstein@Home] [checkpoint_debug] result h1_0548.40_S5R2__455_S5R2c_1 checkpointed
2007-08-12 01:12:28 [Einstein@Home] Deferring communication for 1 min 0 sec
2007-08-12 01:12:28 [Einstein@Home] Reason: Unrecoverable error for result h1_0548.40_S5R2__455_S5R2c_1 (process exited with code 99 (0x63, -157))
2007-08-12 01:12:28 [Einstein@Home] Computation for task h1_0548.40_S5R2__455_S5R2c_1 finished
2007-08-12 01:12:28 [Einstein@Home] Output file h1_0548.40_S5R2__455_S5R2c_1_0 for task h1_0548.40_S5R2__455_S5R2c_1 absent