@Michael R., your box is overclocked enough that BOINC can't even identify the CPU any more, just says "unknown AMD", have you noticed that? :-P
Bill
Thanks for offering the heads-up. It's not a problem, however.
What I have is a laptop CPU (Athlon XP-Mobile) in a desktop, and even my BIOS doesn't recognize it for what it is, never did, in fact, but it accepts pretty much whatever settings I assign to it, though it somehow won't allow me to set my multiplier (unlocked, btw) past 12.5. Anyway, if the BIOS can't recognize it, I hardly expect BOINC to do any better. LOL
Thanks again, Bill
Michael
microcraft
"The arc of history is long, but it bends toward justice" - MLK
The choice of a mobile Athlon for a desktop is interesting. It overclocks better?
I notice the boinc stats for your cpu say:
float speed = 1928.34
int speed = 7181.39
E@H WU time = 18,100 sec
I have a dual-core athlon-64 & the boinc stats are
float speed = 1151.39
int speed = 3755.7
E@H WU time = 18,500 sec on each core.
So your mobile athlon appears almost twice as fast on both the float & int benchmarks, but only slightly faster than one core alone running E@H.
Is that just the to-be-expected bungled benchmarking by boinc? How does that severe technical difficulty in accounting for two cpus come about?
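For concreteness, here are the ratios behind that comparison, a quick sketch using only the numbers quoted in the two posts above:

# Ratios behind ADDMP's comparison (figures copied from the posts above).
mobile    = {"float": 1928.34, "int": 7181.39, "wu_secs": 18100}
dual_core = {"float": 1151.39, "int": 3755.7,  "wu_secs": 18500}

print(mobile["float"] / dual_core["float"])      # ~1.67x on the float benchmark
print(mobile["int"] / dual_core["int"])          # ~1.91x on the int benchmark
print(dual_core["wu_secs"] / mobile["wu_secs"])  # ~1.02x on actual E@H WU time

So the benchmarks differ by roughly 1.7x to 1.9x, while the real work differs by only about 2 percent per core, which is exactly the mismatch being asked about.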
The choice of a mobile Athlon for a desktop is interesting. It overclocks better?
Yes, for a few reasons:
1) The multiplier is unlocked, to allow the laptop to "scale down" to prevent overheating and to extend battery life.
2) The Mobiles are the pick of the litter from the wafers, so they can run at lower voltages (1.45 V nominal) and consume less power (45 W, again nominal), yet they're just as capable as the desktop parts of running at 1.7+ volts, which uses more power than nominal but provides much more stability when OC'ing.
Currently, I'm running the Athlon XP-Mobile 2600 (nominal speed 2.0 GHz) at a "conservative" 202 FSB and 12.5 multiplier = 2.525 GHz, air-cooled, at 42 degrees C (according to ASUS Probe) while crunching 24/7, because the family I live with insists on occasionally using "auxiliary" heating (their gas/hot-water furnace) on these cold days, instead of letting me run another of these rigs on the ground floor and getting some entertainment value from the "heating system". LOL When they leave the heat off and the indoor temps fall to 50-55, I can push the FSB up to 206, stable and crunching, and get 2.575 GHz (at 50-52 degrees C) without tripping ASUS' conservative overheating protection. That speed results in reliable WU times in the 17,700-17,800 second range.
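The clock arithmetic above is just FSB times multiplier, if anyone wants to check the numbers; a throwaway sketch:

# Socket A core clock is simply front-side bus speed (MHz) times the multiplier.
def core_clock_ghz(fsb_mhz, multiplier):
    return fsb_mhz * multiplier / 1000.0

print(core_clock_ghz(202, 12.5))  # 2.525 GHz, the "conservative" setting
print(core_clock_ghz(206, 12.5))  # 2.575 GHz, the cold-room setting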
Quote:
I notice the boinc stats for your cpu say:
float speed = 1928.34
int speed = 7181.39
E@H WU time = 18,100 sec
I have a dual-core athlon-64 & the boinc stats are
float speed = 1151.39
int speed = 3755.7
E@H WU time = 18,500 sec on each core.
So your mobile athlon appears almost twice as fast on both the float & int benchmarks, but only slightly faster than one core alone running E@H.
I use an optimized BOINC client that boosts the benchmarks up closer to the kind of numbers I get with SiSoft Sandra or another standardized benchmark utility, so they are not directly comparable to yours.
Quote:
Is that just the to-be-expected bungled benchmarking by boinc? How does that severe technical difficulty in accounting for two cpus come about?
I don't understand what you mean by the two cpu thing.
Hope this helps
microcraft
"The arc of history is long, but it bends toward justice" - MLK
Is that just the to-be-expected bungled benchmarking by boinc? How does that severe technical difficulty in accounting for two cpus come about?
I don't understand what you mean by the two cpu thing.
Because science apps run on only one of the two CPUs in a dual-core box, the benchmarks _should_ also be limited in that way. They aren't. So, when someone realized that the benchmarks for a dual-core box were twice as high as they "should" be, the decision was apparently made to "divide by two" if 2 CPUs were detected. Unfortunately, this makes any benchmark problems _twice_ as bad, and is _completely_ wrong for an HT CPU. The normal result is that a dual-core benchmarks several percent lower than it should, and an HT benchmarks nearly half what it should. If Linux is involved, where the benchmarks are just flat wrong, it's even worse.
In other words, MY PC benchmarks (single core, WinXP) _should_ be right with the standard client, where his will be low. Yours probably wouldn't be right, because of cache issues and overclocking; the app will do better on your oddball system than the benchmarks would show. And my Mac is going to be _way_ under-benchmarked, because the Einstein app uses Altivec and the benchmarks don't. More ammunition every day to get rid of the benchmarks...
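To make the failure mode Bill describes concrete, here's a toy model; the numbers are made up, and this is just a reading of his description, not the actual BOINC code:

# Toy model of the adjustment as described above: the client benchmarks
# with all cores free, then divides the whole-machine score by the
# number of detected CPUs. All figures below are hypothetical.
def per_cpu_benchmark(whole_machine_score, detected_cpus):
    return whole_machine_score / detected_cpus

true_single_core = 1000.0  # hypothetical "correct" one-core score

# Dual-core: two real cores rarely scale perfectly, so the whole-machine
# run comes in a bit under 2x, and the divided result lands a few percent low.
print(per_cpu_benchmark(true_single_core * 1.9, 2))  # 950.0 -> ~5% low

# Hyper-Threading: the OS reports 2 CPUs, but the second logical CPU adds
# only a fraction of a core, so dividing by 2 cuts the score nearly in half.
print(per_cpu_benchmark(true_single_core * 1.2, 2))  # 600.0 -> ~40% low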
... More ammunition every day to get rid of the benchmarks...
Bill,
I heartily agree, at least with the current system of benchmarking, and especially with respect to Einstein. I'd go further, but my experience is limited to Einstein-only. A few of the dev team have been campaigning for actual work-based benchmarks, but there seem to be a lot of cross-project incompatibilities to overcome re non-uniform sized WUs elsewhere. If we were speaking of Einstein only, where the WUs have been of uniform "size" (as far as number of calculations go) so far, we could easily assign a uniform credit value for every WU, and that would be almost absolutely fair and just. I'm sure that you know far better than I how this sort of thing would not scale accurately across all the other projects, especially those with much-varying "size" work.
microcraft
"The arc of history is long, but it bends toward justice" - MLK
A few of the dev team have been campaigning for actual work-based benchmarks, but there seem to be a lot of cross-project incompatibilities to overcome re non-uniform sized WUs elsewhere.
You might want to read the Code release and redundancy thread at Rosetta, as the entire issue was well hashed out there...
Because science apps run on only one of the two CPUs in a dual-core box,
I'm not sure what you mean by "apps run on only one". I have two E@H WUs being processed simultaneously, one on each core. It is more likely the benchmarks that run on only one core, isn't that right? Unless boinc starts up two copies of the benchmark on dual cpus, just as it starts up two copies of E@H. That sounds sensible, but is it done?
I'm not sure what you mean by "apps run on only one". I have two E@H WUs being processed simultaneously, one on each core. It is more likely the benchmarks that run on only one core, isn't that right? Unless boinc starts up two copies of the benchmark on dual cpus, just as it starts up two copies of E@H. That sounds sensible, but is it done?
Having two E@H results "running" means you have two copies of the application running, but each one is limited to one core. So for determining how long it takes to run a result, it doesn't matter if the _other_ core is running E@H, SETI, or Word.
The benchmark isn't a separate application, it's just run inside the boinc daemon (boinc.exe on Windows), with the applications stopped and both cores "available". Running it this way means you get a value back that is "approximately" twice as fast as it "should be", which is why they divide by two. That MAY be too "oversimplified", but it's what I understand from various conversations. I haven't looked at the code to verify it, maybe one of the optimizers or developers will stumble on this thread and comment.
You can also look at a couple of other places ...
In the Wiki, look up benchmarking.
Also see this proposal (for which the first parts are being created) and the lectures, specifically "Performance". Lastly, we did a study on benchmarking (the link is in the proposal) during the BOINC Beta test.
The bottom line is that this is a very old topic ... :)
I am no longer as convinced as some are about the cross-project issues, as I did a small study of Cobblestones per second by project (not saved, sorry), and the general conclusion was that, once I corrected for all the issues, the basic numbers were really close.