With the recent credit drop I again have the old problem:
I get almost the same credits per hour on my old Athlon XP 2200+ running Linux as on my Athlon 64 3500+ running Windows... I thought the clients were about the same speed now?
Or is the credit system still bad? The 2200+ always gets small WUs, the 3500+ always long ones (and I read that's something the project wants), and it seems the small ones give much more credit per hour... That's also not too good, I think.
If it's any consolation, your 2200+ will get lower credits. The new credit for short WUs is 13.xx (I just checked, and so far they are being awarded 16.xx).
If Bruce reads this thread, I would be VERY interested in his opinion since he has to deal with all these elements.
My intention is a simple one: ON THE AVERAGE, a host machine running Einstein@Home should get the same number of credits per CPU-hour as a host machine running the other BOINC projects that grant credit.
Here ON THE AVERAGE means averaged across all the hosts that are attached to multiple projects, and averaged across all the projects (suitably weighted by the number of cross-project hosts).
Rationale: this way, people will choose projects based on their scientific and other merits, and their likelihood of success and impact, NOT for other reasons such as credit granted.
Corollary: assuming that other BOINC projects do the same, this will tend to make hosts move to the projects they are best suited for.
Cheers,
Bruce
Director, Einstein@Home
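The weighted averaging Bruce describes can be sketched in a few lines. This is a toy illustration with made-up numbers and invented field names, not the actual BOINC accounting code: each cross-project host reports its credits/CPU-hour per project, each project's average is taken over the hosts attached to it, and the grand target is the average over projects weighted by how many cross-project hosts each has.

```python
# Toy sketch of cross-project credit averaging (hypothetical data, not BOINC code).
# Each entry is one cross-project host: credits/CPU-hour observed per project.
hosts = [
    {"einstein": 16.0, "seti": 20.0},
    {"einstein": 18.0, "seti": 22.0, "lhc": 21.0},
    {"einstein": 14.0, "lhc": 19.0},
]

def project_average(project):
    """Average credits/CPU-hour for one project over the hosts attached to it."""
    rates = [h[project] for h in hosts if project in h]
    return sum(rates) / len(rates)

def cross_project_target():
    """Grand average over all projects, weighted by number of cross-project hosts."""
    projects = sorted({p for h in hosts for p in h})
    total, weight = 0.0, 0
    for p in projects:
        n = sum(1 for h in hosts if p in h)  # cross-project hosts attached to p
        total += project_average(p) * n
        weight += n
    return total / weight

# A project granting below the target would scale its credit grants up by this factor:
target = cross_project_target()
scale = target / project_average("einstein")
print(round(scale, 3))
```

With the numbers above, Einstein averages 16.0 c/h against a weighted cross-project target of about 18.57, so its grants would be scaled up by roughly 1.16 to bring it in line.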
Hmm, OK, I'll wait and see how much it will be soon... But even then I get 16.7 c/h on the 2200+ and 21.4 c/h on the 3500+.
In other benchmarks the 3500+ scores almost double the 2200+, so the difference is "not big enough" in my eyes... but OK, I'll wait and see.
Benchmarks are good for estimating the relative speed of computers, but they are not good at predicting exactly how two computers will behave in the real world. Benchmarks use only one or a small number of different calculations, they run only for a short time, and they don't write to disk or anything of that nature, so they can't tell the whole story. Someone mentioned elsewhere that the benchmarks BOINC uses only test L1 cache memory, while the science applications make heavy use of L2 cache. Also, you can't just compare clock speed or basic benchmarks; you have to consider every variable in the computer. One computer may have a very high core clock speed but low memory bus speeds; one may have its memory timings set correctly while the other's are less efficient.
I saw one cruncher with exactly the issue you have (except that his faster computer was actually slower at crunching than his "slow" computer), and he found that the memory timings were off on the faster machine. When he reset them to the proper values, the speed jumped to near what it was supposed to be. So, as I said, benchmarks don't tell the whole story. If you don't get the performance you expect in the real world, there may be a cure you can apply yourself, such as fixing the memory timings or bus speed settings; failing that, you may have memory bandwidth problems that can only be fixed by replacing the motherboard.
As a perfect example of a problem on one computer, though not related to this issue, I have a 1 GHz AMD Duron on a motherboard whose IDE ports run at up to a 133 MHz bus speed; my limit there is the hard drive, which is on a 100 MHz bus. Windows would load programs in no time (what little time I had it on there! haha), but Linux, while usually much faster than Windows on a slower computer with only a 33 MHz IDE bus, slowed to a crawl during disk I/O. I found that Linux must be told the IDE bus speed, or it defaults to 33 MHz!
Also, some computers, as Mr. Allen pointed out, are better at one type of WU than another. So to make the best use of a computer, you may even want to switch it to some other project and use a different one to crunch here, or trade it in on one more suited to these WUs.
When asked a question you're not sure of, I've found the best answer is always: "I don't know for sure, but I'll find out!"
Someone mentioned elsewhere that the benchmarks BOINC uses only test L1 cache memory while the science applications make heavy use of L2 cache.
Unless the new science app has a larger working set than the S4 ones for some reason, it will fit entirely within the 32 KB L1 cache of an Athlon with room to spare (Akos's apps used between 10 and 20 KB depending on the variant). P4s, with only an 8 KB L1 cache, had to continually shuffle data between it and the L2 cache, which meant they saw smaller gains from Akos's latest apps because they were increasingly bound not by the speed of the CPU but by having to move data between the caches.
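The working-set effect described above can be seen with a generic micro-experiment (this is an illustration, not the Einstein@Home code): do the same number of strided reads over a buffer that fits in L1 versus one that spills far past L2, and time both. Note that in an interpreted language like Python the interpreter overhead masks much of the gap; in compiled code the difference is far more dramatic.

```python
import time

def touch(buf, accesses, stride=64):
    """Do a fixed total number of strided reads over a buffer, returning a checksum."""
    n = len(buf)
    total, i = 0, 0
    for _ in range(accesses):
        total += buf[i]
        i = (i + stride) % n  # stride skips ahead so each read lands on a new cache line
    return total

accesses = 200_000
small = [1] * 1_000        # a few KB of data: fits in L1 on most CPUs
large = [1] * 4_000_000    # tens of MB: spills far past L2

for name, buf in (("small", small), ("large", large)):
    t0 = time.perf_counter()
    total = touch(buf, accesses)
    dt = time.perf_counter() - t0
    print(f"{name}: {accesses} reads in {dt:.4f}s (checksum {total})")
```

Both runs do identical work per read; any timing gap comes from where the data lives in the cache hierarchy, which is exactly why a benchmark that only touches an L1-sized buffer can overrate a machine on an L2-heavy workload.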
Also you can't just compare clock speed or basic benchmarks. You have to consider every variable in the computer. One computer may have a very high main clock speed but low memory bus speeds. One computer may have the memory timings set correctly while the other may have timings that are not as efficient.
Yes, sure, that's right... I just don't see ANY advantage for an Athlon XP at 1800 MHz with 266 MHz no-name RAM on an old VIA chipset against an Athlon 64 at 2350 MHz with dual-channel Corsair RAM running at 428 MHz on an nForce3 chipset *g*
But OK, the RAM isn't really used, and I think the calculations are so "basic" that most of the new features needed by games, movie encoding and the like aren't really used by the science app, so that should be one reason... I will just let it crunch and not look at credits ;)
I think you have to blame the benchmarks for overrating your 3500+. The Einstein app scales linearly with CPU speed, and the roughly 30% clock advantage of your 3500+ (2350 MHz / 1800 MHz ≈ 1.31) is a good fit to the difference in credit rates between them (21.4 / 16.7 ≈ 1.28). If part of the benchmark uses SSE2/3 instructions (not available on the XP), or depends on the higher memory speeds of the 3500+, that would explain the benchmark difference.
Thanks for your reply, Bruce.
Well, it seems we have different points of view.
I agree that all BOINC projects should grant the same credit per CPU-hour at the beginning. But when a project makes the effort of optimizing its code to better suit the hosts participating in it, I think that project should also be rewarded by attracting RAC hunters.
What I understand from your reply is that there might be a problem with new projects (or new client versions) that could be intentionally coded badly just to show off wonderful optimizations later.
So to avoid this, the easiest way is to enforce an average credit per CPU-hour across BOINC projects. A project that makes the effort of optimizing gains time (and saves money), but won't disturb its BOINC mates. This might be the only way to keep the BOINC platform attractive for current and future projects.
By the way, my Opteron is struggling against the army of yours to reach the top computers :) And you may like this software to manage them: http://forum.boincstudio.boinc.fr/boincstudio/support-international/liste_sujet-1.htm
While I am in agreement regarding cross-project comparability, I still cannot fathom the method being used to reach this goal. Attempting to correct across projects using the "averages" you discuss seems a near impossibility. Since the credit rate is completely arbitrary, why not negotiate a standard rate (e.g., X credits per hour on machine Y) to which all projects must conform in order to use the BOINC system? It seems to me that the BOINC developers are stuck on trying to bring credits back into line with the pre-optimized rate from SETI.