Will there be faster code for PPC (he asked with despair in his eyes)?
Sorry - didn't thoroughly read the title...
I am working on it. The App for G5 PPC that's on our Power User page is meant as a first step.
However, I'm not much of a prophet, and if there's one thing I learned from recent coding, it's that it's almost impossible to predict the speedup a particular change will make on a certain CPU. One feature I'm desperately missing in the AltiVec / Velocity Engine (compared to SSE2 on x86) is double-precision calculation. We'll see how far I can get with PPC.
BM
RE: RE: Will there be
Please don't forget the old but reliable G3. ;)
MB
RE: Please dont forget the
I won't forget it, in the sense that I will make sure that the Apps run on it. However, lacking the AltiVec unit / Velocity Engine, I doubt that I can speed up the code for it any further.
BM
RE: RE: Please dont
Thanks for the answer - the speed of the previous 4.06 app would be fine.
Maybe there will be some general improvements in the future.
In another thread you mentioned an alternative app without the cross-validation problem. Any results, or showstoppers for release, so far?
bye MB
I apologise in advance for
I apologise in advance for raising what is possibly a stupid question, but for the life of me, I feel I must ask.
I have 3 computers working on Einstein (ALL! my computers!). They are: MacBookPro, G5, and G4
They are returning the following approximate readings
Comp / CPU Secs / Credit / CPU Secs/Credit point
G5 / 40,000 / 176 / 227
MacBookPro / 2,219 / 13.24 / 167.59
G4 / 8,900 / 13.5 / 65.9
Now, I thought that the concept was - equal work, equal recognition.
Looking at the above, it seems that this is not the case. Or am I wrong?
I too, am in this for the science - but in that there are thousands out there who are in there for the kudos ... well, that also should affect me equally.
Forgetting the "Science" bit ... is the above table correct? And if not, what can I do to regularise it?
RE: I apologise in advance
[I'll try and make them line up better:]
[pre]Comp / CPU Secs / Credit / CPU Secs/Credit point
G5 / 40,000 / 176 / 227
MacBookPro / 2,219 / 13.24 / 167.59
G4 / 8,900 / 13.5 / 65.9[/pre]
That G5 seems to be making very good time; mine (a dual-core 2.3-GHz) takes about 70000 s for the long WUs, or nearly 400 s/cobblestone, for a rate of 9 CS/h. But there may be significant variation among WUs, and since in this project we tend to do long runs of similar WUs from one ‘batch’ file, those variations don’t often get averaged out when comparing hosts.
Your decimal point slipped on the last figure: that should be 659 s/CS. My G4s take between 600 and 1200 s/CS, pretty well in line with their clock speeds (400 - 733 MHz), earning 3 to 6 CS/h, so we’re all in the same range.
Once the G4’s inverse-speed figure is corrected, the above looks broadly reasonable to me: the faster the processor, the less CPU-time it takes to earn a cobblestone.
Back to the topic in the subject line, my last couple of results also appear to have earned about one-fifth to one-quarter less per CPU-hour than their predecessors did.
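For anyone who wants to check such figures themselves, the s/CS and CS/h numbers follow directly from the posted CPU times and credits. A minimal sketch (host names and values taken from the table above):

```python
# Seconds per cobblestone (s/CS) and credit per CPU-hour (CS/h)
# for the three hosts, from the (CPU seconds, credit) pairs posted above.
hosts = {
    "G5":         (40_000, 176.0),
    "MacBookPro": (2_219,  13.24),
    "G4":         (8_900,  13.5),
}

for name, (cpu_secs, credit) in hosts.items():
    secs_per_cs = cpu_secs / credit      # CPU seconds to earn one cobblestone
    cs_per_hour = 3600 / secs_per_cs     # credit earned per CPU-hour
    print(f"{name}: {secs_per_cs:.0f} s/CS, {cs_per_hour:.1f} CS/h")
```

This gives 227, 168 and 659 s/CS respectively - confirming that the G4 figure in the original table should be 659, not 65.9.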
RE: I apologise in advance
Well, you're partially right.
The idea is to set the credit rate for each project such that computer X crunching for project A receives the same credit per hour as computer X crunching for project B. Computer Y crunching for project A will probably get a different number of credits per hour than computer X for project A, but that is only because one processes faster/slower than the other.
Equal work should get equal credits, but faster computers get more credits/hour than slower ones when crunching the same WU.
Seti Classic Final Total: 11446 WU.
RE: The idea is to set the
I don't fully agree. "Equal work should get equal credits" and "faster computers get more credits/hour" are both true, but better, more optimised code using SIMD unlocks the performance of the processor: the FLOPS rise, so the credit should rise accordingly.
Credit is a direct function of FLOPS, so more FLOPS = more credit, and this can be achieved by optimisation as well as by brute force, and should be rewarded accordingly.
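To make the "credit is a direct function of FLOPS" point concrete: as I understand the BOINC definition, one cobblestone corresponds to 1/200 of a day on a reference machine sustaining 1000 MFLOPS on the Whetstone benchmark, i.e. a fixed number of floating-point operations. A sketch under that assumption (the 200-per-day reference figure and the WU size here are assumptions for illustration, not project numbers):

```python
# Credit implied by a fixed amount of floating-point work, assuming the
# usual cobblestone definition: 200 cobblestones per CPU-day on a
# reference machine sustaining 1000 MFLOPS (Whetstone).
REF_FLOPS = 1_000e6                                  # 1000 MFLOPS
CS_PER_REF_DAY = 200
FLOP_PER_CS = REF_FLOPS * 86_400 / CS_PER_REF_DAY    # 4.32e11 FLOP/cobblestone

def credit_for(flop_count: float) -> float:
    """Credit earned for a given number of floating-point operations."""
    return flop_count / FLOP_PER_CS

# The same WU earns the same credit whether a scalar app grinds through it
# or a SIMD-optimised app finishes it in a quarter of the time -- the
# optimised host simply earns that credit in fewer wall-clock hours.
wu_flops = 5.0e13                                    # hypothetical WU size
print(f"{credit_for(wu_flops):.1f} cobblestones")
```

Under this definition an optimised app doing the same WU doesn't produce more FLOP in total - it does the same operations in less time - which is exactly the distinction being argued about here.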
I'm not terribly happy with the credits being marked down across the board. One of my machines is still running v4.17 and hasn't yet switched to v4.24, but its credit/hour has also been cut. (It was bad enough before for a dual P4!)
Hostid
before: 177.60 credits / 55,818.66 s × 3600 = 11.5 credits/hour
after: 122.19 credits / 54,472.71 s × 3600 = 8.1 credits/hour
I think this needs to be reviewed.
Andy.
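The figures above check out, and they also put a number on the size of the cut (values taken verbatim from the post):

```python
# Credit per CPU-hour before and after the app change, from the figures above.
before_credit, before_secs = 177.60, 55_818.66
after_credit,  after_secs  = 122.19, 54_472.71

before_rate = before_credit / before_secs * 3600   # credits/hour before
after_rate  = after_credit  / after_secs  * 3600   # credits/hour after
cut = 1 - after_rate / before_rate                 # fractional reduction

print(f"{before_rate:.1f} -> {after_rate:.1f} credits/hour, a {cut:.0%} cut")
```

This prints `11.5 -> 8.1 credits/hour, a 30% cut`.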
RE: I'm not terribly happy
The 4.24 is for Windows, the 4.17 for Linux. So it already has the new version and won't switch again. And the speed increase wasn't as big on Linux as on Windows, because the Windows client was much slower before.
RE: RE: I'm not terribly
Thanks for the clarification re 4.17 for Linux, but the principle remains: the machine now does a job in 54k secs instead of 56k secs, but suffers an artificial 30% cut in credit?
This is enough for me to take it to another project :(
RE: Thanks for the
G'day Andy
I don't mean to be flippant, but which other project will give you a higher credit/hour?