Ah, OK, thanks. That string isn't in the executable, tho... Sure it won't go to Berkeley's symstore?
An ID of the PDB should be in there. The Core Client gets the location of the Symbol Store from the project config (it should end up in the scheduler reply).
BM
The Linux app was always faster than the Win app, even though the Win app uses SSE2.
My 3700+ has now completed the last result I was working on with the edited app.
320.01 / 54246 * 3600 = 21.24 cr/hr (my host)
320.81 / 50786 * 3600 = 22.74 cr/hr (your host I was comparing against)
That is roughly a 7% delta. Beyond that, I don't know how big a factor the difference in frequencies was (409 for me, 410 for you), not to mention that my single-core has an inherent "handicap" (my system takes a bigger hit in performance when multi-tasking). In that regard, though: did my raw clock speed advantage (and likely also a memory speed/latency advantage) outweigh the single-core disadvantage?
As far as I can tell from my experience, there is a maximum difference of about ±5% in cr/hr between some WUs. Multitasking shouldn't have an impact on the crunching times, but on the efficiency of the host ('Average CPU efficiency' and 'Result duration correction factor'). So you can calculate your daily credits with the values on the 'Computer Summary' page. But if you mean that OS overhead on a dual-core is spread out and so somehow less per core than on a single-core system, I agree that the RAC of a dual-core system will be a bit more than twice the RAC of a single-core system with the same clock speed.
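As a back-of-the-envelope sketch of that calculation: how the client actually combines these values is my assumption here (modeled on the usual BOINC duration-correction scheme, not lifted from the real code), and the function names are mine:

```cpp
// Hypothetical sketch: estimated wall time of one result from the
// 'Computer Summary' values, and the daily credit that follows.
// wall time ~= fpops estimate / (benchmark flops * ACE) * RDCF
double est_wall_seconds(double rsc_fpops_est, double benchmark_flops,
                        double avg_cpu_efficiency, double rdcf) {
    return rsc_fpops_est / (benchmark_flops * avg_cpu_efficiency) * rdcf;
}

// scale one result's credit up to a full day of crunching
double est_daily_credit(double credit_per_result, double wall_seconds) {
    return credit_per_result * 86400.0 / wall_seconds;
}
```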
If you want to compare with my host:
Result duration correction factor: 0.44883
IMHO the best way to measure absolute performance is to look at the wall time and figure out how much work/credit is done/gathered in some interval.
Quote:
Maybe all that debugging code causes the slowdown.
What may slow it down some is the disk writing to stderr. If all that stuff is not really necessary, some performance could be gained by not writing it all out... but otherwise, the debug info they are talking about is the symbolic info in the PDB, which is only used when the app crashes.
Yes, I'm aware of that massive disk writing, but this is done under Linux too. I just thought there might be some additional 'inline' code for detecting error situations in the Win app.
I assume that there will be no benefit to me in running this app (P4 1.4 GHz with SSE2)?
A beta app is for finding bugs and helping the developers. It's always your decision whether you want to participate or not. ;-)
With your host, crunching times might get longer with 4.23, so you would get no benefit, only the project would. On the other hand, it's still a bit early to judge the speed of this beta.
I assume that there will be no benefit to me in running this app (P4 1.4 GHz with SSE2)?
No performance benefit. Main purpose is to help fix remaining stability problems. Once those are fixed, E@H programmers will move on to improve performance, which will benefit all, tho :-).
If you want to compare with my host:
Result duration correction factor: 0.44883
I don't think a comparison can be made at this point because it is still likely on the high side due to the penalty, but for posterity's sake:
Average CPU efficiency 0.965721
Result duration correction factor 0.710124
Quote:
Yes, I'm aware of that massive disk writing, but this is done under Linux too. I just thought, there might be some additional 'inline' code for detecting error situations in the Win app.
Try/catch/finally blocks may be there, but I doubt there would be any real, measurable performance hit as long as the app just continues on. Most of the things they are looking for would be either handled or unhandled exceptions that would cause the result to "end". The symbolic debugging is enabled via the compiler (/Zi for the compiler and /DEBUG for the linker, most likely, if they are using Visual C++ 2005). If /RTC isn't set, they may want to do that as well (/RTC = Run-Time Checks); it can reveal some things that would not ordinarily be caught...
After failing to reproduce the client errors on our own systems in order to fix them, this is an App release that is primarily meant to enrich the information returned from your machines in case of a client error.
In case of a (debuggable) client error, a debugger will be loaded (by newer BOINC Clients) that will in turn contact the Einstein@Home server to download debugging symbols ("phone home"). This means that the PDB is no longer distributed with the App, and the symbol information will be downloaded compressed, and only when needed.
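For the curious, this is how a Microsoft-style symbol store is addressed: the RSDS debug record inside the executable carries the PDB's GUID and age, and the debugger turns them into a path under the store. A sketch (the store URL, PDB name, and GUID in the test are made-up examples):

```cpp
#include <cstdio>
#include <string>

// SymSrv layout: <store>/<pdb name>/<32-hex-digit GUID><age in hex>/<pdb name>
// (a compressed copy is stored with the last letter replaced by '_', e.g. app.pd_)
std::string symstore_path(const std::string& store, const std::string& pdb,
                          const std::string& guid_hex, unsigned age) {
    char agebuf[9];
    std::snprintf(agebuf, sizeof agebuf, "%X", age);
    return store + "/" + pdb + "/" + guid_hex + agebuf + "/" + pdb;
}
```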
This version of the App also avoids the modf() call with the buggy CPU detection, which should eliminate one reason for instability and speed up tasks on machines with AMD CPUs (and probably non-SSE Intel, too).
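The workaround can be pictured like this (a generic sketch of avoiding modf(); the actual change in the App is not published here):

```cpp
#include <cmath>

// modf() splits x into integral and fractional parts, truncating toward
// zero; this does the same without going through the library's
// CPU-dispatched implementation.
double split_fraction(double x, double* int_part) {
    *int_part = std::trunc(x);
    return x - *int_part;
}
```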
BM
The following is mentioned on the beta install page:
... If the application returns erroneous results, invalid results, or produces client errors, then please rename or move the app_info.xml file in order to stop using the beta version of the application
Should it be added that you can also do the rename/move of the .xml if you just want to stop ("uninstall") the beta, not only in the case of erroneous etc. results? (I know that this is slightly OT, but anyway...)
Sure, a comparison doesn't make sense because I patched the Win app a while ago, so my RDCF must be lower than yours.
By mistake I wrote down the Linux value anyway; the Win value is 0.54722. ACE is bad for comparing, because the load on my development machine pulls this value down.
On hosts doing nothing else than crunching, the ACE should be close to 1.
When I've finished my two WUs with 4.23 the RDCF will probably rise.
At ~22% done, cr/hr is 17.3 right now.
Second core: 21.8% done, 17.56 cr/hr.
You're right, I'll fix that.
BM