On Linux my MFLOPS seem to be half of what the Windows machines running on the same processor get.
This is truly hard to believe.
Have you tried the Intel Linux compilers and the SSE2 instruction options?
Robert Somerville
poor measured performance on Linux vs Windows (bad Compiler choice)
A while back I used the Intel C/C++ compiler for Linux, available on the net, on a small (amateurish) floating-point program of mine. The results were astonishing to me: the Intel compiler was 13 times as fast as the GNU compiler
shipping with Debian (kernel 2.6.8). Do you know what compiler Einstein is using?
Do you know if they would qualify for the "free" use of the Intel compiler? I
was going to try to recompile some of my downloaded Linux programs, but I am having trouble getting through the maze of details of "make".
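To give an idea of what I mean, here is a minimal sketch of that kind of test program. The file name and the loop are made up for illustration, and the compiler invocations in the comments are only the flags I would try from memory, so check your compiler's documentation:

/* fpbench.c - a tiny (and admittedly amateurish) floating-point loop,
 * just to compare what different compilers generate.
 *
 * Build lines I would try (flags from memory - check your compiler docs):
 *   gcc -O3 -march=pentium4 -mfpmath=sse -o fpbench-gcc fpbench.c -lm
 *   icc -O3 -xW -o fpbench-icc fpbench.c
 * Then:  time ./fpbench-gcc ; time ./fpbench-icc
 */
#include <stdio.h>
#include <math.h>

#define N 50000000L

int main(void)
{
    double sum = 0.0, x = 0.0001;
    long i;

    /* A dependent chain of multiplies, adds and a square root, so the
     * compiler's floating-point code generation dominates the runtime. */
    for (i = 0; i < N; i++) {
        x = x * 1.0000001 + 0.5;
        sum += sqrt(x);
    }

    /* Print the result so the loop cannot be optimised away. */
    printf("sum = %f\n", sum);
    return 0;
}

Timing the two binaries with "time" is enough to see the kind of difference I am talking about.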
merle
What is freedom of expression? Without the freedom to offend, it ceases to exist.
— Salman Rushdie
If you use the optimized BOINC clients for Linux, you will notice your MFLOPS roughly double.
The only thing is, this does not matter here, since there are no optimized Einstein binaries for Linux. The best solution is to use the Windows binaries on Linux using Wine.
> On Linux my MFLOPS seem to be half of what the Windows machines running on the
> same processor get.
>
> This is truly hard to believe.
>
> Have you tried the Intel Linux compilers and the SSE2 instruction options?
>
such things just should not be writ
so please destroy this if you wish to live
'tis better in ignorance to dwell
than to go screaming into the abyss worse than hell
The problem is that the binary that really does the crunching is sent to you via the BOINC client. For example, I've got an AMD64 over here running 64-bit Linux, but the actual cruncher appears to be a 32-bit binary:
mark@ubuntu:~/.boinc/projects/einstein.phys.uwm.edu $ file ~/.boinc/boinc
/home/mark/.boinc/boinc: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.0, dynamically linked (uses shared libs), not stripped
mark@ubuntu:~/.boinc/projects/einstein.phys.uwm.edu $ file einstein_4.80_i686-pc-linux-gnu
einstein_4.80_i686-pc-linux-gnu: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.0.30, dynamically linked (uses shared libs), not stripped
mark@ubuntu:~/.boinc/projects/einstein.phys.uwm.edu $ uname -a
Linux ubuntu 2.6.10-5-amd64-generic #1 Tue Mar 15 14:59:03 UTC 2005 x86_64 GNU/Linux
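If you want to double-check from inside a program (rather than trusting file) whether a build is 32- or 64-bit, a trivial sketch like this is enough; it is purely illustrative and has nothing to do with the project's own code:

/* bits.c - print whether this process was built as a 32- or 64-bit binary.
 * Build (sketch):
 *   gcc -o bits bits.c          (native build - 64-bit on this box)
 *   gcc -m32 -o bits32 bits.c   (forces 32-bit, needs the 32-bit libs installed)
 */
#include <stdio.h>

int main(void)
{
    printf("pointer size: %u bytes -> %u-bit binary\n",
           (unsigned)sizeof(void *), (unsigned)(sizeof(void *) * 8));
    return 0;
}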
I really wonder if it makes sense running optimized BOINC clients.
Regards
Mark
> I really wonder if it makes sense running optimized BOINC clients.
It does because it affects the claimed credits which, in turn, affects the credits awarded - not only for you but for the others that crunch the same units.
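Roughly, the claim comes from the client's Whetstone and Dhrystone benchmarks multiplied by CPU time. A minimal sketch of that idea follows; it is my recollection of the cobblestone formula, so treat the constant as approximate and the benchmark figures as made up - the linear scaling is the only point:

/* claim.c - rough sketch of how benchmark-based claimed credit works.
 * My understanding of the cobblestone idea: credit per day is proportional
 * to the Whetstone (FP ops/sec) and Dhrystone (int ops/sec) scores the
 * client reports.  The constant below is from memory and may be off by a
 * factor; what matters is that the claim scales with the benchmarks.
 */
#include <stdio.h>

static double claimed_credit(double cpu_seconds,
                             double whetstone_ops_per_sec,
                             double dhrystone_ops_per_sec)
{
    double days = cpu_seconds / 86400.0;
    /* ~100 credits per day per 10^9 ops/sec of each benchmark (approximate). */
    return days * 100.0 *
           (whetstone_ops_per_sec + dhrystone_ops_per_sec) / 1e9;
}

int main(void)
{
    double cpu = 8.0 * 3600.0;  /* an 8-hour work unit (made-up figure) */

    /* Made-up benchmark scores: a stock client vs. one whose benchmarks
     * come out roughly twice as high. */
    double stock     = claimed_credit(cpu, 1.0e9, 2.0e9);
    double optimised = claimed_credit(cpu, 2.0e9, 4.0e9);

    printf("stock client claims:     %.1f\n", stock);
    printf("optimised client claims: %.1f\n", optimised);
    return 0;
}

Double the benchmark numbers and the claim for the same work unit doubles, which then feeds into what everyone crunching that unit is awarded.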
As for crunching for Einstein, I have, along with many others, dumped it as it is too Linux-unfriendly and they refuse to release the source code to allow us to improve the situation.
Sad but true. :(
Be lucky,
Neil
> > I really wonder if it makes sense running optimized BOINC clients.
>
> It does because it affects the claimed credits which, in turn, affects the
> credits awarded - not only for you but for the others that crunch the same
> units.
>
> As for crunching for Einstein, I have, along with many others, dumped it as it
> is too Linux-unfriendly and they refuse to release the source code to allow us
> to improve the situation.
I just spent some time looking, and I did not see times that were too far out of line with what I am getting on my machines. Heck, my best-performing system, time-wise, for almost all projects is my G5, which is running OS-X, which is Linux ...
As far as not releasing the client, that is their option. Though I think I remember Bruce saying that they may get some expertise from the Participant population to try to address these concerns.
I grant that the spread from my slowest to my fastest is over 3 hours (7:43 to 11:02 average processing time), but to me this very clearly looks more like an issue of architecture.
For example, my best-performing systems are the 2.8 GHz Intel P4 and the G5, and the one thing these have in common is that they are single-threaded processors. The HT processors have times of 10:49, 11:02, and 10:50 ...
Now, my FIRST guess would be that they are running into some kind of contention problem. This could be either in the CPU/FPU or in cache "thrashing" ... Because there is not a substantial difference in the processing times of my 3.2 GHz processors across the motherboard change, my best guess points to contention for the FPU. If it were a cache problem I would have expected to see a bigger difference between single-channel and dual-channel memory.
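If anyone wants to poke at that contention theory, a crude sketch like the following (my own toy test, nothing validated) is enough to see whether two FP-heavy threads on one HT core actually scale:

/* fputest.c - crude check for floating-point contention on a Hyper-Threading CPU.
 * Time the same FP-heavy loop with one thread and then with two threads
 * running at once; if the two logical CPUs are fighting over one FPU,
 * the two-thread wall time creeps toward double the one-thread time
 * instead of staying flat the way it would on two real cores.
 *
 * Build (sketch):  gcc -O2 -o fputest fputest.c -lpthread -lm
 */
#include <math.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define ITERS 100000000L

static void *spin(void *arg)
{
    volatile double x = 1.0;
    long i;

    (void)arg;
    /* Floating-point work that the compiler cannot remove. */
    for (i = 0; i < ITERS; i++)
        x = x * 1.0000001 + sin((double)(i & 0xff));
    return NULL;
}

static double wall_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

static double run(int nthreads)
{
    pthread_t t[2];
    double start = wall_seconds();
    int i;

    for (i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, spin, NULL);
    for (i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);

    return wall_seconds() - start;
}

int main(void)
{
    printf("1 thread : %.2f s\n", run(1));
    printf("2 threads: %.2f s\n", run(2));
    return 0;
}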
And Mark, the best reason to run optimized programs is to get more done in the same unit of time. That is why I am so interested in the optimization work going on with SETI@Home; once they get some of it validated and into general "production", the intent is to move the optimizations into the baseline code.
But the common routines that are developed could be used by other projects as they "tune" their Science Applications.
> I just spent some time looking, and I did not see times that were too far out of
> line with what I am getting on my machines. Heck, my best-performing system,
> time-wise, for almost all projects is my G5, which is running OS-X, which is Linux ...
Most of my Linux results have already dropped off the database (as I'm one of those who has also stopped running Einstein on my Linux systems), but the following are the same computer booted into Windows vs. Linux - and there are substantial differences in the times.
http://einsteinathome.org/host/44709/tasks
http://einsteinathome.org/host/66491/tasks
About 10.5 hours average in windows, and about 18 hours average in linux.
> About 10.5 hours average in windows, and about 18 hours average in linux.
Expect 13.5 hours on Linux (hopefully) soon.
Maybe Windows will be faster too, maybe ... but this is another job.
> I just spent some time looking, and I did not see times that were too far out of
> line with what I am getting on my machines.
The subject was the BOINC client. There is still a substantial disparity between OSs on the benchmarks - therefore the optimised clients are very worthwhile.
> Heck, my best-performing system, time-wise,
> for almost all projects is my G5, which is running OS-X, which is Linux
OS X is not Linux. It is based on BSD Unix which is a very different animal. ;)
> As far as not releasing the client, that is their option.
Of course it's their option - just as it is mine not to crunch Einstein. ;)
If it's any consolation to anyone, Predictor is even worse to Linux users and that gets my thumbs-down too. :(
Be lucky,
Neil
> If it's any consolation to anyone, Predictor is even worse to Linux users and
> that gets my thumbs-down too. :(
Predictor recently made some substantial changes for the better.
I've only returned one 0.0 in the past week. The other returns have received full credit - Over 1200 credits in the past few days.
I still can't place my old AMD-K6 box on it without client errors though. No big loss. It's about as computationally gifted as your typical mall rat.
> > If it's any consolation to anyone, Predictor is even worse to Linux users and
> > that gets my thumbs-down too. :(
>
> Predictor recently made some substantial changes for the better.
Yep, they finally updated the server software. :)
> I've only returned one 0.0 in the past week. The other returns have received
> full credit - Over 1200 credits in the past few days.
I'm happy to say that I never had that problem. However, the Linux client takes 50% longer to complete a WU than the Windows version on my machines and, because Linux hosts get lumped together, the credit awarded tends to be very low due to most Linux users not running optimised BOINC clients.
Whilst I appreciate that it should be the science that matters, it is the competitive streak in me that keeps me crunching. ;)
Be lucky,
Neil