32-bit S6BucketLVE sent to 64-bit systems.

Sebastian M. Bobrecki
Sebastian M. Bo...
Joined: 20 Feb 05
Posts: 63
Credit: 1529602847
RAC: 105
Topic 196957

I have a problem: my machines with 64-bit Linux (multilib) get the 32-bit version of the S6BucketLVE application. It's not a terrible disaster, but my tests show that the 64-bit version is faster by about 5-10%.

P.S. This problem does not occur on Albert@Home.

Gary Roberts
Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117693649166
RAC: 35072906

I believe the GW apps are 32-bit only. I don't know of a 64-bit version. As far as I know, this applies to Albert as well. It's fine to run a 32-bit app on a 64-bit OS.

What's the full name of the 64-bit app you've used on Albert?

Cheers,
Gary.

Sebastian M. Bobrecki
Sebastian M. Bo...
Joined: 20 Feb 05
Posts: 63
Credit: 1529602847
RAC: 105

It exists for sure.
Take a look here: http://einstein.phys.uwm.edu/apps.php
...
Linux running on an AMD x86_64 or Intel EM64T CPU: version 1.04, 14 Jan 2013 8:08:00 UTC
...
And the output of the file command:
projects/einstein.phys.uwm.edu/einstein_S6BucketLVE_1.04_x86_64-pc-linux-gnu: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.0, not stripped

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 729882930
RAC: 1191712

Hi!

Is it the exact same host on Albert and Einstein that is getting the 64-bit app on Albert and the 32-bit app on Einstein?

Cheers
HB

Sebastian M. Bobrecki
Sebastian M. Bo...
Joined: 20 Feb 05
Posts: 63
Credit: 1529602847
RAC: 105
Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 729882930
RAC: 1191712

Hi!

Sorry, my bad, I should have remembered this earlier: it's a feature, not a bug :-). Every Linux host that can execute the 32-bit version will get it. Only the hosts that signal to the server that they cannot execute the 32-bit version will get the 64-bit version.
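
To illustrate the rule, here is a minimal Python sketch. The platform names are the standard BOINC ones, but the function name and the 32-bit file name are made up for the example (only the 64-bit file name appears earlier in this thread), and this is of course not the actual scheduler code:

# Hypothetical sketch of the selection rule described above,
# not the real Einstein@Home scheduler code.

def pick_app_version(reported_platforms):
    """Return the S6BucketLVE build a Linux host would receive.

    reported_platforms: the platform strings a client sends in its
    scheduler request (its primary platform plus any alternate platforms).
    """
    # Any host that reports it can execute 32-bit code gets the 32-bit app.
    if "i686-pc-linux-gnu" in reported_platforms:
        return "einstein_S6BucketLVE_1.04_i686-pc-linux-gnu"
    # Only hosts that do not report 32-bit capability get the 64-bit app.
    if "x86_64-pc-linux-gnu" in reported_platforms:
        return "einstein_S6BucketLVE_1.04_x86_64-pc-linux-gnu"
    return None  # no compatible Linux build

# A multilib x86_64 host reports both platforms, so it receives the 32-bit app:
print(pick_app_version(["x86_64-pc-linux-gnu", "i686-pc-linux-gnu"]))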

This was set up this way about a year ago; the discussion around this question, and a comparison of the performance of the 32-bit and 64-bit app versions, can be found near this spot: http://einsteinathome.org/node/196352&nowrap=true#117734.

On Albert, this wasn't configured in the same way. Sorry for the confusion.

Cheers
HB

Sebastian M. Bobrecki
Sebastian M. Bo...
Joined: 20 Feb 05
Posts: 63
Credit: 1529602847
RAC: 105

Ok, that explains this behavior.

But the question then is what hardware you used for the test you write about in that post, and what the other conditions were. To do my test I prepared a dedicated machine with an Ivy Bridge processor and turned off all options related to frequency changes. In addition, only the init, agetty, ssh and boinc + S6BucketLVE application processes were running. Tests were performed with 1 and 4 instances on three different samples, and each time the results for the 64-bit version were better, on average, by about 8%. Perhaps the newer architecture, faster memory, etc. mean that the 64-bit application does better and better.
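
A rough sketch of the kind of timing harness such a test could use (Python; the binary paths are placeholders, and the real tasks would also need their workunit files and command-line arguments, which are omitted here):

import statistics
import subprocess
import time

# Placeholder paths; real S6BucketLVE tasks also need their input files
# and command-line arguments.
APPS = {
    "32-bit": "./einstein_S6BucketLVE_1.04_i686-pc-linux-gnu",
    "64-bit": "./einstein_S6BucketLVE_1.04_x86_64-pc-linux-gnu",
}
SAMPLES = 3      # three different input samples
INSTANCES = 4    # the test was also repeated with a single instance

def run_parallel(binary, n_instances):
    """Start n_instances copies of the app and return the total wall time."""
    start = time.monotonic()
    procs = [subprocess.Popen([binary]) for _ in range(n_instances)]
    for p in procs:
        p.wait()
    return time.monotonic() - start

for name, binary in APPS.items():
    times = [run_parallel(binary, INSTANCES) for _ in range(SAMPLES)]
    print(name, statistics.mean(times))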

floyd
floyd
Joined: 12 Sep 11
Posts: 133
Credit: 186610495
RAC: 0

I wonder ... isn't BOINC supposed to evaluate the performance levels of different application versions on a particular computer and include that info in scheduler requests?

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 729882930
RAC: 1191712

Quote:

Ok, that explains this behavior.

But the question then is what hardware you used for the test you write about in that post, and what the other conditions were.

In the quoted thread, Bernd compared the runtimes not just on one of our machines; he actually queried the database for volunteer hosts that had executed both versions in the past, and then calculated the performance difference for each such host. The result was that there was no significant performance difference, at least back then when the test was performed.
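
Roughly, that paired per-host comparison looks like this (an illustrative Python sketch; the record layout and the numbers are invented for the example, not the actual E@H database schema or results):

from collections import defaultdict
from statistics import mean

# Invented example records: (host_id, app_platform, cpu_time_in_seconds)
tasks = [
    (1, "i686",   41000), (1, "x86_64", 39500),
    (2, "i686",   52000), (2, "x86_64", 51800),
    (3, "i686",   47000),  # ran only one version, so it contributes nothing
]

per_host = defaultdict(lambda: defaultdict(list))
for host, platform, cpu_time in tasks:
    per_host[host][platform].append(cpu_time)

# Only hosts that executed both versions contribute a paired runtime ratio.
ratios = [
    mean(runs["i686"]) / mean(runs["x86_64"])
    for runs in per_host.values()
    if runs["i686"] and runs["x86_64"]
]
print("mean 32-bit / 64-bit runtime ratio over paired hosts:", mean(ratios))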

As for BOINC making a decision on a per-host basis based on performance measurements: that's not done, at least not in the BOINC version we use in production (without the "CreditNew" feature).

Cheers
HB

Sebastian M. Bobrecki
Sebastian M. Bo...
Joined: 20 Feb 05
Posts: 63
Credit: 1529602847
RAC: 105

Yes, I saw this. Except that such results may be misleading. Namely, the project database contains no information about external factors that may affect how the application runs. For example, other programs running in parallel, with different CPU and memory load characteristics, may influence it in an unpredictable way. Moreover, many processors are equipped with technologies such as Turbo Boost, Turbo Core or thermal throttling, which may additionally affect the computation time (which is what gets saved to the database), even depending on circumstances such as the current weather. You also have to take into account that the distribution of processor types changes over time: the number of new devices increases while the number of older models decreases. For example, in Core2-based processors the macro-op fusion mechanism only works for 32-bit operations, while in the i7 and later it also works for 64-bit operations. Therefore I think that only properly prepared tests can really show whether the application runs faster, slower or roughly the same. Moreover, I think such tests should be repeated periodically to check whether something has changed with the progress of technology.

Bernd Machenschalk
Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250584637
RAC: 34590

From my post in the other thread:

Quote:
My numbers are taken from the E@H database, averaged over >2000 computers and roughly 50000 tasks, and these are consistent with the result of a single previous experiment under tightly controlled conditions.

Not only did I average over the whole database, I also did experiments under tightly controlled conditions (AFAIK tighter than yours, i.e. without BOINC and with the exact same workunit/task). And at least at that time, the results were indeed consistent.

A fundamental problem that we can't solve is that neither the average participant nor the average machine actually exists; therefore, what's good for the average participant need not be good for a particular one.

The majority of Linux hosts attached to E@H are still from (LSC) clusters (like Atlas), and these are almost exclusively based on Intel's Core2 architecture. This, btw., is the reason I chose such a cluster node for the comparison test I did at that time. So I doubt that the average timing has changed that much since back then.

Anyway, I'll put a re-evaluation of that issue on my to-do list; these days I'm too busy with other things.

BM
