SPARC Solaris Einstein@Home?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250585392
RAC: 34471

The stock clients all request

The stock clients all request the sparc-sun-solaris2.7 platform, as they are built on a Solaris 7 machine and should therefore run on all systems from that version on. If you compile your own client, you are advised to use --build=sparc-sun-solaris2.7 as an additional configure option, so that the client reports the same platform.

Due to an old build process (which has since been improved) the old 4.19 client links against shared libraries that are not present on all systems (to say the least), in particular libstdc++.so.3 and libgcc_s.so.1. If you have gcc installed, you can create a symlink named libstdc++.so.3 pointing to your version of libstdc++.so.
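For example, something along these lines should work (just a sketch; the /usr/local/lib path is an assumption, use wherever your gcc installation keeps libstdc++.so, and make sure the runtime linker finds the link, e.g. via LD_LIBRARY_PATH or crle):

    # build a client that reports the same platform as the stock one
    ./configure --build=sparc-sun-solaris2.7
    make

    # work around the missing libstdc++.so.3 of the old 4.19 client
    ln -s /usr/local/lib/libstdc++.so /usr/local/lib/libstdc++.so.3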

I am running the stock recommended 4.43 client without problems.

BM

Stefan Urbat
Joined: 9 Feb 05
Posts: 16
Credit: 147672
RAC: 0

RE: The stock clients all

Message 15748 in response to message 15747

Quote:

The stock clients all request the sparc-sun-solaris2.7 platform, as they are built on a Solaris 7 machine and should therefore run on all systems from that version on. If you compile your own client, you are advised to use --build=sparc-sun-solaris2.7 as an additional configure option, so that the client reports the same platform.

Due to an old build process (which has since been improved) the old 4.19 client links against shared libraries that are not present on all systems (to say the least), in particular libstdc++.so.3 and libgcc_s.so.1. If you have gcc installed, you can create a symlink named libstdc++.so.3 pointing to your version of libstdc++.so.

I am running the stock recommended 4.43 client without problems.

BM

Indeed, I had to symlink libstdc++.so.3 to a current v6 there and have not yet tried the more recent client version 4.43; but on another machine running Solaris 10, albert v4.36 has worked smoothly so far (with the anonymous platform mechanism, too). Your proposal to force the older Solaris version in the build makes sense; so far it hasn't made a difference for me, because SETI@home is open source like BOINC and I usually compile both myself, so in this situation I always need an app_info.xml to run the self-compiled application.
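For anyone else compiling their own app: the anonymous platform mechanism only needs an app_info.xml in the project directory that points to your executable. A minimal sketch (the app name "einstein" and the file name below are placeholders, take the real names from your client_state.xml and your own binary):

    <app_info>
      <app>
        <name>einstein</name>
      </app>
      <file_info>
        <name>albert_4.36_sparc-sun-solaris2.7</name>
        <executable/>
      </file_info>
      <app_version>
        <app_name>einstein</app_name>
        <version_num>436</version_num>
        <file_ref>
          <file_name>albert_4.36_sparc-sun-solaris2.7</file_name>
          <main_program/>
        </file_ref>
      </app_version>
    </app_info>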

[AF>ALSACE>EDLS] Phil68
Joined: 30 Dec 05
Posts: 32
Credit: 39832
RAC: 0

Hi... I'm very happy to have

Hi...
I'm very happy to have an application for my Solaris box (other than SETI)...
But at the moment I have problems with the computing: some WUs accrue no time (and they stay at 0% in my BViewer)...
Is that a WU problem or an application problem?

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250585392
RAC: 34471

RE: Hi... I'm very happy to

Message 15750 in response to message 15749

Quote:
Hi...
I'm very happy to have an application for my Solaris box (other than SETI)...
But at the moment I have problems with the computing: some WUs accrue no time (and they stay at 0% in my BViewer)...
Is that a WU problem or an application problem?

Which client are you using? I don't know about BViewer, but what do you get when you grep client_state.xml for "fraction_done" on the machine in question?
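Something like this should do it (run in the directory where the client keeps its files; the location is just an example):

    grep fraction_done client_state.xml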

BM

[AF>ALSACE>EDLS] Phil68
Joined: 30 Dec 05
Posts: 32
Credit: 39832
RAC: 0

RE: Which client are you

Message 15751 in response to message 15750

Quote:

Which client are you using? I don't know about BViewer, but what do you get when you grep client_state.xml for "fraction_done" on the machine in question?

BM

I have 2 processors, and at the moment one of them is occupied with SETI...
Is this application able to work on 2 WUs at the same time?
My version is: BOINC client version 4.43 for sparc-sun-solaris2.7

After a kill and a restart of the boinc client, the % is 0.0 but the CPU time is 05:07:20...

thanks

before kill

http://einstein.phys.uwm.edu/
z1_0175.0__1078_S4R2a_0
1
436
1
2
18440.180000
0.776293
18463.590000
0.000000
0.000000

http://einstein.phys.uwm.edu/
z1_0175.0__1077_S4R2a_0
9
436
2
1
0.000000
0.000000
0.000000
0.000000
0.000000

after restart

http://einstein.phys.uwm.edu/
z1_0175.0__1077_S4R2a_0
0
436
2
1
0.000000
0.000000
0.000000
0.000000
0.000000

http://einstein.phys.uwm.edu/
z1_0175.0__1076_S4R2a_0
0
436
3
1
0.000000
0.000000
0.000000
0.000000
0.000000

http://einstein.phys.uwm.edu/
z1_0175.0__1075_S4R2a_0
9
436
4
1
0.000000
0.000000
0.000000
0.000000
0.000000

ebahapo
Joined: 22 Jan 05
Posts: 47
Credit: 755276
RAC: 0

IMHO, it would make much more

IMHO, it would make much more sense to have a native x86-64 client, volume-wise.

Stefan Urbat
Joined: 9 Feb 05
Posts: 16
Credit: 147672
RAC: 0

RE: IMHO, it would make

Message 15753 in response to message 15752

Quote:
IMHO, it would make much more sense to have a native x86-64 client, volume-wise.

It depends, but Solaris and Linux clients for x86_64 CPUs would be fine performance-wise, as I know from SETI@home. On the other hand, they can't beat heavy SMP SPARC systems (even a 4-core system with AMD Opteron 275s is not able to top a system with 32 or so UltraSPARC III CPUs, for example).

Stefan Urbat
Joined: 9 Feb 05
Posts: 16
Credit: 147672
RAC: 0

The first result seems to be

The first result seems to have completed cleanly on the fastest SPARC Solaris system I have access to, though it is not yet visible on the website (only as a log statement).

But the performance looks rather poor: on this 1062 MHz UltraSPARC IIIi CPU it took almost exactly 100,000 seconds, i.e. more than one day of CPU time, to finish it. I would have expected around 40,000 seconds on this hardware, taking into account the relative performance of different CPUs on SETI@home.

So there seems to be a lot of potential for optimization of the SPARC client, doesn't there? The other machine, started earlier on Einstein@Home and driven by a mere 550 MHz UltraSPARC II CPU, will take about two CPU days to complete, it seems...

[AF>ALSACE>EDLS] Phil68
Joined: 30 Dec 05
Posts: 32
Credit: 39832
RAC: 0

Today I have 3 WUs on my

Message 15755 in response to message 15754

Today I have 3 WUs on my Sun machine.
I let the first one work alone; I have suspended the 2 others and the SETI ones...
This WU (z1_0175.0__1072_S4R2a_0) seems to be blocked now... and I'm sure that if I restart the boinc client, the WU will show 0 CPU time...

http://einstein.phys.uwm.edu/
z1_0175.0__1072_S4R2a_0
1
436
1
2
3606.320000
0.141345
3606.320000
0.000000
0.000000

http://einstein.phys.uwm.edu/
z1_0175.0__1071_S4R2a_0
9
436
2
1
0.000000
0.000000
0.000000
0.000000
0.000000

http://einstein.phys.uwm.edu/
z1_0175.0__1070_S4R2a_0
9
436
3
1
0.000000
0.000000
0.000000
0.000000
0.000000

and after killing the boinc client...

http://einstein.phys.uwm.edu/
z1_0175.0__1072_S4R2a_0
1
436
1
2
3606.320000
0.000000
3606.320000
0.000000
0.000000

and the work didn't go any further... and after I cancelled it... here is the result (as if it were completed)...

Result 15004119 / workunit 3706440, sent 17 Jan 2006 0:51:59 UTC, received 17 Jan 2006 8:45:49 UTC: server state "Over", outcome "Client error", client state "Computing", CPU time 3,606.32 s, claimed credit 3.66, granted credit ---

Stefan Urbat
Joined: 9 Feb 05
Posts: 16
Credit: 147672
RAC: 0

RE: The first result seems

Message 15756 in response to message 15754

Quote:
The first result seems to have completed cleanly on the fastest SPARC Solaris system I have access to, though it is not yet visible on the website (only as a log statement).

Meanwhile the first two results have been completed there, as you can see here:

http://einsteinathome.org/host/515255/tasks

Quote:
But the performance looks rather poor: on this 1062 MHz UltraSPARC IIIi CPU it took almost exactly 100,000 seconds, i.e. more than one day of CPU time, to finish the first one. I would have expected around 40,000 seconds on this hardware, taking into account the relative performance of different CPUs on SETI@home.

As you can see on the result page above, the processing is clearly too slow compared to typical PC systems.

Quote:
So there seems to be a lot of potential for optimization of the SPARC client, doesn't there? The other machine, started earlier on Einstein@Home and driven by a mere 550 MHz UltraSPARC II CPU, will take about two CPU days to complete the first run, it seems...

Not quite two complete CPU days, but it comes close --- did you take any optimization measures at all? Which compiler did you use to build the client? gcc 3.0, 3.3, 3.4, 4.0, Sun Studio 8, 9, 10, 11? There are several ways to make the client faster; on SPARC Solaris a program may sometimes even run faster with -O2 than with -O3 under gcc...
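Just to illustrate what I mean, a possible build recipe (an untested sketch; the -mcpu=ultrasparc3 flag is only an assumption for that hardware and needs a reasonably recent gcc):

    # build with moderate optimization for UltraSPARC III
    CC=gcc CXX=g++ \
    CFLAGS="-O2 -mcpu=ultrasparc3" CXXFLAGS="-O2 -mcpu=ultrasparc3" \
    ./configure --build=sparc-sun-solaris2.7
    make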
