HostID 1001562 - Richard Haselgrove's Q6600 Quad Core

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2991132968
RAC: 700873

Just in case anyone wants to jump the gun, and get a sneak preview - the first results are in. CPU time ranges from 34,337.27 down to 32,564.19 seconds - and the good news is, they're decreasing, so we've snagged ourselves a minimum as well as a maximum. RDCF after the last (slowest) of this batch is up to 1.370399, and the task estimate is now 9:08:27 - we might even get a graph on Sunday morning, depending on the final runtime variation between min and max.
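
For anyone who wants to check the arithmetic, here's a quick Python sketch of how the displayed estimate follows from the RDCF. The 'base' (uncorrected) estimate of roughly 24,013 seconds is just back-calculated from the figures above, not read off the server, so treat it as an assumption.

rdcf = 1.370399              # duration correction factor after the slowest result so far
base_estimate_s = 24013      # uncorrected estimate, back-calculated from this post (assumption)

corrected_s = base_estimate_s * rdcf
hours, rem = divmod(round(corrected_s), 3600)
minutes, seconds = divmod(rem, 60)
print(f"task estimate: {hours}:{minutes:02d}:{seconds:02d}")   # roughly 9:08:27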

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2991132968
RAC: 700873

Here's the first graph. With a bit of help from 265.60, we've got pretty much a full cycle:


[Graph: CPU time per task over one full cycle at 265.50/265.60] (direct link)

Runtime over the cycle varied from 37,517 to 32,118 seconds - much the same range as S5R3. Credit varied from 21.04 to 24.63 credits/hour (170.7 to 146.2 seconds/credit) using the 'granted' figure of 219.73 - this is the new 'adjusted upwards by 15%' figure applied after 6 August.
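
The credits/hour and seconds/credit figures follow directly from the granted credit and the runtimes; here's a minimal Python check using the numbers quoted above (small rounding differences aside):

granted_credit = 219.73                 # the post-6-August 'adjusted upwards by 15%' award
for runtime_s in (37517, 32118):        # slowest and fastest tasks in the cycle
    credits_per_hour = granted_credit / (runtime_s / 3600)
    seconds_per_credit = runtime_s / granted_credit
    print(f"{runtime_s} s: {credits_per_hour:.2f} cr/hr, {seconds_per_credit:.1f} s/credit")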

I have records for the same hardware showing a time range of 44,043 to 38,122 seconds, and a credit rate of 18.12 to 22.34 credits/hour, using app 4.07 on a frequency of 464.35 at the beginning of S5R3 (October 2007). So we're still seeing some small benefit from the optimisation work which has been done since then.

Returning to the graph, I think I'm seeing hints that there will still be 'wiggles' and other fine detail, but like last time it probably won't really become clear until we get up to much higher frequencies.

One other observation: there are only 831 skypoints in these WUs, so the new app is checkpointing a lot less often.

The server is now giving me 304.70, and a nice steady run down from __50, so that'll be the next graph - unless I can find an AMD with a nice steady throughput for comparison. Any takers?

Here is the raw graph data in ready reckoner format:

265.50,0,37517.48,6.04_1
265.50,1,36366.98,6.04_1
265.50,2,36027.28,6.04_1
265.50,3,34570.25,6.04_1
265.50,4,34079.52,6.04_1
265.50,5,33088.23,6.04_1
265.50,6,32595.86,6.04_1
265.50,7,32213.19,6.04_1
265.50,8,32181.31,6.04_1
265.50,9,32201.83,6.04_1
265.50,10,32118.33,6.04_1
265.50,11,32223.47,6.04_1
265.50,12,32186.22,6.04_1
265.50,13,32564.19,6.04_1
265.50,14,33139.52,6.04_1
265.50,15,33894.38,6.04_1
265.50,16,34337.27,6.04_1
265.60,17,35531,6.04_1
265.60,18,36487.67,6.04_1
265.60,19,37479.41,6.04_1
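
If anyone wants to chew on those numbers, here's a minimal Python sketch for reading lines in that ready-reckoner format. The field meanings (frequency, sequence number, CPU seconds, app version) and the file name are my assumptions from the discussion above, not a documented format.

# Parse 'ready reckoner' lines: frequency, sequence number, CPU seconds, app version.
def parse_reckoner(lines):
    rows = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        freq, seq, cpu_s, app = line.split(",")
        rows.append((float(freq), int(seq), float(cpu_s), app))
    return rows

with open("reckoner.txt") as f:          # hypothetical file holding the data above
    rows = parse_reckoner(f)

cpu_times = [cpu for _, _, cpu, _ in rows]
print(f"tasks: {len(rows)}, min {min(cpu_times):.0f} s, max {max(cpu_times):.0f} s")
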
Winterknight
Joined: 4 Jun 05
Posts: 1492
Credit: 395259011
RAC: 555690

Richard,
The direct link is not working.

Andy

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2991132968
RAC: 700873

Message 78930 in response to message 78929

Quote:
The direct link is not working.


It works here, but I noticed a new flag on the upload page asking us not to use it - maybe they're trying to up their ad revenue by blocking access except from the image uploader.

Try this 'link for friends' instead. (Sorry about the advert - my fault for using a free site!)

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5877
Credit: 118655063850
RAC: 18967528

Message 78931 in response to message 78929

Quote:
The direct link is not working.

It works fine for me ...

Cheers,
Gary.

Winterknight
Joined: 4 Jun 05
Posts: 1492
Credit: 395259011
RAC: 555690

Think I might have some protection running that doesn't like imageshack. Cannot see either link.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 325517562
RAC: 67378

Thanks Richard! That gives a runtime variance of ~ (37.5 - 32.0)/37.5 = 5.5/37.5 ~ 14.7% across the cycle. This is not atypical compared with R3, though it is toward the low end of such variability. I can't remember exactly which aspect of your host ought to be responsible for this - handling of the CPU-bound code, Hough, or otherwise. I'll look that up, as I'm sure Bernd told us that at one point.
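
For the record, the same figure drops straight out of the rounded runtimes; a trivial Python check, numbers as quoted above:

max_ks, min_ks = 37.5, 32.0                                # rounded cycle runtimes, kiloseconds
spread = (max_ks - min_ks) / max_ks
print(f"runtime spread across the cycle: {spread:.1%}")    # -> 14.7%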

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

archae86
Joined: 6 Dec 05
Posts: 3161
Credit: 7310061689
RAC: 2314165

Message 78934 in response to message 78933

Quote:
Thanks Richard! That gives a runtime variance of ~ (37.5 - 32.0)/37.5 = 5.5/37.5 ~ 14.7% across the cycle. This is not atypical compared with R3, though it is toward the low end of such variability. I can't remember exactly which aspect of your host ought to be responsible for this - handling of the CPU-bound code, Hough, or otherwise.


I think we saw some evidence that, on hosts of a given architecture, the variability increased with poorer memory performance, at least in some cases.

That suggests that the component of the code which contributes the most execution-time variability across the cycle is, in some sense, more memory-intensive than the rest.

If one were trying to choose a variability estimate for the purpose of a more level credit award, it would be good to see actual cycle variability data from some of the CPU types that rank high in current contribution.

For that purpose, it is better to sort the BOINCStats table previously alluded to by the "average credit" column, like this:

Einstein CPU fleet current RAC
Despite the name suggesting something else, I think this is actually the sum of the current host RACs for each CPU type.

An anomaly in this table is the third entry (a dual core Opteron 185). The hosts of that type appear to have run at a vastly higher average Einstein utilization than other entries. Are these used in one or more of the Grid machines?

We've seen evidence that some hosts are getting re-registered and counted many times. Though I believe Einstein does decay the RAC of inactive host IDs at intervals, this decay takes a good while, and types that have been re-registered especially often recently might be over-represented in the type RAC total.

Some of the highest-RAC Xeon entries may be inflated by this re-registration effect. It seems unlikely to me that we really have over six times as many Xeon X5355 hosts as Q6600 hosts, and their average contribution being so low points the same way.

Brian Silvers
Joined: 26 Aug 05
Posts: 772
Credit: 282700
RAC: 0

Message 78935 in response to message 78934

Quote:

An anomaly in this table is the third entry (a dual core Opteron 185). The hosts of that type appear to have run at a vastly higher average Einstein utilization than other entries. Are these used in one or more of the Grid machines?

Yep. Those are typically Steffen Grunewald's hosts for Merlin/Morgane.

Ziran
Joined: 26 Nov 04
Posts: 194
Credit: 665118
RAC: 772

Message 78936 in response to message 78928

Quote:

Returning to the graph, I think I'm seeing hints that there will still be 'wiggles' and other fine detail, but like last time it probably won't really become clear until we get up to much higher frequencies.

Shouldn't one be able to trick the scheduler into giving you tasks from higher frequencies? Since the filename is known, one could manually download the sky grid. If someone talked nicely to Bruce or BM, they could provide you with the cksum, so you could edit the client_state.xml file. It could probably be useful to do one sky grid in every 100-frequency range to check that they are working correctly.

When you're really interested in a subject, there is no way to avoid it. You have to read the Manual.
