Just in case anyone wants to jump the gun and get a sneak preview - the first results are in. CPU time ranges from 34,337.27 down to 32,564.19 seconds - and the good news is, they're decreasing, so we've snagged ourselves a minimum as well as a maximum. RDCF after the last (slowest) of this batch is up to 1.370399, and the task estimate is now 9:08:27 - we might even get a graph on Sunday morning, depending on the final runtime variation between min and max.
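For anyone puzzled by that RDCF figure: assuming it is BOINC's result duration correction factor, the client multiplies its raw task estimate by that factor, so the numbers quoted imply an uncorrected estimate of roughly 24,000 seconds. A quick check, using only the figures above:

    # BOINC scales the displayed task estimate by the host's result
    # duration correction factor (RDCF): shown = raw_estimate * RDCF.
    rdcf = 1.370399
    shown_estimate_s = 9 * 3600 + 8 * 60 + 27   # 9:08:27 -> 32,907 s
    raw_estimate_s = shown_estimate_s / rdcf    # ~ 24,013 s before correction
    print(round(raw_estimate_s))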
Here's the first graph. With a bit of help from 256.60, we've got pretty much a full cycle:
(direct link)
Runtime over the cycle varied from 37,517 to 32,118 seconds - much the same range as in S5R3. Credit varied from 21.04 to 24.63 credits/hour (170.7 to 146.2 seconds per credit) using the 'granted' figure of 219.73 credits - this is the new figure, adjusted upwards by 15%, applied after 6 August.
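For anyone wanting to reproduce those rates, the conversion is just granted credit divided by runtime - a minimal sketch (small differences from the figures above come down to rounding and the exact runtimes used):

    # Convert a task's runtime and granted credit into the two rates quoted.
    granted = 219.73                      # credits per task ('granted' figure)
    for runtime_s in (37517, 32118):      # slowest and fastest of the cycle
        sec_per_credit = runtime_s / granted
        credits_per_hour = 3600.0 / sec_per_credit
        print(round(sec_per_credit, 1), round(credits_per_hour, 2))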
I have records for the same hardware showing a time range of 44,043 to 38,122 seconds, and a credit rate of 18.12 to 22.34 credits/hour, using app 4.07 at a frequency of 464.35 at the beginning of S5R3 (October 2007). So we're still seeing some small benefit from the optimisation work that has been done since then.
Returning to the graph, I think I'm seeing hints that there will still be 'wiggles' and other fine detail, but like last time it probably won't really become clear until we get up to much higher frequencies.
One other observation: there are only 831 skypoints in these WUs, so the new app is checkpointing a lot less often.
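If the app still checkpoints once per sky point - an assumption on my part, based on how I believe earlier versions behaved - that works out at roughly one checkpoint every 40 seconds on this box:

    # Rough checkpoint interval, ASSUMING one checkpoint per sky point
    # (not confirmed for the new app).
    cpu_time_s = 34337              # slowest result in this batch
    skypoints = 831
    print(cpu_time_s / skypoints)   # ~ 41 s between checkpoints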
The server is now giving me 304.70, and a nice steady run down from __50, so that'll be the next graph - unless I can find an AMD with a nice steady throughput for comparison. Any takers?
Here is the raw graph data in ready reckoner format:
Richard,
The direct link is not working.
Andy
RE: The direct link is not working
It works here, but I noticed a new flag on the upload page asking us not to use it - maybe they're trying to up their ad revenue by blocking access except from the image uploader.
Try this 'link for friends' instead. (Sorry about the advert - my fault for using a free site!)
RE: The direct link is not working
It works fine for me ...
Cheers,
Gary.
Think I might have some protection running that doesn't like imageshack. Cannot see either link.
Thanks Richard! That gives a runtime variance of ~ (37.5 - 32.0)/37.5 = 5.5/37.5 ~ 14.7% across the cycle. This is not atypical compared with R3, though it is toward the low end of such variability. I can't remember exactly which aspect of your host ought to be responsible for this - handling of the CPU-bound code, Hough, or otherwise. I'll look that up, as I'm sure Bernd told us that at one point.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: Thanks Richard! That ...
I think we saw some evidence that, on hosts of a given architecture, the variability increased with poorer memory performance, at least in some cases.
That suggests that the component of the code which contributes most of the execution-time variability across the cycle is, in some sense, more memory-intensive than the rest.
If one were trying to choose a variability estimate for the purpose of making the credit award more level, it would be good to see actual cycle-variability data from some of the host types high up in the current contribution rate.
For that purpose, it is better to sort the BOINCStats table previously alluded to by the "average credit" column, like this:
Einstein CPU fleet current RAC
Despite the name suggesting something else, I think this column is the sum of the current RACs of all hosts of that type.
An anomaly in this table is the third entry (a dual-core Opteron 185). The hosts of that type appear to have run at a vastly higher average Einstein utilization than those of other entries. Are these used in one or more of the Grid machines?
We've seen evidence that some hosts are getting re-registered and counted many times. Though I believe Einstein does decay the RAC of inactive host IDs at intervals, this decay takes a good while, and types that have been re-registered especially often recently may be over-represented in the type RAC total.
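For scale: BOINC's "recent average credit" decays exponentially with a one-week half-life, so a host ID that stops returning work still carries a visible RAC for weeks. A small sketch:

    import math

    CREDIT_HALF_LIFE_S = 7 * 24 * 3600   # one week, as in BOINC's credit averaging

    def decayed_rac(rac, idle_seconds):
        """RAC remaining for a host that has returned nothing for idle_seconds."""
        return rac * math.exp(-math.log(2) * idle_seconds / CREDIT_HALF_LIFE_S)

    # A re-registered host's abandoned ID still shows ~5% of its
    # old RAC a month after it last returned credit:
    print(decayed_rac(1000.0, 30 * 24 * 3600))   # ~ 51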
Some of the highest-RAC Xeon entries may be participants in this re-registration excess. It seems unlikely to me that we really have over six times as many Xeon X5355 hosts as Q6600 hosts, yet their average contribution per host is so low - which is just what repeated re-registration of the same machines would produce.
RE: An anomaly in this table ...
Yep. Those are typically Steffen Grunewald's hosts for Merlin/Morgane.
RE: Returning to the graph ...
Shouldn't one be able to trick the scheduler into giving you tasks from higher frequencies? Since the filename is known, one could manually download the sky grid. If someone talked nicely to Bruce or BM, they could provide you with the cksum, so you could edit the client_state.xml file. It could be useful to do one sky grid in every 100-frequency range to check that they are working correctly.
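If anyone did try that, a sanity check before hand-editing client_state.xml might look like the sketch below - the sky-grid filename is invented for illustration, but the <md5_cksum> and <nbytes> fields are the ones BOINC keeps per file in its <file_info> entries:

    # Verify a hand-downloaded sky grid against the checksum the project
    # would have to supply, before editing client_state.xml to reference it.
    import hashlib, os

    def md5_and_size(path):
        """Return (md5 hex digest, size in bytes) - what <md5_cksum>/<nbytes> expect."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest(), os.path.getsize(path)

    # 'skygrid_1450Hz.dat' is a hypothetical filename, for illustration only.
    digest, nbytes = md5_and_size("skygrid_1450Hz.dat")
    print(digest, nbytes)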
When you're really interested in a subject, there is no way to avoid it. You have to read the Manual.