highFreq or lowFreq

Christian Beer
Joined: 9 Feb 05
Posts: 595
Credit: 188474153
RAC: 237703


Sebastian M. Bobrecki wrote:
After the response from Christian:
Quote:
The i7 family of processors in general falls into the "fast hosts" category...
I think, but this is only my guess, that the decision is made based on the processor name string returned by the BOINC client, rather than on the combination of vendor name and family, model and stepping numbers, as I had assumed. Maybe Christian will find some time to give some more details about this.

Yes, we make the decision based on the model reported by BOINC, to which we apply a regular expression (I3-3225| i[57][- ]| E3[1-]| E8[23456]| E5-[12]| X5[26]| X5[45]6| X7| G[28]| G64| G32). If the model matches the expression, the host is considered "fast". Since we don't know what the reason behind this is, and we also see variations in runtime when looking at a single CPU model, our goal is to have a simple separator between the two host populations.
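As a rough illustration of how such a check could work (this is only a sketch: the function name and example model strings are made up, and the real scheduler code may handle case or whitespace differently), the pattern above can be applied directly to the model string reported by the BOINC client:

import re

# Regular expression quoted above, applied to the CPU model string the
# BOINC client reports (e.g. "Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz").
FAST_HOST_RE = re.compile(
    r"(I3-3225| i[57][- ]| E3[1-]| E8[23456]| E5-[12]| X5[26]| X5[45]6| X7| G[28]| G64| G32)"
)

def is_fast_host(cpu_model):
    # A host counts as "fast" if any alternative in the pattern matches.
    return FAST_HOST_RE.search(cpu_model) is not None

# Illustrative model strings (not taken from this thread):
print(is_fast_host("Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz"))  # True, via " i[57][- ]"
print(is_fast_host("AMD Ryzen 7 1700 Eight-Core Processor"))     # False, no alternative matches

Note the second example: Ryzen models do not match any alternative and therefore land in the "slow" population, which is consistent with what is said further down in the thread.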

Now that you have finished some Hi workunits with the i7, you can see that the actual runtime is twice what we would expect for this kind of task (this messes up your DCF and skews the runtime estimates of the other EaH applications on this host). This is not the case for other CPUs of the i7 family. For example, the i7-6700K (4 GHz) has 128 hosts with an average runtime of <14h and 32 hosts with >14h. We only have two hosts with the Q 840 CPU, so I couldn't use it as an example.
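To see why one overrunning application skews the other estimates, here is a very simplified sketch (not BOINC's exact update rule, and the numbers are made up) of how a single project-wide duration correction factor propagates to the estimates of other applications on the same host:

# Simplified sketch: BOINC keeps one duration correction factor (DCF)
# per project on a host, and the client's runtime estimate scales with it.

def estimated_runtime(server_estimate_h, dcf):
    # Client-side estimate = server-side estimate * project DCF.
    return server_estimate_h * dcf

dcf = 1.0

# A Hi task estimated at 7 h actually takes 14 h; the real client nudges
# the DCF toward the ratio actual/estimated, here simply set to 2.0.
dcf = 14.0 / 7.0

# A task of another EaH application on the same host, estimated at 4 h,
# is now predicted to take 8 h even though it will still run for ~4 h.
print(estimated_runtime(4.0, dcf))  # 8.0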

There is also a small FAQ from when we first introduced this feature: https://einsteinathome.org/content/gravitational-wave-search-o1as20-100-f-and-i-faq

solling2
Joined: 20 Nov 14
Posts: 219
Credit: 1577577976
RAC: 19696


Christian Beer wrote:

But our gravitational wave search is no simple application and as I wrote earlier we also don't understand what is the reason for the different runtimes. You would for example expect that some AMD CPUs would be in the "fast" host category and I would have expected the Ryzen to be but that is not the case. 

The science app still supports single threading only, doesn't it? That's a pity, as it might be interesting to know whether a Ryzen or comparable CPU could crunch a task in a few minutes if multithreading were allowed. Unfortunately, it is clear that users may not want to dedicate all of their cores to the project, or a core may be needed to feed a GPU. So a special app to take advantage of the Ryzens/i7s of this world doesn't make sense, does it?

MarkJ
Joined: 28 Feb 08
Posts: 437
Credit: 139002861
RAC: 0


I have a couple of Ryzens. They're getting Lo work units as the rule says.

Run time doubles if I run 16 at a time. Sure, you expect it to be slower when using all cores, as they compete for the AVX units, but it seems to be consistently double. The Intel seems to do better in the same situation.

If I run 8 at a time they get done in 5.5 to 6.25 hours. It seems to vary a bit based on the frequency, but I suspect it's mostly down to the memory channels (dual-channel on a Ryzen) and the CPU cache (the Ryzen uses a victim cache, unlike Intel).
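A quick back-of-the-envelope check with those numbers (illustrative only, rounded from the figures above) shows why the doubling matters: the extra eight concurrent tasks add essentially no throughput, which fits the picture of the tasks fighting over shared AVX units and the two memory channels rather than over cores.

# Throughput in tasks per hour for the two configurations described above.
def tasks_per_hour(concurrent_tasks, runtime_h):
    return concurrent_tasks / runtime_h

print(tasks_per_hour(8, 6.0))    # ~1.33 tasks/h with 8 tasks at ~6 h each
print(tasks_per_hour(16, 12.0))  # ~1.33 tasks/h with 16 tasks at ~12 h each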
