Estimated app speed for S6LV

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0
Topic 196288

My main host has been crunching S6LV on 7 cores 24/7 since the new app was launched, but the runtime estimate still leads to a DCF of about 1.8.
(I haven't been using app_info since then, thanks to the option for concurrent GPU apps available in the preferences, so there are no flops tags or any other local settings, that I know of, interfering.)

The run times have always been the same, and I've seen that the estimated task size has always been the same too, so I thought that by now the estimate should be closer to the "real" speed (or at least closer to the value needed to keep the DCF around 1)...
(The same happens on my other hosts, with different values, but always with values higher than the "real" speed.)

So, is there some kind of cap on that estimation? Or is there some other reason why this value is so slow to get adjusted?

The "issue" is that the app speeds for the GPU tasks are very accurate, so their runtimes get overestimated by the high DCF. That makes BOINC lazy about filling the cache for the GPUs, or sometimes it even enters panic mode for the GPU... Not a serious issue, just a minor annoyance... but maybe there is something in the calculations that needs a second look...
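A minimal sketch of the mechanics described above (not BOINC's actual code, and all numbers are invented): BOINC derives a runtime estimate from the claimed task size and the host's speed, then scales it by a single host-wide DCF, so one miscalibrated CPU app inflates the estimates for the otherwise-accurate GPU app too.

```python
# Illustrative sketch of BOINC-style runtime estimation with a
# host-wide Duration Correction Factor (DCF). Numbers are made up.

def estimated_runtime(rsc_fpops_est, app_flops, dcf):
    """Estimate = task size / app speed, scaled by the host-wide DCF."""
    return rsc_fpops_est / app_flops * dcf

gpu_task  = 3.6e13   # claimed size of a GPU task (floating-point ops)
gpu_flops = 2.0e10   # effective GPU app speed; its estimates are accurate

# With DCF at 1.0 the GPU estimate matches reality; with DCF stuck at
# 1.8 (driven there by the underestimated CPU app speed), the same
# estimate is inflated by 80%, so BOINC under-fills the GPU cache:
accurate = estimated_runtime(gpu_task, gpu_flops, 1.0)   # ~1800 s
inflated = estimated_runtime(gpu_task, gpu_flops, 1.8)   # ~3240 s
print(accurate, inflated)
```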

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 3055456535
RAC: 2163341

Estimated app speed for S6LV

You may find that other projects you are familiar with, such as SETI, use a form of runtime estimation associated with a BOINC server component known as 'CreditNew'.

The significant aspect of this component is that your computer's real-world speed is monitored in a dynamic rolling average known as the APR - Average Processing Rate. On projects operating CreditNew, runtime estimates driven by APR eventually normalise all applications at or near a DCF of 1.00.
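A hedged sketch of such a feedback loop (CreditNew's actual averaging formula differs; the weight and all numbers here are invented): each completed task contributes its observed processing rate to a rolling average, which converges on the host's real speed regardless of where the benchmark-based starting value was.

```python
# Sketch of a dynamic rolling average like an APR (Average Processing
# Rate). Not the real CreditNew code; weight and figures are made up.

def update_apr(apr, claimed_fpops, actual_runtime, weight=0.1):
    """Blend the latest task's observed rate into the running average."""
    observed_rate = claimed_fpops / actual_runtime   # FLOPS actually delivered
    return (1 - weight) * apr + weight * observed_rate

apr = 1.0e9                  # start from a static benchmark-derived figure
for _ in range(50):          # each finished task nudges the average...
    apr = update_apr(apr, claimed_fpops=3.6e13, actual_runtime=20000)
print(round(apr / 1e9, 2))   # ...toward the real rate of 1.8 GFLOPS
```

The point of the feedback loop is exactly what the static benchmark scheme lacks: a wrong initial guess is corrected by real completed tasks instead of persisting forever.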

Einstein has not yet adopted CreditNew, as you can tell by the absence of an 'Application Details' link (leading to a display of current APR, among other metrics) on your Host Details page.

Bernd or Heinz ('bikeman') will be able to elucidate further, but I believe Einstein uses a static average speed derived from the benchmark speed of the host and a generalised efficiency average for the class of processor in use. It's good, but without the feedback loop of the dynamic average, it can't be as good as APR (when properly used).

Horacio
Joined: 3 Oct 11
Posts: 205
Credit: 80557243
RAC: 0

RE: I believe Einstein uses

Quote:
I believe Einstein uses a static average speed derived from the benchmark speed of the host and a generalised efficiency average for the class of processor in use. It's good, but without the feedback loop of the dynamic average, it can't be as good as APR (when properly used).


Well, that makes sense and explains why it doesn't get adjusted...

I didn't know that APR was part of CreditNew... (APR seems to be such a good idea that there is no way to imagine the association.)
So, if the project needs CreditNew to get better estimations, then forget it! Who wants good estimations? LOL

angler
Joined: 17 Dec 05
Posts: 8
Credit: 4810035
RAC: 0

not to hijack the subject

Not to hijack the subject, but speed on Linux 64 appears to be very slow, at least for an AMD Athlon 64:
http://einsteinathome.org/task/288825696

Only 8% crunched after 8 hrs - a similar (and slower) processor running on Win x32 gets it done in 10-13 hrs. Are the math libs not taking advantage of SSE2 on these processors? I think they should support SSE2 at least.

my Linux AMD host
http://einsteinathome.org/host/1315264

compared to AMD laptop Win 32
http://einsteinathome.org/host/660301

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 834485346
RAC: 1111729

RE: only 8% crunched after

Quote:
only 8% crunched after 8hrs

This is definitely not normal. You should check whether your host is actually running at the stock CPU frequency, e.g.

cat /proc/cpuinfo

The app supports SSE2, but since almost all calculations are done in single precision, not double precision, it gets most of its performance from hand-coded SSE assembly code.

Cheers
HB

angler
Joined: 17 Dec 05
Posts: 8
Credit: 4810035
RAC: 0

have checked, yes cpu does

I have checked: yes, the CPU does occasionally throttle down to 1000 MHz, but it generally runs at its rated 1800-2400 MHz.

processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 4
model name : AMD Athlon(tm) 64 Processor 3300+
stepping : 10
cpu MHz : 1800.000
cache size : 256 KB
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext lm 3dnowext 3dnow up rep_good
bogomips : 2010.00
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management: ts fid vid ttp

I noticed similar behavior in LHC (suspected to be due to the ifort libs); not sure if that's the case here:

http://lhcathomeclassic.cern.ch/sixtrack/forum_thread.php?id=3370&nowrap=true#23967

thanks for responding
cheers
-john

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 834485346
RAC: 1111729

Different distribution

Different distribution versions of Linux had different rules for throttling down the CPU when power saving features were enabled. For some, the CPU frequency would stay low even when all cores are busy doing BOINC CPU tasks, because those tasks are running with maximum 'niceness' (= low priority). For maximum performance, you will want Linux to allow full CPU frequency even for 'nice' tasks.

See the discussion here: http://boinc.berkeley.edu/dev/forum_thread.php?id=5771
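As a small illustration of how to inspect this on your own box: on many kernels the relevant knob for the 'ondemand' governor is the `ignore_nice_load` tunable, though the exact sysfs path varies by kernel version, so treat the path below as an assumption.

```python
# Sketch: check whether the 'ondemand' cpufreq governor ignores nice'd
# (low-priority) load such as BOINC tasks. Path is an assumption and
# varies across kernel versions.

from pathlib import Path

TUNABLE = Path("/sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load")

def nice_is_ignored(raw: str) -> bool:
    """Interpret the tunable's contents: '1' = nice'd load is ignored."""
    return raw.strip() == "1"

if TUNABLE.exists():
    if nice_is_ignored(TUNABLE.read_text()):
        print("nice'd tasks (BOINC) won't raise the clock; "
              "write 0 to the tunable (as root) for full speed")
    else:
        print("nice'd load already counts; the governor should clock up")
else:
    print("ondemand tunable not found at this path on this kernel")
```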

The only other reason for throttling down even under BOINC load would be thermal throttling: the cooling is bad and the CPU is throttled down to prevent overheating (check the heat sink for dust blocking the airflow).

Cheers
HB
