Hello, I have a question. It has probably been some
months since I last ran the project.
But before that there was a period when I would get
regular crashes when trying to run Einstein@home.
At first I thought it didn't play well with my server,
which runs a task on every core, meaning I ran about 48
at the same time. But I tried it on my PC, and it kept
crashing there as well.
The Einstein tasks might be longer and more sensitive
than those of some other projects, but I don't think
it's reasonable that it would run both of my computers
into the ground.
Was this a known issue at the time that has since been
addressed? Was it just temporary? Or do you think it was
caused by something else? If I picked up Einstein@home
again, would the crashes likely return?
Lots of us run Einstein without frequent crashes. Something must have been a bit different about your systems to make them not work well with it.
There is more than one application running here. I suspect quite a bit of the code in at least some of them has been around for a while, and some applications may share code with the versions your systems did not behave well with, while others may not.
I suggest you try and see, and perhaps consider restricting application types to one at a time so you can experiment to see whether your systems behave well with one and not with another.
Hi Tobben,
Maybe this will help: first, limit the number of cores in BOINC Manager.
BOINC Manager / Computing preferences / processor usage -> "On multiprocessor systems, use at most 95% of the processors" (with 48 cores; about 87% for 8 cores).
Please also tick:
"Compute while computer is in use"
"While processor usage is less than 0 percent" (0 = no restriction)
Have a nice day,
Pollux
You should use your preferences to limit the number of cores that BOINC can use. Why don't you set 25% to start off with which would allow only 12 simultaneous tasks. If you get no errors with that level you could try increasing the percentage slowly to see how it goes.
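If you would rather not use the web or Manager preferences, the same limit can be set locally by placing a global_prefs_override.xml file in the BOINC data directory (a minimal sketch, assuming a 25% cap; the other elements shown are standard BOINC preference tags, but check your client's documentation for your version):

```xml
<!-- global_prefs_override.xml, in the BOINC data directory.
     Overrides the web-based computing preferences on this host only. -->
<global_preferences>
  <!-- Use at most 25% of the processors: 12 of 48 cores -->
  <max_ncpus_pct>25.0</max_ncpus_pct>
  <!-- Keep computing while the machine is in use -->
  <run_if_user_active>1</run_if_user_active>
  <!-- 0 = never suspend based on non-BOINC CPU usage -->
  <suspend_cpu_usage>0</suspend_cpu_usage>
</global_preferences>
```

After saving the file, select "Read local prefs file" in BOINC Manager (Advanced view) so the client picks up the override without a restart.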
The PC you used was an i7 with a Tahiti series GPU. If you take a look at the top 20 hosts on this project, many of them are i7s with Tahiti series GPUs - often multiple GPUs. If you drill down into the tasks lists of many of these, you will find that a lot of them crunch on the GPU only. They don't have CPU tasks. Of course, their CPUs could be attached to other projects :-). I suspect, in order to maximise GPU production, the CPUs are deliberately not being used for crunching.
The reasoning behind this is that GPUs are far more productive than CPUs so it's more efficient to run multiple GPU tasks and keep the CPUs relatively free of load so that they can give maximum support to the GPUs. If you run CPU tasks as well, you risk slowing down the GPUs and perhaps causing system instability with all the heat being produced.
I have lots of hosts with GPUs and I have a somewhat different philosophy. I'm keen to support the CPU searches as well as the GPU ones. I'm willing to sacrifice a small amount of GPU efficiency in order to do CPU tasks as well. My thinking is that a single mid-range GPU can still be quite efficient even when some CPU cores are running CPU tasks. On AMD GPUs, the default resource allocation is 0.5 CPUs + 1.0 GPUs for a GPU task. If you run 2 GPU tasks concurrently, this will automatically reserve a full CPU core. So I run 4 concurrent GPU tasks and BOINC reserves two CPU cores for GPU support. With an i3 host (dual core, 4 threads) and a 7850 GPU, I always have 6 tasks crunching, 4 GPU tasks and 2 CPU tasks. Currently this machine has a RAC of over 78K which is pretty good for a GPU that cost me around $135. I have quite a few like this running for nearly a year now and they don't seem to produce any errors. I would think your i7 could be very productive running 4 or 6 GPU tasks concurrently and perhaps 4 CPU tasks, leaving 4 free threads for GPU support.
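The concurrency described above (4 GPU tasks, 0.5 CPUs reserved each) is typically arranged with an app_config.xml file in the project's directory. A sketch under stated assumptions: the `<name>` element below is a placeholder, not necessarily the application running on your host; the real name appears in client_state.xml or the task properties.

```xml
<!-- app_config.xml, placed in the Einstein@home project directory.
     The application <name> is an assumption; look up the actual
     name in client_state.xml on your own host. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP5</name>
    <gpu_versions>
      <!-- 0.25 GPU per task: 4 tasks run concurrently on one GPU -->
      <gpu_usage>0.25</gpu_usage>
      <!-- 0.5 CPU reserved per GPU task: 2 cores for 4 tasks -->
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

The client rereads this file via "Read config files" in BOINC Manager; already-running tasks keep their old allocation until they finish.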
Other than the fact that tasks are compute intensive, there is no evidence to suggest that tasks themselves cause crashes in a properly configured and maintained system. It's up to participants to use their preference settings to restrict the load that BOINC puts on their systems. I notice your PC has a 'K' processor. Were you overclocking it when it was failing?
Cheers,
Gary.
Especially if you were overclocking (but possibly even otherwise), it could be as simple as this: by bad luck, the Einstein code in conjunction with the Einstein data exercised speed paths in your part(s) that were slower than the slowest speed paths exercised by your other work.
There is a popular misconception that there is such a thing as a completely thorough speed test--either in possession of the manufacturer binning the parts in the first place, or even under the fingers of an overclocking individual running Prime or such.
T'ain't so. There are pretty good tests, but there are no perfect ones. And just to make life more complicated, every single billion-transistor CPU produced has multiple sub-critical defects which allow the part to function, but mean that the speed path they affect is slower relative to all others than on a part not so cursed. Usually this matters not a whit, as the afflicted paths are either still faster than they need to be to meet spec, or just faster than something else that gets routinely exercised, so don't matter. But...
We get a regular stream of posts from people who have fiddled with overclocking on some other piece of code until their CPU is tweaked up to the very edge of failure, then come here and declare there to be something wrong with Einstein code since it won't run as fast on their CPU as their chosen tweak-bait. So long as the Einstein code is legal, this is just false.
Of course none of this may in fact have anything to do with your case, Tobben, I just thought it might be worth a short trip around the "some things work better than others" track.
Thanks for all the quick replies. Considering that the Einstein@home
tasks seem pretty heavy, I have refrained from any overclocking,
or more specifically, from running the project while overclocked.
I have tried numerous projects to find the ones that play well
with my server; Asteroids@home is one, but there are certain things
I find a bit questionable about it. Einstein was another one, that was
until I started crashing. It should be fine to run on all cores,
but I guess I could try leaving at least 2-4 free and see if that does
anything. I find that really strange, though, as it was also crashing
on my stock i7. Both have run just fine on other projects at 100% load,
and at the time of testing my i7 I wasn't running the GPU at the same time.
I know that GPUs are a lot more capable than CPUs at certain tasks,
but CPUs still have some favorable qualities? Are there any projects
that value CPU crunching power more so than others?
This project values CPU crunching. It primarily exists to crunch LIGO data in the search for gravitational waves. The other search that is CPU-only at the moment is the gamma-ray pulsar search, which uses data from the Fermi space telescope.
With Advanced LIGO coming on stream in the not too distant future and providing much more sensitive data, a very elusive goal (the direct detection of gravitational waves) may at last become a reality. Something very worthwhile to 'be around' for!! :-).
Cheers,
Gary.