Can't run more than 10 WUs at once.
I don't know that BOINC stops creating slots at 10 - I've actually seen one of my hosts (some time ago) with >250 concurrently occupied slots.
It was a quad core host with a GTX650Ti running GPU tasks 2x and CPU tasks on all 4 cores - so normally just 6 slots. For some unknown reason (overnight), BOINC decided to go into panic mode and so running tasks were preempted and new ones started which were then subsequently preempted after a variable amount of time. Eventually, newly started tasks were being preempted every minute or two.
In the morning when I discovered the situation, I think there were actually 268 slot dirs from memory and there were still tasks being crunched (rather slowly if I recall correctly). I remember suspending the vast bulk of tasks and then resuming a few at a time which seemed to allow those few to complete fairly normally. As tasks completed, I would resume a few more and over a period of a couple of days, the whole mess got cleaned up and recovered.
So I don't think it's BOINC refusing to allocate new slots. Maybe it's something in the science app that puts an upper limit on the number of simultaneous instances of the app. If you have 10 running, what happens if you suspend one of them? Will an 11th slot be created and populated with a running instance? If you then resume the suspended one what happens?
Cheers,
Gary.
so, my machine is 4 core i5
So, my machine is a 4-core i5 with two ATI 7970s.
If I set 5 WUs per GPU, then each GPU takes 5 WUs.
If I set 8 WUs per GPU, then the 1st GPU takes 8 and the 2nd takes only 2.
If I set 10 WUs per GPU, then the 1st takes 10 and the 2nd remains idle.
I don't know where the problem is or how to fix it.
RE: so, my machine is 4
How are you doing that?
Claggy
Do you have 20 in your
Do you have <max_concurrent>20</max_concurrent> in your app_config.xml?
RE: Do you have 20 in your
No. Is that the reason?
by changing GPU utilization
By changing the GPU utilization factor of BRP in the Einstein@Home preferences.
RE: RE: Do you have 20 in
O.K. then, do you have <max_concurrent>10</max_concurrent> in your app_config.xml?
Claggy
RE: by changing GPU
Did you have enough workunits in place at the time to run 20 tasks on your GPUs? You don't have any in-progress tasks on your dual ATI GPU host at the moment, and the last 12 were aborted.
Claggy
i'm far away from my
I'm far away from my computer, and it is turned off. I will try to play with the max_concurrent option tomorrow.
Thank you!
RE: RE: Do you have 20 in
If you have it set to 10 instead of 20, then yes. You can also omit that line for no limit.
This limits the number of simultaneous tasks of that application on the entire host, not per single GPU.
For a per-GPU limit, <gpu_usage> is used.
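To illustrate the difference, here is a minimal app_config.xml sketch. The app name "einsteinbinary_BRP4" and the numeric values are assumptions for illustration only; check client_state.xml (or the project's documentation) for the actual short name of the BRP application on your host.

```xml
<!-- Hypothetical sketch: app_config.xml placed in the Einstein@Home project directory.
     The app name below is an assumption; verify it against your client_state.xml. -->
<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <!-- Host-wide cap: at most 20 tasks of this app running at once.
         Omit this line for no limit. This does NOT balance tasks between GPUs. -->
    <max_concurrent>20</max_concurrent>
    <gpu_versions>
      <!-- Per-task GPU share: 0.1 GPUs per task allows up to 10 tasks per GPU. -->
      <gpu_usage>0.1</gpu_usage>
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After editing the file, the client needs to re-read the configuration (restart BOINC or use "Read config files" in the Manager) for the new limits to take effect.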