Not getting S6Bucket tasks
Some scheduler/task selection weirdness is going on. My settings were:
Binary Radio Pulsar Search (Arecibo): yes
Gravitational Wave S6 GC search: yes
Gravitational Wave S6 LineVeto search: no
Gamma-ray pulsar search #1: no
But I had gotten a number of S6LV tasks in the last few days. S6LV wasn't on the list the last time I updated my settings, so I don't know why it ended up defaulting to off.
I updated to add S6LV to the approved list (FGRP still disabled) and triggered an update in my client, only to get a huge number of FGRP tasks.
Binary Radio Pulsar Search (Arecibo): yes
Gravitational Wave S6 GC search: yes
Gravitational Wave S6 LineVeto search: yes
Gamma-ray pulsar search #1: no
I disabled LV, enabled FGRP, and triggered an update on a 2nd computer; same result as above (got a lot of FGRP tasks).
For the moment I'm going to turn everything on to make sure I don't run out of work, but I'd like to go back to BRP4-GPU and S6* tasks only (FGRP credit rates were significantly lower on my computers when I last ran them).
see here.
BM
That explains the disappearance of S6 tasks, but not why the tasks I'm currently getting don't match my server settings:
Run only the selected applications
Binary Radio Pulsar Search (Arecibo): yes
Gravitational Wave S6 GC search: yes
Gravitational Wave S6 LineVeto search: yes
Gamma-ray pulsar search #1: no
Earlier today I aborted a number of the FGRP tasks, only to have my queue refill with them despite their being deselected on my settings page.
You also need to turn off the pref that allows 'other apps' to get work if your 'selected apps' can't get work. There must be a bit of a GW task 'shortage' of some sort, because I've noticed exactly the same behaviour - getting lots of FGRP tasks even though FGRP is not ticked.
The solution was to untick the 'use other apps' pref as well. I'm noticing that there are often a few retries (no GW work available) before new GW tasks are actually snagged. Once the supply becomes more uniform, you could re-select the 'other apps' checkbox if you wanted it as backup protection in the event of a long-term shortage of GW tasks.
Cheers,
Gary.
Hm - this must be a (currently unintended) side effect of the update of the project preferences page that was necessary for the "gpu utilization factor".
GW tasks were sent out rather irregularly in past days, mainly related to the transition from S6Bucket to S6LV1. But there shouldn't be any shortage anymore; at least I can't see any problems ahead.
BM
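The "gpu utilization factor" mentioned above is a project preference that tells the client what fraction of a GPU each task occupies, so a factor of 0.5 lets two tasks share one card. A minimal sketch of that arithmetic (the function name and validation here are illustrative, not BOINC's actual API):

```python
# Hedged sketch of the gpu utilization factor arithmetic.
# Each task claims `utilization_factor` of a GPU; the number of
# tasks that fit is just the inverse, rounded down.

def concurrent_gpu_tasks(utilization_factor, num_gpus=1):
    """How many GPU tasks can run at once for a given factor."""
    if not 0 < utilization_factor <= 1:
        raise ValueError("factor must be in (0, 1]")
    return int(num_gpus / utilization_factor)

print(concurrent_gpu_tasks(1.0))  # 1 task per GPU (the default)
print(concurrent_gpu_tasks(0.5))  # 2 tasks share the GPU
```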
OK. I thought that only kicked in if my queue was empty, similar to the backup project feature.
No, your cache doesn't have to be exhausted - it can actually be just about full. If BOINC makes a request for even 1 second of work and a task cannot be immediately supplied from your 'ticked' list, it will get a task from your 'unticked' list (i.e. FGRP) if the 'other apps' box is ticked. I've watched that happen on a Win 7 host with a GTX 550Ti card that is using the new mechanism for running two GPU tasks simultaneously.
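The fallback behaviour described above can be sketched roughly like this (this is illustrative pseudologic, not the actual BOINC server scheduler code; all names are made up for the example):

```python
# Illustrative sketch of the 'use other applications' fallback:
# if no task from the selected apps is available, a task from an
# unselected app may be sent instead.

def pick_task(available, selected, allow_other_apps):
    """Return an app name to send work for, or None.

    available: dict mapping app name -> tasks ready to send
    selected:  set of app names the user has ticked
    allow_other_apps: the 'use other applications' preference
    """
    # First try the apps the user actually selected.
    for app, count in available.items():
        if app in selected and count > 0:
            return app
    # Selected apps are dry: fall back to anything else if allowed.
    if allow_other_apps:
        for app, count in available.items():
            if count > 0:
                return app
    return None  # "no tasks available for the chosen apps"

# With a GW shortage and the fallback enabled, FGRP gets sent:
supply = {"S6LV1": 0, "FGRP": 500}
print(pick_task(supply, {"S6LV1"}, True))   # FGRP
print(pick_task(supply, {"S6LV1"}, False))  # None
```

This is why a nearly full cache doesn't protect you: any work request at all, however small, can be satisfied from the 'unticked' list while the fallback preference is on.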
That host (Q9400 CPU) used to allow all apps, but its efficiency was severely lowered with FGRP tasks - particularly if all available cores (3 - one is disabled) were doing FGRP tasks. Performance improves a bit if there is a mix of GW and FGRP, and it's even better if there are 3 GW tasks. So when I started running 2 CUDA tasks, I unticked FGRP and was dismayed to see it keep getting FGRP. So I unticked the other apps box, and that started giving the error message about "no tasks available for the chosen apps". I wasn't worried as my cache was almost full, so I just decided to watch what happened. There were several of these 'no tasks available' responses over a period of 5-10 minutes or so, and then on a subsequent request I was allocated a LV1 task. That was just before I wrote my previous response to you.
Earlier today, I saw another work request on that host which also resulted in a 'no tasks available' response but, 1 minute later, the next request got the desired LV1 task. I'm not surprised at this - I've noticed this sort of pattern many times before. So if (like me) you don't want to risk getting ANY FGRP tasks on a particular host, you probably need to leave the 'other apps' box unticked. I'm sure I would have gotten an FGRP task if that setting was ticked. Of course, if you have other hosts where you do want a mix of apps including FGRP, you need to separate them into different venues. So far, I can manage with just the 4 available venues to cover the various crunching scenarios, but I'm sure it will become a problem in future.
Cheers,
Gary.