Gandolph1 wrote:
Rosetta app_config.xml:
<app_config>
<app>
<name>rosetta</name>
<max_concurrent>8</max_concurrent>
</app>
</app_config>
You don't need the <name> line in this one, as long as the file is in the right project directory, i.e.:
<app_config>
<project_max_concurrent>12</project_max_concurrent>
</app_config>
I have it in most BOINC project folders and change it depending on how many other projects I'm trying to run at the same time. It also lets me crank up the resource share so I get tasks on a regular basis; I just need to keep the cache sizes low enough, and throttle the number of CPU cores allowed for each project, so I don't get too much work.
The one problem with this one is that it affects all types of tasks for a project, not a specific set of them the way your first app_config file did.
I like the way I am running right now: I set up a simple batch file to swap in the appropriate running conditions based on work availability. If there is no Rosetta work I simply run my NoRosetta.bat file, which changes the app_config files to maximize my CPU usage for Einstein; if Rosetta comes up with more CPU-related work I run my Rosetta.bat file, which changes the app_config to maximize Rosetta CPU tasks, and so on. Of course I have to look at the machine, run the file, and have the client read the configs again; the next step would be to write a program to do it for me. (Probably not going there, at least not for a while.)
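For illustration only, here is a minimal Python sketch of what that "program to do it for me" could look like. The data-directory path, the project folder names and the pre-written app_config variants (app_config.rosetta-max.xml and friends) are assumptions, not anything posted in this thread; it just copies the chosen variant over the live app_config.xml and then asks the client to re-read its config files.

#!/usr/bin/env python3
# Sketch of a config-swapping helper. Adjust PROJECTS and the folder/file
# names for your own install; Windows installs usually live under
# C:\ProgramData\BOINC\projects instead.
import shutil
import subprocess
import sys
from pathlib import Path

PROJECTS = Path("/var/lib/boinc-client/projects")

# Which pre-made variant becomes the live app_config.xml in each project
# folder for each mode (all file names here are hypothetical).
VARIANTS = {
    "rosetta": {
        "boinc.bakerlab.org_rosetta": "app_config.rosetta-max.xml",
        "einstein.phys.uwm.edu": "app_config.einstein-min.xml",
    },
    "norosetta": {
        "boinc.bakerlab.org_rosetta": "app_config.rosetta-min.xml",
        "einstein.phys.uwm.edu": "app_config.einstein-max.xml",
    },
}

def swap(mode: str) -> None:
    for project_dir, variant in VARIANTS[mode].items():
        src = PROJECTS / project_dir / variant
        dst = PROJECTS / project_dir / "app_config.xml"
        shutil.copyfile(src, dst)          # overwrite the live config
        print(f"{project_dir}: now using {variant}")
    # Ask the running client to re-read its config files; recent clients
    # re-read app_config.xml files on this command as well.
    subprocess.run(["boinccmd", "--read_cc_config"], check=True)

if __name__ == "__main__":
    swap(sys.argv[1] if len(sys.argv) > 1 else "norosetta")

Run it as "python swap_configs.py rosetta" or "python swap_configs.py norosetta"; having it decide the mode itself, for example by parsing the output of boinccmd --get_tasks for pending Rosetta work, would be the obvious next refinement.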
OK - so these seem to be working quite well for my 3080 Ti system:
Einstein:
<app_config>
<project_max_concurrent>5</project_max_concurrent>
<app>
<name>hsgamma_FGRPB1G</name>
<max_concurrent>3</max_concurrent>
<gpu_versions>
<gpu_usage>.33</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
</app_config>
Rosetta:
<app_config>
<project_max_concurrent>9</project_max_concurrent>
<app>
<name>rosetta_python_projects</name>
<max_concurrent>8</max_concurrent>
</app>
</app_config>
Keith Myers wrote:
Gandolph1 wrote:
Forgot to add, it sure would be nice if the developers would add the ability to stagger the startup of GPU tasks when running more than one task per GPU!
Our developers did that for our Seti special app using a mutex lock. You could stagger the startup of one task when running multiples on a card. The task would use a mutex lock and preload the task on the card but not process it until the previous co-adjacent task finished. Since the task times varied, it allowed the tasks to run simultaneously and each appear to have all of the card's resources to itself. Good throughput.
With the development of the AIO version of the EAH optimizations, I'm heading in that direction:
1) load a task and initialize it.
2) do GPU and CPU memory allocations.
3) Get a mutex or another 'license to use the specific GPU this task is assigned to' (a rough sketch of this gating idea follows the list). A mutex was all right when running two at a time: one task waiting to begin GPU processing while the other does the pre- and post-processing parts of its work unit. One WU was enough to push CPU usage to the top.
4) Do the processing on the GPU asynchronously: send (to GPU), process, fetch (from GPU), with multiple phases of the job executing simultaneously using multiple GPU queues and a set of work buffers for each queue.
5) Release the GPU (mutex etc.) to other tasks.
6) Report result back to project.
7) Free the memory allocations on the GPU and CPU. (Why free the GPU memory so late? There is plenty of memory left for other processes, and both allocating and freeing GPU memory is time consuming, so just release it when there is nothing else left to do.)
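As a rough illustration of the "license to use that GPU" in steps 3 and 5, here is a toy Python sketch using an advisory file lock. It is not the project's actual mechanism: the lock path is made up, the real applications use their own mutex/semaphore, and a sleep stands in for the GPU work.

# Toy illustration of steps 3 and 5: a cross-process "license to use this
# GPU" done with an advisory file lock (Linux-only).
import fcntl
import os
import time

LOCK_PATH = "/tmp/gpu0.lock"    # one lock file per GPU (hypothetical)

def run_one_task(task_id: int) -> None:
    # Steps 1-2: load the task and do CPU/GPU memory allocations (omitted).
    print(f"task {task_id}: initialised, waiting for the GPU")
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # step 3: block until the GPU is free
        print(f"task {task_id}: owns the GPU, processing")
        time.sleep(2)                      # step 4: asynchronous GPU work would go here
        fcntl.flock(lock, fcntl.LOCK_UN)   # step 5: hand the GPU to the next task
    # Steps 6-7: report the result, then free GPU and CPU memory (omitted).
    print(f"task {task_id}: post-processing and reporting done")

if __name__ == "__main__":
    run_one_task(os.getpid())

Start two copies at the same time and the second one sits at "waiting for the GPU" until the first releases the lock, which is the staggering effect being discussed.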
<app_config> <app> <name>eins
Any chance this might be a reasonable starting point for this box?
host/13131086
Thanx, Skip
PS: And thanx for helping me get this dog runnin' in the OpenCL thread.
Skip Da Shu
Hi. Where did you get that app name?
Richie wrote:
Hi. Where did you get that app name?
You can look here; it's at the bottom of most pages:
https://einsteinathome.org/apps.php
Yes, but I haven't seen any app specifically with "O3ASE". Can't find an app like that here either:
https://einsteinathome.org/apps.php?xml=1
Ian&Steve C. wrote:
O3ASE was the name of the previous Gravitational Wave O3 All-Sky GPU tasks; the "E" stood for "Engineering", I believe. Those were an earlier batch, and I think the app was later renamed to just O3AS when it came out of the beta channel. They are from the summer of 2021, based on a quick Google search.
Right now we are doing the O3MDF tasks, where MD = Multi-Directional and F is a designation for GPU tasks, while "1" denotes the CPU tasks (O3MD1).
Okay, I see. Thanks for that information!