An Einstein Schizoid Embolism?

mikey
Joined: 22 Jan 05
Posts: 12,689
Credit: 1,839,094,599
RAC: 3,726

Gandolph1 wrote:

Rosetta "app_config.xml":

<app_config>
    <app>
        <name>rosetta</name>
        <max_concurrent>8</max_concurrent>
    </app>
</app_config>

You don't need the name line in this one, as long as it's in the right project directory, i.e.:

<app_config>
<project_max_concurrent>12</project_max_concurrent>
</app_config>

I have it in most of my BOINC project folders and change it depending on how many other projects I'm trying to run at the same time. It also lets me crank up the resource share so I get tasks on a regular basis; I just need to keep the cache sizes low enough that I don't get too much work, by throttling the number of CPU cores allowed for each project.

The one problem this one has is that it affects all types of tasks for a project, not a specific set of them like your first app_config file did.
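To make the difference concrete, here is a hypothetical app_config.xml combining both throttles: the project-wide cap limits everything the project runs, while the per-app cap only bites for the one named app (the app name is borrowed from the Rosetta example above):

```xml
<app_config>
    <!-- caps all running tasks from this project, across every app -->
    <project_max_concurrent>12</project_max_concurrent>
    <app>
        <!-- tighter cap that applies only to this one app -->
        <name>rosetta</name>
        <max_concurrent>8</max_concurrent>
    </app>
</app_config>
```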

Gandolph1
Joined: 20 Feb 05
Posts: 180
Credit: 389,649,451
RAC: 1,032


I like the way I am running right now. I set up a simple batch file to swap in the appropriate running conditions based on work availability. If there is no Rosetta work I simply run my NoRosetta.bat file and it changes the app_config files to maximize my CPU usage for Einstein; if Rosetta comes up with more CPU-related work, I run my Rosetta.bat file and it changes the app_config to maximize Rosetta CPU tasks, etc. Of course I have to look at the machine, run the file, and re-read the configs; the next step would be to write a program to do it for me. (Probably not going there, at least not for a while.)
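For what it's worth, the swap can be scripted. Here is a minimal sketch of the same idea as a Linux shell function (the poster uses Windows .bat files; the paths, file names, and pre-written config variants here are all assumptions, not his actual setup):

```shell
#!/bin/sh
# Hypothetical config-swap helper. Keep pre-written app_config variants
# (app_config.max.xml / app_config.min.xml -- invented names) next to the
# live app_config.xml, copy the right one in, then ask the BOINC client
# to re-read its configuration files.

BOINC_DIR=${BOINC_DIR:-/var/lib/boinc-client}

swap_profile() {
  einstein="$BOINC_DIR/projects/einstein.phys.uwm.edu"
  rosetta="$BOINC_DIR/projects/boinc.bakerlab.org_rosetta"
  case "$1" in
    rosetta)    # Rosetta has CPU work: maximize Rosetta, throttle Einstein
      cp "$rosetta/app_config.max.xml"  "$rosetta/app_config.xml"
      cp "$einstein/app_config.min.xml" "$einstein/app_config.xml"
      ;;
    norosetta)  # no Rosetta work: hand the CPU cores to Einstein
      cp "$einstein/app_config.max.xml" "$einstein/app_config.xml"
      ;;
    *)
      echo "usage: swap_profile rosetta|norosetta" >&2
      return 1
      ;;
  esac
  # --read_cc_config also re-reads app_config.xml files, so no restart needed
  command -v boinccmd >/dev/null 2>&1 && boinccmd --read_cc_config
  return 0
}
```

Run `swap_profile norosetta` when the Rosetta feed dries up, `swap_profile rosetta` when work comes back; the `boinccmd --read_cc_config` call makes the client pick up the new limits without restarting.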


Gandolph1
Joined: 20 Feb 05
Posts: 180
Credit: 389,649,451
RAC: 1,032


OK, so these seem to be working quite well for my 3080 Ti system:

Einstein:

<app_config>
    <project_max_concurrent>5</project_max_concurrent>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <max_concurrent>3</max_concurrent>
        <gpu_versions>
            <gpu_usage>.33</gpu_usage>
            <cpu_usage>1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

Rosetta:

<app_config>
    <project_max_concurrent>9</project_max_concurrent>
    <app>
        <name>rosetta_python_projects</name>
        <max_concurrent>8</max_concurrent>
    </app>
</app_config>


petri33
Joined: 4 Mar 20
Posts: 123
Credit: 4,051,475,819
RAC: 6,969,697


Keith Myers wrote:

Gandolph1 wrote:

Forgot to add: it sure would be nice if the developers would add the ability to stagger the startup of GPU tasks when running more than one task per GPU!

Our developers did that for our Seti special app using a mutex lock. You could stagger the startup of one task when running multiples on a card. The task would take a mutex lock and preload itself on the card, but not process until the previous co-adjacent task finished. Since the task times varied, this let the tasks run simultaneously while each appeared to have all of the card's resources to itself. Good throughput.

With the development of the AIO version of the EAH optimizations, I'm heading in that direction:

1) load a task and initialize it.

2) do GPU and CPU memory allocations.

3) Get a mutex or another 'license to use the specific GPU that this task is assigned to'. A mutex was fine when running two at a time: one task waiting to begin GPU processing while the other did the pre- and post-work-unit tasks. One WU was enough to make the CPU usage hit the top.

4) Do the processing on the GPU:

    Asynchronous

  • send (to GPU)
  • process
  • fetch (from GPU)

with multiple phases of the job executing simultaneously, using multiple GPU queues and a set of work buffers for each queue.

5) Release the GPU (mutex etc.) to other tasks.

6) Report result back to project.

7) Free the memory allocations on the GPU and CPU. (Why free GPU memory so late? Because there is plenty of memory left to run other processes, and both allocating and freeing GPU memory is time-consuming, so just release it when there is nothing else to do.)
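The lock in step 3 can be mimicked outside the app as well. Here is a hypothetical sketch of the same steps as a shell wrapper, using an advisory file lock (flock) as the per-GPU "license"; the stub functions are placeholders for the real work, not the actual Seti/EAH code:

```shell
#!/bin/sh
# Hypothetical mutex scheme from steps 1-7 above: every task may load and
# initialize freely, but only one at a time holds the per-GPU lock while
# it actually computes. Stubs below just echo what the real app would do.

preload_workunit() { echo "init + host/GPU allocations"; }  # steps 1-2
report_result()    { echo "uploading result"; }             # step 6
cleanup()          { echo "freeing GPU/CPU memory"; }       # step 7

run_task() {
  gpu=$1; shift
  lock="/tmp/gpu${gpu}.lock"
  preload_workunit        # done before taking the lock, so the next task
                          # is staged and ready the moment the GPU frees up
  (
    flock 9               # step 3: block until this GPU is free
    "$@"                  # step 4: the GPU compute phase, exclusive
  ) 9>"$lock"             # step 5: fd closes => lock released automatically
  report_result           # step 6 happens outside the lock
  cleanup                 # step 7: free memory last; it's slow anyway
}
```

Two instances started in parallel (e.g. `run_task 0 some_compute_cmd &`) would overlap their preload phases but serialize the compute phase, which is the staggering effect described above.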

Skip Da Shu
Joined: 18 Jan 05
Posts: 152
Credit: 1,043,529,300
RAC: 712,851


<app_config>
    <app>
        <name>einstein_O3ASE</name>
        <gpu_versions>
            <cpu_usage>1.0</cpu_usage>
            <gpu_usage>1.0</gpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <cpu_usage>1.0</cpu_usage>
            <gpu_usage>0.5</gpu_usage>
        </gpu_versions>
    </app>
</app_config>

Any chance this might be a reasonable starting point for this box?

host/13131086

Thanx, Skip

PS: And thanx for helping me get this dog runnin' in the OpenCL thread.

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1,702,989,778
RAC: 0


Skip Da Shu wrote:

<name>einstein_O3ASE</name>

Hi. Where did you get that app name? I haven't seen that anywhere.

mikey
Joined: 22 Jan 05
Posts: 12,689
Credit: 1,839,094,599
RAC: 3,726


Richie wrote:

Skip Da Shu wrote:

<name>einstein_O3ASE</name>

Hi. Where did you get that app name? I haven't seen that anywhere.

You can look here; it's at the bottom of most pages:

https://einsteinathome.org/apps.php

Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1,702,989,778
RAC: 0


Yes, but I haven't seen any app specifically with "O3ASE". I can't find an app like that here either:

https://einsteinathome.org/apps.php?xml=1

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3,956
Credit: 46,950,752,642
RAC: 64,638,211


Richie wrote:

Yes, but I haven't seen any app specifically with "O3ASE". Can't find an app like that here either:

https://einsteinathome.org/apps.php?xml=1

O3ASE was the name for the previous Gravitational Wave O3 All-Sky GPU tasks; the "E" stood for "Engineering", I believe. Those were an earlier batch, and I think the app was later renamed to just O3AS when it came out of the beta channel. They're from the summer of 2021, based on a quick Google search.

Right now we are doing the O3MDF tasks, where "MD" stands for Multi-Directional and "F" is the designation for GPU tasks, while "1" denotes the CPU tasks (O3MD1).


Richie
Joined: 7 Mar 14
Posts: 656
Credit: 1,702,989,778
RAC: 0


Ian&Steve C. wrote:

O3ASE was the name for the previous Gravitational wave O3 All-Sky GPU tasks. ...

Okay, I see. Thanks for that information!
