The following file is definitely being used because I can now change its max_concurrent value and see an immediate effect when I tell the client to read the config file.
<app_config>
<app>
<name>hsgamma_FGRPB1G</name>
<max_concurrent>2</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>.5</cpu_usage>
</gpu_versions>
</app>
</app_config>
Still messing with it to limit CPU work..... I'll post when I get it right...
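For anyone following along, the "tell the client to read the config file" step can also be done from the command line with the boinccmd tool that ships with the client (assuming it is on your PATH); it re-reads cc_config.xml and any app_config.xml files without restarting BOINC:

```shell
# Re-read configuration files (including app_config.xml) without restarting the client
boinccmd --read_cc_config
```

The same thing is available in the Manager under Options >> Read config files.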
OK - This seems to be working perfectly. You get 2 GPU tasks at a time and 8 CPU tasks. Will tweak it later for throughput.
<app_config>
<app>
<name>hsgamma_FGRPB1G</name>
<max_concurrent>2</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>.5</cpu_usage>
</gpu_versions>
</app>
<app>
<name>hsgamma_FGRP5</name>
<max_concurrent>8</max_concurrent>
<cpu_usage>.125</cpu_usage>
</app>
</app_config>
I would just toss the <project_max_concurrent>n</project_max_concurrent> element outside of the <app></app> elements to limit total Einstein jobs, like I mentioned in my previous reply. It should give you the desired effect.
Also, I feel it's a little redundant to add the <max_concurrent> element inside your <app> element for hsgamma_FGRPB1G, since if you only have 1 GPU and gpu_usage set to 0.5, more than 2 jobs can't run anyway.
I would also highly suggest changing cpu_usage to 1.0. Changing this value does NOT change the app's actual CPU use; it only goes into BOINC's accounting logic to figure out how many resources are in use. By telling it 0.5, you're basically telling BOINC that each job uses half a core (1 total) when in reality each job uses a full core (2 total). This will make BOINC think you have one more free core than you really do, and can result in BOINC running more CPU jobs than you intend, depending on your other settings.
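Putting the advice above together, a minimal sketch of where project_max_concurrent sits (the value 4 is just a placeholder for whatever total you want, not a recommendation):

```xml
<app_config>
    <!-- Caps total concurrent jobs across ALL of this project's apps;
         note it sits OUTSIDE any <app> element. Value is illustrative. -->
    <project_max_concurrent>4</project_max_concurrent>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <gpu_usage>.5</gpu_usage>
            <cpu_usage>1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```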
_________________________________________________________________________
Gandolph1 wrote:
What is the logic in setting cpu_usage to 0.125 for the CPU app? I would just remove that line; it's not doing what you think it's doing, and it's possibly complicating things.
_________________________________________________________________________
Going to run this way for a few days. Bumped cpu_usage to 1 for GPU jobs and removed the 1/8 cpu_usage line, though it made no difference as far as I can tell. Thanks for all the help!
<app_config>
<app>
<name>hsgamma_FGRPB1G</name>
<max_concurrent>2</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
<app>
<name>hsgamma_FGRP5</name>
<max_concurrent>8</max_concurrent>
</app>
</app_config>
I was going to comment that you had a badly constructed app_config file, but I see that Ian straightened you out.
Remember that max_concurrent applies only to a single application, inside its app delimiters.
project_max_concurrent applies to the total of ALL applications and goes outside any app delimiters.
Any scientific application will use as much or as little CPU as it needs to support itself. You have no control over that; only the app developer does.
As Ian pointed out, the cpu_usage parameter is only for BOINC's internal accounting of host resources for project scheduling.
The gpu_usage parameter is the only one that can directly influence how many concurrent tasks run on a GPU.
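As a sketch: gpu_usage is the fraction of one GPU each task claims, so its reciprocal sets the per-GPU concurrency. The 0.33 here is illustrative only, not a tuning recommendation:

```xml
<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <!-- Each task claims a third of the GPU, so up to 3 run
                 concurrently per card. -->
            <gpu_usage>.33</gpu_usage>
            <cpu_usage>1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```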
And for the Einstein project, that parameter is actually redundant and not needed: the project offers direct configuration control of GPU applications on the Project Settings page, in the Other Settings section, where you can set each GPU application's task concurrency directly.
Project Preferences >> Other Settings >> "GPU Utilization"
First, apologies to the Einstein stakeholders if the title of this thread offended; it implies my problem was on the server side rather than the client side, which was not my intent. I need to work on my bad jokes a bit more.
I've got a couple of days off now and I'll try downloading the client version suggested and updating the app_config.xml files to see what I get, then post back the results.
Many good workable solutions were offered and all are appreciated.
Thanks,
Carter9304 wrote:
FWIW - Didn't bother me at all. Any time I'm able to learn something, it's all good!!
OK, here are the app_config.xml files I am using on my 3080 Ti machine for Einstein and Rosetta. I had to use the config file for Rosetta to keep it from grabbing all available CPU cores. With the current configuration I can easily tweak the number of CPU/GPU jobs I want to run based on work availability (I just change the config file and re-read it to see instant results). I am also running pre-release BOINC v7.19, which has definitely fixed the problem of downloading too much work. I am using a similar config on my 2080 Ti machine but with much-reduced CPU limits...
All working great! Would like to see a new release of BOINC with this fix in it.
3080TI PC Config
Einstein "app_config.xml";
<app_config>
<app>
<name>hsgamma_FGRPB1G</name>
<max_concurrent>2</max_concurrent>
<gpu_versions>
<gpu_usage>.5</gpu_usage>
<cpu_usage>1</cpu_usage>
</gpu_versions>
</app>
<app>
<name>hsgamma_FGRP5</name>
<max_concurrent>4</max_concurrent>
</app>
</app_config>
Rosetta "app_config.xml";
<app_config>
<app>
<name>rosetta</name>
<max_concurrent>8</max_concurrent>
</app>
</app_config>
Forgot to add, It sure would be nice if the developers would add the ability to stagger the startup of GPU tasks when running more than one task per GPU!
Gandolph1 wrote:
Our developers did that for our SETI special app using a mutex lock. You could stagger the startup of one task when running multiples on a card. The task would take a mutex lock and preload on the card, but not start processing until the previous co-adjacent task finished. Since the task times varied, this let the tasks run simultaneously while each appeared to have all of the card's resources to itself. Good throughput.