single 8970M now appears and runs (a bit faster) as GPU0 and GPU1

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5885
Credit: 119085317827
RAC: 23821139


rjs5 wrote:
Reading the config files will change the behavior, but the WUs already downloaded will not show the new CPU/GPU percentage. Newly fetched WUs will show the new fraction from the app_config.xml file.

Yes, certainly, using the app_config.xml mechanism and forcing a 'reread config files' through BOINC Manager will also cause an immediate switch to what is specified in the config file.  Since the OP mentioned that he had decided to use the GPU utilization factor, I decided to show him how to get that to work immediately by triggering a work fetch, without adding any comment about app_config.xml.

Unless you need to adjust CPU resources as well, or need more than four sets of preference settings (the number of different locations, or 'venues') for different machines, I think the utilization factor is the simplest way to run concurrent GPU tasks.  It's very easy to set and then revert to default if you need to, and, more importantly, you don't have to worry about getting the config file syntax right.

With the app_config.xml mechanism, removing the file will not revert to defaults (as you might have expected).  When you first create the file, its contents get copied into the state file (client_state.xml), and there they remain even if app_config.xml disappears.  This tends to catch people out.  To go back to defaults, you have to edit app_config.xml to contain the default settings and then force another 'reread'.  In frustration, when removing the file doesn't work, some people have even resorted to 'resetting the project' to clear the settings they no longer want.  Unfortunately, that throws everything away.
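For anyone following along, a minimal app_config.xml for running two concurrent GPU tasks might look like the sketch below.  This is an illustration only: the app name shown is my assumption for the FGRPB1G gamma-ray search discussed later in the thread, so check it against the &lt;name&gt; entries in your own client_state.xml before using it.

```xml
<!-- app_config.xml — goes in the Einstein@Home project directory.
     Sketch only; verify the app name against your client_state.xml. -->
<app_config>
  <app>
    <name>hsgamma_FGRPB1G</name>  <!-- assumed name for the FGRPB1G search -->
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 = two concurrent tasks per GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- CPU budget reserved per task -->
    </gpu_versions>
  </app>
</app_config>
```

To revert, set gpu_usage back to 1.0 and force another 'reread config files'; as noted above, simply deleting the file will not do it.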

Don't get me wrong, all my machines have an app_config.xml file.  I have so many different hardware configurations that it's the best way for me, and I would be stuck without that feature.  However, for most people just wanting concurrent tasks on one or two machines, the utilization factor is probably a lot easier.

 

Cheers,
Gary.

pzajdel
Joined: 23 Mar 11
Posts: 9
Credit: 18874988
RAC: 20088


Thanks all. For now I will stick to the web interface.

It's running 2x2 after fetching new work. The BOINC client increased the predicted processing time per WU from 33 mins to 45 mins. We will see how it averages out in actual run time after several WUs. I run BOINC only in the evening, so it might take a few days.

GPU temperature went from 77°C to 80°C, but the fan is not at full speed yet, so no issue there.

I just noticed that HWiNFO64 also shows GPU Computing (Compute 0) and (Compute 1), but only Compute 1 shows 100% usage; Compute 0 stays at 0%.

Best

PZ

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5885
Credit: 119085317827
RAC: 23821139


pzajdel wrote:
... It's running 2x2 after fetching new work. The BOINC client increased the predicted processing time per WU from 33 mins to 45 mins. We will see how it averages out in actual run time after several WUs. I run BOINC only in the evening, so it might take a few days.

First of all, predicted times are completely irrelevant and should be ignored.  Base your decisions solely on actual completed times.

Your computers are hidden (the default these days), so it's difficult for people wanting to help.  Often, the best way to help is to access the website results to try to work out what is going on, and if the computer ID (or a workunit ID) isn't known, that becomes impossible.  Seeing as you posted some details over at the BOINC website which included workunit IDs, I've been able to 'cheat' and look through your completed tasks by leveraging a WUID.  I hope you don't mind.

There are now several completed FGRPB1G results (including one that has validated) that were obviously running in the 4x mode.  Whilst there is insufficient data to really judge, it appears that the crunch time is very close to double that for tasks crunched 2x.  So you might think that it makes little difference compared to when you were running just 1 task on each virtual GPU instance.

However, I notice that the machine also runs other tasks (O1OD1 and BRP4G), both of which use the CPU in some way and both of which will probably be affected when you try to run more FGRPB1G tasks.  I didn't look closely at the crunch times for those two, but I did notice that the Intel GPU times had gone from less than 600 secs to around 1100 secs for the latest ones crunched.

Maybe you are now running 2x on the Intel GPU?  You did mention, when you were having difficulty getting the utilization factor to work, that you changed the factor for "all projects".  However, if there is still only one Intel GPU task running, its crunch time is probably being adversely affected by the extra CPU load of supporting 4 concurrent FGRPB1G tasks.  The extra temperature you mention is probably coming partly from that extra CPU load as well.  In choosing your optimal settings, take into account all the searches you are running and how their crunch times change.  Make sure you use 'real' times and ignore estimates.

 

Cheers,
Gary.

pzajdel
Joined: 23 Mar 11
Posts: 9
Credit: 18874988
RAC: 20088


Hi Gary,

Yes, I managed to finish some 2x2 WUs yesterday but went to bed before they showed up as pending. They took roughly double the time (1 h 7 mins) vs 2x1 mode, so no further speed-up over the original 1x1.

Yes, the Intel GPU is also running in 1x2 mode, and the crunch time went down by about 100 secs per 2 WUs. So there is also a slight speed-up on the Intel side.

I will unblock the computers so you can have a look.

PZ
