I manually edited the client_state.xml DCF values down by around 20% and re-ran BOINC. The timings were then estimated at about 20 mins over actual, and they have decreased since, as BM sorts out its DCF estimates to approach reality:-)
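(For reference, the "client state" being edited here is client_state.xml in the BOINC data directory; each project block carries its own duration correction factor. An illustrative fragment, with unrelated fields elided and the value taken from the numbers quoted below; only edit the file with the client stopped:

<project>
    <master_url>http://einstein.phys.uwm.edu/</master_url>
    ...
    <duration_correction_factor>4845.000000</duration_correction_factor>
    ...
</project>
)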
Cliff, you don't have to manually edit the DCF values for E@H. That is the only project I run that uses them anymore; the others just default to 1. I agree the E@H estimated completion times for the Parkes data were absurd. It forced each one into Priority running. Fortunately I remembered the custom Event Log Diagnostic Flags menu item in the later BMs. I just add dcf_debug to the standard event log flags, and in very short order, after each E@H task completion report, the estimated completion times converged down to realistic estimates. All very slick. I just run it as standard now along with the normal flags. It doesn't cause any overhead or excess verbiage in the logs either. Have a look.
Cheers, Keith
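(For anyone who prefers a config file to the Event Log Diagnostic Flags menu: the same flag can be set in cc_config.xml in the BOINC data directory. A minimal sketch, to be merged with any options you already have there:

<cc_config>
    <log_flags>
        <dcf_debug>1</dcf_debug>
    </log_flags>
</cc_config>

The client picks it up on restart, or when you tell BOINC Manager to re-read the config files.)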
Hi Keith,
Obliged for the tip, dcf_debug set:-)
As it stands DCF came down from 4845 to 3413...
BM's estimates for 'any' project are daft anyway, but DCF does take the cake:-)
The way BM actually handles 'estimates' seems a bit odd: it will add x.xx to an estimate, then suddenly decide it's screwed the pooch and start deducting 3-5 seconds for every real second until the task completes..
Methinks the programming is a tad iffy:-)
Regards,
Cliff
Been there, Done that, Still no damn T Shirt.
I confess, I've only tried once to automate my GPU overclocking on reboot, and so far have failed.
Manuel Palacios wrote:
I may look into the automation of the overclock on the memory through Nvidia Inspector, as I also tried that once, but it didn't work and I haven't looked into it since.
ExtraTerrestrial Apes wrote:
@Automating memory OC: I also tried it, but to no avail.
I tried again to automate GTX970 overclocking using Nvidia Inspector commands on startup and believe I have it working. This is on a Windows 7 system, and may not translate much at all to any Linux flavor.
In case someone else wants to try it "my way", I'll give an overview here. I started to prepare a much more detailed step-by-step, but realized it would be quite long, and possibly of no interest. So here is an overview instead; a sketch of the batch file follows the comments below.
1. Create a batch file of commands you want to run after user login.
2. Place in that file the Nvidia Inspector command lines, with parameters setting the P0 and P2 states as desired.
3. Place in that file the initial launch of boincmgr.
4. Place in that file a launch of MSI Afterburner to provide the desired fan control.
5. Disable the automatic launch feature of boincmgr.
6. Use the Windows Task Scheduler to schedule running the delayed-start batch file some tens of seconds after user login of the user under which the BOINC GPU jobs run on the host.
Comments:
1. I obtained the content of the Nvidia Inspector commands by left-clicking the Create Clocks Shortcut button, but needed to reformat it slightly to fit the Windows command line "start /d" command I was using.
2. I used the Windows command "timeout /t nn" to introduce a delay of nn seconds between commands in my delayed-startup batch file.
3. I tried but failed to find the syntax to specify NVI control of fan speed in a command line. This is why I still start up Afterburner: just to use fan speed control from it.
4. My efforts initially used right-click on the NVI Create Clocks Shortcut. The resulting scheduled tasks were placed in a subdirectory of the Scheduled Tasks library specific to NVI. On my system they did not give the desired results, and I deleted them.
5. I don't claim this method is "best", but that (through more than one reboot) it seems to work for me. I (and some others) had trouble with various other attempted methods.
6. (Lots) more details available on request.
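A minimal sketch of the delayed-start batch file referred to above. Everything here is hedged: paths, delays, and clock offsets are placeholder examples, and the actual nvidiaInspector.exe arguments should be whatever your own Create Clocks Shortcut button generates (see comment 1):

@echo off
rem Sketch only: all paths and clock offsets below are example values.
rem Give the desktop and driver time to settle after login.
timeout /t 30
rem Apply the desired clocks via Nvidia Inspector; these offset arguments are placeholders.
start "" /d "C:\Tools\NvidiaInspector" nvidiaInspector.exe -setBaseClockOffset:0,0,100 -setMemoryClockOffset:0,0,200
timeout /t 10
rem Launch MSI Afterburner for fan control (comment 3), then BOINC Manager last.
start "" /d "C:\Program Files (x86)\MSI Afterburner" MSIAfterburner.exe
timeout /t 10
start "" /d "C:\Program Files\BOINC" boincmgr.exe

For step 6, a scheduled task along the lines of schtasks /create /tn "GPU delayed start" /tr "C:\Scripts\delayed_start.bat" /sc onlogon (names and paths again just examples) will run it at logon; the timeout at the top of the batch file then supplies the "tens of seconds" delay.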
Thanks for the tip. I also tried the NVI batch file commands but they didn't work. Looks like you figured out the formula. Right now I just stop BM and manually adjust the memory speed. The only thing that would speed up my process is to not have BM start automatically, only to have to stop it after the initial bootup just to adjust NVI. It doesn't take all that much time, and it also allows me to inspect the logfile for anything out of the ordinary before it grows too big. Thanks for adding some good information to the thread.
Cheers, Keith
Hey archae,
So I've been running my GTX970s at a 0.33 GPU utilization factor (3x, three tasks per GPU) for long enough now, and I do believe that the ~163,000 credits is the maximum I am going to get out of it. I'm now going to try running 0.50 (2x) and see how that affects my credit output, and compare which one is better. Since we are going to remain with the 1.52 BRP app for the foreseeable future, it will be interesting to see what happens, and I'll keep all of you updated. My guess is that 0.33 is optimal.
At 3x and 3705 MHz memory settings my average runtimes are ~10,950 s with driver 350.12, with 3 CPU cores utilized running SoB (PrimeGrid) units and 1 left free to feed the GPUs. Thus, my theoretical output is 86,400 / 10,950 × 4,400 × 3 (3 WUs at a time) × 2 (two GTX 970s) ≈ 208,307 credits/day. Like I said, my RAC was not budging much past 163,000.
So, finally, we shall see what 2x gives for output. For this to be worthwhile I would need to average runtimes of 7,300 s, and I do not think these cards will manage that.
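(Sanity check on that figure, using the same credit arithmetic as above: at 2x each card runs one fewer task, so daily output matches the 3x case when 86,400 / t × 4,400 × 2 WUs × 2 cards = 208,307 credits/day, which solves to t = 2/3 × 10,950 s = 7,300 s.)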
Archae, do you have any more insights that could improve output from the GTX970? Other than reverting to driver 344.60, which I do remember was slightly better for crunching.
If you call me Archae, you are just calling me old; Archae86 is a reference to the original 8086, and also to old, which seems better.
So what core clock and memory clock are you running on the GTX970, and why? I'm not on the right machine to look right now, but I believe I'm running a little faster than your memory setting, and I backed down from the bleeding edge because I wanted the machine to be unquestionably stable, as it is my wife's daily driver.
Archae86, my apologies:
I'm currently running my i5 4690k at 3.8 GHz, and my 970s are running at a 1400 MHz shader clock and 3705 MHz memory clock, with 2x4 GB CAS9 2133 MHz RAM (soon to be 4x4 GB).
I know yours had been running at around a 38xx MHz memory clock when I initially started this thread; I'm not sure if you have backed down from that since. Also, I know YMMV due to system differences, yet your 970 runs some 400-800 s faster than mine on average when measuring completion times.
All in all, it's just good fun trying to figure out how to get the most out of this system setup.
Cheers!