The "best" way is to actually do a running average over some period of time using a number of actual work units.
In other words, take the completion times of, say, 10 work units and average them. Make the change, collect times on another 10 work units, average those, and compare the two averages ...
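A minimal sketch of that comparison in Python (the work-unit times below are made-up placeholders; in practice you would pull them from your own task history):

    from statistics import mean

    # Made-up completion times (hours) for 10 work units before and
    # after a configuration change; substitute your own measurements.
    before = [4.2, 4.5, 4.1, 4.4, 4.3, 4.6, 4.2, 4.4, 4.5, 4.3]
    after  = [3.9, 4.0, 3.8, 4.1, 3.9, 4.0, 3.8, 4.0, 4.1, 3.9]

    avg_before = mean(before)
    avg_after = mean(after)

    # Positive means the change made work units finish faster.
    change_pct = (avg_before - avg_after) / avg_before * 100

    print(f"before: {avg_before:.2f} h, after: {avg_after:.2f} h, "
          f"change: {change_pct:+.1f}%")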
Though I keep saying it, there are plenty of people who still look at the benchmark numbers as if they have meaning ... we basically proved otherwise back in the Beta test. It is just that there has not been a "good" idea of what to do to replace the current system (well, I have one now, but it is not likely to see daylight anytime soon).
Hey Chief

Good to hear from you ... I'll do that, since reading some of the information from your links raised some questions concerning benchmark(ing).

Greg
Look at the performance lecture, which discusses some of the more common benchmarks ...
Probably the best is the "Spec-something" where you have to really define what your test regime was. Which of course is another problem ... because you can still "cook" the results, but you have to document the recipe ...
Oh, and the numbers are useless except when you compare identical configurations with identical configurations ... :)
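One way to keep yourself honest on both counts, documenting the recipe and only comparing like with like, is to record the machine configuration alongside every timing run. A rough Python sketch (the "avg_hours" field is just a placeholder for whatever timing you collected):

    import json
    import platform

    def config_fingerprint():
        """Record the configuration a timing run was taken on."""
        return {
            "machine": platform.machine(),
            "processor": platform.processor(),
            "system": platform.system(),
            "release": platform.release(),
        }

    def comparable(run_a, run_b):
        """Only trust a comparison when the configs match exactly."""
        return run_a["config"] == run_b["config"]

    # "avg_hours" is a placeholder for whatever timing you collected.
    run = {"avg_hours": 4.35, "config": config_fingerprint()}
    print(json.dumps(run, indent=2))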