Machines with odd statistics

Well, if your first WU completed a quorum, then you would get granted credit while having very little of whatever goes into the denominator. I don't know if that is the explanation or not.
How is it possible to have a RAC that exceeds the total credit by far?
The RAC is the "speedometer" - if you turn off the network connection for a while, then turn it on and return two results 1 second apart, where the quorums have already been met, and let's say you get 70 credits on each, you're getting 70 credits/second. That's a VERY high RAC. Worthless - but then so is RAC in general, other than "oops, my RAC fell from 700 to 400, one of my PCs must have died", or "my RAC is consistently 1000, so I'm doing twice as much as you with your RAC of 500".
You can also merge 20 computers together and (briefly) the one remaining will have a very high RAC.
Then there's the "when you shut off a computer the RAC freezes and doesn't decay" problem.
Any given "momentary" RAC is totally meaningless. The "top computers" list _should_ exclude any host whose RAC was built up over less than some arbitrary period, say two weeks, but it doesn't.
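As I understand it, RAC is an exponentially decaying average of granted credit that is only updated when credit is granted; the one-week half-life and the update rule below are my assumptions (a toy model in the spirit of BOINC's update_average), not the project's actual server code. It shows why a backlog of results reported seconds apart inflates the number:

```python
import math

HALF_LIFE = 7 * 86400.0  # assumed one-week half-life, in seconds

def update_average(avg, avg_time, now, credit):
    """Fold one credit grant into a decaying credit-per-day average.

    Sketch of a BOINC-style update_average(); the constants and the
    small-interval cutoff are assumptions, not the real server code.
    """
    diff = now - avg_time
    weight = math.exp(-diff * math.log(2) / HALF_LIFE)
    avg *= weight                      # decay the old average
    if (1.0 - weight) > 1e-6:
        avg += (1.0 - weight) * credit / (diff / 86400.0)
    else:
        # limit of the expression above as diff -> 0
        avg += math.log(2) * credit * 86400.0 / HALF_LIFE
    return avg, now

# A host with RAC 100 goes quiet for 30 days, then dumps 100 queued
# 70-credit results one second apart.  Each report adds roughly
# 70*ln(2)/7 ~= 6.9 to the daily average, with essentially no decay
# in between, so the RAC balloons far past the host's steady rate.
avg, t = 100.0, 0.0
for i in range(100):
    avg, t = update_average(avg, t, 30 * 86400.0 + i, 70.0)
print(round(avg))

# And nothing decays `avg` again until the host next reports, which
# is the "RAC freezes when you shut the computer off" problem.
```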
Bill
If I understand (and remember) correctly, there is a way for the servers to force a RAC update. It is not regularly used because of the load it puts on the servers.
Sorry Stalker, I will desist from further attempts to hijack your thread.
Thank you, Bill, that sounds totally credible to me.
Let's see if those odd stats will "normalize" over time.
Regards,
L.
Proud member of the Heise OTF-Team.
Yes, as far as I know _none_ of the projects run that update... it was intended to be a weekly process, but it brings everything to its knees. Most relational databases just aren't designed to deal well with transaction processing _and_ batch processing simultaneously. (You don't want to get me started on database design issues...)
Not hijacking at all - that would be "I can't upload to SETI". :-P Discussion of how RAC is (mis)calculated is pertinent to a RAC-question thread!
I would not count on it; #2 has been there so long I can't remember when it first appeared.
And it'll be there until somebody manually deletes it, or that script is run... it hasn't contacted the project in months, so the RAC is frozen at that ridiculous level.
When I look at those lists, I just automatically discount however many at the very top don't "look right".
I'm looking at the code to see if I trust running 'update_stats'. If it looks OK, then I will run it once by hand and then periodically via a cron script after that.
Director, Einstein@Home
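If it checks out, a daily cron entry would be something like the following; the project path and log file here are made up for illustration (`update_stats` is the stock BOINC server program, normally sitting in the project's bin/ directory):

```shell
# Hypothetical crontab line: run the stats decay once a day at 04:05.
# /home/boincadm/projects/einstein and the log path are assumed paths.
5 4 * * * cd /home/boincadm/projects/einstein && bin/update_stats >> log/update_stats.log 2>&1
```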
I think I'd run it right after a good database backup... I've been "bit" too many times.
After some testing, I have run update_stats to update the user, host and team values of recent average credit. This will now be run once per day, so that recent average credit values should decay exponentially for inactive users/hosts/teams.
Director, Einstein@Home
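The effect of that daily pass can be sketched as a pure decay applied to every stored average, assuming the same one-week half-life (a toy model, not the actual update_stats source):

```python
import math

HALF_LIFE = 7 * 86400.0  # assumed one-week half-life, in seconds

def decay_pass(rows, now):
    """Apply pure exponential decay to every (rac, rac_time) row, so
    inactive hosts/users/teams drift toward zero instead of freezing."""
    for row in rows:
        diff = now - row["rac_time"]
        if diff <= 0:
            continue
        row["rac"] *= math.exp(-diff * math.log(2) / HALF_LIFE)
        row["rac_time"] = now

hosts = [{"rac": 800.0, "rac_time": 0.0},            # last contact: 3 weeks ago
         {"rac": 500.0, "rac_time": 20 * 86400.0}]   # last contact: 1 day ago
decay_pass(hosts, 21 * 86400.0)
# After three idle weeks the first host's RAC has halved three times
# (800 -> 100); the recently active host barely moves.
```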