I looked at some of the result files; these all appear OK.
Did you change something when or shortly before that problem started? Upgraded BOINC? Changed your username?
BM
Edit: There have been versions of the validator that were confused by non-ASCII characters in comments (e.g. usernames). I updated the validator to the most recent code; if this was the problem, it should be solved now.
Edit#2: Yep, this seems to have solved the problem. I ran the validator on your results again and they were found valid. Unfortunately, for some tasks it was too late, but at least four still got credit.
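(For illustration only, a minimal sketch of the kind of over-strict ASCII check that can make a validator reject otherwise good results; hypothetical code, not the actual Einstein@Home validator:)

```cpp
#include <string>
#include <cstdio>

// Hypothetical illustration: a validator that treats any byte outside
// 7-bit ASCII in a result-file comment (e.g. a username) as corruption
// will mark otherwise good results as validate errors.
static bool comment_is_plain_ascii(const std::string& comment) {
    for (unsigned char c : comment) {
        if (c > 0x7F) return false;   // non-ASCII byte, e.g. part of a UTF-8 username
    }
    return true;
}

int main() {
    std::string ok   = "user: john";
    std::string utf8 = "user: Björn";   // 'ö' is a multi-byte UTF-8 sequence
    std::printf("%d %d\n", comment_is_plain_ascii(ok), comment_is_plain_ascii(utf8));
    // A more robust validator would either accept UTF-8 here or strip the
    // comment entirely before comparing the scientific payload of the results.
    return 0;
}
```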
Thanks for reporting!
BM
I have 2 "inconclusive"
I have 2 "inconclusive" workunits (BRP3 CUDA).
Pending tasks / My Computer info
Why?
Because the validator was called to examine a quorum (two successfully completed sets of answers for a given workunit) but found that either (a) there was no agreement, or (b) one of the answers could immediately be ruled out from further examination because it couldn't possibly be correct, or perhaps was missing or corrupt in some way. If you take a good look at the two examples you linked, you will see an example of each of these two separate cases.
These two cases can be summarised as:
* These two answers are structurally OK but they don't agree closely enough with each other so both are marked inconclusive until an extra answer from an extra host can be returned and used as a decider.
* These two answers don't even need to be checked. One of them is either corrupt, missing or clearly wrong since its contents don't conform with the expected structure that it should have. The label is 'inconclusive' for the OK task and 'validate error' for the presumably missing, corrupt or otherwise unacceptable task. The 'inconclusive' result will remain as a 'pending' until such time as an extra result is returned and a proper comparison of results is made. The 'validate error' task can be rejected immediately.
This is all fine and dandy for weeding out corrupt results coming from dodgy clients (e.g. excessive overclocking), but it is a bit tough on the participant if the 'validate error' is actually a symptom of a server-side failure - as it has been in a number of instances. The purpose of this special thread (and why it was made sticky in the first place) is to provide a "low noise" mechanism for contributing examples of potential server-side validate errors back to the Devs so that whatever caused the error (like bugs in the validator itself) can be found and fixed. As an example of a bug (now fixed), take a look at the two posts immediately prior to yours.
Your two 'inconclusives' are NOT 'validate errors'. They are really no different to any of your other 'pendings'. They are just waiting for a further result to be returned so that the validator can check for agreement (or otherwise). There is nothing inherently wrong with having the 'inconclusive' tag on your result. If you aren't sure about something it's usually best to start your own thread unless you are really certain that it's related to an existing thread.
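If you're curious what the validator is actually deciding, here is a rough, self-contained sketch of the two cases above (hypothetical code with a made-up file format and tolerance, not the real Einstein@Home validator):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch only: read a result file as a list of numbers.
// Returns false if the file is missing or structurally unusable,
// which corresponds to the immediate 'validate error' case.
static bool read_result(const std::string& path, std::vector<double>& v) {
    std::ifstream in(path);
    if (!in) return false;
    double x;
    while (in >> x) v.push_back(x);
    return !v.empty();
}

int main(int argc, char** argv) {
    if (argc != 3) { std::cerr << "usage: check <result1> <result2>\n"; return 2; }

    std::vector<double> a, b;
    bool ok_a = read_result(argv[1], a);
    bool ok_b = read_result(argv[2], b);

    // Case (b): one answer can be ruled out immediately -> 'validate error'
    // for that result, while the good one stays 'inconclusive'/'pending'.
    if (!ok_a || !ok_b) {
        std::cout << "validate error for the bad result; "
                     "the other stays inconclusive/pending\n";
        return 1;
    }

    // Case (a): both are structurally fine; check whether they agree closely
    // enough. If not, both are marked inconclusive and a third result is
    // requested as a decider.
    const double tol = 1e-6;   // made-up tolerance, for illustration only
    bool match = (a.size() == b.size());
    for (std::size_t i = 0; match && i < a.size(); ++i)
        match = std::fabs(a[i] - b[i]) <= tol * std::max(1.0, std::fabs(a[i]));

    std::cout << (match ? "quorum reached: both valid\n"
                        : "no agreement: both inconclusive, extra result needed\n");
    return 0;
}
```

In the real setup the project supplies the comparison logic and the BOINC server framework handles the quorum bookkeeping, the 'pending' state and the credit granting.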
Cheers,
Gary.
I have just started to get a lot of invalid CUDA work units. However, it is not all of them; it is about one in every three processed. Here are the most recent 20 invalid ones:
97428036 97427419 96701073 96693557 96689873 96685650 97148041 97415492 97415099 97061785 97133766 97412449 97411745 97411278 97410844 97408685 97408302 97396701 97046448 97404221
I had been processing lots of these CUDA work units up until about one week ago, and all seemed fine. Although I was not checking results all the time, the daily points total I was getting was consistent with most to all of them being valid. About a week and a bit ago the scheduler over-committed the computer, and I went a number of days when it did not download any CUDA work units; it was high-priority processing some of the others (though a lot had to be aborted due to lack of time to process them). When BOINC Manager finally started to download CUDA work units again about three days ago, this issue started.
I am running 32-bit Ubuntu Linux 10.10 with an Nvidia GTX 570 card, using the same Nvidia driver before and after (no change); it is the tested driver that Ubuntu offers to download from "Additional Drivers", not one from the Nvidia site. There were probably some general Ubuntu system updates done during the week of no CUDA processing, but I could not tell you what they might have been. Another computer running the same version of Ubuntu and the same Nvidia driver, but driving a Mac version of an Nvidia card, seems to work just fine.
Any idea why I am suddenly getting some invalid results? Any suggestions as to possible fixes?
Thanks
Richard
No one seems to be looking here, and posting under the non-sticky threads does not seem to work either, but here goes another try. See the message above for some details. What happens now is that one third to one half of the CUDA work units fail immediately upon uploading, with an error code of Validate error (2:00000010).
Those units that pass the initial validation and are classified as successful later fail validation, but these post an error of Validate error (8:00001000).
In one comment I finally got in the non-sticky threads, Bernd indicated that the error 2:00000010 indicated some sort of calculation error. No one answered what the 8:00001000 is trying to indicate.
As noted in the earlier post, this suddenly started happening after many weeks of 100% success on CUDA tasks. The video card seems to be working okay as a video card, but is there a chance there is some failed memory or something that would affect CUDA tasks but not show up in normal display?
I have recently upgraded to Ubuntu 11.04, but the problem still occurs in the same way. The upgrade to Ubuntu 11.04 upgrades the Nvidia driver to the most recent stable one (which is more recent than the version running under 10.10), and I do have the error that a lot of people have where it says the driver is activated but not in use (but I have a working display, so it DOES seem to be in use ...).
Any comments would be most welcome.
A couple of workunits to study would be:
Name: PM0119_01811.dm_64_0
Workunit: 98091109
Created: 26 May 2011 2:11:15 UTC
Sent: 26 May 2011 11:29:38 UTC
Received: 28 May 2011 18:27:29 UTC
Server state: Over
Outcome: Validate error (2:00000010)
and
Name: PM0119_01641.dm_348_1
Workunit: 98088126
Created: 26 May 2011 1:36:45 UTC
Sent: 26 May 2011 9:47:49 UTC
Received: 28 May 2011 4:05:10 UTC
Server state: Over
Outcome: Validate error (8:00001000)
Regards
Richard
My first gamma-ray unit ended with Validate error:
FPU status flags: PRECISION OVERFLOW DENORMALIZED INVALID
I am using SuSE Linux 11.1 on an Opteron 1210 at 1.8 GHz, 8 GB RAM, Linux-pae kernel.
Tullio
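For anyone wondering about those flag names: they look like the x87 FPU status-word bits. As a rough, self-contained sketch (not the project application's code), standard C++ can query the corresponding floating-point exception flags like this; note that 'DENORMALIZED' has no portable macro, and some compilers need FENV_ACCESS enabled for strictly correct behaviour:

```cpp
#include <cfenv>
#include <cmath>
#include <cstdio>

// Rough illustration: the standard floating-point environment lets a program
// test which exception flags earlier computations have raised.
// PRECISION ~ FE_INEXACT, OVERFLOW ~ FE_OVERFLOW, INVALID ~ FE_INVALID.
int main() {
    std::feclearexcept(FE_ALL_EXCEPT);

    volatile double big = 1e308;
    volatile double neg = -1.0;
    volatile double r1 = big * big;        // overflows -> FE_OVERFLOW (and FE_INEXACT)
    volatile double r2 = std::sqrt(neg);   // invalid operation -> FE_INVALID
    (void)r1; (void)r2;

    if (std::fetestexcept(FE_INEXACT))  std::puts("PRECISION (inexact) set");
    if (std::fetestexcept(FE_OVERFLOW)) std::puts("OVERFLOW set");
    if (std::fetestexcept(FE_INVALID))  std::puts("INVALID set");
    return 0;
}
```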
Two work units have stalled out. Both report "waiting to run (waiting for GPU memory)":
WU# 103876166
WU# 103845176
Not sure why the message appears; I don't have a compute-capable GPU for BOINC programs to use.
I did recently upgrade BOINC to version 6.12.33 and have noticed that since the change my results seem to be much more problematic, with validation errors happening regularly, particularly with the Gamma-ray pulsar search #1 v0.22.
Perhaps it's an "erroneous
Perhaps it's an "erroneous error message" and they are waiting for main memory. You should check your "Memory: when computer is (not) in use, use at most" settings and open a new thread for that topic. :-)
Regards,
Gundolf
Computers aren't everything in life. (Just a little joke)
I have the newest BOINC version - 6.12.34; Windows XP SP3; an nVidia GTX 470 video card with version 270.61 of the graphics drivers. S6Bucket and FGRP1 workunits finish successfully with claimed credit, but all of the BRP4 workunits computed on my GTX 470 (successfully computed and sent to the server) end up with the status Validate Error.
Why?
My graphics card is good and no errors are reported in BOINC. What's wrong?
If the problem isn't resolved I don't want to compute with my video card. My computer works on these workunits, but in the end no points are granted, and that work is pointless.
Task ID | Work unit ID | Computer | Sent | Time reported or deadline | Status | Run time (sec) | CPU time (sec) | Claimed credit | Granted credit | Application
245736670 | 104423583 | 3831119 | 5 Sep 2011 4:28:55 UTC | 5 Sep 2011 22:10:38 UTC | Completed, marked as invalid | 25,316.76 | 24,775.29 | 128.97 | 0.00 | Gamma-ray pulsar search #1 v0.23
245364071 | 104256036 | 3831119 | 2 Sep 2011 19:28:59 UTC | 3 Sep 2011 13:20:25 UTC | Validate error | 25,527.72 | 24,412.06 | 127.22 | --- | Gamma-ray pulsar search #1 v0.23
245272078 | 104214377 | 3831119 | 2 Sep 2011 5:27:44 UTC | 2 Sep 2011 23:06:25 UTC | Validate error | 25,229.76 | 24,376.87 | 127.03 | --- | Gamma-ray pulsar search #1 v0.23
245090850 | 104132688 | 3831119 | 31 Aug 2011 22:15:40 UTC | 1 Sep 2011 15:39:42 UTC | Completed, marked as invalid | 25,366.56 | 24,868.85 | 129.60 | 0.00 | Gamma-ray pulsar search #1 v0.23
245060187 | 104118766 | 3831119 | 31 Aug 2011 18:02:45 UTC | 1 Sep 2011 11:03:24 UTC | Validate error | 24,855.51 | 24,361.16 | 126.95 | --- | Gamma-ray pulsar search #1 v0.23
244974869 | 103666230 | 3831119 | 31 Aug 2011 6:12:45 UTC | 1 Sep 2011 1:41:47 UTC | Completed, marked as invalid | 25,136.71 | 24,587.00 | 128.13 | 0.00 | Gamma-ray pulsar search #1 v0.23
244939277 | 103954110 | 3831119 | 31 Aug 2011 1:15:34 UTC | 31 Aug 2011 17:32:38 UTC | Validate error | 24,754.94 | 24,073.05 | 125.45 | --- | Gamma-ray pulsar search #1 v0.23
244685813 | 103949126 | 3267790 | 29 Aug 2011 15:41:10 UTC | 30 Aug 2011 8:05:55 UTC | Validate error | 26,238.58 | 23,784.53 | 133.28 | --- | Gamma-ray pulsar search #1 v0.22
244657120 | 103935733 | 3831119 | 29 Aug 2011 12:03:16 UTC | 30 Aug 2011 3:27:11 UTC | Validate error | 27,647.29 | 26,529.09 | 138.04 | --- | Gamma-ray pulsar search #1 v0.22
242909154 | 102985829 | 3267790 | 20 Aug 2011 1:39:55 UTC | 21 Aug 2011 1:12:29 UTC | Validate error | 25,196.47 | 23,854.91 | 131.56 | --- | Gamma-ray pulsar search #1 v0.22