If you are, it will be because they have been paired with a newer v1.04 task. These will start to show up more quickly now that there are many more hosts getting these tasks.
A third (v1.04) task will be sent out and it will validate against the existing v1.04 result. This will cause the v1.03 task to be declared invalid.
Please don't get upset about this. The v1.03 -> v1.04 transition was to fix an error in the science code, so it is to be expected that the results from the two app versions could be different enough for validation to fail. I would anticipate that the Devs will intervene manually to award credit to all v1.03 tasks that have been affected by this change.
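The two-out-of-three quorum behaviour described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Einstein@Home validator code; the function names, result values, and the tolerance are all made up for the example.

```python
# Hypothetical sketch of BOINC-style quorum adjudication: a pair of
# disagreeing results stays "inconclusive" until a third result is
# returned; once any two results agree, they are valid and the
# outlier is invalid.  Tolerance and names are illustrative only.

def within_tolerance(a, b, tol=1e-5):
    """Two results 'agree' if their outputs differ by less than tol."""
    return abs(a - b) < tol

def adjudicate(results, tol=1e-5):
    """Given {task_id: output}, return (valid_ids, invalid_ids, status)."""
    ids = list(results)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = ids[i], ids[j]
            if within_tolerance(results[a], results[b], tol):
                valid = {a, b}
                invalid = set(ids) - valid
                return sorted(valid), sorted(invalid), "valid"
    return [], [], "inconclusive" if len(ids) >= 2 else "pending"

# A v1.03/v1.04 pair that disagrees stays inconclusive...
print(adjudicate({"t103": 1.00000, "t104": 1.00020}))
# ...until a third (v1.04) result agrees with the existing v1.04,
# at which point the lone v1.03 result is declared invalid.
print(adjudicate({"t103": 1.00000, "t104": 1.00020, "t104b": 1.000201}))
```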
Cheers,
Gary.
Are you getting 'inconclusive' for the validation of V1.03 O1AS2
Upset? It's a beta. One that you have to opt in for. If it had blown my CPU I *might* have been a little upset, but not over lost credits.
BTW, I even have some v1.02 tasks flagged as inconclusive.
RE: Upset? It's a beta.
Exactly!
As a matter of fact, they are not automagically invalid either: my first O1 unit (v1.04, Mac) happened to validate fine against an older v1.03 one (Windows, SSE2).
As the title clearly
As the title clearly indicates, I'm explaining about tasks that are labeled "inconclusive" and why that might be so. If you see a pair of tasks that validate, well and good. If you see an inconclusive pair, then the chances are one may eventually become invalid when a third is returned. Of course, it's possible that the third may fall 'in the middle', allowing all three to be close enough to be validated. We don't know how much of a change there was between 1.03 and 1.04 and how validation might be affected.
I have been looking at 43 tasks from my hosts that presented for validation. 28 were declared valid and 15 became inconclusive. I didn't look at every quorum but the valid ones I saw were all 1.03-1.03 or 1.04-1.04 comparisons. Of the inconclusive results I looked at, all were 1.03-1.04 comparisons. It seems fair to expect that others might be seeking an explanation for this.
I just wanted to have some information available so that perhaps the inevitable questions might be forestalled :-).
Cheers,
Gary.
The title is clear to you,
The title may be clear to you, but it could be taken otherwise by someone else, especially in a forum full of non-native speakers. Your message is also quite definite in saying that any inconclusives will be caused by version differences. All the reports you saw on your side may have shown version differences between you and your wingmen, but my first one completed and validated OK this morning against an older, different version: https://einsteinathome.org/workunit/239917648
Anyway, the point remains: test crunching is something one opts into, and if a unit ends up inconclusive or invalid, it is not automatically because of the version difference; there can be many other reasons.
...so far 36 WUs inconclusive
...so far 36 WUs inconclusive and 1 invalid!
Greetings from the North
Hi Folks, I've (and
Hi Folks,
I've (and perhaps a couple of others have) about 34 invalids for "Gravitational Wave search O1 all-sky tuning v1.03 (SSE2) windows_intelx86" against the v1.04 version.
Even though this app is still beta (?), will all these tasks end up in the basket? So far I've canceled all remaining tasks and excluded the app from my app list.
Greetings from the North
Validation errors between
Validation errors between v1.04 and previous versions were expected and unfortunately could not have been prevented. The latest version, v1.04, is working very well despite the runtime discrepancies.