I've now been informed that the problem is resolved and that uploads have "been coming in in a constant stream without any problems". Initially there may have been a "can't parse config file (transient)" message, but that was only very temporary.
If you still have results stuck in upload, try selecting the result on the Transfers tab and hitting "retry transfer"; the upload should then complete.
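For headless hosts without the Manager, the same "retry transfer" action can be driven from the command line via boinccmd. This is a hedged sketch: it assumes boinccmd is on the PATH and uses its `--file_transfer <URL> <filename> {retry|abort}` operation; the project URL and result filename in the example are hypothetical placeholders.

```python
"""Sketch: retry a stuck BOINC upload from the command line.

Assumes boinccmd is installed and on PATH. The --file_transfer
operation is the CLI counterpart of the Manager's "retry transfer".
"""
import subprocess


def build_retry_command(project_url: str, filename: str) -> list[str]:
    # boinccmd --file_transfer <URL> <filename> retry
    return ["boinccmd", "--file_transfer", project_url, filename, "retry"]


def retry_upload(project_url: str, filename: str) -> int:
    """Invoke boinccmd; returns its exit status (0 on success)."""
    return subprocess.call(build_retry_command(project_url, filename))


if __name__ == "__main__":
    # Hypothetical project URL and result filename, for illustration only.
    cmd = build_retry_command("http://example-project.org/",
                              "some_result_file_0")
    print(" ".join(cmd))
```

Running `boinccmd --get_file_transfers` first will list the pending transfers so you can see which filenames are stuck.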
RE: I've now been informed
Just one thing to add to your last two posts here:
This is one of those cases where it is almost certainly better to let BOINC handle the issue itself, rather than taking any manual action. ;-)
The reason is that there is a hard-coded limit (~100, IIRC) on the number of upload attempts the CC will make before it decides the upload is hopelessly futile and junks the task.
Alinator
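The retry-cap behavior Alinator describes can be sketched as a toy model. To be clear, this is illustrative only: the ~100-attempt ceiling comes from the post, and the backoff constants (one minute doubling up to four hours) are assumptions, not BOINC's actual source.

```python
"""Illustrative model of a client-side upload retry cap with backoff.

The MAX_ATTEMPTS ceiling (~100) is taken from the post; the backoff
base and cap are assumed values for the sketch.
"""

MAX_ATTEMPTS = 100  # assumed hard-coded ceiling, per the post


def backoff_seconds(attempt: int, base: float = 60.0,
                    cap: float = 14400.0) -> float:
    """Exponential backoff with a ceiling: 1 min doubling, capped at 4 h."""
    return min(base * (2 ** attempt), cap)


def should_junk(attempts_so_far: int) -> bool:
    """Once the cap is reached, the upload is abandoned and the task junked."""
    return attempts_so_far >= MAX_ATTEMPTS
```

The point of the backoff is why manual intervention is rarely needed: each failed attempt defers the next one further, so the client quietly rides out a server outage on its own, well inside the attempt budget.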
RE: The reason is there is
OK, thanks.
Since you are the OP for this thread, I presume your upload completed successfully?
Cheers,
Gary.
RE: RE: The reason is
Won't know for sure until it's scheduled network access time today (0000 UTC), but based on the later reports I've got no reason to assume otherwise. ;-)
I'm reluctant to override the schedule, since all my hosts run MW as well. With their fast task purge, it can lead to missed data points in my logs and/or wasted compute time if I don't stick to the scheduled access times rigorously. :-)
Alinator
RE: I'm reluctant to
I understand MW's fast task (6 hr) purge, so I think I understand what you must be doing. Correct me if I'm wrong, but I assume you are controlling network access so that you upload all completed MW tasks at a known time, and therefore have a known period of 6 hours within which you can expect to get the data on those tasks from the website. Is that about it?
If so, aren't you having problems getting new MW tasks within that restricted "network on" window of opportunity? With the limit of 6 tasks/core at any one time and the high chance of "0 tasks received" messages when you try, isn't that driving you nuts? :-).
If you left network access always on and simply had a script that looked for new data on the website, say every 4 hours, wouldn't that give you more opportunity to get new work without risking the loss of data?
Just so I don't get accused of joining a particular side of the current MW script war, the script I'm suggesting would be quite neutral with respect to gaining new work :-).
Cheers,
Gary.
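Gary's suggested "check the website every few hours" script might look something like the sketch below. The URL is a hypothetical placeholder for a per-host results page, and the 4-hour interval is the one proposed in the post; only the scheduling helper is meant as the substance here, with the fetch left as a plain urllib call.

```python
"""Sketch of a neutral poll-the-website script, per Gary's suggestion.

POLL_INTERVAL matches the ~4 hours mentioned in the post; the results
page URL is an assumption and must be replaced with your own.
"""
import time
import urllib.request

POLL_INTERVAL = 4 * 3600  # seconds between checks, per the post


def poll_due(last_poll: float, now: float,
             interval: float = POLL_INTERVAL) -> bool:
    """True once `interval` seconds have elapsed since the last check."""
    return now - last_poll >= interval


def fetch_results_page(url: str) -> bytes:
    """Fetch one results page; URL pattern depends on your account/host."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()


if __name__ == "__main__":
    last = 0.0
    if poll_due(last, time.time()):
        print("time to check for new data")
```

Because the script only reads the website and never touches the client's work fetch, it stays neutral with respect to gaining new work, as the post intends.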
Yep, that's about the size of
Yep, that's about the size of it.
As it turns out, since I run 4 projects at equal shares (except for the NT4 and G3 hosts, which run 3), things come out pretty much a wash in the long run concerning getting work. If they get zipped on one day for new work, they typically will pull a full max load the next day, since the built-up LTD puts MW at the top of the next fetch queue. BTW, constantly stabbing the manual update button tends to defeat this built-in 'equalizer', since LTD doesn't accumulate for projects out of work while they are comm deferred.
The reason I don't use 'gadfly' scripts for data collection is that getting the granted credit data for tasks requires you to scrape most of the pages for all of your hosts, which can bring on a significant DB load hit, since the project has to load all your account records into memory. This way, I just scrape all the projects once a day and then leave it at that. ;-)
Alinator
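The page-count concern Alinator raises can be made concrete with a small sketch. The URL pattern and the 20-results-per-page offset below are assumptions modeled on typical BOINC result pages, not any specific project's layout; the point is just how quickly the number of pages per scrape grows with hosts and results.

```python
"""Sketch: enumerate the paginated result-page URLs for one host.

The results.php?hostid=...&offset=... pattern and the 20-row page
size are assumptions; substitute your project's actual scheme.
"""


def host_result_pages(base_url: str, host_id: int, n_results: int,
                      per_page: int = 20) -> list[str]:
    """All result-page URLs for one host, one per pagination offset."""
    return [f"{base_url}results.php?hostid={host_id}&offset={off}"
            for off in range(0, n_results, per_page)]


if __name__ == "__main__":
    # A host with 45 pending/recent results needs 3 page fetches.
    for url in host_result_pages("http://example-project.org/", 7, 45):
        print(url)
```

With several hosts and a few projects, one full pass is already dozens of requests, which is why a single daily scrape is kinder to the project's database than a gadfly script hitting it every few hours.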
Just to conclude my first
Just to conclude my first adventure with ABPS, the upload went through when the window opened earlier today.
Now it's waiting for validation, but there is absolutely nothing I can do about that! :-)
Alinator