The thing is, the preferences are written to local disk, in the global_prefs.xml file. If this file becomes corrupt due to whatever is happening to your drive (an anti-virus scan, scandisk, or a defrag while BOINC is running, such things), a simple rewrite, as paulcet did, is useful.
I agree. It could be local disk issues that corrupted their prefs (if, indeed all 3 were corrupted). I just find it a bit curious that these 3 users all ran into disk space issues after some period of successful BOINC/Einstein operations. And, I am suggesting that we should be alert to the possibility that there may be a common thread related to BOINC 5.2.X.
Thanks, Stick. I also found it curious... First, that I had an issue at all after successfully running several WUs. Second, that others had the same problem at about the same time. I certainly can't say that the .xml file wasn't corrupted. I do run antivirus scans daily.
Why is it I can't even take a weekend on my friend's boat down at the shore without missing such a lively conversation? :-)
I'm having a little trouble following this one. Why do we want to allow BOINC full rein over huge tracts of hard disk space? Pardon my ignorance, but since Climate Prediction has the longest crunch time, does that mean it is a huge data file? How large? As some of you know, I run only Einstein. My BOINC folder is ~27MB. Assuming I had dialup instead of my cable, and cached more data files, I could imagine that increasing to maybe 50-70MB, and adding a couple more projects might increase the need to possibly 100-150MB before I cached enough to endanger deadlines. Now assume I have a superfast multiproc rig instead of the existing one (which is capable of averaging almost 5 Einstein WUs/day); let's concede up to 300MB. What I can't seem to fathom is... why should anyone need more than 1GB for BOINC? I've allocated "no more than 1GB", and even that seems to me a waste of drive space, though my 80GB E: drive still has 27GB open. Please, jump in and correct my logic if the flow has gone astray.
Respects,
Michael
edited for dyslexic typing, lol
microcraft
"The arc of history is long, but it bends toward justice" - MLK
Welcome to the meeting, Michael. A typical CPDN unit's download size doesn't matter much. What matters is that it can take up to 600MB of disk space while it is running.
The other trouble is resource shares. If you are running multiple projects, the resource shares you have given to the projects decide how much disk space each project gets allotted.
It's easy if you leave your RS at the standard 100 for all projects: if you run Seti, CPDN and Einstein all at 100 on Gary's 10GB, they each get 33% of the disk space you set up there: 3.33GB.
Of course, this changes with different settings. 30 - 30 - 40 will give 30% to Seti and CPDN and 40% to Einstein. Yet 1000 - 1000 - 1 may not give Einstein enough space to even download a unit! (4.9975GB each for Seti and CPDN = 9.995GB; 10GB - 9.995GB = 0.005GB, that is 5 megabytes.)
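Jord's arithmetic can be sketched in a few lines. The `disk_allotment` helper below is hypothetical (the real BOINC client's accounting is more involved); it just splits a total disk allowance among projects in proportion to their resource shares, and it reproduces both of his examples:

```python
def disk_allotment(total_gb, shares):
    """Split total_gb among projects proportionally to resource shares.
    Assumption for illustration only; not BOINC's actual algorithm."""
    total_share = sum(shares.values())
    return {name: total_gb * share / total_share
            for name, share in shares.items()}

# Equal shares on a 10GB allowance: each project gets a third (3.33GB).
equal = disk_allotment(10, {"Seti": 100, "CPDN": 100, "Einstein": 100})

# Lopsided 1000-1000-1 shares: Einstein is left with 10/2001 GB,
# roughly 0.005GB, i.e. about 5 megabytes.
lopsided = disk_allotment(10, {"Seti": 1000, "CPDN": 1000, "Einstein": 1})
```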
Jord,
Thanks for trying to clear things up for me, though I'm still having trouble wrapping my brain around CPDN taking 600MB for processing. I have absolutely NO intention of taking on any more than Einstein, I was only trying to set up a worst-case scenario for needing vast tracts of disk space. I would suppose that the 600MB figure for CPDN was for virtual space, then.
With the colder weather setting in here and ambient temps in my room on the wrong side of 60 degrees F, I've finally managed to break through the elusive 5-hour barrier for Einstein by further tweaking my Athlon XP to run at 2763MHz (or 2778, according to CPU-Z), though that doesn't leave much leeway to do anything else alongside without crashing or freezing up.
Michael
microcraft
"The arc of history is long, but it bends toward justice" - MLK
Paul,
Have you had any other indications of corrupted files on your computer? Especially, any non-BOINC files? If so, that would tend to confirm Ageless's theory of local disk issues. On the other hand, have you made any other BOINC related changes recently - like attaching additional projects or adjusting resource shares? As you have probably guessed, I am just brainstorming as to possible events that might have corrupted your prefs file.
Stick
EDIT: I am also trying to get the original problem back up to the top of the thread. :-)
Any file can easily become corrupted if it is being written to at the moment it is being moved, defragged or scanned. That is why I advise putting the BOINC folder on its own partition and not including it in anti-virus scans, defrags or scandisk runs in Windows. Only let it go through scandisk at bootup.
@Michael, the CPDN file size is big while the work unit is being worked on. You can compare it to virtual memory, except that when the work unit is finished, only a portion of the data is sent back up, while the rest stays on your hard drive. CPDN does not clean this up. So if you have just run a big WU for the past 6 months and you start a new one, the old 400 to 600MB of data is still there.
Ageless,
If a prefs file were corrupted (via scanning or defrag) and became unreadable due to checksum or EOF problems, would BOINC generate the same kind of "no disk space" message as has been reported here? Or would that type of message only be generated from an "intact" prefs file whose data appeared to be valid?
Good question. Not a clue. ;)
I've never had my files go corrupt. I do know that when the client_state.xml file goes corrupt, you lose some of the affected projects; it depends on where the corruption took place. XML files are just text files, so part of them can still be correct while the rest is missing/garbage.
Might be something to test.
At least when you 're-save' the data on the website and update the project, the file will be overwritten.
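That partial-corruption behaviour is easy to demonstrate. Here is a small sketch (plain Python, nothing BOINC-specific; the element names are made up for illustration) showing that an XML file truncated partway through fails to parse as XML, even though the leading part of it is still perfectly readable as text:

```python
import xml.etree.ElementTree as ET

# A toy stand-in for a state file; the tag names are invented.
good = "<client_state><project><name>Einstein</name></project></client_state>"
corrupt = good[:30]  # simulate the file being cut off mid-write

ET.fromstring(good)  # the intact document parses fine

try:
    ET.fromstring(corrupt)
except ET.ParseError as err:
    # The head of the file is still valid text, but the parser
    # rejects the whole document because the tags never close.
    print("parse failed:", err)
```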
I am surprised! I thought you knew everything there was to know about BOINC. ;-)
I've never had a corrupt file either. Nor have I ever had a problem related to virus scanning or defragmenting.
Hopefully, Paul will answer my earlier post.