Error: couldn't parse symbol information for file name: ��'�D$��7#�D$���&�$������&�t$�$�7#�D$�M
Actually it does look like that, but I don't want to believe it, since my computer runs stable (I have no problems except KDE4 bugs). My Einstein is running on a 1TB Seagate ST31000528AS disk, partition 6, formatted as ReiserFS 3.6. The only known bug related to disk behaviour is a strange compatibility problem of KDE4 with non-UTF8 symbols, but I don't think that has anything to do with Einstein.
But there is one more bug in my system I know about - my motherboard (ASUS M4A78T-E) uses the AMD SB790 chipset, which has a bug with Kingston USB pens... maybe there are more problems like that. Do you have any similar reports?
Hello, I just found this thread and wanted to let you know that I have the same problem on a similar configuration (computer ID: Asus M3N78-EM, AMD Phenom(tm) II X4 940, 8 GB RAM, a Seagate ST3300831A and a WD1001FALS-0 both formatted with ext3, SUSE 11.2).
Among others, I get the
Error: couldn't parse symbol information for file name: ��'�D$��7#�D$���&�$������&�t$�$�7#�D$�M
type of error most of the time.
Unfortunately, in a first attempt to fix the problem myself, I ended up with a configuration that doesn't allow me to do any work at all anymore. Upon starting the BOINC Manager it doesn't show any of my unfinished WUs or any of the projects I am attached to. Also, I cannot attach to any new projects. The manager seems to try to connect to "localhost", probably fails, and stays "disconnected". Installing BOINC in a different location does not fix the problem.
This second problem is probably a configuration issue on my machine, but right now I don't have a clue where or what to look for. Any suggestions?
Thanks,
Michael
Hello,
actually there are two different problems to solve, so let's start with the first one - the problems with Einstein:
It seems that the only things our PCs have in common are the AMD CPUs and the Seagate hard disk drives ... you have an nVidia chipset, while my computer has an AMD chipset. Maybe the error itself is a matter of AMD's quad-core CPUs, maybe in combination with Seagate's hard drives - but then WHY would our operating systems be stable while Einstein is not?
Last week I tried the current version of BOINC - 6.10.24, downloaded from here - and the problems seem to have disappeared (see here, the update was done on December 17th) ... try it, it may help you.
The second problem is related to BOINC itself - which version do you use? I mean the version downloaded from Berkeley or the version installed from our repositories? I did not manage to get the BOINC from openSUSE's repos working, so I use Berkeley's version and it's fine. If you want to fully erase the settings of this version, just delete your ~/.boinc directory. The settings of the distribution version are stored directly in your home directory and have an .xml suffix (I think it's client_state.xml and maybe one or two more). Then install a new version of BOINC. You can debug it by running the client and the manager separately from the console, using the run_client and run_manager scripts.
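Just to illustrate the whole procedure, it could look roughly like this (the ~/BOINC directory is only an example - use whichever directory you unpacked the Berkeley version into):
rm -rf ~/.boinc                # wipe the old settings; this also detaches you from all projects
rm -f ~/client_state.xml       # only if you used the distribution version (plus its other BOINC .xml files in $HOME)
cd ~/BOINC                     # the directory of the freshly installed Berkeley version
./run_client &                 # start the core client on its own
./run_manager                  # start the manager separately, so you can watch the messages on the console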
Hello,
Thanks for your quick response.
I have sorted out the BOINC related problem now - it's up and running again :-)
During my first attempt to fix the "compute error problem" I also installed the openSUSE BOINC and forgot about it. That client somehow conflicts with my usual Berkeley client. After uninstalling the SUSE client everything is back to normal.
Now to problem one: the original BOINC was the Berkeley version 6.10.17. Since my original intention was to upgrade BOINC anyway, I did just that: I downloaded version 6.10.25 from the Subversion repository as described here and compiled it myself. It's now up and running. Unfortunately, while I'm writing this, I also got the first compute error for one of the Einstein WUs (see here - ID 152021931). Maybe it's just an artifact of the previous problem, so I will watch it for a couple of days and see what happens.
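For anyone wanting to do the same, the build boiled down to roughly the following - the exact repository URL and any extra steps are in the description linked above, so take this only as a rough sketch:
svn co <repository URL from the linked description> boinc-6.10
cd boinc-6.10
./_autosetup                   # generate the configure script
./configure --disable-server   # we only need the client and manager, not the server parts
make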
Cheers,
Michael
Actually, the error to look for is exit code 38 and the "process got signal 8" message. Although you do have a later kernel, which shouldn't exhibit this problem, do look in this FAQ for what may be causing those and check the status of CONFIG_PREEMPT in your kernel.
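If you don't have the kernel source tree handy, one of these should show it (the first needs CONFIG_IKCONFIG_PROC enabled, which distribution kernels usually have; the second relies on the config file your distribution puts in /boot):
zcat /proc/config.gz | grep PREEMPT
grep PREEMPT /boot/config-$(uname -r)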
Thanks for pointing us to the FAQ.
I checked my kernel config and the CONFIG_PREEMPT is indeed set to "yes".
The description in the FAQ and the related thread are pretty much what I observe on my machine. I always had the impression that this error was somehow load-dependent, without having any real proof: the error always shows up when I am doing something else, like opening a web browser or switching between desktops. After reading the FAQ I was able to provoke the error by starting a couple of videos and switching between desktops.
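A crude way to add extra CPU load besides videos and desktop switching would be something like this (just an idea, I have not verified that busy loops alone trigger it):
for i in 1 2 3 4; do yes > /dev/null & done   # one busy loop per core
killall yes                                   # stop them again afterwards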
I also don't believe that this is an AMD-related problem, as the discussion between Pushkin and myself might suggest, because I observed the same behavior on my previous machine, which was Intel only (with SUSE 11.1).
So everything is pointing to that kernel bug that should have been fixed by now.
As soon as I can find the time I will build a new kernel and hope that the error is gone then. I will let you know.
Thanks,
Michael
Hello,
I hope everybody had a nice holiday and is well.
Here is an update on my error problem.
I re-compiled my kernel with the following preemption settings:
# CONFIG_PREEMPT_RCU is not set
# CONFIG_PREEMPT_RCU_TRACE is not set
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
I started the kernel on Dec. 24th and let it run over the holidays until Dec. 27th.
As you can see here, the compute errors are still there, although this and this look slightly different.
I have now compiled a new kernel with the following preemption settings:
# CONFIG_PREEMPT_RCU is not set
# CONFIG_PREEMPT_RCU_TRACE is not set
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
This kernel has been running since Dec. 28th. Let's see what happens.
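For reference, my rebuild procedure is roughly the following (assuming the kernel source lives in /usr/src/linux; adjust the paths and the -j value to your own setup):
cd /usr/src/linux
zcat /proc/config.gz > .config       # start from the running kernel's configuration
make oldconfig
make menuconfig                      # "Preemption Model" is under "Processor type and features"
make -j4                             # adjust -j to the number of cores
make modules_install install         # as root; reboot into the new kernel afterwards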
Cheers,
Michael
I see the first task ran to completion before it stopped working, first with that signal 8, then followed by a signal 11. Signal 11 is a SIGSEGV (segmentation fault), which can be caused by bad memory or bad virtual memory (the page file).
Since the task was essentially done and the application was wrapping things up when it happened, writing the contents from memory to disk, you may have a memory problem. You can check that with Memtest86+.
The other task, an ABP1, only got signal 11. So I would really check the hardware first, before recompiling the kernel again.
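If you want to double-check what those signal numbers mean on your machine, the shell can tell you:
kill -l 8     # prints FPE  (floating point exception)
kill -l 11    # prints SEGV (segmentation fault)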
Ok, I believe I have finally found the problem.
Although I have checked the memory with Memtest86+, it most likely is the PREEMPTION bug after all. The standard setting for SUSE seems to be
CONFIG_PREEMPT=y.
During my first attempt to build a new kernel with
CONFIG_PREEMPT_VOLUNTARY=y
there was obviously a problem during the kernel build process. Although I definitely set CONFIG_PREEMPT_VOLUNTARY=y, the resulting kernel image didn't reflect this setting. I discovered this after comparing the contents of /usr/src/linux/.config to /proc/config.gz after rebooting the machine with the newly built kernel.
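For anyone wanting to do the same check, a bash one-liner along these lines does it (assuming the source tree is in /usr/src/linux):
diff <(zcat /proc/config.gz) /usr/src/linux/.config | grep PREEMPT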
I have now built a new kernel and verified that CONFIG_PREEMPT_VOLUNTARY is really set to "y". This kernel has been running since Dec. 30th. So far everything seems to be fine. If the Einstein WUs keep behaving well, it means that the bug is not really fixed in 2.6.25.6 kernels or higher (at least not in the SUSE kernels).
Cheers,
Michael
Fingers crossed then. :-)
My SuSE kernel is 2.6.27-39-0.2-pae and it does not give me any errors. But on the SuSE site I cannot find its source, so I cannot compile the vboxdrv kernel module for Sun VirtualBox. That means I can no longer run Solaris as a guest OS, and the SETI WU I was running on it will not be completed by its Jan 27 deadline (it started on Dec 12). This kernel came as a security patch from SuSE, and although the speed on Solaris was about one fourth of that on Linux for the same kind of WU, I was running Solaris to remind me of a Unix OS I used about 15 years ago (Solaris has its roots in Berkeley Unix).
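Normally, with the matching kernel-source package installed, rebuilding the module is (from memory) just a matter of something like:
sudo zypper install kernel-source      # needs the version matching the running kernel
sudo /etc/init.d/vboxdrv setup         # rebuilds and loads the vboxdrv module
but the matching source package does not seem to be there for this kernel yet.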
Tullio
Hello, I am sorry for the late reaction, I was away for the holidays. I checked my kernel config (kernel 2.6.31.5-0.1-desktop, openSUSE 11.2) and its PREEMPT configuration is:
pushkin@ek211p07-kev:~> cat /proc/config.gz | gunzip - | grep PREEMPT
# CONFIG_PREEMPT_RCU is not set
# CONFIG_PREEMPT_RCU_TRACE is not set
CONFIG_PREEMPT_NOTIFIERS=y
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_PREEMPT_TRACER is not set
so it may be what leads to these errors in Einstein. Now the question is whether it is a general kernel error, which should go into the kernel's bugzilla, or an openSUSE-specific kernel error (since Novell patches the kernel with some code of its own), which should be reported in Novell's bugzilla. Do you have any suggestions?