Quote:
Assuming these two cards are in the same machine, you must have a configuration file that tells the client to use both GPUs.
Yes: 1
Also I have one free CPU HT core and SWAN_SYNC=0.
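(For reference: the bare "1" above presumably refers to the use_all_gpus option in BOINC's cc_config.xml, which tells the client to use every usable GPU instead of only the "best" one. A minimal sketch of such a file, placed in the BOINC data directory before restarting the client, could look like the following; treat it as an assumption about the poster's setup, not a verified copy of it.)

<cc_config>
  <options>
    <!-- use all GPUs, not just the one BOINC considers the "best" -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>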
Quote:
So you are a bit on your own there; there's nothing that the BOINC Client or the project could do for you. The client detects and reports only the parameters of the "best" card, and the project scheduler sends work for those parameters. Pity that they don't fit your smaller card, but by default the BOINC Client wouldn't use it anyway.
We are still not sure how much memory the BRP computation actually takes; from the reports we get, it looks like this varies a lot between different cards, at least on Windows. It might be a driver issue. At least there were quite a few 256MB cards that couldn't run these tasks successfully. For now we have raised the memory requirement to 300MB, just to be on the safe side, and we'll keep monitoring the actual memory usage of our application (currently on Linux only). When we're sure of what's happening there, we may lower this requirement again.
BM
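(As an aside: one way an application can expose its actual device-memory footprint at run time is cudaMemGetInfo(), which returns the free and total memory on the current device. The following is only a rough sketch of that kind of logging, with an assumed buffer size, not Einstein@Home's actual instrumentation.)

#include <cstdio>
#include <cuda_runtime.h>

// Print how much device memory is free so the application's real
// footprint can be read off the task's output log.
static void log_device_memory(const char* tag)
{
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) == cudaSuccess)
        std::printf("[%s] %zu MB free of %zu MB total\n",
                    tag, free_bytes >> 20, total_bytes >> 20);
}

int main()
{
    log_device_memory("before allocations");

    // Hypothetical 200 MB working buffer standing in for the app's real data.
    void* d_buf = nullptr;
    if (cudaMalloc(&d_buf, 200u << 20) != cudaSuccess) {
        std::fprintf(stderr, "not enough free GPU memory for this task\n");
        return 1;
    }

    log_device_memory("after allocations");
    cudaFree(d_buf);
    return 0;
}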
According to GPU-Z, my 8400GS uses less than 200MB of memory. It's a stock-cooler BFG card that I have overclocked to 600/400/1500, and it barely gets warm when running at 90-100% load.
I'll have to drop E@H CUDA for the time being; I want to use all of my resources. When I add a better third CUDA card in a number of days, I guess I will be crunching elsewhere... It seems like such a waste of hardware resources for the client not to be able to assess the hardware's capabilities and the requirements of different projects and distribute work in an equitable way, especially with all of the very different system configurations and varied project requirements out there.
Thanks for your time
Quote:
It seems like such a waste of hardware resources for the client not to be able to assess the hardware's capabilities and the requirements of different projects and distribute work in an equitable way, especially with all of the very different system configurations and varied project requirements out there.
Nice complaint, but to the wrong parties.
Most of this information would have to be released by Nvidia, who to date refuse to do so. There is no API out there that program developers can use that lists the capabilities of all the CUDA cards in existence.
This does hamper BOINC in its capabilities, especially when it comes to detecting and using the newer Fermis. Just a warning, in case your new card is going to be a Fermi... you will need a lot of hands-on work telling the science applications what your GPU can do.
You want to have this fixed? Write a letter of complaint to Nvidia (and while you're at it, write one to ATI too, since they don't provide this information either!).
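(To add a bit of context: the CUDA runtime does expose a basic set of per-device parameters (device count, name, total global memory, compute capability), along the lines of the sketch below. Whether that is enough for BOINC's scheduler and the science applications is, as said above, another matter; this is only an illustration of what can be queried.)

#include <cstdio>
#include <cuda_runtime.h>

// List every CUDA device with the parameters discussed in this thread:
// total global memory and compute capability.
int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no usable CUDA device found\n");
        return 1;
    }

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, dev) != cudaSuccess)
            continue;
        std::printf("GPU %d: %s, %zu MB global memory, compute capability %d.%d\n",
                    dev, prop.name, prop.totalGlobalMem >> 20,
                    prop.major, prop.minor);
        // A Fermi-aware application could branch on prop.major >= 2 here.
    }
    return 0;
}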
I was surprised by the fact that an application using CUDA cannot find out how much memory a certain operation will require. That makes it impossible to write memory-intensive applications and to take advantage of cards with lots of memory. Instead we have to assume the application is running on a 256MB card.
Quote:
I was surprised by the fact that an application using CUDA cannot find out how much memory a certain operation will require. That makes it impossible to write memory-intensive applications and to take advantage of cards with lots of memory. Instead we have to assume the application is running on a 256MB card.
The thing about this that I am unable to logically understand:
1. The client starts up & recognizes the amount of RAM on each GPU.
2. The client contacts the project and the project will choose whether to send an application if it sees that the client has a GPU with sufficient RAM.
3. Once a project is downloaded, the client cannot tell the application not to try to run on a GPU with insufficient RAM (a possible application-side check is sketched after this post).
If my understanding is correct, the memory recognition that occurs between the client and server is more sophisticated than what occurs between the client and application.
This is difficult for me to fathom. I am not criticizing anyone, and I do not deserve an answer. I just want to do as much 'strong' science crunching as I can.
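(On point 3: nothing stops the application itself from checking the card it was given before it starts crunching, and exiting with an error if the free memory is below the project's requirement. A minimal sketch of such a guard follows, with the device index and the 300 MB threshold as assumed, illustrative parameters rather than anything BOINC or Einstein@Home actually prescribes.)

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Bail out early if the assigned GPU does not have enough free memory.
// The device index and threshold are illustrative command-line parameters.
int main(int argc, char** argv)
{
    int device = (argc > 1) ? std::atoi(argv[1]) : 0;   // assumed device index
    const size_t required = 300u << 20;                 // assumed 300 MB requirement

    if (cudaSetDevice(device) != cudaSuccess) {
        std::fprintf(stderr, "cannot select GPU %d\n", device);
        return 1;
    }

    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess ||
        free_bytes < required) {
        std::fprintf(stderr, "GPU %d has too little free memory for this task\n", device);
        return 1;
    }

    // ...the real computation would start here...
    return 0;
}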
OK, I'll try...
Quote:
1. The client starts up & recognizes the amount of RAM on each GPU.
AFAIK BOINC only reports the better one if more than one GPU is present. Why?
Go ask DA what the purpose of this is.
Quote:
2. The client contacts the project and the project will choose whether to send an application if it sees that the client has a GPU with sufficient RAM.
If the above is a fact, no one on the project side can do anything to fix it.
Quote:
3. Once a project is downloaded, the client cannot tell the application not to try to run on a GPU with insufficient RAM.
It will just produce an error, just like any other error that might happen, and BOINC starts the process again from scratch.
There is NO implementation inside BOINC to react to a particular error specifically.
The only thing that will happen is that DA's obfuscated work-fetch policy will raise the request timeouts for that particular project, of course no matter why and how the errors occurred.
Quote:
If my understanding is correct, the memory recognition that occurs between the client and server is more sophisticated than what occurs between the client and application.
This is difficult for me to fathom. I am not criticizing anyone, and I do not deserve an answer. I just want to do as much 'strong' science crunching as I can.
It's really complicated; this was just a quick attempt...
Quote:
...is that DA's obfuscated work-fetch policy will raise the request timeouts for that particular project, of course no matter why and how the errors occurred...
Who or what is this DA?
David Anderson, the BOINC platform project's developer. Have a look at the Wiki for details.