I'll attempt to answer that based on my very limited understanding. I think it has less to do with what users are willing to accept and more to do with making efficient use of all the various CPUs out there.
The "F" application requires a large cache on the CPU to avoid constantly hitting RAM for data (how constantly, I don't know, but often enough that the CPU would "stall" for a large percentage of cycles waiting for data).
I don't know precisely how the "I" application avoids that other than um... "less data" so it can work with smaller caches.
If you were to force an "I" machine to run an "F" application, you certainly could do it, but it would take a really long time and accomplish less science than the same machine running "I" tasks.
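To make the stall argument concrete, here is a deliberately crude back-of-the-envelope sketch. All of the numbers and the stall fraction are made up for illustration; this is my own toy model, not anything measured on E@H applications:

```python
def effective_flops(peak_flops, working_set_mb, cache_mb, stall_fraction=0.6):
    """Toy model: if the task's working set fits in the CPU cache, the FPU
    runs near peak; otherwise a fixed fraction of cycles is lost waiting
    on RAM. Real memory behaviour is far more gradual than this."""
    if working_set_mb <= cache_mb:
        return peak_flops
    return peak_flops * (1.0 - stall_fraction)

# A hypothetical "F" task with a 12 MB working set:
print(effective_flops(50e9, 12, cache_mb=2))   # small cache: stalls cut throughput to ~20 GFLOPS
print(effective_flops(50e9, 12, cache_mb=20))  # big cache: runs at the full 50 GFLOPS
```

The point is only that the same task on the same clock speed can lose more than half its throughput once the working set spills out of cache, which is why running "F" work on a small-cache CPU wastes cycles.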
As already nicely commented, the "I" and "F" split of data is to make best use of the CPU types available.
I have a number of Phenom II CPUs all working nicely through Gravitational Wave "I" WUs.
If you are changing your settings on the E@H website, note that you need to force an update on your clients to immediately pick up the new settings...
And a note from my view: Intel has enjoyed an easy excess of transistors for their CPU designs, which they have been able to squander on larger CPU caches and bigger FPUs. That is good for number-crunching work, but an expensive waste for most users, who just click away at websites or a few office documents!
In contrast, I think AMD has hit a more useful sweet spot with more efficient use of their transistor count. However, that also means Intel wins by brute force on FPU/cache performance, at a cost. Hence the suggestion that we should be making better use of GPU tech for more efficient number crunching all round...
Quote:
If you were to force an "I" machine to run an "F" application...
Except you can't and I don't know why the project doesn't do what other projects do and let the users make the decision. Other projects state that some tasks need superior crunching power but they don't force users down any path.
How about observing this simple logic:
Preferences(F,I)
(0,0) - No tasks sent for either app
(1,0) - Only F tasks sent
(0,1) - Only I tasks sent
(1,1) - Tasks sent according to CPU strength, i.e., if your CPU can handle both you get both, else you just get I tasks
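That table can be sketched as server-side logic. The function and parameter names here are mine, not the actual E@H scheduler code; it just encodes the four rows above, with the CPU-strength check applied to the (1,1) row:

```python
def tasks_to_send(wants_f, wants_i, cpu_can_handle_f):
    """Return which applications the scheduler would send work for,
    given the two preference checkboxes and the server's judgement of
    whether the CPU is strong enough for the F application."""
    if wants_f and wants_i:
        # (1,1) row: both if the CPU can handle F, otherwise just I.
        return ["F", "I"] if cpu_can_handle_f else ["I"]
    if wants_f:
        return ["F"]          # (1,0): only F tasks sent
    if wants_i:
        return ["I"]          # (0,1): only I tasks sent
    return []                 # (0,0): no tasks sent for either app
```

Under this scheme a volunteer who opts in to both apps on a weak CPU still gets I work rather than nothing.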
Another question: Will discoveries and the appropriate user attribution only occur on F tasks?
Quote:
If you were to force an "I" machine to run an "F" application...
Except you can't and I don't know why the project doesn't do what other projects do and let the users make the decision. Other projects state that some tasks need superior crunching power but they don't force users down any path.
How about observing this simple logic:
Preferences(F,I)
(0,0) - No tasks sent for either app
(1,0) - Only F tasks sent
(0,1) - Only I tasks sent
(1,1) - Tasks sent according to CPU strength, ie, if your CPU can handle both you get both, else you just get I tasks
Reading Christian's answer five posts above yours, one might draw the conclusion that implementing your request would take some time, as they would have to build more custom code that no other project uses. They probably have other things to do that have a higher priority.
Quote:
Another question: Will discoveries and the appropriate user attribution only occur on F tasks?
NO! Both runs are equally important; see this message from Marialessandra Papa, in which she wrote: "A work-unit from any of these runs is equally likely to harbour a signal and both runs are crucial to the search!"
So just because you have a CPU that doesn't like the F-tasks, that doesn't mean your contribution is any less than the next volunteer's; the I-tasks are also important!
You would have to be darn creative, but I'm sure there's some way you could do it with an anonymous platform or some such shenanigans.
That doesn't change the fact that I still don't understand the desire to willfully have a CPU poorly suited for the work thrash away at it futilely.
Might as well cut down all the trees in the forest WITH A HERRING. (and if you don't understand that reference, we just can't be friends) ;-)
One of my computers is old and slow enough to have gotten an I task instead of the F's that all the others have gotten. It was awarded 2000 credits vs the 1000 from the F's. Given the opportunity, I could see credit whores abusing the system by giving their fast boxes I units to get more points.
FWIW, my elderly laptop does do particularly poorly with the O1 tasks (twice as long as FGRP), but credit parity vs the FGRP searches would probably be about 1300-1500/task.
Meanwhile, my i7-930 system also takes almost 2x as long on O1 vs FGRP8, but it's being given F tasks.
So my question is really about why the project feels the need for such task segregation. I personally don't have any issues with runtime estimation or work allocation so long as I can actually get work.
Quote:
Except you can't and I don't know why the project doesn't do what other projects do and let the users make the decision. Other projects state that some tasks need superior crunching power but they don't force users down any path.
Those are some valid points, and they were discussed internally prior to the decision to create two different applications.
Our main objective is to use the computational power that is available through your volunteered CPUs in the most efficient way possible. Another objective is to get the data through Einstein@home as fast as possible. By limiting each application to certain CPUs, we try to accomplish both of those objectives. Be assured that both searches are essential in our hunt for continuous gravitational waves, and we need both to be finished in order to evaluate the results and start writing papers.
Making the limitation more user-friendly would have taken more time to develop and test, so we left it with the hard-coded CPU models to start the run sooner rather than later.
Let me also assure you that this is not a model we want to establish for future searches. We were all surprised by the varying performance and will hopefully have a better solution for the following searches.
Quote:
One of my computers is old and slow enough to've gotten an I task instead of the F's that all the others have gotten. It was awarded 2000 credits vs the 1000 from the F's. Given the opportunity I could see credit whores abusing the system to give their fast boxes I units to get more points.
That implies credit is awarded based on some form of political correctness to reward less-efficient machines. That in itself is merely annoying, but I wonder if they treat the data the same way?
Quote:
One of my computers is old and slow enough to've gotten an I task instead of the F's that all the others have gotten. It was awarded 2000 credits vs the 1000 from the F's. Given the opportunity I could see credit whores abusing the system to give their fast boxes I units to get more points.
Quote:
That implies credit is awarded based on some form of political correctness to reward less-efficient machines. That in itself is merely annoying, but I wonder if they treat the data the same way?
You can't draw that conclusion unless you know the relative speeds of the tasks involved, on the hardware used. 1,000 credits twice a day is worth a lot more than 2,000 credits every other month.
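The arithmetic behind that point, using the task sizes quoted in this thread and taking "every other month" literally as roughly 60 days:

```python
fast_rate = 1000 * 2        # two 1,000-credit F tasks per day
slow_rate = 2000 / 60       # one 2,000-credit I task every ~60 days
print(fast_rate)            # 2000 credits/day
print(round(slow_rate, 1))  # 33.3 credits/day
```

So the per-task award on its own says nothing about which machine is earning (or contributing) more; only credits per unit time does.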
It is all a balance of bottlenecks and costs!
Happy efficient crunchin'!!
Martin
Given that F tasks require more gigaflops than I tasks, it would make more sense to award them the 2000 than the 1000.
However, since the server decides which CPUs get which work units, it's a moot point.
If you don't want whatever points they award you, then you don't have to crunch the work unit. It's all voluntary anyway.