@Bruce: What led to the building of such a large cluster?
Does the project need more computing power?
Would optimization or compilation into SSE3 specific binaries help?
UWM has a couple of large computer labs for their students. Most of the machines are from there.
Does it need more? Well, we are ahead on some parts and behind on others. I am sure someone else can explain this part better.
Optimization has been built into Einstein somewhat already. It's been shown that SSE3-specific code does not give any advantage for the type of math Einstein does. I do know they have a team working to optimize the next run even further, but this run is close enough to finished not to play with the code anymore. With about 35 days left, there is no time to write code, test, etc. before the end of the run.
I am hoping that with the new Core 2 machines, and the extra instructions on them, some of the optimizations can be enhanced. I heard that CPDN has found that the Core 2 machines can run their large models a whole lot faster with some new optimizations. I cannot wait to see that happen. Since CPDN uses Fortran, that's a whole different story than here.
We crunch pretty fast as it is.
Quote:
@Bruce: What led to the building of such a large cluster?
I am not Bruce, but I can answer two of these questions.
Beowulf clusters are the primary way for the LSC to search for gravitational waves, and before E@H, Beowulf clusters were the only way. The NEMO cluster you refer to is used for searching for inspirals, bursts and other aspects of data analysis. When no one else is using the cluster we run the E@H client on it.
It turns out that even with large Beowulf clusters you don't have enough computing power to do a good search for pulsars. E@H only searches for gravitational waves from pulsars.
Quote:
Does the project need more computing power?
Yes. The more computing power we have, the further away we can "see" the pulsars.
Quote:
Would optimization or compilation into SSE3 specific binaries help?
Bernd would have to answer this one. I don't work on the apps.
Just a quick AA update. The team has reached 16th in Einstein with more than 19,000,000 credits of science crunched.
It just makes me feeel gooood!!
We still have about eight days to go. Wonder what the team can achieve by then?
Join the #1 Aussie Alliance on Einstein
Good job! "The Knights Who
)
Good job! "The Knights Who Say Ni!" time, please?
Bruce is a good egg!
Bruce is still in the "moving" process, so we can't be sure when he will drop into the forum again.
At the moment it seems as if someone may be using the cluster Bruce has at his disposal. His output has been down to around 50,000 for the last several days, putting him in third or fourth in our team's Einstein stats for yesterday.
Still, not too bad at all. :) And the entire team, BOINC@AUSTRALIA, is very grateful. So much so that many are still crunching away with a very high Einstein resource setting.
Thanks again for helping us Bruce. Hope the move to Germany is going well and everything falls into place.
Tschuess!
Join the #1 Aussie Alliance on Einstein
Hear! Hear!
After some months crunching alongside us at Boinc@Australia, Bruce has moved his impressive cattle station (a big Aussie farm) to a new team.
I'd like to say "Thanks Bruce!". It's been an honour and a pleasure to crunch alongside you all this time. I'm sure all my mates at B@A feel the same way.
cheers,
Mark
--miw