A lot of people have shown interest in participating. We are working to get our infrastructure ready for a larger number of users, but we want to increase it slowly to keep our servers and the project stable. We are still in the testing phase and learning...
> A lot of people have shown interest in participating. We are working to get
> our infrastructure ready for a larger number of users, but we want to increase
> it slowly to keep our servers and the project stable. We are still in the
> testing phase and learning...
>
> BM
>
And this is great news, because we do not want the same thing to happen to this project that happened to SETI@home, Predictor@home and LHC@home: too many participants before the infrastructure is ready.
Once you go live, what are the estimates for the maximum number of participants? 100,000 would not be unexpected... more? Less?
The limiting factor is not really the number of users, but the number of hosts (actually CPUs). Presently we have users running a single machine, and some running e@h on clusters of 50 CPUs or more. It's hard to predict the user/CPU ratio for the public case, but I do expect to reach 1,000,000 CPUs. So we have a lot of work ahead...
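A quick back-of-envelope sketch of the point being made: server load scales with CPUs, not users, so the projection depends on an assumed CPUs-per-user ratio. The ratio used below is purely illustrative, not a project figure.

```python
def expected_cpus(users: int, cpus_per_user: float) -> int:
    """Estimate total attached CPUs for a given number of users,
    assuming an average CPUs-per-user ratio (an illustrative guess)."""
    return round(users * cpus_per_user)

# With the 100,000-user figure floated above, an average of 10 CPUs
# per user would already reach the 1,000,000-CPU working estimate.
print(expected_cpus(100_000, 10))  # 1000000
```

The real ratio would sit somewhere between the single-machine users and the 50-CPU cluster operators mentioned above, which is exactly why it is hard to predict before a public launch.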
> The limiting factor is not really the number of users, but the number of hosts
> (actually CPUs). Presently we have users running a single machine, and some
> running e@h on clusters of 50 CPUs or more. It's hard to predict the user/CPU
> ratio for the public case, but I do expect to reach 1,000,000 CPUs. So we have
> a lot of work ahead...
Exactly. Most problems derive from the number of CPUs, not the number of participating users.