Multi-Directional Gravitational Wave Search on O3 data (O3MD1/F)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3981
Credit: 47409432642
RAC: 63970007

I'm not sure of the exact settings for the Einstein download buffer, but I'm pretty sure it's larger than 100 tasks; I've downloaded more than 100 tasks in a single sched request.

_________________________________________________________________________

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519421540
RAC: 15777

Boca Raton Community HS wrote:

Each CPU task is requiring ~2 GB of RAM (!). I don't think I have ever seen tasks with such large memory requirements. Our systems are chewing away at them, but wow, very memory intensive.

I take it you never use LHC, Amicable Numbers, or Yoyo. They do 8 GB tasks.

Or Climate Prediction at 20 GB.

2 GB is a negligibly tiny amount on a modern computer. Three of mine would take 128 GB of RAM if I maxed out the motherboard.

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
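[For anyone wondering how the ~2 GB per-task figure above translates into concurrent tasks, here is a rough back-of-the-envelope sketch in Python. The per-task figure comes from the post above; the headroom reserve is an assumption, not a measured value.]

```python
# Rough sketch: how many ~2 GB CPU tasks fit on a host of a given size,
# keeping some headroom for the OS, GPU tasks, and anything else running.
# Both constants are assumptions for illustration.
TASK_RAM_GB = 2.0      # per-task requirement quoted above
HEADROOM_GB = 4.0      # assumed reserve for everything that isn't BOINC work

def max_concurrent_tasks(total_ram_gb: float) -> int:
    usable = max(total_ram_gb - HEADROOM_GB, 0.0)
    return int(usable // TASK_RAM_GB)

for ram in (8, 16, 32, 128):
    print(f"{ram:>3} GB RAM -> up to {max_concurrent_tasks(ram)} concurrent ~2 GB tasks")
```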

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519421540
RAC: 15777

Richard Haselgrove wrote:

Ta. I'm used to projects where you have to read the errors from the bottom up. Yes, 0.6/0.4 will do it - I'll go round the shrubbery again.

I'm also from the UK (albeit Scotland) and have no idea what "go round the shrubbery" would mean.  Google doesn't either!

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519421540
RAC: 15777

Boca Raton Community HS wrote:

We did not really have too many issues with these tasks either (like Mikey). We were running three of the GPU tasks simultaneously. We ran them as hard as we could for about a week to be able to send back a large enough sample set of completed tasks to be somewhat helpful (well, hopefully large enough).

All (6481)
In progress (4)
Pending (267)
Valid (6119)
Invalid (0)
Error (84)

Thanks for the response.  I know I'll have to wait at least a couple of weeks to try it again.

Did you have to set the permissions for execution in order to get the tasks completed?  I just set mine now.

Boca Raton only run Windows. I assume this execution permission is a Linux thing. Mikey has Windows and Linux; he might know what you're on about. In Windows it just works [rolls eyes]

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
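[Since the question above is about execute permissions, here is a minimal sketch of what "setting the permissions for execution" usually means on Linux. The binary path and name below are hypothetical; the real ones depend on your BOINC data directory and the app version. The same thing can be done from a shell with `chmod +x`.]

```python
# Minimal sketch: add the execute bit to an app binary on Linux
# (equivalent to `chmod +x <file>`). The path below is hypothetical;
# adjust to your BOINC data directory and the actual executable name.
import os
import stat

app = "/var/lib/boinc-client/projects/einstein.phys.uwm.edu/hsgamma_O3MD1_example"  # hypothetical

mode = os.stat(app).st_mode
# Add execute permission for owner, group, and others.
os.chmod(app, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(oct(os.stat(app).st_mode & 0o777))  # e.g. 0o755 afterwards
```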

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519421540
RAC: 15777

Keith Myers wrote:

Milkyway had the same issue with N-body tasks swamping the download server buffers.  Nobody was getting any Separation work even though there was plenty in the RTS buffers.

The RTS category is not the same thing as the download buffer. If projects follow how the Seti servers were configured, the download buffer holds 100 tasks. That is all. When you hit the scheduler with a work request, the scheduler fills it from that 100-task download buffer.

When it gets emptied, it refills from all the Ready-to-Send sub-project caches. If a fast host empties it just before your scheduler connection is serviced, the buffer is empty and you get the "no tasks to send" message.

When the Ready-to-Send cache of a single sub-project is 10X to 100X the size of the other sub-project caches, the download buffer gets swamped and filled entirely from that unthrottled, oversized cache, and there will not be a single task of any other type in that 100-task buffer.

So you get the same message from the scheduler ... no work to send. The end result is that the one sub-project, in our case the new O3MD* work, completely excluded all other sub-project work from being available.

Something needs rewriting then. When the buffer is empty (although I assume it tops up in between if it gets to, say, half full), it should grab some from each of the sub-projects.

Although, since a lot of users may be choosing one sub-project or another, the server doesn't know how many of each it needs. So there should be a separate ready-to-send queue for each sub-project. If I were to take all of sub-project A, it should fill up with more A, not some B and C as well, because the next user could want any of those.

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
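[To make Keith's feeder description and the per-sub-project queue suggestion above a bit more concrete, here is a minimal Python sketch, not the actual BOINC server code. The buffer size, cache sizes, and both refill policies are assumptions for illustration: the first fills the buffer in proportion to the raw Ready-to-Send cache sizes (the swamping case), the second takes tasks from each sub-project in turn.]

```python
# Toy model of a small "download buffer" (feeder) refilled from per-sub-project
# Ready-to-Send caches. Not BOINC's real refill logic; all sizes are made up.
import random
from collections import Counter

BUFFER_SIZE = 100                 # assumed feeder slot count, per the Seti example

rts = {                           # hypothetical Ready-to-Send cache sizes
    "O3MD1": 2_500_000,
    "FGRPB1G": 20_000,
    "BRP4": 5_000,
}

def refill_proportional(rts, size):
    """Fill slots in proportion to raw cache sizes: one huge cache swamps the rest."""
    apps = list(rts)
    weights = [rts[a] for a in apps]
    return Counter(random.choices(apps, weights=weights, k=size))

def refill_round_robin(rts, size):
    """Take one task from each sub-project in turn (the 'separate queues' idea)."""
    apps = [a for a in rts if rts[a] > 0]   # assumes each cache holds >= size tasks
    buf = Counter()
    for i in range(size):
        buf[apps[i % len(apps)]] += 1
    return buf

print("proportional refill:", refill_proportional(rts, BUFFER_SIZE))
print("round-robin refill: ", refill_round_robin(rts, BUFFER_SIZE))
```

[With the made-up sizes above, the proportional refill almost always comes out nearly 100% O3MD1, which matches the "no tasks to send" symptom for every other search, while the round-robin version keeps a share of each sub-project available no matter how lopsided the caches are.]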

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519421540
RAC: 15777

Ian&Steve C. wrote:

I'm not sure of the exact settings for the Einstein download buffer, but I'm pretty sure it's larger than 100 tasks; I've downloaded more than 100 tasks in a single sched request.

I thought you could download whatever is shown in the server status. On MW, for example, it always shows 10000/1000 for Separation/N-body, or nearly that number. I've downloaded 900 at once for Separation, and I'm sure I've often done a couple of 900s in close succession with two hosts.

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.

Keith Myers
Joined: 11 Feb 11
Posts: 4981
Credit: 18802448322
RAC: 7894373

Yes, I agree, probably a bit larger. That's why I said the Seti example applied only to Seti. How large the download buffer is at other projects can differ depending on how the admins set up the servers.

Based on the scheduler logs for Einstein, I suspect that the download buffer is set to 512, since that is the max allowed by the scheduler connection. And since that value is not set in the client, it must come from the scheduler.

Still, a maximum of 512 tasks in the buffer when the GW RTS caches were over 2.5M was way undersized for this type of occurrence.

Just glad that my whinging about the issue got some attention from the admins, and I got refilled overnight; when I went to bed I thought I was going to wake up to cold iron.

 

Keith Myers
Joined: 11 Feb 11
Posts: 4981
Credit: 18802448322
RAC: 7894373

Peter Hucker wrote:

Ian&Steve C. wrote:

I'm not sure of the exact settings for the Einstein download buffer, but I'm pretty sure it's larger than 100 tasks; I've downloaded more than 100 tasks in a single sched request.

I thought you could download whatever is shown in the server status. On MW, for example, it always shows 10000/1000 for Separation/N-body, or nearly that number. I've downloaded 900 at once for Separation, and I'm sure I've often done a couple of 900s in close succession with two hosts.

No, you would never set up the download server buffer that large. It would slow downloads to a crawl because the I/O to the database would be saturated.

Since the max tasks allowed at Milkyway is 900, I would assume that is the size of the download buffer there.

 

Mr P Hucker
Joined: 12 Aug 06
Posts: 838
Credit: 519421540
RAC: 15777

Keith Myers wrote:

No, you would never set up the download server buffer that large. It would slow downloads to a crawl because the I/O to the database would be saturated.

Since the max tasks allowed at Milkyway is 900, I would assume that is the size of the download buffer there.

I don't understand. It's quite likely that another user and I both request 900 tasks within a few seconds. If the buffer were only 900, one of us wouldn't get many tasks. I never receive 846 or some other odd number; it's always the full 900, so I can't believe the buffer is 900, as it's likely some would have just been taken.

And what do you mean by "the I/O to the database would be saturated"?

If this page takes an hour to load, reduce posts per page to 20 in your settings, then the tinpot 486 Einstein uses can handle it.
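[On the question of two hosts both getting the full 900: if the feeder tops the buffer back up from the Ready-to-Send cache between the two scheduler passes, both requests can be filled even from a 900-slot buffer. A minimal sketch, with all numbers assumed for illustration (this is not Milkyway's actual server configuration):]

```python
# Toy model: a 900-slot buffer, refilled from a large RTS backlog between
# two back-to-back 900-task requests. All figures are assumptions.
BUFFER_SIZE = 900
rts_cache = 100_000          # assumed Ready-to-Send backlog
buffer = BUFFER_SIZE         # buffer starts full

def serve(request, buffer):
    """The scheduler can only hand out what the buffer currently holds."""
    granted = min(request, buffer)
    return granted, buffer - granted

def top_up(buffer, rts):
    """The feeder moves tasks from the RTS cache back into the buffer."""
    moved = min(BUFFER_SIZE - buffer, rts)
    return buffer + moved, rts - moved

first, buffer = serve(900, buffer)                 # host 1 empties the buffer
buffer, rts_cache = top_up(buffer, rts_cache)      # feeder refills in between
second, buffer = serve(900, buffer)                # host 2 still gets a full 900

print(first, second)   # 900 900 -- skip the top_up() call and host 2 gets 0
```

[Keith's point about database I/O presumably refers to that top-up step: each refill is a database query that pulls task records into the buffer, so the larger the buffer, the heavier each refill; keeping the buffer small and topping it up frequently is the usual trade-off.]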

mikey
Joined: 22 Jan 05
Posts: 12715
Credit: 1839119349
RAC: 3605

GWGeorge007 wrote:

mikey wrote:

       

As for the O3 GPU tasks, I am doing really well on those:

All (4558), In Progress (584), Pending (571), Valid (3232), and Error (170)

Hi Mikey,

What did you do to have such nice, successful processing of the O3 GPU tasks?

I don't have a single validation, and I have a bunch of errors. Another member said that I may not have execute permission enabled on the app. I don't recall any app... where would it be?

I didn't do anything; they just ran on the Windows PCs with no problem, but I did not try them on my Linux PCs. Now they are/were taking 2+ DAYS on some of my PCs, and I aborted a whole stack of them today that were due tomorrow but hadn't even been started yet. I am still running them on my laptop, though, at about 8 hours each.

On my laptop, mine are only taking about 1.13 GB of memory per task, not the 2 GB Bernd was talking about, unless BOINC isn't counting all the memory they are really using.
