BRP4 (Arecibo "Mock")
BRP4 (Arecibo "Mock") progress is now on the status page.
It's time for new questions.
1) How many beams are planned to be copied to the AEI?
2) What does "Beam equivalent to successful work" mean?
RE: 1) How many beams are
This number is planned to grow indefinitely. The current number comes from roughly a year of backlog, but the plan is that once we have crunched through that, we will get a more or less continuous stream of data from Arecibo as it is taken.
Think of it as "beams processed". At any given time different beams have been processed to different fractions; this number simply adds up those fractions over all beams.
We will set up a more detailed progress page like the one we had for BRP3; I hope it will become clearer then.
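To make the arithmetic concrete, here is a minimal Python sketch of that bookkeeping. The beam names and completion fractions are invented for illustration; only the "sum the per-beam fractions" idea comes from the explanation above.

```python
# Hypothetical per-beam completion fractions (illustration only, not project data).
beam_fractions = {
    "beam_A": 1.00,   # fully processed
    "beam_B": 0.50,   # half of its workunits validated so far
    "beam_C": 0.25,   # a quarter validated so far
}

# "Beams equivalent to successful work" = sum of these fractions over all beams.
beams_equivalent = sum(beam_fractions.values())
print(f"Beams equivalent to successful work: {beams_equivalent:.2f}")  # 1.75
```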
BM
I'd add that a 'beam' is
I'd add that a 'beam' is equivalent to a 'pixel', and a 'good radio camera' has ... ooooh ... a whole seven pixels ( see here ): one in the center surrounded by a hexagon of six others. These pixels sit at the focus/feed horn of the radio telescope. So the reason why 'different beams are processed to a different fraction' is to catch signals that don't appear or trigger exclusively on a single pixel, but land between, or overlap onto, several pixels. Thus the telescope's "aiming point" is actually a small patch of sky, defined by some angular width, with each pixel covering a subset of that patch. The analysis has to account for sky sources that one shouldn't assume will pass neatly aligned with any given pixel; hence one ought to analyse the response of the entire pixel array.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: So the reason why
This is true, but not what I meant here. Each beam is de-dispersed in pre-processing, resulting in 3808 dispersion measures (corresponding to different distances from Earth). Eight of these are bundled into a workunit, so we get 476 workunits from each beam. The fraction to which one beam has been processed is the number of its 476 workunits for which a canonical result has been found so far.
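As a rough sketch of that bookkeeping in Python: the 3808 dispersion measures, the bundling of 8 DMs per workunit and the resulting 476 workunits per beam are taken from the post above, while the beam names and completed-workunit counts are purely hypothetical.

```python
# Figures from the explanation above; the progress counts below are made up.
DMS_PER_BEAM = 3808        # dispersion measures after de-dispersion
DMS_PER_WORKUNIT = 8       # DMs bundled into one workunit
WORKUNITS_PER_BEAM = DMS_PER_BEAM // DMS_PER_WORKUNIT   # = 476

def beam_fraction(canonical_results: int) -> float:
    """Fraction of one beam processed: workunits with a canonical result / 476."""
    return canonical_results / WORKUNITS_PER_BEAM

# Hypothetical beams with the number of workunits validated so far.
progress = {"beam_X": 476, "beam_Y": 238, "beam_Z": 119}

for beam, done in progress.items():
    print(f"{beam}: {done}/{WORKUNITS_PER_BEAM} workunits -> {beam_fraction(done):.1%} processed")
```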
BM
RE: This is true, but not
Oh, my bad. The dispersion axis ....
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
So now parameter space
So now the parameter space includes 3808 DM values instead of the 628 DMs in the first Arecibo search?
Why such a significant increase? (That's about 6x more data to process.)
Maybe this gives greater sensitivity/accuracy (besides the obvious: greater accuracy in determining the distance to the pulsar)?
The new "Mock" instrument is
The new "Mock" instrument is much more sensitive than the original "camera", so for the distance both the resolution and the total range are increased.
BM
I think it's time to plug in
I think it's time to plug our own eyes into the search :)
The server would send us a picture of the portion of sky it is looking at, and we would manually (visually) look for spots in the picture. Of course, the picture would have to be converted from the radio range into the visual range. This way we would help our computers identify where exactly in that patch of sky to look for something interesting. And this would be truly distributed volunteer computing (looking).
Well, this is a task that can
Well, this is a task that can easily be automated the way we already do it, by using computing power.
One thing we are thinking about is distributing the final post-processing step, which still requires visual inspection in order to find real pulsars among the candidates we get returned.
BM
I supposed my message to be
I meant my message as a joke, but I hadn't considered that every joke contains a grain of truth.
And yes, including post-processing in the distributed computation is a great idea. It would remove one more human factor from the data analysis.
P.S. Visual inspection in the form of a screensaver game would be a great improvement to the client's attractiveness. But that is for another thread, I suppose.