When I'm crunching for the "Binary Radio Pulsar Search" application, I see in the file name something like "p.2030.20191125 and ...26". Does that mean this data was gathered on November 25, 2019, and we're only analyzing it now?
If that is the case, out of curiosity: what computing power would it take to analyze the data in real time, as it's gathered?
Thanks everyone.
Copyright © 2024 Einstein@Home. All rights reserved.
Paul wrote: When I'm …
Personally, I don't know. You might get more & better responses by posting this in the Cruncher's Corner forum.
Proud member of the Old Farts Association
Paul wrote: When I'm …
From what I know, yes, those are the dates the data was collected at the Arecibo Radio Telescope in Puerto Rico.
Please note that the radio telescope collapsed in December 2020, so we won't be getting any new data from this source.
The BRP7 search is a new, similar search, but its data comes from another radio telescope.
To analyze the data in real time? Not sure, but it would take at least 2 or 3 times more compute power than the project has right now.
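For what it's worth, the date stamp in those filenames can be read mechanically. A minimal sketch, assuming the convention is simply that an 8-digit YYYYMMDD run appears somewhere in the name (my assumption from the examples above, not official documentation):

```python
import re
from datetime import datetime


def observation_date(filename):
    """Pull the first 8-digit YYYYMMDD stamp out of a data filename.

    Assumes names like 'p.2030.20191125...' where the observation
    date is embedded as eight consecutive digits (an assumption on
    my part, based on the filenames quoted in this thread).
    """
    m = re.search(r"\d{8}", filename)
    if m is None:
        raise ValueError(f"no YYYYMMDD stamp found in {filename!r}")
    return datetime.strptime(m.group(0), "%Y%m%d").date()


print(observation_date("p.2030.20191125.dat"))  # → 2019-11-25
```

Note that `re.search(r"\d{8}")` skips the shorter `2030` token (only 4 digits) and lands on the first run of exactly-or-more than eight digits, which here is the date.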
I wonder what gain we would have from analyzing in real time?
I could imagine that real-time data has to go through various stages of cross-checking and whatnot before being put out to us.
Or that the data is not of the quality that is needed.
I guess these and other factors/situations have to be pre-analyzed by humans.
Anyway, the universe seems to be kind of old, so what will a couple of years of belated crunching change?
S-F-V
Real-time processing, or close-to-real-time processing, would be useful for time-sensitive re-observations and/or candidate validations.
Real time processing? Hmmm ..... that means you'd want a power spectrum for each subset of the parameter space, i.e. Fourier decomposition on the fly, for all available beam directions, etc. Wow. Never thought of that. It would take an epic amount of computing power, but it's not physically impossible* with enough electric power and hardware to do it with. E@H runs at petaflop speed as a composite beast (see Server Status via the link at bottom right of this page; today 9685.7 TFLOPS = 9.6857 PFLOPS), so that's your benchmark to build against. We couldn't do real time as we are distributed, with inherent delays in all the to & fro of data and results via the internet, plus your computer's reaction to that, server responses, etc. A signal filter with dimensions the size of a planet.
{But imagine some biological beast 'seeing' in real time the radio sky as if it were natural vision, over a wide radio band, as it points its antennae about the place. You would have a lot of general noise and bright patches of sky and structures, but punctuated by these rhythmic point sources winking at you. It could even see the motion of the Earth passing through the cosmic microwave background: brightening or 'hotter' in one general direction and 'cooler' in the antipode. I'm not sure there's any evolutionary advantage in that, but there could be a sci-fi novel going begging here.}
* no physical law being breached.
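To make the "power spectrum on the fly" step concrete: a toy sketch of the basic idea behind a pulsar periodicity search, recovering a weak periodic signal buried in noise via an FFT power spectrum. All parameters here are my own illustrative choices (numpy, one synthetic time series), not anything from the real BRP pipeline:

```python
import numpy as np

# Toy time series: a weak 13.75 Hz periodic signal buried in noise.
# Sampling rate, duration, amplitude are all illustrative only.
rng = np.random.default_rng(42)
fs = 1024.0                       # sampling rate, Hz
t = np.arange(0, 8.0, 1.0 / fs)   # 8 seconds of data
signal = 0.2 * np.sin(2 * np.pi * 13.75 * t)
data = signal + rng.normal(size=t.size)

# Power spectrum: |FFT|^2 over the positive frequencies.
spectrum = np.abs(np.fft.rfft(data)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# The strongest non-DC bin pops out at the injected frequency,
# even though the signal is invisible in the raw time series.
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"strongest periodicity: {peak:.2f} Hz")  # → 13.75 Hz
```

A real search repeats something like this over many beams, dispersion measures, and (for binaries) orbital-parameter corrections, which is where the epic compute bill comes from.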
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Mike Hewson wrote: Real time …
Is there a supercomputer out there that runs at 9.6857 PFLOPS? Just asking because, as always, I'm just curious.
Proud member of the Old Farts Association
Sure there is, here's one:
https://www.lumi-supercomputer.eu/lumi_supercomputer/
Harri Liljeroos wrote: Sure …
WOW!! I had no idea!! Thanks for sharing that tidbit of info!!
Proud member of the Old Farts Association
Harri Liljeroos wrote: Sure …
380 petaflops for the LUMI supercomputer? That's way more than the ~9 petaflops estimated for Einstein@Home.
The Frontier supercomputer even breaches the 1 exaflop mark.
https://www.top500.org/lists/top500/2023/11/
Hmm, the comparison of 380 PFLOPS in the LINPACK benchmark against 9 PFLOPS at Einstein@Home seems a bit unfair. I think the distributed Einstein "supercomputer" would perform a lot better than 9 PFLOPS if it ran LINPACK instead of signal analysis, fast Fourier transforms, and things like that. The LINPACK benchmark solves a large system of linear equations, and many parameters can be configured to optimize benchmark results for a specific supercomputer (e.g. RAM, number of CPU cores, ...).
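For anyone curious what LINPACK actually measures: it times the solution of a dense linear system Ax = b, which by the standard HPL convention is counted as roughly (2/3)n³ floating-point operations for the LU factorization. A toy sketch (numpy, with a problem size far smaller than any real HPL run):

```python
import time

import numpy as np

# LINPACK-style workload: solve a dense random system A x = b.
# n = 2000 is tiny compared to real HPL runs; purely illustrative.
n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)        # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3       # classic HPL operation count
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS on this machine")

# Sanity check: the solution really satisfies the system.
print(np.allclose(A @ x, b))     # → True
```

This dense linear algebra is almost perfectly cache- and vector-friendly, which is exactly why it flatters supercomputers relative to the irregular, I/O-heavy signal-analysis work Einstein@Home hosts actually do.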