I guess with the Arecibo search it would be interesting to know whether certain known objects failed to be re-detected within the search space. Was the search sensitivity sufficient to allow all known objects to be found?
By analogy, for GW: are there any anticipated test signals or patterns which were hoped to be found? Are there any astronomical events which would be expected to produce a measurable anomaly? Can you create an artificial event that validates the detection scheme? Further, has any type of dither signal been added into the data on purpose, to test whether the discrete signal processing weeds out certain classes of signal?
Any missing objects?
For continuous wave sources, the pulsar in the Crab Nebula is probably the expected 'loudest' signal, and the one most likely to be detected at some point. Theory suggests it is tantalisingly close to the current lower bound of detection - it was when S5 and S6 were taking data, anyway. Indeed one of the operational parameters used when the IFOs are up and running is an estimate called 'Crab Time': roughly, how long the IFO would have to take continuous data in order for the Crab pulsar to be detected ( within some confidence/probability measure ). In that sense a lower Crab Time is good and a higher one worse - it's a sensitivity measure.

The length of the data segments is relevant because the signal processing techniques allow a weak signal to 'rise out of the mist' of noise, given enough time. One has to do this as the noise level is way greater than the signal level. Unrelated ( non-signal ) disturbances of the interferometers have no preference with regard to the astronomical source of interest ( at least that's a good assumption ). Like single waves at the seashore, they don't tell you whether the tide is coming in or going out. You have to wait a while and see where the average wet mark/line on the sand is heading. Over any given time period the tidal movement is much smaller than the wave excursions, so longer observations give better estimates of tidal trends.
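To put a rough number on that intuition, here's a toy sketch ( my own illustration, not the actual Einstein@Home pipeline, and all the numbers are made up ) of a weak fixed-frequency 'pulsar' buried in much louder Gaussian noise. The spectral peak at the signal frequency climbs out of the noise floor as the observation length grows, in just the way the tide emerges from the waves:

```python
# Toy illustration (not the real LIGO/Einstein@Home pipeline): a weak
# fixed-frequency 'pulsar' buried in much louder Gaussian noise becomes
# visible in the power spectrum once the observation is long enough,
# because the coherent signal power grows with T while the noise power
# in each frequency bin does not.
import numpy as np

rng = np.random.default_rng(42)
fs = 256.0          # sample rate (Hz), arbitrary for the demo
f_signal = 29.7     # hypothetical 'pulsar' frequency (Hz)
amp = 0.05          # signal amplitude, far below the noise sigma of 1.0

def peak_snr(T_obs):
    """Ratio of the spectral peak at f_signal to the median noise level
    for an observation of length T_obs seconds."""
    t = np.arange(0, T_obs, 1.0 / fs)
    data = amp * np.sin(2 * np.pi * f_signal * t) + rng.normal(0.0, 1.0, t.size)
    spectrum = np.abs(np.fft.rfft(data)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    signal_bin = np.argmin(np.abs(freqs - f_signal))
    return spectrum[signal_bin] / np.median(spectrum)

for T in (10, 100, 1000, 10000):
    print(f"T = {T:6d} s  ->  peak / median noise ~ {peak_snr(T):8.1f}")
```

For this fully coherent toy the ratio climbs roughly in proportion to the observation time, which is why long quiet science runs translate directly into a lower Crab Time.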
Because G ~ 10^-11 is so small*, we humans can't shove around enough mass, or change its dynamics quickly enough, to get anywhere near a measurable signal. The interferometers routinely have hardware and software 'injections': in the first case by literally bumping the instrument via some relevant transducers, and in the second by adding data points to the record already obtained. You can see how both would test the validity of our understanding of how the interferometers work, and of our methods of signal analysis. So if either deliberate injection didn't come out in the wash, so to speak, then we'd be worried.
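For the software side, the flavour of such a check is something like the sketch below ( a hand-rolled toy with made-up numbers, nothing like the real LIGO/Virgo injection machinery ): add a known waveform into a noisy record, run a simple matched filter against the same template, and demand that the recovery lands where the injection was put in. If it doesn't, the analysis chain is suspect.

```python
# Hedged sketch of a 'software injection' check, in spirit only -- the real
# injection and recovery code is far more involved.  We add a known
# chirp-like template into simulated noise, then confirm that a simple
# matched filter (cross-correlation with the template) recovers the
# injection at the right sample.
import numpy as np

rng = np.random.default_rng(7)
fs = 1024.0
noise = rng.normal(0.0, 1.0, int(64 * fs))          # 64 s of fake detector noise

# A made-up, hypothetical template: a short sinusoid with rising frequency.
t = np.arange(0, 0.5, 1.0 / fs)
template = np.sin(2 * np.pi * (60.0 + 80.0 * t) * t) * np.hanning(t.size)

inject_at = 30000                                    # sample index of the injection
inj_amp = 1.0                                        # comparable to the noise sigma per sample
data = noise.copy()
data[inject_at:inject_at + template.size] += inj_amp * template

# Matched filter: correlate the data against the known template and look
# for the loudest peak.  If the recovered index is not where we injected,
# something in the analysis chain is broken.
snr_series = np.correlate(data, template, mode="valid")
snr_series /= np.std(np.correlate(noise, template, mode="valid"))
recovered = int(np.argmax(np.abs(snr_series)))

print(f"injected at sample {inject_at}, recovered at sample {recovered}")
print(f"peak |SNR| ~ {np.abs(snr_series[recovered]):.1f}")
assert abs(recovered - inject_at) <= 2, "injection not recovered -- investigate!"
```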
Cheers, Mike.
* .... and c is so large. Or put another way: since we are using light to measure gravity, the relative strength of the coupling constants, some 40 orders of magnitude, rules our endeavors. This is why spacetime appears to be so 'stiff' and thus hard to either budge or measure the wiggles of.
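For the curious, the 'some 40 orders of magnitude' is easy to check on the back of an envelope by comparing the gravitational and electrostatic forces between two electrons ( standard textbook constants, nothing exotic ):

```python
# Back-of-envelope check of the 'some 40 orders of magnitude' remark:
# ratio of the gravitational to the electrostatic force between two
# electrons (the separation cancels out, since both go as 1/r^2).
G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31      # electron mass, kg
q_e = 1.602e-19      # elementary charge, C

ratio = (G * m_e**2) / (k_e * q_e**2)
print(f"F_gravity / F_electric for two electrons ~ {ratio:.1e}")   # ~ 2.4e-43
```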
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: I guess with the
Which objects we missed and why is currently being investigated, but results will take some time.
BM
Hallo Mike ! What has to be
Hallo Mike !
What has to be the minimal signal-to-noise ratio of a GW before one will speak of documented evidence of a detection? And how far are we currently away from that limit? Have there been any detections that could result from GW but didn't reach this strong rule of confirmation? Widely used in science is a limit of 3 sigma, but that still leaves an uncertainty of 0.3%.
Kind regards
Martin
RE: Hallo Mike ! What has
Hello Martin! SNR = 20:1 or better, which I think is way above 3 sigma. There are strong reasons for such stringency coming from the history of the subject when using resonant bars ( read : bunfights in the time of Joe Weber ). I think my estimate, if S6 had gone better - long quiet periods without hardware issues - was even odds on the Crab, but that stood upon a host of assumptions ( the main one being my humble understandings ). I'm not in the right loop to speak of might-have-beens. In fact one of the lessons from the 70's, as I understand matters, was to pre-agree on what the level of signal confidence should be. But there are other aspects/vetoes, probably the main one being ( near ) coincidence of signals at separated detectors, hence the multi-continental sites.
The trouble with using phrases like 3 sigma and 0.3% etc in this case is that we've never heard a gravitational wave, and so the assumptions underlying the probability distributions may not hold. Other areas of science have the luxury of some established understandings: say, if I am a counter of rabbits then I already have a rabbit prototype to compare with. One very possible GW result is that we hear nothing at all despite good equipment and prolonged data sets. This would not be a failure of the program at all, as, like the Michelson-Morley experiment, it would trigger some radical re-thinking.
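Just to make Martin's 0.3% figure concrete: it is nothing more than the two-sided tail of a Gaussian distribution at 3 sigma, and that Gaussian assumption is precisely what real detector noise ( glitches, non-stationarity and all ) need not obey:

```python
# The '3 sigma ~ 0.3%' rule of thumb is just the two-sided tail of a
# Gaussian distribution -- which is exactly the assumption that may not
# hold for real detector noise.
from math import erfc, sqrt

for n_sigma in (1, 2, 3, 5):
    p_two_sided = erfc(n_sigma / sqrt(2))
    print(f"{n_sigma} sigma: chance of a pure-noise excursion ~ {p_two_sided:.2e}")
```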
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal