Hi everyone, I'd like to have a look at the paper, but arXiv is blocking my IP address again. This has happened in the past and I think my ISP is using a bad range of addresses, but I've tried to contact arXiv and my e-mails were never answered.
So I was wondering, is there a chance you could point me to a mirror for the paper, or otherwise a proxy I could use to access it?
S5 early results paper
No problemo, try here.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: No problemo, try
Thanks, downloading now.
(wow, that was a fast reply - I should've checked back sooner)
I've had a brief read,
I've had a brief read, thought I'd share my impressions:
This is the frequency band which contains the 'sweet spot' of the detector, i.e. its greatest design sensitivity. So I think ( given the assumptions in the analysis ) one of the following holds:
- there are many sources above that strain threshold. We should have heard some. We have a problem. [ e.g. there are 10 sources of which 9 should have been heard. ]
OR
- there are few sources above that strain threshold. We were unlucky/unfortunate. [ e.g. there is 1 source which we should probably have heard. ]
OR
- there are no sources above that level. The detectors have correctly reported that. [ e.g. there are 0 sources so 0 were heard. ]
Crunchers, please note this aspect; it explains why some desires can't/won't be met. :-)
So a detection ( yeah! ) may also give a correlation with other known data on some object.
Thus many crunchers will likely contribute to any notable spike in the data.
This is a brutal condition! :-)
A shame.
Yeah team!! :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
It says on the main page that
It says on the main page that we're currently in the process of analyzing 5280 hours of data from the 'later' part of S5 - am I right in thinking this is separate from the 660 + 180 hours of 'early' data discussed in this paper? In other words, is it fair to say we've analyzed (660 + 180) / (5280 + 660 + 180) = 13.7% of design sensitivity data with the rest in progress?
RE: It says on the main
Correct.
Well yes. Of the data on the table at present. What this paper and others discuss, or hint at, is that analysis is only really limited by resources.
When we say 'signal analysis' this technically involves 'convolution'. This is a mathematical way ( an integral ) of stepping along the data with a given waveform shape and seeing how well they match: the better the match, the more 'area under the curve' common to the data and the given template will count in the result.
So our computers take some given 30 hour stretch of data from the interferometers, try to align some template ( an assumed waveform shape based on astrophysical thinking about rotating neutron stars ) which repeats along the time axis ( i.e. at a certain frequency ), and yield a number which thus assesses the degree of overlap of the two.
So it is pattern matching and there's more than a few assumptions here. The full gory detail is spread over many published papers.
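To make the idea concrete, here's a toy sketch in Python/NumPy of that kind of template matching: a weak sinusoid is injected into Gaussian noise, and trial-frequency templates are scored by their overlap with the data. This is only a cartoon of the concept, not the actual Einstein@Home pipeline ( which uses the F-statistic, Doppler corrections, and much more ), and all the numbers in it are made up for the demo:

```python
import numpy as np

# Toy template-matching demo: inject a weak sinusoid into Gaussian noise,
# then score trial templates by their normalised overlap with the data.
# A cartoon of the concept only -- not the real Einstein@Home pipeline.

rng = np.random.default_rng(42)
fs = 1024.0                          # sample rate in Hz (made up for the demo)
t = np.arange(0, 8.0, 1.0 / fs)      # 8 seconds of fake detector output

f_true = 100.0                       # frequency of the hidden 'signal' (Hz)
data = rng.normal(0.0, 1.0, t.size) + 0.1 * np.sin(2 * np.pi * f_true * t)

trial_freqs = np.arange(95.0, 105.0, 0.05)
scores = []
for f in trial_freqs:
    s = np.sin(2 * np.pi * f * t)
    c = np.cos(2 * np.pi * f * t)
    # Project onto both quadratures so the unknown signal phase doesn't matter;
    # the result measures the 'area under the curve' shared by data and template.
    scores.append(np.hypot(np.dot(data, s), np.dot(data, c)) / np.sqrt(t.size / 2))

best = trial_freqs[int(np.argmax(scores))]
print(f"loudest template: {best:.2f} Hz (injected: {f_true:.2f} Hz)")
```

The real search faces the same trade-off this toy hides: every extra unknown parameter ( spin-down, sky position ) multiplies the number of templates to try, which is exactly why the computing power matters.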
Anyhows, for us, if we remain available to E@H then even if no more data appeared from the LIGO ( or other ) arrays we could still go over the same set with different search parameters.
This approach will hopefully reward clever/reasoned/calculated guessing about what the golden needle looks like in a humungous haystack. I hope/feel one day we will go 'ouch'. :-)
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
RE: No statistically
This is VERY discouraging news :(
Does it mean our theory is wrong, or are we simply looking for the wrong patterns due to some mistakes in the implementation?
I've been looking for papers
I've been looking for papers that predict the expected values for the type of objects E@H is involved with searching for. There's no shortage of ideas out there! :-)
It's model dependent. For the continuous waves we seek, the base idea is a rotating neutron star. As far as I can tell:
If it were a perfectly spherical shape then it wouldn't radiate any waves. There is a concept of "nonaxisymmetric" which implies some mass feature on or within the star is not evenly distributed around the ( "North - South" ) rotation axis. So there's a bump or a pimple or somesuch in one area, and when the star rotates it is 'unbalanced' - like a car wheel can be if the tire is not fitted right. So the first assumption is how out of balance, or nonaxisymmetric, the star is.
Then there is how much of the rotational energy goes out in gravitational waves versus other modes of loss ( say the traditional pulsar radio signal ). This affects the rate of spinning down of the neutron star. Our search has several choices for that.
Thirdly is where is the star with respect to Earth. If it's further away then the signal is smaller. So the talk is of the presumed population in space of these stars.
From my brief browsing it seems the expected strain is about 10^(-24) and below for 'reasonable' models, with the high end of the range belonging to nearer and/or more 'wobbly' stars, and the signal decreasing with distance and with increasing symmetry.
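As a back-of-the-envelope check of that 10^(-24) figure, here's the standard quadrupole strain estimate from the CW literature, h0 = ( 4 pi^2 G / c^4 ) * I * epsilon * f_gw^2 / d, evaluated in Python. Every parameter value below is an illustrative assumption on my part, not a number from the paper:

```python
import math

# Standard quadrupole estimate of the strain from a deformed rotating
# neutron star: h0 = (4 pi^2 G / c^4) * I * epsilon * f_gw^2 / d.
# All values below are illustrative assumptions, not paper results.

G = 6.674e-11            # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8              # speed of light (m/s)
KPC = 3.086e19           # one kiloparsec in metres

I = 1e38                 # canonical neutron-star moment of inertia (kg m^2)
epsilon = 1e-6           # assumed ellipticity (the size of the 'bump')
f_gw = 200.0             # GW frequency in Hz (twice the rotation frequency)
d = 1.0 * KPC            # assumed distance to the star

h0 = (4 * math.pi**2 * G / c**4) * I * epsilon * f_gw**2 / d
print(f"h0 ~ {h0:.1e}")  # ~ 4e-26 for these values
```

Pushing epsilon up toward 10^(-5) and the distance down to a few hundred parsecs lands you near the 10^(-24) ballpark, which matches the 'nearby and/or wobbly' end of the range.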
So I reckon that means my third option : "- there are no sources above that level. The detectors have correctly reported that."
No need to be discouraged! The LIGO planners weren't expecting firm detections until Advanced LIGO. It was always understood that the design and implementation would be incremental, that it would take progressive refinement toward the fancier engineering features. Each time one of these reports is published it shows ever more experience with the processes of the project. Practice makes perfect! :-)
It would have been better if the hardware injections ( deliberate 'bumping' of the interferometers ) were more timely. That's a neat check of the implementation. Simulate a wave arrival and see if we pick it up.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Thank You for the
Thank you for the explanation. So if I understood you right, we have two options here to move forward:
1) think of other patterns to look for in the same data
2) fine-tune the instrumentation to look for the same but weaker patterns
And if I've understood you right, we should still be optimistic about the 2nd option.
RE: 2) fine-tune
Describing this as 'fine-tuning the instruments' is really not very accurate.
The construction of the LIGO detectors was completed in 1999, when the serious commissioning work began. Since that time, the strain sensitivity has increased by more than two orders of magnitude (see the noise evolution graph).
The road-map for the LIGO detectors includes two more significant upgrades, so that by 2014 the instruments will be one order of magnitude more sensitive than during the S5 run. This means that we can observe a spatial volume that is 1000 times larger than what was visible during S5 (the visible volume grows like the cube of the sensitivity).
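The cube comes from a simple argument: the strain from a source of fixed intrinsic strength falls off as 1/distance, so a sensitivity gain translates linearly into range, and range cubes into volume. A one-line sketch (the factor of 10 is the road-map number from above):

```python
# Strain falls off as 1/d, so a sensitivity improvement s extends the
# maximum detectable distance by s, and the surveyed volume (~ d^3) by s**3.
s = 10.0                                       # sensitivity improvement factor
print(f"range x{s:.0f}, volume x{s**3:.0f}")   # range x10, volume x1000
```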
Describing this evolution as 'fine-tuning' of the instrument is really not accurate! It's like saying that a Porsche 911 is just a 'fine-tuned' version of a Ford Model T.
In addition, we are continuing to improve our analysis methods. For example see this paper on improved analysis methods.
As the detectors and the data analysis methods improve, our chances of making a CW source detection go up. But in absolute terms we can't say how probable this is, because we do not know how big neutron star 'mountains' are. See Figure 5 of this paper for some reasonably solid UPPER LIMITS on the expected maximum strain, as a function of the (fractional) mountain-height epsilon.
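The quadrupole relation sketched earlier in the thread can also be turned around: an h0 upper limit from a search, together with an assumed distance and moment of inertia, implies an upper limit on the mountain height epsilon. The inputs below are placeholders for illustration, not values from Figure 5:

```python
import math

# Invert h0 = (4 pi^2 G / c^4) * I * epsilon * f_gw^2 / d to bound epsilon.
# Placeholder inputs for illustration only -- not values from Figure 5.

G, c, KPC = 6.674e-11, 2.998e8, 3.086e19
I = 1e38                        # canonical moment of inertia (kg m^2)

def epsilon_limit(h0_limit: float, f_gw: float, d: float) -> float:
    """Ellipticity bound implied by an h0 upper limit at GW frequency f_gw, distance d."""
    return h0_limit * c**4 * d / (4 * math.pi**2 * G * I * f_gw**2)

print(f"epsilon < {epsilon_limit(1e-24, 200.0, 1.0 * KPC):.1e}")  # ~ 2e-5
```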
Director, Einstein@Home
It was mentioned in the paper
It was mentioned in the paper that some more sources of instrumental noise are now understood, and were removed from the data after crunching. Has this noise been pre-removed from the data we're now crunching, or is our understanding of these sources too recent to affect the pre-processing of the rest of S5?