Thank you for taking the time to point these things out, klasm. A link in one of my earlier posts in this thread (not in my previous couple of posts) leads to an analysis of the question of possibly observing BHs at the LHC, and also in higher-energy cosmic ray interactions. As you pointed out, a neutron can't collapse on its own. And as you also pointed out, the final moment of the evaporation of “semiclassical” (as the analysis calls them) black holes is catastrophic. However, when the standard Heisenberg uncertainty principle is generalized to “d” dimensions (the GUP scenario in the analysis), the math works out a bit differently: among other things, the calculation for the specific heat, in the final moments, leaves the black hole with too little to evaporate further, and a Planck-sized remnant is formed. This is considered a possible candidate for the dark matter (as the analysis points out).
And as both Mark and klasm have pointed out, the rate of evaporation is very fast; so if one of these were to absorb energy, it would re-radiate it in a very short time. So I'm wondering: if Planck-sized remnants do in fact exist, in what manner would they interact, and what would their galactic distribution be?
I was also asking how they might compare, not to a neutron, but to a neutrino. Both have very little mass, and both are elusive. So, keeping in mind the illustration of Cooper pairs (previous post) showing how electrons can appear boson-like (good trick for a fermion!) through interaction with a superconducting lattice, and assuming that the BH remnants would be distributed in a dense (but not infinitely dense), lattice-like thermodynamic equilibrium, is it possible that neutrinos and BH remnants are one and the same thing?
And could a lattice of BH remnants also be responsible for the observed 2nd and 3rd generations of matter? This could easily be checked by comparing the decay times of the higher generations with the evaporation rates of the quantum BHs...
ChipperQ:
I am not familiar with the d-dimensional theory you are quoting.
According to NIST the Planck mass is 2.17645x10^-8 kg, or 1.2209x10^28 eV. Neutrinos have a mass of <100 eV, and probably <10 eV. There is little likelihood of any connection.
Fermions can pair up and behave like bosons as long as you don't look too closely (or, equivalently, don't apply too much energy). For such pairs to form there has to be some kind of force to hold them together, and enough component fermions to form pairs within the limits of the range and strength of that force. Given an overall density of the universe on the order of 1 nuclear mass equivalent per cubic meter, it is hard for me to see how such a pairing could work.
Cosmology currently rules out all known particles as the source of dark matter. Our knowledge of elementary particles is limited to masses <10^15 eV. Given the zoo of particles discovered in getting to the current level, I think it is likely we will discover many more before we approach the Planck mass.
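As a quick sanity check on the kg-to-eV figure above, the conversion is just E = mc^2; a minimal sketch (constants are standard values, not from the post):

```python
# Convert the Planck mass from kg to eV using E = m * c^2.
m_planck_kg = 2.17645e-8        # Planck mass in kg (the NIST value quoted above)
c = 2.99792458e8                # speed of light, m/s
joules_per_eV = 1.602176634e-19 # energy of one electron-volt, in joules

m_planck_eV = m_planck_kg * c**2 / joules_per_eV
print(f"{m_planck_eV:.4e} eV")  # on the order of 10^28 eV, ~27 orders above a <10 eV neutrino
```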
Well, what one can say is that what is left at the end of BH evaporation will depend on the properties of the black hole itself. As far as classical GR is concerned, a BH can have only three properties: mass, angular momentum, and electric charge.
Here neutrinos differ from BHs, since they also interact via the weak nuclear force in a way that neither string theory nor classical particle physics/GR so far expects a BH to do. Another difference is that a neutrino has a very small mass, less than one eV, while the black holes in the paper you cite have masses in the range of TeV.
One of the main problems with actually getting a long-lived remnant is that for anything this small to be stable, there must exist a conservation law which makes it stable, and for small BHs we do not know of any such conserved property. Any particle which is not stabilised by a conserved property is expected to decay into stable particles with lower energy. This last decay is not by Hawking radiation but by processes more in line with traditional particle physics.
However, these are things which are very far from being well understood. In a recent seminar Gary Horowitz described how, in string theory, an electrically charged black hole can leave behind a remnant in the form of something called a "Kaluza-Klein bubble of nothing". This is not a particle in any normal sense but rather a missing piece of space-time itself. So at least for some kinds of BHs there might be more interesting ends than just a cloud of classical particles.
In the end one can mainly say that we don't really know what will be left after a BH has evaporated, but that it is likely either to be unstable and decay further by processes other than Hawking radiation, or to be something very strange indeed.
Getting the LHC online will be very interesting!
Thanks again, Mark and klasm; your answers are thoughtful and very helpful. I noticed a curious thing in the GUP-scenario analysis about the equation (eq. 21) giving the total multiplicity “N”, used for finding the multiplicity of a particle species “i” produced in BH decay:
I was quite surprised to see a zeta function, zeta(3), here. I checked the reference cited for the equation, and saw that a zeta function also appears in an equation for a Stefan-Boltzmann constant (generalized to n_i-dimensional slices of d-dimensional spacetime), where the argument of the zeta function is n_i.
Which zeta function is being used? More to the point, what is its physical interpretation in the equation? For example, if it's Riemann's zeta function, then zeta(3) is Apery's constant, zeta(3) = 1/1^3 + 1/2^3 + 1/3^3 + ... ≈ 1.2020569, which has several particularly interesting series representations, where each one, it seems to me, lends itself to a different physical interpretation.
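The value is easy to check numerically. A small sketch, assuming the Riemann zeta function, computing zeta(3) both from the defining sum and from Apery's rapidly converging series:

```python
from math import comb

# zeta(3) from the defining sum 1/1^3 + 1/2^3 + ... (converges slowly)
direct = sum(1.0 / n**3 for n in range(1, 100_000))

# Apery's series: zeta(3) = (5/2) * sum_{n>=1} (-1)^(n-1) / (n^3 * C(2n, n)),
# which converges very quickly (a few dozen terms give many digits)
apery = 2.5 * sum((-1) ** (n - 1) / (n**3 * comb(2 * n, n)) for n in range(1, 30))

print(direct, apery)  # both approach Apery's constant, 1.2020569...
```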
That should be the Riemann zeta function. The Riemann zeta function and the gamma function show up quite often in both quantum theory and statistical mechanics. This usually happens when you are counting discrete objects or states, and really comes more from the underlying combinatorics than from anything in the specific physical theory.
When you work with chaotic quantum systems, or in some parts of renormalisation theory, you often encounter expressions containing something called a polylogarithm, and when you evaluate polylogarithms you often get values which are rather nice multiples of the Riemann zeta function.
Thanks, klasm, this is like music to my ears. I've had my head wrapped around the enigmatic distribution of prime numbers for years, from time to time, and it helps tremendously to think of them as “discrete objects” in an allowed “state”. I thought a helpful analogy might be drawn between the prime numbers and fermions: the primes “fill” into the number line as fermions fill energy states in a system in thermodynamic equilibrium, e.g., 2, 3, 5, 7, 11 are “allowed”, while 4, being really (2)(2), and 6 being (2)(3),... 10 being (2)(5), would be examples of states “not allowed”, obeying the Pauli exclusion principle.
But then I remembered there's already a mathematics for the distribution of fermions, called Fermi-Dirac statistics, so I checked, and at first glance the equation, f(E) = 1/(e^((E − μ)/kT) + 1), didn't look like it had anything remotely to do with the primes. However, a plot of it as a function of temperature caught my eye, since I'd previously expressed the sequence of primes in a manner that produces a plot in the same family of curves – the curve rises sharply and then levels off, converging on some value: 0.5 for the Fermi-Dirac distribution as a function of temperature:
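That leveling-off at 0.5 is easy to reproduce numerically: at a fixed energy above the chemical potential, the Fermi-Dirac occupation rises with temperature and saturates at 1/2. A minimal sketch (energies in arbitrary units, with kT folded into one parameter):

```python
from math import exp

def fermi_dirac(E, mu, kT):
    """Mean occupation of a single-particle state at energy E."""
    return 1.0 / (exp((E - mu) / kT) + 1.0)

# At fixed E > mu the occupation climbs with T and levels off at 0.5,
# since the exponent (E - mu)/kT goes to 0 as T goes to infinity.
for kT in (0.1, 1.0, 10.0, 1000.0):
    print(f"kT = {kT:7.1f}  f = {fermi_dirac(1.0, 0.0, kT):.5f}")
```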
This quote, from a wonderful book by John Derbyshire entitled Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics (ISBN 0-452-28525-9), p. 295, relates to my previous question about the zeta function:
Quote:
So yes, it seems that the non-trivial zeros of the zeta function and the eigenvalues of random Hermitian matrices are related in some way. This raises a rather large question, a question that has been hanging in the air ever since that encounter in Fuld Hall in 1972. {Between Freeman Dyson and Hugh Montgomery}
The non-trivial zeros of Riemann's zeta function arise from inquiries into the distribution of prime numbers. The eigenvalues of a random Hermitian matrix arise from inquiries into the behavior of systems of subatomic particles under the laws of quantum mechanics. What on earth does the distribution of prime numbers have to do with the behavior of subatomic particles?
While a bijection as simple as the one drawn above between primes and fermions is asking quite a bit, the folks working in those fields could probably tell whether the following function (built from the sequence of prime numbers) is useful or helpful, by comparing it to the graph (above) and to their observational data. A few reasons for thinking it might be helpful are that it's inherently discrete (like quanta), the zeta function is already “built in”, and whereas the above graph of Fermi-Dirac statistics is globally and locally regular, the following function is globally regular but locally irregular. I'm happy to share it, anyway.
Without being too colorful, the function arose from the following line of reasoning: if, out of the infinitely many whole numbers, the number “2” removes from consideration as primes exactly 50% (of all the whole numbers), how many more are extracted by “3”, adding to the total already removed without recounting the numbers already removed by “2”? And then what about “5”, and “7”, and so on. The function I worked out is:

R_n = R_(n-1) + E_n,

where

E_n = {1 – R_(n-1)} / P_n
“n” is (0,1,2,3,...), P_n is the n'th prime, E_n is the percentage of whole numbers extracted by the n'th prime, and R_n is the total percentage removed with the n'th prime (and all the primes less than it). R_0 = 0.
So for the 1st prime, “2”,
R_1 = 0 + (1 – 0)/2 = .5
and 50% of all whole numbers aren't prime, since they're multiples of 2, and for the next prime “3”:
E_2 = (1 - .5)/3 = .166666... meaning another ~17% of all whole numbers are multiples of 3 (but not of 2), and so
R_2 = .5 + .17 = .67 and now ~67% of all whole numbers are removed from consideration as being primes.
Here are the values for the first 10 primes (rounded off):
[pre]n P_n E_n R_n
1 2 .500 .500
2 3 .167 .667
3 5 .067 .733
4 7 .038 .771
5 11 .021 .792
6 13 .016 .808
7 17 .011 .819
8 19 .010 .829
9 23 .007 .836
10 29 .005 .842[/pre]
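The recursion is easy to check; here is a small sketch that reproduces the table above (E_n works out to be the limiting fraction of whole numbers whose smallest prime factor is P_n):

```python
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

# R_n = R_(n-1) + E_n, with E_n = (1 - R_(n-1)) / P_n and R_0 = 0.
R = 0.0
for n, p in enumerate(primes, start=1):
    E = (1.0 - R) / p
    R += E
    print(f"{n:2d} {p:3d} {E:.3f} {R:.3f}")

# Equivalently 1 - R_n = (1 - 1/2)(1 - 1/3)...(1 - 1/P_n); by Mertens'
# theorem this product tends to 0, so R_n really does converge to 1.
```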
The values of R_n rise sharply and level off, converging to 1 as n goes to infinity, so it differs from the Fermi-Dirac distribution (as a function of temperature) at least by a factor of 2...
The distribution certainly gives me insight into the nature of the global regularity of R_n, so thanks again, klasm, for your remarks. Since it's easy to solve for P_n in terms of R_n, I think combinations of functions, like the one for the Fermi-Dirac distribution, could be used to work out something like a polylogarithm that's essentially f(n) = n'th prime... bit of a challenge...
ChipperQ, what you have started doing in your last post is part of something called sieve methods in number theory. These methods let you get better and better approximations of various counting functions, like the function counting the number of primes less than N.
You might want to take a look at Euler's phi-function, for example.
The fact that the behaviour of the primes shows up in the quantum mechanics of nuclei isn't quite as mysterious as Derbyshire makes it sound, but it is still a very nice connection, on which there has been a lot of work in the last few years.
It would be more accurate to look at things in the direction that number theory influences the way quantum systems must behave, rather than the other way round.
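For anyone who wants to experiment, here is a minimal sketch of Euler's phi-function, computed from its product formula phi(n) = n * prod over the distinct prime factors p of n of (1 − 1/p) — the same kind of (1 − 1/p) factors as in the sieve discussion above:

```python
def euler_phi(n):
    """Count of integers in 1..n coprime to n, via trial-division
    factoring and phi(n) = n * prod_(p | n) (1 - 1/p)."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p), exactly
        p += 1
    if m > 1:                       # one prime factor > sqrt(n) may remain
        result -= result // m
    return result

print([euler_phi(n) for n in range(1, 11)])  # [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]
```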
Thanks for this, klasm! Between working on a better understanding of GR and a regular day job (which I'm grateful to have kept, after seeing how complex astronomy has grown!), my plate is overflowing, the way I like it. The program I wrote in VBasic to generate my list of primes was essentially the Sieve of Eratosthenes, so I'm going to make room for number theory on the plate, too. I'm trying to look at things the way you advised, and it's pretty much the inspiration for the opinion in my profile. Who was it that said, with regard to quantum probability, that if there are no rules prohibiting an event, then the event has a probability of occurring? If it's not correct that math has rules (e.g., the volume of a sphere) that shape the resolution of these probabilities, then I should change my opinion... After recently seeing a remark Hilbert once made (see Thall's History of Quantum Mechanics), I offer the physicists (and mathematicians) here high praise, in advance, for their tolerance and patience if I'm wrong.
And phi(n)(ln ln n)/n as a function of n looks awesome!
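The classic sieve that the VBasic program implemented is only a few lines in any language; a minimal sketch:

```python
def eratosthenes(limit):
    """Return all primes <= limit via the Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            # Cross off multiples of p, starting at p*p (smaller multiples
            # were already crossed off by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```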
Re: black holes at the LHC, I just ran across this. Here's what the signature of a small black hole might look like as it forms and decays:
The caption for the image is, “This image is a simulation of the production and decay of a black hole in a proposed linear collider detector. The black hole quickly evaporates into every type of matter particle. The ‘democratic’ selection of decay products is a distinct signature of black hole decay.” The image is from a slide show of NOVA's “The Elegant Universe” (click on the image for the link).