Thanks, Mike. A deep well for sure – no problem with thirsting but 'my cup runneth over' faster than I can drink :))
Yup, it can be like drinking from a firehose! :-)
Quote:
To Wit:
Virtual particles aside for a moment, consider the photon energy density: the result is independent of spatial dimensions and can be applied to any region of space that's in equilibrium. Recall how the 'Ultraviolet Catastrophe' is avoided, and note the comparison of curves showing the Rayleigh-Jeans Law vs. the quantum Planck Radiation Formula - then compare these to the equation for Newtonian gravitational force alongside the modified equation using the notion of 'separation threshold' (mentioned previously). Question: to avoid an infinite gravitational force (from quarks to black holes), what about multiplying the standard Newtonian equation by both the Bose-Einstein energy distribution function and the Fermi-Dirac energy distribution function ( i.e., 1 / [A^2 e^(2E/kT) - 1] )...?
Congratulations Chipper! That is pretty well the approach of the supersymmetry ( SUSY ) crew. If one can associate with each boson a corresponding fermion ( photon with photino, and electron with selectron, say ) then such inclusion in the summations eliminates infinities from either. The denominators of the functions quoted result from limits of algebraic series/integrals. Problem is, we haven't found any photinos/selectrons/etc yet.
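Schematically - and assuming exact, unbroken supersymmetry, which is an idealisation, since any real SUSY would have to be broken - every bosonic mode of frequency w comes with a fermionic partner at the same w, and their vacuum ( zero point ) energies enter the sum with opposite signs :

sum over modes [ (1/2) h_bar w ( boson ) - (1/2) h_bar w ( fermion ) ] = 0

so the infinities cancel pairwise. But only if the partners actually exist at matching masses, which is the rub.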
It is a fascinating thing, the boson/fermion divide. There's a deep lesson about the universe in that. Essentially if you look 'on the other side' of something ( 180 degrees away, but in a phase sense, not physical space ) does it look the same? I know if I do two such operations I should get back to what I started with. So :
( whatever ) ^ 2 = 1
so that means :
whatever = 1
or
whatever = - 1
gives you bosonic and fermionic behaviour, respectively. To superpose Bose particles you use +1 in your summations; to superpose Fermi particles you use -1. Hence the probability of finding bosons in the same state increases, but with fermions it decreases.
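Here's a toy numerical sketch of that ( the two states and the amplitude values are invented purely for illustration ) :

import numpy as np

# Two single-particle states; amplitude vectors chosen so the particles
# are not in identical superpositions ( else the fermion case vanishes
# outright ).
a = np.array([0.8, 0.6])   # particle 'a' amplitudes for states 0 and 1
b = np.array([0.6, 0.8])   # particle 'b' amplitudes for states 0 and 1

def same_state_prob(sign):
    # Two-particle amplitude for outcome (i, j) : a_i b_j + sign * a_j b_i.
    # sign = +1 symmetrises ( bosons ), -1 antisymmetrises ( fermions ),
    # 0 keeps only the direct term ( distinguishable particles ).
    amp = np.outer(a, b) + sign * np.outer(b, a)
    amp = amp / np.linalg.norm(amp)   # normalise so all outcomes sum to 1
    return abs(amp[0, 0])**2 + abs(amp[1, 1])**2

print(same_state_prob(0))    # distinguishable baseline : 0.4608
print(same_state_prob(+1))   # bosons : ~0.48, enhanced above baseline
print(same_state_prob(-1))   # fermions : exactly 0 ( Pauli exclusion )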
We don't actually know what quantum mechanical phase is. We can't measure it per se, only differences in phase, because what we detect are the probabilities that interference alters. Though conservation of charge does result from invariance under phase shift, as per Emmy Noether.
Cheers, Mike.
( edit ) So if you take all the particles in some problem and change each of their phases by some constant amount, then repeat the calculation, you'll get the same result. This turns out to be the same as saying you haven't altered the total electric charge in the situation. Hmmmm .....
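You can see the first half of that in a couple of lines ( the amplitude values here are made up ) :

import numpy as np

amp = np.array([0.6 + 0.0j, 0.8j])    # some two-outcome amplitudes
shifted = np.exp(1j * 1.234) * amp    # shift every phase by the same constant
print(np.abs(amp)**2)                 # [0.36, 0.64]
print(np.abs(shifted)**2)             # identical : a global phase is invisible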
( edit ) I should add that whenever you multiply two sums you get terms corresponding to every combination of the individual terms of each. Say :
( a + b )( c + d ) = ac + ad + bc + bd
and so on for longer cases. A series is really another word for sum. So when you multiply those denominators you are effectively 'blending'. Did you mean to do that? :-)
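To spell out the blend : writing x = Ae^(E/kT), the Bose and Fermi denominators multiply as a difference of squares,

( x - 1 )( x + 1 ) = x^2 - 1 = A^2 e^(2E/kT) - 1

which is exactly where the 2E/kT in the quoted expression comes from - with a squared A along for the ride.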
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Thanks, Mike - mostly just a lucky guess after many wrong ones, and not necessarily the correct one yet, as you point out. But still the kind of self-confidence builder no one should be without from time to time :)
Quote:
So when you multiply those denominators you are effectively 'blending'. Did you mean to do that?
Was thinking about a region of 'empty' space in the cosmos as a gradient, from typical energy density in a solar system, to density in the halo of a galaxy, to density in the space between galaxies. Both fermions and bosons would be present. Then thought about regions of confinement (by the standard forces, from strong to gravity) and again both fermions and bosons would be there. So to consider the total numbers of particles (to find the total energy and relate that to 'stiffness' or energetic equilibria of space) it made sense to use both distribution functions. Lots more to learn :)
At least you've identified the core of the problem. Half the battle is finding the right question to answer! Dear Douglas Adams put it so well : ?? = 42 :-)
Another principle of note is loosely called 'anarchy', a sort of Murphy's Law of Quantum Mechanics. If you have some initial state ( say, particles going into a region ) and a final state ( particles, not necessarily the same ones, leaving the region ), then to calculate the amplitude ( a complex number whose squared magnitude gives a probability ) you must consider all possible mechanisms/ways of getting from the first state to the second.
[ There's an additional issue, called normalisation, so that you can form a denominator to divide the above squared magnitude by - and hence get a number between zero and one. Zero means 'no way', one means 'certainty'. One can only talk of probability with respect to some 'universal' set. Venn diagrams and whatnot ... ]
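As a bare-bones illustration ( three invented amplitudes for three conceivable final states ) :

import numpy as np

amps = np.array([0.3 + 0.4j, 0.5, 0.2 - 0.1j])      # amplitude per final state
probs = np.abs(amps)**2 / np.sum(np.abs(amps)**2)   # normalise over the full set
print(probs, probs.sum())   # each between 0 and 1, total exactly 1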
So this gives you the Feynman diagram approach where your particles seem to sample all possible modes of behaviour that could apply in between. Like going through both slits of a Young's setup. A key consideration here is the 'indistinguishable' comment in those helpful statistics distributions you mentioned.
If I have non-identical, thus distinguishable, particles entering and similarly leaving the region, then that knowledge ( eg. a proton went up and a neutron went down, say ) narrows down the possible diagrams I am going to consider. But if I have, say, two protons entering and two leaving, then because I can't 'label' individual protons to know which is which, I have to add in the possibility of them swapping over in between. So then the solution includes more cases to sum and eventually get a probability. The boson/fermion thing guides you as to whether you add or subtract specific possibilities within the sum.
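In symbols : for the two-proton case the probability of a given pair of outgoing directions goes as | A_direct - A_swap |^2 ( minus, protons being fermions ), rather than the |A_direct|^2 + |A_swap|^2 you'd write for distinguishable particles.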
The technical problem then comes down to whether one has correctly accounted for all cases of interest that contribute to the observed outcome ( specific final state ), and whether one can form the total sum/superposition of cases ( all conceivable final states ) to make the probability denominator. And if you look, meaning there's some additional interaction, the superposition is now over a smaller subset, because you have excluded some cases by looking! In a Young's experiment, knowing which slit was taken removes the interference for those particles about which which-way knowledge was obtained.
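Numerically that looks like this ( amplitudes invented; the phase difference stands in for the path difference to some detector spot ) :

import numpy as np

a1 = 0.5 * np.exp(1j * 0.0)          # amplitude via slit 1
a2 = 0.5 * np.exp(1j * np.pi / 3)    # amplitude via slit 2

print(abs(a1 + a2)**2)          # no looking : superpose first, then square (~0.75)
print(abs(a1)**2 + abs(a2)**2)  # which-way known : add probabilities (0.5)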
And then there is 're-normalisation', which is a fancy way of ignoring the detail ( and hence avoiding the analysis ) of behaviours below a certain distance scale ( or equivalently above a certain energy ). Instead one puts quantities in place to represent the 'as if' situation of those summarised pieces. It's a bit like when one does accounting for a large organisation. The managing director, even a department head, doesn't really want to know exactly how many paper clips were bought. Or pencil sharpeners used. Or boxes of tissues. So you bung in estimates for the likes of 'petty cash' or 'stationery' or 'sundry items'. Then use those figures to flow through your subsequent accounting steps.
You will immediately see the dangers here. How do we get the proper estimates for finer scale particle behaviour? How do the little guys actually behave over small distances and short times? Do such small distances and times really 'exist'? How far off mass shell do they go? Are the sums actually finite? Is there a humungous pile of paper clips out back? :-)
The really cheeky bit is using one infinite sum to offset another infinite sum ( eg. subtract fermion behaviours from boson behaviours ), and then ( allegedly !! ) getting a finite answer that (a) is meaningful and (b) is measurable. Here be dragons ..... :-)
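A toy version of that trick : let the 'boson' terms be b_n = n + 1/n^2 and the 'fermion' terms be f_n = n. Each series diverges on its own, yet pairing terms before summing gives

sum over n of ( b_n - f_n ) = sum over n of 1/n^2 = pi^2 / 6

- finite and meaningful, but only because of how the terms were paired off. Rearrange the pairing and the dragons return.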
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal