• 0 Posts
  • 4 Comments
Joined 5 months ago
Cake day: July 7th, 2024

  • Schrödinger was not “rejecting” quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.

    What you say here is a popular opinion, but it just doesn’t make any sense if you apply any scrutiny to it, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory, but its probability amplitudes are complex-valued, so an outcome can be assigned an amplitude of -1 or even i rather than an ordinary probability. How probable an outcome is depends on how far its amplitude is from zero (the further from zero, the more probable), but the negative and imaginary values allow amplitudes to cancel each other out in ways that cannot happen in ordinary probability theory. These cancellations are known as interference effects, and they are the hallmark of quantum mechanics.
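    As a toy illustration of that arithmetic (my own sketch, not tied to any particular experiment), compare how complex amplitudes and ordinary probabilities combine when there are two indistinguishable paths to the same detector:

    ```python
    import numpy as np

    # Two indistinguishable paths leading to the same detector.
    amp_path_1 = 1 / np.sqrt(2)    # amplitude +1/sqrt(2)
    amp_path_2 = -1 / np.sqrt(2)   # same magnitude, opposite sign

    # Quantum rule: add the amplitudes first, then square the magnitude.
    p_quantum = abs(amp_path_1 + amp_path_2) ** 2                # 0.0 -- the paths cancel

    # Ordinary probability rule: add the per-path probabilities directly.
    p_classical = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2    # 0.5 + 0.5 = 1.0

    print(p_quantum, p_classical)   # 0.0 1.0
    ```

    With non-negative probabilities the two contributions can only add; with signed or imaginary amplitudes they can wipe each other out, and that cancellation is the interference.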

    Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but instead describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, this is not just a statistical prediction: there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense, because you could trivially set up a chain reaction that scales the effect of a single particle in a superposition of states up to a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be “smeared out” here and there simultaneously too.

    Ironically, it was Schrödinger himself who spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning it makes all the same predictions as traditional quantum mechanics; it is a mathematically equivalent theory. What is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

    Schrödinger did not like this on philosophical grounds due to the lack of continuity: there were discrete “gaps” between interactions. He criticized it, saying “I do not believe that the electron hops about like a flea,” and came up with his famous wave equation as a replacement. The wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and it makes the same predictions as matrix mechanics. People then used the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

    However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it randomly “jumps” into being a particle, which is sometimes called the “collapse of the wave function.” This made even less sense because suddenly there is a special role for measurement. Take the cat example. Why doesn’t the cat’s observation of the wave cause it to “collapse,” while the person’s observation does? There is no special role for “measurement” in quantum mechanics, so it is unclear how to even answer this within the framework of quantum mechanics.

    Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.

    There is no reason you have to actually believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, comes with an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem. But there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, that isn’t a problem. If I flip a coin and look at the outcome and it’s heads, I would say it has a 100% chance of being heads because I saw it as heads, but if I covered it up so you did not see it and then asked you, you would assign a 50% probability to heads or tails. If you believe the wave function represents a physical entity, then you could set up something similar in quantum mechanics whereby two different observers would describe two different waves, and so the physical shape of the wave would have to differ based on the observer.

    There are a lot more problems as well. The number of dimensions of a probability distribution scales up exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there are four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially with the number of bits under consideration.
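    To put rough numbers on that growth (just a counting exercise, assuming 16 bytes to store one complex amplitude):

    ```python
    # Number of amplitudes needed to describe n bits/qubits, plus a naive memory estimate.
    for n in (1, 2, 3, 4, 10, 50, 300):
        amplitudes = 2 ** n
        print(f"{n:>3} bits -> {amplitudes} amplitudes (~{amplitudes * 16} bytes)")

    # Around 50 qubits the full list of amplitudes already needs tens of petabytes;
    # by 300 qubits it has more entries than there are atoms in the observable universe.
    ```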

    This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as a Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at ya. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.


  • It is weird that you start by criticizing the idea that our physical theories are descriptions of reality and then end by criticizing the Copenhagen interpretation, since that is the Copenhagen interpretation, which says that physics is not about describing nature but about describing what we can say about nature. It doesn’t make claims about underlying ontological reality; it specifically says we cannot make those claims from physics, and thus treats the maths in a more utilitarian fashion.

    The only interpretation of quantum mechanics that actually tries to interpret it at face value as a theory of the natural world is relational quantum mechanics, which isn’t that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something being absolute and point-of-view independent, so most academics struggle to comprehend what it even means to say that reality is relative all the way down, and thus interpreting quantum mechanics at face value as a theory of nature is actually very unpopular.

    All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

    Since you talk about “free will,” I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the present day.

    Without that, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles “know” how to correlate with one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construct the maths in such a way that the hidden variables would be Lorentz invariant.

    Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires not just accounting for quantum mechanics but doing so in a way that is also compatible with special relativity.


  • There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties?

    What makes quantum mechanics distinct from classical mechanics is the fact that not only are there interference effects, but statistically correlated systems (i.e. “entangled”) can seem to interfere with one another in a way that cannot be explained classically, at least not without superluminal communication, or introducing something else strange like the existence of negative probabilities.
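    To attach a number to “cannot be explained classically,” the standard CHSH comparison is useful (textbook values, nothing specific to this thread): any local classical model of the correlations is bounded by 2, while the quantum prediction for a pair of spins in the singlet state reaches 2√2 ≈ 2.83.

    ```python
    import numpy as np

    def E(a, b):
        """Correlation of spin measurements along angles a and b for the singlet state."""
        return -np.cos(a - b)

    # Standard CHSH measurement angles (radians).
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

    print(abs(S))   # ~2.828 = 2*sqrt(2); any local classical model must satisfy |S| <= 2
    ```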

    If it wasn’t for these kinds of interference effects, then we could just chalk up quantum randomness to classical randomness, i.e. it would just be the same as any old form of statistical mechanics. The randomness itself isn’t really that much of a defining feature of quantum mechanics.

    The reason I say all this is because we actually do know why there is a distinction between quantum and non-quantum objects and why large objects do not exhibit quantum properties. It is a mixture of two factors. First, larger systems like big molecules have smaller wavelengths, so interference between them becomes harder and harder to detect. Second, there is decoherence. Even for small particles, if they interact with a ton of other particles and you average over those interactions, you find that the interference terms (the “coherences” in the density matrix) converge to zero; i.e., when you inject noise into a system, its average behavior converges to a classical probability distribution.
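    A deliberately crude way to see that averaging (a toy dephasing model, not a full treatment of decoherence): prepare an equal superposition, kick it with a random phase on each run to stand in for uncontrolled interactions with the environment, and average the resulting density matrices. The diagonal probabilities stay at 50/50 while the off-diagonal coherences wash out.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Equal superposition |+> = (|0> + |1>)/sqrt(2)
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

    n_runs = 10_000
    rho_avg = np.zeros((2, 2), dtype=complex)
    for _ in range(n_runs):
        phi = rng.uniform(0, 2 * np.pi)                  # random phase kick from the "environment"
        kicked = np.array([psi[0], psi[1] * np.exp(1j * phi)])
        rho_avg += np.outer(kicked, kicked.conj())       # density matrix for this run
    rho_avg /= n_runs

    print(np.round(rho_avg, 3))
    # Diagonal stays ~[0.5, 0.5]; off-diagonal coherences average to ~0,
    # i.e. a plain classical 50/50 distribution with no interference left.
    ```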

    Hence, we already know why there is a seeming “transition” from quantum to classical. This doesn’t get rid of the fact that it is still statistical in nature: if a particle has a 50% chance of being over here and a 50% chance of being over there, decoherence gives you no reason why, when you measure it and find it over here, it wasn’t over there. Decoherence doesn’t tell you why you actually get the results you do from a measurement; it’s still fundamentally random (which bothers people for some reason?).

    But it is well-understood how quantum probabilities converge to classical probabilities. There have even been studies that have reversed the process of decoherence.


  • Quantum internet is way overhyped and likely will never exist. Not only are there no practical benefits to using QM for the internet, but it has huge inherent problems that make it unlikely to ever scale.

    • While technically yes, you can make “unbreakable encryption,” this is just a glorified one-time pad, which requires the key to be the same length as the message, and AES-256 is already considered unbreakable even by quantum computers. So good luck cutting your internet bandwidth in half for purely theoretical benefits that exist on paper but will never be noticeable in practice!
    • Since it’s a symmetric cipher, it doesn’t even work for internet communication unless you have a way to distribute keys, and there is something called quantum key distribution (QKD), based around protocols like BB84. However, these protocols only guarantee that you can detect when someone snoops on the key exchange; they do not make snooping harmless the way Diffie-Hellman does, where an eavesdropper can watch the traffic and still learn nothing while the exchange goes through. Meaning, with QKD a person can effectively shut down the network’s traffic just by observing the qubits in transit, without having to do anything else to them (there is a rough simulation of this after the list). How can governments and private companies possibly build an internet that depends on guaranteeing nobody ever looks at packets as they travel through the network?
    • QKD is also susceptible to man-in-the-middle attacks just like Diffie-Hellman, a problem we solve in classical cryptography with digital signature algorithms. There are quantum digital signature (QDS) schemes, but they rely on Holevo’s theorem, which says that the “collapse” is effectively a one-way process and only a limited amount of information can be extracted from it, so you cannot reconstruct a qubit’s initial state simply by measuring it. The problem, however, is that Holevo’s theorem also implies that if you had many copies of the same qubit, you could extract more information from it. Meaning, all public keys would have to be consumable, because handing out many copies of them would undermine their security, and that just isn’t something that can scale.
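    To make the “observing the traffic shuts it down” point concrete, here is a rough intercept-and-resend simulation of BB84 (idealized: no noise, no losses, and the eavesdropper measures every qubit in a random basis). With nobody listening, the sifted key comes out error-free; with an eavesdropper, roughly 25% of the sifted bits disagree, so the legitimate parties detect the intrusion and discard the key. The snoop learns little, but the link becomes unusable, which is effectively a denial of service.

    ```python
    import random

    def bb84_error_rate(n_qubits=100_000, eavesdrop=False):
        """Error rate in the sifted key for an idealized BB84 run."""
        errors = sifted = 0
        for _ in range(n_qubits):
            bit = random.randint(0, 1)            # Alice's raw key bit
            alice_basis = random.randint(0, 1)    # 0 = rectilinear, 1 = diagonal
            basis, value = alice_basis, bit       # state of the qubit in transit

            if eavesdrop:
                eve_basis = random.randint(0, 1)
                if eve_basis != basis:
                    value = random.randint(0, 1)  # wrong-basis measurement randomizes the bit
                basis = eve_basis                 # Eve resends in her own basis

            bob_basis = random.randint(0, 1)
            if bob_basis != basis:
                value = random.randint(0, 1)      # wrong-basis measurement randomizes again

            if bob_basis == alice_basis:          # sifting: keep rounds where bases match
                sifted += 1
                errors += (value != bit)
        return errors / sifted

    print(bb84_error_rate(eavesdrop=False))   # ~0.0
    print(bb84_error_rate(eavesdrop=True))    # ~0.25 -> intrusion detected, key thrown away
    ```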

    And all this for what? You take on all these drawbacks for what? Imagined security benefits that you won’t actually notice in real life? The only people I could ever see using this are hyperparanoid governments. A government intranet could be highly controlled, highly centralized, and not particularly large in scale, since by its very nature you don’t want many people having access to it. So I could see such a government getting something like that to work, but there would be no reason to replace the internet with it.