Comment by RossBencina 3 days ago
But is computation enough? The computable reals are a subset of the reals.
As far as we know, quantum mechanics does not have a granularity: you can measure any single value to arbitrary precision. You do, however, have limits on measuring two conjugate quantities simultaneously.
Granularity is implied by some, but not all, post standard model physics. It's a very open question.
Whether or not nature is discrete is an open question, but it's rather strongly implied, and there's absolutely nothing to suggest that the universe is incomputable.
> You can measure any value to arbitrary precision
Quantum mechanics lets you refine one observable indefinitely if you are willing to sacrifice conjugate observables.
You can measure that one observable to an arbitrarily (not infinitely!) high precision -- if you have a correspondingly arbitrary duration of time and arbitrarily powerful computational resources. That measurement of yours, if exceedingly precise, might require a block of computronium that uses the entire energy resources of the universe. As a practical matter, that's not permitted. I'm certainly unaware of any measurement in physics to more than 100 significant digits, let alone "arbitrary precision".
In fact, no experiment has ever resolved structure down to the Planck length. A fundamental spatial resolution, if it exists, is likely to be quite a lot smaller than the Planck length; even the diameter of the electron has been estimated at anywhere from 10^-22 m to 10^-81 m.
The question I was responding to asked whether reals are "necessary" -- plainly, insofar as reality is concerned, they are not.
We cannot measure anywhere near the Planck length or anything like 100 significant figures.
My concern with the reals goes the other direction. The reals are not closed under common operations. A lot of the work is done in complex numbers, which are. The rationals are not closed under radicals, and radicals seem pretty fundamental.
You can define a finitist model despite this. But it's ugly, and while ugliness is not physically meaningful, it tends to make progress difficult. A usable solution may yet arise; we shall (perhaps) see.
The argument is more like: our most successful sciences, such as physics, use our most successful math, which is ZFC. And thus the incomputable reals are part of the package deal. Maybe it's possible to do physics without packaging in all of these mathematical objects, but to my knowledge it hasn't been done (physics based on a pared-down math, from top to bottom). It's a de facto requirement: despite efforts like intuitionism or Science Without Numbers, nothing has replaced classic ZFC as our most successful math.
If popularity is the measure of success, then ZFC is successful. However, I'm not sure there's anything that requires it, and I doubt that there's a strong claim to be made for any ZFC requirement. Saying science requires it is like saying that ARM is required rather than x86, or that IEEE 754 is required, just because your experimental setup runs on it.
Numbers interpretable as ratios are the Rational numbers, by definition, not the Reals.
This entire discussion is about mathematical concepts, not physical ones!
Sure, yes, in physics you never "need" to go past a certain number of digits, but that has nothing to do with mathematical abstractions such as the types of numbers. They're very specifically and strictly defined, starting from certain axioms. Quantum mechanics and the measurability of particles have nothing to do with it!
It's also an open question how much precision the Universe actually has, such as whether things occur at a higher precision than can be practically measured, or whether the ultimate limit of measurement capability is the precision that the Universe "keeps" in its microscopic states.
For example, let's assume that physics occurs with some finite precision -- so not the infinite precision reals -- and that this precision is exactly the maximum possible measurable precision for any conceivable experiment. That is: Information is matter. Okay... which number space is this? Booleans? Integers? Rationals? In what space? A 3D grid? Waves in some phase space? Subdivided... how?
Figure that out, and your Nobel prize awaits!
Ratios of numbers that are not integers or Rationals are... the Reals. I mean sure, you could get pedantic and talk about ratios of complex integers or whatever, but that's missing the point: The Rationals are closed under division, which means the ratio of any two Rationals is a Rational. To "escape" the Rationals, the next step up is Irrational numbers. Square roots, and the like. The instant you mix in Pi or anything similar, you're firmly in the Reals and they're like a tarpit, there's no escape once you've stepped off the infinitesimal island of the Rationals.
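A minimal sketch of that closure in Python's exact-arithmetic fractions module (the search bound of 200 is just an illustrative choice):

```python
from fractions import Fraction

# The Rationals are closed under division: the ratio of two
# rationals is again an exact rational.
ratio = Fraction(3, 7) / Fraction(22, 5)
print(ratio)  # 15/154, still a Fraction

# sqrt(2) "escapes": an exhaustive search over small fractions finds
# no p/q with (p/q)^2 == 2 (and in fact no such fraction exists at all).
hits = [(p, q) for q in range(1, 200) for p in range(1, 2 * q)
        if Fraction(p, q) ** 2 == 2]
print(hits)  # []
```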
The set of values of any physical quantity must have an algebraic structure that satisfies a set of axioms that include the axioms of the Archimedean group (which include the requirements that it must be possible to compare, add and subtract the values of that physical quantity).
This requirement is necessary to allow the definition of a division operation, which has as operands a pair of values of that physical quantity, and as result a scalar a.k.a. "real" number. This division operation, as you have noticed, is called "measurement" of that physical quantity. A value of some physical quantity, i.e. the dividend in the measurement operation, is specified by writing the quotient and the divisor of the measurement, e.g. in "6 inches", "6" is the quotient and "inch" is the divisor.
In principle, this kind of division operation, like any division, could have its digit-generating steps executed infinitely, producing an approximation as close as desired for the value of the measured quantity, which is supposed to be an arbitrary scalar, a.k.a. "real" number. Halting the division after a finite number of steps will produce a rational number.
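The digit-generating division described above can be sketched in a few lines of Python (`measure` is a hypothetical name; halting after n steps yields the rational q.d1d2...dn):

```python
def measure(dividend, divisor, n):
    """Long division: emit the integer part and the first n decimal
    digits of dividend/divisor, one digit-generating step at a time."""
    q, r = divmod(dividend, divisor)
    digits = []
    for _ in range(n):
        r *= 10
        d, r = divmod(r, divisor)
        digits.append(d)
    # Halting here, after n steps, gives a rational approximation;
    # letting the loop run forever would pin down the exact real ratio.
    return q, digits

print(measure(1, 3, 6))     # (0, [3, 3, 3, 3, 3, 3])
print(measure(355, 113, 4))  # (3, [1, 4, 1, 5])
```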
In practice, as you have described, the desire to execute the division in a finite time is not the only thing that limits the precision of measured values. There are many more constraints, caused by noise that may require longer and longer times to filter out, by external influences that become harder and harder to suppress or account for, by the ever greater cost of the components of the measurement apparatus, by the growing energy required to perform the measurement, and so on.
Nevertheless, despite the fact that the results of all practical measurements are rational numbers of low precision (normally representable as FP32, with only the measurements done in a few laboratories around the world, using extremely expensive equipment, requiring FP64 or an extended-precision representation), it is still preferable to model the set of scalars using the traditional axioms of the continuous straight line, i.e. of the "real" numbers.
The reason is that this mathematical model of a continuous set is actually much simpler than attempting to model the sets of values of physical quantities as discrete sets. An obvious reason why the continuous model is simpler is that you cannot find discretization steps that are good both for the side and for the diagonal of a square, which has stopped the attempts of the Ancient Greeks to describe all quantities as discrete. Already Aristotle was making a clear distinction between discrete quantities and continuous quantities. Working around the Ancient Greek paradox requires lack of isotropy of the space, i.e. discretization also of the angles, which brings a lot of complications, e.g. things like rigid squares or circles cannot exist.
The base continuous dynamical quantities are the space and time, together with a third quantity, which today is really the electric voltage (because of the convenient existence of the Josephson voltage-frequency converters), even if the documents of the International System of Units are written in an obfuscated way that hides this, in an attempt to preserve the illusion that the mass might be a base quantity, like in the older systems of units.
In any theory where some physical quantities that are now modeled as continuous were modeled as discrete instead, space and time would also be discrete. There have been many attempts to model space-time as a discrete lattice, but none of them has produced any useful result. Unless something revolutionary is discovered, all such attempts appear to be just a big waste of time.
The Heisenberg uncertainty principle is completely irrelevant for metrology and it certainly does not have any relationship whatsoever with the algebraic structure of the set of values of a physical quantity.
The Heisenberg uncertainty principle is just a trivial consequence of the properties of the Fourier transform. It just states that there are certain pairs of physical quantities which are not independent (because their probability densities are connected by a Fourier transform relationship), so measuring both simultaneously with an arbitrary precision is not possible.
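That Fourier-transform reading can be illustrated numerically. The sketch below, assuming NumPy is available and using arbitrary grid sizes, samples a Gaussian wave packet and checks that the spread of |f|^2 in x and the spread of its transform in k multiply to about 1/2, the minimum the relation allows:

```python
import numpy as np

# Sample a Gaussian wave packet on a grid (N and L are arbitrary choices).
N, L = 1 << 14, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
f = np.exp(-x**2 / 2)

def spread(grid, density):
    """Standard deviation of a (discretely sampled) probability density."""
    density = density / density.sum()
    mean = (grid * density).sum()
    return np.sqrt(((grid - mean) ** 2 * density).sum())

sigma_x = spread(x, np.abs(f) ** 2)

# The DFT approximates the continuous Fourier transform; k is angular frequency.
F = np.fft.fftshift(np.fft.fft(f))
k = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(N, d=L / N))
sigma_k = spread(k, np.abs(F) ** 2)

print(sigma_x * sigma_k)  # ~0.5: the Gaussian saturates the bound
```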
The Heisenberg uncertainty principle says absolutely nothing about the measurement of a single physical quantity or about the simultaneous measurement of a pair of independent physical quantities.
There is no such thing as a quantity that cannot be measured, i.e. one whose set of values does not have the required Archimedean group structure. If it cannot be measured, it is not a quantity. (There are also qualities, which can only be compared, not measured, so the sets of their values are only ordered sets, not Archimedean groups. An example of a physical quality that is not a physical quantity is Mohs hardness, whose numeric values are just labels attached to certain values; a Mohs hardness of "3" could just as well have been labeled "Ktcwy" or any other arbitrary string. The numeric labels were chosen only to make the ordering easy to remember.)
Your whole comment can just be TL;DR'd as: Forth and the fixed-point philosophy :)
Rationals beat irrationals when it comes to computing with them. You can always approximate irrationals with rationals; even Scheme (Lisp, d'oh) has functions to convert a rational to a decimal and the reverse, decimal to rational.
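In Python terms, the fractions module offers the same pair of conversions (the specific numbers below are just examples):

```python
from fractions import Fraction

# decimal -> rational: recover a small-denominator fraction from a decimal
x = Fraction(0.333333333333)        # the exact value of that float
approx = x.limit_denominator(1000)  # best fraction with denominator <= 1000
print(approx)                       # 1/3

# rational -> decimal: any number of digits, no floating-point roundoff
pi_ish = Fraction(355, 113)
first_digits = pi_ish.numerator * 10**10 // pi_ish.denominator
print(first_digits)                 # 31415929203, i.e. 3.1415929203...
```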
As I have explained in another reply, the reals are not necessary, but using them to model the sets of values of the dynamic quantities is much simpler than any alternative that attempts to use only rational numbers.
During the last decades, many research papers have been published exploring the use of discreteness instead of the traditional continuity, but they cannot be considered anything but failures. The resulting mathematical models are much more complicated than the classic models, without offering any extra predictions that could be verified.
The reason for the complications is that in physics it would be pointless to model a single physical quantity as discrete instead of continuous. You have a system of many interrelated quantities, and trying to model all of them as discrete quickly reaches contradictions even for the simpler models, due to the irrational or transcendental functions that relate some quantities, functions that appear even when modeling something as simple as a rotation or oscillation. If, in order to avoid incommensurability, you replace a classic continuous uniform rotation with a sequence of unequal jumps in angular orientation, to match the possible directions of nodes in a discrete lattice, then the resulting model becomes vastly more complicated than the classic continuous model.
Are they?
The idea that uncountable means more comes from a bad metaphor. See https://news.ycombinator.com/item?id=44271589 for my explanation of that.
Accepting that uncountable means more forces us to debatable notions of existence. See https://news.ycombinator.com/item?id=44270383 for a debate over it.
But, finally, there is this. Every chain of reasoning that we can ever come up with can be represented on a computer. So even if you wish to believe in some extension of ZFC with extremely large sets, PA is capable of proving, of every possible conclusion from your chosen set of axioms, that it does follow from them. So yes, PA is enough.
If you're not convinced, I recommend reading https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden....
This is an under specified question, until some observable goal is attached to "enough". Enough for what?
I like to think about it like this: while real numbers in general are impossible to compute, write down or do anything else with them, many statements about real numbers can be expressed in a useful, computable form.
It all gets mind-bendingly mind-bending really fast, especially with people like Harvey Friedman ( https://u.osu.edu/friedman.8/foundational-adventures/fom-ema... ) or the author of this post trying to break the math by constructing intractably large values with the simplest means possible (thus showing that you can encounter problems that do not fit in the universe even when working in a "simple" theory).
(I saw the username and went to check the audiomulch website, but it did not resolve :( )
> But is computation enough?
Of course not, but that would invalidate the entire project of some/many here to turn reality into something clockwork they can believe they understand. Reality is much more interesting than that.
"Reals" (tragically poorly named) can be interpreted as physical ratios.
That is: Real numbers describe real, concrete relations. For example, saying that Jones weighs 180.255 pounds means there's a real, physical relationship -- a ratio -- between Jones' weight and the standard pound. Because both weights exist physically, their ratio also exists physically. Thus, from this viewpoint, real numbers can be viewed as ratios.
In contrast, the common philosophical stance on numbers is that they are abstract concepts, detached from the actual physical process of measurement. Numbers are seen as external representations tied to real-world features through human conventions. This "representational" approach, influenced by the idea that numbers are abstract entities, became dominant in the 20th century.
But the 20th century viewpoint is really just one interpretation (you could call it "Platonic"), and, just as it's impossible to measure ratios to infinite precision in the real world, absolutely nothing requires an incomputable continuum of reals.
Physics least of all. In 20th and 21st century physics, things are discrete (quantized) and are very rarely measured to over 50 significant digits. Infinite precision is never attainable, and precision to 2000 significant digits is likewise impossible -- and not only because quantum mechanics rules out great precision on very small scales. For example, imagine measuring the orbits of the planets and moons in the solar system. By the time you get to 50 significant digits, you will need to take into account the gravitational effects of the stars nearest the sun; before you get to 100 significant digits, you'll need to model the entire Milky Way galaxy. The further you go in search of precision, the exponentially larger your mathematical canvas will need to grow, and at arbitrarily high sub-infinite precision you'd be required to model the whole of the observable universe -- which might itself be futile, as objects and phenomena outside observable space could affect your measurements. So though everything is in principle simulatable, and precision has a set limit in a granular universe that can be described mathematically, measuring anything to arbitrarily high precision is beyond finite human efforts.