Comment by btilly 5 days ago


I believe that you're fine with it, simply because that is what you're familiar with. And if you'd grown up with a different way of thinking about these problems, then you wouldn't be fine with it.

Personally, I can work with either system. But, to me, formalism really does feel like a game. And the more that I have thought about the foundations of math, the more dissatisfied I have become with this game. And now I find myself frustrated when people assert the conclusions of the game as truth, instead of as merely being formal statements within a game that mathematicians are choosing to play.

Here is something that I believe.

We owe our current understanding of uncountability to Cantor's metaphor about figuring out which group of sheep is larger by pairing them off. We would have a very different kind of mathematics today if Cantor had instead made a more careful analogy to the problem of trying to count all of the sheep that have ever existed. Even with perfect information about the past, you're doomed to fail, because you can't figure out where to draw the line between ancestral sheep and sheep-like ancestors.

This second metaphor is exactly parallel to uncountability within the computable universe. For example, we can implement reals as a kind of Cauchy sequence: programs for functions f, where f(n) is a rational and |f(n) - f(m)| <= 1/n + 1/m. This works perfectly well. But now Cantor's diagonalization argument clearly does not demonstrate that there are more reals than programs. Instead it demonstrates the limits of what computation can predict about the behavior of other computations.
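To make the representation concrete, here's a minimal sketch in Python (my own illustration; the name `sqrt2` and the bisection method are not from the comment): a "computable real" is just a program whose outputs satisfy the Cauchy bound.

```python
from fractions import Fraction

def sqrt2(n):
    """A computable real in the representation above: returns a
    rational within 1/n of sqrt(2), so |sqrt2(n) - sqrt2(m)| <=
    1/n + 1/m follows by the triangle inequality."""
    # Bisect [1, 2]; once the interval is narrower than 2/n,
    # its midpoint is within 1/n of the true square root.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(2, n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Diagonalizing over an enumeration of all such programs would require deciding, for an arbitrary program, whether it really satisfies the Cauchy bound at every n, and that is where the halting-style limits on computation come in.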

In other words, I just described a system operating on a notion of truth that is directly tied to the reasoning required to establish that truth. And in that system, uncountability is tied to self-reference, and really doesn't mean "more."

I don't know how to really formalize this idea, but I'd be interested if anyone has actually done so.

bubblyworld 4 days ago

Mmm, the problem with computable foundations (in my opinion, anyway) is that they take a lot of things that are trivial in standard foundations (equivalence relations, the basic operations of arithmetic and their laws, quotients, etc.) and fill them with subtle logical footguns.

As you say, some view this as a feature, not a flaw. But to my mind mathematics is a tool for dissecting problems with hard formal properties, and for that I'd like the sharpest scalpel I can find.

For me, classical foundations strike a good balance between ease of use and the subtlety of reasoning required to get results. The non-constructive and self-referential bits don't really bother me; they don't get in the way unless you're studying logic (in which case you're interested in computability and other foundations anyway).