Comment by cachvico
It can be done, it was done, but it doesn't scale.
If you want layers of failsafe and redundancy (as we would do it today), it requires higher level abstractions, e.g. writing in at least C if not C++ or Rust, instead of hand-coded assembler like they did back then.
So yes simple got us there, but it's not useful to repeat the exercise like that again.
This really proves my point.
I suggest that maybe the key to success is limiting ourselves to simpler TTL logic and making up for it by adding 10% additional material. Immediately somebody responds that TTL logic is the old way and can't be as good as modern C, let alone Rust.
So now, instead of a few hundred large, durable transistors and relays, which shrug off radiation, heat, and voltage spikes, and have few enough states that they can be formally proved correct, we need delicate 30+ MHz microprocessors that require special radiation hardening, go up in smoke if their signal lines transiently exceed 10 volts, and run a couple million lines of code.
The arguments here for Rust aren't even wrong, which is the problem. In theory Rust would be better than TTL logic in every way: easier, cheaper, lighter, more capable, more logging, updatable. Professionally, TTL is an argument that can't be won and is therefore career-limiting to make, so finesse wins out.
Yet large projects of every type keep 'mysteriously' failing due to "unforeseen difficulties".