Comment by js8
Formal verification is ultimately an epistemological problem: how do we know that our model of reality corresponds to reality?
People have commented that validation is different from verification, but I think validation can be done by formally specifying the environment the program runs in.
For example, the question of whether a user can do X in your software corresponds to the question: is there a sequence of user inputs such that the software plus operating system produces the output X?
By including the environment (like the operating system) into the formal spec we could answer (in theory) such questions.
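To make this concrete, the "software plus environment" can be viewed as a single transition system, and the question "can the user reach X?" becomes a reachability query over its joint state space. Below is a minimal, hypothetical sketch: the state, the inputs, and the `step` function are all invented toy stand-ins for a real program-plus-OS model, but the BFS search is the actual shape of the question.

```python
from collections import deque

# Toy model: program + environment as one finite transition system.
# A state is (strokes_drawn, saved). All names here are illustrative.
INPUTS = ["click", "drag", "save"]

def step(state, user_input):
    """Hypothetical transition function for the combined system."""
    strokes, saved = state
    if user_input == "click":
        return (strokes + 1, saved)
    if user_input == "drag":
        return (strokes + 2, saved)
    if user_input == "save":
        return (strokes, True)
    return state

def reachable(initial, goal, max_strokes=10):
    """BFS over the joint state space: return a sequence of user
    inputs driving the system into a goal state, or None."""
    seen = {initial}
    queue = deque([(initial, [])])
    while queue:
        state, trace = queue.popleft()
        if goal(state):
            return trace
        for inp in INPUTS:
            nxt = step(state, inp)
            if nxt[0] <= max_strokes and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [inp]))
    return None

# "Is there an input sequence that yields a saved drawing
#  with at least 3 strokes?"
trace = reachable((0, False), lambda s: s[0] >= 3 and s[1])
```

Real model checkers (TLA+/TLC, SPIN) do essentially this over far larger, symbolically represented state spaces; the point is only that once the environment is inside the model, "can the user do X?" is a checkable property rather than a fuzzy one.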
Even if the question is fuzzy, we can today formalize it with AI models. For example, suppose we ask: does a paint program allow the user to draw a rose? We can take a definition of a rose from a neural network that recognizes what a rose is, and formally verify the combined system of the paint program and the rose verifier given by the NN.
> We can take a definition of a rose from a neural network that recognizes what a rose is
Can we? That just shifts the problem to building such a NN.
I think it would already be pretty tricky to get a decent-sized panel of humans to agree on what a "rose" is. Do rose-shaped but yellow flowers count? How about those genetically altered thornless ones? How about perfect imitations made of plastic? Images of roses? The flowers from the list at https://www.epicgardening.com/rose-lookalikes/?
And of course there's the problem of getting a NN to recognise just roses. A neural network with a 0% error rate is essentially mythical, and chasing one is usually not practical for real-world applications. I very much doubt humanity could build a neural network that perfectly recognises roses and only roses.