Comment by Meganet 4 days ago

It's the same thing as with humans, that's right. It doesn't do logical reasoning, but even the best humans stop at some level.

But if you read all the knowledge of humans, where does your reasoning start? Probably at a very high level.

If you look at how human brains work, we conduct experiments, right? As software developers, we write tests. ChatGPT can already run Python code, and it can write unit tests.

We do not use proofs when we develop. An AI could actually do this. But in the end, it's more a question of who does it better, faster, and cheaper, eh?
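As a sketch of the test-writing workflow mentioned above, here is the kind of unit test an LLM could generate and execute. The `add` function and its tests are hypothetical examples, using only Python's standard `unittest` module:

```python
import unittest

def add(a, b):
    """Hypothetical function under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_integers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negatives(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    # exit=False so the script continues after the test run
    unittest.main(exit=False)
```

Running and checking tests like these is an empirical loop, not a proof: the tests only confirm behavior on the cases they cover.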

Hugsun 8 hours ago

There is an important difference between humans and LLMs in this context.

Humans, in most cases, have some knowledge of why they know the things they know. They can recall the topics they learned at school, and can deduce that they probably heard a given story from a friend who likes to discuss similar topics, etc.

LLMs have no access to the information they were trained on. They may know that everything they know was learned during training, but they have no way of determining what they learned about and what they didn't.