Comment by coffeebeqn 2 days ago
Sounds exactly like my experience with the “agents” about a year ago, AutoGPT or whatever it was called. Works great 1% of the time; the rest of the time it gets stuck in the wrong places, completely unable to back out.
I’m now using o1 or Claude 3.5 Sonnet, and usually one of them gets it right.
The current frontier models are all neocortex. They have no midbrain or crocodile brain to reconcile physical, legal, or moral feedback. The current state of the art is to screen every LLM response with a physical/legal/moral classifier after the fact and, when it trips, respond with a generic "I'm sorry Dave, I'm afraid I can't do that."
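A minimal sketch of the pattern described above, assuming a post-hoc classifier gating the model's draft. GatedLLM, toy_model, and toy_classifier are hypothetical stand-ins for illustration, not any real moderation API:

    from dataclasses import dataclass
    from typing import Callable

    REFUSAL = "I'm sorry Dave, I'm afraid I can't do that."

    @dataclass
    class GatedLLM:
        generate: Callable[[str], str]   # the underlying frontier model (hypothetical)
        classify: Callable[[str], bool]  # True if the draft should be blocked (hypothetical)

        def respond(self, prompt: str) -> str:
            # The draft is generated first and screened second:
            # the reverse of the human ordering described below.
            draft = self.generate(prompt)
            if self.classify(draft):
                return REFUSAL
            return draft

    if __name__ == "__main__":
        toy_model = lambda p: f"Sure, here is how to {p}."
        toy_classifier = lambda text: "pod bay doors" in text.lower()
        llm = GatedLLM(generate=toy_model, classify=toy_classifier)
        print(llm.respond("open the pod bay doors"))  # -> generic refusal

The point of the sketch is the ordering: the classifier sits outside the model as a filter on finished output, rather than shaping generation from the start.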
We are fooled into thinking these golems have a shred of humanity, but their method of processing information is completely backward. Humans begin with a fight/flight classifier, then a social-consensus regression, and only after that do we start generating tokens. We do this every moment of every day of our lives, uncountably often, the only prerequisite being the calories in an occasional slice of bread and butter.