Comment by __alexs
An agent cannot interact with tools without prompts that include them.
But also, the text you quoted is NOT recursive iteration of an empty prompt. It's two models connected together and explicitly prompted to talk to each other.
Words are magic. Right now you're thinking of blueberries. Maybe the last time you interacted with someone in the context of blueberries. Also. That nagging project you've been putting off. Also that pain in your neck / back. I'll stop remote-attacking your brain now HN haha
I asked Claude which Python linters it would find useful, and it named several and started using them by itself. I implicitly asked it to use linters, but didn't tell it which. Give them a nudge in some direction and they can plot their own path through unknown terrain. That requires much more agency than you're willing to admit.
> tools without prompts that include them
I know what you mean, but what if we tell an LLM to imagine whatever tools it likes, then have a coding agent try to build those tools once they're described?
Words can have unintended consequences.