Comment by chrismorgan 5 days ago
https://github.com/phoenixframework/phoenix/blob/main/instal...
That’s insane. 3000 words of prose boilerplate about the language and framework. Sounds like you need, at the very least, some sort of import directive. I have no idea if “Read and follow the instructions in path/to/phoenixframework/AGENTS.md.” would work.
And then the eclectic mixture of instructions with a variety of ways of trying to bully an intransigent LLM into ignoring its Phoenix-deficient training… ugh.
The thing about language models is that they are *language* models. They don't actually parse XML structure or turn code into an AST; they just generate the next token.
Individual models may have supplemented their training with things that look like structure (e.g. Claude with its XMLish delimiters), but it's far from universal.
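To make "just next-token generators" concrete, here's a rough sketch using OpenAI's tiktoken encoding as a stand-in (Claude's tokenizer is different, but the point carries): the XML-ish delimiters arrive as a flat sequence of token ids, not as a parsed tree.

    # Rough illustration: XML-ish "structure" in a prompt is just a flat
    # token sequence to the model. cl100k_base is used as a stand-in
    # encoding; other models tokenize differently, but the point is the same.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    prompt = "<instructions>Follow the Phoenix guidelines.</instructions>"
    ids = enc.encode(prompt)
    print(ids)                             # a flat list of integers
    print([enc.decode([i]) for i in ids])  # the "tags" are just ordinary token pieces

Any respect a model shows for those tags comes from patterns it saw in training, not from a parser.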
Ultimately, if we want better fidelity to the concepts we're referencing, we're better off drawing on the larger, richer dataset of token sequences in the training data: the total published written output of humanity.