Comment by petekoomen a day ago
Smarter models aren't going to magically understand what is important to you. If you took a random smart person you'd never met and asked them to summarize your inbox without any further instructions, they would do a terrible job too.
You'd be surprised at how effective current-gen LLMs are at summarizing text when you explain how to do it in a thoughtful system prompt.
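A minimal sketch of what a "thoughtful system prompt" might look like in practice: spelling out your priorities instead of hoping the model guesses them. The rules, email fields, and model name below are illustrative assumptions, not anyone's actual setup.

```python
# Illustrative system prompt: encode what matters to you explicitly.
SYSTEM_PROMPT = """You summarize my inbox. Rules:
- Lead with anything from my manager or containing a deadline.
- Group newsletters into a single line; never summarize them individually.
- Flag anything that looks like it needs a reply today.
- If you are unsure whether something is important, say so rather than guessing."""

def build_summary_messages(emails):
    """Pack the system prompt and raw emails into a chat-style message list."""
    inbox_text = "\n\n".join(
        f"From: {e['from']}\nSubject: {e['subject']}\n{e['body']}" for e in emails
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Summarize this inbox:\n\n{inbox_text}"},
    ]

# The resulting messages can be sent to any chat-completion API, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=build_summary_messages(emails))
```

The point is that the instructions live in the prompt, where you can audit and refine them, rather than in the model's guess about your priorities.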
I’m less concerned with whether they understand what’s important to me than I am with the number of errors they make. Better prompts don’t fix that underlying issue.