Comment by jaredsohn
I'm not doing this much now, but this AI-generated text might be more useful if you use AI to ask questions using it as a source.
Seems very idealistic :)
I think this is about when the app is broken and people are keeping a meeting app open to communicate with each other as they scramble to fix things.
So the limitation here is more about problems not yet being solved than about how a 'meeting' is organized.
While in principle that should be great, I don't even slightly trust it as a technique, because you're compounding points at which the LLM can get things wrong. First you've got the speech-to-text engine, which will introduce errors based on things like people mumbling, or a bird shouting outside the window. That's then fed into a summarising LLM to make the meeting notes, which may latch onto the errors in the speech-to-text engine, or just make up its own new and exciting misinterpretations. Finally you're feeding those into some sort of document store to ask another LLM questions about them, and that LLM too can misinterpret things in interesting ways. It's like playing a game of Chinese whispers with yourself.
Meeting notes are useful in two ways, for me:
- I'm reviewing the last meeting of a regular meeting cadence to see what we need to discuss.
- I put it in a lookup (vector store, whatever) so I can do things like "what was the thing customer xyz said they needed to integrate against".
Those are pretty useful. But I don't usually read the meeting notes in full.
I think this is probably more broadly true too. AI can generate far more text than we can process, and long treatises on whatever an AI was prompted to say are pretty useless. But generating text not to present it to the user, but as a cold store of information paired with good retrieval, can be pretty useful (rough sketch below).
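A minimal sketch of that "cold store + retrieval" idea: index the meeting notes and look up the one relevant to a question like the customer-integration example above. This assumes scikit-learn and uses TF-IDF cosine similarity as a stand-in for a real embedding model or vector store; the note texts and query are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical meeting notes acting as the "cold store" of information.
notes = [
    "2024-03-04 sync: customer xyz said they need to integrate against our REST API",
    "2024-03-11 sync: roadmap discussion, no customer items",
    "2024-03-18 sync: customer abc asked about SSO support",
]

# Index the notes once; queries are embedded into the same space.
vectorizer = TfidfVectorizer()
note_vectors = vectorizer.fit_transform(notes)

def lookup(query: str, top_k: int = 1) -> list[str]:
    """Return the top_k notes most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, note_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [notes[i] for i in ranked]

print(lookup("what did customer xyz say they needed to integrate against?"))
```

The point isn't the specific similarity measure; it's that the generated text never has to be read end to end, only retrieved when a question comes up.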