Comment by twoodfin 2 months ago
I’m not clear on the virtues or potential of a model like this over a pure text model using STT/TTS to achieve similar results.
Is the idea that, as these models grow in sophistication, they can properly interpret (or produce) the inflection, cadence, and emotion that's lost in STT/TTS?
There's a lot of data loss and guessing with STT/TTS.
An STT model might misrecognize a word, while an audio LLM may recover the true word from the broader context. A TTS model has to guess the inflection and can get it completely wrong, whereas an audio LLM could learn how to speak naturally and with the right tone (e.g. using a higher pitch when it's interjecting).
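To make the data-loss point concrete, here's a throwaway sketch of the cascaded data flow (all three stage functions are fake stand-ins I made up, not any real model or library API); the comments mark where information gets discarded:

    def speech_to_text(audio):
        # Prosody, tone, emphasis, hesitation and background sound are gone
        # after this step, and a misheard word can't be corrected later
        # because the acoustic evidence never reaches the LLM.
        return "flat transcript, possibly with a wrong word"

    def text_llm(prompt):
        # The text model reasons over the bare transcript alone.
        return "reply text"

    def text_to_speech(text):
        # TTS has to invent inflection for a string that carries no
        # record of the conversational tone.
        return b"reply audio with guessed intonation"

    def cascaded_turn(user_audio):
        return text_to_speech(text_llm(speech_to_text(user_audio)))

    # An end-to-end audio LLM maps audio in to audio out in one model,
    # so acoustic context is available the whole way through:
    #     reply_audio = audio_llm(user_audio)

    print(cascaded_turn(b"raw user audio"))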
Speaking of interjection, an STT/TTS system will never interject, because it relies on VAD and heuristics to guess when to start or stop talking, and the general rule is to only speak after the user has stopped talking. An audio LLM could learn to converse naturally, avoid taking up too much of the conversation, or even talk with a group of people.
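For a sense of why interjection is structurally impossible in the cascaded setup, here's a toy version of the usual energy-based VAD plus silence-timeout rule (the thresholds and names are my own, not any particular framework's API): the reply pipeline is only triggered after a fixed stretch of silence following speech, so by the time the agent is allowed to speak, the user has already finished.

    ENERGY_THRESHOLD = 0.01   # frames above this are treated as speech
    SILENCE_HOLD_FRAMES = 25  # ~500 ms of silence at 20 ms per frame

    def frame_energy(frame):
        return sum(s * s for s in frame) / len(frame)

    def end_of_turn_indices(frames):
        """Yield frame indices where the heuristic declares the user's turn over."""
        silent_run = 0
        heard_speech = False
        for i, frame in enumerate(frames):
            if frame_energy(frame) > ENERGY_THRESHOLD:
                heard_speech = True
                silent_run = 0
            else:
                silent_run += 1
                if heard_speech and silent_run == SILENCE_HOLD_FRAMES:
                    yield i   # only now does STT -> LLM -> TTS get to run
                    heard_speech = False

    # 50 frames of "speech" followed by 40 frames of silence:
    import random
    speech = [[random.uniform(-0.5, 0.5) for _ in range(320)] for _ in range(50)]
    silence = [[0.0] * 320 for _ in range(40)]
    print(list(end_of_turn_indices(speech + silence)))  # fires ~half a second into the silence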
An audio LLM could also produce music or sounds, or tell you what a song is when you hum it. There are a lot of new possibilities.
I say "could learn" for most of this because it requires good training data, but from my understanding most of these are currently just trained with normal text datasets synthetically turned into voice with TTS, so they are effectively no better than a normal STT/TTS system; it's a good way to prove an architecture but it doesn't demonstrate the full capabilities