Comment by artsalamander 15 hours ago

I've been building solutions for real-time voice -> llm -> voice output, and I think the most exciting part of what you're building is the streaming neural audio codec, since you can't ever truly stream STT with Whisper.

However, from a product point of view I wouldn't necessarily want to pipe that into an LLM and have it reply; I think in a lot of use-cases there needs to be a tool/function calling step before a reply. Down to chat with anyone reading this who is working along these lines!

edit: tincans as mentioned below looks excellent too

editedit: noooo, apparently tincans development has ended. There's 10000% space for something in this direction - Chris, if you read this, please let me pitch you on the product/business use-cases this solves regardless of how good LLMs get...
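For what it's worth, the "tool call before a reply" shape can be sketched in a few lines. Everything here is hypothetical (the tool name, the dispatch table, and the `llm_decide` contract are stand-ins, not any real API):

```python
# Hypothetical sketch: route a transcript through a tool/function-calling
# step before any spoken reply is generated.

def lookup_order_status(order_id: str) -> str:
    # Stand-in for a real backend call.
    return f"Order {order_id} shipped yesterday."

# Tools the LLM is allowed to invoke.
TOOLS = {"lookup_order_status": lookup_order_status}

def handle_transcript(transcript: str, llm_decide) -> str:
    """llm_decide returns either ("reply", text) or ("tool", name, args)."""
    decision = llm_decide(transcript)
    if decision[0] == "tool":
        _, name, args = decision
        result = TOOLS[name](**args)
        # Feed the tool result back so the eventual TTS reply is grounded.
        return llm_decide(f"TOOL RESULT: {result}")[1]
    return decision[1]

# Dummy "LLM" for demonstration only.
def fake_llm(text):
    if text.startswith("TOOL RESULT:"):
        return ("reply", text.removeprefix("TOOL RESULT: "))
    return ("tool", "lookup_order_status", {"order_id": "42"})

print(handle_transcript("where's my order?", fake_llm))
# -> Order 42 shipped yesterday.
```

The point is just the control flow: the reply is only synthesized after the tool result is available, rather than letting the model answer from its own head.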

huac 7 hours ago

> there needs to be a tool/function calling step before a reply

I built that almost exactly a year ago :) it was good but not fast enough - hence building the joint model.

malevolent-elk 14 hours ago

I've been playing around with this workflow too - I'm using a "streaming" setup with Whisper (chunking samples to start transcribing while a user is still talking), which pipes into Mistral 8B as a conversation arbiter to walk through a preset IVR tree that calls tools, etc. The LLM isn't responding on its own, though; it's just selecting nodes in the tree with canned TTS outputs.
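The chunking side of that could look something like the sketch below, with the actual Whisper call abstracted behind `transcribe_fn` (the chunk size and sample rate are illustration values; the naive no-overlap buffering is an assumption, not how any particular implementation does it):

```python
# Illustrative sketch: accumulate audio samples into fixed-size chunks and
# transcribe each chunk while the user is still talking. The real Whisper
# call is hidden behind transcribe_fn so this stays backend-agnostic.

SAMPLE_RATE = 16_000        # Whisper works on 16 kHz mono audio
CHUNK_SECONDS = 2.0         # transcribe every ~2 s of accumulated audio

def stream_transcribe(sample_stream, transcribe_fn):
    """Yield partial transcripts as enough samples arrive."""
    buffer = []
    chunk_size = int(SAMPLE_RATE * CHUNK_SECONDS)
    for sample in sample_stream:
        buffer.append(sample)
        if len(buffer) >= chunk_size:
            yield transcribe_fn(buffer)
            buffer = []          # naive: no overlap between chunks
    if buffer:                   # flush the tail when the stream ends
        yield transcribe_fn(buffer)

# Demo with a dummy transcriber that just reports chunk length.
fake_audio = [0.0] * int(SAMPLE_RATE * 4.5)      # 4.5 s of "audio"
parts = list(stream_transcribe(fake_audio, lambda b: f"{len(b)} samples"))
print(parts)   # two full 2 s chunks plus a 0.5 s tail
```

In practice you'd want overlapping windows (words get cut at chunk boundaries), which is exactly why chunked Whisper never feels like real streaming.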

There's a "pause length" parameter that tries to decide whether a user has finished talking before it passes transcripts to the LLM; nothing fancy. I'm still working through how to properly handle the audio input and whether a prompting setup can manage the LLM with enough fidelity to scrap the IVR tree, so if you have any recs I'd love to hear them. It works decently well, but there's lots of room for improvement.
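The "pause length" idea reduces to counting consecutive low-energy frames. A minimal sketch, assuming energy-threshold VAD (the frame size, threshold, and 600 ms pause are arbitrary illustration values, not what any production endpointer uses):

```python
# Sketch of a "pause length" endpointer: an utterance is considered finished
# once frame energy stays below a threshold for pause_ms in a row.

FRAME_MS = 20   # assumed frame duration

def detect_endpoint(frame_energies, silence_thresh=0.01, pause_ms=600):
    """Return the frame index at which the utterance ended, or None."""
    needed = pause_ms // FRAME_MS      # consecutive silent frames required
    silent = 0
    heard_speech = False
    for i, energy in enumerate(frame_energies):
        if energy >= silence_thresh:
            heard_speech = True        # only endpoint after some speech
            silent = 0
        elif heard_speech:
            silent += 1
            if silent >= needed:
                return i
    return None

# 500 ms of speech (25 frames) followed by 700 ms of silence (35 frames):
energies = [0.5] * 25 + [0.0] * 35
print(detect_endpoint(energies))   # endpoints partway into the silence
```

A real setup would likely use a proper VAD model instead of raw energy, but the tradeoff is the same: a short pause_ms cuts people off mid-sentence, a long one adds latency to every turn.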

  • Jonovono 11 hours ago

    Is this a client / server setup? What are you using for handling the streaming of audio? (daily, livekit, etc?)