Comment by owenpalmer 3 days ago
This approach really doesn't make sense to me. The model has to output the entire transcript token by token, instead of simply adding it to the context window...
A more interesting idea would be a browser extension that lets you open a chat window from within YouTube, letting you ask it questions about certain parts of the transcript with full context in the system prompt.
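A minimal sketch of what that setup might look like, assuming the common chat-message convention of a system/user role list (the function name, transcript format, and prompt wording here are all illustrative, not taken from any actual extension):

```python
# Hypothetical sketch: place the full transcript in the system prompt once,
# so every follow-up question is answered with the whole video as context.
# `build_messages` and the sample transcript are made-up placeholders.

def build_messages(transcript: str, question: str) -> list[dict]:
    """Build a chat payload whose system prompt carries the whole transcript."""
    return [
        {
            "role": "system",
            "content": (
                "You answer questions about the following YouTube transcript.\n\n"
                + transcript
            ),
        },
        {"role": "user", "content": question},
    ]

messages = build_messages(
    transcript="0:00 intro\n1:30 the speaker explains attention",
    question="What does the speaker say at 1:30?",
)
print(messages[0]["role"])  # the transcript lives in the system message
```

The point is that the transcript is sent once as context rather than being regenerated token by token in the model's output.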
That's initially what I thought this was. Seems like somebody had the same idea: there's an extension called "AskTube" that looks like it does exactly this.
https://chromewebstore.google.com/detail/asktube-ai-youtube-...