Comment by Franklinjobs617 14 hours ago
I’m currently building YTVidHub, a tool that focuses on solving a very specific, repetitive workflow pain for researchers and content analysts.
The Pain Point: If you are analyzing a large YouTube channel (e.g., for language study, competitive analysis, or data modeling), you often need the subtitle files for 50, 100, or more videos. The current process is agonizing: copy-paste URL, click, download, repeat dozens of times. It's a massive time sink.
My Solution: YTVidHub is designed around bulk processing. The core feature is a clean interface where you can paste dozens of YouTube URLs at once, and the system intelligently extracts all available subtitles (including auto-generated ones) and packages them into a single, organized ZIP file for one-click download.
Target Users: Academic researchers needing data sets, content creators doing competitive keyword analysis, and language learners building large vocabulary corpora.
The architecture challenge right now is optimizing the backend queuing system for high-volume, concurrent requests to ensure we can handle large batches quickly and reliably without hitting rate limits.
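To give a sense of the shape of the problem, here's a minimal sketch of the kind of bounded worker pool I mean (all names, numbers, and calls below are placeholders, not our actual backend):

```python
# Hypothetical sketch: cap concurrency and pace requests to stay under rate limits.
import asyncio

MAX_CONCURRENT = 5   # simultaneous subtitle fetches
REQUEST_DELAY = 1.0  # seconds per request, standing in for real pacing logic

async def fetch_subtitles(url: str, sem: asyncio.Semaphore) -> str:
    async with sem:
        # Placeholder for the real extraction call.
        await asyncio.sleep(REQUEST_DELAY)
        return f"subtitles for {url}"

async def process_batch(urls: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    return await asyncio.gather(*(fetch_subtitles(u, sem) for u in urls))

if __name__ == "__main__":
    demo_urls = [f"https://www.youtube.com/watch?v=video{i}" for i in range(10)]
    results = asyncio.run(process_batch(demo_urls))
    print(f"processed {len(results)} videos")
```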
It's still pre-launch, but I'd love any feedback on this specific problem space. Is this a pain point you've encountered? What's your current workaround?
How coincidental - I needed exactly this just a couple days ago. I ended up vibecoding a script that feeds an individual URL into yt-dlp and pipes the downloaded audio through Whisper. It's not quite the same thing, since it generates its own transcription rather than downloading the _actual_ subtitles, but it's similar. I've only run it on a single video to test, but it seemed to work satisfactorily.
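The core of it is only a handful of lines - roughly something like this, assuming the yt-dlp CLI, ffmpeg, and the openai-whisper package are installed (output path and model size are just placeholders):

```python
# Rough sketch of the yt-dlp -> Whisper pipeline described above.
import subprocess
import whisper

def transcribe_video(url: str, audio_path: str = "audio.mp3") -> str:
    # Download only the audio track with yt-dlp.
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "audio.%(ext)s", url],
        check=True,
    )
    # Transcribe locally; larger Whisper models are slower but more accurate.
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

if __name__ == "__main__":
    print(transcribe_video("https://www.youtube.com/watch?v=VIDEO_ID"))
```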
I haven't upgraded to bulk processing yet, but I imagine I'd look for some API to get "all URLs for a channel" and then process them in parallel.
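Turns out yt-dlp itself can list a channel's video URLs without downloading anything, so I imagine the bulk version would look roughly like this (channel URL, worker count, and file naming are all placeholders):

```python
# Hedged sketch of the bulk step: list a channel's videos, download audio in parallel,
# then transcribe sequentially on one Whisper model instance.
import subprocess
from concurrent.futures import ThreadPoolExecutor

import whisper

def channel_video_urls(channel_url: str) -> list[str]:
    # --flat-playlist lists entries without downloading; --print url emits one URL per line.
    out = subprocess.run(
        ["yt-dlp", "--flat-playlist", "--print", "url", channel_url],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.split()

def download_audio(url: str) -> str:
    # Name the file after the video id so parallel downloads don't collide.
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "%(id)s.%(ext)s", url],
        check=True,
    )
    return url.rsplit("=", 1)[-1] + ".mp3"

if __name__ == "__main__":
    urls = channel_video_urls("https://www.youtube.com/@somechannel/videos")
    # Downloads are I/O-bound, so threads parallelize them well.
    with ThreadPoolExecutor(max_workers=4) as pool:
        audio_files = list(pool.map(download_audio, urls))
    # Transcription is compute-bound, so run it sequentially.
    model = whisper.load_model("base")
    for path in audio_files:
        print(model.transcribe(path)["text"][:200])
```

Parallelizing just the downloads and keeping transcription sequential sidesteps sharing one model across threads; swapping in a real job queue would be the next step if I ever actually need it at scale.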