eigenvalue 2 days ago

I actually just finished making a service that does something similar, but it also transforms the transcripts into polished written documents with complete sentences and nice markdown formatting. It can also generate interactive multiple-choice quizzes. And it supports editing the markdown files with revision history and one-click hosting.

I'm still doing the last testing of the site, but might as well share it here since it's so relevant:

https://youtubetranscriptoptimizer.com/

There might still be a few rough edges, so keep that in mind!

  • Terretta 2 days ago

    The pricing confusingly gives counts of short videos rather than a price per unit of time.

    The vodcasts that most need transcription are long form. After the "don't make me do math" pricing, you do have a table of minutes, up to 60, so for a typical, say, ContraPoints vodcast episode, you multiply by 3, and find out that could cost $30 to turn into the optimized transcript. (Which the creator might well pay for if they value their time, but viewers might not.)

    • eigenvalue 2 days ago

      Thanks for the feedback. I'll try to make the pricing table clearer. And yes, this is targeting creators more. If it turns out that viewers are the better target market, I might pivot a bit. I'm also considering adding a discount for longer videos.

  • hackernewds 2 days ago

    why limit this to YouTube? it should work on any body of text, is that right?

    • eigenvalue 2 days ago

      Yes, I'm also working on another version that is document-centric. It's a bit of a different problem. In the case of YouTube video transcripts, we are dealing with raw speech utterances. There could be run-on sentences, filler words and other speech errors, etc. Basically, it's a very far cry from a polished written document. Thus we need to really transform the underlying content to first get the optimized document, which can differ quite significantly from the raw transcript. Then we use that optimized document to generate the quizzes.

      In the case of a document-only workflow, we generally want to stick very closely to what's in the document: extract the text accurately using OCR if needed (or directly if we don't need OCR), then reformat it into nice-looking markdown without changing the actual content itself, just its appearance. Once we've turned the original document into nice-looking markdown, we can use it to generate the quizzes and perhaps other related outputs (e.g., Anki cards, PowerPoint-type presentation slides, etc.).

      Because of that fundamental difference in approach, I decided to separate it into two different apps, though I'm planning to reuse much of the same UI and other backend structure. The document-centric app also seems to have a broader base of potential users (like teachers; there are a lot of teachers out there, way more than there are YouTube content creators). I started with the YouTube app because my wife makes YouTube videos about music theory and I wanted to make something that at least she would actually want to use!

  • [removed] 2 days ago
    [deleted]
owenpalmer 3 days ago

This approach really doesn't make sense to me. The model has to output the entire transcript token by token, instead of simply adding it to the context window...

A more interesting idea would be a browser extension that lets you open a chat window from within YouTube, letting you ask it questions about certain parts of the transcript with full context in the system prompt.

spuz 2 days ago

How is it supposed to work? When I open this, I just see a prompt that says "Get the full transcription of any Youtube video, fast. Studies suggest that reading leads to better retention of complex information compared to video watching. Only English videos currently."

I tried pasting the URL of a YouTube video and I get the message "I'm unable to access the video directly, as the tool needed for that is disabled. However, if you'd like, you can summarize the video or let me know how I can assist with it!"

two_handfuls 2 days ago

I get what this is doing, but calling it "chat with a transcript" is weird. Like, documents and videos don't chat. We chat with a bot who has seen the document/video.

  • nwhnwh 2 days ago

    We are going to chat with all kinds of stuff soon xD

  • Kiro 2 days ago

    You're way too late starting that fight. "Chat with [anything]" has been an established term for a long time now.

    • two_handfuls a day ago

      In the enthusiast community, I suppose. It's not too late to adopt clearer terminology; that will be important as these things try to reach mainstream users.

romseb 3 days ago

It does not work with long form conversations like podcasts.

"I was unable to retrieve the transcript for this video due to its large size."

  • ofou 3 days ago

    Coming soon! Currently it works for videos under one hour; this limitation is due to ChatGPT's context window when using Plugins. I don't know why, since it should support 200k tokens... Alternatively, you can use https://textube.olivares.cl to get the full transcript of any video in English.

nomilk 2 days ago

I’d love this, but from the yt home page and search results page. That would let me ask chatgpt whether a video really contains the info its thumbnail/title suggests, without having to leave the current browser tab.

I’ve done this by manually copy/pasting a yt transcript into chatgpt (and later streamlining it into a bash function), and it was quite effective, allowing me to dodge a couple of click-bait time wasters (videos that looked important but were really just fluffing up unimportant nonsense).

Workaccount2 3 days ago

I don't know if everyone has access to it (might just be yt premium), but many videos have an "ask gemini about this video" button, where you can directly ask questions about the video.

  • ofou 3 days ago

    It might be a preview or something, because I have YT Premium and it doesn't show up anywhere for me. Can you share a video where it works? Like this one:

    https://www.youtube.com/watch?v=zjkBMFhNj_g

  • oefrha 2 days ago

    It’s really ironic that YouTube basically pushed videos to be at least ~ten minutes long through commercial incentives, then offers AI features to cut through that filler garbage.

    • Workaccount2 2 days ago

      While this is true, the thrust of what YouTube was doing was to incentivize videos that are 10+ minutes because they need to be 10+ minutes, not 10+ minutes because the creator is trying to game the system.

    • hombre_fatal 2 days ago

      Well, YT Premium users don’t see those ads. They’re the only ones who get the AI tool.

  • adzm 3 days ago

    It is a beta feature in YouTube premium and doesn't seem to be for all videos, but it has been extremely useful in my experience. You can even ask where in a video things are discussed etc.

andai 3 days ago

Very nice. I made a thing in Python which summarizes a YouTube transcript in bullet points. Never thought about asking it questions, that's a great idea!

I just run yt-dlp to fetch the transcript and shove it in the GPT prompt. (I think I also have a few lines to remove the timestamps, although arguably those would be useful to keep.)

My prompt is "{transcript} Please summarize the above in bullet points"

The trick was splitting it up into overlapping chunks so it fits in the context size. (And then summarizing your summary because it ends up too long cause you had so many chunks!)

These days that's not so important; usually you can shove an entire book in! (Unless you're using a local model; those still have small context windows, though they work pretty well for summarization.)
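
The overlapping-chunk trick can be sketched roughly like this (a minimal sketch: the chunk sizes are arbitrary and `summarize()` is a placeholder for whatever model call you use):

```python
def chunk_text(words, chunk_size=3000, overlap=300):
    """Split a word list into overlapping chunks so each fits in the context window."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks


def summarize(text):
    # Placeholder: send "{text} Please summarize the above in bullet points"
    # to your model of choice and return the completion.
    raise NotImplementedError


def summarize_transcript(transcript):
    partials = [summarize(c) for c in chunk_text(transcript.split())]
    # With many chunks the concatenated partial summaries get long,
    # so summarize the summaries in a second pass.
    return summarize("\n".join(partials)) if len(partials) > 1 else partials[0]
```

The overlap means a sentence cut at a chunk boundary still appears whole in the next chunk.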

  • HPsquared 3 days ago

    If you're going as far as using yt-dlp, why not run the audio through Whisper?

    • andai 3 days ago

      Interesting, I haven't used Whisper. Is it cost-effective? It seems to be about 36 cents per hour-long video? How long does processing take?

      • kajecounterhack 3 days ago

        You can run it locally, and it's really fast. But since YouTube transcription is really good, I don't see why you'd use Whisper and get a worse transcription (unless maybe it's on videos that Google did not transcribe for whatever reason).

      • HPsquared 2 days ago

        36 cents an hour is how much it costs to hire an entire GPU like an A4000. I can assure you Whisper runs much, much faster than 1x!

    • davidzweig 2 days ago

      The security against downloading audio from YouTube has been upped recently with 'PO tokens'.

      Whisper is only a few tenths of a cent per hour transcribed if you run it on your own GPU, though, at about 30x real-time on a 3080 etc. with batching.
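
      Back-of-envelope, using the figures in this subthread (a rented GPU at about $0.36/hour and roughly 30x real-time throughput with batching; both numbers come from the comments, not from measurement):

```python
# Rough cost of self-hosted Whisper transcription (assumed figures from the thread).
gpu_cost_per_hour = 0.36   # e.g. renting an A4000
realtime_factor = 30       # hours of audio transcribed per GPU-hour, with batching

cost_per_audio_hour = gpu_cost_per_hour / realtime_factor
print(f"${cost_per_audio_hour:.3f} per hour of audio")  # $0.012, i.e. about a cent
```

      A cheaper GPU or higher batch throughput pushes this down toward the tenths-of-a-cent figure.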

      • swyx 2 days ago

        > The security against downloading audio from YouTube has been upped recently with 'PO tokens'.

        do you have a source? more generally is there a community or news source for youtube "api" news like this?

      • HPsquared 2 days ago

        Tbh I've not had trouble with this for personal use.

iorrus 3 days ago

I've been using Voxscript [0] for a while. After comparing the two, I think Voxscript is better: it gives longer, more detailed summaries, while TexTube just seems to give a very brief, impersonal overview. It's easy to try both and see which you prefer.

[0] https://chatgpt.com/g/g-g24EzkDta-voxscript

[removed] 3 days ago
[deleted]
altdataseller 2 days ago

I pasted a video link and it says “Not Found”. Absolutely not the best first impression.

jonwinstanley 3 days ago

What does it mean by chat with a transcript?

I.e. what are the kind of things I can ask and get value from?

  • ofou 3 days ago

    First, I would say that reading is faster than watching, so it's more time-efficient to read a YouTube video's transcript, especially if it covers technical content or interesting ideas. Additionally, you can ask follow-up questions about the content, and since it's in an OAI conversation, you can leverage the "intelligence" of the model to help you understand the parts you find difficult. Sometimes I watch technical YouTube videos and wish I had a written version; so here it is.

    This is an interesting example; it feels different from watching the ~12-minute video. https://chatgpt.com/share/66e9eaff-248c-8009-9761-d848d97881...

  • kylebenzle 3 days ago

    Nothing, it means nothing, like most of this "AI" hype nonsense.

    They copy-paste text transcripts into an LLM and have it generate more text based on its training data and prompt. You can't "chat" with a text document, of course.

    • yreg 3 days ago

      Chat with the document means chat about that document with an LLM who has “read” it.

      It can be useful; it's not hype nonsense.

      • jonwinstanley 3 days ago

        Ahh ok.

        So rather than watch the video or read the transcript you just ask the one thing you want to know.

        Could it take you to the moment in the video that is useful too?

    • camus_absurd 3 days ago

      I’m not sure I follow. Can you explain ‘you can’t chat with a text document’ because you clearly can.

      • hombre_fatal 3 days ago

        Is anyone even chomping at the bit to hear a pedant explain how "chatting with a text document" isn't the most precise way to phrase this concept that we all understand?

tsunamifury 3 days ago

allofus.ai already aggregates all of the thinking of any creator on YouTube into a single mental model and lets you interact with their synthetic self.

  • CamperBob2 3 days ago

    Now that does sound intriguing, but it just leads to a blank page...?

  • slt2021 2 days ago

    it is purely synthetic interaction though.

    Asking questions of a transcript is at least grounded in something real (a video).

afro88 3 days ago

When I try it it just says "Not found"

  • ofou 3 days ago

    Can you share the link?

    • afro88 2 days ago

      I clicked on one of the examples, which was "State of GPT by Andrej Karpathy"

      • ofou 2 days ago

        Sometimes the model used by Plugins gets confused, especially when the transcript is too long. It might just load the content into memory as a response without saying much more; you can then engage in follow-up chat. But I just tried the link again and it seems to work. Sometimes you have to retry a few times, or explicitly ask for the transcript if it isn't shown.

        https://chatgpt.com/share/66eadbad-1d3c-8009-91f0-abe3cf4d36...

jerjerjer 2 days ago

The most interesting thing about this is that OpenAI apparently does not own the chatgpt.com domain.

  • alexeichemenda 2 days ago

    They do, this URL just links to a custom GPT hosted on OpenAI's chatgpt URL.

[removed] 2 days ago
[deleted]
lupusreal 3 days ago

Seems like fishing with hand grenades to me. I just download the subs and grep that.
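
The download-and-grep approach fits in a few lines, assuming yt-dlp's WebVTT subtitle output (a sketch; real .vtt files can also carry cue-position tags that this ignores):

```python
import re

# Matches WebVTT cue timing lines like "00:00:01.000 --> 00:00:03.000".
CUE_TIMING = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3} --> ")


def vtt_to_lines(vtt_text):
    """Drop the WEBVTT header and cue timings, keeping only the spoken lines."""
    keep = []
    for line in vtt_text.splitlines():
        line = line.strip()
        if line and line != "WEBVTT" and not CUE_TIMING.match(line):
            keep.append(line)
    return keep


def grep(lines, term):
    """Case-insensitive substring match, grep-style."""
    return [l for l in lines if term.lower() in l.lower()]
```

Feed it the file produced by `yt-dlp --write-auto-subs --skip-download <url>` and grep away.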

  • mdp2021 2 days ago

    Even just the experience of searching `man` pages with "/<term>" shows it's a suboptimal strategy; it leaves you wishing you could query an engine that actually reads and understands the text.

    • lupusreal 2 days ago

      Really? I generally have a good experience searching manpages. My big gripe is with the man program itself.

      • mdp2021 a day ago

        Mine is that directly asking a question ("How to...") would be much faster than finding the information through grep or highlight-aided skimming. It would simply be more efficient.

        Also, to find a feature via a literal string you first have to guess its exact wording... Language is inherently fuzzy, so literal searches are weaker for this purpose than an interface that can deal with that fuzziness of expression.

  • [removed] 3 days ago
    [deleted]
righthand 3 days ago

Nice, hallucinate a text document about video content. Next is hallucinating a video from a text document hallucinated from a video?

studymonkey 2 days ago

Awesome work, OP! I really believe we’ll soon be able to get a full four-year education just from YouTube. The challenge right now is sifting through the infotainment that the algorithms tend to push.

This is actually what inspired us to create Lectura: https://lectura.xyz/

We’ve added features that promote curiosity and deeper learning, like ELI5 explanations, suggested queries based on transcripts, quizzes to track retention, and more.

If you’re interested in joining us to build out the platform, feel free to reach out at neil at lectura dot xyz