Comment by kuschku 2 days ago

> I dare you to try building a project with Cursor or a better cousin and then come back and repeat this comment

I always try every new technology, to understand how it works and expand my perspective. I've written a few simple websites with Cursor (one mistake and it wiped everything, and I could never get it to produce an acceptable result again), tried writing the script for a YouTube video with ChatGPT and Claude (full of hallucinations, which – after a few rewrites – led to us making a video about hallucinations), generated subtitles with Whisper (with every single sentence containing at least one mistake), and finally used Suno and ChatGPT to generate some songs and images (both of which were massively improved once I just made them myself).

Whether it's Android apps or websites, scripts, songs, or memes, so far AI has been significantly worse at internet research and creation than a human. And cleaning up the work the AI did always ended up taking longer than just doing it myself from scratch. AI certainly makes you feel more productive, and it seems like you're getting things done faster, even though you're not.

permo-w a day ago

simply, you're using them wrongly

  • kuschku a day ago

    Let's assume that's true — I'm just bad at using AI.

    If that were the case, everyone else's AI creations would have a significantly higher quality than my own.

    But that's not what we observe in the real world. They're just as bad as what I managed to create with AI.

    The only ones I see who are happy with the AI output are people who don't care about the quality of the end result, or the details of it, just the semblance of a result.

    • permo-w a day ago

      you're ignoring survivorship bias. anything text-based that you can tell was made with AI is, by definition, a case where the AI was used poorly

      • kuschku a day ago

        If that were the case, that'd be great. I don't necessarily care how something was achieved, as long as the software engineering and architecture were properly done, requirements were properly considered, edge cases documented, tests written, and bugs reported upstream.

        But it's not the case. Of course, I could be wrong – maybe it's not AI, maybe it's just actual incompetence instead.

        That said, humans usually don't approach tasks the way LLMs do. Humans generally build a mental model that they refine over time, which means that each change, each bit of code written, closely resembles other code written at the same time, but often bears little resemblance to code nearby. This is also why humans need refactoring – our mental model has changed, and we need to adjust the old code to match the new model.

        Whereas LLMs are influenced most by the most recent tokens, which means that any change is affected by the code surrounding it much more than by other code written at the same time. That's also why, when something is wrong, LLMs struggle with fixing it (as even just reading the broken code distorts the probabilities, making it more likely to make the same mistake again), which is why it's typically best to recreate a piece of code from scratch instead.
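        The recency effect described above can be sketched as a toy model. This is purely illustrative – real transformers use learned attention over the whole context, not a fixed decay – but it shows why, under the assumption that influence shrinks with distance, the broken code a model has just read would dominate its next prediction:

```python
# Toy illustration (NOT any real model's internals): a hypothetical
# recency weighting in which a token's influence on the next prediction
# decays exponentially with its distance from the end of the context.

def recency_weights(tokens, decay=0.8):
    """Assign each token a normalized weight that shrinks with distance
    from the end of the context."""
    n = len(tokens)
    raw = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

tokens = ["old", "code", "written", "broken", "line"]
weights = recency_weights(tokens)

# Under this toy model the most recent token gets the largest share of
# influence -- analogous to the claim that re-reading broken code
# distorts the probabilities toward repeating the same mistake.
assert weights[-1] == max(weights)
```

        Regenerating a piece of code from scratch, in this picture, simply removes the broken tokens from the context so they can't dominate the weighting.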

        • permo-w a day ago

          this doesn't really negate or address the fact that the sample you're basing your position upon clearly doesn't account for the content that you couldn't tell was made using AI

          I only gave AI coding assistants as a secondary example of why AI obviously isn't something people are suddenly going to realise they don't need. you're over-focusing on it because you clearly have an existing and well-thought-out position on the topic, but it's completely beside the point

          this thread is about AI generated text content online