Comment by amarcheschi 2 days ago

I just spent some time trying to get Claude and Gemini to make a violin plot of a Polars dataframe. I've never used it and it's just for prototyping, so I just said "apply a log to the values and make a violin plot of this Polars dataframe". And I had to iterate with each of them 4 or 5 times. Gemini got it right, but then used deprecated methods.

I might be using LLMs wrong, but I just can't see how people actually do anything non-trivial purely by vibe coding. And it's not like I'm an old fart either; I'm a university student.

VOIPThrowaway 2 days ago

You're asking it to think and it can't.

It's spicy autocomplete. Ask it to create a program that makes a violin plot from a CSV file. Because that has been "done before", it will do a decent job.

hiq 2 days ago

> had to iterate with them for 4/5 times each. Gemini got it right but then used deprecated methods

How hard would it be to automate these iterations?

How hard would it be to automatically check and improve the code to avoid deprecated methods?

I agree that most products are still underwhelming, but that doesn't mean that the underlying tech is not already enough to deliver better LLM-based products. Lately I've been using LLMs more and more to get started with writing tests on components I'm not familiar with, it really helps.
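The deprecation check, at least, is mechanizable: run the generated snippet with DeprecationWarning escalated to an error, and feed the failure output back into the next prompt. A rough sketch of that idea (the harness and example snippets are hypothetical, not from any product in the thread):

```python
import subprocess
import sys
import tempfile

def run_strict(code: str) -> tuple[bool, str]:
    """Run a candidate snippet with DeprecationWarning escalated to an error."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, "-W", "error::DeprecationWarning", path],
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0, proc.stderr

# A clean snippet passes; one that touches a deprecated API fails, and the
# captured stderr could be appended to the next LLM iteration's prompt.
ok, err = run_strict("print('hello')")
```

This only catches deprecations that actually warn at runtime; it says nothing about general correctness, which is the genuinely hard part discussed below.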

  • jaccola a day ago

    How hard can it be to create a universal "correctness" checker? Pretty damn hard!

    Our notion of "correct" for most things is basically derived from a very long training run on reality, with the loss function being how long a gene propagated.

  • henryjcee a day ago

    > How hard would it be to automate these iterations?

    The fact that we're no closer to doing this than we were when chatgpt launched suggests that it's really hard. If anything I think it's _the_ hard bit vs. building something that generates plausible text.

    Solving this for the general case is imo a completely different problem to being able to generate plausible text in the general case.

    • HDThoreaun a day ago

      This is not true. The chain-of-thought models are able to check their work and try again, given enough compute.

      • lelandbatey a day ago

        They can check their work and try again an infinite number of times, but the rate at which they succeed seems to just get worse and worse the further from the beaten path (of existing code from existing solutions) that they stray.

  • 9dev a day ago

    How hard would it be, in terms of the energy wasted for it? Is everything we can do worth doing, just for the sake of being able to?

dinfinity 2 days ago

Yes, you're most likely doing it wrong. I would add that "vibe coding" is a dreadful term, thought up by someone who is arguably not very good at software engineering, as talented as he may be in other respects. It has become misleading and frankly pejorative. A better, more neutral term is "AI-assisted software engineering".

This is an article that describes a pretty good approach for that: https://getstream.io/blog/cursor-ai-large-projects/

But do skip (or at least significantly postpone) enabling the 'yolo mode' (sigh).

  • amarcheschi 2 days ago

    You see, the issue I get prickly about is that AI is advertised as the one ring to rule all software, with VCs salivating at the thought of not having to pay developers and just using natural language. But then you still have to adapt to the AI, and not vice versa: "you're doing it wrong". That is not the idea the VC bros are selling.

    That said, I absolutely love being aided by LLMs in my day-to-day tasks. I'm much more efficient when studying, and they can be a game changer when you're stuck and don't know how to proceed. You can discuss different implementation ideas as if you had a colleague; perhaps not a PhD-smart one, but still someone with quite deep knowledge of everything.

    But it's no miracle. That's the issue I have with how AI is sold to the C-suite and the general public.

    • pixl97 a day ago

      >But, it's no miracle.

      All I can say to this is fucking good!

      Let's imagine we had gotten AGI at the start of 2022: human-level-or-better AI, as good as you at coding and reasoning, that runs well on the hardware of that era.

      What would the world look like today? Would you still have your job? Would the world be in total disarray? Would unethical companies quickly fire most of their staff and replace them with machines? Would there be mass riots in the streets by starving neo-Luddites? Would automated drones be shooting at them?

      Simply put, people and our social systems are not ready for competent machine intelligence or for how fast it will change the world. We should feel lucky we're getting a ramp-up period, and hopefully one that draws out a while longer.

juped a day ago

You pretty much just have to play around with them enough to intuit what they can and can't do. I'd still rather have another underling, and not just because underlings grow into peers eventually, but LLMs are useful with a bit of practice.

pydry 2 days ago

All tech hype cycles are a bit like this. When you were born, people were predicting the end of brick-and-mortar shops.

The trough of disillusionment will set in for everybody else in due time.