Comment by alganet 18 hours ago

> I think a huge reason why LLMs are so far ahead in programming

Are they? Last time I checked (a couple of seconds ago), they were still making silly mistakes and hallucinating wildly.

Example: https://imgur.com/a/Cj2y8km (AI teaching me about the Coltrane operator, which obviously does not exist).

gcanko 17 hours ago

You're using the worst model when it comes to programming; I'm not sure what point you're trying to prove here. That's why, when someone starts ranting about how useless AI models are at coding, I always assume they're just using inferior models.

  • alganet 17 hours ago

    My question was very simple. Suitable for a simpler model.

    I can come up with prompts that make better models hallucinate (see post below).

    I don't understand your objection. This is a known fact: LLMs hallucinate shit regardless of model size.

    • CamperBob2 17 hours ago

      LLMs are getting better. Are you?

      Nothing matters in this business except the first couple of time derivatives.

      • alganet 17 hours ago

        Maybe I'm not.

        However, I'm discussing this within the context of the study presented in the paper, not some future yet-to-be-achieved performance expectation.

        If we step outside the context of the paper (not advised), I think any average developer is better than an LLM at energy efficiency. LLMs cheat by consuming more resources than a human. "Better" is quite relative. So let's be reasonable.

aoeusnth1 18 hours ago

Are you intentionally sandbagging the LLMs to prove a point, or do you really think 4o-mini is good enough for programming?

Even 2.5 Flash easily gets this: https://imgur.com/a/OfW30eL

  • alganet 17 hours ago

    The point is that I can make them hallucinate quite easily, and they don't show any awareness of their own limitations.

    For example, 2.5 Flash fails to explain the difference between the short ternary operator (null coalescing) and the Elvis operator.

    https://imgur.com/a/xKjuoqV

    Even when I specify a language (therefore clearing up the confusion, supposedly), it still fails to even recognize the Elvis operator by its toupee, and mixes up the explanation (it doesn't even understand what I asked).

    https://imgur.com/a/itr87hM

    So, the point I'm trying to make is that they're not any better for programming than they are for chemistry.

    • CamperBob2 17 hours ago

      Flash is the wrong model for questions like that -- not that you care -- but if you'd like to share the actual prompt you gave it, I'll try it in 2.5 Pro.

      • alganet 17 hours ago

        "explain me the difference between the short ternary operator and the Elvis operator"

        When it failed, I replied: "in PHP".

        You don't seem to understand what I'm trying to say and are instead defending LLMs for a fault that is widely known across the industry.

        I'm sure that in a short time, I could make 2.5 Pro hallucinate as well. If not on this question, then on others.

        This behavior is in line with the paper's conclusions:

        > many models are not able to reliably estimate their own limitations.

        (see Figure 3; they tested a variety of models of different quality).

        This is the kind of question a junior developer can answer with simple Google searches, by reading the PHP manual, or just by testing it in a REPL. Why do we need a fancy model to answer such a simple inquiry? Would a beginner know that the answer is incorrect and that they should use a different model?
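
        For reference, here's roughly what that REPL test shows. This is my own sketch (PHP 7+; $missing is just an illustrative undefined variable), not taken from the linked screenshots:

          <?php
          // Short ternary / Elvis operator (?:): falls back whenever the left-hand side is falsy.
          var_dump('' ?: 'default');       // string(7) "default" -- empty string is falsy
          var_dump(0 ?: 'default');        // string(7) "default" -- 0 is falsy too

          // Null coalescing operator (??): falls back only when the left-hand side is null or unset.
          var_dump('' ?? 'default');       // string(0) "" -- empty string is not null
          var_dump($missing ?? 'default'); // string(7) "default", with no undefined-variable warning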

        Also, from the paper:

        > For very relevant topics, the answers that models provide are wrong.

        > Given that the models outperformed the average human in our study, we need to rethink how we teach and examine chemistry.

        That's true for programming as well. It outperforms the average human, but then it makes silly mistakes that could confuse beginners. It displays confidence while being plain wrong.

        The study also used manually curated questions for evaluation, so my prompt is not some dirty trick. It's totally in line with the context of this discussion.

  • CamperBob2 17 hours ago

    They aren't getting any better at programming, so they naturally assume the LLMs aren't, either.