mmooss 2 months ago

You don't trust it yet, just like a new human assistant you might hire: will they be able to handle all the variables? Eventually they earn your trust, and you start offloading everything to their inbox.

paulryanrogers 2 months ago

No, not like a human assistant. Competent humans use logical reasoning, pick up non-digital signals like body language and auditory cues, and know the limits of their knowledge, so they are more likely to ask for missing input. Humans are also more predictable.

  • mmooss 2 months ago

    You're missing the point. The point is, trust grows with familiarity and a track record.

    • paulryanrogers 2 months ago

      Humans are easier to trust because (IME) their motivations and reasoning are easier to understand and evaluate.

      • mmooss 2 months ago

        You trust all sorts of technology and services: your computer (which is itself an incredible, integrated collection of hardware and software), your car, the plane you flew on, your light switch, weekly garbage collection, the fire extinguisher, the chair you are sitting in (will it collapse?), your hammer and the nail you just pounded in. The list is effectively infinite.

        This technology is new, but soon it will be old and trusted too.

binarymax 2 months ago

LLMs don’t learn. They’re static. You could try to fine-tune, or continually add longer and longer context, but in the end you hit a wall.

  • ErikBjare 2 months ago

    You can give them a significant amount of guidance through prompting. The model itself won't "learn", but if you accumulate lessons from its mistakes and include them in the prompt, it can follow them. You will always hit a wall "in the end", but you can get pretty far! A minimal sketch of that pattern is below.
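
    A minimal sketch, assuming a hypothetical call_llm completion function and a local lessons.json file (both are placeholders, not any particular API): the model stays static, but the harness around it accumulates corrections and prepends them to every prompt.

        import json
        from pathlib import Path

        LESSONS_FILE = Path("lessons.json")  # hypothetical persistent store

        def load_lessons() -> list[str]:
            # Read back every lesson recorded so far.
            if LESSONS_FILE.exists():
                return json.loads(LESSONS_FILE.read_text())
            return []

        def save_lesson(lesson: str) -> None:
            # Record a correction so all future prompts include it.
            lessons = load_lessons()
            lessons.append(lesson)
            LESSONS_FILE.write_text(json.dumps(lessons, indent=2))

        def build_prompt(task: str) -> str:
            # The model never learns; the harness "learns" by prepending
            # the accumulated lessons to each new task.
            lessons = "\n".join(f"- {l}" for l in load_lessons())
            return f"Lessons from past mistakes:\n{lessons}\n\nTask: {task}"

        def call_llm(prompt: str) -> str:
            # Placeholder: plug in whatever completion API you actually use.
            raise NotImplementedError

        # Usage: when the model gets something wrong, record the correction;
        # the next run picks it up automatically.
        # save_lesson("Dates in this project are ISO 8601, never MM/DD/YYYY.")
        # answer = call_llm(build_prompt("Summarize today's error logs."))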

  • mmooss 2 months ago

    But you can learn how to work with one.