Comment by CamperBob2 9 hours ago

> If we ever do that, it means LLMs failed at their job. They are supposed to help and understand us, not the other way around.

If you buy into the whole AGI thing, I guess so, but I don't. We don't have a good definition of intelligence, so it's a meaningless question.

We do know how to make and use tools, though. And we know that all tools, especially the most powerful and/or hazardous ones, reward the work and care that we put into using them. Further, we know that tool use is a skill, and that some people are much better at it than others.

> What makes my example invalid, and benchmark prompts valid?

Your example is a valid case of something that doesn't work perfectly. We didn't exactly need to invent AI to come up with something that didn't work perfectly. I have examples of using LLMs to generate working, useful code in advanced, specialized disciplines, code that I frankly don't understand myself and couldn't have written without months of study, but that I can validate.

Just one of those examples is worth a thousand examples like yours, in my book. I can now do things that were simply impossible for me before. It would take some nerve to demand godlike perfection on top of that, or to demand useful results with little or no effort on my part.

alganet 8 hours ago

> We do know how to make and use tools

It's the same principle. A tool is supposed to assist us, not the other way around.

An LLM, "AGI magic" or not, is supposed to write for me. It's a tool that writes for me. If I am writing for the tool, there's something wrong with it.

> I have examples [...] Just one of those examples is worth a thousand examples like yours

Please, share them. I shared my example. It can be a very small "bug report", but it's real and reproducible. Other people can build on it, either to improve their "tool skills" or to improve LLMs themselves.

A shared example is worth much more than an anecdote.

  • CamperBob2 6 hours ago

    It's hard to get too specific without running afoul of NDAs and such, since most of my work is for one customer or another, but the case that really blew me away was when I needed to find a way to correct an oscillator that had inherent stability problems due to a combination of a very good crystal and very poor thermal engineering on the OEM's part. The customer uses a lot of these oscillators, and they are a massive pain point in production test because they often perform so much worse than they should.

    I started out brainstorming with o1-pro, trying to come up with ways to anticipate drift on multiple timescales, from multiple influences with differing lag times, and correct it using temperature trends measured a couple of inches away on a different component. It basically said, "Here, train this LSTM model to predict your drift observations from your observed temperature," and spewed out a bunch of cryptic-looking PyTorch code. It would have been familiar enough to an ML engineer, I'm sure, but it was pretty much Perl to me.
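    To give a rough idea of the shape of it (this is a simplified, from-memory sketch, not the actual code; the class name, window size, and hyperparameters here are all made up), it amounted to a small LSTM regressor mapping a window of nearby temperature readings to a predicted drift value:

        # Simplified sketch only -- names and hyperparameters are illustrative,
        # not the real ones. An LSTM that regresses drift from a window of
        # temperature readings taken on a nearby component.
        import torch
        import torch.nn as nn

        class DriftPredictor(nn.Module):
            def __init__(self, hidden_size=32):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                                    batch_first=True)
                self.head = nn.Linear(hidden_size, 1)

            def forward(self, temps):            # temps: (batch, window, 1)
                out, _ = self.lstm(temps)
                return self.head(out[:, -1, :])  # predicted drift: (batch, 1)

        model = DriftPredictor()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        def train_step(temps, drift):
            # temps: sliding windows of sensor temperature; drift: observed error
            optimizer.zero_grad()
            loss = loss_fn(model(temps), drift)
            loss.backward()
            optimizer.step()
            return loss.item()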

    I was like, Okaaaaayyy....? but I tried it anyway, suggested hyperparameters and all, and it was a real road-to-Damascus moment. Again, I can't share the plots and they wouldn't make sense anyway without a lot of explanation, but the outcome of my initial tests was freakishly good.

    Another model proved to be able to translate the Python to straight C for use by the onboard controller, which was no mean feat in itself (and also allowed me to review it myself), and now that problem is just gone. Basically for free. It was a ridiculous, silly thing to try, and it worked.

    When this tech gets another 10x better, the customer won't need me anymore... and that is fucking awesome.

    • alganet 6 hours ago

      I too have all sorts of secret stuff that I wouldn't share. I'm not asking for that. Isolating and reproducing example behavior is different from sharing your whole work.

      > It would have been familiar enough to an ML engineer, I'm sure, but it was pretty much Perl to me.

      How can you be sure that the solution doesn't have obvious mistakes that an ML engineer would spot right away?

      > When this tech gets another 10x better

      A chainsaw is way better than a regular saw, but it's also more dangerous. Learning to use it can be fun. Learning not to cut your toes is also important.

      I am looking for ways in which LLMs could potentially cut people's toes.

      I know you don't want to hear that your favorite tool can backfire, and you're still skeptical despite having experienced the example I gave you firsthand. Still, I'm hopeful that you can understand my point.