Comment by CamperBob2 17 hours ago
It's hard to get too specific without running afoul of NDAs and such, since most of my work is for one customer or another, but the case that really blew me away was when I needed to find a way to correct an oscillator that had inherent stability problems due to a combination of a very good crystal and very poor thermal engineering on the OEM's part. The customer uses a lot of these oscillators, and they are a massive pain point in production test because they often perform so much worse than they should.
I started out brainstorming with o1-pro, trying to come up with ways to anticipate drift on multiple timescales, from multiple influences with differing lag times, and correct it using temperature trends measured a couple of inches away on a different component. It basically said, "Here, train this LSTM model to predict your drift observations from your observed temperature," and spewed out a bunch of cryptic-looking PyTorch code. It would have been familiar enough to an ML engineer, I'm sure, but it was pretty much Perl to me.
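For readers who haven't seen this kind of thing, the suggestion probably looked something like the sketch below: a small LSTM trained to map a temperature time series from a nearby sensor onto observed drift. Everything here is an illustrative assumption — the class names, sizes, hyperparameters, and the synthetic lagged-sine data are mine, not the actual code o1-pro produced.

```python
# Hypothetical sketch: predict oscillator drift from a nearby temperature
# sensor with a small LSTM. All names, sizes, and hyperparameters are
# illustrative guesses, not the code from the original story.
import torch
import torch.nn as nn

class DriftPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, temp_seq):          # temp_seq: (batch, time, 1)
        out, _ = self.lstm(temp_seq)      # out: (batch, time, hidden)
        return self.head(out)             # predicted drift per time step

# Synthetic stand-in data: drift loosely lags temperature, which is the
# "differing lag times" flavor of the real problem.
torch.manual_seed(0)
t = torch.linspace(0, 8 * 3.14159, 256)
temp = torch.sin(t).unsqueeze(0).unsqueeze(-1)         # (1, 256, 1)
drift = torch.sin(t - 0.5).unsqueeze(0).unsqueeze(-1)  # lagged response

model = DriftPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(temp), drift)
    loss.backward()
    opt.step()

pred = model(temp)
```

The appeal of the LSTM framing is that the lag between the remote temperature reading and the oscillator's response doesn't have to be modeled explicitly; the recurrent state learns it from data.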
I was like, Okaaaaayyy....? but I tried it anyway, suggested hyperparameters and all, and it was a real road-to-Damascus moment. Again, I can't share the plots and they wouldn't make sense anyway without a lot of explanation, but the outcome of my initial tests was freakishly good.
Another model proved to be able to translate the Python to straight C for use by the onboard controller, which was no mean feat in itself (and also allowed me to review it myself), and now that problem is just gone. Basically for free. It was a ridiculous, silly thing to try, and it worked.
When this tech gets another 10x better, the customer won't need me anymore... and that is fucking awesome.
I too have all sorts of secret stuff that I wouldn't share. I'm not asking for that. Isolating and reproducing example behavior is different from sharing your whole work.
> It would have been familiar enough to an ML engineer, I'm sure, but it was pretty much Perl to me.
How can you be sure that the solution doesn't have obvious mistakes that an ML engineer would spot right away?
> When this tech gets another 10x better
A chainsaw is way better than a regular saw, but it's also more dangerous. Learning to use it can be fun. Learning not to cut your toes is also important.
I am looking for ways in which LLMs could potentially cut people's toes.
I know you don't want to hear that your favorite tool can backfire, and you're still skeptical despite having experienced the example I gave you firsthand. However, I was still hopeful that you could understand my point.