Comment by grayhatter 2 days ago
> You don’t think people are trying very hard to understand LLMs? We recognize the value of interpretability. It is just not an easy task.
I think you're arguing against a position tangential to both mine and that of the person this directly replies to. Something can be hard to use and understand, but if you have a magic box and you can't tell whether it's working, it doesn't belong anywhere near systems that other humans use. The people who use the code you're about to commit to whatever repo you're generating code for all deserve better than to be part of your unethical science experiment.
> It’s not the first time in human history that our ability to create things has exceeded our capacity to understand.
I don't agree that this is a correct interpretation of the current state of generative, transformer-based AI. But even if you wanted to try to convince me, my point would still be that this belongs in a research lab, not anywhere near prod. And that wouldn't be a controversial idea in the industry.
We used the steam engine for 100 years before we had a firm understanding of why it worked. We still don’t understand how ice skating works. We don’t have a physical understanding of semi-fluid flow in grain silos, but we’ve been using them since prehistory.
I could go on and on. The world around you is full of not-well-understood technology, as well as non-deterministic processes. We know how to engineer around that.