Comment by adastra22 2 days ago
You don’t think people are trying very hard to understand LLMs? We recognize the value of interpretability. It is just not an easy task.
It’s not the first time in human history that our ability to create things has exceeded our capacity to understand.
> You don’t think people are trying very hard to understand LLMs? We recognize the value of interpretability. It is just not an easy task.
I think you're arguing against a position tangential to both mine and the one this directly replies to. It can be hard to use and understand something, but if you have a magic box and you can't tell whether it's working, it doesn't belong anywhere near systems that other humans use. The people who use the code you're about to commit to whatever repo you're generating for all deserve better than to be part of your unethical science experiment.
> It’s not the first time in human history that our ability to create things has exceeded our capacity to understand.
I don't agree that this is a correct interpretation of the current state of generative transformer-based AI. But even if you wanted to try to convince me, my point would still stand: this belongs in a research lab, not anywhere near prod. And that wouldn't be a controversial position in the industry.