Comment by rosslh
One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.
LLMs do not lie. That implies agency and intentionality that they do not have.
LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100% accurate tool exists, and maybe never could exist. So take it or leave it.
>That implies agency and intentionality that they do not have.
No, but the companies have agency. LLMs lie, and they only get fixed when companies are sued. Close enough.
Sure https://www.nbcnews.com/tech/tech-news/man-asked-chatgpt-cut...
Not going to go back and forth on this, as you'll inevitably try to nitpick: "oh but the chatbot didn't say to do that".
If it were actually being given away as an accessibility tool, then I would agree with you.
It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.
1. Intellectual property is a fiction that should not exist.
2. Open source models exist.
It is unethical to me to provide an accessibility tool that lies.