Comment by dkdcio
counterexample: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are "AI" to me and others in the industry
I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago
I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?
When Stable Diffusion and GitHub Copilot came out a few years ago is when I really started seeing this "immoral" mentality about AI, and like you it really left me scratching my head: why now and not before? Turns out, people call it immoral when they see it threatening their livelihood, and they come up with all sorts of justifications that sound principled, but when you dig underneath, it's all about economic anxiety, nothing more. Humans are not direct creatures; it's much more emotional than one would expect.