63stack 2 days ago

There is not a single person in this thread who thinks of swiping on phones when the term "AI" is mentioned, apart from people playing the contrarian.

  • dkdcio 2 days ago

    counter example: me! autocorrect, spam filters, search engines, blurred backgrounds, medical image processing, even revenue forecasting with logistic regression are “AI” to me and others in the industry

    I started my career in AI, and it certainly didn’t mean LLMs then. some people were doing AI decades ago

    I would like to understand where this moral line gets drawn — neural networks that output text? that specifically use the transformer architecture? over some size?

    • satvikpendem a day ago

      When Stable Diffusion and GitHub Copilot came out a few years ago is when I really started seeing this "immoral" mentality about AI, and like you it left me scratching my head: why now and not before? It turns out people call it immoral when they see it threatening their livelihood, and then come up with all sorts of justifications that sound reasonable. But when you dig underneath, it's all economic anxiety, nothing more. Humans are not direct creatures; it's much more emotional than one would expect.

  • fragmede 2 days ago

    You take a pile of input data, run a bunch of code over it to create a model (which is generally a black box), and then run queries against that black box. No human really wrote the model. ML has been in use for decades in various places: Google Translate was an "early" convert, and credit card fraud models as well.

    The industry joke is: What do you call AI that works? Machine Learning.
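
The data-in, model-out, query-the-black-box pipeline described above can be sketched in a few lines. This is a toy logistic regression (one of the classic "boring" ML techniques mentioned earlier in the thread) fit by gradient descent; the fraud-detection framing and all data are illustrative assumptions, not anything from the thread.

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit weight w and bias b by gradient descent on log loss.
    The learned numbers ARE the model: a black box no human wrote directly."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            w -= lr * (p - y) * x                      # gradient step on w
            b -= lr * (p - y)                          # gradient step on b
    return w, b

def predict(w, b, x):
    """Query the black box: probability that input x is in the positive class."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Toy, made-up data: feature = scaled transaction amount, label = fraud or not.
xs = [0.1, 0.3, 0.5, 2.0, 2.5, 3.0]
ys = [0,   0,   0,   1,   1,   1]

w, b = train_logistic(xs, ys)
print(predict(w, b, 0.2))  # small transaction: low fraud probability
print(predict(w, b, 2.8))  # large transaction: high fraud probability
```

The point of the sketch is that the code defines the training procedure, not the decision rule itself; the rule lives in the fitted numbers, which is exactly the sense in which "no human really wrote the model."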