Comment by ttiurani
> imo LLMs are (currently) good at 3 things
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
Indeed, facts are part of the moral discussion in the ways you outlined. My objection was that just listing some facts and opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
> because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But isn't that exactly a manifestation of the "is-ought problem"? If morals are "oughts", then they are goal-dependent, i.e. they depend on personally defined goals. To you it's scary; to others it's the way it should be.
Get with the program, dude. Where we're going, we don't need morals.
It really depends on what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public-domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.