Comment by XenophileJKO 2 days ago

I mean, I "understand" your point. However, this isn't any different from being a technical lead on a system of any significant complexity: you are constantly reviewing work that you are not an expert on. It is a very similar practice.

I'm constantly reviewing things I am not a domain expert on. I have to identify what is risky, what I don't know, and so on. Throwing work to the AI first is no different from throwing it to someone else first; the same requirements apply. What I can choose is how much I "trust" the person or the LLM. I have had coworkers I trust less than LLMs; I'll put it that way.

So, just like when reviewing a co-worker's code, pay attention to areas where you are not sure of the right approach and double-check them. This just isn't a "new" thing.

hitarpetar a day ago

> Throwing to the AI first is no different than throwing to someone else first

except in all the ways that it is obviously different

imiric a day ago

Well, you're right that reviewing someone else's work isn't new, but interacting with these tools is vastly different from communicating with a coworker.

A competent human engineer won't confidently make claims that have no basis in reality. They can be wrong about practical ways of accomplishing something, but they won't suggest using APIs that don't exist, or go off on wild tangents because a certain word was mentioned, and they won't give you a different answer every time you ask the same question. Most importantly, conversations with humans can be productive in ways where both parties gain a deeper understanding of the topic and respect for each other. Humans can actually think and reason about topics and ideas, they can verify both their own claims and yours, and they won't automatically respond with "You're right!" to any counterargument or suggestion.

Furthermore, the marketing around "AI" leans heavily on promoting these tools' supposedly superhuman abilities. If we're led to believe they are superintelligent machines, we're more inclined to trust their output. We have people using them as medical professionals, thinking they're talking to a god, and being swayed by them. Trusting them to produce software falls somewhere on that scale. All of this is highly misleading and potentially dangerous.

Any attempt at anthropomorphizing "AI" is a mistake. You can get much more out of them by using them as what they are: excellent probabilistic pattern-matching tools.