Comment by timbritt 4 days ago

There’s this popular pastime lately where people pretend to be the last rational mind on Earth while making blanket pronouncements about tech they don’t understand.

> “Avoid it! Perplexity rules prohibit commercial use…”

Cool story, but also: irrelevant. Nobody serious is shipping products on Perplexity as a backend. It’s a research tool with a nice wrapper. The people building with LLMs are using OpenAI, Claude, Mistral, Groq, and Ollama, depending on the constraints and goals. Acting like the existence of one walled garden invalidates an entire paradigm is like saying cars are bad because golf carts can’t go on highways.
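And the switching cost between those backends is lower than the doomsayers imply: Groq and Ollama both expose OpenAI-compatible endpoints, so "which vendor" is often just a config swap. A minimal sketch, where the base URLs and model names are illustrative assumptions you should verify against each provider's docs:

```python
# Sketch: picking an LLM backend behind an OpenAI-compatible interface.
# Base URLs and model names here are illustrative assumptions; check each
# provider's documentation before relying on them.

PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "groq": {"base_url": "https://api.groq.com/openai/v1", "model": "llama-3.1-8b-instant"},
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
}

def client_config(provider: str) -> dict:
    """Return the kwargs you'd hand to an OpenAI-compatible client/SDK."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return dict(PROVIDERS[provider])
```

The point isn't the dict lookup; it's that moving between hosted and local backends is cheap enough that no single vendor's terms of service defines the paradigm.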

> “ChatGPT rules prohibit sharing the output with other AI…”

There are real restrictions, yes: OpenAI’s terms bar using output to develop competing models, among other things. And that matters… if your use case is literally just shuttling model output between APIs like a glorified message broker. Most developers are fine with this because they’re building systems, not playing telephone.

> “They’re training on everything you pass in…”

This is just flat wrong. OpenAI doesn’t train on API input/output unless you explicitly opt in. The fact that this myth is still circulating tells me people are repeating each other instead of reading the docs.

> “You’re paying to get brain raped…”

If your argument requires dehumanizing metaphors to land, you don’t have an argument. You have trauma cosplay.

> “Coding with OpenAI = being a sheep…”

This is the kind of thing people say when they’ve never delivered software in production. The tools either help or they don’t. Calling people sheep for using powerful tools is anti-intellectualism dressed up as cynicism. Nobody’s building a search engine from scratch to prove they’re not a sheep either. We use leverage. That’s the whole game.

> “You’re paying them to train their model to imitate you…”

Actually, no — again, unless you opt in. But even if that were true, you’re also trading that data for time saved, features shipped, and workflows unlocked. You know, ROI.

> “Better off learning a better (compiled) programming language…”

I have nothing against compiled languages, but this is like telling someone struggling with Figma that they should learn Blender. It might be good advice in a vacuum, but it doesn’t help you win the game that’s actually being played.

> “Why pay OpenAI for the privilege to help them take your job?”

You could’ve said the same thing about AWS. Or GitHub. Or Stack Overflow. Or even programming itself, in the mainframe era. Gatekeeping based on purity is a waste of time. The actual work is understanding what AI can do, what it shouldn’t do, and when to lean in.

> “Why outsource mental activity to a service with a customer noncompete?”

You’re not outsourcing thinking. You’re compressing cognitive overhead. There’s a difference. If you think using AI is “outsourcing thinking,” you were probably outsourcing thinking to Stack Overflow and Copilot already.

Look, are there risks? Costs? Vendor lock-in traps? Of course. Anyone seriously building with AI right now is absorbing all that volatility and pushing forward anyway, because the upside is that massive. Dismissing the entire space as a scam or a trap is either willful ignorance or fear masquerading as intellectual superiority.

If you don’t want to use AI tools, don’t. But don’t throw rocks from the sidelines at the people out there testing edge cases, logging failures, bleeding tokens, and figuring out what’s possible.