Comment by Swizec 2 days ago

276 replies

I’ve seen Picallilli’s stuff around and it looks extremely solid. But you can’t beat the market. You either have what they want to buy, or you don’t.

> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that

The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.

I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...

arthurfirst 2 days ago

I get the moral argument and even agree with it, but we are a minority, and of course we expect to be able to sell our professional skills -- but if you are 'right' and out of business, nobody will know. Is that any better than 'wrong' and still in business?

You might as well work on product marketing for AI, because that is where the client dollars are allocated.

If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle if you can survive long enough? Just survive and wait for things to shake out.

  • order-matters 2 days ago

    Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals. I am sure you could find a lot of moral values you would simply refuse to compromise on for the sake of business. The line between a moral value and a heavy preference, however, is blurry - and that is probably where most people have AI placed on the moral spectrum right now. Being out of business shouldn't be a death sentence, and if it is, then maybe we are overlooking something more significant.

    I am in a different camp altogether on AI, though, and would happily continue to do business with it. I genuinely do not see the difference between it and the computer in general. I could even argue it's the same as the printing press.

    What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations. That's not to say two things can't both be bad, but it looks to me like people are using the moral argument as a means to avoid learning something new while signaling how ethical they are about it - while at the same time refusing to sacrifice things they are already accustomed to for ethical reasons when they learn more about them. It all seems rather convenient.

    The main issue I see discussed is unethical model training, but let me know of others. Personally, I think you can separate the process from the product. A product isn't unethical just because unethical processes were used to create it. The creator/perpetrator of the unethical process should be held accountable and all benefits taken back, so as to kill any perceived incentive to repeat the actions - but once the damage is done, why let it happen in vain? For example, should we let people die rather than use medical knowledge gained unethically?

    Maybe we should be targeting these AI companies if they are unethical: stop them from training any new models with the same unethical practices, hold them accountable for their actions, and distribute the intellectual property and profits gained from existing models to the public. But models that are already trained can actually be used for good, and I personally see it as unethical not to.

    Sorry for the ramble, but it is a very interesting topic that should probably have as much discussion around it as we can get

    • arthurfirst 2 days ago

      > Yes, actually - being right and out of business is much better than being wrong and in business when it comes to ethics and morals.

      Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.

      Turning the page is a valid choice though. Sometimes a clean slate is what you need.

      > Being out of business shouldn't be a death sentence, and if it is then maybe we are overlooking something more significant.

      Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.

      > For example, should we let people die rather than use medical knowledge gained unethically?

      Depends if you are doing it 'for their own good' or not.

      Also the ends do not justify the means in the world of morals we are discussing -- that is pragmatism / utilitarianism and belongs to the world of the material not the ideal.

      Finally - who determines what is ethical, beyond the 'golden rule'? This is the most important factor. I'm not implying ethics are ALL relative, but beyond the basics they are, and who determines that is more important than the context or the particulars.

      • order-matters 2 days ago

        >Yes, but since you are out of business you no longer have an opportunity to fix that situation or adapt it to your morals. It's final.

        Lots of room for nuance here, but generally I'd say it's more pragmatic to pivot your business to one that aligns with your morals and is still feasible, rather than convince yourself you're going to influence something you have no control over while compromising on your values. I am going to emphasize the relevance of something being an actual moral or ethical dilemma vs. something being a very deep personal preference or a matter of identity/personal branding.

        >Fair point! It feels like a death sentence when you put so much into it though -- a part of you IS dying. It's a natural reflex to revolt at the thought.

        I agree - it is a real loss and I don't mean for it to be treated lightly. But if we are talking about morals and potentially feeling forced to compromise them in order to survive, we should acknowledge it's not really a survival situation.

        >Depends if you are doing it 'for their own good' or not.

        What do you mean by this?

        I am not posing a hypothetical. Modern medicine includes plenty of contributions from unethical sources. Should that information be stripped from medical textbooks, and should we threaten to take licenses away from doctors who use it to inform their decisions until we find an ethical way to relearn it - knowing this would likely allow large amounts of suffering to go untreated that could otherwise have been treated? I am sincerely trying not to make this sound like a loaded question.

        Also, this is not saying the means are justified. I want to reiterate my point of explicitly not justifying the means: the actors involved in the means should be held maximally accountable.

        I would think, from your stance on the first bullet point, that you would agree here - as by separating the product from the process, you are able to adapt it to your morals.

        >Finally - Who determines what is ethical?

        I agree that, philosophically speaking, all ethics are relative, and I was intending to make my point from the perspective of navigating these issues as an individual, not as a collective making rules to enforce on others. So: you. You determine what is ethical to you.

        However, there are a lot of systems already in place for determining what is deemed ethical behavior in areas where almost everyone agrees some level of ethics is required. This is usually done through consensus and committees, with experts in ethics and experts in the relevant field it's being applied to.

        AI is new and this oversight does not exist yet, and it is imperative that we all participate in the conversation, because we are all setting the tone for how this stuff will be handled. Every org may do it differently, and then whatever happens to be common practice will be written down as the guidelines.

      • johnnyanmac 2 days ago

        >It's final.

        You should tell that to all the failed businesses Jobs had or was ousted out of. Hell, Trump hasn't really had a single successful business in his life.

        Nothing is final until you draw your last breath.

        >Who determines what is ethical? beyond the 'golden rule'?

        To be frank, you're probably not the audience being appealed to in this post if you have to suggest "ethics can be relative". This is clearly a group of craftsmen offering their hands and knowledge. There are entire organizations who have guidelines if you need some legalese sense of what "ethical" is here.

    • int_19h 16 hours ago

      > What exactly is the moral dilemma with AI? We are all reading this message on devices built off of far more ethically questionable operations.

      The main difference is that for those devices, the people negatively affected by operations are far away in another country, and we're already conditioned to accept their exploitation as "that's just how the world works" or "they're better off that way". With AI, the people affected - those whose work was used to train, and those who lose jobs because of it - are much closer. For software engineers in particular, these are often colleagues and friends.

    • sambuccid a day ago

      >> The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain?

      That's very similar to other unethical processes (for example, child labour), and we see that government is often either too slow to move or just not interested - that's why people try to influence the market by changing what they buy.

      It's similar with AI: some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time they signal to the companies that they wouldn't be future customers of the next model.

      (I'm not necessarily in the group of people avoiding AI, but I can see their point)

    • derangedHorse a day ago

      > but once the damage is done why let it happen in vain?

      Because there are no great ways to leverage the damage without perpetuating it. Who do you think pays for the hosting of these models? And what do you mean by distribute the IP and profits to the public? If this process will be facilitated by government, I don’t have faith they’ll be able to allocate capital well enough to keep the current operation sustainable.

  • johnnyanmac 2 days ago

    >but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

    Depends. Is it better to be "wrong" and burn all your goodwill for any future endeavors? Maybe, but I don't think the answer is clear cut for everyone.

    I also don't fully agree with us being the "minority". The issue is that the majority of investors are simply not investing anymore. Those remaining are playing high stakes roulette until the casino burns down.

  • trial3 2 days ago

    > but if you are 'right' and out of business nobody will know. Is that any better than 'wrong' and still in business?

    yes [0]

    [0]: https://en.wikipedia.org/wiki/Raytheon

    • anonymars 2 days ago

      Can you... elaborate?

      • jfindper 2 days ago

        Not the parent.

        I believe that they are bringing up a moral argument, which I'm sympathetic to, having quit a job before because I found that my personal morals didn't align with the company's, and the cognitive dissonance of continuing to work there was weighing heavily on me. The money wasn't worth the mental fight every day.

        So, yes, in some cases it is better to be "right" and be forced out of business than "wrong" and remain in business. But you have to look beyond just revenue numbers. And different people will have different ideas of "right" and "wrong", obviously.

jimbokun 2 days ago

Has anyone considered that the demand for web sites and software in general is collapsing?

Everyone and everything has a website and an app already. Is the market becoming saturated?

  • oldandboring 2 days ago

    I know a guy who has this theory, in essence at least. Businesses use software and other high-tech to make efficiency gains (fewer people getting more done). The opportunities for developing and selling software were historically in digitizing industries that were totally analog. Those opportunities are all but dried up and we're now several generations into giving all those industries new, improved, but ultimately incremental efficiency gains with improved technology. What makes AI and robotics interesting, from this perspective, is the renewed potential for large-scale workforce reduction.

  • Aperocky 2 days ago

    The demand is massively increasing, but it's being filled by fewer people and more GPUs.

  • HeyLaughingBoy 2 days ago

    And new companies are created every day, and new systems are designed every day, and new applications are needed every day.

    The market is nowhere close to being saturated.

    • jimbokun 16 hours ago

      You're just begging the question.

      What are examples of these "new applications" that are needed every day? Do consumers really want them? Or are software and other companies just creating them because it benefits those companies?

      • HeyLaughingBoy 15 hours ago

        Most of the software written worldwide is created for internal company usage. Consumers don't even know that it exists.

        I've worked (still do!) for engineering services companies. Other businesses pay us to build systems for them to either use in-house or resell downstream. I have to assume that if they're paying for it, they see profit potential.

xnx 2 days ago

> In my opinion this business died 2 years ago

It was an offshoot bubble of the bootcamp bubble which was inflated by ZIRP.

kagevf 2 days ago

I think your post pretty well illustrates how LLMs can and can't work. Favoriting this so I can point people to it in the future. I see so many extreme opinions on it, from "LLMs are basically AGI" to "it's total garbage", but this is a good, balanced - and concise! - overview.

jillesvangurp 2 days ago

This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool and those wielding the tools the best, tend to do well.

Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.

There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.

Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.

  • wartywhoa23 2 days ago

    > Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.

    So let's all just give zero fucks about our moral values and just multiply monetary ones.

    • simianwords 2 days ago

      >So let's all just give zero fucks about our moral values and just multiply monetary ones.

      You are misconstruing the original point. They are simply suggesting that the moral qualms around using AI are not that high - not to the vast majority of consumers, nor to the government. There are a few people who might exaggerate these moral issues for self-serving reasons, but they won't matter in the long term.

      That is not to suggest there are absolutely no legitimate moral problems with AI, but they will pale in comparison to what the market needs.

      If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.

      You can start by explaining what specific moral value of yours goes against AI use. It might clarify whether these values are that important at all to begin with.

      • vkou 2 days ago

        > If AI can make things 1000x more efficient,

        Is that the promise of the faustian bargain we're signing?

        Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?

      • johnnyanmac 2 days ago

        >They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.

        And I believe they (and I) are suggesting that this is just a bad-faith spin on the market, if you look at actual AI confidence and sentiment and don't dismiss it as "ehh, just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor that they are happy about it.

        I don't think using the 2025 US government for a moral compass is helping your case either.

        >If AI can make things 1000x more efficient

        Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or get a job in this economy, right now. All the current gains are for corporate billionaires, why would they think that suddenly changes here and now?

    • ako 2 days ago

      AI is just a tool, like most other technologies, it can be used for good and bad.

      Where are you going to draw the line? Only where it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?

      If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.

      • Throaway1985232 2 days ago

        The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).

      • jimbokun 2 days ago

        You are thinking too small.

        The goal of AI is NOT to be a tool. It's to replace human labor completely.

        This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.

        To avoid that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.

      • jpadkins 2 days ago

        I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)

      • johnnyanmac 2 days ago

        >Where are you going to draw the line?

        How about we start with "commercial LLMs cannot give legal, medical, or financial advice" and go from there? LLMs for those businesses need to be handled by someone who can be held accountable (be it the expert, or that expert's CEO).

        I'd go so far as to try to prevent the obvious and say "LLMs cannot be used to advertise products". But baby steps.

        >AI as a technology is almost impossible to stop.

        Not really a fan of defeatist talk. Tech isn't as powerful as billionaires want you to pretend it is. It can indeed be regulated; we just need to use our civic channels first instead of fighting amongst ourselves.

        Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.

      • georgemcbay 2 days ago

        > AI is just a tool, like most other technologies, it can be used for good and bad.

        The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).

        I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.

        However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.

      • _heimdall 2 days ago

        If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.

        AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.

        I'm not sold yet on whether LLMs are AI - my gut says no, and I haven't been convinced otherwise. We can't lose the distinction between ML and AI, though; it's extremely important when it comes to risk considerations.

    • tjwebbnorfolk 2 days ago

      What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.

      You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.

      • malfist 2 days ago

        Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"

    • idiotsecant 2 days ago

      That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.

      • dripdry45 2 days ago

        This seems rather black and white. Defining the morals probably makes sense first, then evaluating whether they can be lived by, or whether we can compromise on them in the face of other priorities?

    • senordevnyc 2 days ago

      [flagged]

      • PaulDavisThe1st 2 days ago

        The age old question: do people get what they want, or do they want what they (can) get?

        Put differently, is "the market" shaped by the desires of consumers, or by the machinations of producers?

      • DonHopkins 2 days ago

        Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.

        So is it rational for a web design company to take a moral stance that they won't use JavaScript?

        Is there a market for that, with enough clients who want their JavaScript-free work?

        Are there really enough companies that morally hate JavaScript enough to hire them - at the expense of their web site's usability and functionality, and of their own users, who aren't as laser-focused on performatively avoiding JavaScript (and letting everyone know about it) as they are?

  • senordevnyc 2 days ago

    Totally agree, but I’d state it slightly differently.

    This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.

  • alextingle 2 days ago

    I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.

    • simonw 2 days ago

      The latest DeepSeek and Kimi open weight models are competitive with GPT-5.

      If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.

      Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.

  • classified 2 days ago

    > And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.

    I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.

  • BenGosub 2 days ago

    I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.

    However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.

    • b3ing 2 days ago

      Website building started dying off when SquareSpace launched and Wix came around. WordPress copied that, and it's been building blocks for the most part since then. There are few unique sites around these days.

    • classified 2 days ago

      > we don't see the authors getting the recognition.

      In that sense AI has been the biggest heist that has ever been perpetrated.

      • jillesvangurp 2 days ago

        Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.

        Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades - that was filler content long before AI showed up. Same for the people who do the subtitles on soap operas, and the people who create the commercials that air at 4am on your TV. All fair game for AI.

        It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.

        People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.

        You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.

        Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?

  • balamatom 2 days ago

    Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)

    >the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most

    There. Fixed!

  • skeeter2020 2 days ago

    it is totally valid to NOT play the game - Joshua taught us this way back in the 80's

  • rob74 2 days ago

    I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.

    • ako 2 days ago

      But what if the AI generated course was actually good, maybe even better than the human generated course? Which one would you pick then?

      • neutronicus 2 days ago

        The answer is "the highest-ranked free one in a Google Search".

      • wizzwizz4 2 days ago

        When a single such "actually good" AI-generated course actually exists, this question might be worth engaging with.

        • ako 2 days ago

          Actually, I already prefer AI to static training materials these days. But instead of looking for static training material, I treated it like a coach.

          Recently I had to learn SPARQL. What I did was create an MCP server to connect the AI to a graph database with SPARQL support, and then I asked it: "Can you teach me how to do this? How would I do this in SQL? How would I do it with SPARQL?" And then it would show me.

          With examples of how to use something, it really helps that you can ask questions about what you want to know at that moment, instead of just following a static tutorial.
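          The kind of SQL-vs-SPARQL pairing such a coaching session might surface can be sketched. Below is a minimal, illustrative example (the `knows` table, the data, and the `ex:` prefix are invented for this sketch, not taken from the comment): the SQL half runs against an in-memory SQLite database, with the SPARQL analogue shown as a comment.

```python
# Illustrative SQL-vs-SPARQL pairing; table name and rows are made up.
import sqlite3


def friends_of(conn: sqlite3.Connection, person: str) -> list[str]:
    """SQL version: look up who `person` knows in a relational table."""
    rows = conn.execute(
        "SELECT friend FROM knows WHERE person = ? ORDER BY friend",
        (person,),
    ).fetchall()
    return [friend for (friend,) in rows]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE knows (person TEXT, friend TEXT)")
conn.executemany(
    "INSERT INTO knows VALUES (?, ?)",
    [("alice", "bob"), ("alice", "carol")],
)

print(friends_of(conn, "alice"))  # ['bob', 'carol']

# The SPARQL analogue over a triple store would be roughly:
#   PREFIX ex: <http://example.org/>
#   SELECT ?friend WHERE { ex:alice ex:knows ?friend }
```

          The relational version needs a schema up front, while the SPARQL version pattern-matches over triples - exactly the kind of contrast a question-driven session makes easy to explore.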

  • otabdeveloper4 2 days ago

    AI is not a tool, it is an oracle.

    Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.

    • classified 2 days ago

      Seeing how many successful businesses are a product of pure luck, using an oracle to roll the dice is not significantly different.

    • rob74 2 days ago

      Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...

    • danielbln 2 days ago

      Hot takes from 2023, great. Working with AIs has changed since then - maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work, etc. Or don't.

      • otabdeveloper4 2 days ago

        > if you combine the Stone Soup strategy with Clever Hans syndrome you can sell the illusion of not working for 8 billable hours a day

        No thanks, I'm good.

    • tonyhart7 2 days ago

      "praying that the next prompt finally spits out something decent is not a business strategy."

      well, you've just described what ChatGPT is: one of the fastest-growing user bases in history

      as much as I agree with your statement, the real world doesn't respect that

      • otabdeveloper4 2 days ago

        > one of the fastest-growing user bases in history

        By selling a dollar of compute for 90 cents.

        We've been here before, it doesn't end like you think it does.

  • Towaway69 2 days ago

    > Arguing against progress as it is happening is as old as the tech industry. It never works.

    I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.

    So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies' track records.

    And the jury is still out on the effects of social media on children; why else are some countries banning social media for children?

    Not everything that comes out of Silicon Valley is automatically good.

skeeter2020 2 days ago

markets are not binary though, and this is also what it looks like when you're early (unfortunately similar to when you're late, too). So they may totally be able to carve out a valid & sustainable market exactly because they're not doing what everyone else is doing right now. I'm currently taking online Spanish lessons with a company that uses people as teachers, even though this area is under intense attack from AI. There is no comparison, and what's really great is using many tools (including AI) to enhance a human product. So far we're a long way from the AI tutor that my boss keeps envisioning. I actually doubt he's tried to learn anything deep lately, let alone validated his "vision".

tonyhart7 2 days ago

what happens if the market is right and this is the "new normal"?

same as with StackOverflow being down today: it seems like not everyone cares anymore, when back then it would have caused a total breakdown because SO was vital

  • lmm 2 days ago

    > what happens if the market is right and this is the "new normal"?

    Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.

    • _DeadFred_ 2 days ago

      Some people will lose their homes. Some marriages will fail from the stress. Some people will choose to exit life because of it all.

      It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.

      • jakelazaroff 2 days ago

        That's where the post-scarcity society AI will enable comes in! Surely the profits from this technology will allow these displaced programmers to still live comfortable lives, not just be hoarded by a tiny number of already rich and powerful people. /s

        • lisbbb a day ago

          I'd sooner believe that a unicorn will fly over my house and poop out rainbow skittles on my lawn. Yeah /s for sure!

          You and I both know we're probably headed for revolutionary times.

    • gloosx 2 days ago

      It's not as simple as putting all programmers into one category. There can be an oversupply of web developers and, at the same time, an undersupply of COBOL developers. If you are a very good developer, you will always be in demand.

      • ben_w 2 days ago

        > If you are a very good developer, you will always be in demand.

        "Always", in the same way that five years ago we'd "never" have an AI that can do a code review.

        Don't get me wrong: I've watched a decade of promises that "self-driving cars are coming real soon now, honest"; the latest news about Tesla's is that it can't cope with leaves. I certainly *hope* that a decade from now we'll still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.

    • yungwarlock 2 days ago

      I'm young; when was that, and in what industry?

      • habibur 2 days ago

        After the year 2000, when the dot-com bubble burst.

        A tech employee posted that he'd looked for a job for 6 months, found none, and had taken a job at a fast-food shop flipping burgers.

        That turned tech workers switching to "flipping burgers" into a meme.

      • forgetfreeman 2 days ago

        The .com implosion: tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech-jobs section of the classifieds being omitted entirely for 20 months. There have been other bumps in the road since then, but that was a real eye-opener.

      • throwaway2037 2 days ago

        The defense industry in southern California used to be huge until the 1980s. Lots and lots of ex-defense industry people moved to other industries. Oil and gas has gone through huge economic cycles of massive investment and massive cut-backs.

  • indemnity 2 days ago

    I haven’t visited StackOverflow for years.

    • throwaway2037 2 days ago

      I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over the entries in a JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools can, which attracts so much scorn. (I love that site: so simple for the simple stuff!)

      When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.
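
      For the record, the kind of answer SO (and now the AI overviews) serves up for that question is short enough to sketch here (a minimal example, nothing site-specific assumed):

      ```javascript
      // Iterating over the entries of a plain JavaScript object ("dictionary").
      const scores = { alice: 3, bob: 7 };

      // Object.entries yields [key, value] pairs you can destructure directly.
      for (const [name, score] of Object.entries(scores)) {
        console.log(`${name}: ${score}`);
      }

      // Object.keys / Object.values when you only need one side.
      console.log(Object.keys(scores));   // ["alice", "bob"]
      console.log(Object.values(scores)); // [3, 7]
      ```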

      • indemnity 2 days ago

        It's not a pejorative statement; I used to practically live on Stack Overflow.

        But the killer feature of an LLM is that it can synthesize something based on my exact ask, does a great job of creating a PoC to prove something, and is cheap from a time-investment point of view.

        And it doesn't downvote something as off-topic, or try to use my question as a teaching exercise and tell me I'm doing it wrong, even if I am ;)

      • wonderwonder 2 days ago

        I think that's OP's point though: AI can do it better now. No searching, no looking. Just drop your question into the AI with your exact data or function and 10 seconds later you have a working solution. Stack Overflow is great, but AI is just better for most people.

        Instead of running a Google query or searching Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.

    • ido 2 days ago

      I've honestly never intentionally visited it (as in, gone to the root page and started following links) - it was just where Google sent me when searching for answers to specific technical questions.

      • jv22222 2 days ago

        It became as annoying as Experts Exchange, the very thing it railed against!

    • [removed] 2 days ago
      [deleted]
  • m463 2 days ago

    buggywhips are having a temporary setback.

    • throwaway2037 2 days ago

      I had a "milk-up-the-nose" laughter moment when I read this comment.

    • vkou 2 days ago

      The coach drivers found other work, their horses got turned into glue.

    • balamatom 2 days ago

      leaded gasoline is making a killing, though

jimmydddd 2 days ago

Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.

  • Aurornis 2 days ago

    Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.

    Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.

    • bluGill 2 days ago

      Rejecting clients when you have enough is a sound business decision. Some clients are too annoying to serve. Some clients don't want to pay. Sometimes you have more work than you can do... It is easy to think, when things are bad, that you must take any and all clients (and when things are bad enough you might be forced to), but that is not a good plan and should be avoided. You should be choosing your clients. It is very powerful when you can afford to tell someone you don't need their business.

      • tacker2000 2 days ago

        Sure, but it seems here that they are rejecting everything related to AI, which is probably not a smart business move, as they also remark, since this year was much harder for them.

        The fact is, a lot of new business is getting done in this field, with or without them. If they want to take the "high road", so be it, but they should be prepared to accept the consequences of lower revenues.

        • bluGill 2 days ago

          Is it though? We don't know the future. Is this just a dip in a growing business, or a sign of things to come? Even if AI does better than the most optimistic projections, it could still be great for a few people to be anti-AI if they are in the right place selling to the right people.

          Without knowing the future I cannot answer.

  • satvikpendem 2 days ago

    People who make products with AI are not necessarily rich, often it's solo "vibe coders."