Comment by throwaway13337 12 hours ago

45 replies

I'm amazed at this question and the responses you're getting.

These last few years, I've noticed that the tone around AI on HN changes quite a bit by waking time zone.

EU waking hours have comments that seem disconnected from genAI. And, while the US hours show a lot of resistance, it's more fear than a feeling that the tools are worthless.

It's really puzzling to me. This is the first time I've noticed such a disconnect in the community about what the reality of things is.

To answer your question personally, genAI has changed the way I code drastically about every 6 months in the last two years. The subtle capability differences change what sorts of problems I can offload. The tasks I can trust them with get larger and larger.

It started with better autocomplete, and now, well, agents are writing new features as I write this comment.

GoatInGrey 10 hours ago

The main line of contention is how much autonomy these agents are capable of handling in a competitive environment. One side generally argues that they should be fully driven by humans (i.e. offloading tedious tasks you know the exact output of but want to save time not doing) while the other side generally argues that AI agents should handle tasks end-to-end with minimal oversight.

Both sides have valid observations in their experiences and circumstances. And perhaps this is simply another engineering "it depends" phenomenon.

bdangubic 12 hours ago

the disconnect is quite simple: there are professionals who are willing to put the time in to learn, and then there's the vast majority who won't, and will bitch and moan about how it is shit etc. if you can't get these tools to make your job easier and more productive you ought to be looking for a different career…

  • overfeed 11 hours ago

    You're not doing yourself any favors by labeling people who disagree with you as undereducated or uninformed. There are enough over-hyped products/techniques/models/magical-thinking to warrant skepticism. At the root of this thread is an argument (paraphrasing) encouraging people to just wait until someone solves the major problems instead of tackling them themselves. This is a broad statement of faith, if I've ever seen one, in a very religious sense: "Worry not, the researchers and foundation models will provide."

    My skepticism, and my intuition that AI innovations are not exponential but sigmoid, are not because I don't understand what gradient descent, transformers, RAG, CoT, or multi-head attention are. My statement of faith is: the ROI economics are going to catch up with the exuberance well before AGI/ASI is achieved; sure, you're getting improving agents for now, but that's not going to justify the 12- or 13-digit USD investments. The music will stop, and improvements will slow to a drip.

    Edit: I think at its root, the argument is between folk who think AI will follow the same curve as past technological trends, and those who believe "it's different this time".

    • bdangubic 10 hours ago

      > labeling people who disagree with you undereducated or uninformed

      I did neither of these two things... :) I personally could not care less about

      - (over)hype

      - 12/13/14/15 ... digit USD investment

      - exponential vs. sigmoid

      There are basically two groups of industry folk:

      1. those that see technology as absolutely transformational and are already doing amazeballs shit with it

      2. those that argue how it is bad/not-exponential/ROI/...

      If I were a professional (I am), I would do everything in my power to learn everything there is to learn (and then some) and join Group #1. But it is easier to be in Group #2, as being in Group #1 requires time and effort and frustrations and throwing your laptop out the window and ... :)

      • wat10000 2 hours ago

        I see the first half of group 1, but where's the second half? Don't get me wrong, there's some cool and interesting stuff in this space, but nothing I'd describe as close to "amazeballs shit."

      • gmm1990 9 hours ago

        If there is really amazing stuff happening with this technology, how did we have two recent major outages that were caused by embarrassing problems? I would guess that, at least in the Cloudflare instance, some of the responsible code was AI generated.

      • overfeed 7 hours ago

        Mutually exclusive groups 1 and 2 are a false dichotomy. One can have a grasp of the field, keep up to date with recent papers, have an active Claude subscription, use agents, and still have a net-negative view of "AI" as a whole, considering the false promises, hucksters, charlatans and an impending economic reckoning.

        tl;dr version: having a negative view of the industry is decoupled from one's familiarity with and usage of the tools, or the willingness to learn.

      • __loam 5 hours ago

        Amazeballs shit yet precious little actual products.

    • juped 7 hours ago

      They're not logistic. This is a species of nonsense claim that irks me even more than claiming "capabilities gains are exponential, singularity 2026!": it actually includes the exponential-gains claim and then tacks on epicycles to preempt the lack of singularities.

      Remember, a logistic curve is an exponential (so, roughly, a process whose outputs feed its growth, the classic example being population growth, where more population makes more population) with a carrying capacity (the classic example is again population, where you need to eat to be able to reproduce).

      Singularity 2026 is open and honest, wearing its heart on its sleeve. It's a much more respectable wrong position.
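The exponential-with-carrying-capacity framing above can be sketched numerically; a minimal Python sketch, where the growth rate `r`, carrying capacity `k`, starting value, and Euler step size are all illustrative assumptions rather than anything claimed in the thread:

```python
# Logistic growth is exponential growth with a carrying capacity K:
#   dP/dt = r * P * (1 - P / K)
# For P << K this behaves like pure exponential growth; as P -> K it stalls.

def simulate_logistic(p0=1.0, r=0.1, k=1000.0, steps=2000, dt=1.0):
    """Forward-Euler integration of the logistic ODE."""
    p = p0
    trajectory = [p]
    for _ in range(steps):
        p += r * p * (1 - p / k) * dt
        trajectory.append(p)
    return trajectory

traj = simulate_logistic()

# Early on (P << K) each step multiplies P by roughly (1 + r): the
# curve is indistinguishable from an exponential.
early_ratio = traj[1] / traj[0]

# Late in the run, growth stalls as P approaches the carrying capacity K.
late_ratio = traj[-1] / traj[-2]
```

The point of contention in the thread is only where on this curve current AI capability sits, since the early segment of a sigmoid and a true exponential look identical.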

  • siva7 11 hours ago

    It's disheartening. I have a colleague, very senior, who dislikes AI for a myriad of reasons and doesn't want to adapt unless forced by management. I feel that from 2022-2024 the majority of my colleagues were in this camp - either afraid of AI, or looking at it as not something a "real" developer would ever use. In 2025 that seemed to change a bit. American HN seemed to adapt more quickly, while EU companies are still lacking the foresight to see what is happening on the grand scale.

    • wat10000 2 hours ago

      I'm pretty senior and I just don't find it very useful. It is useful for certain things (deep code search, writing non-production helper scripts, etc.) and I'm happy to use it for those things, but it still seems like a long way off for it to be able to really change things. I don't foresee any of my coworkers being left behind if they don't adopt it.

[removed] 11 hours ago
[deleted]
GiorgioG 12 hours ago

Despite the latest and greatest models…I still see glaring logic errors in the code produced for anything beyond basic CRUD apps. They still make up fields that don't exist and assign nonsensical values to variables. I'll give you an example: in the code in question, Codex assigned a required field LoanAmount a value from a variable called assessedFeeAmount…simply because, as far as I can tell, it had no idea how to get the correct value from the current function/class.
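The failure mode described above can be sketched as follows; this is a hypothetical reconstruction, not the poster's actual code, and the class, function, and field names are assumptions made up for illustration:

```python
# Hypothetical sketch of the mis-assignment pattern described above:
# a required field gets filled with the nearest plausible-looking
# numeric variable in scope, rather than the semantically correct one.

class FeeAssessment:
    def __init__(self, assessed_fee_amount: float):
        self.assessed_fee_amount = assessed_fee_amount

def build_loan_request(assessment: FeeAssessment) -> dict:
    # The correct loan amount is simply not available in this scope,
    # so a code-generating model "fills" the required field with the
    # only number it can see -- it type-checks, but it's wrong.
    return {
        "LoanAmount": assessment.assessed_fee_amount,  # bug: fee, not loan
    }
```

The code runs and passes a type checker, which is exactly why this class of error is hard to catch in review.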

  • lbreakjai 10 hours ago

    That's why I don't get people who claim to be letting an agent run for an hour on some task. LLMs tend to make so many small errors like that, which are so hard to catch if you aren't super careful.

    I wouldn't want to have to review the output of an agent going wild for an hour.

    • snoman 7 hours ago

      Who says anyone’s reviewing anything? I’m seeing more and more influencers and YouTubers playing engineer or just buying an app from an overseas app farm. Do you think anyone in that chain gives the first shit what the code is like?

      It’s the worst kind of disposable software.

nickphx 12 hours ago

ai is useless. anyone claiming otherwise is dishonest

  • la_fayette 11 hours ago

    I use GenAI for text translation, text-to-voice and voice-to-text, where it is extremely useful. For coding I often have the feeling it is useless, but sometimes it is useful, like most tools...

  • whattheheckheck 12 hours ago

    What are you doing at your job that AI can't help with at all, to consider it completely useless?

  • ghurtado 12 hours ago

    That could even be argued (with an honest interlocutor, which you clearly are not)

    The usefulness of your comment, on the other hand, is beyond any discussion.

    "Anyone who disagrees with me is dishonest" is some kindergarten-level logic.

  • ulfw 12 hours ago

    [Deleted as Hackernews is not for discussion of divergent opinions]

    • wiseowise 11 hours ago

      > It's not useless but it's not good for humanity as a whole.

      Ridiculous statement. Is Google also not good for humanity as a whole? Is Internet not good for humanity as a whole? Wikipedia?

      • Nition 6 hours ago

        Chlorofluorocarbons, microplastics, UX dark patterns, mass surveillance, planned obsolescence, fossil fuels, TikTok, ultra-processed food, antibiotic overuse in livestock, nuclear weapons.

        It's a defensible claim, I think. Things that people want are not always good for humanity as a whole; therefore things can be useful and also not good for humanity as a whole.

      • [removed] 11 hours ago
        [deleted]
the_mitsuhiko 12 hours ago

> EU waking hours have comments that seem disconnected from genAI. And, while the US hours show a lot of resistance, it's more fear than a feeling that the tools are worthless.

I don't think it's because the audience is different but because the moderators are asleep when Europeans are up. There are certain topics which don't really survive on the frontpage when moderators are active.

  • jagged-chisel 12 hours ago

    I'm unsure how you're using "moderators." We, the audience, are all 'moderators' if we have the karma. The operators of the site are pretty hands-off as far as content in general.

    This would mean it is because the audience is different.

    • the_mitsuhiko 11 hours ago

      I’m referring to the actual moderators of this website removing posts from the front page.

      • verdverm 8 hours ago

        that's a conspiracy theory

        The far more common action is for the mods to restore a story that has been flagged to oblivion by a subset of the HN community, where it then lands on the front page because it already has sufficient pointage.

    • uoaei 12 hours ago

      The people who "operate" the website are different from the people who "moderate" the website but both are paid positions.

      This frou-frou about how "we all play a part" only serves to obscure the reality.

      • delinka 12 hours ago

        I'm sure this site works quite differently from what you say. There's no paid team of moderators flicking stories and comments off the site because management doesn't like them.

        There's dang who I've seen edit headlines to match the site rules. Then there's the army of users upvoting and flagging stories, voting (up and down) and flagging comments. If you have some data to backup your sentiments, please do share it - we'd certainly like to evaluate it.

      • throwaway13337 12 hours ago

        As an anonymous coward on HN for at least a decade, I'd say that's not really true.

        When Paul Graham was more active and respected here, I spoke negatively about how revered he was. I was upvoted.

        I also think VC-backed companies are not good for society, and have expressed as much, to positive response here.

        We shouldn't shit on one of the few bastions of the internet we have left.

        I regret my negativity around pg - he was right about a lot and seems to be a good guy.

  • jamesblonde 10 hours ago

    Anything about sovereign AI or whatever is gone immediately when the mods wake up. Got an EU cloud article? Publish it at 11am CET; it disappears around 12:30.