Comment by behnamoh 5 days ago

    5 years ago: ML auto-complete → You had to learn coding in depth
    Last Year: AI-generated suggestions → You had to be an expert to ask the right questions
    Now: AI-generated code → You should learn how to be a PM
    Future: AI-generated companies → You must learn how to be a CEO
    Meta-future: AI-generated conglomerates → ?
Recently I realized that instead of just learning technical skills, I need to learn management skills. Specifically, project management, time management, writing specifications, setting expectations, writing tests, and in general, handling and orchestrating an entire workflow.

And I think this will only shift to the higher levels of the management hierarchy in the future. For example, in the future we will have AI models that can one-shot an entire platform like Twitter. Then the question is less about how to handle a database and more about how to handle several AI generated companies!

While we're at the project manager level now, in the future we'll be at the CEO level. It's an interesting thing to think about.

jplusequalt 5 days ago

>While we're at the project manager level now, in the future we'll be at the CEO level.

This is the kind of half-baked thought that seems profound to a certain kind of tech-brained poster on HN, but upon further consideration makes absolutely zero sense.

  • behnamoh 5 days ago

    thanks for your intellectual contribution to the HN community.

    • jplusequalt 5 days ago

      I think calling out ill-thought-out comments is a public service, especially because many people who read these comment sections are not engineers.

      • behnamoh 5 days ago

        I was being sarcastic; your comment was a low blow. You didn't say why you disagreed with it. Might wanna read HN guidelines before leaving comments here.

        @dang

        • hxugufjfjf 5 days ago

          I'm not sure you can tag dang like that, but I don't think it's against the rules either.

andyfilms1 5 days ago

I've never understood this train of thought. When working in teams and for clients, people always have questions about what we have created. "Why did you choose to implement it like this?" "How does this work?" "Is X possible to do within our timeframe/budget?"

If you become just a manager, you don't have answers to these questions. You can just ask the AI agent for the answer, but at that point, what value are you actually providing to the whole process?

And what happens when, inevitably, the agent responds to your question with "You're absolutely right, I didn't consider that possibility! Let's redo the entire project to account for this"? How do you communicate that to your peers or clients?

  • fatherwavelet 4 days ago

    I think you are not using enough imagination.

    It would not be shocking at all if in 10 years, "Let's redo the entire project to account for this" is exactly how things work.

    Or let's make 3 or 4 versions of the project and see which one the customer likes best.

    Or each decision point of the customer becomes multiple iterations of the project, with the project starting from scratch each time.

    Of course, at some point there might not be a customer in this context. The "customer" that can't handle this internally might no longer be a viable business.

    "You're absolutely right" feels so summer 2025 to me.

candiddevmike 5 days ago

If AI gets to be this sophisticated, what value would you bring to the table in these scenarios?

  • behnamoh 5 days ago

    > what value would you bring to the table in these scenarios?

    I bring the table, AI brings the value.

    • candiddevmike 5 days ago

      So... nothing. Glad we're in agreement here. If AI can do all the things people hope/dream it can, there won't be any value in doing it on behalf of folks. I would argue that even some "AI provider" (if that could even be a thing given a sophisticated enough agent) would see diminishing returns as the tech inevitably distills into everyone having bespoke agents running locally and handling/organizing/managing everything (of whatever needs managing, who knows).

      Basically I don't see how you can be an AI maximalist and a capitalist at the same time. They're contradictory, IMO.

      • behnamoh 5 days ago

        what value do you bring to the table or to this discussion?

      • fatherwavelet 4 days ago

        I think it becomes more a religious and philosophical question than an economic question. We need to separate economics from the neoliberal religion.

        Byung-Chul Han's Psychopolitics should be a standard text that everyone is discussing right now, but instead we will probably do nothing, and the future will suffer the consequences of our collective intellectual laziness.

        Neoliberal ideas vs. Marxism is just incredibly intellectually lazy. We really need to think about this on the level Marx thought about the industrial revolution, in a new way, without being lazy and falling back on standard Marxist orthodoxy.

        We don't just need a Protestant Reformation, we need an entirely new religion to deal with this. I think that will be too hard, so if I had to bet, my bet would be that we do absolutely nothing.

    • gritspants 5 days ago

      "The value of Juicero is more than a glass of cold-pressed juice. Much more."

  • wordpad 5 days ago

    EVERY developer will own their own hyper-niche SaaS?

layer8 5 days ago

The moment we have true AGD (artificial general developer), we’ll also have AGI that can equally well serve as a CEO. Where humans sit then won’t be a question of intellectual skill differentiation among humans anymore.

philipwhiuk 5 days ago

> more about how to handle several AI generated companies!

The cost of a model capable of running an entire company will be multiples of the market cap of the company it is capable of running.

  • behnamoh 5 days ago

    "AI-generated company" as in the AI writes the A-Z of the code required to have a working platform like Twitter. Currently it can build some of the frontend or some of the backend, but not all. It's conceivable that in the future AI can handle the entire chain.

    Also you're forgetting the decreasing cost of AI, as well as the fact that you can buy a $10k Mac Studio NOW and have it run 24/7 with some of the best models out there. The only costs would be the initial fixed cost and electricity (250W at peak GPU usage).
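That electricity claim is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch in Python, assuming the comment's 250W figure as sustained draw and an illustrative electricity price of $0.15/kWh (the price is an assumption, not from the comment):

```python
# Back-of-envelope running cost for a local AI box.
# 250 W sustained draw and $0.15/kWh are assumptions; plug in your own.

def annual_energy_cost(watts: float, price_per_kwh: float, hours: float = 24 * 365) -> float:
    """Cost of running a device continuously for `hours` at `watts`."""
    kwh = (watts / 1000) * hours  # watts -> kilowatts, times hours = kWh
    return kwh * price_per_kwh

cost = annual_energy_cost(250, 0.15)
# 250 W for 8760 h is 2190 kWh, so on the order of a few hundred dollars/year
print(f"~${cost:.2f}/year")
```

At those assumed rates the 24/7 electricity bill is small next to the $10k fixed cost, which is the point the comment is making.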

    • jplusequalt 5 days ago

      >Also you're forgetting the decreasing cost of AI

      AI is still being heavily subsidized. None of the major players have turned a profit, and they are all having to do 4D Chess levels of financing to afford the capex.

      • behnamoh 5 days ago

        Even if AI subsidies go away, the Mac Studio scenario still holds.

    • knollimar 4 days ago

      Aren't all the models you can run way worse than SOTA closed weight ones?

myth_drannon 5 days ago

No, no companies and no CEOs. Just a user. It's like the Star Trek replicator: food replication. No, you are not a chef, not a restaurant manager, not an agrifarm CEO, but just a user who orders a meal. So yes, you will need "skills" to specify the type of meal, but nothing beyond that.

pksebben 5 days ago

I'd advise caution with this approach. One of the things I see a lot of people getting wrong about AI is expecting that it means they no longer need to understand the tools they're working with ("I can just focus on the business end"). This seems true, but it isn't: it's actually more important to have a deep understanding of how the machine works, because if the AI is doing things you don't understand, you run a severe risk of putting yourself in a very bad situation: insecure applications or servers, code with failure modes that are catastrophic edge cases you won't catch until they're a problem, data loss or leakage.

If anything, managing the project, writing the spec, setting expectations and writing tests are things llms are incredibly well suited for. Getting their work 'correct' and not 'functional enough that you don't know the difference' is where they struggle.

tintor 5 days ago

one-shot doesn't mean what you think it means.

one-shot means you provide one full question/answer example (from the same distribution) in the context given to the LLM.
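In that sense, "k-shot" refers only to how many worked examples the prompt carries, not how many attempts the model gets. A minimal sketch of the difference, using an illustrative English-to-French translation task (the example content is hypothetical):

```python
# "k-shot" counts the worked examples in the prompt, not the attempts.

# Zero-shot: the task description only, no examples.
zero_shot = "Translate English to French: cheese ->"

# One-shot: exactly one full question/answer example precedes the query.
one_shot = (
    "Translate English to French:\n"
    "sea otter -> loutre de mer\n"  # the single in-context example
    "cheese ->"
)
```

Colloquially, "the model one-shotted it" has drifted toward meaning "got it right on the first attempt", which is the usage upthread.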