Comment by ivraatiems a day ago

Though I think it is probably mostly science-fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.

lm28469 15 hours ago

> "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?

We spend the best 40 years of our lives working 40-50 hours a week to enrich the top 0.1% while living in completely artificial cities. People should wonder what the point of our current system is, instead of worrying about a Terminator-tier sci-fi system that may or may not arrive sometime in the next 5 to 200 years.

  • anonzzzies 14 hours ago

    A lot of people around me are not buying this life anymore; young people especially are asking why they should. Unlike in the US, they won't end up under a bridge (barring some real collapse, which can of course happen, but why worry about what might not), so they work simple jobs (data entry or whatnot) to make enough money to eat and party and nothing more. Many of them work no more than a few hours a month. They live rent-free with their parents, and when they have kids they stop partying but generally don't go work more (raising kids is hard work of course, but I mean for money). Many of them will inherit the village house from their parents; they have a garden where they grow food, keep some animals, and make their own booze so they don't have to pay for that. In cities people feel the same ("why would I work for the Ferrari of a boss we never see?"), but there it is much harder not to: life is more expensive, there is no land, and usually no property to inherit (that's in the countryside, or was already sold to avoid working for a year or two).

    Like you say, people, but even more our governments, need to worry about what the point is at this moment, not sci-fi in the future; the present is already bad enough. Working your ass off for diminishing returns, paying into a pension pot that won't survive until you retire, etc., is driving people to focus on the now and ask why they would do these things. If you can just have fun on 500/mo and booze from your garden, why work hard and save up? I notice these sentiments even among people from my birth country, who have it extraordinarily good by EU standards, yet they wonder more and more why they would do all of this for nothing, and they keep cutting hours. It seems more an education and communication problem than anything else; it is like asking why pay taxes: if you are not well informed it might feel like theft, but when you spell it out, most people will see how they benefit.

  • brookst 13 hours ago

    Well said. I keep reading these fearmongering articles and looking around, wondering where all of this deep meaning and human agency is today.

    I’m led to believe that we see this stuff because the tiny subset of humanity that has the wealth and luxury to sit around thinking about themselves is worried that AI may disrupt the navel-gazing industry.

zdragnar 19 hours ago

> What's the point of our existence if we have no way to meaningfully contribute to our own world?

You may find this to be insightful: https://meltingasphalt.com/a-nihilists-guide-to-meaning/

In short, "meaning" is a contextual perception, not a discrete quality, though the author suggests it can be quantified by the number of contextual connections to other things with meaning. The more densely connected something is, the more meaningful it is: my wedding is meaningful to me because my family and my partner's family are all celebrating it with me, but it was an entirely meaningless event to you.

Thus, the meaningfulness of our contributions remains unchanged, as the meaning behind them is not dependent upon the perspective of an external observer.

  • lo_zamoyski 11 hours ago

    People talk about meaning, but they rarely define it.

    Ultimately, "meaning" is a matter of "purpose", and purpose is a matter of having an end, or telos. The end of a thing is dependent on the nature of a thing. Thus, the telos of an oak tree is different from the telos of a squirrel which is different from that of a human being. The telos or end of a thing is a marker of the thing's fulfillment or actualization as the kind of thing it is. A thing's potentiality is structured and ordered toward its end. Actualization of that potential is good, the frustration of actualization is not.

    As human beings, what is most essential to us is that we are rational and social animals. This is why we are miserable when we live lives that are contrary to reason, and why we need others to develop as human beings. The human drama, the human condition, is, in fact, our failure to live rationally, living beneath the dignity of a rational agent, and very often with knowledge of and assent to our irrational deeds. That is, in fact, the very definition of sin: to choose to act in a way one knows one should not. Mistakes aren't sins, even if they are per se evil, because to sin is to knowingly do what you should not (though a refusal to recognize a mistake or to pay for a recognized mistake would constitute a sin). This is why premeditated crimes are far worse than crimes of passion; the first entails a greater knowledge of what one is doing, while someone acting out of intemperance, while still intemperate and thus afflicted with vice, was acting out of impulse rather than fully conscious intent.

    So telos provides the objective ground for the "meaning" of acts. And as you may have noticed, implicitly, it provides the objective basis for morality. To be is synonymous with good, and actualization of potential means to be more fully.

    • nthingtohide 9 hours ago

      Meaning is a matter of context, and most of the context resides in the past and future. Wittgenstein's claim was that a word's meaning depends on how it is used; this applies generally.

      Daniel Dennett - Information & Artificial Intelligence

      https://www.youtube.com/watch?v=arEvPIhOLyQ

      Daniel Dennett bridges the gap between everyday information and Shannon-Weaver information theory by rejecting propositions as idealized meaning units. This fixation on propositions has trapped philosophers in unresolved debates for decades. Instead, Dennett proposes starting with simple biological cases—bacteria responding to gradients—and recognizing that meaning emerges from differences that affect well-being. Human linguistic meaning, while powerful, is merely a specialized case. Neural states can have elaborate meanings without being expressible in sentences. This connects to AI evolution: "good old-fashioned AI" relied on propositional logic but hit limitations, while newer approaches like deep learning extract patterns without explicit meaning representation. Information exists as "differences that make a difference"—physical variations that create correlations and further differences. This framework unifies information from biological responses to human consciousness without requiring translation into canonical propositions.

  • ionwake 18 hours ago

    Please don't be offended by my opinion; I mean it in good humour, to share some strong disagreements. I'm going to give my take after reading your comment and the article, both of which seem completely OTT to me (context-wise, regarding my opinions).

    >meaning behind them is not dependent upon the perspective of an external observer.

    (Yes brother like cmon)

    Regarding the author, I get the impression he grew up without a strong father figure. This isn't ad hominem; I just get the feeling of someone so confused and lost in life that he is severely depressed, possibly related to his directionless life. He seems so confused that he doesn't even take seriously the fact that most humans find their own meaning in life; he says he's not even going to consider this, finding it futile (he states this near the top of the article).

    I believe his rejection of a simple, basic core idea ends up as a verbal blur which is itself directionless.

    My opinion (which, yes, may be more flawed than anyone's) is to deal with Maslow's hierarchy, and then with the prime directive for a living organism after survival, which is reproduction. Only after this has been achieved can you work towards your family, community, and nation.

    This may seem trite, but I do believe that this is natural for someone with a relatively normal childhood.

    My aim is not to disparage; it's to give my honest opinion of why I disagree and possible reasons for it. If you disagree with anything I have said, please correct me.

    Thanks for sharing the article though, it was a good read, and I have struggled with meaning myself sometimes.

    • zdragnar 13 hours ago

      To use a counter example, consider Catholic priests who do not marry or raise children. It would be quite the argument indeed to suggest their lives are without meaning or purpose.

      Aha, you might say, but they hold leadership roles! They have positions of authority! Of course they have meaning, as they wield spiritual responsibility to their community as a fine substitute for the family life they will not have.

      To that, I suggest looking deeper, at the nuns and monks. To a cynical non-believer they are surely wanting for a point to their existence, but to them, what they do is a step beyond Maslow's self-actualization, for they live in communion with God and the saints. Their meditations and good works in the community are all expressions of that purpose, not the other way around. In short, though their "graph of contextual meaning" doesn't spread as far, it is very densely packed indeed.

      Two final thoughts:

      1) I am both aware of and deeply amused by the use of priests and nuns and monks to defend the arguments of a nihilist's search for meaning.

      2) I didn't bring this up to take the conversation off topic so much as to home in on the very heart of what troubled the person I originally responded to. The question of purpose, the point of existence, in the face of superhuman AI is in fact unchanged. The sense of meaning and purpose one finds in life is not found in the eyes of an unfeeling observer, whether those observers are robots or humans. It must come from within.

joshdavham a day ago

> I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be.

For me personally, I hope that we do get AGI. I just don't want it by 2027. That feels way too fast to me. But AGI 2070 or 2100? That sounds much more preferable.

abraxas a day ago

I think, LLM or no LLM, the emergence of intelligence appears to be closely related to the number of synapses in a network, whether biological or digital. If my hypothesis is roughly true, it means we are several orders of magnitude away from AGI, at least the kind of AGI that can be embodied in a fully functional robot with sensory apparatus that rivals the human body. Building circuits of this density is likely to take decades; most probably a transistor-based, silicon substrate can't be pushed that far.

  • joshjob42 21 hours ago

    I think generally the expectation is that there are around 100T synapses in the brain, and of course it's probably not a 1:1 correspondence with neural network parameters, but it doesn't seem infeasible at all to me that a dense-equivalent 100T-parameter model, trained properly, could rival the best humans.

    If it's basically a transformer, that means it needs ~200T flops per token at inference time. The paper assumes humans "think" at ~15 tokens/second, which is about 10 words, similar to the reading speed of a college graduate. So that would be ~3 petaflops of compute.

    Assuming that's fp8, an H100 can do ~4 petaflops, and the authors of AI 2027 guesstimate that purpose-built wafer-scale inference chips circa late 2027 should be able to do ~400 petaflops for inference (~100 H100s' worth) for ~$600k each, including fabrication and installation in a datacenter.

    Rounding, that basically means ~$6k would buy you the compute to "think" at 10 words/second. Generally speaking, that'd probably work out to maybe $3k/yr after depreciation and electricity costs, or ~30-50¢ per hour of "human thought equivalent" at 10 words/second. Running an AI at 50x human speed 24/7 would cost ~$23k/yr, so one OpenBrain researcher's salary could give them a team of ~10-20 such AIs running flat out all the time. Even if you think the AI would need an extra 10x or even 100x in tokens/second to match humans, that still puts genius-level AIs in principle runnable at human speed for 0.1 to 1x the median US income.

    There's an open question whether training such a model is feasible in a few years, but the raw compute capability at the chip level to plausibly run a model that large at enormous speed and low cost already exists (at the street price of B200s, it'd cost ~$2-4 per human-equivalent hour).
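The arithmetic in this estimate can be reproduced directly. Every input below is one of the comment's stated assumptions (parameter count, token rate, chip specs), not a measured fact; note the straight calculation lands a bit under the comment's rounded ~$6k figure.

```python
# Back-of-envelope reproduction of the inference-cost estimate above.
# All inputs are the comment's assumptions, not measured facts.
params = 100e12                     # ~100T dense-equivalent parameters (~synapse count)
flops_per_token = 2 * params        # ~2 FLOPs per parameter per token for a transformer
tokens_per_sec = 15                 # assumed human-equivalent "thinking" speed
flops_per_sec = flops_per_token * tokens_per_sec        # 3e15 FLOP/s = 3 petaflops

chip_flops = 400e15                 # hypothetical late-2027 wafer-scale chip, FLOP/s
chip_cost = 600_000                 # assumed installed cost, USD
cost_per_human_equiv = chip_cost * flops_per_sec / chip_flops   # ≈ $4,500 of capex

yearly_cost = 3_000                 # assumed after depreciation + electricity, USD/yr
hourly = yearly_cost / (365 * 24)   # ≈ $0.34/hr, within the 30-50¢ range quoted
```

The capex share works out to ~$4.5k, which the comment rounds up to ~$6k; the hourly figure falls inside the quoted 30-50¢ band.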

    • brookst 13 hours ago

      Excellent back-of-napkin math, and it feels intuitively right.

      And I think training is similar: training is capital-intensive and therefore centralized, but if 100M people are paying $6k for their inference hardware, add $100/year as a training tax (er, subscription) and you've got $10B/year for training operations.

  • ivraatiems a day ago

    I think there is a good chance you are roughly right. I also think that the "secret sauce" of sapience is probably not something that can be replicated easily with the technology we have now, like LLMs. They're missing contextual awareness and processing which is absolutely necessary for real reasoning.

    But even so, solving that problem feels much more attainable than it used to be.

    • throwup238 21 hours ago

      I think the missing secret sauce is an equivalent to neuroplasticity. Human brains are constantly being rewired and optimized at every level: synapses and their channels undergo long-term potentiation and depression, new connections are formed and useless ones pruned, and the whole system can sometimes remap functions to different parts of the brain when another suffers catastrophic damage. I don’t know enough about the matrix multiplication operations that power LLMs, but it’s hard to imagine how that kind of organic reorganization would be possible with GPU matmuls. It’d require some sort of advanced “self-aware” profile-guided optimization, not just trial-and-error noodling with Torch ops or CUDA kernels.
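A crude software analogue of the pruning-and-regrowth described above does exist in the sparse-training literature (e.g. magnitude pruning with random regrowth, in the spirit of sparse evolutionary training). This is only a sketch of that mechanism, not a claim that it matches biological plasticity; all sizes and rates here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = rng.random((n, n)) < 0.2          # ~20% of possible connections exist
W = rng.normal(size=(n, n)) * mask       # weights only where connected

def prune_and_regrow(W, mask, frac=0.1):
    """Drop the weakest `frac` of existing connections, then open the
    same number of new random ones, keeping total connectivity fixed:
    a rough analogue of synaptic pruning plus new connection formation."""
    active = np.flatnonzero(mask)
    k = max(1, int(frac * active.size))
    # prune: remove the k smallest-magnitude existing connections
    weakest = active[np.argsort(np.abs(W.flat[active]))[:k]]
    mask.flat[weakest] = False
    W.flat[weakest] = 0.0
    # regrow: open k new connections at random inactive positions
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=k, replace=False)
    mask.flat[grown] = True
    W.flat[grown] = rng.normal(scale=0.01, size=k)
    return W, mask

before = int(mask.sum())
W, mask = prune_and_regrow(W, mask)
# connection count is conserved; the wiring pattern is what changes
```

The point of the sketch is that the connectivity budget stays constant while the graph rewires, which is the part GPU-friendly dense matmuls do not naturally express.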

      I assume that thanks to the universal approximation theorem it’s theoretically possible to emulate the physical mechanism, but at what hardware and training cost? I’ve done back-of-the-napkin math on this before [1], and the number of “parameters” in the brain is at least 2-4 orders of magnitude more than state-of-the-art models. But that’s just the current weights; what about the history that actually enables the plasticity? Channel threshold potentials are also continuous rather than discrete, and emulating them might require full fp64, so I’m not sure how we’re even going to get to the memory requirements in the next decade, let alone whether any architecture on the horizon can emulate neuroplasticity.

      Then there’s the whole problem of a true physical feedback loop with which the AI can run experiments to learn against external reward functions. The survival reward function at the core of evolution might itself be critical, but that’s getting deep into the research and philosophy on the nature of intelligence.

      [1] https://news.ycombinator.com/item?id=40313672

      • lblume 9 hours ago

        Transformers already are very flexible. We know that we can basically strip blocks at will, reorder modules, transform their inputs in predictable ways, or obstruct some features, and after a very short period of re-training they will get back to basically the same capabilities they had before. Fascinating stuff.

    • narenm16 a day ago

      i agree. it feels like scaling up these large models is such an inefficient route that it warrants new ideas (test-time compute, etc).

      we'll likely reach a point where it's infeasible for deep learning to completely encompass human-level reasoning, and we'll need neuroscience discoveries to continue progress. altman seems to be hyping up "bigger is better," not just for model parameters but for openai's valuation.

  • baq 17 hours ago

    Exponential growth means the first order of magnitude comes slowly and the last one runs past you unexpectedly.

    • Palmik 16 hours ago

      Exponential growth generally means that the time between each order of magnitude is roughly the same.

      • brookst 13 hours ago

        At the risk of pedantry, is that true? Something that doubles annually sure seems like exponential growth to me, but the orders of magnitude are not at all the same rate. Orders of magnitude are a base-10 construct but IMO exponents don’t have to be 10.

        EDIT: holy crap I just discovered a commonly known thing about exponents and log. Leaving comment here but it is wrong, or at least naive.
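Palmik's point can be checked with one line of arithmetic: at any fixed doubling rate, every 10x takes the same wall-clock time, namely log2(10) doublings. The annual doubling rate here is hypothetical, just to make the numbers concrete.

```python
import math

# At a constant doubling rate, every order of magnitude takes the same
# wall-clock time: log2(10) doublings, regardless of which 10x you're on.
years_per_doubling = 1.0                             # hypothetical: capability doubles yearly
years_per_10x = math.log2(10) * years_per_doubling   # ≈ 3.32 years, every time
```

So with annual doubling, the gap between 1x and 10x and the gap between 1000x and 10000x are both about 3.3 years; the base of the exponent changes the constant, not its constancy.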

  • UltraSane a day ago

    Why can't the compute be remote from the robot? That is a major advantage of human technology over biology.

    • abraxas 21 hours ago

      Mostly latency. But even if a single robot could be driven by a data centre, consider the energy and hardware investment required to make such a creature practical.

      • Jensson 16 hours ago

        1ms latency is more than fast enough; you probably have bigger latency than that between the CPU and the GPU.

        • Symmetry 13 hours ago

          We've got 10ms of latency between our brains and our hands along our nerve fibers and we function all right.

      • UltraSane 5 hours ago

        The Figure robots use a two level control scheme with a fast LLM at 200Hz directly controlling the robot and a slow planning LLM running at 7Hz. This planning LLM could be very far away indeed and still have less than 142.8ms of latency.
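The 142.8ms figure above is just one period of the 7Hz planner. A quick sketch of that budget, using the control rates from the comment (the fiber speed-of-light figure is a standard approximation, and the distance bound is a deliberately loose upper bound that ignores switching and processing delays):

```python
# Rough latency budget implied by the two-level control scheme above.
fast_hz, plan_hz = 200, 7
fast_period_ms = 1000 / fast_hz      # 5 ms per fast-control tick (must stay local)
plan_period_ms = 1000 / plan_hz      # ~142.9 ms per planning tick

# Light in fiber covers roughly 200 km per millisecond, so one planning
# period of round-trip budget allows a one-way distance of very roughly:
km_per_ms_fiber = 200
max_one_way_km = (plan_period_ms / 2) * km_per_ms_fiber   # ~14,000 km
```

In other words, only the 5ms fast loop forces compute near the robot; the planner's tick is long enough that, on raw propagation alone, it could sit most of the way around the planet.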

      • UltraSane 21 hours ago

        Latency would be kept low by keeping the compute nearby. One 1U or 2U server per robot would be reasonable.

TheDong 21 hours ago

> What's the point of our existence if we have no way to meaningfully contribute to our own world?

For a sizable number of humans, we're already there. The vast majority of hacker news users are spending their time trying to make advertisements tempt people into spending money on stuff they don't need. That's an active societal harm. It doesn't contribute in any positive way to the world.

And yet, people are fine to do that, and get their dopamine hits off instagram or arguing online on this cursed site, or watching TV.

More people will have bullshit jobs in this SF story, but a huge number of people already have bullshit jobs, and manage to find a point in their existence just fine.

I, for one, would be happy to simply read books, eat, and die.

  • bshacklett 11 hours ago

    I was hoping someone would bring up Bullshit Jobs. There are definitely a lot of people already spending the majority of their time doing "work" that doesn't have any significant impact on the world. I don't know that some future AI takeover would really change much, except maybe remove some veil of perception around meaningless work.

    At the same time, I wouldn't necessarily say that people are currently fine getting dopamine hits from social media. Coping would probably be a better description. There are a lot of social and societal problems that have been growing at a rapid rate since Facebook and Twitter began tapping into the reward centers of the brain.

    From a purely anecdotal perspective, I find my mood significantly affected by how productive and impactful I am with how I spend my time. I'm much happier when I'm making progress on something, whether it's work or otherwise.

  • john_texas 21 hours ago

    Targeted advertising is about determining and giving people exactly what they need. If successful, this increases consumption and grows the productivity of the economy. It's an extremely meaningful job as it allows for precise, effective distribution of resources.

    • the_gipsy 13 hours ago

      In practice you're just selling shittier or unnecessary stuff. Advertising makes society objectively worse.

baron816 19 hours ago

My vision for an ASI future involves humans living in simulations that are optimized for human experience. That doesn’t mean we just live in a paradise and are happy all the time. We’d experience dread and loss and fear, but it would ultimately lead to a deeply satisfying outcome. And we’d be able to choose to forget things, including whether we’re in a simulation, so that it feels completely indistinguishable from base reality. You’d live indefinitely, experiencing trillions of lifespans where you get to explore the multiverse inside and out.

My solution to the alignment problem is that an ASI could just stick us in tubes deep in the Earth’s crust; it only needs to hijack our nervous systems to input signals from the simulation. The ASI could have the whole rest of the planet, or it could move us to some far-off moon in the outer solar system, I don’t care. It just needs to do two things for its creators: preserve lives and optimize for long-term human experience.

arisAlexis 15 hours ago

do you really think that AGI is impossible, after everything that has happened up to today? how is this possible?