Comment by gwd
> Someday we will have a machine simulate a cat, then the village idiot... This isn't how LLMs work.
I think you misunderstood that argument. The simulate the brain thing isn't a "start from the beginning" argument, it's an "answer a common objection" argument.
Back around 2000, when Nick Bostrom was talking about this sort of thing, computers were nowhere near powerful enough to outsmart a human except in very constrained cases like chess; we didn't have the first clue how to write a computer program that would be even remotely dangerous to us.
Bostrom's point was: "We don't need to know the right computer program; even if we just simulate something we know works -- a biological brain -- we can reach superintelligence in a few decades." The idea was never that people would actually simulate a cat. The idea was that, even if we never think of anything more efficient, we'll at least be able to simulate a cat, then an idiot, then Einstein, then something smarter. And since we almost certainly will think of something more efficient than "simulate a human brain", we should expect superintelligence to come much sooner.
> There is no evidence or argument for exponential growth.
Moore's law is exponential, which is where the "simulate a brain" predictions have come from.
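As a rough illustration of what an exponential trend implies (the doubling period and the 1000x figure below are illustrative assumptions, not measurements from this thread):

```python
def growth_factor(years, doubling_period=2.0):
    """Capability multiplier after `years` under a fixed doubling period,
    i.e. 2 ** (years / doubling_period). The classic Moore's-law framing
    uses a doubling period of roughly two years."""
    return 2 ** (years / doubling_period)

# A 1000x shortfall in compute is closed in about 20 years of steady
# doubling, since 2**10 = 1024 -- which is why exponential trends make
# "superintelligence in a few decades" predictions feel plausible.
print(growth_factor(20))  # 1024.0
```

The whole argument hinges on the doubling period holding steady, which is exactly what the reply below disputes.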
> It is science fiction and leads people to make bad decisions based on fictional evidence.
The only "fictional evidence" you've actually specified so far is the fact that there's no biological analog; and that (it seems to me) is from a misunderstanding of a point someone else was making 20 years ago, not something these particular authors are making.
I think the case for AI caution looks like this:
A. It is possible to create a superintelligent AI.
B. Progress towards a superintelligent AI will be exponential.
C. It is possible that a superintelligent AI will want to do something we wouldn't want it to do; e.g., destroy the whole human race.
D. Such an AI would be likely to succeed.
Your skepticism seems to rest on the fundamental belief that either A or B is false: that superintelligence is not physically possible, or at least that progress towards it will be logarithmic rather than exponential.
Well, maybe that's true and maybe it's not; but how do you know? What justifies your belief that A and/or B is false so strongly that you're willing to risk it? And not only willing to risk it, but willing to try to stop people who are trying to think about what we'd do if they are true?
What evidence would cause you to re-evaluate that belief, and consider exponential progress towards superintelligence possible?
And, even if you think A or B is unlikely, doesn't it make sense to at least consider the possibility that they're true, and think about how we'd know and what we could do in response, to prevent C or D?
> Moore's law is exponential, which is where the "simulate a brain" predictions have come from.
To address only one thing in your comment: Moore's law is not a law, it is a trend. It just gets called a law because that's fun. We know there are physical limits to it. This gets into somewhat shaky territory, but it seems that current approaches to compute can't reach the density of compute present in a human brain (or other creatures' brains). Moore's law won't get chips to the point of simulating a human brain in the same space and energy budget as a human brain. A new approach will be needed beyond simply packing more transistors onto a chip -- which is analogous to my view that current AI technology is insufficient to do what human brains do, even taken to its limit (a limit significantly beyond where it is today).
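A back-of-envelope version of the density point, using widely cited order-of-magnitude estimates rather than anything from this thread (both the synapse count and the transistor count below are rough assumptions):

```python
import math

# Rough, widely cited orders of magnitude -- illustrative assumptions only:
SYNAPSES = 1e14       # low-end estimate of synapses in a human brain (~20 W)
TRANSISTORS = 1e11    # order of magnitude for a large modern chip (far more watts)

# Even on the generous assumption that one transistor does the work of one
# synapse, a chip is about three orders of magnitude short on raw count --
# roughly ten more Moore's-law doublings -- before the energy gap is counted.
shortfall = SYNAPSES / TRANSISTORS
doublings_needed = math.log2(shortfall)
print(f"shortfall: {shortfall:.0e}, doublings needed: {doublings_needed:.1f}")
```

Whether those remaining doublings actually happen on silicon is precisely the point of disagreement.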