jordanb 4 hours ago

There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.

A certain group of people have something wrong with their brains that means they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people; he feels ashamed of his disability and of how everyone around him effortlessly knows things he has to struggle to learn.

He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.

godelski 2 hours ago

I think the comparison to giving change is a good one, especially given how frequently the LLM hype crowd uses the fictitious "calculator in your pocket" story. I've been in the exact situation you've described, long before LLMs came out, and cashiers have had calculators in front of them for longer than we've had smartphones.

I'll add another analogy. I tell people that when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated". It's a 3-step process where the hardest thing is multiplying a number by 2 (and usually a 2-digit number...). It's always struck me as odd that the response is that this is too complicated, rather than that it's a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps seems difficult to you, then your math skills are below elementary school level.
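
The three steps are short enough to spell out; a sketch in Python (function name mine):

```python
def quick_tip(bill: float) -> float:
    """godelski's heuristic: round off to the nearest dollar,
    move the decimal place (10%), multiply by 2 (roughly 20%,
    landing near the stated 18% ballpark after rounding)."""
    rounded = round(bill)        # step 1: round to the nearest dollar
    ten_percent = rounded / 10   # step 2: move the decimal place
    return ten_percent * 2       # step 3: multiply by 2

print(quick_tip(47.60))  # 9.6, vs. an exact 18% tip of about 8.57
```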

I also see a problem with how we look at math and coding. I hear so often that "abstraction is bad", yet that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is part of what makes humans human. All creatures abstract; it is a necessary component of intelligence, but humans certainly have a unique capacity for it. Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we are unfortunately willing to put significantly more effort into justifying our laziness than into simply not being lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking, be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.

So I think AI is going to be no different from calculators, as you suggest. They can be great tools to help people do so much. But it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.

  • userbinator an hour ago

    I briefly taught a beginner CS course over a decade ago, and at the time it was already surprising and disappointing how many of my students would reach for a calculator to do single-digit arithmetic; something that was a requirement to be committed to memory when I was still in school. Not surprisingly, teaching them binary and hex was extremely frustrating.

    > I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated".

    I would tell others to "shift right once, then divide by 2 and add" for 15%, and get the same response.
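
The same idea in code (a sketch; the "shift" here is a decimal shift, dropping the last digit):

```python
def tip_15(bill: float) -> float:
    """userbinator's rule: 'shift right once' (take 10%),
    then 'divide by 2 and add' (10% + 5% = 15%)."""
    shifted = round(bill) / 10    # shift right once: 10%
    return shifted + shifted / 2  # add half of it back: 15%

print(tip_15(40))  # 6.0, exactly 15% of $40
```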

    However, I'm not so sure about the claim that it's a problem to think abstraction is bad. Yes, abstraction is bad, because it is a way to hide and obscure the actual details, and one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.

noduerme 8 hours ago

I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?

The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.

So imagine teaching an architecture student to draw plans for a house with a calculator that spat out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan; you'd also have a student who'd failed to learn anything useful.

  • hamasho 7 hours ago

      > The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true.
    
    This really resonates with me. If calculators returned correct answers even 99.9% of the time, it would be impossible to reliably build even small buildings with them. We are using AI for a lot of small tasks inside big systems, or even for designing entire architectures, and we still need to validate the answers ourselves, at least for the foreseeable future. But outsourcing thinking erodes the very brainpower needed for that validation, because validating often requires understanding a problem's detailed structure and the reasoning path behind a solution.

    In the current situation, by vibing and YOLOing our way through most problems, we are losing the very ability we still need, an ability we can't replace with AI or other tools.
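
The 99.9% point scales badly: if each of n independent steps in a system is correct with probability p, the whole system is only correct with probability p^n. A back-of-the-envelope sketch (numbers illustrative):

```python
# If each small task inside a big system is correct with probability p,
# and the errors are independent, the whole system is only correct
# with probability p ** n, which collapses as n grows.
p = 0.999  # a "99.9% correct" calculator
for n in (10, 100, 1000):
    print(f"{n} tasks: {p ** n:.3f}")
# 1000 tasks at 99.9% each leaves only about a 37% chance
# that everything came out right.
```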

    • chickensong an hour ago

      If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but also, you have something.

      I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.

      Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might actually end up building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.

      Obviously whatever gets built by a noob isn't likely to be of the same caliber as a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.

      Contrast that with high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted.

      If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.

    • zephen 6 hours ago

      > If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.

      I think past successes have led to a category error in the thinking of a lot of people.

      For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware.

      But mitigated hardware errors, whether equipment failures, alpha particles, or other, are uncorrelated.

      If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine.

      But three seemingly uncorrelated LLMs? No fucking way.
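
The three-calculator point can be made concrete. With independent failures at rate q, a 2-of-3 cross-check only goes wrong when at least two units fail on the same input (a sketch; it optimistically assumes two failing units never agree on the same wrong answer):

```python
# Chance that a 2-of-3 vote among independent calculators fails,
# i.e. at least two of the three err simultaneously.
q = 1e-4  # each calculator is wrong 0.01% of the time
p_majority_fails = 3 * q**2 * (1 - q) + q**3
print(p_majority_fails)  # roughly 3e-8, about one bad answer in 33 million
# With correlated errors (same training data, same biases), the q**2
# term stops being tiny and the cross-check buys you almost nothing.
```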

      • noduerme 3 hours ago

        There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post.

      • firejake308 3 hours ago

        The LLMs are not uncorrelated, though; they're all trained on the same dataset (the internet) and subject to most of the same biases.

  • knollimar 5 hours ago

    It's funny, I'm working on trying to get LLMs to place electrical devices, and one silently developed the opinion that my switches above countertops should be at 4 feet, not the 3'10" I'm asking for (the top cannot be above 4').

    • noduerme 3 hours ago

      That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it.

  • MrDarcy 8 hours ago

    On the other hand the incorrect values may drive architects to think more critically about what their tools are producing.

    • noduerme 3 hours ago

      On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.

roenxi 8 hours ago

That would have had devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or a disadvantage going forward. It is like observing that cars will make people fat and lazy, with devastating consequences for health outcomes - that is exactly what happened, but the net impact was still probably positive, because cars boost wealth, lifestyles, and access to healthcare so much that it outweighs people getting less exercise.

It is unclear that a human thinking about things is going to be an advantage in 10, 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scaleable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.

  • gdulli 8 hours ago

    If you told me this was a verbatim cautionary sci-fi short story from 1953 I'd believe it.

    • Joker_vD 4 hours ago

      At long last, we have created the Torment Nexus from classic sci-fi novel "Don't Create The Torment Nexus"!

    • peyton 8 hours ago

      Eh 1953 was more about what’s going to happen to the people left behind, e.g. Childhood’s End. The vast majority of people will be better off having the market-winning AI tell them what to do.

      • beedeebeedee 7 hours ago

        Or how about the vast majority getting a decent education and a higher standard of living, so they can spend time learning and thinking on their own? You and a lot of folks seem to take for granted our unjust economy and its consequences, when we could easily change it.

jakubtomanik 9 hours ago

I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history vast numbers of people were happy to outsource their thinking, and even to pay to do so. We just used to call those arrangements religions.

  • [removed] 9 hours ago
    [deleted]
  • peyton 8 hours ago

    That’s a bit cynical. Religion is more like a technology. It was continuously invented to solve problems and increase capacity. Newer religions superseded older and survived based on productive and coercive supremacy.

    • noduerme 7 hours ago

      If religion is a technology, it's inarguably one that prevented the development of a lot of other technologies for long periods of time. Whether that was a good thing is open to interpretation.

      • kjkjadksj 5 hours ago

        On the other hand it produced a lot of related technology. Calendars, mathematics, writing, agricultural practices, government and economic systems. Most of this stuff emerged as an effort to document and proliferate spiritual ideas.

rco8786 8 hours ago

I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.

  • vjvjvjvjghv 4 hours ago

    You could argue that a lot of the people who grew up with calculators have lost any kind of mathematical intuition. I am always horrified by how bad a lot of people are with simple math, interest rates, and other things. This has definitely opened up a lot of opportunities for companies to exploit that ignorance.

  • kjkjadksj 5 hours ago

    The difference is a calculator always returns 2+2=4. And even then if you ended up with 6 instead of 4, the fact you know how to do addition already leads you to believe you fat fingered the last entry and that 2+2 does not equal 6.

    Can't say the same for an LLM. Our teachers were right about the internet too, of course. If you remember those early internet wild-west school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say "cite from these works or references we discussed in class" or they'd get junk back.

  • zephen 6 hours ago

    To some extent, the argument against calculators is perfectly valid.

    The cash register says you owe $16.23, you give the cashier $21.28, and all hell breaks loose.
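
The register scene in numbers, for anyone who hasn't worked a till: the customer overpays in a specific pattern so the change comes back in clean denominations.

```python
# Why hand over $21.28 on a $16.23 bill: the odd-looking overpayment
# turns the change into clean denominations. Working in cents to
# avoid floating-point surprises.
owed, given = 1623, 2128
change = given - owed
print(change)  # 505 cents: one $5 bill and one nickel
# A plain $20 would mean 377 cents back: three ones and a
# fistful of coins. The heuristic is obvious to a human who can
# subtract, and baffling to one who only trusts the register.
```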

[removed] 8 hours ago
[deleted]
benSaiyen 7 hours ago

Too late. Outsourcing has already accomplished this.

No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.

The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.

The US benefited from a social network topology of small businesses, with no single business being a linchpin whose failure would implode everything.

Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.

I argued as hard as I could against shipping electronics manufacturing overseas so the next generation would learn real engineering skills. But 20 something me had no idea how far up the political tree the decision was made back then. I helped train a bunch of people's replacements before the telecom focused network hardware manufacturer I worked for then shut down.

American tech workers are now primarily cloud configurators and that's being automated away.

This is a decades long play on the part of aging leadership to ensure Americans feel their only choice is capitulate.

What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.

And some pretty well connected people are hinting at similar sense of what's wrong: https://www.barchart.com/story/news/36862423/weve-done-our-c...