Comment by delichon 2 days ago

45 replies

> but then again if you'd showed me an RPi5 back in 1977 I would have said "nah, impossible" so who knows?

I was reading lots of scifi in 1977, so I may have tried to talk to the pi like Scotty trying to talk to the mouse in Star Trek IV. And since you can run an LLM and text to speech on an RPi5, it might have answered.

JdeBP a day ago

You should have been watching lots of SciFi, too. (-:

I have a Raspberry Pi in a translucent "modular case" from the PiHut.

* https://thepihut.com/products/modular-raspberry-pi-4-case-cl...

It is very close to the same size and appearance as the "key" for Orac in Blake's 7.

I have so far resisted the temptation to slap it on top of a Really Useful Box and play the buzzing noise.

* https://youtube.com/watch?v=XOd1WkUcRzY

Obviously not even Avon figured out that the main box of Orac was a distraction, a fancy base station to hold the power supply, WiFi antenna, GPS receiver, and some Christmas tree lights, and all of the computational power was really in the activation key.

The amusing thing is that that is not the only 1970s SciFi telly prop that could become almost real today. It shouldn't be hard -- all of the components exist -- to make an actual Space 1999 commlock; not just a good impression of one, but a functioning one that could do teleconferencing over a LAN, IR control for doors and tellies and stuff, and remote computer access.

Not quite in time for 1999, alas. (-:

* https://mastodonapp.uk/@JdeBP/114590229374309238

rahen 2 days ago

No need for an RPi 5. Back in 1982, a dual- or quad-CPU X-MP could have run a small LLM, say one with 200–300K weights, without trouble. The Crays were, ironically, very well suited to neural networks; we just didn’t know it yet. Such an LLM could have handled grammar and code autocompletion, basic linting, or documentation queries and summarization. By the late 80s, a Y-MP might even have been enough to support a small conversational agent.

A modest PDP-11/34 cluster with AP-120 vector coprocessors might even have served as a cheaper pathfinder in the late 70s for labs and companies who couldn't afford a Cray 1 and its infrastructure.

But we lacked both the data and the concepts. Massive, curated datasets (and backpropagation!) weren’t even a thing until the late 80s or 90s. And even then, they ran on far less powerful hardware than the Crays. Ideas and concepts were the limiting factor, not the hardware.

  • fentonc 18 hours ago

    I think a quad-CPU X-MP is probably the first computer that could have run (not trained!) a reasonably impressive LLM, if you could magically transport one back in time. It supported a 4 GB (512 MWord) SRAM-based "Solid State Drive" with a transfer bandwidth of 2 GB/s, and delivered about 800 MFLOPS of CPU performance on something like a big matmul. You could probably run a 7B-parameter model with 4-bit quantization on it with careful programming, and get a token every couple of seconds.
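    A back-of-envelope sketch of that latency claim (my arithmetic; only the 2 GB/s bandwidth and the 7B/4-bit figures come from the comment above): a decoder has to stream every weight once per generated token, so the SSD transfer rate alone puts a hard floor on per-token latency.

```python
# Bandwidth floor on per-token latency for a weight-streaming LLM
# decoder, using the X-MP figures quoted in the comment above.
params = 7e9              # 7B parameters
bytes_per_param = 0.5     # 4-bit quantization
bandwidth = 2e9           # 2 GB/s SRAM "Solid State Drive"

weight_bytes = params * bytes_per_param    # bytes of weights to stream
floor_s = weight_bytes / bandwidth         # seconds per token, at best

print(f"{weight_bytes / 1e9:.1f} GB of weights")   # 3.5 GB
print(f">= {floor_s:.2f} s per token")             # >= 1.75 s
```

    The ~800 MFLOPS compute ceiling is a separate constraint; this only shows that even the memory path alone limits you to well under a token per second, i.e. roughly the "token every couple of seconds" regime.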

  • adwn a day ago

    > a small LLM, say, with 200–300K weights

    A "small Large Language Model", you say? So a "Language Model"? ;-)

    > Such an LLM could have handled grammar and code autocompletion, basic linting, or documentation queries and summarization.

    No, not even close. You're off by 3 orders of magnitude if you want even the most basic text understanding, 4 OOM if you want anything slightly more complex (like code autocompletion), and 5–6 OOM for good speech recognition and generation. Hardware was very much a limiting factor.

    • rahen a day ago

      I would have thought the same, but EXO Labs showed otherwise by getting a 300K-parameter LLM to run on a Pentium II with only 128 MB of RAM at about 50 tokens per second. The X-MP was in the same ballpark, with the added benefit of native vector processing (not just some extension bolted onto a scalar CPU), which performs very well on matmul.

      https://www.tomshardware.com/tech-industry/artificial-intell...

      John Carmack has also hinted at this: we might have had AI decades earlier. Obviously not large GPT-4-class models, but useful language reasoning at a small scale was possible. The hardware wasn't that far off; the software and incentives were.

      https://x.com/ID_AA_Carmack/status/1911872001507016826
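      Rough arithmetic (my own sketch; only the 300K-parameter and 50 tokens/s figures come from the EXO result above) shows why such a tiny model is cheap to run: at roughly 2 FLOPs per weight per token, the whole workload is on the order of tens of MFLOP/s.

```python
# FLOP/s needed to decode with a 300K-parameter model at 50 tok/s,
# assuming ~2 FLOPs (one multiply + one add) per parameter per token.
params = 300_000
flops_per_param_per_token = 2
tokens_per_second = 50

mflops = params * flops_per_param_per_token * tokens_per_second / 1e6
print(f"~{mflops:.0f} MFLOP/s sustained")   # ~30 MFLOP/s
```

      This ignores attention over the context and embedding lookups, so it is order-of-magnitude only, but tens of MFLOP/s is comfortably within reach of a Pentium II, never mind an X-MP's vector units.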

      • adwn a day ago

        > EXO Labs showed otherwise by getting a 300K-parameter LLM to run on a Pentium II with only 128 MB of RAM at about 50 tokens per second

        50 token/s is completely useless if the tokens themselves are useless. Just look at the "story" generated by the model presented in your link: Each individual sentence is somewhat grammatically correct, but they have next to nothing to do with each other, they make absolutely no sense. Take this, for example:

        "I lost my broken broke in my cold rock. It is okay, you can't."

        Good luck tuning this for turn-based conversations, let alone for solving any practical task. This model is so restricted that you couldn't even benchmark its performance, because it wouldn't be able to follow the simplest of instructions.

Mountain_Skies 2 days ago

Someday real soon, kids being shown episodes of 'Knight Rider' by their grandparents won't understand why a talking car was so futuristic.

  • KineticLensman 2 days ago

    Like James Bond's Aston Martin with a satnav/tracking device in 1964's Goldfinger. Kids would know what that was but they might not understand why Bond had to continually shift some sort of stick to change the car's gear.

    • anthk a day ago

      Gear shifting is still a thing in Europe, and mandatory if you want to get your driver's license.

      • prmoustache 21 hours ago

        You can get a driver's license with an automatic, but it just means you can only drive automatics.

        It would have been a huge deal not to be able to drive manuals 20 years ago, but with hybrids and EVs all being automatic, it's not much of a downside nowadays unless you want to buy an old car or borrow a friend's. Most rental fleets have automatics available these days.

      • inkyoto 11 hours ago

        At this point, it is a historical artefact that will cease to exist soon enough.

        Electric vehicles do not have multi-speed gearboxes, so there is nothing to shift up or down. The few performance EVs that have been announced (and maybe even released) with a gear stick do so for nostalgic reasons; the gear shift and the accompanying experience are simulated entirely in software.

  • qgin a day ago

    It’s impossible to explain to kids now why it was funny on Seinfeld when Kramer pretended to be MoviePhone and said “why don’t you just tell me the name of the movie you selected!”

  • tsoukase a day ago

    I grew up watching KITT, and when I watched it again a few days ago, I didn't feel anything. Much less my kids.

  • Havoc 2 days ago

    Tried explaining what a Tamagotchi was to someone recently. Looks of utter bewilderment

    • azeirah a day ago

      Really? Tamagotchis seem to be one of those things that have charm beyond straight up nostalgia :o

  • dizhn 2 days ago

    Kitt was funny though. (For its time)

  • sublinear 2 days ago

    Was that point not almost a decade ago?

    • Mountain_Skies 2 days ago

      Not really. My 1983 Datsun would talk, but it couldn't converse. Alexa and Siri couldn't hold a conversation anywhere near the level KITT did. There's a big difference. With LLMs, we're getting close.

  • hulitu a day ago

    > Someday real soon, kids being shown episodes of 'Knight Rider' by their grandparents won't understand why a talking car was so futuristic.

    Maybe in 100 years. The talking car was more intelligent than Siri, Alexa or Hey Google.

    It is not that we are unable to "talk" to computers; it is that we "talk" with computers only so that they can collect more data about us. Their "intelligence" is limited to simple text understanding.

    • olddustytrail a day ago

      I think maybe you missed the last three years. We're not talking about Alexa or Hey Google level.

      We're talking about Google Gemini or ChatGPT.

  • heelix 2 days ago

    The self driving aspect, amazingly, is already here and considered mundane.

    • DrillShopper 2 days ago

      Oh really? What vehicle can I buy today, drive home, get twice the legal limit drunk, flop in the back alone to take a nap while my car drives me two hours away to a relative's house?

      I'd really like to buy that car so I await your response.

      • ptero 2 days ago

        That's a jurisdiction problem, not a technology problem. No tech is foolproof, but even with current technology someone would be much safer (for others, too) in the back seat than trying to drive tired, borderline DUI, at night, in an unfamiliar town. Which many folks regularly do, for example on business travel.

        The reason I cannot do this today is laws, not technology. My 2c.

        • DrillShopper 14 hours ago

          The claim is that self driving is mundane - something everyone can have if they want. A standard feature, so entwined in the background of life that it is unremarkable.

          The fact that there is no system out there that I can own, jump into the back of in no condition to drive, and still reach my destination safely defeats that claim. It's not even so mundane that everyone has the anemic Tesla self-driving feature that runs over kids and slams into highway barriers.

          It may also be a matter of laws, but the underlying tech is still not there, given all the warnings every current "self-driving car" system gives about having to pay attention to the road and keep your hands on the wheel, even where no such laws apply.

          Could I get behind the wheel of my self driving car, drunk, and make it there safely? No, I definitely couldn't, and I understand why those laws exist with all of the existing failure modes of self driving cars.

          People have called the current state of LLMs "sparkling AutoComplete". The current state of "self-driving cars" is "sparkling lane assist" with a chaser of adaptive cruise control.

      • dmd 2 days ago

        The only thing stopping a Waymo from doing that is laws.