Comment by mikewarot 16 hours ago

My vision for the future includes greatly reducing the power requirements for AI by rethinking computing using first principles thinking. None of the attempts so far has been willing to go far enough and ditch the CPU or RAM. FPGAs got close, but they went insane with switching fabrics and special logic blocks. Now they've added RAM, which is just wrong.

Edit/Append: I've had this idea [1] forever (since the 1990s, possibly earlier... I don't have notes going that far back). Imagine the simplest possible compute element, the look-up table, arranged in a grid. Architectural optimizations I've pondered over time lead me to a 4-bit-in, 4-bit-out look-up table, with latches on all outputs and a clock signal. Latching the outputs prevents race conditions by slowing things down. The gain is that you can now just clock a vast 2D array of these cells with a two-phase clock (like the colors on a chessboard) and you get a universal, Turing-complete computer that you can actually think about without your brain melting down.
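
To make that concrete, here's a rough Python sketch of the idea. The grid size, the random LUT contents, the bit packing of the four directions, and the "edges read as zero" behavior are all placeholders I'm assuming for illustration, not a spec:

    import random

    W, H = 8, 8                       # grid dimensions (arbitrary for this sketch)
    N_, E_, S_, W_ = range(4)         # bit positions for the four neighbor directions

    # Each cell is a 16-entry table mapping a 4-bit input word to a 4-bit output word.
    luts = [[[random.randrange(16) for _ in range(16)] for _ in range(W)]
            for _ in range(H)]

    # Latched outputs: one 4-bit word per cell (bit d is the output facing direction d).
    out = [[0] * W for _ in range(H)]

    def inputs_for(x, y):
        # Build the 4-bit input word for cell (x, y) from its neighbors' latched outputs.
        def bit(nx, ny, d):
            if 0 <= nx < W and 0 <= ny < H:
                return (out[ny][nx] >> d) & 1
            return 0                  # off-grid neighbors read as 0 (an assumption)
        return (bit(x, y - 1, S_) << N_   # north neighbor's south-facing output
              | bit(x + 1, y, W_) << E_
              | bit(x, y + 1, N_) << S_
              | bit(x - 1, y, E_) << W_)

    def tick(phase):
        # Clock every cell of one chessboard color; the other color's latches hold,
        # so no cell ever reads an input that is changing in the same phase.
        for y in range(H):
            for x in range(W):
                if (x + y) % 2 == phase:
                    out[y][x] = luts[y][x][inputs_for(x, y)]

    for _ in range(4):                # one full cycle = both clock phases
        tick(0)
        tick(1)

Because only the cells of one color update in a given phase, every cell reads inputs that were latched in the previous phase, which is where the race-condition immunity comes from.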

The problem (for me) has always been programming it and getting a chip made. Thanks to the latest "vibe coding" stuff, I've gotten out of analysis paralysis and have some things cooking on the software front. The hardware side is addressed by TinyTapeout, so I'll be able to get a very small chip made for a few hundred dollars.

Because the cells connect only to their neighbors, the runs are all short and low capacitance, so you can really, REALLY crank up the clock rates, or save a lot of power. Because the grid is uniform, you won't have the hours- or days-long "routing" problems that you have with FPGAs.

If my estimates are right, it will cut the power requirements for LLM computing by 95%.

[1] Every mention of BitGrid here on HN - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

mindcrime 16 hours ago

> greatly reducing the power requirements for AI by rethinking computing using first principles thinking.

I feel some affinity for this statement! Although what I've said in the past was more along the lines of "rethinking our approach to (artificial) neural networks from first principles," not necessarily the foundations of computing itself. That said, I wouldn't reject your position out of hand at all!

It certainly feels like we've reached a point where there may be an opportunity to stop, take stock, look back, revisit some things, and maybe do a bit of a reset in some areas.