reissbaker 2 days ago

I think it's actually conceptually pretty different. LLMs today are usually constrained to:

1. Outputting text (or, sometimes, images).

2. No long-term storage, except for the occasional closed-source "memory" implementation that just pastes things into context without much user or LLM control.

This is a really neat glimpse of a future where LLMs have much richer output and storage. I don't think it's interesting because you can recreate existing apps without coding... but I think it's really interesting as a view of a future with richer, app-like responses from LLMs and richer interactions. E.g. rather than needing to phrase everything as a question, the LLM could generate links that you click to drill into more detail on a subject, and those links end up querying the LLM itself! Similarly it can manage ad-hoc databases for memory and storage, etc.
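
To make that concrete, here's a rough sketch of the shape I'm imagining. `call_llm` is a hypothetical stand-in for whatever model API you'd actually use, and Flask/sqlite3 are just placeholders for "some web layer" and "some storage":

```python
# Sketch only: an endpoint where the model's answer embeds links that route
# back into the same endpoint, plus an ad-hoc SQLite table for "memory".
import sqlite3
from urllib.parse import quote
from flask import Flask, request

app = Flask(__name__)
db = sqlite3.connect("memory.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS memory (topic TEXT, note TEXT)")

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in your actual LLM client here."""
    return f"<p>Stub answer about {prompt!r}.</p>"

@app.route("/ask")
def ask():
    topic = request.args.get("topic", "llm-generated ui")
    answer = call_llm(topic)
    # Ad-hoc memory: the app (or the model, via a tool call) persists notes.
    db.execute("INSERT INTO memory VALUES (?, ?)", (topic, answer))
    db.commit()
    # Instead of plain text, the response embeds links that route *back*
    # into this endpoint, so clicking one "drills into" a follow-up query.
    related = ["memory", "storage", "tool use"]
    links = "".join(f'<a href="/ask?topic={quote(t)}">{t}</a> ' for t in related)
    return f"{answer}<p>Drill deeper: {links}</p>"

if __name__ == "__main__":
    app.run(port=5000)
```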

pepoluan 11 hours ago

Or, maybe, just not use LLMs?

LLMs are just one kind of model used in AI; they're not a panacea.

For generating deterministic output, a combination of neural networks and genetic programming would probably work better, and probably be much more energy-efficient too.
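
As a toy illustration of the genetic-programming half of that (the names and the target function are made up for the example), you can evolve a small symbolic program until it exactly reproduces a target:

```python
# Toy genetic programming: evolve an arithmetic expression tree toward x**2 + x.
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
TERMINALS = ["x", 1, 2]

def random_tree(depth=3):
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Target function: x**2 + x. Lower total error is better.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree, depth=3):
    if random.random() < 0.2:
        return random_tree(depth)
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left, depth - 1), mutate(right, depth - 1))
    return tree

random.seed(0)
population = [random_tree() for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness)
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))  # once the error hits 0, the evolved program is exact
```

Unlike sampling from an LLM, the winning expression is a plain program: you can run it forever and get the same answer every time, for a tiny fraction of the energy.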