Comment by ares623
Amazing. This is the Internet moment of AI.
The Internet took something that used to be slow, cumbersome, expensive and made it fast, efficient, cheap.
Now we are doing it again.
Have you tried the thought experiment though?
I agree this way seems "wrong", but try putting on your engineering hat and ask what would you change to make it right?
I think that is a very interesting thread to tug on.
Not grandfather, but this is "wrong" because it's like asking a junior coder to store/read some values in the database manually (each time writing an SQL query) and then writing HTML to output those values. Each time the junior coder has to do some thinking and looking up. And the AI is doing a similar thing (using the word "thinking" loosely here).
If the coder is smart, she'll write down the query and note where the values go. She'll have a checklist: load the database UI, paste the query, hit run, copy/paste the output into her HTML. She'll use a standard HTML template. Later she could glue these steps together with some code, so that a program takes those values, puts them into the SQL query, and then puts the results into the HTML and sends that HTML to the browser... Oh look, she's made a program, a tool! And if she gets an assignment to read/write some values, she can do it in 1 minute instead of 5. Wow, custom-made programs save time, who could've guessed?
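To make that concrete, here's a minimal sketch of the "tool" she ends up with (the query, table, and template are invented for the example):

```python
import sqlite3
from html import escape

# Written once, reused forever: the query and template she used to
# retype by hand. Table and column names are made up for illustration.
QUERY = "SELECT name, score FROM results WHERE run_id = ?"
ROW_TEMPLATE = "<tr><td>{}</td><td>{}</td></tr>"

def report(db_path: str, run_id: int) -> str:
    """Run the stored query and render the rows with the stored template."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(QUERY, (run_id,)).fetchall()
    body = "\n".join(
        ROW_TEMPLATE.format(escape(str(name)), escape(str(score)))
        for name, score in rows
    )
    return "<table>\n" + body + "\n</table>"
```

One minute instead of five, every time after the first.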
Thank you for your response!
I agree that spending time on inference or compute every time for the same LLM task is wasteful and the results would be less desirable.
But I don't think the thought experiment should end there. We can continue to engineer and problem-solve around the shortcomings of the approach, IMHO.
You provided a good example of an optimization: tool creation.
Trying to keep my mind maximally open: one could imagine a kind of "design time" happening at runtime, where the user interacting with the system describes what they want the first time, and the system assembles the tool (much like we do now with AI-assisted coding, but perhaps without even seeing the code).
Once that piece of the system is working, it is persisted as code so no more inference is required: a tool that saves time. I am thinking of this as essentially memoizing a function body, i.e. generating and persisting the code.
There could even be some process overseeing the generated code/tool to make sure the quality meets some standard, providing automated iteration, testing, etc. if needed.
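A minimal sketch of that memoization, assuming a hypothetical llm_generate_code() standing in for the model call and a trivial compile check standing in for real acceptance tests:

```python
import hashlib
import importlib.util
import os

CACHE_DIR = "generated_tools"

def llm_generate_code(spec: str) -> str:
    """Placeholder for whatever model call you use; stubbed for the sketch."""
    raise NotImplementedError

def run_tests(source: str) -> bool:
    """Placeholder oversight: a compile check stands in for real tests."""
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

def get_tool(spec: str):
    """Return a callable for `spec`, generating and persisting it on first use."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = hashlib.sha256(spec.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, name + ".py")
    if not os.path.exists(path):
        source = llm_generate_code(spec)   # inference happens only once
        if not run_tests(source):          # hook for automated iteration/testing
            raise RuntimeError("generated tool failed its checks")
        with open(path, "w") as f:
            f.write(source)                # memoized: no inference next time
    mod_spec = importlib.util.spec_from_file_location("tool_" + name, path)
    module = importlib.util.module_from_spec(mod_spec)
    mod_spec.loader.exec_module(module)
    return module.main                     # convention: the tool exposes main()
```

After the first satisfactory run, every later call for the same spec is plain code execution.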
A big problem is if the LLM never converges to the "right" solution on its own (e.g. the right tool to generate the HTML from the SQL query, without any hallucination). But I am willing to momentarily punt on that problem as having more to do with determinism and the quality of the result. The issue isn't the non-deterministic output of an LLM per se; it's whether the quality of the result is fit for purpose for the use case.
I think it's difficult but possible to go further with the thought experiment. A system that "builds itself" at runtime, but persists what it builds, based on user interaction and prompting when the result is satisfactory...
I remember one of the first computer science things I learned: the program that could print out its own source code. Even then we believed that systems could build and grow themselves.
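For reference, the classic Python version of that exercise fits in two lines:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```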
So my ask would be to look beyond the initial challenge of the first time costs of generating the tool/code and solve that by persisting a suitable result.
What challenge or problem comes next in this idea?
I totally agree. The reason I asked before offering any solution ideas was that I was curious what you might think.
My brain went to the concept of memoization that we use to speed up function calls for common cases.
If you had a proxy that sat in front of the LLM and cached deterministic responses for given inputs, with some way to maybe even give feedback when a response is satisfactory... this could be a building block for a runtime design mode or something like that.
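Roughly like this; a sketch only, where `backend` is whatever callable wraps your actual model call, and the class/method names are invented:

```python
import hashlib

class CachingLLMProxy:
    """Memoizing front for an LLM: identical prompts hit the cache,
    and user feedback can pin a response as the canonical answer."""

    def __init__(self, backend):
        self.backend = backend   # any callable: prompt -> response
        self.cache = {}          # prompt hash -> response
        self.pinned = set()      # hashes the user marked satisfactory

    def _key(self, prompt: str) -> str:
        return hashlib.sha256(prompt.encode()).hexdigest()

    def ask(self, prompt: str) -> str:
        key = self._key(prompt)
        if key not in self.cache:
            self.cache[key] = self.backend(prompt)  # inference only on a miss
        return self.cache[key]

    def mark_satisfactory(self, prompt: str) -> None:
        """Feedback hook: pin this response so it is never regenerated."""
        self.pinned.add(self._key(prompt))

    def regenerate(self, prompt: str) -> str:
        """Not satisfied? Evict (unless pinned) and ask again."""
        key = self._key(prompt)
        if key not in self.pinned:
            self.cache.pop(key, None)
        return self.ask(prompt)
```

The deterministic, satisfactory cases become free lookups; inference is only paid for the cases the system hasn't "designed" yet.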
Then AI came and made the Internet slow, cumbersome, and expensive again.
> Amazing. This is the Internet moment of AI.
I am a big proponent of AI. To me, this experiment mostly shows how not to use AI.