Comment by sixdimensional 2 days ago

Have you tried the thought experiment though?

I agree this way seems "wrong", but try putting on your engineering hat and asking: what would you change to make it right?

I think that is a very interesting thread to tug on.

netsharc a day ago

Not the grandparent commenter, but this is "wrong" because it's like asking a junior coder to store and read some values in the database manually (writing an SQL query each time) and then hand-writing HTML to output those values. Each time, the junior coder has to do some thinking and looking things up. The AI is doing something similar (using the word "thinking" loosely here).

If the coder is smart, she'll write down the query and note where the values go, and she'll have a checklist: load the UI for the database, paste the query, hit run, copy/paste the output into her HTML. She'll use a standard HTML template. Later she could glue these steps together with some code, so that a program takes the values, puts them into the SQL query, then puts the results into the HTML and sends that HTML to the browser... Oh look, she's made a program, a tool! And if she gets an assignment to read/write some values, she can do it in 1 minute instead of 5. Wow, custom-made programs save time, who could've guessed?
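The glued-together version of those steps can be sketched as a small program. This is a hypothetical illustration (the function and table names are made up), using SQLite and a string template to stand in for "the database" and "the HTML":

```python
import sqlite3
from string import Template

# One program that replaces the manual checklist:
# store the values, query them back, paste them into HTML.
HTML_TEMPLATE = Template("<ul>\n$rows\n</ul>")

def render_values(values):
    # Step 1: store the values (the manual SQL-writing step).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (name TEXT)")
    conn.executemany("INSERT INTO items (name) VALUES (?)",
                     [(v,) for v in values])

    # Step 2: read them back (the manual query-running step).
    names = [row[0] for row in
             conn.execute("SELECT name FROM items ORDER BY name")]
    conn.close()

    # Step 3: drop them into the template (the manual copy/paste step).
    rows = "\n".join(f"  <li>{name}</li>" for name in names)
    return HTML_TEMPLATE.substitute(rows=rows)

print(render_values(["apples", "bananas"]))
```

Once it exists, every new "read/write some values" assignment is one function call instead of five manual steps.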

ares623 2 days ago

Running inference for every interaction seems a bit wasteful IMO, especially with the chance that things go wrong. I'm not smart enough to come up with a way to optimize such a repetitive operation, though.

  • sixdimensional 2 days ago

    I totally agree. The reason I asked before offering any solution ideas was I was curious what you might think.

    My brain went to the concept of memoization that we use to speed up function calls for common cases.

    If you had a proxy that sat in front of the LLM and cached deterministic responses for given inputs, with some way to even give feedback on whether a response is satisfactory... that could be a building block for a runtime design mode or something like that.