Comment by pjmlp 13 hours ago

28 replies

I think people are still fooling themselves about the relevance of 3GL languages in an AI dominated future.

It is similar to how Assembly developers thought about their relevance until optimising compiler backends turned that into a niche activity.

It is a matter of time, maybe a decade who knows, until we can produce executables directly from AI systems.

Most likely we will still need some kind of formalisation tools to tame natural language uncertainties, however most certainly they won't be Python/Rust like.

We are moving into another abstraction layer, closer to the 4GL, CASE tooling dreams.

dragonwriter 5 hours ago

> I think people are still fooling themselves about the relevance of 3GL languages in an AI dominated future.

I think, as happens in the AI summer before each AI winter, people are fooling themselves about both the shape and proximity of the “AI dominated future”.

  • brookst 5 hours ago

    It will be approximately the same shape and proximity as “the Internet-dominated future” was in 2005.

albertzeyer 12 hours ago

4GL and 5GL are already taken. So this is the 6GL.

https://en.wikipedia.org/wiki/Programming_language_generatio...

But speaking more seriously, how do you make this deterministic?

  • pjmlp 12 hours ago

    Fair enough, I should have taken a look. I stopped counting when the computer-magazine buzz about 4GLs faded away.

    Probably some kind of formal-methods-inspired approach, declarative maybe, and less imperative coding.

    We should take an Alan Kay and Bret Victor style point of view: ask where AI-based programming is going to be a decade from now, not where it is today.

mpweiher 4 hours ago

"Since FORTRAN should virtually eliminate coding and debugging…" -- FORTRAN report, 1954 [1]

If, as you seem to imply and as others have stated, we should no longer even look at the "generated" code, then the LLM prompts are the programs / programming language.

I can't think of a worse programming language, and I am not the only one. [2]

However, it does indicate that our current programming languages are way too low-level, too verbose. Maybe we should fix that?

[1] http://www.softwarepreservation.org/projects/FORTRAN/BackusE...

[2] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

[3] https://objective.st/

Wowfunhappy 12 hours ago

Assemblers and compilers are (practically) deterministic. LLMs are not.

  • chowells 4 hours ago

    That's the wrong distinction, and bringing it up causes pointless arguments like the ones in the replies.

    The right distinction is that assemblers and compilers have semantics and an idea of correctness. If your input doesn't lead to a correct program, you can find the problem. You can examine the input and determine whether it is correct. If the input is wrong, it's theoretically possible to find the problem and fix it without ever running the assembler/compiler.

    Can you examine a prompt for an LLM and determine whether it's right or wrong without running it through the model? The idea is ludicrous. Prompts cannot be source code. LLMs are fundamentally different from programs that convert source code into machine code.

    This is something like "deterministic" in the colloquial sense, but not at all in the technical sense. And that's where these arguments come from. I think it's better to sidestep them and focus on the important part: compilers and assemblers are intended to be predictable in terms of semantics of code. And when they aren't, it's a compiler bug that needs to be fixed, not an input that you should try rephrasing. LLMs are not intended to be predictable at all.

    So focus on predictability, not determinism. It might forestall some of these arguments that get lost in the weeds and miss the point entirely.

  • traverseda 10 hours ago

    LLMs are deterministic. So far every vendor is feeding them random noise in addition to your prompt, though. They don't, like, have free will or a soul or anything; feed them exactly the same tokens and exactly the same tokens will come out.

    • mmoskal 2 hours ago

      If you change one letter in the prompt, however insignificant you may think it is, it will change the results in unpredictable ways, even with temperature 0 etc. The same is not true of renaming a variable in a programming language, most refactorings etc.

    • jnwatson 7 hours ago

      Only if you set temperature to 0 or have some way to set the random seed.

      • vlovich123 7 hours ago

        Locally that’s possible, but for multi-tenant ones I think there are other challenges related to batch processing (not in terms of the random seed necessarily, but because of other non-determinism sources).

  • pjmlp 12 hours ago

    Missed this part?

    > Most likely we will still need some kind of formalisation tools to tame natural language uncertainties, however most certainly they won't be Python/Rust like

    • Wowfunhappy 12 hours ago

      No, I didn't miss it. I think the fact that LLMs are non-deterministic means we'll need a lot more than "some kind of formalization tools"; we'll need real programming languages for some applications!
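The determinism points in this sub-thread can be sketched concretely in Python. The "logits" below are invented toy scores, not a real model: temperature-0 decoding is a pure argmax, sampling is reproducible only with a fixed seed, renaming a variable (unlike changing one letter of a prompt) is a no-op, and float summation order, which batching can reorder, changes results.

```python
import random

# Invented toy next-token scores -- a stand-in, not a real model.
logits = {"cat": 2.0, "dog": 1.9, "fish": 0.5}

def greedy(scores):
    """Temperature 0: pure argmax, the same input always yields the same token."""
    return max(scores, key=scores.get)

def sample(scores, seed):
    """Sampling is reproducible only if you control the seed."""
    rng = random.Random(seed)
    tokens = list(scores)
    return rng.choices(tokens, weights=[scores[t] for t in tokens])[0]

assert greedy(logits) == greedy(logits)                    # deterministic
assert sample(logits, seed=42) == sample(logits, seed=42)  # reproducible

# Renaming a variable, unlike editing a prompt, changes nothing:
def area(width, height): return width * height
def area2(w, h): return w * h
assert area(3, 4) == area2(3, 4)

# Batching can change float summation order, and order changes results:
assert sum([1e16, 1.0, -1e16]) == 0.0  # the 1.0 is rounded away mid-sum
assert sum([1e16, -1e16, 1.0]) == 1.0  # same numbers, different order
```

Hosted APIs add the seed and batch-order issues on top of this, which is why identical prompts can still return different answers in practice.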

zenkey 13 hours ago

Yes, I agree this is likely the direction we're heading. I suppose the "Python 4" I mentioned would just be an intermediate step along the way.

  • sanderjd 9 hours ago

    I think the question is: What is the value of that intermediate step? It depends on how long the full path takes.

    If we're one year away from realizing a brave new world where everyone is going straight from natural language to machine code or something similar, then any work to make a "python 4" - or any other new programming languages / versions / features - is rearranging deck chairs on the Titanic. But if that's 50 years away, then it's the opposite.

    It's hard to know what to work on without being able to predict the future :)

sitkack 7 hours ago

> It is a matter of time, maybe a decade who knows, until we can produce executables directly from AI systems.

They already can.

krembo 8 hours ago

Wild thought: maybe coding is a thing of the past? Given that an LLM can produce fast, deterministic results when needed, maybe a backend, for instance, can be a set of functions that are all textual specifications. By following them it can perform actions (validations, calculations, etc.), call APIs, and connect to databases, then produce output. The LLM can then auto-refine the specifications to avoid bugs and roll out the changes in real time for the next calls. Like a brain that doesn't need predefined coding instructions to fulfill a task, but just understands its scope, knows how to approach it, and learns from the past.

  • TechDebtDevin 8 hours ago

    I really want to meet these people that are letting an LLM touch their db.

    • krembo 7 hours ago

      Fast-forward to the near future: why wouldn't it, with the correct restrictions? For instance, would you let it run SELECT queries today? As Hemingway once said, "if it's about price we know who you are".
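The backend krembo describes could be sketched as a registry of textual specs dispatched to a model. This is a hypothetical toy: every name here is invented, and `call_model` is a stub standing in for a real LLM API (a real model's answers would not be this predictable).

```python
# Hypothetical "spec-driven" backend: endpoints are textual specifications,
# not code. call_model is a stub standing in for a real LLM call.
SPECS = {
    "validate_order": "Accept the order only if 'quantity' is a positive integer.",
}

def call_model(spec: str, payload: dict) -> dict:
    # Stub: a real system would send spec + payload to a model and parse
    # its reply. Here one behaviour is hard-coded so the demo is runnable.
    qty = payload.get("quantity")
    ok = isinstance(qty, int) and qty > 0
    return {"accepted": ok, "spec_used": spec}

def handle(endpoint: str, payload: dict) -> dict:
    """Dispatch a request to the textual spec registered for the endpoint."""
    return call_model(SPECS[endpoint], payload)

print(handle("validate_order", {"quantity": 3}))   # accepted: True
print(handle("validate_order", {"quantity": -1}))  # accepted: False
```

The open question from the rest of the thread applies directly: with a real model behind `call_model`, nothing guarantees two identical requests get the same answer, which is exactly why people hesitate to put one in front of a database.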