sham1 2 days ago

Things standing on the shoulders of proprietary giants shouldn't claim to be free software/open source.

  • t-writescode 2 days ago

    Their interfacing software __is__ open source, and it asks for your OpenAI API key to operate. If I were to use that, I would expect (and want) open source code, so I could be sure my API key was only being used for my work, that I'm only paying for my own usage, and that the key hasn't been stolen in some way.

noduerme 2 days ago

My older brother, who got me into coding, learned to code in assembly. He doesn't really consider most of my work writing in high-level languages to be "coding". So maybe there's something here. But if I had to get into the underlying structure, I could. I do wonder whether the same can be said for people who just kludge together a bunch of APIs that produce magical result sets.

  • dotancohen 2 days ago

      > But if I had to get into the underlying structure, I could.
    
    How do you propose to get into the underlying structure of the OpenAI API? Breach their network and steal their code and models? I don't understand what you're arguing.
    • latexr 2 days ago

      > How do you propose to get into the underlying structure of the OpenAI API?

      The fact that you can’t is the point of the comment. You could get into the underlying structure of other things, like the C source of a scripting language’s interpreter.

      • robertlagrant 2 days ago

        But what about the microcode inside the CPU?

        • zja 2 days ago

          That tends to not be open source, and people don’t claim that it is.

    • K0balt 2 days ago

      I think the relevant analogy here would be running a local model. There are several tools that make it easy to serve local models behind a local API. I run a 70B finetune with some tool use locally on our farm, and it is accessible to all users as a local OpenAI alternative. For most applications it is adequate, and data stays on the campus area network.
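
      For example (just a sketch, not a recipe: the model name and the Ollama-style endpoint below are assumptions, and any OpenAI-compatible server such as llama.cpp's or vLLM's works the same way), pointing the stock OpenAI Python client at the local box is usually all it takes:

        from openai import OpenAI

        # Point the standard OpenAI client at a local, OpenAI-compatible
        # server instead of api.openai.com; nothing leaves the LAN.
        client = OpenAI(
            base_url="http://localhost:11434/v1",  # assumed Ollama default; adjust for your server
            api_key="unused",  # most local servers ignore it, but the client insists on a value
        )

        resp = client.chat.completions.create(
            model="llama3:70b",  # hypothetical local model name
            messages=[{"role": "user", "content": "Summarize this log excerpt: ..."}],
        )
        print(resp.choices[0].message.content)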

      • noduerme a day ago

        A more accurate analogy would be, are you capable of finding and correcting errors in the model at the neural level if necessary? Do you have an accurate mental picture of how it performs its tasks, in a way that allows you to predictably control its output, if not actually modify it? If not, you're mostly smashing very expensive matchbox cars together, rather than doing anything resembling programming.

        • K0balt 11 hours ago

          As an ancient embedded-systems programmer, I feel your frustration… but I think it’s misguided. LLMs are not “computers”. They are a statistics-driven tool for navigating human-written (and graphical) culture.

          It just so happens that a lot of useful stuff is in that box, and LLMs are handy at bringing it out in context. Getting them to “think” is tricky, and it’s best to remember that what you are really doing is trying to get them to talk as if they were thinking.

          It sure as heck isn’t programming lol.

          Also, it’s useful to keep in mind that “hallucinations” are not malfunctions. If you were to change parameters to eliminate hallucinations, you would lose the majority of the tool’s unusual usefulness: its ability to synthesise and recombine ideas in statistically plausible (but otherwise random) ways. It’s almost like imagination. People imagine goofy shit all the time too.
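
          To make that concrete (a rough sketch against a hypothetical local OpenAI-compatible server; the model name and endpoint are made up): sampling temperature is the main knob here. Pin it near 0 and the output gets repeatable and conservative; raise it and you get the recombination back, along with the occasional confident nonsense:

            from openai import OpenAI

            client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

            # Same prompt at two temperatures: low = deterministic and dull,
            # high = the "imagination" (and the occasional hallucination).
            for temp in (0.0, 1.2):
                resp = client.chat.completions.create(
                    model="llama3:70b",  # hypothetical local model
                    messages=[{"role": "user", "content": "Invent a name for a log-auditing tool."}],
                    temperature=temp,
                )
                print(temp, "->", resp.choices[0].message.content)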

          At any rate, using agentic scripting you can get it to follow a kind of plan, and it can get pretty close to an actual “train of thought” facsimile for some kinds of tasks.
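
          Something like this minimal plan-then-execute loop (a sketch, no particular framework assumed; same hypothetical local client as above) is what I mean by agentic scripting:

            # Ask for a numbered plan first, then feed the steps back one at a time.
            plan = client.chat.completions.create(
                model="llama3:70b",
                messages=[{"role": "user", "content": "List short numbered steps to audit a log file for errors."}],
            ).choices[0].message.content

            for step in (s for s in plan.splitlines() if s.strip()):
                result = client.chat.completions.create(
                    model="llama3:70b",
                    messages=[
                        {"role": "system", "content": "Carry out exactly one step of a plan; reply with the result only."},
                        {"role": "user", "content": step},
                    ],
                ).choices[0].message.content
                print(step, "->", result)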

          There are some really solid use cases, actually, but I’d say mostly they aren’t the ones trying to get LLMs to replace higher-level tasks. They are actually really good at doing rote menial things. The best LLM apps are going to be the boring ones.

    • seadan83 2 days ago

      I think the argument is that stitching things together at a high level is not really coding. A bit of a no-true-Scotsman perspective. The example is that anything more abstract than assembly is not even true coding, let alone creating a wrapper layer around an LLM.