onion2k 3 days ago

I'm a huge fan of LLM-based tools, and I use them pretty much daily, but stuff like this concerns me a bit. In any dev process there needs to be a review step somewhere. Someone who understands code well needs to be looking at what the app is doing and making sure it's protecting my data. Someone needs to make sure there isn't a bug that loses the work I put into creating records with a CRUD operation (a sketch of that kind of bug follows this comment). They need to make sure my privacy is respected in a legally compliant way. They need to make sure things are reasonably secure. None of that is guaranteed when you have a dev team, but it is at least a possibility.

Telling Joe Random "describe your app in a prompt and press deploy!" guarantees that isn't happening. This sort of service is great for non-dev people who want to launch something, but it's a pretty big threat to my data.

I'm under no illusions here: these services are going to be huge, and no doubt someone will sell an app built with one to a service that puts data about me into it. I suspect that means one day an attacker is going to learn something I'd rather they didn't. That sucks.
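
A minimal sketch of the kind of silent data-loss CRUD bug described above, for the curious. This is a hypothetical in-memory store in TypeScript; none of the names come from any app in the thread, and the bug shown is just one plausible instance of the failure mode, not a claim about any particular tool.

    // Hypothetical in-memory CRUD store. "Record_" avoids clashing with
    // TypeScript's built-in Record utility type.
    type Record_ = { id: string; title: string; body: string; tags: string[] };

    const db = new Map<string, Record_>();

    function createRecord(rec: Record_): void {
      db.set(rec.id, rec);
    }

    // The buggy update a generated app might ship: it REPLACES the stored
    // record with whatever the client sent, so every field the client
    // omits is silently dropped. The cast hides the hole in the types.
    function updateRecordBuggy(id: string, patch: Partial<Record_>): void {
      db.set(id, { id, ...patch } as Record_);
    }

    // The fix a reviewer would insist on: merge the patch into the
    // existing record so omitted fields survive.
    function updateRecordSafe(id: string, patch: Partial<Record_>): void {
      const existing = db.get(id);
      if (!existing) throw new Error(`no record ${id}`);
      db.set(id, { ...existing, ...patch });
    }

    createRecord({ id: "1", title: "notes", body: "hours of work", tags: ["keep"] });
    updateRecordBuggy("1", { title: "notes (renamed)" });
    console.log(db.get("1")); // body and tags are gone: the user's work is lost

Both versions type-check and "work" in a demo; only a genuinely careful review step notices that the first one throws data away.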

space_fountain 3 days ago

I wonder how much of this is that LLMs are worse than human developers (they are much more error-prone right now) and how much of this is that we want someone to blame. When the elevator operator closes a door on someone's fingers, that's an honest mistake and/or we can fire them, but when the automated elevator bruises some 12-year-old's finger, that's a big problem that needs fixing.

  • curious_cat_163 3 days ago

    That's an interesting idea!

    I think the liability will just travel through a layer of indirection. So in your example, I would think that the company that made the elevator would still be liable for any harm their product causes -- if it can be established that the 12-year-old's finger got bruised because of a poor design on their part.

    • ofcrpls 3 days ago

      Disagree partly - once there is monetary alignment with said risk (let's say something like a surgeon's insurance policy), there will be quick alignment. All this indirection is due to a lack of actuarial involvement.

    • TeMPOraL 3 days ago

      In the general case, won't it eventually hit the liability diffusers, i.e. insurance? The kid gets paid from accident insurance, the building owner covers their costs from civil liability insurance, and the elevator designers or installers are shielded by professional liability insurance.

      • jfdjkfdhjds 3 days ago

        Y'all are nerd-sniping the example and missing the point of the person who offered it.

        With the elevator example, the poster was giving chatbots the same excuse for mistakes as a person.

        Imagine if elevators could just make mistakes and injure people because, well, a human would too - never mind that it's entirely trivial to design elevators with sensors in the right places once, and then they are accident-free. This is the ridiculous world AI apologists must rely on...

squigglydonut 2 days ago

I also feel this way, but I'm confused about why people would want this. Generating from a prompt still introduces a human step. Why would anyone want something so basic and bland? Then again, I think about the food industry: fast food sucks, but the predictability is really attractive to a lot of people. People like the familiar, too. It's interesting to watch it unfold.

jfdjkfdhjds 3 days ago

I take it you've never worked at a cheap/local/small software shop, usually associated with an advertising agency.

because I would rather have those fill-in-the-blank forced prompts - the kind that just add form fields and obviously broken business logic to a generic template the service curates - than what those shops produce.

  • dartos 3 days ago

    And the market for these tools suddenly appears before me.

    The software world is way bigger than most of us realize.