andsoitis 2 days ago

Why did NYC release it in the first place? Did they not QA it?

Or was it perhaps one of those cases where they found issues, but the only way to really know for sure whether the deleterious impact is significant enough is by pushing it to prod?

  • drillsteps5 2 days ago

    >Why did NYC release it in the first place? Did they not QA it?

    How do you QA a black-box, non-deterministic system? I'm not being facetious, seriously asking.

    EDIT: Formatting

    • pegasus 2 days ago

      The same way you test any system: you find a sampling of test subjects, have them interact with the system, and then evaluate those interactions. No system is guaranteed to never fail; it's all about the degree of effectiveness and resilience.
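
      A sampling-based evaluation like this can be sketched in a few lines of Python (the bot and the per-question checkers here are hypothetical stand-ins for a real chatbot and real graders):

```python
def evaluate_bot(bot, test_cases):
    """Run a sample of questions through the bot, grade each answer with
    a checker function, and report the failure rate plus the failing
    transcripts.

    test_cases: list of (question, check) pairs, where check(answer) -> bool.
    """
    failures = []
    for question, check in test_cases:
        answer = bot(question)
        if not check(answer):
            failures.append((question, answer))
    return len(failures) / len(test_cases), failures
```

      The point is to measure a failure rate over many interactions, rather than declaring victory after one good answer.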

      The thing is (and maybe this is what parent meant by non-determinism, in which case I agree it's a problem), in this brave new technological use-case, the space of possible interactions dwarfs anything machines have dealt with before. And it seems inevitable that the space of possible misunderstandings which can arise during these interactions will balloon similarly. Simply because of the radically different nature of our AI interlocutor, compared to what (actually, who) we're used to interacting with in this world of representation and human life situations.

      • drillsteps5 2 days ago

        Does knowing the system architecture not help you with defining things like happy path vs edge case testing? I guess it's much less applicable for overall system testing, but in "normal" systems you test components separately before you test the whole thing, which is not the case with LLMs.

        By "non-deterministic" I meant that it can give you different output for the same input. Ask the same question, get a different answer every time; some of the answers can be accurate, some... not so much. Especially if you ask the same question in the same dialog (the question is the same but the context is not, so the answer will be different).

        EDIT: More interestingly, I find an issue, what do I even DO? If it's not related to integrations or your underlying data, the black box just gave nonsensical output. What would I do to resolve it?

        • bhadass a day ago

          >EDIT: More interestingly, I find an issue, what do I even DO? If it's not related to integrations or your underlying data, the black box just gave nonsensical output. What would I do to resolve it?

          Lots of stuff you could do: adjust the system prompt, add guardrails/filters (catching mistakes and then asking the LLM again in a loop), improve the RAG pipeline (assuming they have one), fine-tune (if necessary), etc.
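
          A minimal sketch of the guardrail-and-retry idea (everything here, including the validator, is hypothetical; `call_llm` stands in for whatever model API is in use):

```python
def answer_with_guardrail(question, call_llm, validate, max_retries=3):
    """Ask the model, check the answer against a validator, and retry
    with feedback if the check fails. Falls back to a safe refusal
    after max_retries failed attempts."""
    prompt = question
    for _ in range(max_retries):
        answer = call_llm(prompt)
        ok, reason = validate(answer)
        if ok:
            return answer
        # Feed the failure reason back into the next attempt.
        prompt = f"{question}\nYour previous answer was rejected: {reason}. Try again."
    return "I can't answer that reliably; please contact 311."
```

          The validator is the hard part in practice: for a city chatbot it might check citations against official pages, or run a second model as a judge.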

      • datsci_est_2015 a day ago

        > The same way you test any system - you find a sampling of test subjects, have them interact with the system and then evaluate those interactions.

        That’s not strictly how I test my systems. I can release with confidence because of a litany of SWE best practices learned and borrowed from decades of my own and other people’s experiences.

        > No system is guaranteed to never fail, it's all about degree of effectiveness and resilience.

        It seems like the product space for services built on generative AI is diminishing by the day with respect to “effectiveness and resilience”. I was just laughing with some friends about how terrible most of the results are when using Apple’s new Genmoji feature. Apple, the company with one of the largest market caps in the world.

        I can definitely use LLMs and other generative AI directly, and understand the caveats, and even get great results from them. But so far every service I’ve interacted with that was a “white label” repackaging of generative AI has been absolute dogwater.

      • [removed] 2 days ago
        [deleted]
      • themafia 2 days ago

        > radically different nature of our AI interlocutor

        It's the training data that matters. Your "AI interlocutor" is nothing more than a lossy compression algorithm.

    • [removed] 2 days ago
      [deleted]
  • thedanbob 2 days ago

    > Why did NYC release it in the first place? Did they not QA it?

    Considering Louis Rossmann's videos on his adventures with NYC bureaucracy (e.g. [0]), the QAers might not have known the laws any better than the chat bot.

    [0] https://www.youtube.com/watch?v=yi8_9WGk3Ok

    • direwolf20 2 days ago

      Considering the previous mayor's relationship with the law, it could be on purpose.

  • cheald 2 days ago

    Remember that many people are heavily happy-path biased. They see a good result once and say "that's it, ship it!"

    I'm sure they QA'd it, but QA was probably "does this give me good results" (almost certainly 'yes' with an LLM), not "does this consistently not give me bad results".

    • themafia 2 days ago

      > almost certainly 'yes' with an LLM

      LLMs can handle search because search is intentionally garbage now and because they can absorb that into their training set.

      Asking highly specific questions about NYC governance, which can change daily, is almost certainly 'not' going to give you good results with an LLM. The technology is not well suited to this particular problem.

      Meanwhile, if an LLM actually did give you good results, it's an indication that the city is so bad at publishing information that citizens cannot rightfully discover it on their own. That is a fundamental problem and should be solved instead of layering a $600k barely-working "chat bot" on top of the mess.

      • [removed] a day ago
        [deleted]
      • Imustaskforhelp 2 days ago

        I use DuckDuckGo, so I don't really see garbage search results myself, but I'm not sure about people who use Google.

        You say LLMs can't handle search, but one thing I can't understand, and I hope you can help with, is why it has to be this way.

        Kagi exists (I like the product idea even though I haven't bought it, only tried it). Kagi's assistants can use the Kagi search engine itself, which is really customizable: you can filter a lot of search settings, and Kagi is considered by many people to give good results.

        Not to be a sponsor of Kagi or anything, but if the garbage-in-garbage-out problem with search is really so big that NYC literally had to kill a bot over it, I wonder if Kagi could have helped. I think they're a B Corp, so they would have appreciated the support if NYC had used them as a search layer.

  • pibaker 2 days ago

    The chatbot was released under the Eric Adams administration. The same Eric Adams, as soon as his term finished, went to Dubai and launched a cryptocurrency.

    https://apnews.com/article/eric-adams-crypto-meme-coin-942ba...

    I think he is simply not very bright, and got mesmerized by all the shiny promises AI and crypto makes without the slightest understanding of how it actually works. I do not understand how he got into office in the first place.

  • elgenie 2 days ago

    QA efforts can whack-a-mole some issues, but the mismatch of problem and solution is inherent in any situation in which a generator of plausible-sounding text gets pointed at an area where correctness matters.

  • rsynnott a day ago

    It’s an LLM. The dirty little secret of LLMs is that they cannot be used for anything important, unless the output is checked by an expert (which typically rather defeats the purpose).

    There’s no amount of QA that could save this.

  • erxam 2 days ago

    > Why did NYC release it in the first place?

    Perhaps a big fat check was involved.

  • fragmede 2 days ago

    Why do you think OpenAI let a red team loose on GPT-5 for six months before releasing it to the public?

    • bluGill 2 days ago

      For the image. There is no way a red team can find all the issues in six months. They can find some of the biggest, but even getting those fixed in six months seems unlikely.

  • JohnTHaller 20 hours ago

    It was implemented by our scammy, grifting, Republican in a Democratic lawmaker suit former mayor Eric Adams who should probably be in prison but who made a deal with Trump to not be prosecuted.

Neywiny 2 days ago

I always ask this question about these bots: is the literature itself the training data, or is the understanding of literature the training data? Meaning, sure, you trained the bot on the current rules and regulations, but does that mean the model weights contain them, such that every answer is really a guess at legal accuracy? Or is it trained to act like a lawyer and reason over the docs, which sit outside the model? Every time I've asked, the answer is the former, and to me that's the wrong approach. But I'm not an AI scientist, so I don't know how hard my theoretically perfect solution is to build.

What I do know is that if it were done my way, it would be pretty easy for it to do what the Google AI does: say it's not responsible, and give links for humans to fact-check it. I've noticed a dramatic drop in hallucinations after it had to provide links to its sources. Still not zero, though.
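
For concreteness, the docs-outside-the-model approach could be sketched like this (the word-overlap scoring is a naive stand-in for a real search index, and `call_llm` for any model API):

```python
def answer_with_sources(question, documents, call_llm, top_k=1):
    """Pick the documents most relevant to the question (by crude word
    overlap), answer only from that context, and return the source URLs
    alongside the answer so a human can fact-check the claim."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    sources = scored[:top_k]
    context = "\n".join(d["text"] for d in sources)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt), [d["url"] for d in sources]
```

Because the rules live in the documents rather than the weights, updating the law means updating the corpus, not retraining the model, and every answer arrives with links a human can check.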

  • acdha a day ago

    > I've noticed a dramatic drop in hallucinations after it had to provide links to its sources. Still not 0, though.

    I’ve noticed that Google does a fair job of linking to relevant sources, but it’s still fairly common for it to confabulate something the source doesn’t say or even directly contradicts. It seems to hit the underlying inability to reason: if the source covers more than one thing, it’s prone to taking an input “X does A while Y does B” and emitting “Y does A” or “X does A and B”. It’s a fascinating failure mode which so far seems insurmountable.

  • sdwr 2 days ago

    > pretty easy to do what the Google AI does

    I thought Gemini just started providing citations in the last few months. Are you saying they should have beaten Google to the punch on this? As part of the $500,000 budget?

    • Neywiny 2 days ago

      Correct. Much in the same way that videos were online before YouTube, social networks existed before Facebook, and messaging existed before WhatsApp and co, they should have understood their problem set better instead of just following the leaders. Gemini is not this chatbot on steroids; it's a different product entirely that happens to now employ the same technique.

      Also, search says they did links in 2024 for the Google AI. So there's that.

sylens 2 days ago

> The bot, built using Microsoft’s cloud computing platform

When is the last time there was positive news involving Microsoft? This bot could've easily been on AWS or GCP, but I find it hilarious that here they are, getting dragged yet again.

toomuchtodo 2 days ago

> A spokesperson for the mayor, Dora Pekec, confirmed in a text message that the new administration plans to take down the chatbot. She said a member of the Mamdani transition team had seen reporting on the bot from The Markup and THE CITY and presented it to the mayor as a possible place to save funds.

Journalism works.

  • andrewflnr 2 days ago

    Journalism teed up an easy way for an incoming politician to dunk on his predecessor, if you'll forgive the mixed metaphor. Not that I'm opposed to any part of it, just that this was an easy scenario for "journalism" to "work" in.

  • atq2119 2 days ago

    It does. And it works best if you elect politicians who are willing to listen.

hashberry 2 days ago

> The Office of Technology and Innovation spent nearly $600,000 to build out the foundations of the MyCity chatbot, which will be used for future chatbot offerings on MyCity. [0]

This was experimental tech... while I admire cities attempting to implement AI, it seems they did not spend enough tax dollars on it!

[0] https://abc7ny.com/post/ai-artificial-intelligence-eric-adam...

terespuwash 2 days ago

What else to expect from Eric Adams.

  • greekrich92 a day ago

    This is the only comment worth making. Virtually everything he did should be heavily audited and/or undone.

cmiles8 2 days ago

We’ll likely see a lot of these AI pet projects get axed in the coming year or two… especially things rushed out in the early phases of the AI bubble when folks were desperate to appear to be using AI.

  • chasd00 2 days ago

    Yeah, I hope the problems stay confined to somewhat humorous themes, like convincing a car sales bot to sell you a car for $1, and not more serious issues, like convincing a bot to metaphorically launch the ICBMs.

    • toomuchtodo 2 days ago

      "The WOPR did a better job avoiding thermonuclear war than most humans would" is my hot take.

      • jjk166 2 days ago

        Thinks through possibilities -> realizes what it is proposing is a bad idea.

        Hell, put WOPR in charge of everything.

kittikitti 2 days ago

Being in and around the NYC area, while also knowing plenty of small businesses, I'm glad Mamdani killed this bot. Telling bosses to steal tips from their employees is run-of-the-mill corruption and common over here. The vibe for businesses is that everyone has to be exploiting someone else or have a schtick. If you were to talk about morals, you would be ridiculed. Most lawyers wouldn't even prosecute small businesses for this. It's probably why the agent was put into production, the level of business ethics in NYC is cartoonishly evil.

  • patrickmay 2 days ago

    In the case of stealing tips, that's wage theft and the New York State Department of Labor has zero sense of humor about that. They will definitely investigate all claims on that topic. It might be too little and too late for the individual affected, but the business will pay.

1970-01-01 2 days ago

He is turning out to be a benevolent, law-abiding mayor that just happens to be communist.

  • direwolf20 2 days ago

    What's that supposed to mean?

    • 1970-01-01 2 days ago

      The previous mayors were none of these things

      • georgemcbay 2 days ago

        Mamdani is a socialist, not a communist.

        And Fiorello La Guardia was (in terms of beliefs and enacted policy) even more socialist than Mamdani is even though he was technically a Republican when elected.

  • hydrogen7800 2 days ago

    To some, anything sufficiently resembling functioning government is indistinguishable from communism.

monero-xmr 2 days ago

[flagged]

  • geoffeg 2 days ago

    To ride NYC's free busses, you must have a two minute conversation with a chat bot. (/s)