mkeeter 6 days ago

The LLM tics are strong in this writeup:

"No manual overrides, no exceptions."

"Our VDP isn't just a bug bounty—it's a security partnership"

  • oasisbob 6 days ago

    Wow, you hit a nerve with that one. There have been some quick edits on the page.

    Another:

    > Security isn't just a checkbox for us; it's fundamental to our mission.

    • observationist 6 days ago

      They delved deep and spent a whole 2 minutes with ChatGPT 4o getting those explanations and apologies in play.

      • aardvarkr 6 days ago

        That’s the part that makes me laugh. If you’re going to try to pass off ChatGPT as your own work, at least pay for the good model.

    • jjani 5 days ago

      Hey CodeRabbit employees

      > The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.

      This is still ultra-LLM-speak (and no, not just because of the em-dash).

    • rob74 5 days ago

      A few years ago such phrases would have been candidates for a game of bullshit bingo; now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...

  • teaearlgraycold 6 days ago

    Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.

  • coldpie 5 days ago

    The NFT smell completely permeates the AI "industry." Can't wait for this bubble to pop.

acaloiar 6 days ago

For anyone following along in the comments here: CodeRabbit's CEO posted some of the details today, after this post hit HN.

The usual "we take full responsibility" platitudes.

  • noisy_boy 6 days ago

    I would like to see a diff of the consequences of taking full vs half-hearted responsibility.

  • therealpygon 6 days ago

    I’m sure an “intern” did it.

    • noisy_boy 6 days ago

      I wonder how many of these intern-type tasks LLMs have taken away. The type of tasks I did as a newbie might have seemed not so relevant to the main responsibilities, but they helped me gain institutional knowledge, get a feel for "how things work", and learn who to talk to and how to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.

      • therealpygon 6 days ago

        I think there is an infinite capacity for LLMs to be both beneficial and harmful. I look back at learning and think, man, how amazing would it have been if I could have had a personalized tutor guiding me and teaching me the concepts I was having trouble with in school. I think about when I was learning to program and didn’t have the words to describe the question I was trying to ask, and felt stupid or like an inconvenience when trying to ask more experienced devs.

        Then on the flip side, I’m not just worried about an intern using an LLM. I’m worried about an unmonitored LLM performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost cutting.

  • paulddraper 6 days ago

    I would love to know the acceptable version.

    • jjani 5 days ago

      Something not copy-pasted from an LLM would be more acceptable.

cube00 6 days ago

They seem to have left out a point in their "Our immediate response" section:

- Within 8 months: published the details, after the researchers published them first.

Jap2-0 6 days ago

Hmm, is it normal practice to rotate secrets before fixing the vulnerability?

  • neandrake 6 days ago

    They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited to deploy the fix, that would have meant letting compromised keys remain valid for 9 more hours. According to their response, all other tools were already sandboxed.

    However, their response doesn't address putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me.
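
    Roughly what I mean - a made-up Python sketch, not CodeRabbit's actual code, just to show the difference between letting a tool inherit the orchestrator's environment and scrubbing it first:

      import os
      import subprocess

      def run_linter_unsandboxed(repo_dir: str) -> None:
          # Risky pattern: subprocess.run inherits the parent's environment by
          # default, so any code the linter is tricked into executing (e.g. via
          # a malicious config file) can read API keys straight out of the
          # environment it inherited.
          subprocess.run(["rubocop", "--format", "json"], cwd=repo_dir)

      def run_linter_scrubbed(repo_dir: str) -> None:
          # Safer pattern: pass an explicit, minimal environment so secrets held
          # by the orchestrating process never reach the tool. Real isolation
          # would still need a proper sandbox (container, no network, etc.).
          minimal_env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin"), "HOME": "/tmp"}
          subprocess.run(["rubocop", "--format", "json"], cwd=repo_dir, env=minimal_env)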

    • KingOfCoders 6 days ago

      "According to their response all other tools were already sandboxed."

      Everything else was fine; just this one tool, chosen by the security researcher out of a dozen tools, was not sandboxed.

      • darkwater 6 days ago

        Yeah, I thought the same. They were really unlucky: the only analyzer that let you include and run code was the one outside the sandbox. What were the chances?

    • shlomo_z 6 days ago

      > putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me

      Isn't that standard? The other options I've seen are .env files (amazing dev experience but not as secure), and AWS Secrets Manager and similar competitors like Infisical. Even with the latter, you need keys to authenticate with the secrets manager, and I believe it's recommended to store those as env vars.
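
      For illustration, a rough (untested) sketch of fetching a secret at startup with boto3 instead of keeping the value itself in an env var - the secret name and region here are made up:

        import boto3

        def load_api_key() -> str:
            # Fetch the secret at runtime from AWS Secrets Manager; the process
            # only needs IAM credentials (e.g. from an instance or task role),
            # so the secret value itself never has to sit in the environment.
            client = boto3.client("secretsmanager", region_name="us-east-1")
            resp = client.get_secret_value(SecretId="prod/example-api-key")
            return resp["SecretString"]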

      Edit: Formatting

    • Jap2-0 5 days ago

      Duh. Thanks for pointing that out.