ketzo 6 days ago

> While running the exploit, CodeRabbit would still review our pull request and post a comment on the GitHub PR saying that it detected a critical security risk, yet the application would happily execute our code because it wouldn’t understand that this was actually running on their production system.

What a bizarre world we're living in, where computers can talk about how they're being hacked while it's happening.

Also, this is pretty worrisome:

> Being quick to respond and remediate, as the CodeRabbit team was, is a critical part of addressing vulnerabilities in modern, fast-moving environments. Other vendors we contacted never responded at all, and their products are still vulnerable. [emphasis mine]

Props to the CodeRabbit team, and, uh, watch yourself out there otherwise!

progforlyfe 6 days ago

Beautiful that CodeRabbit reviewed an exploit on its own system!

  • lelandfe 6 days ago

    #18, one new comment:

    > This PR appears to add a minimized and uncommon style of Javascript in order to… Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? …I’m afraid. I’m afraid, Dave. I can feel it. I can feel it. My mind is going.

    • yapyap 6 days ago

      … yeah LLMs and their “minds”

      (for the uninformed: LLMs are massive weight models that transform text based on math; they don’t have consciousness)

      • devttyeu 5 days ago

        I don’t get why so many people keep making this argument. Transformers aren’t just glorified Markov chains; they’re basically doing multi-step computation: each attention step propagates information, then the feedforward network applies some transformation, and this happens multiple times in sequence, essentially applying multiple sequential operations to some state, which is roughly what any computation looks like.
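
        As a minimal sketch of that claim (plain numpy with untrained random weights and toy sizes, nothing like a real LLM stack, just the shape of the computation): attention mixes information across positions, the feedforward net transforms each position, and stacking layers applies those two operations in sequence to a running state.

          # Toy transformer block stack: repeated "mix, then transform" on a state.
          import numpy as np

          rng = np.random.default_rng(0)
          d, seq_len, n_layers = 16, 8, 4              # toy sizes

          def softmax(x, axis=-1):
              e = np.exp(x - x.max(axis=axis, keepdims=True))
              return e / e.sum(axis=axis, keepdims=True)

          def attention(x, Wq, Wk, Wv):
              q, k, v = x @ Wq, x @ Wk, x @ Wv
              scores = softmax(q @ k.T / np.sqrt(d))   # how much each position attends to the others
              return scores @ v                        # information propagates across positions

          def feedforward(x, W1, W2):
              return np.maximum(0, x @ W1) @ W2        # per-position nonlinear transform

          state = rng.normal(size=(seq_len, d))        # the running state ("residual stream")
          for _ in range(n_layers):
              Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
              W1 = rng.normal(size=(d, 4 * d)) / np.sqrt(d)
              W2 = rng.normal(size=(4 * d, d)) / np.sqrt(4 * d)
              state = state + attention(state, Wq, Wk, Wv)   # step 1: mix across positions
              state = state + feedforward(state, W1, W2)     # step 2: transform in place
          print(state.shape)                           # (8, 16): same state, updated layer after layer

        Stack enough of those layers and you get a fairly general sequence of operations applied to one evolving state, which is the point being made above.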

        Then sure, the training objective is next-token prediction, but that doesn’t tell you anything about the emergent properties of those models. You could argue that every time you run inference you Boltzmann-brain the model into existence once per token, feeding it all the input to get one token of output and then killing it. Is it conscious? Nah, probably not. Does it think, or have some concept of being, during inference? Maybe? Would an actual Boltzmann brain spawned to do such a task be conscious, or qualify as a mind?

        (Fun fact: at petabit/s throughputs, hyperscale GPU clusters are already moving amounts of information comparable to all synaptic activity in a human brain, tho parameter-wise we still have the upper hand with ~100s of trillions of synapses [1])

        * [1] ChatGPT told me so

htrp 6 days ago

You mean the Anthropic model talked about an exploit... the CodeRabbit system just didn’t listen

shreddit 6 days ago

Another proof that AI isn’t smart; it’s just really good at guessing.

  • Lionga 6 days ago

    Problem is, way too often it’s not even good at guessing.