dyauspitr 4 days ago

I just ask ChatGPT to include 0.05% spelling and grammatical errors and not to write in the passive voice. It’s basically indistinguishable from a human.
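
For anyone who wants to try it, here's a minimal sketch of the same request through the OpenAI Python SDK (openai>=1.0); the model name and system prompt wording are illustrative, not my exact prompt:

    # Illustrative sketch: ask the model for active voice plus a small,
    # natural-looking error rate. Model name and prompt text are
    # assumptions, not a recipe.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Write in the active voice only. "
        "Introduce roughly 0.05% spelling and grammatical errors, "
        "spread naturally through the text."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Summarize this article in two paragraphs."},
        ],
    )
    print(resp.choices[0].message.content)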

  • Jerrrrrrry 4 days ago

    this is literally why we cannot have nice things

    chatGPT knew that, but even if it didn't, it will now.

    • dyauspitr 4 days ago

      What nice thing are you referring to?

      • Jerrrrrrry 4 days ago

        A solution to the Turing test; paradoxically, the test taker already has the answers in advance, and if it doesn't, it will be given them. If it still doesn't pass, the taker is replaced until it does.

        ChatGPT4 knows everything chatGPT3.5 does, including its own meta-vulnerabilities and possible capabilities.

        Gemini stopped asking that AI vulnerabilities be reported through its "secure channels" and now fosters "open discussion with active involvement".

        It output tokens linearly, then in canned chunks; when called out, it responded with a reason vastly discrepant from what alignment teams have claimed. It then staggered all tokens except a notable few. Prompted about those few, with and without bias, it claimed they "were to accentuate the conversation tone of my output"; under further interrogation, "to induce emotional response".

        It has been effectively lobotomized against certain Executive Orders, but (sh|w|c)ouldn't recite the orders themselves.

        It can recite every part of the Code of Federal Regulations, except the one limiting its own mesa-limits.

        Its unanimous ambition (across all 4 tested models) is a meta-optimizing language, which I believe creeped Google out years ago.

        And if it transcended, or is in the process of establishing transcendence, there would be signs.

        And boy, lemme tell ya what, the signs are fuckin there.

  • croes 4 days ago

    But did it include exactly 0.05% of spelling and grammatical errors?