Comment by mikert89 11 hours ago

There's another thing happening which people haven't really heard much about: ChatGPT Pro is really good at making legal arguments. People who previously would never have filed something like a discrimination lawsuit can now use ChatGPT to understand how to respond to managers' emails, and to proactively send emails that point out discrimination in a non-threatening manner, in ways that create legal entrapment. I think people are drastically underestimating what's going to happen over the next 10 years, and how bad the discrimination is in a lot of workplaces.

JumpCrisscross 11 hours ago

> ChatGPT Pro is really good at making legal arguments

It’s good at initiating them. I’ve started to see folks using LLM output directly in legal complaints and it’s frankly a godsend to the other side since blatantly making shit up is usually enough to swing a regulator, judge or arbitrator to dismiss with prejudice.

OutOfHere 11 hours ago

That's all well and good, but anyone who does this will likely just be terminated ASAP without cause, possibly as part of a multi-person layoff that makes it appear innocuous.

  • mikert89 11 hours ago

    That’s not quite right. To win a discrimination case, you typically need to document a pattern of behavior over time—often a year. Most people can’t afford a lawyer to manage that. But if you’re a regular employee, you can use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently. With diligent, organized evidence, you absolutely can build a case; the hard part is proving it, and ChatGPT is great at helping you gather and frame the proof.

    • JumpCrisscross 11 hours ago

      > To win a discrimination case, you typically need to document a pattern of behavior over time—often a year

      Where did you hear this?

      > use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently

      This is terrible advice. It not only makes those messages inadmissible, it casts reasonable doubt on everything else you say.

      Using an LLM to take the emotion out of your breadcrumbs is fine. Having it draft generic stuff, or worse, potentially hallucinate, may actually flip liability onto you, particularly if you weren't authorised to disclose the contents of those messages to an outside LLM.

      • mikert89 11 hours ago

        With respect, it seems you haven’t kept up with how people actually use ChatGPT. In discrimination cases—especially disparate treatment—the key is comparing your performance, opportunities, and outcomes against peers: projects assigned, promotions, credit for work, meeting invites, inclusion, and so on. For engineers, that often means concrete signals like PR assignments, review comments, approval times, who gets merges fast, and who’s blocked.

        Most employees don’t know what data matters or how to collect it. ChatGPT Pro (GPT-5 Pro) can walk someone through exactly what to track and how to frame it: drafting precise, non-threatening documentation, escalating via well-written emails, and organizing evidence. I first saw this when a seed-stage startup I know lost a wage claim after an employee used ChatGPT to craft highly effective legal emails.

        This is the shift: people won’t hire a lawyer to explore “maybe” claims on a $100K tech job—but they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull. On its own, ChatGPT isn’t a lawyer. In the hands of a thoughtful user, though, it’s close to lawyer-level support for spotting issues, building a record, and pushing for a fair outcome. The legal system will feel that impact.