Comment by JumpCrisscross 11 hours ago

> To win a discrimination case, you typically need to document a pattern of behavior over time—often a year

Where did you hear this?

> use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently

This is terrible advice. It not only makes those messages inadmissible, it casts reasonable doubt on everything else you say.

Using an LLM to take the emotion out of your breadcrumbs is fine. Having it draft generic stuff, or worse, potentially hallucinate, may actually flip liability onto you, particularly if you weren't authorised to disclose the contents of those messages to an outside LLM.

mikert89 11 hours ago

With respect, it seems you haven’t kept up with how people actually use ChatGPT. In discrimination cases—especially disparate treatment—the key is comparing your performance, opportunities, and outcomes against peers: projects assigned, promotions, credit for work, meeting invites, inclusion, and so on. For engineers, that often means concrete signals like PR assignments, review comments, approval times, who gets merges fast, and who’s blocked.
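For the "approval times" signal in particular, the comparison is simple enough to sketch in a few lines. This is an illustrative sketch only: the field names (`author`, `opened_at`, `first_approval_at`) are hypothetical, not any real export format, and you'd adapt them to whatever your forge's API or export actually provides.

```python
from datetime import datetime

def hours_to_approval(prs):
    """Average hours from PR open to first approval, grouped by author."""
    totals = {}
    for pr in prs:
        # Field names are assumptions; map them to your actual PR export.
        opened = datetime.fromisoformat(pr["opened_at"])
        approved = datetime.fromisoformat(pr["first_approval_at"])
        hours = (approved - opened).total_seconds() / 3600
        totals.setdefault(pr["author"], []).append(hours)
    return {author: sum(h) / len(h) for author, h in totals.items()}

prs = [
    {"author": "alice", "opened_at": "2024-05-01T09:00:00",
     "first_approval_at": "2024-05-01T11:00:00"},
    {"author": "bob", "opened_at": "2024-05-01T09:00:00",
     "first_approval_at": "2024-05-03T09:00:00"},
]
print(hours_to_approval(prs))  # alice averages 2.0 hours; bob, 48.0
```

A consistent gap like that, tracked over months against comparable peers, is exactly the kind of disparate-treatment record most employees never think to build.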

Most employees don’t know what data matters or how to collect it. ChatGPT Pro (GPT-5 Pro) can walk someone through exactly what to track and how to frame it: drafting precise, non-threatening documentation, escalating via well-written emails, and organizing evidence. I first saw this when a seed-stage startup I know lost a wage claim after an employee used ChatGPT to craft highly effective legal emails.

This is the shift: people won’t hire a lawyer to explore “maybe” claims on a $100K tech job—but they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull. On its own, ChatGPT isn’t a lawyer. In the hands of a thoughtful user, though, it’s close to lawyer-level support for spotting issues, building a record, and pushing for a fair outcome. The legal system will feel that impact.

  • JumpCrisscross 11 hours ago

    > they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull

    This is correct usage. Letting it draft notes and letters is not. (Procedural emails, why not.) Essentially, ChatGPT Pro lets one do e-discovery and preliminary drafting to a degree that’s good enough for anything less than a few million dollars.

    I’ve worked with startups in San Francisco, where lawyers readily take these cases on contingency because they’re so easy to win. The only times I’ve urged companies to fight back have been recent, when the emails and notes the employee sent were clearly LLM-generated and, in one instance, materially false. In the one case the company insisted on pursuing, that let the entire corpus of claims be cast into doubt and dismissed. Again, in San Francisco, a notoriously employee-friendly jurisdiction.

    I’ve invested in legal AI efforts. I’d be thrilled if their current crop of AIs were my adversary in any case. (I’d also take the bet on ignoring an LLM-drafted complaint over a human-written one, lawyer or not.)

    • mikert89 10 hours ago

      No, I think the big unlock is that a bunch of people who would never file lawsuits can at least approach it. You obviously can’t copy-paste its email output, but you can definitely verify legal terms and how to position certain phrases.

      • JumpCrisscross 9 hours ago

        > the big unlock is a bunch of people that would never file lawsuits can at least approach it

        Totally agree again. LLMs are great at collating and helping you decide if you have a case and, if so, convincing either a lawyer to take it or your adversary to settle.

        Where they backfire is when people use them to send chats or demand letters. That’s what you suggested, and it’s the part I’m pushing back on: I’m personally familiar with multiple cases where this turned a case the person could have won on contingency into one they couldn’t win, irrespective of which lawyers they retained.

      • OutOfHere 10 hours ago

        The legal system is extremely biased in favor of those who can afford an attorney. Moreover, the more expensive the attorney, the more biased it is in their favor.

        It is in effect not a legal system, but a system to keep lawyers and judges in business with intentionally vaguely worded laws and variable interpretations.

        • mikert89 9 hours ago

          Exactly. And it’s comical that the person I was debating with doesn’t understand this. A self-proclaimed investor in legal tech misses the biggest use case of AI in legal: providing access to people who can’t afford a lawyer or otherwise wouldn’t know to work with one.