Comment by low_tech_love 6 months ago

This week I was in a meeting for a rather important scientific project at the university, and I asked the other participants "can we somehow reliably cluster this data to try to detect groups of similar outcomes?", to which a colleague promptly responded "oh yeah, ChatGPT can do that easily".

stanislavb 6 months ago

I guess he's right: it will be easy and relatively accurate. Relatively, or at least seemingly.

  • low_tech_love 6 months ago

    So that’s it then? We replace every well-understood, objective algorithm with well-hidden, fake, superficial surrogate answers from an AI?

    • yorwba 6 months ago

      "cluster this data to try to detect groups of similar outcomes" is typically a fairly subjective task. If the objective algorithm optimizes for an objective criterion that doesn't match the subjective criteria that will be used to evaluate it, that objectivity is just as superficial.

      • low_tech_love 6 months ago

        I’m not sure I follow. Every clustering algorithm that’s not an LLM prompt is a well-known, precisely specified mathematical/computational procedure; no matter how complex, there’s a perfectly concrete structure behind it, and whether or not you agree with its results doesn’t change anything about them.
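        To make the point concrete, here is a minimal sketch (pure Python, hypothetical toy data) of what "a precisely specified procedure" means: k-means via Lloyd's algorithm optimizes a concrete objective (within-cluster sum of squares), and the same input and initialization always produce byte-identical output, whether or not anyone likes the clusters.

        ```python
        def kmeans(points, centers, iters=100):
            """Lloyd's algorithm on 1-D points; returns (labels, centers)."""
            for _ in range(iters):
                # Assignment step: each point goes to its nearest center.
                labels = [min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
                          for p in points]
                # Update step: each center moves to the mean of its assigned points.
                new_centers = []
                for j in range(len(centers)):
                    members = [p for p, l in zip(points, labels) if l == j]
                    new_centers.append(sum(members) / len(members) if members else centers[j])
                if new_centers == centers:  # converged: assignments can no longer change
                    break
                centers = new_centers
            return labels, centers

        def inertia(points, labels, centers):
            """The objective being minimized: within-cluster sum of squares."""
            return sum((p - centers[l]) ** 2 for p, l in zip(points, labels))

        data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]   # two obvious groups (made-up data)
        labels, centers = kmeans(data, centers=[0.0, 5.0])
        print(labels)                             # [0, 0, 0, 1, 1, 1]
        print(round(inertia(data, labels, centers), 3))  # 0.687
        # Re-running with the same inputs reproduces exactly the same result.
        ```

        Every step here is inspectable and the result is reproducible; an LLM prompt offers neither property.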

        The results of an LLM are an arbitrary approximation of what a human would expect to see as the results of a query. In other words, it correlates very well with human expectations and is very good at fooling you into believing it. But can it provide you with results that you disagree with?

        And more importantly, can you trust these results scientifically?