Comment by trod1234
We seriously live in the world of Anathem now, where apparently most people need a specialized expert to cut through plausible-sounding generated misinformation.
This is the second similar study I've seen on HN today that appears to be partly AI-generated, lacks rigorous methodology, and draws unfounded conclusions, seemingly to fuel a narrative.
The study fails to account for a number of elements, and that failure nullifies its conclusions as a whole.
AI chatbot tasks are, by their nature, communication tasks involving a third party (the customer). When a chatbot fails to direct the conversation, or loops coercively (a task computers really can't do well), customers get enraged because the interaction is crazy-making. The chatbot in such cases imposes a time cost with all the elements needed to call it torture: isolation, cognitive dissonance, coercion with perceived or real loss, and lack of agency. The study makes little if any differentiation between the tasks measured. Emotions Kill [1].
This produces outcomes with no change, or even higher demand for workers, just to calm those customers down, and that holds regardless of occupation. In other words, the CSR fielding calls or messages from irrationally enraged customers becomes the punching bag for verbal hostility after the AI has had the first chance to wind them up.
It is a stochastic environment, and very few conclusions can actually be supported; the study's reasoning seems to treat failure to reject a null hypothesis as evidence that the null is true.
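To make that concrete: with noisy outcomes and modest samples, a real effect can easily fail to reach statistical significance, and reading that non-result as "no disruption" is an error. A toy simulation (all numbers hypothetical, not from the paper):

    # Underpowered tests usually miss a real effect; each non-significant
    # result is absence of evidence, not evidence of absence.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect = -0.2              # a real, nonzero effect
    trials, rejections = 1000, 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, 50)          # small, noisy samples
        treated = rng.normal(true_effect, 1.0, 50)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < 0.05
    print(f"Rejected the null in {rejections / trials:.0%} of trials")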
The surveys use Denmark as an example (it being part of the EU), but it's unclear whether they properly account for company policies against submitting certain private data to a US-based LLM, given the risks related to GDPR. The surveys were reportedly sent directly to workers who are already employed, so the study measures neither displaced workers nor overall job reductions, which is historically how such integrations are adopted; this misleads the non-domain-expert reader.
The paper does not appear to be sound, and given that it relies solely on a difference-in-differences (DiD) approach without considering alternatives, it may be pushing a pre-fabricated narrative that AI won't disrupt the workforce, a claim the study doesn't actually support in any meaningful, rational way.
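For readers unfamiliar with it, DiD estimates a treatment effect as the difference between the pre/post change in a treated group and the pre/post change in a control group, and identification rests entirely on an untestable parallel-trends assumption. A minimal sketch on simulated data (variable names and numbers are mine, not the paper's):

    # Difference-in-differences on simulated data; the estimate is only
    # valid if treated and control would have moved in parallel absent
    # treatment -- an assumption, not something the data can prove.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4000
    treated = rng.integers(0, 2, n)   # 1 = occupation exposed to chatbots
    post = rng.integers(0, 2, n)      # 1 = observed after adoption
    true_effect = -0.5                # hypothetical effect on the outcome
    y = (0.3 * treated + 0.2 * post
         + true_effect * treated * post
         + rng.normal(0, 1, n))

    m = lambda t, p: y[(treated == t) & (post == p)].mean()
    did = (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))
    print(f"DiD estimate: {did:.2f}  (true effect: {true_effect})")

Note that every mean here is taken over people still in the sample; if displaced workers drop out entirely, as argued above, the estimate is silent about job losses by construction.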
This isn't how you do good science. Overgeneralizing is a fallacy, and while some computation is done to limit it, that doesn't touch on what you don't know, because what you don't know hasn't been quantified (i.e., the streetlight effect) [1].
To understand this, layman and expert alike must always pay attention to what they don't know. The talk linked below covers some of these issues without requiring technical expertise [1].
[1] [Talk] Survival Heuristics: My Favorite Techniques for Avoiding Intelligence Traps - SANS CTI Summit 2018