Comment by falcor84
That's one scenario, but I also see a potential scenario where this integration makes it easier to manage the full "chain of evidence" for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.
At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?
Possibly, but I have two concerns: (1) current LLM-based AI is not thinking critically, just auto-completing in a way that looks like thinking; and (2) the current AI rollout is incentivised for market capture, not honest work.