Comment by parentheses 5 days ago
It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research
I think the goal is to capture high-quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as patterns for LLMs to learn to emulate.
Why do you think these points would make the usage dangerous?
I am not so skeptical about using AI for paper writing, since the paper will often be public just days later anyway (on pre-print servers such as arXiv).
So yes, you use it to write the paper, but it soon becomes public knowledge regardless.
I am also not sure there is much to learn from the authors' drafts.