Comment by timmytokyo a day ago
It's helpful to understand where this paper is coming from.
The authors are part of the Bay Area rationalist community and are members of "MATS", the "ML & Alignment Theory Scholars", a new astroturfed organization that came into being just this month. MATS is not an academic or research institution, and none of this paper's authors lists any credentials other than MATS (or Apollo Research, another Bay Area rationalist outlet). MATS started in June for the express purpose of influencing AI policy. On its website, it describes how its "scholars organized social activities outside of work, including road trips to Yosemite, visits to San Francisco, and joining ACX meetups." ACX is Astral Codex Ten, a blog by Scott Alexander that serves as one of the hubs of the Bay Area rationalist scene.
I think I saw Apollo Research behind a paper that was being hyped a few months ago. The longtermist/rationalist space seems to keep spinning up new organizations under new names because a critical mass of people now hear the old names and say "effective altruism, you mean like Sam Bankman-Fried?" or "LessWrong, like that murder cult?" (which is a bit oversimplified, but a good enough heuristic for most people).