Comment by hollerith
Slightly more detail: until about 2001, Yudkowsky was what we would now call an AI accelerationist. Then it dawned on him that creating an AI that is much "better at reality" than people are would probably kill everyone unless the AI had been carefully designed to stay aligned with human values (i.e., to want what we want), and that ensuring it stays aligned is a very thorny technical problem. He was still hopeful that humankind would solve that problem, and he worked full time on it himself. In 2015 he came to believe that the alignment problem is so hard that it is very, very unlikely to be solved by the time it is needed (namely, when the first AI is deployed that is much "better at reality" than people are). He went public with his pessimism in April 2022, and his nonprofit, the Machine Intelligence Research Institute, fired most of its technical alignment researchers and shifted its focus to lobbying governments to ban the dangerous kind of AI research.