Long-term AI safety feeds

Listed in alphabetical order within each category. This resource is incomplete, so suggested additions are welcome.

Blogs

AI Alignment, Paul Christiano
Discussions of what AI alignment means and technical approaches to it; no longer regularly updated

AI Impacts, various authors
Attempts to estimate when various AI benchmarks will be achieved, focusing on expert prediction and Fermi estimation; updated roughly monthly

All Party Parliamentary Group on AI
Accessible introduction to AI and associated UK policy; updated infrequently

Deep Safety, Victoria Krakovna
Musings on AI safety, machine learning, and rationality; updated sporadically

DeepMind
Mix of newsletters, paper summaries, and organization updates; updated multiple times a month

Machine Intelligence Research Institute
Mix of newsletters, paper summaries, and organization updates; updated roughly monthly

OpenAI
Brief overviews of their research; updated multiple times a month

Newsletters

Alignment Newsletter, Rohin Shah of the Center for Human-Compatible AI
Highlights and links about important happenings in both the technical and policy AI arenas; sent out weekly

ChinAI, Jeffrey Ding of the Future of Humanity Institute
Translations and links to AI developments in China; sent out weekly

Centre for the Study of Existential Risk
Mix of links and organization updates

Future of Humanity Institute
Mix of vacancies and organization updates

ImportAI, Jack Clark of OpenAI
Paragraph descriptions of important new ML papers and AI policy happenings, and suggested implications of those events; sent out weekly

Leverhulme Centre for the Future of Intelligence
Mix of links and organization updates

Machine Intelligence Research Institute
Brief overviews of their research and events, as well as associated news and links; sent out monthly

Podcasts

80,000 Hours, Robert Wiblin
Long interviews with experts in various fields, some of whom are AI safety experts; new additions roughly weekly

Future of Life Institute, Ariel Conn
Medium-length interviews, primarily offering simplified explanations of current papers and events in the AI sphere and their relationship to AGI safety; new additions roughly monthly

Vlogs

Robert Miles’ YouTube channel
Accessible explanations of new ML developments and AI safety papers
