Long-term AI safety feeds

Listed in alphabetical order within each category. This resource is incomplete, so suggested additions are welcome.

Blogs

AI Alignment, Paul Christiano
Discussions of what AI alignment means and technical approaches to it; no longer regularly updated

AI Impacts, various authors
Attempts to estimate when various AI benchmarks will be achieved, focusing on expert prediction and Fermi estimation; updated roughly monthly

All Party Parliamentary Group on AI
Accessible introduction to AI and associated UK policy; updated infrequently

Deep Safety, Victoria Krakovna
Musings on AI safety, machine learning, and rationality; updated sporadically

Mix of newsletters, paper summaries, and organization updates; updated multiple times a month

Machine Intelligence Research Institute
Mix of newsletters, paper summaries, and organization updates; updated roughly monthly

Brief overviews of their research; updated multiple times a month

Job Boards

80,000 Hours Job Board
Select list of jobs at organizations working on causes prioritized by the effective altruism community, with a focus on positions relevant to AGI technical and strategy safety efforts

Beyond Sputnik: Getting Involved and Careers
Static page of prestigious science policy opportunities

Brad Traverse Jobs
Comprehensive database of Washington, DC jobs, searchable by category; filter to positions in the US IC for the greatest relevance 

Careers in AI Safety Facebook group
User-populated page of listings, featuring positions for technical and strategy careers

USA Jobs
Database of positions offered by the US government in all cities; vacancies open and close within a couple of weeks, so it is worth checking frequently; filter to positions in the US IC for the greatest relevance

Newsletters

Alignment Newsletter, Rohin Shah of the Center for Human-Compatible AI
Highlights and links about important happenings in both the technical and policy AI arenas; sent out weekly

ChinAI, Jeffrey Ding of the Future of Humanity Institute
Translations and links to AI developments in China; sent out weekly

Centre for the Study of Existential Risk
Mix of links and organization updates

Future of Humanity Institute
Mix of vacancies and organization updates

ImportAI, Jack Clark of OpenAI
Paragraph descriptions of important new ML papers and AI policy happenings, and suggested implications of those events; sent out weekly

Leverhulme Centre for the Future of Intelligence
Mix of links and organization updates

Machine Intelligence Research Institute
Brief overviews of their research and events, as well as associated news and links; sent out monthly

Podcasts

80,000 Hours, Robert Wiblin
Long interviews with experts in various fields, some of whom are AI safety experts; new additions roughly weekly

Future of Life Institute, Ariel Conn and Lucas Perry
Medium-length interviews primarily offering simplified explanations of current papers and events in the AI sphere and their relationship to AGI safety; new additions roughly monthly

Reading Lists

AGI Strategy – List of Resources, Rohin Shah
Spreadsheet with links to central resources on the topic, categorized by focus

Artificial Intelligence and Global Security Reading List, CNAS
Mix of videos, papers, reports, articles, and podcast episodes, with a particular focus on national security and publications coming out of the think tank itself

Reading Guide for the Global Politics of Artificial Intelligence, Allan Dafoe
Extensive list of papers and articles relevant to AI governance, categorized by focus

Videos

Robert Miles’ YouTube channel
Accessible explanations of new ML developments and AI safety papers