In June and July of this year, I ran a survey asking people how useful they found a variety of resources on AI alignment. I was particularly interested in “secondary resources”: that is, not primary research outputs, but resources that summarize, discuss, analyze, or propose concrete research efforts. I had other people promote the survey so that it would not be obvious that I was running it (and therefore would not affect what people said about AXRP, the podcast that I run). CEA helped a great deal with the shaping and promotion of the survey.
The goal of the survey was initially to figure out how useful AXRP was, but I decided that it would be useful to get a broader look at the space of these secondary resources. My hope is that the results give people a better sense of what secondary resources might be worth checking out, as well as gaps that could be filled.
Participants were shown a list of resources and selected those they’d engaged with for more than 30 minutes. For each resource they selected, they rated on a scale from 0 to 4 how useful they’d found it, how likely they’d be to recommend it to a friend getting into the field who hadn’t read widely, and how likely they’d be to recommend it to someone paid to do AI alignment research. You can do a test run of the survey at this link.
My summary of the results
AXRP, my podcast, is highly rated among people paid to work on technical AI alignment research, but less highly rated in other cohorts.
On a personal note, I find this a bit disappointing: I had hoped it could be useful for people orienting to research directions that they had not read widely about.
Rob Miles videos are highly rated among everyone, more than I would have guessed.
People really liked the AI Safety Camp, the AGI Safety Fundamentals Course, and conversations with AI alignment researchers.
People trying to get into alignment really liked the above and also MLAB. That said, they rate Rob Miles videos more highly as a recommendation than the AI Safety Camp and conversations with AI alignment researchers (but less highly than MLAB and the AGI Safety Fundamentals Course).
Basic stats
Entries with demographic info: 139
Entries that rate various resources: 99
Number that say ‘I have heard of AI alignment’: 95
Number that say ‘I am interested in AI alignment research’: 109
Number that say ‘I am trying to move into a technical AI alignment career’: 68
Number that say ‘I spend some of my time solving technical problems related to AI alignment’: 51
Number that say ‘I spend some of my time doing AI alignment field/community-building’: 37
Number that say ‘I spend some of my time facilitating technical AI alignment research in ways other than doing it directly’: 35
Number that say ‘I spend some of my time publicly communicating about AI alignment’: 36
Number that say ‘I am paid to work on technical AI alignment research’: 30
Number that say ‘I help run an organization with an AI alignment mission (e.g. CHAI, MIRI, Anthropic)’: 11
Context for questions
When sorting things by ratings, I’ve included the top 5, plus anything just below the top 5 if only a small number of resources fell there. I also included ratings for AXRP, the podcast I make. Ratings are paired with the standard error of the mean (total ratings have this standard error multiplied by the number of people in the sample). Only resources that at least 2 people engaged with were included.
Ratings were generally rounded to two significant figures, and standard errors were reported to the same precision.
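To make the “total” and “average” numbers below concrete, here is a minimal sketch of the computation as described above. The function name and the example ratings are mine, for illustration only; they are not taken from the survey data or its actual analysis script.

```python
from math import sqrt
from statistics import mean, stdev

def summarize(ratings):
    """Summarize one resource's 0-4 ratings the way this post does.

    "Average usefulness" is the mean rating with its standard error
    (sample stdev / sqrt(n)); "total usefulness" multiplies both the
    mean and its standard error by the number of raters (reach).
    """
    n = len(ratings)
    avg = mean(ratings)
    sem = stdev(ratings) / sqrt(n)  # standard error of the mean
    return avg, sem, avg * n, sem * n

# Hypothetical ratings from five respondents, for illustration only.
avg, sem, total, total_err = summarize([4, 3, 3, 2, 4])
```

Note that `stdev` needs at least two data points, which matches the rule above of only including resources that at least 2 people engaged with.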
Usefulness ratings
Among all respondents:
Total usefulness (multiplying average rating by reach):
80k podcast: 167 +/- 8
Superintelligence: 166 +/- 8
Talks by AI alignment researchers: 134 +/- 6
Rob Miles videos: 131 +/- 7
AI alignment newsletter: 117 +/- 7
conversations with AI alignment researchers at conferences: 107 +/- 5
Average usefulness:
Tie between AI Safety Camp at 3.5 +/- 0.3 and MLAB at 3.5 +/- 0.4
AGISF: 3.2 +/- 0.2
Convos: 3.1 +/- 0.2
ARCHES agenda: 3.0 +/- 0.7
80k podcast: 2.7 +/- 0.2
Then there’s a tail just under that, AXRP is at 2.6 +/- 0.2
Among people who spend time solving alignment problems:
Total usefulness:
Superintelligence: 48 +/- 5
Talks: 47 +/- 4
Convos: 45 +/- 4
AI Alignment Newsletter: 42 +/- 5
80k podcast: 37 +/- 4
Embedded Agency sequence: 36 +/- 5
Everything else 29 or below, AXRP is 20 +/- 2.
Average usefulness:
Convos: 3.2 +/- 0.3
AI Safety Camp: 3.2 +/- 0.3
Tie between AGISF at 2.7 +/- 0.4 and ML Safety Newsletter at 2.7 +/- 0.3
AI Alignment Newsletter: 2.6 +/- 0.3
Embedded Agency sequence: 2.6 +/- 0.3
Then a smooth drop in average usefulness, AXRP is at 2.2 +/- 0.3
Among people paid to work on technical AI alignment research:
Total usefulness:
Convos: 28 +/- 3
Talks: 26 +/- 2
Superintelligence: 23 +/- 4
AXRP: 22 +/- 3
Embedded Agency sequence: 20 +/- 3
Everything else 19 or below.
Average usefulness:
AI Safety Camp: 3.7 +/- 0.3
AI Alignment Newsletter: 3.2 +/- 0.4
Convos: 3.1 +/- 0.3
Rob Miles videos: 2.8 +/- 0.5 (honourable mention to AIRCS workshops, which had one rating and scored 3 for usefulness)
AXRP: 2.8 +/- 0.3
Everything else 2.5 or below.
Recommendation ratings
Alignment professionals recommend to peers:
Convos with researchers: 3.7 +/- 0.2
AXRP: 3.3 +/- 0.2
Tie between ML Safety Newsletter at 3.0 +/- 0.4 and AI Alignment Newsletter at 3.0 +/- 0.5
Rob Miles videos: 2.6 +/- 0.5
Embedded Agency sequence: 2.5 +/- 0.5
Everything else 2.4 or lower
Alignment professionals recommend to newcomers (= people trying to move into AI alignment career):
AGISF: 3.7 +/- 0.2
Rob Miles: 3.4 +/- 0.3
The Alignment Problem: 3.2 +/- 0.3
80k podcast: 3.1 +/- 0.3
AI Safety Camp: 3.0 +/- 0.5
Everything else 2.8 or lower (AXRP is at 1.9 +/- 0.4)
Newcomers recommend to newcomers:
MLAB: 4.0 +/- 0.0 (2 ratings)
AGISF: 3.7 +/- 0.1
Rob Miles: 3.4 +/- 0.2
AI Safety Camp: 3.0 +/- 0.9
Human Compatible (the book): 2.8 +/- 0.3 (honourable mention to AIRCS workshops which had one rating, and scored 3)
The Alignment Problem: 2.8 +/- 0.3
Everything else 2.6 or lower (AXRP is at 2.4 +/- 0.3)
One tidbit: judging by these ratings, newcomers seem to agree with the professionals about what newcomers should engage with.
Details of the survey
The survey was run on GuidedTrack. Due to an error on my part, if anybody pressed the ‘back’ button and changed a rating, their results were unrecoverably corrupted (hence the drop-off between the total number of entries and the number with data I could use).
The list of resources:
AGI Safety Fundamentals Course
the AI Alignment Newsletter
AXRP - the AI X-risk Research Podcast
the ML Safety newsletter
Human Compatible (book)
The Alignment Problem (book)
Rob Miles videos
the Embedded Agency sequence on the Alignment Forum
the Value Learning sequence on the Alignment Forum
the Iterated Amplification sequence on the Alignment Forum
the FLI podcast
the 80,000 Hours podcast
Life 3.0 (book)
Superintelligence (book)
AI Safety Camp
AIRCS workshops
the Machine Learning for Alignment Bootcamp
the ARCHES agenda by Andrew Critch and David Krueger
Unsolved Problems in ML Safety by Hendrycks et al
Concrete Problems in AI Safety by Amodei et al
Scalable agent alignment via reward modeling: a research direction by Leike et al (aka “the recursive reward modelling agenda”)
conversations with AI alignment researchers at conferences
talks by AI alignment researchers
the annual AI Alignment Literature Review and Charity Comparison
The rating scale for usefulness:
0: Not at all
1: A little
2: Moderately
3: Very
4: Extremely
The probability rating scale:
0: 0-20%
1: 20-40%
2: 40-60%
3: 60-80%
4: 80-100%
As well as the details published here, I also collected how many years people had been interested in AI alignment and/or paid to work on technical AI alignment research, as applicable. Respondents could also write comments about specific resources and about the survey as a whole, and could note where they heard about the survey.
For more details, you can see my GitHub repository for this survey. It contains the GuidedTrack code to specify the survey, the results, and a script to analyze the results. Note that I redacted some details of some comments to remove detail that might identify a respondent.