Key points
Artificial intelligence may be among the most important technologies to emerge during our lifetimes. As it becomes more integrated into our lives, influencing decisions, shaping industries, and transforming the way we live, it is crucial to understand the potential risks it poses.
This fellowship is designed for students interested in the alignment and governance of AI. Broadly, alignment refers to an AI system’s ability to act in accordance with human goals, and governance refers to how society can manage the transition from its current state to a world with advanced AI. Participants may bring a range of relevant backgrounds, including math, computer science, neuroscience, philosophy, law, and policy.
<aside>
📌 Goals:
- Equip participants with the vocabulary to talk about risks from AI
- Create space and time for participants to develop their own views about risks from AI (as opposed to deferring to the views of others)
- Provide resources for participants to further pursue AI alignment and governance
</aside>
About this curriculum:
- Each week is focused on a specific topic within alignment, governance, or AI in general.
- We prioritize short readings and include discussion questions to guide your reading.
- Alignment and governance are both covered in one curriculum. Participants with more background knowledge in one of these areas may find the other less interesting, but we encourage everyone to engage with both.
- This syllabus is subject to change. The fields of alignment and governance are quite vast, and we will sometimes adjust the curriculum in real time to better serve our discussion group.
- The curriculum draws influence from:
Discussion group logistics:
- Each group will meet for 1–2 hours each week to read and/or discuss that week’s readings.
- Groups will have 5–10 people.
Table of contents
Week 1: Introduction to AI safety