<aside> 💡 What to expect: Each semester, we offer two curriculums spanning topics in AI alignment and governance.

We will host two technical fellowships:

This semester, we will also run a reading group led by graduate students (Oliver Liu, Deqing Fu) and Prof. Willie Neiswanger on topics pertaining to safety (alignment, mechanistic interpretability, etc.). The goal of this group is to meet regularly to discuss papers and brainstorm ideas that lead to publications at top conferences. If you're interested, please fill out the application! For example, here is a list of papers that I will be presenting at our next meeting: Scaling Laws for Associative Memories and Birth of a Transformer: A Memory Viewpoint.

</aside>

Curriculums