BI 098 Brian Christian: The Alignment Problem

Brain Inspired

February 18, 2021 | 01:32:38

Show Notes

Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to AI that compromises our values by accomplishing tasks in ways that are harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about the topics listed in the timestamps below.

Timestamps:
4:22 – Increased work on AI ethics
8:59 – The Alignment Problem overview
12:36 – Stories as important for intelligence
16:50 – What is the alignment problem?
17:37 – Who works on the alignment problem?
25:22 – AI ethics degree?
29:03 – Human values
31:33 – AI alignment and evolution
37:10 – Knowing our own values?
46:27 – What have we learned about ourselves?
58:51 – Interestingness
1:00:53 – Inverse RL for value alignment
1:04:50 – Current progress
1:10:08 – Developmental psychology
1:17:36 – Models as the danger
1:25:08 – How worried are the experts?

Other Episodes

BI 062 Stefan Leijnen: Creativity and Constraint
March 04, 2020 | 01:57:16
Stefan and I discuss creativity and constraint in artificial and biological intelligence. We talk about his Asimov Institute and its goal of artificial creativity...

BI 056 Tom Griffiths: The Limits of Cognition
December 22, 2019 | 01:27:37
Support the show on Patreon for almost nothing. I speak with Tom Griffiths about his “resource-rational framework”, inspired by Herb Simon's bounded rationality and...

BI 186 Mazviita Chirimuuta: The Brain Abstracted
March 25, 2024 | 01:43:34
Support the show to get full episodes and join the Discord community. Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we...