BI 098 Brian Christian: The Alignment Problem
Brain Inspired

February 18, 2021 | 01:32:38

Show Notes

Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, rather than AI that compromises our values by accomplishing tasks in ways that are harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about the topics listed in the timestamps below.

Links:

Timestamps:
4:22 – Increased work on AI ethics
8:59 – The Alignment Problem overview
12:36 – Stories as important for intelligence
16:50 – What is the alignment problem?
17:37 – Who works on the alignment problem?
25:22 – AI ethics degree?
29:03 – Human values
31:33 – AI alignment and evolution
37:10 – Knowing our own values?
46:27 – What have we learned about ourselves?
58:51 – Interestingness
1:00:53 – Inverse RL for value alignment
1:04:50 – Current progress
1:10:08 – Developmental psychology
1:17:36 – Models as the danger
1:25:08 – How worried are the experts?

Other Episodes

August 02, 2018 | 00:41:58

BI 001 Steven Potter: Brains in Dishes

Find out more about Steve at his website. I discovered him when I found his book chapter "What Can AI Get from Neuroscience?" in...

March 12, 2021 | 01:25:00


BI 100.2 Special: What Are the Biggest Challenges and Disagreements?

In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience...

April 06, 2021 | 01:45:22


BI 101 Steve Potter: Motivating Brains In and Out of Dishes

Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers...
