📝 Episode Summary
An AI textbook author examines the trillion-dollar race toward AGI and argues that systems optimizing the wrong objectives pose existential risk.
🎯 Top Takeaways
- Beneficial AI requires encoding human preferences, which are complex and contextual
- International cooperation on AI governance is as critical as nuclear non-proliferation
- AI doesn't need consciousness to be dangerous; relentless optimization of a misspecified objective is sufficient
- Superintelligent AI poses existential risk if not designed with human values
- Current AI systems optimize for proxy goals, not human flourishing
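The proxy-goal point can be sketched in a toy simulation (hypothetical metrics, not from the episode): an agent that greedily maximizes a proxy such as clicks can drive the true objective, user satisfaction, to zero — a simple Goodhart's-law effect.

```python
# Toy Goodhart's-law sketch (illustrative numbers only):
# the proxy (clicks) rises monotonically with "sensationalism",
# but the true objective (satisfaction) peaks partway and then collapses.

def proxy(sensationalism: float) -> float:
    # Clicks rise monotonically with sensationalism.
    return sensationalism

def true_value(sensationalism: float) -> float:
    # Satisfaction peaks at moderate sensationalism, then falls to zero.
    return sensationalism * (1.0 - sensationalism)

# Greedy search over the proxy alone.
candidates = [i / 100 for i in range(101)]
chosen = max(candidates, key=proxy)

print(f"chosen sensationalism: {chosen:.2f}")   # 1.00 -- proxy is maximized
print(f"true value at choice:  {true_value(chosen):.2f}")   # 0.00 -- true goal destroyed
print(f"best achievable value: {max(true_value(c) for c in candidates):.2f}")   # 0.25
```

Optimizing the proxy perfectly yields the worst possible true outcome, while a moderate setting would have done far better — the gap between proxy goals and human flourishing in miniature.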