
AI Safety Fundamentals
Listen to resources from the AI Safety Fundamentals courses!
https://aisafetyfundamentals.com/
The Project: Situational Awareness
BlueDot Impact
By Leopold Aschenbrenner
A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence, because security vulnerabilities and competitive pressures will override safety considerations. He argues that a government-led 'AGI Project' is both inevitable and necessary to prevent adversaries from stealing these AI systems and to avoid losing human control over the technology.
Source:
https://situational-awareness.ai/the-project/?utm_source=bluedot-impact
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.