AI Safety Fundamentals: Governance

Strengthening Resilience to AI Risk: A Guide for UK Policymakers

May 03, 2024 Season 5 Episode 2

This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies policy levers as they apply to different stages of the AI lifecycle, which it splits into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy that ranks approaches in order of decreasing preference, arguing that “policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises.”

While this document is designed for UK policymakers, most of its findings are broadly applicable.

Original text:
https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf

Authors:
Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.

Chapter Markers


AI Risk Pathways
Achieving Resilience in the Domestic AI Policy Landscape
2.1 Creating visibility and understanding
2.2 Promoting best practices
2.3 Establishing incentives and enforcing regulation