How should government and society come together to address the challenge of regulating artificial intelligence? What approaches and tools will promote innovation, protect society from harm, and build public trust in AI? As part of the World Economic Forum’s Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning Platform, PHI President George Tilesch is part of the global working group that authored its first report. The report was sponsored by the Government of New Zealand and drew on representatives of at least 10 other governments, the tech industry, academia, and associations.

The World Economic Forum’s frameworks for Reimagining Regulation for the Age of AI seek to address the need to upgrade our existing regulatory environment to ensure the trustworthy design and deployment of AI. These frameworks provide governments with innovative approaches and tools for regulating AI that can be scaled.
The challenge
Artificial intelligence (AI) is a key driver of the Fourth Industrial Revolution. Algorithms are already being applied to improve predictions, optimize systems and drive productivity in many sectors. However, early experience shows that AI can create serious challenges. Without proper oversight, AI may replicate or even exacerbate human bias and discrimination, displace jobs, and lead to other unintended and harmful consequences.

To accelerate the benefits of AI and mitigate its risks, governments need to design suitable regulatory frameworks. However, regulating AI is a complex endeavour. Experts hold diverse views on what areas and activities should be regulated, and approaches to regulating AI diverge sharply across regions. In some jurisdictions, a lack of consensus on a path forward and the risk of stifling innovation may deter any action. Emerging controversies surrounding AI can also force governments to implement hastily constructed and suboptimal regulatory policies.
Given the growing importance of this powerful technology, AI regulation should not be designed in a haphazard manner. A collaborative roadmap is needed to reimagine an agile regulatory system for AI that encourages innovation and minimizes its risks.
The Reimagining AI Regulation project brings together stakeholders from all sectors of society to co-design innovative, agile frameworks for governing AI. Underpinning this work is the belief that robust regulation promotes consumer confidence, enables global mobility, and grants the social licence needed for the adoption of emerging technologies.
Activities are centred on three core objectives:
- Framing national and global conversations on regulating AI in a coherent and accessible manner
- Developing a roadmap for policy-makers to facilitate their decisions about whether and how to regulate AI
- Identifying and iterating innovative approaches and tools for regulating AI that can be scaled