There is growing urgency to establish regulations for artificial intelligence (AI), and public concerns about potential harms and human rights violations are valid. However, many proposed regulatory regimes would add significant compliance burdens for organizations already navigating a complex landscape.
Before building new structures, it is worth considering how existing regulations, standards, and professional oversight bodies could be extended to cover AI. Professional engineers, for example, already adhere to strict ethical codes. Adapting these established frameworks to address AI-specific risks could be faster and more efficient than constructing entirely new regulatory regimes.
By building on existing mechanisms that already safeguard critical infrastructure, public safety, and environmental sustainability, we can promote responsible AI development without stifling innovation. Doing so requires a thoughtful, collaborative approach that weighs innovation against risk mitigation.
It’s time we considered Lean AI Regulation.