Why do we think that paper AI policies will be enough to handle AI risk?

With AI’s ability to learn and adapt, we need measures that are also able to learn and adapt.
This is a fundamental principle of cybernetics, captured by the Good Regulator Theorem (Conant and Ashby, 1970): every good regulator of a system must be a model of that system. The regulator must mirror the structure of what it regulates – similar in form, dynamics, and complexity.
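The closely related Law of Requisite Variety (Ashby) makes this point quantitative. A sketch in entropy terms, ignoring passive buffering:

```latex
% Ashby's Law of Requisite Variety, stated in entropy form:
% the uncertainty H(E) remaining in outcomes is bounded below by
% the uncertainty H(D) of the disturbances minus the regulator's
% capacity H(R). A regulator with less variety than the system it
% faces cannot drive outcomes below this floor.
H(E) \;\ge\; H(D) - H(R)
```

In plain terms: only variety can absorb variety. A regulator that cannot match the range of behaviors of the system it governs leaves an irreducible residue of unregulated outcomes.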
That’s why a static, paper-based policy will never be enough to govern – that is, to regulate – the use of AI.
Governance – the means of regulation – must be at least as capable as the AI it governs.