🔸The Law of Inevitable Ethical Inadequacy🔸
The cybernetics law of Inevitable Ethical Inadequacy is simply stated: “If you don’t specify that you require a secure ethical system, what you get is an insecure unethical system.” In other words, unless the system explicitly specifies ethical goals, it will regulate away from ethical behaviour and towards the other goals you have targeted.
You can replace the word "ethical" with "safety", "quality", or "environmental", which are more concrete examples of ethics-based programs that govern an organization. If these goals are not part of the value creation system then, according to this law, the system (in this case the value chain) will always optimize away from "quality", "safety", or "environmental" goals and towards non-ethical outcomes.
This dynamic may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the value chain will regulate towards that goal at the expense of all others.
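The dynamic above can be sketched as a toy optimization. In this hypothetical model (all names and numbers are illustrative, not drawn from any real compliance program), a value chain allocates a share of its effort to safety. When the objective scores only productivity, the optimizer drives the safety allocation to zero; adding an explicit safety term to the objective keeps it in the optimum.

```python
# Toy illustration: an optimizer regulates toward whatever the objective
# specifies, and away from anything it omits. All names are hypothetical.

def best_allocation(objective, steps=100):
    """Grid-search the share of effort given to safety (0.0 to 1.0)."""
    candidates = [i / steps for i in range(steps + 1)]
    return max(candidates, key=objective)

def productivity(safety_share):
    # Output falls as effort is diverted to safety work.
    return 1.0 - safety_share

def productivity_only(safety_share):
    # The objective says nothing about safety.
    return productivity(safety_share)

def productivity_and_safety(safety_share):
    # Safety is explicitly specified: penalize incident risk,
    # which falls as safety effort rises.
    incident_risk = (1.0 - safety_share) ** 2
    return productivity(safety_share) - incident_risk

print(best_allocation(productivity_only))        # safety effort optimized to 0.0
print(best_allocation(productivity_and_safety))  # safety effort retained at 0.5
```

The point of the sketch is not the numbers but the structure: the system converges on whatever the stated objective rewards, so an unstated goal is a goal the system will trade away.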
This has never been more important than now, when it comes to the use of Artificial Intelligence (AI). If organizations want to steer away from the harms associated with the use of AI in their value chains, they must explicitly state their objectives for the responsible use of AI. Otherwise they will inevitably optimize towards productivity at the expense of ethical values.
In theory and in practice, compliance outcomes cannot be separate objectives overlaid on top of operational systems and processes. Compliance goals must be explicitly specified in the value outcomes we intend to achieve, and compliance must have corresponding operational programs to regulate the business towards those outcomes.
That’s why we are seeing more roles in the “C-Suite” such as Chief Security Officer, Chief Safety Officer, Chief Sustainability Officer, and so on. These are the general managers of the programs needed to regulate the organization towards targeted compliance outcomes.
This is the world of Operational Compliance – the way organizations operate in high-risk, highly regulated environments. These environments are highly regulated not only because of government regulation, but also because organizations want to ensure they advance the outcomes they want and avoid the ones they don't.