The Cost of AI
- Raimund Laqua
Is the collateral damage from AI worth it, and who should decide?

When it comes to AI, we appear to be hell-bent on developing Artificial General Intelligence (AGI): consuming all available energy, conducting uncontrolled AI experiments in the wild at scale, and disrupting society without a hint of caution or duty of care.
"Should we?" has always been the real question. Yet when it is asked, the conversation often turns to silence.
Now, creating smart machines that can simulate intelligence is not the primary issue; giving them agency to act in the real world without understanding the risk is the real problem. Some might even call this foolishness.
The agentic line should never have been crossed without adequate safeguards. And yet without understanding the risk, how will we know what is adequate?
Nevertheless, here we are, developing AI agents ready to be deployed in full force. For what purpose, and at what cost?
Technology is often considered neutral, and that appears to be how we are treating AI: just like any other IT application, morally agnostic.
Whether or not technology is neutral, the question remains: are we morally blind, or just wilfully ignorant?
Do we really know what we are giving up to gain something we know very little about?
To address some of this risk, organizations are pursuing ISO 42001 certification as a possible shield against claims of negligence or wrongdoing, and AI insurance will no doubt follow soon.
But perhaps we would do better by learning from the medical community and treating AI as something that is both a help and a harm, not neutral. More importantly, it is something that requires a measure of precaution, a duty of care, and professional engineering.
If we did, we would keep AI in the lab until we studied it carefully. We would conduct controlled clinical trials to ensure that specific uses of AI actually create the intended benefits and minimize the harms, anticipated or otherwise.
Time will tell if the decisions surrounding AI will prove to be reckless, foolish, or wise.
However, what should not happen is for those who will gain the most to decide if the collateral damage is worth it.
What are we sacrificing, what will we gain, and will it be worth the risk?
Let’s face the future, but with our eyes open so we can count the cost.
For organizations looking to implement AI systems responsibly, education is the crucial first step. Understanding how these standards apply to your specific context creates the foundation for successful implementation.
That's why Lean Compliance is launching a new educational program to help organizations understand and take a standards-based approach to AI. From introductory webinars to comprehensive implementation workshops, we're committed to building your capacity for responsible and safe AI.