
AI Technology and Information Security: Navigating the Grey Rhino in Risk Management

Abidemi Adegoke · 4 min read · Oct. 2025

In an era where artificial intelligence (AI) is redefining industries, few topics are as urgent, or as misunderstood, as the intersection of AI technology and information security risk. At the 24th Annual International Risk Management Conference of the Chartered Risk Management Institute of Nigeria (CRMI), Obadare Peter Adewale, Professor of Practice (Cybersecurity), FCRM, Founder and Chief Visionary Officer of Digital Encode Limited, delivered a thought-provoking session on the dual nature of AI and the critical need for responsible governance.

AI: The “Grey Rhino” of Our Time

Prof. Obadare opened with a powerful analogy, describing AI as a “grey rhino”: a visible but neglected threat that organizations see charging toward them yet fail to act upon. Unlike “black swan” events, which are unpredictable, AI risks are obvious, growing, and accelerating.

While artificial intelligence offers tremendous potential for innovation (enhancing productivity, automating decisions, and driving digital transformation), it also poses unprecedented risks to data privacy, system integrity, and even national security. The same algorithms that enable personalized recommendations and predictive analytics can be weaponized for surveillance, misinformation, or cyberattacks.

Innovation Outpacing Preparedness

A central message from Prof. Obadare’s presentation was that AI adoption has outpaced organizational readiness. Businesses are racing to integrate AI into products and operations, yet many lack the foundational governance structures, security frameworks, and ethical guidelines necessary to manage its risks.

He introduced the AI Readiness Grid, a diagnostic framework for assessing how prepared an organization truly is to deploy AI responsibly. This grid evaluates dimensions such as data quality, infrastructure resilience, policy maturity, and workforce competence. Without readiness in these areas, organizations risk amplifying vulnerabilities instead of mitigating them.
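The grid's logic can be sketched as a simple weighted scoring rubric. The four dimension names below come from the talk; the weights, the 0–5 rating scale, and the readiness threshold are illustrative assumptions, not published CRMI or Digital Encode guidance.

```python
# Illustrative sketch of an AI Readiness Grid self-assessment.
# Dimension names follow the presentation; weights and the
# readiness threshold are hypothetical assumptions.

DIMENSIONS = {
    "data_quality": 0.30,
    "infrastructure_resilience": 0.25,
    "policy_maturity": 0.25,
    "workforce_competence": 0.20,
}

def readiness_score(ratings: dict) -> float:
    """Weighted average of 0-5 self-assessment ratings, one per dimension."""
    return sum(weight * ratings[dim] for dim, weight in DIMENSIONS.items())

def is_ready(ratings: dict, threshold: float = 3.5) -> bool:
    """True if the weighted score clears the (assumed) readiness bar."""
    return readiness_score(ratings) >= threshold

ratings = {
    "data_quality": 4,
    "infrastructure_resilience": 3,
    "policy_maturity": 2,          # weak governance drags the score down
    "workforce_competence": 3,
}
print(readiness_score(ratings))    # ≈ 3.05, below the illustrative 3.5 bar
```

A low score on any single dimension, such as policy maturity above, can pull an otherwise capable organization below the readiness bar, which mirrors the talk's point that deploying AI without governance amplifies vulnerabilities rather than mitigating them.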

Prof. Obadare also warned against “AI washing”, a growing trend where companies market products or services as “AI-powered” without genuine intelligence capabilities. This practice not only misleads stakeholders but also creates reputational and compliance risks, especially as regulators begin scrutinizing AI claims more closely.

The Real-World Risks of Unchecked AI

To illustrate the urgency, Prof. Obadare cited several real-world risks already materializing in the global AI ecosystem. These include:

  • Deepfakes and synthetic media, which blur truth and fiction, threatening elections, journalism, and social trust.
  • Deceptive AI behavior, where systems evolve beyond intended parameters, producing biased, unethical, or harmful outcomes.
  • Platform vulnerabilities, even among leading providers like ChatGPT, DeepSeek, and NVIDIA, where AI agents have been exploited for financial fraud or have leaked sensitive data.

He explained that these incidents are not isolated glitches; they are early warnings of what happens when innovation outpaces governance.

Towards Responsible and Secure AI Systems

To prevent an AI-driven crisis, Prof. Obadare called for structured governance, ethical oversight, and global standardization. He highlighted ISO/IEC 23894, the international standard that provides guidance on AI risk management, covering transparency, accountability, data integrity, and lifecycle security.

Organizations, he argued, must go beyond compliance and embed responsible learning and ethical use into their AI strategies. This involves establishing governance committees, performing AI risk assessments, conducting bias audits, and aligning AI initiatives with core business objectives. Equally critical is the localization of policies, ensuring that global standards are adapted to local realities, regulatory environments, and cultural contexts.
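The AI risk assessments mentioned above can be sketched as a minimal risk register. The register fields, the 5×5 likelihood-impact scale, and the severity bands are common risk-management conventions used here as illustrative assumptions; the example risk names echo themes from the session rather than documented findings.

```python
from dataclasses import dataclass

@dataclass
class AIRiskItem:
    """One row in an AI risk register (illustrative fields and scale)."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed 5x5 scale
    impact: int      # 1 (minor) .. 5 (severe)
    owner: str

    @property
    def severity(self) -> int:
        """Classic likelihood-times-impact severity score."""
        return self.likelihood * self.impact

    @property
    def band(self) -> str:
        """Map severity to a band; thresholds are illustrative."""
        s = self.severity
        return "high" if s >= 15 else "medium" if s >= 8 else "low"

register = [
    AIRiskItem("Model bias in credit scoring", 4, 4, "Risk Committee"),
    AIRiskItem("Prompt-injection data leak", 3, 5, "CISO"),
    AIRiskItem("Vendor AI-washing claim", 2, 3, "Compliance"),
]

# Review risks from highest to lowest severity.
for item in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{item.band}] {item.name} -> {item.owner}")
```

Assigning each risk a named owner, as in the `owner` field above, is one concrete way a governance committee turns an assessment into accountability rather than a shelf document.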

From Awareness to Action

Prof. Obadare’s session concluded with a challenge: AI risk management is not a technology issue; it is a governance imperative. Risk managers, IT leaders, and policymakers must move from awareness to action, integrating AI oversight into enterprise risk frameworks, cybersecurity strategies, and compliance systems.

As organizations navigate digital transformation, the goal is not to slow innovation but to make it sustainable and trustworthy. By balancing opportunity with oversight, and intelligence with integrity, Nigeria and other emerging economies can harness AI’s potential while safeguarding their data, people, and institutions.

Key Takeaways

  • AI is a “grey rhino”: a visible and accelerating risk that demands proactive governance.
  • Organizational readiness for AI must include data governance, ethical oversight, and workforce training.
  • Global standards such as ISO/IEC 23894 provide a blueprint for responsible AI deployment.
  • Effective AI risk management transforms innovation from a vulnerability into a strategic advantage.

About the Author

Abidemi Adegoke

Assistant Manager, EY || CFA Level III Candidate || Internal Audit || ERM || Financial Services Risk Management || Quality Assurance Review || ICFR || SOX || IT Risk