GenAI Revolution Raises Cloud Risks

Written by ilyah | Jun 10, 2025 6:21:48 PM

Due to the fast pace of change and an unfamiliar attack surface, each phase of the AI workflow has unique security challenges. ML and AI workflows also differ from traditional cloud workloads because of their distinct consumption models:

  1. Public SaaS platforms – web-based (e.g., ChatGPT, Bard)
  2. Managed AI – hosted by major cloud providers (e.g., Microsoft AI Studio)
  3. Custom-developed GenAI – MLOps (e.g., developers using IDEs with extensions, version control such as GitHub, GenAI APIs, and open-source AI integrations)

Many AI security challenges mirror familiar cloud issues:
  • Shadow IT, procurement challenges, business-led changes
  • Identity-based access to AI resources
  • Misconfigurations, exposures, and breaches
  • Operational challenges
  • Limited visibility into data sensitivity and data tagging
  • Third-party risk

New challenges have emerged due to the rapid pace of technology innovation and the lack of security guardrails:
  • Adversarial Attacks: AI models, especially in areas like computer vision and natural language processing, are vulnerable to adversarial attacks, in which malicious inputs are crafted to manipulate model outputs (see the FGSM sketch after this list). These attacks can have harmful consequences in high-stakes applications such as autonomous vehicles, healthcare, or finance.
  • Data Privacy and Protection: AI systems often process sensitive data, raising concerns about leakage of personal or confidential information through prompts, training data, and model outputs.
  • Model Theft and Intellectual Property Risks: Attackers can steal or reverse-engineer AI models, which risks exposing proprietary algorithms or data used in training. This can lead to intellectual property theft and unauthorized use of sensitive data.
  • Model Drift and Data Poisoning: Over time, changes in data can degrade model accuracy (model drift), while malicious actors may inject corrupted data into training, altering model behavior and reducing reliability (a label-flip poisoning sketch follows this list).
  • Insecure APIs and Interfaces: AI systems often expose APIs for integration with other systems; if improperly secured, these interfaces can be exploited to manipulate, steal, or misuse data and model outputs (see the authenticated-endpoint sketch after this list).
  • Inadequate Access Controls and Authentication: Weak access control can allow unauthorized individuals to interact with, modify, or misuse AI models, leading to breaches, data theft, and potential manipulation of outputs.
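
The sketch below illustrates the adversarial-attack bullet with the Fast Gradient Sign Method (FGSM), one of the simplest evasion techniques: the attacker perturbs an input in the direction of the loss gradient to push the model toward a wrong answer. The model and data here are toy stand-ins (a randomly initialized linear classifier), not any specific production system; with a model this small the prediction may or may not flip, but the mechanics are the same against a real classifier.

```python
# Sketch: FGSM evasion attack. The "deployed model" is a hypothetical
# stand-in (a tiny randomly initialized linear classifier).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(10, 2)          # stand-in for a production classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)   # benign input
y = torch.tensor([0])                        # its true label

# Compute the gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: nudge every input feature in the direction that increases the loss.
epsilon = 0.3
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```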
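
To make the data-poisoning bullet concrete, here is a minimal label-flip sketch: an attacker who can corrupt a fraction of training labels (here, relabeling some class-0 examples as class 1) biases the resulting model and degrades its test accuracy. The dataset and model are synthetic scikit-learn stand-ins chosen for brevity.

```python
# Sketch: targeted label-flip poisoning on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Attacker relabels 40% of class-0 training examples as class 1.
rng = np.random.default_rng(0)
zeros = np.flatnonzero(y_tr == 0)
flip = rng.choice(zeros, size=int(0.4 * len(zeros)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
dirty = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"accuracy trained on clean labels:    {clean:.3f}")
print(f"accuracy trained on poisoned labels: {dirty:.3f}")
```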
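
Finally, a minimal sketch of the API-exposure and access-control bullets: an inference endpoint fronted by an API-key check so it is not reachable anonymously. The framework (FastAPI), route, header name, and verify_key helper are all illustrative assumptions; production systems would more commonly use OAuth/OIDC or cloud-IAM tokens rather than a static key.

```python
# Sketch: API-key gate in front of a model-inference endpoint (FastAPI).
# The route, header name, and key-provisioning scheme are illustrative.
import hmac
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
API_KEY = os.environ.get("MODEL_API_KEY", "")   # provisioned out of band

def verify_key(x_api_key: str = Header(default="")) -> None:
    # Constant-time comparison avoids leaking key material via timing.
    if not API_KEY or not hmac.compare_digest(x_api_key, API_KEY):
        raise HTTPException(status_code=401, detail="invalid or missing API key")

@app.post("/v1/generate", dependencies=[Depends(verify_key)])
def generate(prompt: str) -> dict:
    # Placeholder for the actual model call.
    return {"completion": f"(model output for: {prompt})"}
```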

What’s urgently needed

A new security approach is needed—one that unifies inventory, governance, access, and data protection—so organizations can safely embrace, not fear, the GenAI revolution.