AI is redefining third-party risk: why your “approved” vendors may no longer be safe custodians of your data


For years, vendor risk was treated almost exclusively as a procurement event. You assessed a new provider, negotiated terms, signed the contract and moved on to monitoring. However, that model is starting to break. The real issue now is not just new vendors entering your business ecosystem. Existing vendors are changing underneath you, in unprecedented ways due to the rise of AI.

AI is no longer arriving only through net-new tools. It is being folded into software relationships organizations already approved years ago: productivity suites, collaboration platforms, recruiting systems, CRMs, workflow tools, and analytics products.

A recent enterprise governance paper notes that 78% of surveyed organizations reported AI deployment in at least one business function, but also warns of a “persistent gap between ethical principles and concrete operational practices” in how that adoption is governed [1]. That gap is exactly where third-party risk is getting harder to see.

The issue: existing vendors, new exposure

Many widely trusted software providers are introducing AI features through routine updates, revised terms, or bundled functionality.

Sometimes those features are optional but switched on by default. Sometimes they arrive quietly inside the existing product, buried in a new update. Sometimes the change looks administrative. However, these subtle changes materially alter how data is processed, where it goes, what external models are involved, and what decisions the tool now helps shape.

The vendor may be approved, but the exposure is new.

That creates a structural problem for procurement, legal, compliance, IT, and security teams, because many internal governance processes are still set up to catch new suppliers, not meaningful changes inside existing ones.

That is one reason this issue keeps flaring up in public.

High-profile examples show the risk of AI creep is not theoretical

This is not a hypothetical problem waiting to happen. It is already happening, and the pattern is getting easier to spot.

The warning signs: where AI governance slips into a grey area

Case Study 1: LinkedIn  

In January 2025, the B2B social media giant LinkedIn faced a lawsuit alleging that the company had quietly introduced a privacy setting in August 2024 and later updated its privacy policy to permit the use of personal data for AI training.

The case was later dismissed after LinkedIn denied using private messages for training and produced evidence to the plaintiffs. But the damage was not only legal. The lack of clarity, and the grey area it left around potential data misuse, showed how quickly questions of trust, consent, and governance escalate when data practices appear to change mid-relationship. The outcome was something more familiar to governance teams: legal cost, reputational exposure, and forced remediation under scrutiny.

Case Study 2: Zoom

Zoom, the video conferencing platform, faced a similar backlash when changes to its terms raised concerns about how customer content could be used for AI. The company later clarified that it would not use customer audio, video, or chat content to train AI models without consent, and said its generative AI features were turned off by default and controlled by account owners and admins. But again, the issue was not just the clarification. The fact that a change in terms left a gap was enough to trigger immediate doubt about consent, oversight, and control.

Zoom had to respond publicly, revise its wording, and reassure customers that they still owned and controlled their content. That matters commercially because enterprise trust is part of the product. When customers start questioning whether their meeting content might be repurposed for AI, the issue is no longer legal wording in the background. It becomes a procurement, security, and platform-trust problem.

That is the first stage of the problem. Once AI is layered into tools employees already use, old governance assumptions stop holding up. Consent becomes less direct. Accountability becomes harder to trace. Control often sits with admins or employers, not with the individuals whose data is actually in play. Employees outside IT and GRC may assume the features have been approved by the business and begin using the AI tools without understanding their privacy and security implications.

The escalation: where AI widens the risk surface

In February 2026, Microsoft confirmed that a bug in Microsoft 365 Copilot Chat could return content from confidential emails in Drafts and Sent Items, even though labels and data loss prevention controls were supposed to restrict that behavior. Microsoft said the issue did not expose data to users who were not already authorized to access it, but the incident still mattered. It showed that once AI sits on top of an established platform, organizations cannot assume existing controls will work in exactly the same way.

Around the same time, one national parliament reportedly blocked built-in AI tools on lawmakers’ work devices over cybersecurity and privacy concerns linked to cloud processing of confidential correspondence. Google, despite being one of the biggest AI backers, also warned its own employees not to enter confidential information into chatbots. These are not fringe reactions. They show that even highly sophisticated organizations are treating AI features as a distinct governance and security issue.

The same pattern appears in public-facing platforms. Meta, the social media group, openly said it would use public posts, comments, and user interactions with its AI tools to train models in the EU, following earlier privacy complaints over the use of personal data for AI training.

This is not hype around new features. AI can redraw the data boundary, widen the attack surface, and introduce third-party risk through opaque models, hidden dependencies, and cloud processing. If those changes have not been formally reassessed, the organization is relying on assumptions that may no longer hold.

That matters even in established platforms. Once AI is layered in, the governance burden changes with it. A 2023 IBM CEO study found that 61% of CEOs see a lack of clarity around data lineage and provenance as a barrier to generative AI adoption, which shows how exposed organizations remain when they cannot clearly trace what sits behind the model.

So this is not just about headline-grabbing generative AI features. It is about model provenance, hidden dependencies, new data flows, and whether anyone has actually re-approved the revised risk profile. These are not minor product updates or legal fine print. They are governance decisions.

What protection does global regulation offer users against unwanted data use by AI?

More than many providers assume.

The rules are not identical across markets, but they are pointing in the same direction. Once AI changes how personal data is handled, whether through training, summarization, profiling, inference, or external processing, the legal position changes with it. What might look like a product update can trigger duties around notice, risk assessment, human oversight, and accountability.

United Kingdom: transparency and impact assessment are not optional

In the UK, the Information Commissioner’s Office (ICO) expects organizations using AI to be clear about why they process personal data, how long they keep it, and who they share it with, and to provide that information to individuals when their data is collected. The ICO also says that, in most cases, using AI will trigger the need for a data protection impact assessment because it is likely to involve high-risk processing. That puts the burden squarely on the organization using the tool, not just the vendor selling it. Existing vendors with new AI functionality could therefore represent a business risk the board is unaware of.

European Union: the AI Act raises the bar where risk is higher

In the EU, the AI Act goes further for high-risk use cases. Deployers of high-risk AI systems must use them according to the provider’s instructions, assign human oversight, ensure input data is relevant and sufficiently representative for the intended purpose, and in some cases carry out a fundamental rights impact assessment.

The practical message is straightforward: document the use case, assign responsibility, assess the risk, and be able to explain what the system is doing. That is a very different world from quietly enabling a new AI layer inside an existing software relationship.

United States: patchier rules, but real obligations

The U.S. remains more fragmented. There is no single federal equivalent to the EU AI Act. Instead, obligations are emerging through a mix of FTC enforcement and guidance on AI, privacy and confidentiality expectations for AI companies, state legislation such as Colorado’s AI Act for high-risk systems, and public-sector governance rules like the White House OMB memorandum on federal AI governance and risk management. The focus is not just on the label “AI” itself, but on the underlying risks: unfairness, opaque decision-making, weak notice, misleading claims, and misuse of personal data.

Even without one national rulebook, organizations cannot assume AI rollout sits outside ordinary legal accountability.


Middle East: risk-based governance is becoming more explicit

The Middle East is not standing still either. While the region does not yet have one unified AI rulebook, the direction of travel is clear.

The UAE has embedded AI into national digital and government strategy, while also maintaining a wider data protection framework. Saudi Arabia has published AI ethics principles and governance guidance through SDAIA. In Qatar, the Ministry of Communications and Information Technology has issued a national AI strategy and ethical AI guidelines, backed by a dedicated AI committee and wider digital agenda.

That matters for multinational firms because it reinforces a broader point: across regions, the details differ, but the direction is the same. Regulators increasingly expect organizations to know when AI is being used, what data it touches, who is accountable, how risk is assessed, and what recourse exists when things go wrong. Once AI functionality changes data handling, decision support, or external processing, the legal questions change too.

The risk is not only technical: what the academic research is saying

The obvious risk is that AI gets something wrong or quietly holds confidential data.

That matters. But it is not the deepest risk.

The deeper problem is that many organizations still do not govern AI well in practice. AI is already mainstream in business, with 78% of surveyed organizations reporting deployment in at least one business function. But the research suggests the governance response is still far less mature than the adoption curve [2].

That gap matters because most organizations are not short of principles. They already know the language: fairness, transparency, accountability, privacy. The problem is that these commitments often stay at policy level while the real decisions are being made elsewhere, in data access, workflow design, permissions, reuse, secondary use, and operational shortcuts. The research is blunt on this point. In many organizations, ethical safeguards are still too abstract for practical enterprise operations, and AI governance can slide into formal process without meaningful control.

That is exactly why “AI creep” through existing vendors is such a live issue. The danger is not only that the feature itself might misfire. The danger is that the feature may quietly alter how data is collected, accessed, processed, or reused without anyone pausing to ask whether the original governance assumptions still hold. The risk enters long before the obvious failure. It enters when the workflow changes and nobody treats that change as a governance event.

Also, arguably transparency only works if there is a real audience capable of scrutinizing, questioning, and acting on what is being disclosed. Without that, transparency becomes close to meaningless. That is a powerful point in this context. A release note, admin notice, or updated privacy page is not the same thing as accountability if nobody on the customer side has the trigger, authority, or time to challenge what changed.

So, the real risk is not only technical failure. It is governance that reacts too late. It is principles with no mechanism. It is disclosure with no challenge path. It is adoption moving faster than accountability.


Business advice: stop treating material AI updates like routine product changes

The practical answer is not to treat every AI update as a crisis. It is to stop treating material AI updates like routine product changes.

At a minimum, organizations should build an internal AI trigger into vendor governance and IT change management. If a vendor introduces a new assistant, summarization layer, model-backed workflow, training clause, external model dependency, or new data-sharing path, that should trigger reassessment, and employees need to be aware of this.

That review should not sit only with procurement. It should bring in legal, privacy, security, compliance, and the business owner of the tool.

The questions are not complicated, but they do need to be asked every time (a minimal sketch of how they could be encoded as a change trigger appears at the end of this section):

  • Has the data boundary changed?
  • Is customer or employee data now being processed differently?
  • Are third-party models involved?
  • Can the feature be centrally disabled?
  • Are the contractual terms still fit for purpose?
  • Would the feature create issues under internal IT policy or sector regulation?
  • Do users actually understand what they are opting into?

Because that last point matters. In a lot of environments, the real governance failure is not malicious rollout. It is silent drift.
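One way to make that trigger concrete is to encode a few of the checklist questions as a simple flag in IT change management. The following is a minimal sketch in Python, assuming a team captures each vendor update as a small record; the VendorAIChange fields and the requires_reassessment helper are illustrative assumptions, not a prescribed schema or any particular tool’s API.

```python
from dataclasses import dataclass


# Illustrative record of a vendor update, based on the checklist above.
# Field names are hypothetical placeholders, not a product schema.
@dataclass
class VendorAIChange:
    vendor: str
    new_assistant_or_copilot: bool = False
    new_training_or_data_sharing_clause: bool = False
    external_model_dependency: bool = False
    data_boundary_changed: bool = False
    can_be_centrally_disabled: bool = True


def requires_reassessment(change: VendorAIChange) -> bool:
    """Return True if the update should be routed to a fresh risk review."""
    return any([
        change.new_assistant_or_copilot,
        change.new_training_or_data_sharing_clause,
        change.external_model_dependency,
        change.data_boundary_changed,
        not change.can_be_centrally_disabled,
    ])


# Example: a routine release note that quietly adds a summarization layer
# backed by an external model.
update = VendorAIChange(
    vendor="ExampleCRM",
    new_assistant_or_copilot=True,
    external_model_dependency=True,
)
if requires_reassessment(update):
    print(f"{update.vendor}: route to AI change review before enabling")
```

In practice, the flag would route the change to legal, privacy, security, compliance, and the business owner of the tool, rather than letting it ship as a routine update.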

The CoreStream GRC AI-agnostic point of view

At CoreStream GRC, we are not anti-AI. We are AI-agnostic. We integrate where AI adds real value. But we also think the market is still underestimating how often AI is being introduced carelessly, opaquely, or with too much assumed consent.

That is why we favor deliberate control and clear customer choice, not passive or hidden enablement. Clients have to actively opt in for AI to be integrated into their environment, and the LLM is usually their own approved AI instance, one that their legal and IT teams have vetted and manage.

If AI changes the risk boundary, it should not be quietly pushed through as a routine enhancement. It should be visible. It should be governable. And customers should understand what they are trading off when they switch it on.

That is not anti-innovation. It is just adult governance.

A practical next step

A good place to start is simple: ask IT, procurement, or enterprise architecture for a list of your top ten SaaS vendors and check which have introduced AI features, copilots, assistants, or model-backed workflows in the last six months.

Then ask a harder question: how many of those changes went through fresh legal, privacy, and risk review?

If the answer is “not many,” you have found the gap.
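As a worked illustration of that gap check, here is a minimal sketch, assuming the inventory can be reduced to two simple lists: vendors that have shipped AI features recently, and vendors whose changes went through a fresh review. The vendor names are hypothetical placeholders, not real products.

```python
# Vendors that introduced AI features, copilots, or model-backed workflows
# in the last six months (hypothetical examples).
vendors_with_new_ai_features = {
    "ProductivitySuiteX", "CollaborationToolY", "RecruitingSystemZ",
    "CRMPlatformA", "AnalyticsProductB",
}

# Vendors whose AI changes went through fresh legal, privacy, and risk review.
vendors_with_fresh_risk_review = {"CRMPlatformA"}

# Anything left in the difference reached production without reassessment.
governance_gap = vendors_with_new_ai_features - vendors_with_fresh_risk_review
print(f"{len(governance_gap)} of {len(vendors_with_new_ai_features)} "
      f"AI-enabled vendors lack a fresh review: {sorted(governance_gap)}")
```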

If this is already landing as a live issue in your environment, that is the point where an AI strategy review stops being a nice-to-have and starts becoming operationally useful.

Conclusion on AI and vendor risk

The next AI governance problem in your organization may not come from a brand-new vendor.

It may come from one you approved years ago.

That is the shift risk leaders need to absorb. AI is not only creating exposure through new tools. It is changing the legal, technical, and governance profile of trusted vendors already inside the stack.

And if your internal processes still treat those shifts as routine product evolution, you are probably giving AI more access, more influence, and more trust than anyone ever explicitly approved.

That is not modernization.

That is drift.

FAQ on vendor AI and third-party risk

How is AI changing third-party risk for businesses?

AI is changing third-party risk because it is no longer arriving only through brand-new vendors. It is now being added into software businesses already use and already approved, which can quietly change how data is processed, shared, stored, or used.

Why might an approved vendor no longer be safe for data storage?

An approved vendor may no longer be safe in the same way it once was because AI features, copilots, assistants, and model-backed workflows can introduce new data flows, external processing, and hidden dependencies that were not part of the original risk review.

What is AI creep in vendor risk?

AI creep is when AI functionality is gradually introduced into existing tools through updates, bundled features, revised terms, or default settings, without the change being treated as a fresh governance or risk event.

What should businesses check when a vendor adds AI features?

Businesses should check whether the data boundary has changed, whether personal or confidential data is being processed differently, whether third-party models are involved, whether the feature can be disabled, whether the contract still works, and whether users understand what they are opting into.

How can organizations reduce AI-related third-party risk?

Organizations can reduce the risk by building AI triggers into vendor governance and change management, reassessing material updates, involving the right stakeholders early, and treating meaningful AI changes as governance events rather than routine product updates.

Sources and further reading


[1] Huang, X., Kou, T. and Zhou, Q. (2026) ‘Embedding AI ethics in the data lifecycle: A framework for enterprise AI governance’, Technology in Society, 86, 103261. Available at: https://doi.org/10.1016/j.techsoc.2026.103261.

[2] Huang, X., Kou, T. and Zhou, Q. (2026) ‘Embedding AI ethics in the data lifecycle: A framework for enterprise AI governance’, Technology in Society, 86, 103261. Available at: https://doi.org/10.1016/j.techsoc.2026.103261.

