
How AgentID Solves Shadow AI in ChatGPT, Copilot, and Gemini with Browser-Level Governance

AgentID provides AI governance in two layers: API control for custom AI systems and browser-level governance for public AI chat tools.

By AgentID Editorial Team · 14 min read

April 14, 2026

Key takeaways

AgentID provides governance in two complementary ways: an API control layer for custom AI systems and a browser-based governance layer for public AI chat interfaces.

Shadow AI risk often appears outside approved internal workflows, inside ordinary browser sessions where employees use ChatGPT, Copilot, Gemini, and similar tools directly.

Browser-level governance matters because it can inspect prompts and files before submission, then log, mask, warn, or block based on policy.

Internal AI gateways remain important, but they do not automatically govern public browser AI usage.

AgentID's key differentiator is that it can help govern both enterprise-built AI systems and employee use of public AI interfaces.

TL;DR / Executive Summary

AgentID provides governance in two complementary ways. For custom AI systems, it acts as an AI API gateway and runtime control layer. For public AI interfaces, it provides a browser-based governance layer that can apply policy before prompts or file uploads are submitted. This article focuses on the second layer, because that is where many enterprises still have the biggest blind spot.

Shadow AI is what happens when employees use public AI tools such as ChatGPT, Microsoft Copilot, Google Gemini, and similar browser-based assistants outside approved internal workflows. That matters because public AI use is already widespread. Microsoft's 2024 Work Trend Index reported that 75% of knowledge workers use generative AI at work, and 78% of AI users are bringing their own AI tools to work. Cisco's 2024 Data Privacy Benchmark Study reported that 48% of respondents admitted entering non-public company information into generative AI tools.

The governance problem is not just that AI exists. It is that employees can paste sensitive text, upload documents, and interact with powerful public models directly in the browser, often outside the enterprise's internal AI gateways and approved application stack. Official product documentation from OpenAI, Microsoft, and Google confirms that these interfaces support direct prompts and file uploads, and that data handling can vary by product tier, account type, and settings.

In short: AgentID is designed to close the gap between AI policy and AI behavior. It governs custom AI systems through an API control layer, and it helps govern public AI tools such as ChatGPT, Copilot, and Gemini through a browser-based layer that can inspect prompts, evaluate files, mask sensitive data, block prohibited submissions, and create governance visibility before data leaves the user's screen.

AgentID's Two Governance Layers

AgentID should be understood as governance infrastructure, not just a compliance dashboard or policy library. The platform's value comes from controlling AI at the point where risk actually happens.

1. AgentID as an AI API gateway and control layer for custom AI systems. For custom AI applications, internal copilots, agentic workflows, and model-powered business systems, AgentID acts as an AI API gateway and runtime control layer. In that role, it can sit in the path between enterprise systems and model providers, helping organizations apply rules, monitor behavior, log decisions, and generate governance evidence around internal AI usage.

This is the layer most buyers already understand. If a company builds its own internal assistant, internal RAG workflow, or AI-powered customer workflow, it can govern that system by controlling the APIs, prompts, tools, outputs, and runtime behavior inside that approved architecture. A minimal code sketch of this gateway pattern appears below, after the second layer is introduced.

2. AgentID as a browser-based governance layer for public AI interfaces. For browser-based use of tools like ChatGPT, Copilot, Gemini, and similar public AI chat tools, AgentID provides a different kind of control: a browser-level or extension-based governance layer. This matters because many employees do not stay inside approved internal tools. They use the fastest interface available in the moment: the browser tab already open on their screen.

That is where Shadow AI becomes a governance problem. The employee is still doing work. The business data is still enterprise data. But the interaction happens outside the organization's internal AI gateway, which means internal controls may never see the prompt or file before it is submitted.
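To make the first layer concrete, here is a minimal, hypothetical sketch of what a gateway-style control point for custom AI systems can look like. The rule patterns, function names, and endpoint are illustrative assumptions for this article, not AgentID's actual API or rule set.

```typescript
// Hypothetical sketch of a gateway-style control layer for custom AI systems.
// Names, rules, and the endpoint below are illustrative assumptions, not AgentID's API.

type Decision = "allow" | "block";

interface PolicyResult {
  decision: Decision;
  reasons: string[];
}

// Check an outbound prompt against enterprise rules before it reaches a model provider.
function evaluatePolicy(prompt: string): PolicyResult {
  const reasons: string[] = [];
  if (/\b\d{3}-\d{2}-\d{4}\b/.test(prompt)) reasons.push("possible SSN detected");
  if (/\bCONFIDENTIAL\b/i.test(prompt)) reasons.push("confidentiality marker present");
  return { decision: reasons.length > 0 ? "block" : "allow", reasons };
}

// Gateway wrapper: apply policy, record the decision, then forward the approved request.
async function governedCompletion(prompt: string): Promise<string> {
  const result = evaluatePolicy(prompt);
  console.log(JSON.stringify({ event: "ai_request", ...result })); // governance log entry
  if (result.decision === "block") {
    throw new Error(`Blocked by AI governance policy: ${result.reasons.join("; ")}`);
  }
  // Placeholder URL for an approved model provider or internal model endpoint.
  const response = await fetch("https://models.internal.example/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return (await response.json()).text;
}
```

The design point is simply that every outbound model call passes through one place where policy can be applied and a governance record written.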

Why enterprises often need both. NIST's AI RMF and the NIST Generative AI Profile both emphasize governance across the lifecycle of AI use, including post-deployment monitoring and mechanisms for capturing and evaluating input from users. In practice, that means enterprises rarely solve AI governance with one control point alone. They often need one layer for custom AI systems they operate directly, and another for public AI interfaces employees can access through the browser.

| Governance layer | Primary environment | What it governs | Best use case | Strengths | Limitation if used alone |
| --- | --- | --- | --- | --- | --- |
| AI API gateway / control layer | Custom apps, internal copilots, model APIs, agent workflows | Runtime requests, outputs, policies, logs, routing | Enterprise-built or enterprise-managed AI systems | Strong control over approved internal AI architecture | Does not automatically cover employee use of public browser AI tools |
| Browser-based / extension-based governance layer | Public AI chat interfaces in the browser | Prompts, uploads, pre-send policy checks, point-of-use controls | ChatGPT, Copilot, Gemini, and similar tools used directly by employees | Covers Shadow AI at the moment of use | Does not replace deeper governance inside custom internal AI apps |
| Policy-only governance | Policies, training, acceptable-use guidance | Human instructions and process expectations | Baseline governance and awareness | Necessary for legal, HR, and program clarity | Does not enforce policy at the point of submission |
| Visibility-only approach | Monitoring and reporting | Usage patterns and governance telemetry | Early-stage discovery and risk mapping | Useful for understanding the problem before tightening controls | Does not prevent risky sharing on its own |

Why Shadow AI in Public Chat Interfaces Is Still a Major Enterprise Problem

Shadow AI is not simply unapproved AI. It is enterprise use of AI that happens outside the organization's intended governance pathway. Most often, that means employees opening a browser-based public assistant, pasting in work content, and asking for help without going through approved internal systems.

This behavior is easy to understand. Public AI chat tools are fast, familiar, and already part of many people's workflow. Microsoft's 2024 Work Trend Index found that employees are already adopting generative AI at work at scale, and many are bringing their own tools because organizational plans lag behind demand.

The risk comes from the type of content people share when they are under time pressure. Cisco's 2024 privacy benchmark found that 48% of respondents admitted entering non-public company information into generative AI tools. That is exactly the kind of real-world behavior that turns AI policy into an operational problem.

Public AI tools are also not just text boxes anymore. Microsoft documents direct file upload into Copilot. OpenAI documents file uploads in ChatGPT and notes that uploaded files may be stored in a user's Library. Google documents file uploads in Gemini and explains that the size, number, and handling of files depend on the product context and account type.

That means Shadow AI is not limited to someone pasting a paragraph into a prompt. It can include uploaded spreadsheets, PDFs, presentations, source files, screenshots, customer exports, legal drafts, HR documents, or internal strategy materials.

Government and privacy guidance reflects the same concern. UK government guidance to civil servants says they should never put sensitive information or personal data into these tools, and explicitly notes that government has no oversight over how data entered into web-based generative AI tools is then used. The EDPS has separately issued practical guidance on processing personal data when using generative AI systems.

Why Internal AI Controls Alone Are Not Enough

Many organizations already have internal AI controls. They may have an approved AI assistant, a model gateway, API policies, secure prompt templates, or review workflows around internally built tools.

Those controls are useful, but they do not automatically govern browser-based public AI usage.

An internal AI gateway controls the systems that route through it. If an employee instead opens ChatGPT, Copilot, or Gemini directly in the browser, the interaction may never pass through the enterprise's internal runtime. That means the organization may not see the prompt, the file, the data category, or the policy issue before the submission happens.

This is the core blind spot. An enterprise can have mature controls for its internal AI apps and still have weak point-of-use governance for public AI interfaces.

The distinction also matters because vendor environments are not uniform. OpenAI's help center explains that how ChatGPT conversations and files are handled depends on settings and plan type, and that content from individual (non-enterprise) services may be used for model training unless the user opts out. Google documentation similarly distinguishes between consumer-style Gemini usage and work or school accounts with enterprise-grade protections. Microsoft separately documents public Copilot file uploads and Microsoft 365 Copilot Chat file analysis.

So the enterprise problem is not just AI risk. It is environment mismatch. The company may govern internal AI well, while large volumes of real employee AI usage happen in public interfaces outside that control plane.

How Browser-Level Governance Works at a High Level

Browser-level governance is point-of-use governance. Instead of relying only on policy documents or downstream review, it applies controls close to the moment a user is about to send a prompt or upload a file.

At a high level, this can include five conceptual steps, with a minimal code sketch after the list.

1. Prompt inspection before submission. Before a prompt is sent, the governance layer can inspect the content for policy-relevant signals. That might include personal data, customer information, internal project names, financial details, legal matter references, security-sensitive strings, or other data categories defined by policy.

The purpose is not to read everything indiscriminately. The purpose is to apply enterprise policy before data leaves the governed environment, in line with organizational notice obligations, internal policy, jurisdiction, and governance design.

2. File inspection before submission. If a user attaches a file, the governance layer can evaluate the upload event before submission. That matters because files often contain more concentrated and higher-risk information than a typed prompt.

3. Policy checks. The inspected prompt or file can then be checked against enterprise rules. Some organizations may define red lines around PII, unreleased financial information, customer records, trade secrets, legal privilege, regulated data categories, or unsupported tools.

4. Masking. In some cases, the right action is not to stop the workflow entirely. It may be to transform the prompt so high-risk elements are masked or reduced before the submission proceeds.

5. Blocking and governance evidence. Where policy requires it, the governance layer can block a risky prompt or file upload before it is submitted. It can also create governance telemetry and evidence that helps the enterprise understand usage patterns, policy pressure points, and recurring risk categories. That general approach aligns with the broader NIST emphasis on post-deployment monitoring, evaluation of user inputs, incident response, and transparency around AI risk management.
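The sketch below shows how those five steps could fit together in a browser context, written in the style of an extension content script. The selectors, rule patterns, telemetry endpoint, and submit interception are all illustrative assumptions; public AI interfaces differ, and this is not a description of AgentID's implementation.

```typescript
// Conceptual sketch of an extension-style content script applying the five steps.
// Selectors, rules, and the telemetry endpoint are illustrative assumptions, not AgentID internals.

type Action = "allow" | "warn" | "mask" | "block";

interface Rule {
  name: string;
  pattern: RegExp;
  action: Action;
}

const rules: Rule[] = [
  { name: "email-address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/, action: "mask" },
  { name: "internal-project-code", pattern: /\bPROJ-\d{4}\b/, action: "warn" },
  { name: "payment-card-number", pattern: /\b(?:\d[ -]?){13,16}\b/, action: "block" },
];

// Steps 1 and 3: inspect the prompt text and evaluate it against policy rules.
function evaluate(text: string): { action: Action; matches: string[] } {
  const hits = rules.filter((r) => r.pattern.test(text));
  const matches = hits.map((r) => r.name);
  if (hits.some((r) => r.action === "block")) return { action: "block", matches };
  if (hits.some((r) => r.action === "mask")) return { action: "mask", matches };
  if (hits.some((r) => r.action === "warn")) return { action: "warn", matches };
  return { action: "allow", matches };
}

// Step 4: mask high-risk spans so the submission can proceed with less raw detail.
function mask(text: string): string {
  return rules
    .filter((r) => r.action === "mask")
    .reduce((t, r) => t.replace(new RegExp(r.pattern.source, "g"), "[REDACTED]"), text);
}

// Step 5: record governance telemetry (placeholder endpoint; categories only, not raw text).
function logDecision(action: Action, matches: string[]): void {
  void fetch("https://governance.internal.example/telemetry", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action, matches, timestamp: Date.now() }),
  });
}

// Wire the checks to the page's submit interaction (the selector is an assumption;
// step 2, file inspection, would hook the file input separately).
document.addEventListener(
  "submit",
  (event) => {
    const input = document.querySelector<HTMLTextAreaElement>("textarea");
    if (!input) return;
    const { action, matches } = evaluate(input.value);
    logDecision(action, matches);
    if (action === "block") {
      event.preventDefault();
      event.stopImmediatePropagation();
      alert("This prompt is blocked by your organization's AI policy.");
    } else if (action === "mask") {
      input.value = mask(input.value);
    }
  },
  true // capture phase, so the check runs before the page's own handlers
);
```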

What AgentID Can Help Organizations See

Visibility is often the first requirement, especially for enterprises that know Shadow AI exists but do not yet understand where it is happening or what kinds of data are involved.

In this context, visibility means governance telemetry, not blanket surveillance. The goal is to help organizations understand patterns such as which public AI tools are being used in business workflows, what categories of prompts are most common, where risky sharing behavior tends to occur, which teams or workflows may need safer alternatives, and which policy areas generate the most friction.

This matters because you cannot govern what you cannot see. Microsoft's BYOAI findings and Cisco's data on non-public information being entered into generative AI tools both point to the same operational reality: employee use is already ahead of many governance programs.

AgentID fits this need by giving enterprises a browser-level way to create governance visibility around public AI tool usage patterns that would otherwise sit outside internal AI gateways.
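One way to picture governance telemetry without blanket surveillance is an event record that captures categories and decisions rather than raw prompt text. The field names below are assumptions for illustration, not AgentID's schema.

```typescript
// Illustrative shape of a governance telemetry event; field names are assumptions,
// not AgentID's schema. The emphasis is on patterns and categories, not raw prompt text.

interface GovernanceTelemetryEvent {
  timestamp: string;                                // ISO 8601
  tool: "chatgpt" | "copilot" | "gemini" | "other"; // which public AI interface was used
  interaction: "prompt" | "file-upload";            // what kind of submission occurred
  detectedCategories: string[];                     // e.g. ["customer-identifier", "financial-figure"]
  actionTaken: "allowed" | "warned" | "masked" | "blocked";
  businessUnit?: string;                            // optional grouping for reporting, where policy permits
}

// Example event a governance team might review in aggregate.
const example: GovernanceTelemetryEvent = {
  timestamp: new Date().toISOString(),
  tool: "chatgpt",
  interaction: "file-upload",
  detectedCategories: ["customer-identifier"],
  actionTaken: "masked",
};
```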

What AgentID Can Help Mask Before Submission

Masking is appropriate when the user's underlying task is legitimate, but the raw data should not be sent as-is.

Prompt masking helps by reducing accidental exposure while preserving enough context for the AI tool to remain useful. Depending on policy and configuration, masking may be relevant for personally identifiable information, customer names or account identifiers, internal ticket numbers, employee identifiers, contract values, sensitive business terms, and selected regulated information categories.

This is often the most practical middle path. The enterprise does not have to choose only between allow everything and block everything. It can allow productive use while reducing unnecessary detail in what gets submitted.

That is especially useful in browser-based public AI tools, because employees may paste real operational content into tools that support broad prompting and rich file interaction. OpenAI, Microsoft, and Google all document mainstream file and content workflows in their AI chat products, which increases the need for pre-send data minimization.
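As a simple illustration of pre-send data minimization, masking can be expressed as category-based substitution rules applied to the prompt before it is submitted. The categories and regex patterns below are hypothetical examples, not AgentID's rule set.

```typescript
// Minimal sketch of policy-driven masking before submission.
// The categories and regex patterns are illustrative, not AgentID's actual rule set.

const maskingRules: Array<{ category: string; pattern: RegExp }> = [
  { category: "email", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { category: "ticket-id", pattern: /\bTICKET-\d+\b/g },
  { category: "contract-value", pattern: /\$\d[\d,]*(\.\d{2})?/g },
];

// Replace each matched span with a category placeholder, preserving the task context.
function maskPrompt(prompt: string): string {
  return maskingRules.reduce(
    (text, rule) => text.replace(rule.pattern, `[${rule.category.toUpperCase()}]`),
    prompt
  );
}

// Prints: "Draft a reply to [EMAIL] about [TICKET-ID]; the renewal is worth [CONTRACT-VALUE]."
console.log(
  maskPrompt("Draft a reply to jane.doe@acme.test about TICKET-88412; the renewal is worth $250,000.")
);
```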

What AgentID Can Help Block Before Submission

Blocking is appropriate when the enterprise has a clear red line.

Examples can include highly sensitive prompts, restricted legal or HR content, prohibited classes of customer data, security-sensitive material, confidential source code or internal secrets, and risky uploads into tools or environments that are not approved for that category of data.

Blocking is not the right response for every use case. But for clearly prohibited categories, it is often the only credible control. If a prompt or upload crosses a policy-defined threshold, the governance layer can stop the submission before the content is sent.

This is what makes browser-level governance meaningfully different from policy-only governance. Policy says what should happen. Blocking helps enforce what must not happen.
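Red lines translate naturally into a small, declarative policy structure: if a submission is tagged with a prohibited category for that environment, it is stopped before it is sent. The category names and structure below are illustrative assumptions, not a prescribed policy model.

```typescript
// Illustrative red-line policy definition; category names and structure are assumptions.

interface RedLine {
  category: string;
  action: "block";
  appliesTo: string[]; // environments or tool classes the rule covers
}

const redLines: RedLine[] = [
  { category: "customer-pii-export", action: "block", appliesTo: ["public-ai-chat"] },
  { category: "source-code-with-secrets", action: "block", appliesTo: ["public-ai-chat"] },
  { category: "privileged-legal-material", action: "block", appliesTo: ["public-ai-chat", "unapproved-tools"] },
];

// A submission tagged with any red-line category for its environment is stopped before sending.
function isBlocked(detectedCategories: string[], environment: string): boolean {
  return redLines.some(
    (rule) => rule.appliesTo.includes(environment) && detectedCategories.includes(rule.category)
  );
}

// Example: a prompt classified as privileged legal material in a public chat tool.
console.log(isBlocked(["privileged-legal-material"], "public-ai-chat")); // true
```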

Why File Upload Governance Matters as Much as Prompt Governance

Many AI governance discussions still focus too narrowly on typed prompts. In practice, files are often the bigger risk surface.

A file can contain hundreds of pages of internal material, structured data, embedded metadata, images, tables, source code, formulas, comments, tracked changes, and personal data. It can also represent a much larger disclosure event than a short typed prompt.

And public AI tools increasingly support file-centric workflows. OpenAI documents file uploads in ChatGPT and notes that uploaded files can remain in a Library until deleted. Microsoft documents file upload into Copilot and Microsoft 365 Copilot Chat. Google documents file uploads into Gemini, including uploads from device storage and Google Drive in supported contexts.

That is why file upload governance should be treated as a first-class requirement, not an afterthought. If the enterprise only inspects typed prompts but ignores attachments, it leaves a major hole in its Shadow AI control model.
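Below is a sketch of what pre-upload file checks could look like in the browser. The file-type list, size threshold, and detection pattern are placeholder assumptions; real policies would be defined by the enterprise, and this is not AgentID's implementation.

```typescript
// Sketch of pre-upload checks for file attachments in a browser context.
// The blocked extensions, size threshold, and SSN pattern are placeholder assumptions.

const blockedExtensions = [".sql", ".pem", ".key"]; // illustrative prohibited file types
const maxBytesWithoutReview = 5 * 1024 * 1024;      // illustrative 5 MB threshold

async function checkFile(file: File): Promise<"allow" | "block"> {
  if (blockedExtensions.some((ext) => file.name.toLowerCase().endsWith(ext))) return "block";
  if (file.size > maxBytesWithoutReview) return "block";
  // Files often carry more concentrated data than prompts, so scan text-like content too.
  if (file.type.startsWith("text/") || file.name.toLowerCase().endsWith(".csv")) {
    const text = await file.text();
    if (/\b\d{3}-\d{2}-\d{4}\b/.test(text)) return "block"; // e.g. a possible SSN pattern
  }
  return "allow";
}

// Intercept file selection so the check runs before the page uploads the attachment.
document.addEventListener(
  "change",
  async (event) => {
    const input = event.target as HTMLInputElement;
    if (!input || input.type !== "file" || !input.files) return;
    for (const file of Array.from(input.files)) {
      if ((await checkFile(file)) === "block") {
        input.value = ""; // clear the selection so the upload does not proceed
        alert(`Upload of "${file.name}" is blocked by your organization's AI policy.`);
        return;
      }
    }
  },
  true // capture phase, so this runs before the page's own handlers
);
```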

Visibility vs Masking vs Blocking

A mature Shadow AI governance program is usually risk-based, not one-size-fits-all.

| Control mode | Best when | What it does | Tradeoff |
| --- | --- | --- | --- |
| Visibility | The organization first needs to understand usage patterns and blind spots | Captures governance telemetry and evidence about public AI usage | Lowest friction, but does not stop risky sharing by itself |
| Masking | The workflow is legitimate but raw data should not be sent | Reduces or transforms sensitive content before submission | Preserves productivity, but may reduce prompt specificity |
| Blocking | The content crosses a clear policy red line | Prevents the prompt or file from being submitted | Strongest protection, but highest user friction |

Why This Matters for Security, Compliance, and AI Governance

Browser-level governance matters because it supports enterprise AI control where many real interactions actually occur.

From a security perspective, it can help reduce accidental disclosure of sensitive information into public AI interfaces.

From a privacy and compliance perspective, it can support data protection workflows by applying policy before data is submitted and by improving evidence around how public AI tools are being used. The EDPS has emphasized practical guidance for organizations processing personal data when using generative AI. Google's documentation for work accounts also shows that data handling conditions can differ materially depending on license and service context, which is exactly why enterprises need clear governance design rather than assumptions.

From an AI governance perspective, it helps close the gap between policy and behavior. NIST's Generative AI Profile explicitly calls for post-deployment monitoring plans and mechanisms for capturing and evaluating user input. Browser-layer governance is one practical way to operationalize that principle for public AI interfaces.

Why AgentID Is Different from Policy-Only or API-Only Approaches

Policy-only governance is necessary, but incomplete. It can define acceptable use, training expectations, approval models, and escalation paths. What it cannot do on its own is inspect a live prompt in a browser tab before the user clicks send.

API-only governance is also necessary, but incomplete. It is excellent for internal AI applications that route through approved enterprise systems. What it does not automatically cover is the employee who bypasses that stack and goes straight to a public AI interface.

AgentID fits this gap by combining both layers: for custom AI systems, an API control layer; for public AI interfaces, a browser-based governance layer.

That is the key distinction. AgentID is not limited to governing internal AI apps. It is built to help enterprises govern the AI they officially build and the AI employees actually use.

Who This Is For

This approach is especially relevant for:

security leaders responsible for reducing enterprise data leakage risk

compliance and governance leaders building real AI controls beyond policy documents

privacy teams evaluating how personal data may be shared into public AI tools

IT and digital workplace teams deciding whether and how to allow ChatGPT, Copilot, Gemini, or similar tools

enterprises that already allow some public AI usage but need more structure around it

organizations that need to govern both internal AI systems and public AI interfaces

It is especially useful for buyers who already know that banning public AI tools outright is rarely durable, but allowing unmanaged use is not acceptable either.

Practical Buyer Checklist

What a real Shadow AI governance layer should provide beyond policy documents:

apply controls at the point of use, not only after the fact

inspect prompts before submission

inspect file uploads before submission

support policy-based masking of sensitive content

block clearly prohibited prompts or uploads

create governance visibility into public AI usage patterns

distinguish between lower-risk and higher-risk workflows

support auditability and governance evidence

complement your internal AI gateway rather than duplicate it

be implemented in line with internal policy, notice obligations, jurisdiction, and applicable legal requirements

If the answer to any of these is no, you probably have an AI policy, but not yet full Shadow AI governance.

Frequently Asked Questions

Does AgentID only work as an AI API gateway? No. AgentID should be understood as a dual-layer governance platform. For custom AI systems, it acts as an AI API gateway and runtime control layer. For public AI interfaces, it provides a browser-based governance layer focused on Shadow AI risk.

Does AgentID also govern ChatGPT, Copilot, and Gemini? Yes, that is a core part of the positioning in this article. AgentID is intended to help enterprises govern browser-based use of public AI chat tools such as ChatGPT, Microsoft Copilot, Google Gemini, and similar interfaces.

How does AgentID help solve Shadow AI? By moving governance closer to the point of use. Instead of relying only on training or approved internal apps, AgentID can help inspect prompts and files before submission, apply policy, support masking, block prohibited activity, and create governance visibility.

Can AgentID inspect prompts before they are sent? That is the core browser-governance use case described here. Prompt inspection allows policy to be applied before content is submitted into a public AI interface.

Can AgentID mask sensitive data before submission? Yes, conceptually that is one of the most important controls in this category. Masking can help reduce accidental disclosure while still enabling legitimate AI-assisted work.

Can AgentID block risky uploads? Yes, where policy requires it. This matters because public AI tools now support rich file uploads and analysis across mainstream interfaces.

Why is browser-level governance different from API governance? Because API governance controls the systems that route through approved enterprise infrastructure. Browser-level governance covers public AI tools employees can use directly in the browser, outside that internal runtime.

Why do enterprises need both? Because enterprises often face risk in two places at once: the AI systems they build, and the public AI tools employees use on their own. NIST's governance framing supports this broader view of managing AI across real operational contexts, including post-deployment monitoring and evaluation of user inputs.

Sources / References