AI Moved Into Your Collaboration Stack. Does Your Governance Know That?
Part of my job is helping our customers and our team understand how AI actually works inside Mattermost — where it runs, what data it touches, what they control, what we do. One question keeps coming up, and most security programs haven’t had time to work through it yet: what about the AI that already shipped inside the collaboration platforms you bought before you had an AI policy?
Over the last 18 months, the major collaboration vendors have been bolting AI into their products on every front: summaries of long threads, smart search across files and channels, automated routing of incoming requests, real-time translation, bots that listen for sensitive content or sales objections, and assistants that draft replies and schedule meetings.
Most of it ships under feature names that don’t prominently mention “AI” with broad permissions to read across channels, files, and direct messages — sometimes including the ones marked private. And most concerningly, it runs on infrastructure the customer doesn’t control and often uses your data in their training.
If your AI governance covers procured AI tools and applications your engineers build internally, that’s a good start, but it leaves out vendor-embedded AI in your collaboration platform. That category tends to be invisible to governance programs because it didn’t arrive as a discrete tool to evaluate. It arrived as a feature update inside something you already approved and deployed.
By the time the governance team looks up, the AI is doing real work, and a copy of every sensitive thread, file, and channel it can reach has been read by a model running on infrastructure outside your boundary.
Five questions to ask your collaboration platform vendor
If you’re responsible for AI governance and your environment includes a collaboration platform, here are five questions worth putting in front of the vendor — in writing, with a name attached to the answer.
1. Where does the AI run, and who controls the infrastructure?
The answer needs to be specific. “In our cloud” isn’t specific. The question is whether you can pin the AI to infrastructure you own, in a jurisdiction you accept, on hardware you can audit. For a regulated or sovereign environment, “trust us” isn’t an answer.
2. What data can the AI access by default, and how do you constrain that?
Most vendor-embedded AI has read access to a wide surface — channels, files, DMs, attachments, calendar context. Ask which permissions are on by default, which can be turned off, and which are wired in and can’t be scoped down at all. The third list is the one most governance programs forget to ask about.
3. Whose policies govern what the AI does — yours or the vendor’s?
This is where most AI governance programs hit a wall. If the model’s behavior, training feedback loop, and boundaries are all defined by the vendor’s product roadmap, your security team isn’t governing anything. It’s using something it doesn’t control.
4. What does the AI log, and can you audit those logs yourself?
Every AI feature creates a record — prompts, responses, details from guardrails, and potentially intermediate steps. The question is whether you can see that record at the same fidelity the vendor can. If the audit trail lives on the vendor’s side and you only get a redacted summary, you have a compliance problem the first time a regulator or investigator asks for evidence.
5. What happens when someone weaponizes the AI feature as an attack surface?
There is a real example here. In 2024, security researchers documented an attack against Slack AI where a malicious instruction posted in a public channel got the assistant to pull data from private channels the attacker couldn’t reach directly. MITRE catalogued the technique as ATLAS case study AML.CS0035. The attack surface wasn’t a misconfigured network or an unpatched server — it was the AI feature embedded into the product itself. Ask your vendor what their response to that scenario would look like, and whether they’ve actually run that drill.
Why this gap matters
Walk a security leader through this list and most can’t answer all five cleanly. AI features have moved into deployed infrastructure faster than most governance frameworks are aware of, let alone kept ahead of. Gartner is forecasting that about half of enterprise incident response effort by 2028 will involve AI-driven applications. Deloitte found that only about one in five organizations has a mature governance model for autonomous AI agents.
Even if the percentages are off, the direction is clear: the tools security teams use to coordinate during incidents become critical security infrastructure themselves. If your incident war room runs on a collaboration platform whose AI behavior you can’t fully audit, that gap surfaces at the worst possible moment. Collaboration is the surface where most of that AI currently sits invisible to security teams.
It’s part of why the Gartner SRM 2026 keynote is built around AI governance and agentic identity. AI as a whole and agentic identity and access management has to be folded into normal security governance now before we suffer catastrophic results.
What Mattermost ships today, and what we do not
Mattermost runs AI inside the customer’s collaboration environment by design. We shipped web search through our agent in the first quarter, agents on mobile right after. Both are built so the customer governs the AI. Channel auto-translation runs the same way today. Customer-controlled AI infrastructure means sovereign AI deployment in regulated and air-gapped environments is supported now, not sitting on a roadmap.
Mattermost v11.7 ships later this week and makes that posture concrete:
- Multiplayer Tool Calling lets a user invoke an agent inside a channel or group message under their own permissions — the agent doesn’t get its own privileged surface to read from.
- The Tool Policy Editor gives admins granular control over which agent tools require approval, scoped by context (DM versus public channel) or by tool, with the ability to disable tools entirely.
- Agents Token Accounting exposes per-user and per-access token usage through Grafana, so the audit trail for what the AI did lives on your side, under your control.
- Agent Creation lets end users build, configure, and share their own agents through a permissioned UI — admins set custom instructions, scope which MCP tools each agent can call, configure access, and designate other admins.
- Agents Enhanced Search uses retrieval-augmented generation across the workspace while respecting the same access controls the rest of the platform enforces, so the AI sees what the user is allowed to see.
The full picture — every behavior logged at customer-controlled fidelity, every agent constrained by the same ABAC policies that already govern channels and files, end-to-end policy authority sitting with the customer’s security team — is what we’re building toward. Some of it ships today; some of it is on the roadmap; but it’s all critical.
If you’re at Gartner SRM 2026 in National Harbor, book a thirty-minute meeting at Booth 303. Bring the five questions and your current vendors’ answers if you have them. We’ll work through what’s running in your environment, who governs it, and where the gaps actually are. You’ll leave with something concrete to take back to whoever owns AI governance on your team.