AI is changing how physical security systems work, and how they work together. As this shift accelerates, the industry is starting to see that AI interoperability isn’t just about moving data anymore, but about agreeing on what that data actually means. To explore this, ONVIF has formed the AI Working Group. We spoke with Peter Damm of Milestone Systems, chair of the group, about why this work matters and who should be involved.

Q: ONVIF has always focused on protocol and connectivity standards. Why expand into AI?
Peter Damm: Connectivity alone isn’t enough anymore. As an industry, we’ve become very good at moving data between systems. But exchanging data and genuinely understanding what another system is communicating are two very different things. And once you bring AI into the picture, that gap really starts to show.
AI-enabled systems rely on data that is clear, structured, and consistent across system boundaries — whether you’re talking about natural-language investigation, cross-system correlation, or more autonomous workflows. If the metadata is vague, labeled differently from one system to the next, or missing context, the outcome becomes harder to trust and even harder to explain.
Within the AI Working Group, members have a space to explore these new challenges together and figure out what interoperability needs to look like next. The goal of ONVIF here is not to define AI models or pick algorithms. What we are focused on is making sure AI-based systems can work together in a way that is safe, transparent, and predictable. That means establishing shared, testable, machine-readable meaning — things like common semantic models, clear provenance and trust attributes, and interfaces that let systems expose information in ways AI can reason over, and humans can still understand.
Q: What does this mean in practice for the industry?
Peter Damm: We’re clearly moving toward what you might call multi-system intelligence. Video, access control, alarms, sensors, and building systems are no longer isolated. They’re increasingly feeding into a shared operational environment. But to make that work, transport-level interoperability isn’t enough.
What’s really needed is a shared language, something that different systems, and increasingly AI-driven systems, can reason over in a consistent way, regardless of vendor. Without that foundation, things start to break down in subtle but important ways. Events get correlated incorrectly, and context drops out. Decisions become difficult to explain or validate. In safety and security, those aren’t small problems.
Some of the use cases pushing this forward include natural-language search and investigation, AI agent-based workflows, and cross-domain context fusion where decisions need to be backed by evidence. A big part of the work in the AI Working Group is to model three core concepts: observation (what was sensed), inference (what was concluded), and action (what was triggered). Today, those ideas are often flattened into generic metadata, which causes confusion downstream.
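To make the distinction concrete, here is a minimal sketch of how those three concepts might be kept as separate, explicitly typed records rather than flattened into one generic metadata blob. The class and field names are illustrative assumptions, not part of any ONVIF specification:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical records for the three concepts; names and fields
# are illustrative only, not drawn from an ONVIF spec.

@dataclass
class Observation:            # what was sensed
    sensor_id: str
    timestamp: datetime
    label: str                # e.g. "person_detected"

@dataclass
class Inference:              # what was concluded
    derived_from: Observation
    conclusion: str           # e.g. "tailgating_suspected"
    confidence: float         # 0.0 to 1.0

@dataclass
class Action:                 # what was triggered
    triggered_by: Inference
    action: str               # e.g. "operator_alert"

obs = Observation("cam-07", datetime.now(timezone.utc), "person_detected")
inf = Inference(obs, "tailgating_suspected", 0.82)
act = Action(inf, "operator_alert")

# Because each step keeps a typed link to the one before it, an
# action can always be traced back to the original observation.
print(act.triggered_by.derived_from.sensor_id)
```

The point of the sketch is the explicit chain: a downstream system can distinguish raw sensor data from an AI conclusion from a triggered response, which is exactly what generic, flattened metadata loses.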
Provenance is another key piece. Important assertions should carry information about where they came from, when they were made, and, where appropriate, how confident the system is. That’s essential for building systems that are not just interoperable, but also trustworthy and explainable.
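As a hedged illustration of that idea, a provenance record might carry exactly those three attributes (source, time of assertion, and optional confidence), letting a consuming system decide whether an assertion is trustworthy enough to act on. The schema and the `trustworthy` check below are assumptions for illustration, not a published ONVIF model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical provenance attributes attached to an assertion;
# the field names are illustrative, not from an ONVIF spec.

@dataclass
class Provenance:
    source: str                  # where the assertion came from
    asserted_at: datetime        # when it was made
    confidence: Optional[float]  # how confident, where appropriate

@dataclass
class Assertion:
    statement: str
    provenance: Provenance

def trustworthy(a: Assertion, min_confidence: float) -> bool:
    """Illustrative downstream check: act only on assertions whose
    provenance carries a confidence at or above the threshold."""
    c = a.provenance.confidence
    return c is not None and c >= min_confidence

match = Assertion(
    "vehicle matched watchlist entry",
    Provenance("lpr-gateway-2", datetime.now(timezone.utc), 0.91),
)
```

With provenance in place, a consuming system can explain a decision ("this alert fired because source lpr-gateway-2 asserted a match at 0.91 confidence") instead of treating every incoming event as equally certain.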
This isn’t just theoretical work. Conformance has always been central to ONVIF. The long-term aim is interoperability that can be profiled, tested, and validated in the same structured way ONVIF has always done it.
Q: What kinds of companies are you looking for to help shape this work?
Peter Damm: We’re really interested in companies that are already running into these challenges in the real world. Organizations that are working across multiple systems and domains and are feeling the limits of today’s fragmented, vendor-specific, and often ambiguous metadata.
If your product depends on combining information from different systems, this work will probably resonate. The same goes if you’re exploring AI in ways that need to be reliable and explainable — not just clever demos, but systems you can actually trust and deploy at scale.
In practical terms, that includes several types of companies:
- Physical security platform providers — such as VMS, PSIM, cloud platforms, and investigation tools that produce, consume, or correlate AI-based metadata and depend on reliable, machine-understandable context
- Device, sensor, and edge-AI vendors that generate observations and need consistent ways to express confidence, uncertainty, provenance, and source trustworthiness
- Multi-system orchestration and smart city platform providers integrating video with access control, alarms, IoT, and building systems, and experiencing semantic gaps between those domains
- AI-native physical security companies building agent-based or reasoning-centric platforms that depend on structured, semantically explicit data rather than flat or proprietary metadata
- Standards, metadata, and trust specialists working on related frameworks around semantic modeling, interoperability, provenance, and explainability across system boundaries
ONVIF’s AI Working Group is actively welcoming new member companies to help shape this work. Full and Contributing ONVIF members with experience in AI-enabled workflows, multi-system integration, or metadata standardization are encouraged to participate. To learn more, please contact help.onvif.org.
Not an ONVIF member yet? Visit ONVIF.org to explore membership options and learn how to get involved.
And for those who want to follow the work as it develops, you can sign up for the ONVIF newsletter, which provides updates on the AI Working Group and ONVIF’s broader interoperability initiatives in the physical security market.