TRACK B

AI Security Discussions

Technology review

Autonomous AI Agents for End-to-End SOC Operations: eliminating alert fatigue and automating the full triage-to-resolution lifecycle

Simbian

Ambuj Kumar and Anton Chuvakin discuss AI SOC, alert fatigue, and tribal knowledge, exploring how AI agents can automate triage, investigations, and MDR workflows while preserving human judgment.

Analyst Briefing

Ambuj Kumar, Co-founder and CEO of Simbian, joins Google Industry Analyst Anton Chuvakin to discuss AI SOC, alert fatigue, and tribal knowledge — exploring how AI agents can automate triage and fundamentally change security operations.
Ambuj Kumar
Co-founder and CEO
Simbian
Anton Chuvakin
Industry Analyst
Google
Technology review

Benchmarking Continuous AI Risk Detection & LLM‑Guardrail Remediation

Enkrypt AI

Powered by the world's most advanced AI threat database, Enkrypt combines insights from GenAI applications, open-source data, and dedicated ML research. The platform detects threats, removes vulnerabilities, and monitors performance for continuous insights.

Executive Overview

A discussion of Enkrypt AI’s approach to enterprise agentic AI security, covering policy-to-rule enforcement, adversarial testing, runtime guardrails, and how Enkrypt builds trust infrastructure for autonomous AI.
Sahil Agarwal
CEO
Enkrypt AI
Platform Demo

A tour of Enkrypt AI’s policy engine, endpoint management, automated red-teaming, and runtime guardrails, showing how the platform secures and governs generative AI systems.
Technology review

Evaluating AI‑Native Terminals for Technical Ops: Security & Compliance

Kindo

How to bring agentic execution to security, DevOps, and IT via one platform where AI analyzes context, takes action across systems, and verifies outcomes in production.

Executive Overview

An overview of how Kindo brings agentic execution to security, DevOps, and IT operations — enabling AI to analyze context, take action across systems, and verify outcomes in production.
Technology review

Assessing “Always‑On” AI Security: Model Vetting, Red‑Teaming & Runtime Monitoring

Protect AI

Each product in the Protect AI suite is backed by 17k+ security researchers from the huntr community and a partnership with Hugging Face; this first- and third-party threat research feeds our products so teams can stay ahead of attackers.

Executive Overview

Ian Swanson explains how Protect AI enables you to implement AI-SPM capabilities to see, know, and manage security risks and defend against unique AI security threats, end to end.
Richard Stiennon
Research Analyst
IT-Harvest
Ian Swanson
CEO and Founder
Protect AI
Platform Demo

Chris tours the Protect AI platform with a focus on the Guardian component, the core capability of the most comprehensive platform for securing your AI, and shows how you can implement AI-SPM capabilities to see, know, and manage security risks and defend against unique AI security threats.
Technology review

Testing Agentic Security for Agentic AI Applications

Straiker

The hallmarks of Straiker’s technology are agentic-native detection models with minimal false positives; sub-second guardrail and detection performance designed for real production workloads; enterprise-grade privacy with isolated data paths; and adaptive guardrails that continuously improve without human tuning.

Analyst Briefing

A discussion of Straiker’s approach to agentic AI security, including prompt injection, data leakage, tool manipulation, red-teaming, and guardrails, and how enterprises should prioritize AI security.
Ankur Shah
Co-Founder & CEO
Straiker
Anton Chuvakin
Industry Analyst
Google
Platform Demo

A full walkthrough of Straiker’s Ascend and Defend AI, showing automated red-teaming, runtime guardrails, and threat detection across RAG and agentic AI applications.
Technology review

Comparing Unified AI Security Platforms for LLMs, RAG & AI Agents

Noma Security

Securing the entire AI ecosystem requires full visibility and control over LLMs, RAG systems, and autonomous agents. Without this comprehensive security and governance, you cannot ensure your organization is safe, compliant, and ready for the AI-driven future.

The Role of AI at Noma

A walkthrough of Noma Security's unified AI security platform, covering LLM security, RAG protection, autonomous agent governance, and how enterprises can maintain control and compliance across their entire AI ecosystem.
Technology review

Measuring Real‑Time Visibility & Behavior‑Based Governance for Every Model & Agent

Witness AI

Govern human and AI agent workforces with network-wide visibility and behavior-based controls. Protect models and applications with runtime defense, enabling innovation with an enterprise-first, private-instance architecture.

Executive Overview

WitnessAI is building the guardrails that make AI safe, productive, and usable. Our platform allows enterprises to innovate and enjoy the power of generative AI, without losing control, privacy, or security.
Trevor Welsh
VP of Products
Witness AI
Platform Demo

See how WitnessAI enables you to observe, control, and protect all aspects of AI usage in your environment.
Trevor Welsh
VP of Products
Witness AI
Technology review

Profiling Agentless SaaS Controls to Minimize Promptware & Anomalous AI Behavior

Zenity

Zenity provides protection across the entire agent ecosystem and organizations’ modern environments, covering misconfigurations, tool usage, triggers, and runtime behavior, to give security teams a unified, intent-aware view of agent activity. Our dynamic graph stitches together build-time and runtime data, revealing how individual issues compound into real risk.

Analyst Briefing

Michael Bargury
Co-Founder & CTO
Zenity
Anton Chuvakin
Industry Analyst
Google
Technology review

AI-Powered Data Classification: automating sensitive data discovery and protection at enterprise scale

Kriptos

Kriptos uses artificial intelligence to automatically classify, tag, and protect sensitive data across your organization — eliminating the manual effort of data labeling and ensuring consistent policy enforcement regardless of where data lives.

Executive Interview

Daniel Molina, EVP Global Sales at Kriptos, explains how AI-powered data classification automatically identifies, labels, and protects sensitive data across your organization — ensuring consistent policy enforcement wherever data lives.
Technology review

Natural Language Security Analytics: querying your security data the way you think about it

Aiquery

Aiquery enables security teams to query and analyze their data using natural language — eliminating the barrier between analyst intent and actionable insight without requiring deep knowledge of query languages or data schemas.

Executive Interview

Nicholas Comeau talks about how AI is used by Aiquery — enabling security teams to query and analyze their data using natural language, eliminating the barrier between analyst intent and actionable insight.
Technology review

AI Security Posture Management: governing and securing your AI applications from development to production

Singulr AI

Singulr AI provides comprehensive AI security posture management — discovering AI assets, detecting misconfigurations and data risks, and enforcing governance policies across your entire AI application landscape.

Executive Interview

Shiv Agarwal, CEO & Co-Founder of Singulr AI, discusses how comprehensive AI security posture management discovers AI assets, detects misconfigurations and data risks, and enforces governance policies across the enterprise.
Technology review

AI Application Security Testing: automated red teaming and vulnerability assessment for LLM-powered systems

Bonfy.ai

Bonfy.ai automates security testing for AI-powered applications — identifying prompt injection, jailbreaks, data leakage, and model manipulation risks before they reach production through continuous red teaming and assessment.

Executive Interview

Gidi Cohen, CEO and Co-Founder of Bonfy.ai, explains rising AI-driven data risks, why legacy tools lack context and accuracy, and how Bonfy.ai uses entity-aware analysis to secure data across AI flows and the full data lifecycle.
Technology review

Extended Security Visibility: unified threat detection across cloud, endpoint, and network attack surfaces

WideField

WideField provides unified security visibility that correlates signals across cloud infrastructure, endpoints, and network traffic — giving security teams the broad situational awareness needed to detect and respond to modern multi-stage attacks.

Executive Interview

Abhay Kulkarni, CEO & Co-Founder of WideField, explains how unified security visibility correlates signals across cloud infrastructure, endpoints, and network traffic to detect and respond to modern multi-stage attacks.
Technology review

AI Model Security: detecting trojans, backdoors, and adversarial vulnerabilities in machine learning systems

TrojAI

TrojAI specializes in AI model security — detecting hidden backdoors, trojan attacks, and adversarial vulnerabilities embedded in machine learning models before they can be exploited in production systems.

Executive Interview

Lee explains how adversarial AI risk emerged, why generative AI accelerated the threat, and how enterprises can assess and protect models and agents at scale.
Technology review

Continuous AI Red Teaming: automated adversarial testing throughout the AI development lifecycle

Mindgard

Mindgard delivers continuous AI red teaming that automatically tests models and AI-powered applications against the full spectrum of adversarial attacks — from prompt injection and jailbreaks to model extraction and data poisoning.

Executive Interview

Peter Garraghan, Founder & Chief Science Officer of Mindgard, discusses how continuous AI red teaming automatically tests models and AI-powered applications against adversarial attacks including prompt injection, jailbreaks, and data poisoning.