Inside one of the first production deployments of Lakebase: LangGuard's agentic workflow governance engine | Databricks Blog

Partners · April 27, 2026

Inside one of the first production deployments of Lakebase: LangGuard's agentic workflow governance engine

Enterprise agentic workflows span dozens of agents, hundreds of tools, and 15+ systems of record. Controlling and operating them in real time requires infrastructure that didn't exist until Lakebase.

by Venkat Raghavan, Jason Keirstead, Ravi Srinivasan, Nina Williams and Amelia Westberg

Summary

  • Fewer than 10% of enterprises have successfully deployed autonomous AI agents at scale, primarily because agents bypass traditional security controls by generating their own logic at runtime, creating an invisible governance gap.
  • Databricks provides unified governance for data, models, and access policies through Unity Catalog and AI Gateway. LangGuard extends these platform-level controls with a runtime enforcement layer for agentic workflows—monitoring and enforcing policy across the end-to-end chain of actions, decisions, tools, and credentials. It uses a patent-pending GRAIL™ data fabric that captures every agent action into a live knowledge graph and evaluates every policy decision in real time, without impacting agent performance.
  • Databricks Lakebase, the industry's first fully managed, serverless Postgres database built on the lakehouse, is what makes this possible, providing elastic scale-to-zero compute, low-latency query execution for hot operational data, and instant database branching for safe governance policy testing.

The invisible problem with agentic AI

Most enterprises are experimenting with autonomous AI agents. Very few are deploying them safely at scale. According to McKinsey's "The State of AI in 2025" survey (November 2025), in no business function have more than ten percent of companies scaled AI agents into production. The failure is rarely a lack of ambition; it is a lack of visibility.

Unlike traditional software, autonomous agents generate their own logic on the fly. They bypass conventional security monitors, invoke tools and access data in ways that are difficult to audit after the fact, and operate across complex multi-agent workflows where a single misconfigured permission or policy gap can cascade into a significant security incident. What enterprises need is a new category of control infrastructure: one that operates at the moment a decision is being made, not after the damage is done.

That is the problem LangGuard was built to solve.

Runtime enforcement meets platform governance

LangGuard acts as a runtime enforcement layer for agentic workflows, monitoring and enforcing policy across the end-to-end chain of actions, decisions, tools, credentials, and intent that spans every system an agent touches. Databricks provides unified governance through Unity Catalog and AI Gateway—the system of record for data, models, and access policies. As enterprises deploy agents into production, the workflow itself also needs a runtime enforcement layer that extends those platform-level controls into every step of agent execution. That is where LangGuard fits in. LangGuard's governance engine, the GRAIL™ (Governance AI Run-time Links) data fabric, captures every agent action as multidimensional trace data and constructs a live knowledge graph of workflow behavior and context. When an agent attempts to invoke a tool, access a dataset, or call a model, LangGuard evaluates that action against policy before it executes, across every system the workflow touches, regardless of where it runs.
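The before-execution policy check described above can be sketched as a minimal, first-match-wins evaluator. The rule shape, field names, and default-deny fallback below are illustrative assumptions for the sketch, not LangGuard's actual engine:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    MODIFY = "modify"

@dataclass
class AgentAction:
    agent_id: str
    tool: str
    resource: str

def evaluate(action: AgentAction, policies: list[dict]) -> Verdict:
    """Evaluate an agent action against ordered policy rules *before*
    the action executes. First matching rule wins; anything unmatched
    falls through to default-deny."""
    for rule in policies:
        if rule["tool"] == action.tool and rule["resource"] == action.resource:
            return Verdict(rule["effect"])
    return Verdict.DENY

policies = [
    {"tool": "sql_query", "resource": "crm.accounts", "effect": "allow"},
    {"tool": "http_post", "resource": "external_api", "effect": "deny"},
]

print(evaluate(AgentAction("agent-7", "sql_query", "crm.accounts"), policies))
# Verdict.ALLOW
```

The key property mirrored from the text is the ordering: the verdict is computed from live context and returned to the orchestrator before the tool call runs, so a deny never reaches the target system.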

The scale of a production enterprise agentic deployment makes this genuinely hard. A single workflow may involve tens of coordinated agents, hundreds of tool invocations, multiple foundation models, and policies managed across fifteen or more enterprise Systems of Record, including IT ticketing systems like ServiceNow, IAM and IDP platforms, CRM systems like Salesforce, HR platforms like Workday, cloud security platforms like Wiz and CrowdStrike, contact center platforms like TalkDesk, MCP Gateways, and API Gateways. Governing this in real time, without impacting agent performance, demands infrastructure purpose-built for the problem.

Why we chose Lakebase

The LangGuard team spent years building IBM QRadar, a multiple-time Gartner Magic Quadrant leader and one of the world's most widely deployed enterprise SIEM platforms. QRadar ingests and correlates petabytes of security telemetry per day under strict latency and reliability requirements. That experience taught us a hard lesson: database architecture is destiny. When we designed LangGuard's workflow governance engine, we faced the same challenge we had solved before: operational security data that arrives in unpredictable, high-intensity bursts, where every millisecond of decision latency matters and idle infrastructure spend is unacceptable. Traditional databases that couple compute and storage force you to provision for peak load and pay for that capacity around the clock. Lakebase's serverless model, which fully decouples compute from storage and scales to zero between bursts, was the answer we had always needed but didn't have access to when we were building QRadar. It matched the problem exactly.

What makes Lakebase the right fit

Lakebase is a new category of operational database architecture that disaggregates compute from storage, allowing compute to scale elastically with workload demand while durable state lives independently in a replicated storage layer. Built on the open foundation of PostgreSQL, the lakebase architecture preserves everything developers rely on in a proven relational database while eliminating the infrastructure constraints that make traditional, monolithic RDBMS the wrong choice for the speed and scale that modern apps, agents, and AI demand.

Serverless autoscaling and scale-to-zero

Agent behavior is notoriously bursty. An agent workflow might be completely dormant for hours and then suddenly generate hundreds of trace writes and enforcement reads in a matter of seconds. Lakebase dynamically provisions compute resources the exact moment those traces flood our system, and shuts down completely when activity stops. Because durable state lives in the storage layer, not in the compute node, spinning up a new compute instance requires no data movement. It simply attaches to the existing database history and begins serving queries immediately.

For a startup operating at enterprise scale, this is the difference between infrastructure that matches actual usage and infrastructure that penalizes you for having quiet periods. Our operational costs stay perfectly aligned with the workloads we are actually serving.
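One practical consequence of scale-to-zero is that the first request after an idle period may arrive while compute is resuming. A hedged sketch of absorbing that cold-start window with exponential backoff; the retry parameters are arbitrary, and the connection callable is injected so the sketch is self-contained (in production it would wrap a real Postgres connect such as `psycopg.connect(dsn)`):

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=0.2):
    """Retry a connection callable with exponential backoff.

    A scale-to-zero database may need a moment to attach compute after
    an idle period; backing off and retrying absorbs that window
    instead of surfacing a transient failure to the caller.
    """
    for i in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if i == attempts - 1:
                raise                     # out of attempts: re-raise
            time.sleep(base_delay * (2 ** i))
```

Usage would be `conn = connect_with_retry(lambda: psycopg.connect(dsn))`, with `dsn` being a hypothetical connection string for the database instance.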

Millisecond read latency for hot operational data

The natural concern with any disaggregated database is read latency. Lakebase addresses this through a caching layer between compute and storage that keeps hot data close to compute.

For LangGuard’s enforcement queries (tight indexed lookups against GRAIL™ context and policy tables), we expect the active working set to fit comfortably in compute-local memory. This architecture gives us confidence that governance decisions can be enforced at workflow speed, without adding meaningful latency to agent execution.
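The shape of such an enforcement lookup is a point query against a composite index. The sketch below uses `sqlite3` purely so it is self-contained and runnable; the table, columns, and index are hypothetical stand-ins for policy tables that would live in Postgres:

```python
import sqlite3

# Illustrative only: sqlite3 stands in for Postgres so the sketch is
# self-contained. Table and column names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE policy (agent_id TEXT, tool TEXT, effect TEXT)")
# Composite index so the enforcement query is a single indexed point lookup.
con.execute("CREATE INDEX idx_policy_lookup ON policy (agent_id, tool)")
con.execute("INSERT INTO policy VALUES ('agent-7', 'sql_query', 'allow')")

def lookup(agent_id: str, tool: str) -> str:
    row = con.execute(
        "SELECT effect FROM policy WHERE agent_id = ? AND tool = ?",
        (agent_id, tool),
    ).fetchone()
    return row[0] if row else "deny"   # default-deny on no match

print(lookup("agent-7", "sql_query"))  # allow
```

Because the hot policy rows are small and the query touches one index entry, this is the kind of read that stays in compute-local cache rather than reaching back to the storage layer.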

Instant database branching for governance policy testing

Lakebase's instant database branching is one of its most operationally valuable capabilities for a governance product. When we create a branch, no data is physically copied. The branch diverges from the current database state using copy-on-write semantics, consuming storage only for new or modified data. Our developers can create an isolated, exact replica of our production trace data in seconds, test new governance policies against real-world agent behavior, and validate enforcement logic without risking the stability of the live environment.
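The copy-on-write semantics described above can be modeled in a few lines: reads fall through to the parent, writes land only in the branch overlay, so creating a branch copies no data. This is an illustrative model of the behavior, not Lakebase's actual storage engine:

```python
class Branch:
    """Copy-on-write view over a parent key-value store.

    Creating a branch stores nothing; reads fall through to the parent
    until a key is written, and writes consume storage only for new or
    modified entries. (A toy model of branching semantics.)
    """
    def __init__(self, parent: dict):
        self.parent = parent
        self.overlay = {}          # only diverged data lives here

    def get(self, key):
        return self.overlay.get(key, self.parent.get(key))

    def put(self, key, value):
        self.overlay[key] = value  # parent is never mutated

prod = {"policy:tool_http": "allow"}
branch = Branch(prod)
branch.put("policy:tool_http", "deny")       # try a stricter policy
assert branch.get("policy:tool_http") == "deny"
assert prod["policy:tool_http"] == "allow"   # production untouched
```

This is why a branch of a large production trace database appears in seconds: the branch is a divergence point plus an overlay, and the live environment is never at risk from experiments run against it.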

PostgreSQL: a proven foundation

Lakebase is built on PostgreSQL, the world's most advanced open-source relational database, with decades of production hardening across every industry. For LangGuard, this means full compatibility with the tools, libraries, and extensions our team already knows, with no proprietary query language or migration risk.

[Image: PostgreSQL]

How LangGuard and Databricks Work Together

The joint LangGuard and Databricks architecture is designed to govern enterprise agentic workflows end-to-end while keeping all operational data on a single, trusted data and AI platform. On the left of the architecture are the **enterprise agentic workflows** themselves: AI agents and their orchestrators interacting with dozens of systems of record such as IT service management, CRM, HR, identity, security, contact center, and API/MCP gateways. Each agent action, tool invocation, and data access request generates rich trace events that flow into LangGuard in real time.

At the center of the diagram is the **LangGuard Governance Workflow Engine**, powered by the patent-pending **GRAIL™ data fabric**. GRAIL captures every agent action as multidimensional trace data and constructs a live knowledge graph of workflow behavior and context. When an agent attempts to call a tool, access a dataset, or invoke a model, LangGuard performs a policy evaluation against this live context and the relevant governance rules, returning an allow/deny/modify decision before the action executes. This gives enterprises a single control point for enforcing policy across every system the workflow touches, regardless of where the underlying agents are running.

On the right, **Databricks Lakebase** serves as the operational system of record for LangGuard’s trace and policy data. Lakebase’s serverless, PostgreSQL architecture disaggregates compute from storage, enabling elastic autoscaling and scale-to-zero between bursts of agent activity while keeping hot operational data in a low-latency cache near compute. LangGuard continuously writes trace events into Lakebase and performs low-latency reads for governance policy lookups and contextual queries, ensuring that enforcement decisions can be made at workflow speed without over-provisioning database capacity.

Because LangGuard’s operational data lives natively in Lakebase, it is immediately available to the broader **Databricks Data Intelligence Platform** for analytics and AI without additional ETL. Databricks AI, Model Serving, and MLflow can train and deploy anomaly detection models directly on GRAIL trace data to identify agents that deviate from their established behavioral baseline. These predictive signals feed back into the LangGuard Governance Engine, closing the loop between real-time enforcement and predictive monitoring and enabling enterprises to move from reactive controls to proactive, behavior-based AI governance on a single platform.

What comes next: predictive governance for agentic workflows

LangGuard's engine today enforces established policies at runtime across the full workflow. The next evolution is predictive: training behavioral models on historical GRAIL trace data to detect anomalous agent behavior before it manifests as a policy violation.

Because our operational trace data already lives within the Databricks ecosystem, as described above, we can move directly from enforcement to prediction without building separate ETL pipelines or standing up a second analytical platform.

If an agent starts acting erratically or deviating from its established baseline, those models will flag it as an anomaly before any damage is done. This convergence of real-time enforcement and predictive machine learning is the future of enterprise AI governance, and it is the architecture we are building today.
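As a toy stand-in for such behavioral models, a z-score against a per-agent baseline shows the shape of the idea. The feature, threshold, and values here are hypothetical; production models would be trained on GRAIL trace data with Databricks ML tooling rather than a single statistic:

```python
from statistics import mean, stdev

def anomaly_score(history: list[float], current: float) -> float:
    """Z-score of the current value against a behavioral baseline.

    history: past observations of one feature for one agent
    current: the newest observation to evaluate
    """
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) / sigma if sigma else 0.0

# Hypothetical feature: tool invocations per minute for one agent.
baseline = [4, 5, 6, 5, 4, 5, 6, 5]
score = anomaly_score(baseline, 42)
print(score > 3)  # True: flag as anomalous before a policy violation occurs
```

The governance loop the text describes is the interesting part: a flag like this feeds back into the enforcement engine, which can tighten policy for the deviating agent before any rule is actually broken.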

| KEY TAKEAWAY |
| --- |
| LangGuard is one of the first startups building production infrastructure on Databricks Lakebase. The choice was driven by a specific set of non-negotiable requirements: low-latency enforcement, elastic burst handling, and governance policy testing against real data. Only a serverless OLTP database could satisfy them all, and Lakebase is the first database to do so. |
| For enterprises that need to govern agentic workflows end-to-end, across every agent, tool, credential, and system of record in the chain, this architecture means enforcement that operates at workflow speed, scales with deployment complexity, and evolves toward predictive behavioral security without requiring a separate data platform. |

Ready to govern your agentic workflows end-to-end? Visit langguard.ai to learn how LangGuard secures, controls, and operates enterprise agentic workflows with full policy compliance, or explore Databricks Lakebase to see how serverless OLTP infrastructure powers real-time AI governance at scale.


© Databricks 2026. All rights reserved. Apache, Apache Spark, Spark, the Spark Logo, Apache Iceberg, Iceberg, and the Apache Iceberg logo are trademarks of the Apache Software Foundation.
