
Lakeflow Declarative Pipelines concepts

Learn what Lakeflow Declarative Pipelines is, the core concepts (such as pipelines, streaming tables, and materialized views) that define it, the relationships between those concepts, and the benefits of using it in your data processing workflows.

What is Lakeflow Declarative Pipelines?

Lakeflow Declarative Pipelines is a declarative framework for developing and running batch and streaming data pipelines in SQL and Python. Lakeflow Declarative Pipelines runs on the performance-optimized Databricks Runtime (DBR), and the Lakeflow Declarative Pipelines flows API uses the same DataFrame API as Apache Spark and Structured Streaming. Common use cases for Lakeflow Declarative Pipelines include:

  • Incremental data ingestion from sources such as cloud storage (including Amazon S3, Azure ADLS Gen2, and Google Cloud Storage) and message buses (such as Apache Kafka, Amazon Kinesis, Google Pub/Sub, Azure Event Hubs, and Apache Pulsar).
  • Incremental batch and streaming transformations with stateless and stateful operators.
  • Real-time stream processing between transactional stores like message buses and databases.
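As a small illustration of the ingestion use case, the following is a minimal SQL sketch of a streaming table that incrementally loads JSON files from cloud storage. The table name, bucket path, and file format are hypothetical, and the sketch assumes the read_files function is used to read the files as a stream:

    -- A streaming table that incrementally ingests newly arriving JSON files.
    -- The table name and path below are hypothetical.
    CREATE OR REFRESH STREAMING TABLE raw_orders
    AS SELECT *
    FROM STREAM read_files(
      's3://my-bucket/orders/',   -- hypothetical landing location
      format => 'json'
    );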

For more details on declarative data processing, see Procedural vs. declarative data processing in Databricks.

What are the benefits of Lakeflow Declarative Pipelines?

The declarative nature of Lakeflow Declarative Pipelines provides the following benefits compared to developing data pipelines with the Apache Spark and Spark Structured Streaming APIs, running them on the Databricks Runtime, and orchestrating them manually with Lakeflow Jobs.

  • Automatic orchestration: Lakeflow Declarative Pipelines orchestrates processing steps (called "flows") automatically to ensure the correct order of execution and the maximum level of parallelism for optimal performance. Additionally, Lakeflow Declarative Pipelines automatically and efficiently retries transient failures. The retry process begins with the most granular and cost-effective unit: the Spark task. If the task-level retry fails, Lakeflow Declarative Pipelines proceeds to retry the flow, and then finally the entire pipeline if necessary.
  • Declarative processing: Lakeflow Declarative Pipelines provides declarative functions that can reduce hundreds or even thousands of lines of manual Spark and Structured Streaming code to only a few lines. The Lakeflow Declarative Pipelines AUTO CDC API simplifies processing of Change Data Capture (CDC) events with support for both SCD Type 1 and SCD Type 2 (see the sketch after this list). It eliminates the need for manual code to handle out-of-order events, and it does not require an understanding of streaming semantics or concepts like watermarks.
  • Incremental processing: Lakeflow Declarative Pipelines provides an incremental processing engine for materialized views. To use it, you write your transformation logic with batch semantics, and the engine processes only new data and changes in the data sources whenever possible. This avoids inefficient full reprocessing when sources change and eliminates the need for hand-written incremental logic.
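As a minimal sketch of the AUTO CDC API in SQL (which supersedes the older APPLY CHANGES INTO syntax), the following assumes a hypothetical CDC feed named customers_cdc_feed with customer_id, operation, and event_ts columns:

    -- Declare the target streaming table, then attach an Auto CDC flow to it.
    -- The source feed and column names below are hypothetical.
    CREATE OR REFRESH STREAMING TABLE customers;

    CREATE FLOW customers_cdc AS AUTO CDC INTO customers
    FROM STREAM(customers_cdc_feed)
    KEYS (customer_id)
    APPLY AS DELETE WHEN operation = 'DELETE'
    SEQUENCE BY event_ts                      -- resolves out-of-order events
    COLUMNS * EXCEPT (operation, event_ts)
    STORED AS SCD TYPE 1;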

Key concepts

The diagram below illustrates the most important concepts of Lakeflow Declarative Pipelines.

[Diagram: how the core concepts of Lakeflow Declarative Pipelines relate to each other at a high level.]

Flows

A flow is the foundational data processing concept in Lakeflow Declarative Pipelines, and it supports both streaming and batch semantics. A flow reads data from a source, applies user-defined processing logic, and writes the result into a target. Lakeflow Declarative Pipelines shares the same streaming flow types (Append, Update, Complete) as Spark Structured Streaming. (Currently, only the Append flow is exposed; see the sketch after the list below.) For more details, see output modes in Structured Streaming.

Lakeflow Declarative Pipelines also provides additional flow types:

  • Auto CDC is a unique streaming flow in Lakeflow Declarative Pipelines that handles out-of-order CDC events and supports both SCD Type 1 and SCD Type 2.
  • Materialized View is a unique batch flow in Lakeflow Declarative Pipelines that processes only new data and changes in the source tables whenever possible.
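To make the Append flow concrete, here is a minimal SQL sketch (with hypothetical table and column names) of an explicitly defined flow that reads a source stream, applies some processing logic, and appends the result to a target streaming table:

    -- The target streaming table, declared without an inline query.
    CREATE OR REFRESH STREAMING TABLE clean_events;

    -- An explicit Append flow: read a source, transform, write to the target.
    CREATE FLOW clean_events_flow AS INSERT INTO clean_events BY NAME
    SELECT
      event_id,
      lower(event_type) AS event_type,   -- user-defined processing logic
      event_ts
    FROM STREAM(raw_events);             -- hypothetical source table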


Streaming tables

A streaming table is a form of Unity Catalog managed table that is also a streaming target for Lakeflow Declarative Pipelines. A streaming table can have one or more streaming flows (Append, Auto CDC) written into it. Auto CDC is a unique streaming flow that is only available to streaming tables. You can define streaming flows explicitly and separately from their target streaming table, or implicitly as part of the streaming table definition, as the following sketch shows.
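The sketch below (with hypothetical names) defines one streaming table implicitly from its query, and a second table explicitly so that two separate Append flows can write into it:

    -- Implicit: the flow is part of the streaming table definition itself.
    CREATE OR REFRESH STREAMING TABLE orders_bronze
    AS SELECT * FROM STREAM(orders_raw);           -- hypothetical source

    -- Explicit: declare the target, then attach one or more flows to it.
    CREATE OR REFRESH STREAMING TABLE orders_all_regions;

    CREATE FLOW orders_us_flow AS INSERT INTO orders_all_regions BY NAME
    SELECT * FROM STREAM(orders_us);               -- hypothetical source

    CREATE FLOW orders_eu_flow AS INSERT INTO orders_all_regions BY NAME
    SELECT * FROM STREAM(orders_eu);               -- hypothetical source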


Materialized views

A materialized view is also a form of Unity Catalog managed table and is a batch target. A materialized view can have one or more materialized view flows written into it. Materialized views differ from streaming tables in that you always define the flows implicitly as part of the materialized view definition.
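For example, the following is a minimal sketch of a materialized view whose single flow is defined implicitly by its query (the table and column names are hypothetical); where possible, the engine refreshes it incrementally instead of recomputing the full result:

    -- The SELECT implicitly defines this materialized view's flow.
    CREATE OR REFRESH MATERIALIZED VIEW daily_order_totals
    AS SELECT
      order_date,
      count(*)    AS order_count,
      sum(amount) AS total_amount
    FROM orders_bronze               -- hypothetical source table
    GROUP BY order_date;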


Sinks

A sink is a streaming target for Lakeflow Declarative Pipelines and currently supports Delta tables, Apache Kafka topics, and Azure Event Hubs topics. A sink can have one or more streaming flows (Append) written into it.


Pipelines

A pipeline is the unit of development and execution in Lakeflow Declarative Pipelines. A pipeline can contain one or more flows, streaming tables, materialized views, and sinks. You use Lakeflow Declarative Pipelines by defining flows, streaming tables, materialized views, and sinks in your pipeline source code and then running the pipeline. When the pipeline runs, Lakeflow Declarative Pipelines analyzes the dependencies between the defined objects and automatically orchestrates their order of execution and degree of parallelism.
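For example, a pipeline whose source code contains just the two definitions below (hypothetical names and path) needs no explicit ordering: Lakeflow Declarative Pipelines detects that the materialized view reads the streaming table and runs the steps in the correct order.

    -- Step 1: incrementally ingest raw files.
    CREATE OR REFRESH STREAMING TABLE events_bronze
    AS SELECT * FROM STREAM read_files('s3://my-bucket/events/', format => 'json');

    -- Step 2: aggregate the ingested data. The dependency on events_bronze is
    -- inferred from the query, so this step runs after the streaming table updates.
    CREATE OR REFRESH MATERIALIZED VIEW events_by_type
    AS SELECT event_type, count(*) AS event_count
    FROM events_bronze
    GROUP BY event_type;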


Databricks SQL for Lakeflow Declarative Pipelines

Lakeflow Declarative Pipelines provides streaming tables and materialized views as two foundational ETL capabilities in Databricks SQL. You can use standard SQL to create and refresh streaming tables and materialized views in Databricks SQL. Streaming tables and materialized views in Databricks SQL run on the same Databricks infrastructure and have the same processing semantics as they do in Lakeflow Declarative Pipelines. When you use streaming tables and materialized views in Databricks SQL, flows are defined implicitly as part of the streaming table or materialized view definition.
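For example, the following statements (with hypothetical names) can be run directly from a Databricks SQL editor; no pipeline definition is required, and refreshes can be triggered on demand or on a schedule:

    -- Create a materialized view with standard SQL.
    CREATE OR REFRESH MATERIALIZED VIEW clicks_per_day
    AS SELECT date(click_ts) AS click_date, count(*) AS clicks
    FROM web_clicks_bronze           -- hypothetical source table
    GROUP BY date(click_ts);

    -- Refresh the materialized view on demand.
    REFRESH MATERIALIZED VIEW clicks_per_day;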

