databricks-pipelines
Official. Build robust data pipelines on Databricks.
Author: databricks
Version: 1.0.0
Installs: 0
System Documentation
What problem does it solve?
This Skill simplifies the development of complex batch and streaming data pipelines on Databricks using Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables), so you can build efficient, reliable data processing with less boilerplate.
Core Features & Use Cases
- Declarative Pipeline Development: Define data pipelines using Python or SQL with a focus on desired outcomes rather than imperative steps.
- Streaming and Batch Processing: Supports both continuous data streams and batch data processing with features like Auto Loader, Auto CDC, and Materialized Views.
- Data Quality Enforcement: Integrate data quality checks using Expectations to ensure data integrity throughout the pipeline.
- Use Case: Develop a robust ETL pipeline that ingests streaming data from cloud storage, cleanses and transforms it into a silver layer, and then aggregates it into a gold layer for business intelligence, all while enforcing data quality rules.
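The features above can be sketched with the classic `dlt` Python API. This is a minimal illustration, not the Skill's own output: the table names, column names, and storage path are placeholder assumptions, and the `spark` session is provided implicitly by the Databricks pipeline runtime.

```python
import dlt

# Bronze: ingest raw JSON files from cloud storage with Auto Loader.
# The source path below is a placeholder for illustration.
@dlt.table(comment="Raw events ingested with Auto Loader")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/raw_events/")
    )

# Silver: cleanse the stream and enforce data quality with an Expectation
# that drops rows missing an event_id.
@dlt.table(comment="Cleansed events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")
def silver_events():
    return dlt.read_stream("bronze_events").select(
        "event_id", "event_type", "event_ts"
    )

# Gold: aggregate the silver layer into a table ready for BI dashboards.
@dlt.table(comment="Event counts per type")
def gold_event_counts():
    return dlt.read("silver_events").groupBy("event_type").count()
```

Because the definitions are declarative, the pipeline engine resolves the bronze → silver → gold dependencies and manages incremental processing; the code states desired outcomes rather than imperative steps.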
Quick Start
Use the databricks-pipelines skill to create a new Databricks Asset Bundle project for a Lakeflow pipeline.
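A bundle project of this kind can also be scaffolded by hand with the Databricks CLI; the commands below are a sketch assuming an installed, authenticated CLI, and the pipeline name in the last command is illustrative.

```shell
# Scaffold a new Databricks Asset Bundle from a built-in template
# (the default-python template can include a sample pipeline).
databricks bundle init default-python

# Validate the bundle configuration, then deploy it to a workspace target.
databricks bundle validate
databricks bundle deploy --target dev

# Trigger the pipeline defined in the bundle (name is a placeholder).
databricks bundle run my_pipeline
```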
Dependency Matrix
Required Modules
None required
Components
references
💻 Claude Code Installation
Recommended: let Claude install it automatically. Simply copy and paste the text below into Claude Code.
Please help me install this Skill: Name: databricks-pipelines Download link: https://github.com/databricks/databricks-agent-skills/archive/main.zip#databricks-pipelines Please download this .zip file, extract it, and install it in the .claude/skills/ directory.