Deployment pipeline

A deployment pipeline is the built-in ALM tool of Microsoft Fabric. You build in a Development workspace, test in a Test workspace, and roll out to Production, without manually exporting and importing content every time.

What is a deployment pipeline?

A deployment pipeline moves content through a fixed chain of workspaces: you build in a Development workspace, promote to Test, and promote to Production. Each step copies reports, semantic models, lakehouses, notebooks and other items from the previous workspace to the next in a predictable way.

This is not a replacement for Git; it is an alternative route for teams that do not want to set up pull requests and YAML CI. Fabric also supports Git integration for those who do, and both mechanisms live side by side.

Think of a deployment pipeline as a lift in an apartment building. Development, Test and Production are the floors. You press the button, the whole content rides up, and on arrival everything is in place.
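
Promotion does not have to happen through the portal button; a pipeline can also be driven from a script. Below is a minimal sketch against the Power BI REST API, assuming you already have an Azure AD token with pipeline permissions. The pipeline ID is a hypothetical placeholder, and the stage numbering (0 = Development, 1 = Test) follows how the API orders stages as far as I know.

```python
# Minimal sketch: promote everything from Development (stage 0) to the next stage
# through the Power BI REST API. Token acquisition (MSAL or a service principal)
# is assumed; the pipeline ID below is a hypothetical placeholder.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"
PIPELINE_ID = "00000000-0000-0000-0000-000000000000"

url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll"
body = {
    "sourceStageOrder": 0,                  # 0 = Development, 1 = Test
    "options": {
        "allowCreateArtifact": True,        # create items not yet present in the target
        "allowOverwriteArtifact": True      # overwrite items that are already paired
    },
    "note": "Automated promotion Dev -> Test"
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
print("Deployment started:", resp.status_code)
```

The deploy runs asynchronously, so in practice you poll the operation the response points at until the promotion has finished.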

How is a pipeline structured?

A pipeline is made of stages. The default is three: Development, Test and Production. You can have two if you skip testing, or up to ten if you have a complex chain of environments (Dev, QA, UAT, Staging, Prod, per continent, per business unit).

Each stage has exactly one workspace attached to it. That workspace contains all items belonging to the stage. When you promote content, it lands in the linked workspace of the next stage.
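
Attaching workspaces to stages can be scripted too. The sketch below, again against the Power BI REST API and with hypothetical IDs, lists a pipeline's stages and then assigns a workspace to the Test stage; the exact response fields may differ slightly, so treat it as an illustration rather than a reference.

```python
# Sketch: inspect a pipeline's stages and attach a workspace to one of them.
# IDs are hypothetical placeholders.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"
PIPELINE_ID = "00000000-0000-0000-0000-000000000000"
WORKSPACE_ID = "11111111-1111-1111-1111-111111111111"   # the Test workspace
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
BASE = f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}"

# 1. List the stages (order 0 = Development, 1 = Test, 2 = Production by default).
stages = requests.get(f"{BASE}/stages", headers=HEADERS).json()
for stage in stages.get("value", []):
    print(stage.get("order"), stage.get("workspaceId", "<no workspace assigned>"))

# 2. Attach the Test workspace to stage 1.
resp = requests.post(f"{BASE}/stages/1/assignWorkspace",
                     json={"workspaceId": WORKSPACE_ID},
                     headers=HEADERS)
resp.raise_for_status()
```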

The three default stages in short:

Development
Build and break. You publish new reports, adjust models and experiment here.

Test
Compare content between Dev and Test (Fabric shows a diff), deploy, and let reviewers or test users look at representative data. Test usually runs on a separate lakehouse or warehouse with a subset or an anonymised copy of production.

Production
The final version. Deploying to Prod only happens from Test, never straight from Dev.

Which items can you carry over?

The list grows with every Fabric release. At the time of writing (some items in preview):

Power BI: reports, paginated reports, semantic models (from a .pbix), dashboards, dataflows, org apps.

Data Engineering: lakehouse, notebook, Spark job definition, environment, user data functions.

Data Factory: pipelines, dataflows Gen2, copy jobs, mirrored databases.

Real-Time Intelligence: eventhouse, eventstream, KQL database, KQL queryset, real-time dashboard.

Data Warehouse: warehouse, mirrored Databricks catalog.

SQL and Cosmos DB: SQL database, Cosmos database, both in preview.

Note: from 12 February 2026 onwards, deployment pipelines no longer support semantic models that do not use the enhanced metadata format. Old PBIX files from 2018 therefore need to be upgraded to that format first.

Item pairing: the key concept

This is the concept beginners miss most often, and it explains why a deploy sometimes creates duplicates instead of overwriting.

A report in Dev and a report in Test are not automatically linked just because they share a name. They are paired the moment they are deployed through the pipeline for the first time, or the moment you attach a workspace to a stage that already contains matching items. Two paired items overwrite each other on subsequent deploys; two unpaired items with the same name produce a duplicate.

Consequence: if someone uploads a report manually into Production that never went through the pipeline, and you later deploy a report with the same name from Test, you end up with two versions in Production. Always go through the pipeline.
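
To make the overwrite-versus-duplicate behaviour concrete, here is a toy sketch in plain Python. It calls no Fabric API; it only simulates the rule that pairing, not the item name, decides what happens.

```python
# Toy illustration of item pairing: items deployed through the pipeline carry a
# pairing link to their counterpart, while manually uploaded items do not.

def deploy(source_items, target_items, pairs):
    """Simulate one promotion. `pairs` maps source item id -> target item id."""
    for item in source_items:
        if item["id"] in pairs:
            # Paired: the existing target item is overwritten in place.
            target = next(t for t in target_items if t["id"] == pairs[item["id"]])
            target["content"] = item["content"]
        else:
            # Unpaired: a new copy is created, even if a same-named item already
            # exists in the target workspace -> duplicate.
            new_id = f"copy-of-{item['id']}"
            target_items.append({"id": new_id, "name": item["name"],
                                 "content": item["content"]})
            pairs[item["id"]] = new_id       # paired from now on
    return target_items, pairs

# A report uploaded manually into Production, never deployed through the pipeline:
prod = [{"id": "p1", "name": "Sales", "content": "manual upload"}]
test = [{"id": "t1", "name": "Sales", "content": "reviewed version"}]
pairs = {}                                   # no pairing exists yet

prod, pairs = deploy(test, prod, pairs)
print([i["name"] for i in prod])             # ['Sales', 'Sales'] -> two versions in Prod
```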

Deployment rules

Something almost always has to change between stages. A report in Dev points to the development warehouse; in Production the same report has to point to the prod warehouse.

Deployment rules take care of that. You define per stage which connection string, database object or parameter value has to be replaced when the content arrives. The author does not have to touch anything in the report; the pipeline rewrites the source references automatically at promotion time.

For Direct Lake models on OneLake there is one known limitation: you cannot rebind the data source directly via a rule. Using a parameter expression in the connection string is the standard workaround.
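
One way to apply that workaround from a script is to update the model's parameter right after the deploy, for instance through the Power BI REST API's update-parameters call. In the sketch below the parameter name WarehouseServer, the server value and all IDs are hypothetical; your model exposes whatever parameter you defined in it.

```python
# Sketch: after deploying to Production, repoint a semantic model by updating a
# Power Query parameter. Parameter name, server value and IDs are hypothetical.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "22222222-2222-2222-2222-222222222222"   # Production workspace
DATASET_ID = "33333333-3333-3333-3333-333333333333"      # the deployed semantic model

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
       f"/datasets/{DATASET_ID}/Default.UpdateParameters")
body = {"updateDetails": [
    {"name": "WarehouseServer", "newValue": "prod-warehouse.example.net"}
]}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
# A refresh of the model is usually needed before the new source takes effect.
```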

Pitfalls

Everything in one workspace
Mixing Dev, Test and Production in one workspace feels simple, until an unfinished report accidentally shows up in front of end users. Put each stage in its own workspace. Always.

Manual edits in Production
Fixing a "small issue straight in Prod" breaks pairing and causes friction on every subsequent deploy. Fix in Dev, promote to Test, promote to Prod.

Data and code mixed together
A lakehouse contains both code (schemas, table definitions) and data (the rows themselves). Deployment pipelines do not automatically copy the table data along with the item. Plan a separate process to provide Test with data, for example a copy job, a data pipeline, or a small notebook like the sketch below.
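
As one possible shape for that separate process, the sketch below copies a single Delta table from the production lakehouse into the test lakehouse from a Fabric notebook (where a spark session is already available). The OneLake paths and table name are hypothetical placeholders, and a copy job or data pipeline would do the job just as well.

```python
# Sketch for a Fabric notebook: copy one Delta table from the Prod lakehouse into
# the Test lakehouse. The ABFS paths below are hypothetical placeholders.
src = "abfss://Prod@onelake.dfs.fabric.microsoft.com/SalesLakehouse.Lakehouse/Tables/orders"
dst = "abfss://Test@onelake.dfs.fabric.microsoft.com/SalesLakehouse.Lakehouse/Tables/orders"

df = spark.read.format("delta").load(src)        # read the production table

# Reduce or anonymise before it lands in Test, e.g. keep a 1% sample:
df_test = df.sample(fraction=0.01, seed=42)

df_test.write.format("delta").mode("overwrite").save(dst)   # overwrite the Test copy
```

The sampling step is where the subset or anonymised copy mentioned under the Test stage would be produced.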

Contributor licensing
Every user who has permission to publish in a stage workspace needs a Pro licence. Free users can only read, and only on an F64 capacity or higher. Do not underestimate this when working out the cost of a release crew.

Conflating Git integration and pipelines
Git integration versions your code in a repo; pipelines copy content between workspaces. You can combine them, but understand the difference: a pipeline is not source control and Git is not a release mechanism.

Last Updated: April 23, 2026
Keywords
deployment pipeline power bi microsoft fabric alm cicd dev test prod item pairing deployment rules git integration release management pbip