Microsoft Fabric DWH connector

Land your operational data in Microsoft Fabric and let Power BI read OneLake directly.

Data Panda lifts data out of your CRM, ERP, ecommerce, finance and product systems into the Fabric Data Warehouse on a known schedule. Once it sits in OneLake, Power BI Direct Lake, Copilot in Fabric and your existing Microsoft estate all read the same numbers without anyone exporting another extract.

About Microsoft Fabric DWH

The T-SQL warehouse inside Microsoft Fabric, sitting directly on OneLake.

Microsoft Fabric was announced at Build in May 2023 and reached general availability in November 2023 as a unified SaaS analytics platform on Azure. The Data Warehouse experience is one of seven workloads in Fabric, alongside Data Engineering (Spark notebooks and lakehouses), Data Factory (pipelines and dataflows), Real-Time Intelligence, Data Science, Databases and Power BI. All of them write to and read from the same store: OneLake, a single tenant-wide data lake built on Delta Parquet, with shortcuts that let one workspace point at data physically held in another without copying it.

The Fabric Data Warehouse itself is a fully managed, serverless T-SQL warehouse that runs queries directly against Delta tables in OneLake. It speaks the SQL Server T-SQL surface, supports cross-database queries between warehouses and lakehouses in the same workspace, and pairs natively with Power BI through Direct Lake mode, where reports query Delta files in OneLake without an import step or a DirectQuery round-trip. Billing runs on capacity units (CU) bought as a Fabric capacity (F2 through F2048 SKUs, or shared with the Power BI Premium P-SKUs), so warehouse compute, Spark, pipelines and Power BI rendering all draw from the same pool. For Microsoft-shop teams in BE and NL who already have Power BI Premium and Azure SQL on the floor, Fabric is the natural place to land the rest of the business so Direct Lake reports stop being demos and start being the actual finance and sales boards.
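
Because every workload draws from one CU pool, a quick back-of-the-envelope check against the F-SKU is worth doing before the warehouse joins a capacity sized for Power BI alone. A minimal sketch: the CU count follows the SKU name (F64 = 64 CU), but the per-workload peak estimates below are illustrative placeholders, not measured values.

```python
# Sketch: check whether estimated peak workload draw fits an F-SKU's
# capacity units. CU count follows the SKU name (F64 = 64 CU); the
# workload estimates below are placeholder numbers, not measurements.

def sku_capacity_units(sku: str) -> int:
    """Return the CU count encoded in a Fabric F-SKU name, e.g. 'F64' -> 64."""
    if not sku.startswith("F") or not sku[1:].isdigit():
        raise ValueError(f"not an F-SKU: {sku}")
    return int(sku[1:])

def fits(sku: str, workload_cu: dict, headroom: float = 0.2) -> bool:
    """True if the summed peak draw leaves the requested headroom free."""
    budget = sku_capacity_units(sku) * (1 - headroom)
    return sum(workload_cu.values()) <= budget

# Illustrative peak estimates for a shared capacity (placeholder numbers):
estimates = {"power_bi": 20, "warehouse": 24, "spark": 12, "pipelines": 6}
print(fits("F64", estimates))   # 62 CU of draw vs a 51.2 CU budget
```

The 20% headroom is an assumption; Fabric's smoothing and bursting soften short peaks, so treat this as a sizing sanity check, not a billing model.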

What your Microsoft Fabric DWH data is for

What you get once Microsoft Fabric DWH is connected.

Power BI on Direct Lake

Reports read Delta tables in OneLake instead of importing or DirectQuery-ing for every refresh.

  • One semantic model across finance, sales and operations
  • Direct Lake skips the import refresh and the DirectQuery round-trip
  • Workspace-level access keeps raw and curated zones apart

Loads on a known cadence

Operational data lands in the warehouse on a schedule that matches the business, not the loudest dashboard.

  • Source systems unloaded once per cycle, not per report
  • Pipelines and warehouse compute share the same Fabric capacity
  • Failed loads surface upstream of the morning Power BI refresh
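
The "known cadence" above usually comes down to one rule: heavy loads run outside report hours so they never compete with the morning refresh. A minimal sketch of that guard, assuming a 07:00-19:00 report window (set it to your own hours):

```python
# Sketch: gate heavy warehouse loads so they stay out of the report
# window. The 07:00-19:00 window is an assumed business reporting
# schedule, not a Fabric default.

from datetime import datetime, time

REPORT_WINDOW = (time(7, 0), time(19, 0))  # assumed reporting hours

def load_allowed(now: datetime, window=REPORT_WINDOW) -> bool:
    """True when a heavy load may run (i.e. outside the report window)."""
    start, end = window
    return not (start <= now.time() < end)

def next_load_slot(now: datetime) -> datetime:
    """Earliest moment from `now` when a heavy load may start."""
    if load_allowed(now):
        return now
    return now.replace(hour=REPORT_WINDOW[1].hour,
                       minute=0, second=0, microsecond=0)

print(load_allowed(datetime(2024, 5, 1, 3, 0)))   # night: load allowed
print(load_allowed(datetime(2024, 5, 1, 9, 30)))  # report hours: hold it
```

In practice this lives as a pipeline trigger schedule rather than application code; the sketch just makes the cadence rule explicit.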

Copilot in Fabric on governed data

Copilot and the AI skills work against curated warehouse tables, not raw extracts.

  • Copilot in Power BI authors visuals against the curated semantic model
  • AI skills answer natural-language questions on warehouse tables
  • OneLake security and workspace roles travel with every prompt

Apps and shortcuts on OneLake

Internal apps, Azure services and other workspaces read the same OneLake without copying data.

  • OneLake shortcuts expose tables to other workspaces without duplication
  • Azure Data Factory, Synapse and Databricks read the same Delta files
  • Power Apps and custom apps query warehouse tables through T-SQL
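
For the last bullet, a custom app reaches warehouse tables over the warehouse's T-SQL endpoint like any SQL Server. A minimal pyodbc sketch, where the endpoint host, warehouse name and table are placeholders (copy the real SQL connection string from your warehouse settings); it assumes Microsoft's "ODBC Driver 18 for SQL Server" and an Entra ID account with access:

```python
# Sketch: connect an app to Fabric warehouse tables over T-SQL via pyodbc.
# Endpoint host, warehouse name and table below are placeholders; copy
# the real SQL connection string from your warehouse's settings.

def warehouse_conn_str(endpoint: str, warehouse: str) -> str:
    """Build an ODBC connection string for the warehouse's T-SQL endpoint."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={endpoint},1433;"
        f"Database={warehouse};"
        "Authentication=ActiveDirectoryInteractive;"  # Entra ID sign-in
        "Encrypt=yes;"
    )

conn_str = warehouse_conn_str(
    "abc123.datawarehouse.fabric.microsoft.com",  # placeholder endpoint
    "sales_wh",                                   # placeholder warehouse
)

# Uncomment to query for real (needs pyodbc, the driver and network access):
# import pyodbc
# with pyodbc.connect(conn_str) as conn:
#     rows = conn.execute("SELECT TOP 5 * FROM gold.fct_invoices;").fetchall()
```

Service accounts typically swap the interactive sign-in for a service principal; the connection-string shape stays the same.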

Use cases

Use cases we deliver with Microsoft Fabric DWH data.

A list of concrete reports, automations and AI features we have built on Microsoft Fabric DWH data. Pick the one that matches your situation.

  • One-truth Power BI: Direct Lake reports off curated Fabric warehouse tables instead of imported PBIX models.
  • Off the OLTP: Move analyst queries off the live ERP onto a Fabric warehouse copy.
  • Finance close pack: Month-end P&L, balance and cashflow on warehouse-grade ledger data.
  • Customer 360 in OneLake: One customer record across CRM, billing, support and product usage.
  • Copilot on real data: Copilot in Power BI and Fabric AI skills working on curated tables, not extracts.
  • Capacity in control: Workload-aware capacity sizing so warehouse loads do not throttle Power BI.
  • Lakehouse-to-warehouse: Cross-database queries between Fabric lakehouses and the warehouse in one workspace.
  • Shortcuts, not copies: OneLake shortcuts replace the daily ETL into a second team's workspace.
  • Mirroring source DBs: Mirroring brings Azure SQL, Snowflake or Cosmos DB into OneLake without ETL.
  • Fabric on top of Azure SQL: Layer Fabric DWH on an existing Azure SQL estate without ripping it out.
  • Power BI Premium upgrade: Use existing Power BI Premium capacity as Fabric capacity for the warehouse.
Real business questions

Answers you will finally get.

We already have Power BI Premium. Why move to Fabric?

Power BI Premium P-SKUs already carry a Fabric capacity, so the move is a capacity setting rather than a new license purchase. The win is Direct Lake: reports stop relying on PBIX import refreshes and read Delta tables in OneLake directly. The same capacity then carries the warehouse, the pipelines and the Spark workloads, instead of a separate Azure Synapse or Azure SQL bill running next to Premium.

How is the Fabric warehouse different from a Fabric lakehouse?

Both write Delta tables to OneLake, but the warehouse is a fully managed T-SQL surface with cross-database queries, transactions and the SQL Server semantics finance teams are used to. The lakehouse is the Spark and notebook side, with the SQL endpoint as a read-only T-SQL view on top. In most BE/NL Fabric deployments we land raw and bronze in a lakehouse and curate the gold layer in the warehouse, because that is where Power BI Direct Lake and downstream T-SQL apps want to read.

Why does our Fabric capacity throttle when we add the warehouse?

Capacity units are shared across every workload in Fabric, so Power BI rendering, warehouse compute, Spark and pipelines all draw from the same F-SKU. A capacity sized for Power BI alone runs out of headroom the moment a warehouse load and a Spark notebook hit at the same time. Sizing per workload, scheduling heavy loads outside report hours and watching the Capacity Metrics app keeps the throttling out of the morning standup.

Value for everyone in the organisation

Where each function gets value.

For finance leaders

The CFO gets a Fabric-fed close pack that ties to the books. Revenue, margin and AR carry one definition, sourced from the same warehouse the sales board reads through Direct Lake, so month-end stops being three people reconciling Power BI exports.

For sales leaders

Sales leaders see pipeline, forecast and quota next to invoiced revenue and product usage on warehouse-grade data. Direct Lake means the Power BI report stays current without an import refresh in between, and Copilot in Power BI answers questions against the same semantic model.

For operations

Operations and data leads track Fabric capacity unit usage, warehouse query cost and pipeline runtime in one Capacity Metrics view. The capacity bill becomes predictable, and the OneLake layout stops growing sideways with team-specific copies of the same Delta tables.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your Microsoft Fabric DWH data lives.

Power BI logo
Power BI Microsoft
Microsoft Fabric logo
Fabric Microsoft
Snowflake logo
Snowflake Data warehouse
Google BigQuery logo
BigQuery Google
Tableau logo
Tableau Visualisation
Microsoft Excel logo
Excel Sheets & pivots
Three steps

From Microsoft Fabric DWH to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • Microsoft Fabric DWH connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres instance.

Do Power BI reports really read OneLake directly with Direct Lake?

Yes. Direct Lake mode reads Delta Parquet files in OneLake straight into the Power BI engine, with no import refresh and no DirectQuery hop to the warehouse engine. If a query uses a feature Direct Lake does not yet support, or the model outgrows its guardrails, it falls back automatically to DirectQuery on the warehouse, so reports keep working. The curated layer we build in the Fabric warehouse is exactly what Direct Lake wants to read.

How do we keep our Fabric capacity bill predictable?

Size per workload, not per peak. Power BI rendering, warehouse loads, Spark notebooks and pipelines all draw from the same F-SKU, so a capacity sized generously for one workload quietly subsidises the others. Watching the Capacity Metrics app every cycle, scheduling heavy loads outside report hours and parking dev workloads on a smaller capacity keeps the bill flat without anyone reaching for autoscale at midnight.

We are already on Azure Synapse. Should we migrate to Fabric?

It depends on which Synapse you mean. Synapse Dedicated SQL Pools (the warehouse) map cleanly onto a Fabric warehouse and Microsoft has positioned Fabric as the forward path. Synapse Spark and serverless SQL pools have analogues in Fabric Data Engineering and the lakehouse SQL endpoint. Most BE/NL Synapse customers we see migrate one workload at a time, starting with the warehouse on a curated layer in OneLake, and keep Synapse running until Power BI is on Direct Lake.

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your Microsoft Fabric DWH setup and the systems around it. Together we pick the first thing worth building.