Open WebUI connector

Use your Open WebUI data for reporting, automation and AI.

Data Panda brings your Open WebUI usage data together with the data from the rest of your business. From one place, we turn it into dashboards, automations, AI workflows and custom apps your team uses every day.

About Open WebUI

The self-hosted front door to your LLMs.

Open WebUI is an open-source, self-hosted AI platform that gives teams a ChatGPT-style experience on their own infrastructure. Tim Jaeryang Baek started it in September 2023 as Ollama WebUI, a friendly frontend for local Ollama models, and renamed it Open WebUI in early 2024 once it grew to support OpenAI-compatible APIs alongside Ollama. The project was part of the 2024 GitHub Accelerator and sits well above 130k GitHub stars. Up to v0.6.5 the codebase is BSD-3-Clause; from v0.6.6 (April 2025) it ships under a custom Open WebUI License with a branding-retention clause and a CLA, which is worth checking before commercial rollouts.

The product is a Docker- or Kubernetes-deployed web app with a Python backend and a Svelte frontend. It speaks Ollama natively and any OpenAI-compatible endpoint by URL, which means LM Studio, vLLM, llama.cpp, LiteLLM, Mistral, Groq and OpenRouter all plug in the same way. RBAC, groups, SSO/OIDC/LDAP and SCIM 2.0 provisioning cover the enterprise side. The stored entities a warehouse cares about are users, groups, chats, messages (with per-message input/output token counts), models, prompts, knowledge collections, files, channels, notes, feedback ratings and the analytics tables behind the admin dashboard, exposed via /api/v1/analytics/summary, /models, /users, /messages, /daily and /tokens. That last set of endpoints requires admin auth and is what turns Open WebUI from a chat box into something a finance or IT team can attribute costs to.
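As a rough illustration, pulling those analytics endpoints looks like the sketch below. The endpoint paths are the ones listed above; the base URL and the Bearer-token auth header are assumptions about a typical deployment, so check your own instance before relying on them.

```python
# Sketch: pulling the admin analytics endpoints into one dict of payloads.
# The base URL and Bearer-token header are assumptions, not confirmed API details.
import json
import urllib.request

ANALYTICS_ENDPOINTS = ["summary", "models", "users", "messages", "daily", "tokens"]

def analytics_url(base_url: str, endpoint: str) -> str:
    """Build the full URL for one analytics endpoint."""
    return f"{base_url.rstrip('/')}/api/v1/analytics/{endpoint}"

def fetch_analytics(base_url: str, admin_token: str) -> dict:
    """Fetch every analytics endpoint with an admin token (hypothetical auth scheme)."""
    results = {}
    for endpoint in ANALYTICS_ENDPOINTS:
        req = urllib.request.Request(
            analytics_url(base_url, endpoint),
            headers={"Authorization": f"Bearer {admin_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            results[endpoint] = json.load(resp)
    return results
```

In practice a connector would land each payload as a raw table in the warehouse and normalise from there.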

What your Open WebUI data is for

What you get once Open WebUI is connected.

AI adoption per team and per model

Per-user and per-group message volume, token spend and model mix on one timeline.

  • Active users per week per group, joined to HR or directory data
  • Token spend per user joined to the team and the project they belong to
  • Model mix per group, so a team that quietly switched everything to GPT-5 becomes visible

Usage and cost-control automation

Push usage signals back into the tools where decisions about Open WebUI really get made.

  • Slack alert when a single user crosses a daily token budget on a paid model
  • Group permissions tightened automatically when a team's monthly inference cost runs above plan
  • HR offboarding triggers a sweep that disables the leaver's Open WebUI account and exports their chat history
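The budget-alert automation above boils down to a simple daily check. The sketch below shows the core logic; the field names and the paid-model set are illustrative, not Open WebUI's actual schema.

```python
# Sketch of the budget check behind the Slack alert: flag any user whose
# token usage on a paid model crossed their daily budget.
# Field names (user, model, tokens) and PAID_MODELS are illustrative.
PAID_MODELS = {"gpt-5", "claude-sonnet"}  # example set, not a real config

def over_budget(daily_usage: list[dict], budget: int) -> list[str]:
    """Return users whose paid-model tokens exceed the daily budget."""
    totals: dict[str, int] = {}
    for row in daily_usage:
        if row["model"] in PAID_MODELS:
            totals[row["user"]] = totals.get(row["user"], 0) + row["tokens"]
    return sorted(u for u, t in totals.items() if t > budget)

usage = [
    {"user": "ana", "model": "gpt-5", "tokens": 120_000},
    {"user": "ana", "model": "llama3", "tokens": 900_000},  # local model, not counted
    {"user": "ben", "model": "gpt-5", "tokens": 30_000},
]
print(over_budget(usage, budget=100_000))  # ['ana']
```

The returned user list is what the automation would hand to Slack, or to a permissions-tightening step.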

AI workflows on AI-frontend usage

Use Open WebUI history to feed the next round of model-routing and knowledge-base decisions.

  • Routing scoring that picks Ollama, OpenAI or Anthropic per prompt class based on past quality and cost
  • Knowledge-collection ranking on retrieval hits and downstream feedback ratings
  • Prompt-pattern clustering on chat content to surface use cases the team is reaching for unprompted

Custom apps on your data

Internal tools on Open WebUI usage data for teams that do not log into the admin tab.

  • AI adoption dashboard per department and per quarter for the leadership team
  • Per-user token-cost view next to seat licence and salary band for finance
  • Knowledge-base health dashboard showing which collections are read, ignored or rated down

Use cases

Use cases we deliver with Open WebUI data.

A list of concrete reports, automations and AI features we have built on Open WebUI data. Pick the one that matches your situation.

Per-user token spend: Input, output and total tokens per user per day, joined to the team and the model they used.
Model mix per group: Share of messages and tokens across Ollama, OpenAI, Anthropic and other backends per group, per week.
Active-user trend: Weekly and monthly active users per group, with churn between active and lapsed.
Knowledge-collection hits: Retrieval hits per collection joined to the chats and users that pulled them.
Feedback ratings per model: Thumbs-up and thumbs-down rate per model and per prompt template, with examples.
Custom-model adoption: Usage of admin-built model presets versus default models, by group and by week.
Prompt-library usage: Which saved prompts are reached for, by whom, and how often, against the ones nobody uses.
Per-customer AI workload: Chats and tokens tagged to the customer the user worked on, joined to CRM.
Group-level cost attribution: Inference cost per group rolled up against the team budget.
Local versus hosted-model split: Share of traffic on local Ollama models versus paid hosted models, with the cost gap.

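The local-versus-hosted split is one of the simpler rollups in that list. A minimal sketch, assuming each warehouse row already carries a backend tag and a token count (both illustrative names, not the real schema):

```python
# Sketch: share of tokens per backend ('local' Ollama vs 'hosted' providers).
# The 'backend' and 'tokens' fields are illustrative warehouse columns.
def backend_share(rows: list[dict]) -> dict[str, float]:
    """Fraction of total tokens per backend."""
    totals: dict[str, int] = {}
    for row in rows:
        totals[row["backend"]] = totals.get(row["backend"], 0) + row["tokens"]
    grand = sum(totals.values())
    return {backend: t / grand for backend, t in totals.items()}

rows = [
    {"backend": "local", "tokens": 750},
    {"backend": "hosted", "tokens": 250},
]
print(backend_share(rows))  # {'local': 0.75, 'hosted': 0.25}
```

Joining the hosted share to provider prices is what produces the cost gap mentioned above.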
Real business questions

Answers you will finally get.

Who is using our self-hosted AI, and on which models?

Per-user message and token volume over the last thirty days, broken down by group and by backend model. Surfaces the engineering team running ten thousand Ollama messages a week on the local box, the marketing group quietly funnelling everything through GPT-5 on the OpenAI key, and the long tail of accounts that logged in once and never came back, all in one view instead of three different admin tabs on three different hosts.

Are our knowledge collections being read?

Retrieval hits per knowledge collection joined to the chats that triggered them and the feedback rating that followed. Catches the carefully curated policy collection that nobody is searching, and the half-finished sales-deck collection that ten people are pulling from every day with thumbs-down ratings, so the next round of curation work is aimed where it pays off.

What is one user costing us on the underlying API key?

Token spend per user mapped to the public model price, ranked from heaviest to lightest. Shows the analyst whose long-context summarisation routine alone runs more on GPT-5 than the entire support team on GPT-4o-mini, in time to set a per-user budget or move that workload to a cheaper tier instead of arguing about the bill at the end of the quarter.

Value for everyone in the organisation

Where each function gets value.

For finance leaders

Open WebUI inference cost split per group and per user, mapped to the model-provider invoice. The shared OpenAI or Anthropic bill behind the self-hosted frontend stops being one line and becomes spend the team that produced it can be held to.

For sales leaders

Prompt patterns and knowledge-collection hits across teams, surfacing which customer-facing use cases people are reaching for in their own words. Account managers get language that came from real internal users instead of guessing what AI features customers will respond to in the next pitch.

For operations

Active users, model mix and feedback ratings per group over ninety days. The behaviour of the AI rollout is followed as a curve, not rediscovered the day someone notices the local Ollama node is at 100% GPU utilisation and the OpenAI bill has tripled.

Data model

Tables we make available.

These are the four tables we currently pull from Open WebUI into your warehouse. Query them directly in SQL, join them to the rest of your stack, or build reports on top.

  • Files
  • Knowledge Bases
  • Models
  • Notes

Missing a table you need? We can extend the sync. Tell us what is missing and we will build it for you.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your Open WebUI data lives.

  • Power BI (Microsoft)
  • Fabric (Microsoft)
  • Snowflake (Data warehouse)
  • BigQuery (Google)
  • Tableau (Visualisation)
  • Excel (Sheets & pivots)

Three steps

From Open WebUI to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • Open WebUI connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit: Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit: Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres instance.

Which Open WebUI data does the connector really pull?

The admin Analytics endpoints (/api/v1/analytics/summary, /models, /users, /messages, /daily, /tokens) are the primary source, plus the chat, message, user, group, model, prompt, knowledge-collection and feedback tables behind them. Per-message input and output token counts are captured automatically by Open WebUI when analytics is enabled and are normalised across Ollama, OpenAI-compatible and llama.cpp backends. Chat content can be pulled too if the policy allows, but most rollouts pull metadata only and leave the message bodies in Open WebUI.

Open WebUI does not bill us, so where does the cost come from?

From the underlying model-provider keys. A self-hosted Ollama deployment is metered in GPU hours and electricity, not tokens, while traffic that Open WebUI routes to OpenAI, Anthropic, OpenRouter or any OpenAI-compatible endpoint shows up on that provider's invoice. Joining the per-message token counts inside Open WebUI to the public price list of the model the message used gives a per-user, per-group cost number that ties back to the actual external bill.
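The join described above, per-message token counts against a public price list, is straightforward to sketch. The prices below are placeholders, not real provider rates, and the field names are illustrative.

```python
# Sketch: per-user cost from per-message token counts and a price list.
# PRICE_PER_M holds placeholder (input, output) USD rates per million tokens.
PRICE_PER_M = {
    "gpt-5": (5.00, 15.00),        # placeholder rates, not a real price list
    "gpt-4o-mini": (0.15, 0.60),
}

def message_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one message at the placeholder price list."""
    p_in, p_out = PRICE_PER_M[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

def user_cost(messages: list[dict]) -> dict[str, float]:
    """Roll message costs up to a per-user total."""
    totals: dict[str, float] = {}
    for m in messages:
        cost = message_cost(m["model"], m["input_tokens"], m["output_tokens"])
        totals[m["user"]] = totals.get(m["user"], 0.0) + cost
    return totals

msgs = [
    {"user": "ana", "model": "gpt-5", "input_tokens": 200_000, "output_tokens": 50_000},
    {"user": "ben", "model": "gpt-4o-mini", "input_tokens": 1_000_000, "output_tokens": 200_000},
]
print(user_cost(msgs))
```

Grouping the same totals by team instead of user gives the per-group number that ties back to the provider invoice.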

Is Open WebUI open source enough to host for our customers?

Up to and including v0.6.5 the project is BSD-3-Clause and behaves like classic permissive open source. From v0.6.6 (April 2025) it ships under a custom Open WebUI License with a branding-retention clause and a contributor licence agreement; that licence is not OSI-approved. For internal company use this is rarely a problem; for embedding Open WebUI into a product you sell to customers, the licence and branding terms are worth running past legal before the rollout.

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your Open WebUI setup and the systems around it. Together we pick the first thing worth building.