About Open WebUI
The self-hosted front door to your LLMs.
Open WebUI is an open-source, self-hosted AI platform that gives teams a ChatGPT-style experience on their own infrastructure. Tim Jaeryang Baek started it in September 2023 as Ollama WebUI, a friendly frontend for local Ollama models, and renamed it Open WebUI in early 2024 once it grew to support OpenAI-compatible APIs alongside Ollama. The project was part of the 2024 GitHub Accelerator and sits well above 130k GitHub stars. Releases up to v0.6.5 are BSD-3-Clause licensed; from v0.6.6 (April 2025) onward the project ships under a custom Open WebUI License with a branding-retention clause and a CLA, which is worth checking before commercial rollouts.
The product is a Docker- or Kubernetes-deployed web app with a Python backend and a Svelte frontend. It speaks Ollama natively and connects to any OpenAI-compatible endpoint by URL, which means LM Studio, vLLM, llama.cpp, LiteLLM, Mistral, Groq and OpenRouter all plug in the same way. RBAC, groups, SSO/OIDC/LDAP and SCIM 2.0 provisioning cover the enterprise side.

The stored entities a warehouse cares about are users, groups, chats, messages (with per-message input/output token counts), models, prompts, knowledge collections, files, channels, notes and feedback ratings, plus the analytics tables behind the admin dashboard, exposed via /api/v1/analytics/summary, /models, /users, /messages, /daily and /tokens. That last set of endpoints requires admin auth, and it is what turns Open WebUI from a chat box into something a finance and IT team can attribute usage and cost to.
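To make the attribution point concrete, here is a minimal sketch of pulling those analytics endpoints into a warehouse job. It assumes an instance reachable at OPENWEBUI_URL, an admin-level API key in OPENWEBUI_API_KEY sent as a Bearer token, and JSON responses; the environment variable names and response shapes are illustrative assumptions, not confirmed API details.

```python
import os
import requests

# Hypothetical configuration: point these at your own deployment.
BASE_URL = os.environ.get("OPENWEBUI_URL", "http://localhost:3000")
API_KEY = os.environ["OPENWEBUI_API_KEY"]  # assumed to belong to an admin user

HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# The analytics endpoints named above, all under /api/v1/analytics/.
ANALYTICS_PATHS = ["summary", "models", "users", "messages", "daily", "tokens"]


def fetch_analytics() -> dict:
    """Fetch each analytics endpoint and return the raw JSON keyed by path."""
    results = {}
    for path in ANALYTICS_PATHS:
        resp = requests.get(
            f"{BASE_URL}/api/v1/analytics/{path}",
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()  # a non-admin key should fail here
        results[path] = resp.json()
    return results


if __name__ == "__main__":
    for path, payload in fetch_analytics().items():
        print(path, payload)
```

Run on a schedule, the daily and tokens responses become the raw material for per-user or per-model cost attribution downstream.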