OpenAI connector

Use your OpenAI data for reporting, automation and AI.

Data Panda brings your OpenAI API usage data together with the data from the rest of your business. From one place, we turn it into dashboards, automations, AI workflows and custom apps your team uses every day.

About OpenAI

Get the best insights out of your data with OpenAI

OpenAI is the company behind ChatGPT and the GPT model family. Teams reach for it because it covers more than text: voice, images, video, search inside your own content, and agents that take action. The line-up runs from a fast, cheap tier for high-volume jobs up to heavier reasoning models when the answer really matters.

Wired next to your warehouse, OpenAI can answer questions about your own numbers, transcribe and tag every sales call straight into the CRM, draft outreach grounded in what your records actually say, or run a finance task as an agent that works from the real numbers instead of a guess.

What your OpenAI data is for

What you get once OpenAI is connected.

AI spend attributed to features and customers

Token spend, model mix and batch-discount adoption per project, API key and model on one timeline.

  • Spend per API key joined to the product feature behind it
  • GPT-5, GPT-4o, GPT-4o mini and o-series mix per project and per week
  • Batch-API jobs versus synchronous calls so the fifty-percent discount becomes a number

Cost-control automation

Push usage signals back into the tools where decisions about OpenAI really get made.

  • Slack alert when daily GPT-5 or o3 spend on one feature crosses a budget
  • API key paused or rate-limited when a project burns its monthly cap in a week
  • CRM contact tagged when a customer's AI feature runs above its contracted token allowance

AI workflows on AI usage

Use OpenAI usage history to feed the next round of model and routing decisions.

  • Routing scoring that picks GPT-4o mini, GPT-4o or o3 per request based on past quality and cost
  • Prompt-template ranking on output tokens per task and tool-call round-trip count
  • Drift detection on Assistants thread length to catch threads that grew silently across releases
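The routing idea above can be sketched in a few lines: pick the cheapest model whose historical quality on a task type clears a bar. This is an illustrative sketch only; the quality scores and per-call costs below are made-up numbers, not measurements, and the task types are hypothetical.

```python
# Illustrative model-routing sketch. HISTORY would be built from past
# usage data; these quality/cost figures are placeholders.
HISTORY = {  # task_type -> model -> (avg_quality_0_to_1, usd_per_call)
    "classify": {"gpt-4o-mini": (0.92, 0.0004), "gpt-4o": (0.95, 0.004)},
    "analyze":  {"gpt-4o-mini": (0.71, 0.0009), "gpt-4o": (0.90, 0.009),
                 "o3": (0.97, 0.06)},
}

def route(task_type, min_quality=0.85):
    # Keep only models whose historical quality clears the bar,
    # then take the cheapest of those.
    candidates = [(cost, model)
                  for model, (quality, cost) in HISTORY[task_type].items()
                  if quality >= min_quality]
    return min(candidates)[1] if candidates else "o3"  # fall back to the strongest

print(route("classify"))  # high-volume classification goes to the mini tier
print(route("analyze"))   # harder tasks route to a stronger model
```

The same shape extends naturally: per-feature quality bars, or a cost ceiling per request instead of a global minimum-quality threshold.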

Custom apps on your data

Internal tools on OpenAI usage data for teams that do not log into the platform dashboard.

  • AI cost dashboard per product feature, per customer segment, per week
  • Fine-tuning registry showing which custom models are still in production and what they cost to run
  • Per-customer AI-usage view next to MRR for finance and customer success
Use cases

Use cases we deliver with OpenAI data.

A list of concrete reports, automations and AI features we have built on OpenAI data. Pick the one that matches your situation.

  • Token spend per feature: input, output and cached input tokens per API key, joined to the product feature behind the key.
  • Model-tier mix: share of calls and spend across GPT-5, GPT-4o, GPT-4o mini and the o-series per project, per week.
  • Batch-API discount adoption: share of eligible workloads run through the Batch API, with the fifty-percent saving against the synchronous baseline.
  • Assistants and Threads cost: cost per Assistants run including retrieval and code-interpreter tool calls, per assistant and per customer.
  • Fine-tuning ROI: training cost and per-token inference premium of fine-tuned models against the base model they replaced.
  • Per-customer AI usage: token spend joined to CRM customer, contract tier and MRR.
  • Whisper and audio cost: speech-to-text minutes and Realtime audio session cost broken out from the text-token bill.
  • Image and video generation: DALL-E 3 and Sora generation counts and cost per project, separated from the rest of the API spend.
  • Project-level budget burn: spend against the monthly project cap and the projected month-end position.
  • Multi-organisation consolidation: usage across several OpenAI organisations and Azure OpenAI deployments rolled up into one picture.
Real business questions

Answers you will finally get.

Which feature is driving our OpenAI bill?

Token spend per API key over the last thirty days, joined to the product feature behind each key, with the GPT-5, GPT-4o, GPT-4o mini and o-series split on top. This surfaces the one Assistants-based feature on GPT-5 that produces eighty percent of the bill while the high-volume mini classifier barely registers, before the next monthly invoice arrives as a single number.

Is the Batch API discount really saving us money?

Batch jobs land in /v1/batches with their own input and output token counts at half the synchronous rate. The connector splits batched spend from real-time spend per project and per model, so the share of eligible workloads still running synchronously becomes a real number. A team that thinks it batches everything but in fact ships a fallback path that always goes synchronous shows up as a flat batch line and a rising real-time line.
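The split described above reduces to a simple aggregation once usage rows carry a batched/synchronous flag. A minimal sketch, assuming a row shape that is illustrative rather than the connector's actual schema:

```python
# Sketch: split batched from synchronous spend per model and compute the
# share of spend still running synchronously. Row fields are assumptions.
from collections import defaultdict

def batch_share(rows):
    """rows: dicts with 'model', 'batched' (bool) and 'cost_usd'."""
    spend = defaultdict(lambda: {"batch": 0.0, "sync": 0.0})
    for r in rows:
        bucket = "batch" if r["batched"] else "sync"
        spend[r["model"]][bucket] += r["cost_usd"]
    report = {}
    for model, s in spend.items():
        total = s["batch"] + s["sync"]
        report[model] = {
            "batch_usd": round(s["batch"], 2),
            "sync_usd": round(s["sync"], 2),
            "sync_share": round(s["sync"] / total, 2) if total else 0.0,
        }
    return report

usage = [  # illustrative numbers
    {"model": "gpt-4o-mini", "batched": True,  "cost_usd": 40.0},
    {"model": "gpt-4o-mini", "batched": False, "cost_usd": 60.0},
    {"model": "gpt-4o",      "batched": False, "cost_usd": 120.0},
]
print(batch_share(usage))
```

A `sync_share` stuck near 1.0 on a model the team believes it batches is exactly the fallback-path symptom described above.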

Which customers are consuming AI features beyond what their contract assumed?

Token spend joined to CRM customer, contract tier and MRR, with usage per customer ranked against their tier allowance. Shows the customer on a small plan whose Assistants-based copilot is running tens of thousands of GPT-5 turns a month, so account management gets a real number to take into the renewal conversation instead of a hunch.

Value for everyone in the organisation

Where each function gets value.

For finance leaders

OpenAI spend per product feature, per customer segment and per service tier instead of one line on the Microsoft invoice. The AI cost moves from a fixed unknown into a metric tied to the customers and features that produce it, in time to act on it before the next Azure OpenAI commitment renewal.

For sales leaders

AI usage per customer in the same record reps already open, so a customer running heavy GPT-5 traffic on a small plan becomes a renewal conversation instead of a surprise on the year-end review.

For operations

Model mix, batch share, Assistants thread length and tool-call round-trip count per template over ninety days. The behaviour of the AI features is followed as a curve, not rediscovered the morning a deploy quietly tripled the bill.

Ideas

What you can automate with OpenAI.

Pair with Slack

Push OpenAI spend alerts into Slack

Daily token spend per API key from the OpenAI Costs API lands in Slack as a per-feature line. The product team gets a ping the day GPT-5 traffic on the new agent feature crosses its budget, instead of finding out a week into the next billing cycle. Threshold breaches reference the project, the model and the API key so the on-call engineer knows where to look first.
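The check behind such an alert is small once daily spend per feature is in the warehouse. A hedged sketch, with placeholder budgets and a placeholder webhook URL; Slack incoming webhooks accept a JSON body with a `text` field:

```python
# Sketch of a daily budget check feeding a Slack incoming webhook.
# Feature names, budgets and the webhook URL are placeholders.
import json
import urllib.request

DAILY_BUDGETS_USD = {"agent-feature": 50.0, "mini-classifier": 10.0}

def breaches(daily_spend):
    """daily_spend: {feature: usd_spent_today}; returns alert lines."""
    lines = []
    for feature, spent in daily_spend.items():
        budget = DAILY_BUDGETS_USD.get(feature)
        if budget is not None and spent > budget:
            lines.append(f":warning: {feature}: ${spent:.2f} of ${budget:.2f} daily budget")
    return lines

def post_to_slack(webhook_url, lines):
    # Standard Slack incoming-webhook POST.
    body = json.dumps({"text": "\n".join(lines)}).encode()
    req = urllib.request.Request(webhook_url, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

alerts = breaches({"agent-feature": 72.4, "mini-classifier": 3.1})
# post_to_slack(SLACK_WEBHOOK_URL, alerts)  # URL supplied by your Slack admin
```

In practice the alert line would also carry the project, model and API key, as described above, so the on-call engineer knows where to look first.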

Pair with HubSpot

Sync per-customer AI usage into HubSpot

OpenAI token spend per API key is mapped to the HubSpot customer it serves and lands on the contact record next to MRR and contract tier. Account managers see the customer on a small plan whose Assistants-driven copilot runs tens of thousands of GPT-5 turns a month before the renewal conversation, and customer success can flag accounts whose AI usage is creeping toward what the contract assumed.

Pair with PostHog

Join PostHog product events with OpenAI token usage

PostHog events for AI features (prompt submitted, agent task started, summary generated) are joined to the OpenAI usage data on project and timestamp. Product gets cost per AI action, not just cost per API call, so a feature that fires three GPT-5 turns and a code-interpreter call per click becomes visible against a feature that fires one GPT-4o-mini call. The same join answers which features have a healthy ratio of usage to spend and which are losing money on every interaction.
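The join itself is a grouped count of events against a grouped sum of spend on the same (project, time-bucket) key. A minimal sketch with made-up rows; the field names and hour-bucket granularity are assumptions, not the actual schemas:

```python
# Sketch of "cost per AI action": PostHog events matched to OpenAI usage
# rows on project and hour bucket. All rows here are illustrative.
from collections import Counter, defaultdict

events = [  # PostHog events (hypothetical)
    {"project": "app", "hour": "2025-01-06T10", "event": "summary generated"},
    {"project": "app", "hour": "2025-01-06T10", "event": "summary generated"},
]
usage = [  # OpenAI usage rows (hypothetical)
    {"project": "app", "hour": "2025-01-06T10", "cost_usd": 0.18},
]

actions = Counter((e["project"], e["hour"]) for e in events)
spend = defaultdict(float)
for u in usage:
    spend[(u["project"], u["hour"])] += u["cost_usd"]

# Divide spend by action count on the shared key.
cost_per_action = {key: spend[key] / n for key, n in actions.items() if n}
print(cost_per_action)
```

In the warehouse this is one GROUP BY and one join; the point is that the unit of cost becomes a product action, not an API call.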

Pair with Fireflies.ai

Map Fireflies transcription mix to OpenAI Whisper cost

Fireflies meeting IDs that triggered a Whisper transcription or a follow-up GPT-5 summary are joined to the OpenAI usage data on the API key and time window of the run. Revenue operations sees cost per transcribed meeting split across audio minutes and summary tokens, so the choice between transcribing every internal call and only the customer-facing ones stops being a guess. The same view shows which teams pull the most AI-summary cost per week.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your OpenAI data lives.

  • Power BI (Microsoft)
  • Microsoft Fabric (Microsoft)
  • Snowflake (data warehouse)
  • BigQuery (Google)
  • Tableau (visualisation)
  • Excel (sheets & pivots)
Three steps

From OpenAI to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • OpenAI connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres instance to start.

Which OpenAI usage data does the connector really pull?

The Usage and Costs APIs at /v1/organization/usage/* and /v1/organization/costs are the primary sources, alongside the Models API, the Batch API at /v1/batches, the Fine-tuning Jobs API and the Assistants and Threads APIs. Per project, per API key and per model the connector returns input tokens, cached-input tokens, output tokens, request counts and the dollar cost the platform billed, plus batch-job results and fine-tuning run metadata. Customer prompts and completions are not pulled, only the metering. Anything else in the warehouse, like which feature owns which API key, has to be joined in from your own systems.
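As a sketch of what a pull from the Costs API looks like: the endpoint and parameters below follow OpenAI's organization Costs API (admin key required), but treat the exact field names as an assumption to verify against the current API reference.

```python
# Minimal sketch of pulling daily cost buckets from the Costs API.
# Endpoint, parameters and response shape follow the organization
# Costs API as documented; verify against the current API reference.
import time
import urllib.request

def costs_request(admin_key, days=30):
    """Build the request for daily cost buckets grouped by project."""
    params = (f"start_time={int(time.time()) - days * 86400}"
              "&bucket_width=1d&group_by=project_id")
    return urllib.request.Request(
        f"https://api.openai.com/v1/organization/costs?{params}",
        headers={"Authorization": f"Bearer {admin_key}"},
    )

def flatten(page):
    """Flatten one response page into (day_start, project_id, usd) rows."""
    rows = []
    for bucket in page.get("data", []):
        for result in bucket.get("results", []):
            rows.append((bucket["start_time"],
                         result.get("project_id"),
                         result["amount"]["value"]))
    return rows

# Abridged example of the paged response shape, and how it flattens:
sample = {"data": [{"start_time": 1736121600,
                    "results": [{"project_id": "proj_abc",
                                 "amount": {"value": 12.5, "currency": "usd"}}]}]}
print(flatten(sample))
```

Per-key token detail comes from the Usage endpoints under `/v1/organization/usage/*` with the same bucket-and-results paging shape; joining `project_id` or API key to a product feature happens in your warehouse.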

Can we see whether the Batch API discount is paying off?

Yes. The Batch API at /v1/batches advertises a fifty-percent discount against synchronous pricing on the same model, and batch jobs report their token counts and completion timestamps separately. The connector splits batched spend from real-time spend per project and per model, so the share of eligible workloads still running synchronously becomes a number a finance or platform team can act on, instead of an assumption.

What if we run OpenAI through Azure OpenAI Service instead of the direct API?

Microsoft hosts the same OpenAI models behind Azure OpenAI Service, but billing and usage telemetry then live in the Azure portal, not in OpenAI's organization endpoints. Workloads on Azure OpenAI have to be pulled via the Azure cost-management connector to get the same per-feature attribution. Mixed setups end up with two sources joined in the warehouse on deployment, model and time window, with the rate card normalised so a GPT-4o call costs the same on both sides of the report.
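The consolidation step reduces to mapping Azure deployment names onto model names and putting both sides on the same per-token rate. A sketch under illustrative assumptions; the deployment mapping and row shapes are placeholders:

```python
# Sketch of consolidating direct-API and Azure OpenAI usage.
# Deployment names, row shapes and numbers are illustrative.
DEPLOYMENT_TO_MODEL = {"gpt4o-prod-eu": "gpt-4o"}  # Azure deployment -> model

direct_rows = [{"model": "gpt-4o", "tokens": 1_000_000, "cost_usd": 2.50}]
azure_rows = [{"deployment": "gpt4o-prod-eu", "tokens": 500_000, "cost_usd": 1.25}]

unified = (
    [{"source": "openai", **r} for r in direct_rows]
    + [{"source": "azure",
        "model": DEPLOYMENT_TO_MODEL[r["deployment"]],
        "tokens": r["tokens"], "cost_usd": r["cost_usd"]}
       for r in azure_rows]
)

# With both sides on the same model name, per-token rates are comparable
# and any rate-card gap between the two platforms becomes visible.
for row in unified:
    row["usd_per_1m_tokens"] = round(row["cost_usd"] / row["tokens"] * 1e6, 2)
print(unified)
```

The report then groups on `model` regardless of `source`, which is what makes a GPT-4o call cost the same on both sides of the picture.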

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your OpenAI setup and the systems around it. Together we pick the first thing worth building.