Anthropic Claude connector

Use your Anthropic Claude data for reporting, automation and AI.

Data Panda brings your Claude API usage data together with the data from the rest of your business. From one place, we turn it into dashboards, automations, AI workflows and custom apps your team uses every day.

About Anthropic Claude

Where your AI bill really comes from.

Anthropic was founded in 2021 in San Francisco by Dario and Daniela Amodei together with five other former OpenAI staff, with Amazon and Google as strategic backers. About 2,500 people work on the product. The training approach is called Constitutional AI: the model is trained against an explicit set of written principles rather than only against human ratings.

The product is the Claude family of models, exposed through a REST API at api.anthropic.com. The line-up runs Opus for heavy reasoning, Sonnet as the daily workhorse and Haiku as the fast, cheap tier, with the 4.x generation as the current release. The endpoints that matter for a warehouse are the Messages API, the Message Batches API, the Models API, the Files and Skills APIs for agent workflows, and the Admin Usage Report at /v1/organizations/usage_report/messages, which returns uncached input tokens, cache-creation tokens, cache-read tokens, output tokens and service tier per workspace, per API key and per model in 1m, 1h or 1d buckets. That last endpoint turns Claude from a black-box invoice into something a finance and product team can attribute by feature.
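As a sketch of what a pull from that endpoint looks like: the function below only assembles the request, it does not send it. The header names follow Anthropic's Admin API conventions; the query-parameter names (starting_at, bucket_width, group_by) are taken from the Admin API docs at the time of writing and should be verified against the current reference before use.

```python
# Hypothetical request assembly for the Admin Usage Report.
# Parameter names are assumptions based on Anthropic's Admin API docs;
# verify against the current reference before relying on them.

BASE_URL = "https://api.anthropic.com/v1/organizations/usage_report/messages"

def build_usage_report_request(admin_key: str, starting_at: str,
                               bucket_width: str = "1d") -> dict:
    """Assemble URL, headers and query parameters for one report page."""
    assert bucket_width in ("1m", "1h", "1d")  # the three bucket widths the report supports
    return {
        "url": BASE_URL,
        "headers": {
            "x-api-key": admin_key,            # an organisation Admin key, not a workspace key
            "anthropic-version": "2023-06-01", # required API version header
        },
        "params": {
            "starting_at": starting_at,        # RFC 3339 timestamp for the first bucket
            "bucket_width": bucket_width,
            # group_by controls the breakdown dimensions of the report
            "group_by[]": ["workspace_id", "api_key_id", "model"],
        },
    }

# Example with a placeholder key; a real pull would pass this dict to an HTTP client.
req = build_usage_report_request("sk-ant-admin-example", "2025-01-01T00:00:00Z")
```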

What your Anthropic Claude data is for

What you get once Anthropic Claude is connected.

AI spend attributed to features and customers

Token spend, model mix and cache-hit rate per workspace, API key and model on one timeline.

  • Spend per API key joined to the product feature behind it
  • Opus, Sonnet and Haiku mix per workspace and per week
  • Cache-read tokens versus uncached input tokens to spot prompts that lost the cache

Cost-control automation

Push usage signals back into the tools where decisions about Claude really get made.

  • Slack alert when daily Opus spend on one feature crosses a budget
  • API key paused or rate-limited when a workspace burns its monthly tier in a week
  • CRM contact tagged when a customer's AI feature runs above the contracted token allowance

AI workflows on AI usage

Use Claude usage history to feed the next round of prompt and routing decisions.

  • Routing scoring that picks Haiku, Sonnet or Opus per request based on past quality and cost
  • Prompt-template ranking on output-tokens-per-task and tool-use round-trip count
  • Drift detection on input-token length per template to catch prompts that grew without anyone noticing

Custom apps on your data

Internal tools on Claude usage data for teams that do not log into the Anthropic Console.

  • AI cost dashboard per product feature, per customer segment, per week
  • Prompt registry showing which templates are still in production and what they cost to run
  • Per-customer AI-usage view next to MRR for finance and customer success
Use cases

Use cases we deliver with Anthropic Claude data.

A list of concrete reports, automations and AI features we have built on Anthropic Claude data. Pick the one that matches your situation.

Token spend per feature: Input, output and cache tokens per API key, joined to the product feature behind the key.
Model-tier mix: Share of calls and spend across Opus, Sonnet and Haiku per workspace, per week.
Prompt-cache hit rate: Cache-read tokens divided by uncached input tokens, per template and per workspace.
Tool-use round-trip count: Number of tool calls per agent task and the output tokens spent on each round.
Per-customer AI usage: Token spend joined to CRM customer, contract tier and MRR.
Service-tier consumption: Standard, batch, priority and flex usage side by side, including the batch discount.
Output-token drift per template: Average output tokens per template over time, to catch responses growing silently.
Web-search server-tool usage: Web-search requests through the Messages API as a separate cost line.
Workspace-level budget burn: Spend against the monthly tier limit per workspace and projected month-end position.
Multi-account consolidation: Usage across several Anthropic organisations rolled up into one picture.
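The budget-burn projection in that list reduces to simple arithmetic. A minimal sketch, assuming month-to-date spend is already summed per workspace; a real pipeline would smooth over weekday and release patterns rather than project linearly:

```python
from datetime import date
import calendar

def projected_month_end(spend_to_date: float, today: date) -> float:
    """Naive linear projection of month-end spend from the run rate so far.
    Illustrates the 'workspace-level budget burn' metric; production
    versions would account for weekday/weekend traffic patterns."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spend_to_date / today.day * days_in_month

# $500 spent by 10 January projects to $1,550 for the 31-day month.
projected_month_end(500.0, date(2025, 1, 10))  # 1550.0
```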
Real business questions

Answers you will finally get.

Which feature is driving our Anthropic bill?

Token spend per API key over the last thirty days, joined to the product feature behind each key, with the Opus, Sonnet and Haiku split on top. Surfaces the one feature on Opus that is producing eighty percent of the bill while support reply drafts on Haiku barely register, before the next monthly invoice arrives as a single number.

Are our prompt caches really being hit?

Cache-read tokens versus uncached input tokens per template and per workspace, on a daily curve. Catches the system prompt that quietly grew past the cache window after a release, or the template whose preamble keeps shifting by a few tokens so the cache misses every call, both of which double the input bill without changing what the product does.

Which customers are consuming AI features beyond what their contract assumed?

Token spend joined to CRM customer, contract tier and MRR, with usage per customer ranked against their tier allowance. Shows the customer on a small plan whose AI assistant is running tens of thousands of Opus calls a month, so account management gets a real number to take into the renewal conversation instead of a hunch.

Value for everyone in the organisation

Where each function gets value.

For finance leaders

Anthropic spend per product feature, per customer segment and per service tier instead of one line on the SaaS bill. The AI cost moves from a fixed unknown into a metric tied to the customers and the features that produce it, in time to act on it before the renewal of the Anthropic spend commitment.

For sales leaders

AI usage per customer in the same record reps already open, so a customer running heavy Opus traffic on a small plan becomes a renewal conversation instead of a surprise on the year-end review.

For operations

Cache-hit rate, model mix, tool-use round-trip count and output-token drift per template over ninety days. The behaviour of the AI features is followed as a curve, not rediscovered the morning a deploy quietly tripled the bill.

Ideas

What you can automate with Anthropic Claude.

Pair with Slack

Push Anthropic spend alerts into Slack

Daily token spend per API key from the Anthropic Admin Usage Report lands in Slack as a per-feature line. The product team gets a ping the day Opus traffic on the new agent feature crosses its budget, instead of finding out a week into the next billing cycle. Threshold breaches reference the workspace, the model and the API key so the on-call engineer knows where to look first.

Pair with HubSpot

Sync per-customer AI usage into HubSpot

Anthropic token spend per API key is mapped to the HubSpot customer it serves and lands on the contact record next to MRR and contract tier. Account managers see the customer on a small plan whose AI assistant runs tens of thousands of Opus calls a month before the renewal conversation, and customer success can flag accounts whose AI usage is creeping toward what the contract assumed.

Pair with PostHog

Join PostHog product events with Anthropic token usage

PostHog events for AI features (prompt submitted, agent task started, summary generated) are joined to the Claude usage report on workspace and timestamp. Product gets cost per AI action, not just cost per API call, so a feature that fires three Opus calls per click becomes visible against a feature that fires one Haiku call. The same join answers which features have a healthy ratio of usage to spend and which are losing money on every interaction.
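That join can be sketched in a few lines. All names here are hypothetical: it assumes events carry a workspace id and timestamp, and usage rows carry a workspace id, an hourly bucket start and a cost figure already derived from the token counts:

```python
from collections import defaultdict
from datetime import datetime

def hour_key(ts: str) -> str:
    # Truncate an ISO timestamp to the hour so events line up with 1h usage buckets.
    return datetime.fromisoformat(ts).strftime("%Y-%m-%dT%H:00")

def cost_per_action(events, usage_rows):
    """events: (workspace_id, timestamp) tuples.
    usage_rows: (workspace_id, bucket_start, cost_usd) tuples.
    Returns estimated cost per product action for each (workspace, hour)."""
    actions = defaultdict(int)
    for ws, ts in events:
        actions[(ws, hour_key(ts))] += 1
    out = {}
    for ws, bucket_start, cost in usage_rows:
        key = (ws, hour_key(bucket_start))
        n = actions.get(key)
        if n:
            out[key] = cost / n  # spread the bucket's spend across that hour's actions
    return out

events = [("ws1", "2025-01-01T10:05:00"), ("ws1", "2025-01-01T10:40:00")]
usage = [("ws1", "2025-01-01T10:00:00", 0.50)]
cost_per_action(events, usage)  # {("ws1", "2025-01-01T10:00"): 0.25}
```

In a warehouse this would be one SQL join on workspace and truncated timestamp; the Python form just makes the key construction explicit.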

Pair with Gong

Map Gong call AI summaries to Claude inference cost

Gong call IDs that triggered an AI summary or topic-extraction job are joined to the Anthropic usage report on the API key and time window of the run. Revenue operations sees cost per call summarised, broken down by model and by tool-use round-trip count, so the choice between summarising every call on Sonnet and only the closed-won ones on Opus stops being a guess. The same view shows which sales teams are pulling the most AI-summary cost.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your Anthropic Claude data lives.

Power BI (Microsoft)
Fabric (Microsoft)
Snowflake (Data warehouse)
BigQuery (Google)
Tableau (Visualisation)
Excel (Sheets & pivots)
Three steps

From Anthropic Claude to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • Anthropic Claude connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit: Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit: Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres instance.

Which Claude usage data does the connector really pull?

The Admin Usage Report at /v1/organizations/usage_report/messages is the primary source. It returns uncached input tokens, cache-creation tokens, cache-read tokens, output tokens, web-search server-tool requests and the service tier (standard, batch, priority, flex) per workspace, per API key and per model in 1m, 1h or 1d buckets. Customer prompts and completions are not pulled, only the metering. Anything else in the warehouse, like which feature owns which API key, has to be joined in from your own systems.
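The warehouse step is a plain flattening of those report buckets. The JSON shape and field names below are assumptions for illustration (including the model id); the real payload may differ, but the point is one warehouse row per (bucket, workspace, API key, model):

```python
# Hypothetical shape of one usage-report bucket; field names are
# assumptions — what matters is the flattening into warehouse rows.
sample_bucket = {
    "starting_at": "2025-01-01T00:00:00Z",
    "results": [
        {
            "workspace_id": "ws_1",
            "api_key_id": "key_A",
            "model": "claude-sonnet-4",   # placeholder model id
            "uncached_input_tokens": 12000,
            "cache_creation_tokens": 3000,
            "cache_read_tokens": 45000,
            "output_tokens": 8000,
            "service_tier": "standard",
        }
    ],
}

def flatten(bucket: dict) -> list[dict]:
    """Turn one time bucket into warehouse rows, carrying the bucket start."""
    return [{"bucket_start": bucket["starting_at"], **r}
            for r in bucket["results"]]

rows = flatten(sample_bucket)
```

Feature ownership is not in the payload: the join from api_key_id to a product feature happens against your own key registry in the warehouse.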

Can we see whether prompt caching is paying off?

Yes. The usage report splits cache-read input tokens from uncached input tokens, including the 5-minute and 1-hour ephemeral cache buckets, so the cache-hit ratio per template and per workspace is a direct division. A template whose system prompt drifts by a few tokens between releases shows up as a flat cache-read line and a rising uncached-input line on the same day.
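That direct division can be written down explicitly. A minimal sketch of the metric as defined above (cache-read over uncached input tokens), with a guard for empty buckets:

```python
def cache_hit_ratio(cache_read_tokens: int, uncached_input_tokens: int) -> float:
    """Cache-read tokens over uncached input tokens, as defined in the text.
    A falling ratio flags templates whose prompts drifted out of the cache."""
    if uncached_input_tokens == 0:
        # All-cached traffic has no meaningful denominator; empty buckets score 0.
        return float("inf") if cache_read_tokens else 0.0
    return cache_read_tokens / uncached_input_tokens

# 45,000 cached reads against 12,000 uncached input tokens:
cache_hit_ratio(45000, 12000)  # 3.75 — most input is served from the cache
```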

What if we run Claude through Amazon Bedrock or Google Vertex AI instead of the direct API?

Anthropic publishes Claude on Amazon Bedrock, Google Vertex AI and Microsoft Azure AI alongside the direct API, but billing and usage telemetry then live in the cloud provider's console, not in the Anthropic Admin API. Workloads on Bedrock or Vertex have to be pulled via the relevant cloud-billing connector to get the same per-feature attribution. Mixed setups end up with two sources joined in the warehouse on workspace, model and time window.

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your Anthropic Claude setup and the systems around it. Together we pick the first thing worth building.