Mistral AI connector

Plug Mistral AI into your warehouse data.

Data Panda lands your business data in one warehouse and lets Mistral read it. Mistral Large 3 and Medium 3.1 answer questions over your own documents, Mistral Embed indexes them for retrieval, and OCR 3 turns scanned PDFs and images into structured rows the rest of your stack can use.

About Mistral AI

Open-weights AI built in Paris.

Mistral AI was founded in April 2023 in Paris by Arthur Mensch (CEO, ex-DeepMind), Guillaume Lample and Timothée Lacroix (both ex-Meta FAIR). The company shipped its first open-weights model within months of starting and has held to that pattern: many of the generalist models are released under permissive licenses with weights you can download, host yourself and fine-tune. The positioning is European and sovereignty-aware, with hybrid deployment as a first-class option rather than an afterthought, which is why financial-services, public-sector and healthcare buyers in the EU end up shortlisting Mistral when data residency is a hard constraint.

The current lineup spans four families. The generalist tier is Mistral Large 3 (open-weights, multimodal flagship), Mistral Medium 3.1 (premier multimodal), Mistral Small 4 (hybrid instruct/reasoning/coding) and the Ministral 3 series at 14B, 8B and 3B for edge and low-latency work. Magistral Medium 1.2 covers the reasoning workload. Codestral and Devstral 2 handle code completion and coding agents respectively. Mistral Embed and Codestral Embed produce vectors for retrieval. OCR 3 powers the Document AI stack, processing up to two thousand pages per minute on a single node and returning text, tables, equations and images in reading order. Voxtral Mini Transcribe 2 handles speech-to-text. The whole stack runs on La Plateforme at api.mistral.ai, on AWS Bedrock, Azure AI Foundry, Google Vertex AI, Snowflake Cortex, IBM watsonx, or fully self-hosted in a customer VPC or on-premise. Le Chat Enterprise sits on top as the assistant layer for end users.

What your Mistral AI data is for

What you get once Mistral AI is connected.

RAG quality you can measure

Mistral usage and retrieval performance per workflow, side by side with the warehouse content the model is reading.

  • Token spend per Mistral endpoint and per model, joined to the workflow that triggered the call
  • Retrieval hit rate per query template, so retrievals that miss the right document become visible
  • Mistral Embed coverage check: which warehouse tables are indexed, which are stale, which are not in the vector index at all
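The retrieval hit rate above can be computed directly from logged retrievals. A minimal sketch, assuming an illustrative log schema (`template`, `expected_doc`, `returned_docs`) rather than any fixed Data Panda or Mistral format:

```python
# Hedged sketch: retrieval hit rate per query template from logged
# retrievals. Field names are illustrative, not a fixed schema.
from collections import defaultdict

def hit_rate_per_template(logs, k=3):
    """Share of queries per template where the expected document
    appears in the top-k retrieved results."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for row in logs:
        totals[row["template"]] += 1
        if row["expected_doc"] in row["returned_docs"][:k]:
            hits[row["template"]] += 1
    return {t: hits[t] / totals[t] for t in totals}

logs = [
    {"template": "policy_qa", "expected_doc": "doc-7",
     "returned_docs": ["doc-7", "doc-2", "doc-9"]},
    {"template": "policy_qa", "expected_doc": "doc-4",
     "returned_docs": ["doc-1", "doc-2", "doc-3"]},
    {"template": "contract_qa", "expected_doc": "doc-5",
     "returned_docs": ["doc-5", "doc-8", "doc-6"]},
]
rates = hit_rate_per_template(logs)
# policy_qa misses half the time; contract_qa always finds its document.
```

In the warehouse this is the same aggregation, grouped by query template, that makes a retrieval miss visible as a number instead of a complaint.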

Mistral-driven actions back into the business

Pipe Mistral Large 3 and OCR 3 decisions straight into the systems where the work gets done.

  • Scanned supplier invoices read by OCR 3, structured into a row, posted in the ERP queue before anyone keys them
  • Support tickets classified by Mistral Small 4 in NL, FR and EN, then routed in Zendesk or HubSpot before the first agent reads them
  • Sales emails drafted on top of CRM history and product docs, dropped in the rep's outbox for review

RAG on your own warehouse, in your own region

Mistral Large 3 reads what is in the warehouse, Mistral Embed indexes it, and the whole pipeline can run inside your own VPC.

  • Q&A over policy docs, contracts and SOPs with citations back to the source row in the warehouse
  • OCR 3 over scanned PDFs and image-based forms, returning text, tables and equations in reading order
  • Multilingual customer-record search in NL, FR, DE and EN that returns the same customer regardless of the query language
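The cross-language search in the last bullet rests on one property: queries about the same customer land near the same embedding regardless of language. A toy sketch with 3-dimensional stand-in vectors (a real index would hold Mistral Embed vectors of roughly a thousand dimensions; the ids and values here are made up):

```python
# Hedged sketch: nearest-neighbour lookup by cosine similarity.
# Vectors and customer ids are toy stand-ins for embedded account notes.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

index = {
    "cust-42": [0.9, 0.1, 0.2],  # embedded account notes
    "cust-77": [0.1, 0.8, 0.4],
}

def best_match(query_vec):
    return max(index, key=lambda cid: cosine(query_vec, index[cid]))

# Two queries for the same customer, phrased in different languages,
# should embed close together and return the same id.
nl_query = [0.85, 0.15, 0.25]
fr_query = [0.88, 0.12, 0.18]
```

The language of the query only changes the surface form; in a multilingual embedding space the nearest customer stays the same.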

Custom apps on Mistral plus your data

Internal tools that sit on warehouse data and call Mistral only for the language and document work.

  • Internal knowledge-base assistant for support and onboarding teams
  • Document-intake app where OCR 3 pre-fills the metadata and a reviewer only confirms
  • Per-customer briefing screen that summarises CRM, support and contract history before a meeting

Use cases

Use cases we deliver with Mistral AI data.

A list of concrete reports, automations and AI features we have built on Mistral AI data. Pick the one that matches your situation.

  • RAG over policy and contract docs: Mistral Large 3 answers questions on documents stored in the warehouse, with citations back to the source row.
  • OCR on scanned PDFs and forms: OCR 3 extracts text, tables and equations from scanned documents in reading order at up to two thousand pages per minute.
  • Multilingual customer search: Mistral Embed turns NL, FR, DE and EN customer notes into one vector index per account.
  • Support ticket classification: Mistral Small 4 routes incoming tickets to the right team and tags them with product, urgency and intent across languages.
  • Invoice and document intake: OCR 3 pre-fills supplier-invoice and contract metadata at intake, so the reviewer confirms instead of typing.
  • Reasoning over structured data: Magistral Medium 1.2 walks through multi-step questions on warehouse tables and returns its working.
  • Code completion in the IDE: Codestral and Devstral 2 handle in-IDE completion and coding-agent tasks for internal engineering teams.
  • Per-customer briefing summaries: summarise CRM, support and contract history into a one-page brief before each meeting.
  • Edge and low-latency tasks: the Ministral 3 series at 14B, 8B and 3B handles tasks that need to run close to the user or at scale on cheap inference.
  • Private deployment for regulated data: the same models on AWS Bedrock, Azure AI Foundry, Vertex, Snowflake Cortex, IBM watsonx or inside a customer VPC.

Real business questions

Answers you will finally get.

Can our RAG setup find the right document, in the right language?

Retrieval hit rate per query template, plotted against the warehouse table the answer should have come from, with the language of the query as a separate axis. Surfaces the queries where Mistral Embed returns plausible-looking neighbours but the truly relevant document never reaches the top three, and catches the case where the NL query keeps missing the FR-language source it needed.

What did OCR 3 pull from the scanned invoices last month, and where did it slip?

Per-document OCR 3 output rows joined to the eventual ERP entry, with confidence scores and reviewer corrections on the same line. Shows which supplier templates the model reads cleanly, which ones still need a human pass on the totals or the VAT line, and how many minutes of accounts-payable time the OCR pipeline saved against the manual baseline.
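The per-supplier breakdown described above is a straightforward aggregation once OCR output and the posted ERP entry sit on the same row. A minimal sketch, assuming illustrative field names (`supplier_template`, `ocr_total`, `posted_total`) rather than any fixed schema:

```python
# Hedged sketch: per-supplier-template correction rate, joining OCR
# output rows to the values a reviewer actually posted to the ERP.
from collections import defaultdict

def correction_rate(rows):
    """Share of OCR'd documents per supplier template where the
    reviewer changed the total before posting."""
    corrected = defaultdict(int)
    totals = defaultdict(int)
    for r in rows:
        totals[r["supplier_template"]] += 1
        if r["ocr_total"] != r["posted_total"]:
            corrected[r["supplier_template"]] += 1
    return {t: corrected[t] / totals[t] for t in totals}

rows = [
    {"supplier_template": "acme-v2", "ocr_total": 120.00, "posted_total": 120.00},
    {"supplier_template": "acme-v2", "ocr_total": 99.90,  "posted_total": 99.00},
    {"supplier_template": "globex",  "ocr_total": 54.10,  "posted_total": 54.10},
]
rates = correction_rate(rows)
# acme-v2 still needs a human pass half the time; globex reads cleanly.
```

The same join, extended with reviewer timestamps, is what turns "the OCR pipeline saves time" into minutes saved against the manual baseline.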

Can we keep the model and the data inside the EU?

Mistral publishes open-weights versions of several generalist models, and Le Chat Enterprise is documented as deployable self-hosted, in a private cloud or in a customer VPC. The reporting layer makes the deployment mode visible per workflow, so a DPO can see at a glance which workloads run inside the customer's own infrastructure and which still go to the hosted API, instead of taking the platform team's word for it.

Value for everyone in the organisation

Where each function gets value.

For finance leaders

Mistral spend per endpoint, per workflow and per business unit, joined to the workflow's measured outcome. The AI line on the budget moves from a single Mistral invoice to a number that sits next to invoices processed, support time saved and proposals shipped.

For sales leaders

A multilingual proposal and briefing assistant on top of CRM and the registry of past wins, so the rep walks into an NL, FR or DE meeting with a one-page brief drafted from the closest matching account history instead of the last note someone happened to leave.

For operations

Mistral usage, retrieval-hit rate and OCR confidence per workflow on one screen, refreshed daily. The pipeline is followed as a curve, not rediscovered the morning a stakeholder forwards a screenshot of a wrong answer or a misread invoice.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your Mistral AI data lives.

  • Power BI (Microsoft)
  • Microsoft Fabric
  • Snowflake (data warehouse)
  • BigQuery (Google)
  • Tableau (visualisation)
  • Excel (sheets & pivots)

Three steps

From Mistral AI to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • Mistral AI connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit: Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit: Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres instance.

Which Mistral models are typically used over warehouse data?

Mistral Large 3 is the open-weights multimodal flagship for chat and agent work, with Mistral Medium 3.1 as the premier multimodal alternative and Mistral Small 4 as the hybrid instruct/reasoning/coding tier. Magistral Medium 1.2 covers the explicit reasoning workload. The Ministral 3 series at 14B, 8B and 3B is for edge and low-latency tasks. Mistral Embed and Codestral Embed produce vectors for retrieval, OCR 3 turns scanned documents into structured text, and Voxtral Mini Transcribe 2 handles speech-to-text. Codestral and Devstral 2 cover code completion and coding agents for internal engineering teams.

Can we use Mistral through AWS Bedrock, Azure or Vertex instead of La Plateforme?

Yes. Mistral publishes its models on AWS Bedrock, Azure AI Foundry, Google Cloud Vertex AI, Snowflake Cortex and IBM watsonx alongside the direct API on La Plateforme at api.mistral.ai. Billing and usage telemetry then live in the cloud provider's console rather than in Mistral's dashboard, so cost reporting has to come through the relevant cloud-billing connector. Mixed setups end up with two sources joined in the warehouse on workflow, model and time window.
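The mixed-setup join at the end of that answer can be sketched in a few lines. A hedged example, assuming illustrative schemas for the two sources; the join key (workflow, model, day) mirrors the "workflow, model and time window" described above:

```python
# Hedged sketch: joining Mistral-side usage rows with cloud-billing
# rows on (workflow, model, day). Field names are illustrative.
def join_usage(mistral_rows, billing_rows):
    billed = {(r["workflow"], r["model"], r["day"]): r["cost_eur"]
              for r in billing_rows}
    out = []
    for r in mistral_rows:
        key = (r["workflow"], r["model"], r["day"])
        # None marks usage with no matching billing row yet.
        out.append({**r, "cost_eur": billed.get(key)})
    return out

mistral_rows = [
    {"workflow": "ticket-routing", "model": "mistral-small",
     "day": "2025-06-01", "tokens": 120_000},
]
billing_rows = [
    {"workflow": "ticket-routing", "model": "mistral-small",
     "day": "2025-06-01", "cost_eur": 3.40},
]
joined = join_usage(mistral_rows, billing_rows)
```

In practice this runs as a warehouse query rather than in application code, but the join key is the same either way.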

Can Mistral run inside our own VPC for data that cannot leave the EU?

Yes. Mistral supports private cloud and on-premise deployments for customers who cannot let regulated data leave their network or their region. Open-weights releases like Mistral Large 3, Mistral Small 4 and the Ministral 3 series can be downloaded and hosted on the customer's own infrastructure, and Le Chat Enterprise is documented as deployable self-hosted, in a private cloud or fully inside a customer VPC. Premier models like Mistral Medium 3.1, Magistral Medium 1.2, Codestral and OCR 3 are available through the same partners and through Mistral's enterprise sales for private inference.

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your Mistral AI setup and the systems around it. Together we pick the first thing worth building.