Cortecs connector

Plug Cortecs into your warehouse data.

Data Panda lands your business data in one warehouse and lets Cortecs read it. The Vienna-based router exposes one OpenAI-compatible endpoint that fans out to 150+ model endpoints across Mistral, Claude, GPT, Gemini, Qwen, DeepSeek and Kimi, hosted in EU data centres in Spain, Germany, France, Finland and Poland.

About Cortecs

An EU-only AI router built in Vienna.

Cortecs is run by Cortecs GmbH out of Althanstrasse 4 in Vienna. The product is an AI gateway that sits between your application and the model providers. You call one OpenAI-compatible endpoint, Cortecs filters the providers that match your residency, latency and cost constraints, ranks what remains by price and performance, and dispatches the call. Inference happens in EU data centres in Spain, Germany, France, Finland and Poland, prompts are processed in temporary memory and then deleted, and Cortecs documents that prompts are never stored or used for training.

The catalogue covers more than 150 model endpoints across the names you would expect on a shortlist: Mistral, Anthropic Claude, OpenAI GPT, Google Gemini, Qwen, DeepSeek and Kimi. The same gateway exposes embeddings for search, clustering and recommendation work, image-input handling for the multimodal models, and audio transcription on the providers that ship a speech model. Pricing is pure pass-through. You top up credits, Cortecs takes a flat 5% on the top-up, and the per-token cost on the dashboard is the provider's base rate. There is no monthly subscription, no rate-limit ceiling on the gateway side, and the router publishes a 32% average cost reduction figure based on routing the same workload across providers instead of pinning it to one.
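
The one-endpoint pattern above can be sketched in a few lines. The base URL below is a placeholder, not a documented Cortecs value, and the model ID is illustrative; the real ones come from your Cortecs dashboard.

```python
import json

# Placeholder: the real base URL and API key come from your Cortecs
# dashboard; nothing here is a documented Cortecs identifier.
CORTECS_BASE_URL = "https://api.example-cortecs.eu/v1"

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions request body.

    Because the gateway speaks the OpenAI wire format, the same body works
    whether the router dispatches the call to Mistral, Claude, GPT or
    Gemini: switching providers is a change to the model string, not to
    this code.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_payload("mistral-large", "Summarise this contract clause.")
# POST json.dumps(body) to f"{CORTECS_BASE_URL}/chat/completions"
# with an Authorization: Bearer <key> header, via urllib or requests.
print(json.dumps(body))
```

This is why "provider switching without code changes" holds: the application only ever builds this one payload shape.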

What your Cortecs data is for

What you get once Cortecs is connected.

Gateway usage you can read

Cortecs spend, model mix and routing decisions per workflow, next to the warehouse content the gateway is reading.

  • Token spend per Cortecs workflow, broken out per underlying provider so the cross-provider routing shows up on the report
  • Routing-decision log per call: which providers were eligible, which one was picked and on what criterion (price, latency, region)
  • EU-region breakdown of where each call landed, so a DPO sees Spain, Germany, France, Finland or Poland next to the workflow
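
The spend breakdown in the first bullet is a simple roll-up once the call log is in the warehouse. The row shape below is an assumption about how the log lands; the real column names depend on the sync.

```python
from collections import defaultdict

def spend_by_workflow_and_provider(rows):
    """Roll routed calls up to (workflow, provider) totals, so the
    cross-provider routing is visible on the report instead of hidden
    inside one invoice total. Field names are illustrative."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["workflow"], row["provider"])] += row["cost_eur"]
    return dict(totals)

calls = [
    {"workflow": "ticket-triage", "provider": "mistral", "cost_eur": 0.012},
    {"workflow": "ticket-triage", "provider": "claude", "cost_eur": 0.031},
    {"workflow": "briefings", "provider": "gpt", "cost_eur": 0.044},
]
print(spend_by_workflow_and_provider(calls))
```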

AI decisions wired back into the business

Pipe Cortecs output straight into the systems where the work happens, with the routing layer hidden behind one endpoint.

  • Inbound emails classified by intent and language and dropped into the right Zendesk or HubSpot queue before an agent reads them
  • Sales drafts written on top of CRM history and product docs, queued in the rep's outbox for review
  • Quality-flagged warehouse rows enriched with a model-generated explanation column the operator can scan instead of opening the source row

RAG and classification on your own warehouse

Cortecs reads what is in the warehouse, the multilingual stack handles the language work, and nothing leaves the EU.

  • Q&A over policy docs, contracts and SOPs in the warehouse with citations back to the source row
  • Embeddings produced through the gateway and indexed against warehouse tables for retrieval
  • Multilingual support and CRM-note classification in NL, FR, DE and EN routed to whichever model wins on price for that month
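
The embeddings-and-retrieval bullet above reduces to a similarity lookup once vectors sit next to the warehouse rows. The vectors here are toy two-dimensional values; in practice they come back from the gateway's embeddings endpoint.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec, rows):
    """Return the id of the warehouse row whose stored embedding is
    closest to the query. `rows` is a list of (row_id, vector) pairs;
    in a real setup the vectors are produced through the gateway and
    indexed against the warehouse table."""
    return max(rows, key=lambda r: cosine(query_vec, r[1]))[0]

rows = [("policy-12", [0.9, 0.1]), ("contract-7", [0.1, 0.9])]
print(top_match([0.8, 0.2], rows))
```

A production setup would index the vectors rather than scan them, but the join from answer back to source row is exactly this lookup.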

Internal apps on Cortecs plus your data

Tools that sit on warehouse data and only call the gateway for the language and document work.

  • Internal knowledge-base assistant for support and onboarding teams
  • Per-customer briefing screen that summarises CRM, support and contract history before a meeting
  • Provider-agnostic prompt console where the team tries the same prompt across Claude, GPT, Mistral and Gemini without changing the application code

Use cases

Use cases we deliver with Cortecs data.

A list of concrete reports, automations and AI features we have built on Cortecs data. Pick the one that matches your situation.

  • RAG over policy and contract docs: Cortecs answers questions on documents stored in the warehouse, with citations back to the source row, and the call lands in the EU region you specified.
  • Embeddings for warehouse retrieval: embedding endpoints exposed through the same gateway, used to index warehouse tables for search, clustering and recommendation.
  • Multilingual support classification: inbound NL, FR, DE and EN tickets routed to whichever model wins on price and latency, without rewriting the application.
  • Provider switching without code changes: one OpenAI-compatible endpoint means swapping Mistral for Claude or GPT is a routing rule, not a deployment.
  • Multimodal document intake: image-input handling routed to the multimodal models for forms, receipts and screenshots, with the structured output landing back in the warehouse.
  • Audio transcription pipeline: speech-to-text through the gateway for the providers that ship an audio model, joined to the meeting or call record in the warehouse.
  • EU-resident inference for regulated data: inference pinned to one of five EU regions (Spain, Germany, France, Finland, Poland) for workloads that cannot leave the union.
  • Per-customer briefing summaries: CRM, support and contract history summarised into a one-page brief before each meeting, drafted by whichever model the router currently prefers.
  • Cost reporting that beats one invoice: routed-spend report broken out per provider and workflow, instead of one Cortecs invoice with no internal detail.
  • Pay-as-you-go without a subscription: you top up credits, Cortecs takes a flat 5% on the top-up, and you burn down at provider base rates with no rate-limit ceiling on the gateway side.
Real business questions

Answers you will finally get.

Which model did the gateway pick for each workflow last month, and at what cost?

Routing-decision log per Cortecs call, joined to the workflow that triggered it and to the eventual outcome. Surfaces the workflows where the router keeps switching between providers because none clearly wins on price, and the workflows where it has settled on one vendor for so long that the routing layer is no longer earning its keep. The cost column shows the provider base rate, not a marked-up gateway price.

Did our regulated workloads stay inside the EU region we specified?

Per-call EU-region log, with the workflow's data-classification flag on the same line. Shows which calls landed in Spain, Germany, France, Finland or Poland and which (if any) hit a non-EU fallback the application code did not expect, instead of trusting the dashboard summary at the gateway level.
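
The residency check described above is a one-line filter once the per-call log is in the warehouse. The region codes and field names below are assumptions about how the log lands.

```python
# The five EU regions Cortecs routes to: Spain, Germany, France,
# Finland, Poland. Codes are illustrative, not a documented schema.
EU_REGIONS = {"ES", "DE", "FR", "FI", "PL"}

def non_eu_calls(call_log):
    """Flag any call whose landing region is outside the five EU
    regions: an unexpected fallback a DPO wants to see immediately,
    rather than trusting a gateway-level dashboard summary."""
    return [c for c in call_log if c["region"] not in EU_REGIONS]

log = [
    {"workflow": "ticket-triage", "region": "DE"},
    {"workflow": "briefings", "region": "US"},  # would be a policy breach
]
print(non_eu_calls(log))
```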

Is the 32% routing saving real for our specific traffic mix?

Workflow-level cost-per-task rebuild that compares Cortecs' routed cost against the cost of pinning the same workload to a single provider for the same period. Confirms whether the published 32% average shows up in your mix, or whether your traffic is concentrated on the cheapest provider already and the saving is closer to zero.
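
The rebuild above is a minimal comparison: routed spend takes the cheapest provider per call, the baseline pins the whole workload to the single cheapest provider. The per-call cost shape is hypothetical; the real numbers come from the routing log.

```python
def saving_vs_best_pin(call_costs):
    """Fraction saved by per-call routing versus the best single pin.

    `call_costs` is a list of dicts mapping provider -> cost for that
    call (hypothetical shape). Routing picks the cheapest provider per
    call; the baseline is the cheapest provider for the whole workload.
    Returns 0.0 when one provider already wins on every call.
    """
    routed = sum(min(c.values()) for c in call_costs)
    providers = call_costs[0].keys()
    pinned = min(sum(c[p] for c in call_costs) for p in providers)
    return 1 - routed / pinned

costs = [
    {"mistral": 1.0, "gpt": 2.0},  # mistral wins this call
    {"mistral": 3.0, "gpt": 1.0},  # gpt wins this one
]
print(saving_vs_best_pin(costs))
```

When your traffic is concentrated on calls one provider always wins, `routed` equals `pinned` and the saving is zero, which is exactly the scenario the paragraph above warns about.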

Value for everyone in the organisation

Where each function gets value.

For finance leaders

Cortecs spend per workflow and per business unit, joined to the workflow's measured outcome, with the provider mix the router picked. The AI line on the budget moves from one Cortecs invoice to a number that sits next to tickets handled, briefings shipped and embeddings produced.

For sales leaders

Provider-agnostic prompt console for sales enablement: the same briefing or proposal prompt runs on Claude, GPT, Mistral or Gemini through one endpoint, so the rep gets the strongest output for that account without engineering having to re-deploy the application.

For operations

Cortecs usage, latency and routing-decision log per workflow on one screen, refreshed daily. Routing behaviour shows up as a trend the team follows, not a surprise rediscovered the morning a stakeholder forwards a screenshot of a slow or wrong answer.

Data model

Tables we make available.

This is the one table we currently pull from Cortecs into your warehouse. Query it directly in SQL, join it to the rest of your stack, or build reports on top.

  • Models

Missing a table you need? We can extend the sync. Tell us what is missing and we will build it for you.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your Cortecs data lives.

  • Power BI (Microsoft)
  • Microsoft Fabric
  • Snowflake (data warehouse)
  • BigQuery (Google)
  • Tableau (visualisation)
  • Excel (sheets & pivots)

Three steps

From Cortecs to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • Cortecs connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres start.

Which model providers does Cortecs route to?

Cortecs publishes more than 150 model endpoints across Mistral, Anthropic Claude, OpenAI GPT, Google Gemini, Qwen, DeepSeek and Kimi. The gateway also exposes embeddings, image-input handling for the multimodal models, and audio transcription on the providers that ship a speech model. Every model is reachable through the same OpenAI-compatible endpoint, so swapping a workload from one provider to another is a routing rule rather than a code change.

Where does inference run, and how does that hold up for GDPR?

Cortecs runs in EU data centres across Spain, Germany, France, Finland and Poland. Cortecs documents that prompts are processed in temporary memory and deleted immediately, and are never stored or used for training. The product is positioned as a GDPR-compliant router with EU-only hosting, which is why regulated buyers in financial services, public sector and healthcare end up shortlisting it when residency is a hard constraint.

What does Cortecs cost compared to going direct to each provider?

There is no subscription. You top up credits, Cortecs takes a flat 5% on the top-up, and the per-token price on the dashboard is the provider's base rate with no markup. Cortecs publishes a 32% average cost reduction figure based on routing the same workload across providers instead of pinning it to one. Whether that average lands for you depends on your traffic mix, which is why we wire the routed cost back into the warehouse and rebuild it at workflow level.
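
The pricing mechanics above are simple arithmetic. One assumption in the sketch: the flat 5% is treated as charged on top of the credited amount; check your Cortecs invoice for the exact mechanics.

```python
def topup_cost(credits_eur: float, fee_rate: float = 0.05) -> float:
    """Total charged for a credit top-up, assuming the flat 5% fee is
    added on top of the credited amount (assumption, not a documented
    invoice layout). Credits then burn down at provider base rates
    with no per-token markup."""
    return round(credits_eur * (1 + fee_rate), 2)

print(topup_cost(100.0))  # 100 EUR of credits costs 105.0 EUR
```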

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your Cortecs setup and the systems around it. Together we pick the first thing worth building.