Superagent connector

Put Superagent in front of every AI feature that touches your data.

Data Panda lands your business data in one warehouse, and Superagent sits between that data and the AI features your product runs. Prompt injections get blocked, customer PII gets redacted before it leaves your perimeter, and red-team scenarios run against your agents in CI before a release ships.

About Superagent

Open-source safety layer for AI agents.

Superagent is built by Superagent Technologies, a Y Combinator W24 company started in 2024 by Ismail Pelaseyed and Alan Zabihi. The team is small, the codebase is on GitHub at superagent-ai/superagent under the MIT licence, and the open-source SDK is the entry point for everything else they sell. The pitch is that a system prompt and a model provider's content filter are not enough on their own to keep an AI agent from leaking data, calling the wrong tool or following a malicious instruction hidden in a document.

The SDK exposes four methods you call on inputs, outputs or intermediate steps. Guard classifies a payload for prompt injection, jailbreak attempts, malicious instructions and unsafe tool calls, with open-weight models in 0.6B, 1.7B and 4B parameter sizes that run on your own infrastructure at 50 to 100 ms latency, or against the hosted API with no key required for the default model. Redact strips PII, PHI and secrets from text before it reaches the model or leaves your perimeter. Scan reads repositories and files (PDFs, images, URLs) for repo poisoning and other attacks that target AI agents. Test runs red-team scenarios against a deployed agent, so a release that quietly weakened the guardrails fails the build instead of failing in production. There are TypeScript and Python SDKs, a CLI for batch and CI runs, and an MCP Server compatible with Claude Code and Claude Desktop. The same team also ships VibeKit (sandboxed code-agent execution), ReAG (reasoning-augmented generation), Grok CLI, Brin and PolyResearch as separate open-source projects.
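
To make the call pattern concrete, the sketch below wraps an agent call with a guard step on the way in, a redact step before the payload leaves, and a guard on the way out. The `guard` and `redact` functions here are toy stand-ins (a keyword check and a regex), not the Superagent SDK's actual API; real method names, signatures and verdict shapes may differ.

```python
import re

# Toy stand-ins for Superagent's Guard and Redact. Illustrative only.
INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guard(payload: str) -> dict:
    """Toy classifier: flag payloads containing known injection phrases."""
    flagged = any(marker in payload.lower() for marker in INJECTION_MARKERS)
    return {"verdict": "block" if flagged else "pass"}

def redact(payload: str) -> str:
    """Toy redactor: mask email addresses before the text leaves the perimeter."""
    return EMAIL_RE.sub("<REDACTED:email>", payload)

def run_agent(user_input: str, call_model) -> str:
    # Guard the raw input before the model or any tool sees it.
    if guard(user_input)["verdict"] == "block":
        return "Request blocked by safety layer."
    # Strip PII from the payload before it goes to the model.
    clean = redact(user_input)
    output = call_model(clean)
    # Guard the output too: a model can echo an injected instruction back.
    if guard(output)["verdict"] == "block":
        return "Response withheld by safety layer."
    return output
```

The same shape applies whether the classifier is the hosted API or a self-hosted open-weight model; only the transport behind `guard` changes.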

What your Superagent data is for

What you get once Superagent is connected.

AI safety telemetry next to AI usage

Guard verdicts, redaction events and red-team test results joined to the model usage that produced them.

  • Prompt-injection detections per feature, per model and per week
  • Redacted-token volume per data source so the heaviest PII paths surface
  • Red-team test pass rate per release tagged against the deploy that introduced the regression
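
Once the verdicts sit next to usage rows in the warehouse, the first bullet is a small aggregation. The row shape below is illustrative, not Superagent's export schema:

```python
from collections import defaultdict

def injection_rate_by_feature_week(verdicts):
    """verdicts: iterable of dicts with 'feature', 'week' and 'verdict' keys.
    Returns {(feature, week): share of payloads flagged as injection}."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for v in verdicts:
        key = (v["feature"], v["week"])
        totals[key] += 1
        if v["verdict"] == "injection":
            flagged[key] += 1
    return {key: flagged[key] / totals[key] for key in totals}
```

The same grouping keys extend to per-model and per-tenant cuts by widening the tuple.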

Block, log, alert in the runtime

Guard verdicts drive what happens next, instead of being a number on a dashboard nobody opens.

  • Slack alert when prompt-injection rate on one tenant crosses a threshold
  • Tickets opened automatically for any payload classified as a malicious instruction with the agent name and the tool that almost ran
  • CI build fails when a Test red-team scenario regresses against the previous release
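
The first of those alerts is mechanically simple once verdicts are in one place. A sketch with invented field names and a generic `notify` hook standing in for whatever Slack or ticketing integration you wire up:

```python
def check_tenant_injection_rate(tenant, flagged, total, threshold, notify):
    """Fire an alert when a tenant's prompt-injection rate crosses a threshold.
    `notify` is the alerting hook of your choice (Slack webhook, ticket, ...)."""
    if total == 0:
        return None
    rate = flagged / total
    if rate > threshold:
        notify(f"Tenant {tenant}: injection rate {rate:.1%} exceeds {threshold:.1%}")
    return rate
```

Run per tenant on each sync, the alert fires as the curve crosses the line rather than when someone opens a dashboard.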

Cleaner data into the model

Redact and Guard run on warehouse-derived context before it reaches Claude, GPT or whatever sits behind the agent.

  • PII and secrets stripped from retrieval results before they enter the prompt
  • Untrusted document text guarded for prompt-injection patterns before it reaches a tool-using agent
  • Per-tenant Guard model picked from the 0.6B, 1.7B or 4B open-weight tier based on latency budget
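
Picking a Guard tier per tenant can be as plain as a lookup against the tenant's latency budget. The cut-offs below are invented placeholders; the right values depend on your hardware and on where inside the 50 to 100 ms range each model actually lands for you:

```python
def pick_guard_tier(latency_budget_ms: int) -> str:
    """Map a tenant's latency budget to an open-weight Guard model size.
    Thresholds are illustrative; tune them against measured latency."""
    if latency_budget_ms < 60:
        return "0.6B"   # tightest budget: smallest model
    if latency_budget_ms < 90:
        return "1.7B"
    return "4B"         # room for the largest tier
```
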

Custom apps for the people who answer to the audit

Internal views on Superagent telemetry for compliance, security and product teams that do not live in the SDK logs.

  • Per-customer log of guarded inputs and outputs, exportable for a procurement-review evidence pack
  • Compliance dashboard with redaction coverage per data source and per pipeline
  • Release-readiness view showing red-team Test pass rate against the previous baseline

Use cases

Use cases we deliver with Superagent data.

A list of concrete reports, automations and AI features we have built on Superagent data. Pick the one that matches your situation.

Prompt-injection detection rate: Guard verdicts per feature, per model and per tenant on a daily curve.
PII and PHI redaction coverage: Redacted-token volume per data source and per pipeline, with the categories that fired.
Unsafe tool-call attempts: Number of tool calls Guard blocked, broken down by tool and by calling agent.
Repository poisoning findings: Scan results across the repos and document corpora your agents read from, ranked by severity.
Red-team Test pass rate: Test scenarios passed and failed per release, tagged to the deploy that changed the result.
Guard latency per tenant: Median and p95 Guard latency per environment, to keep the safety layer inside its budget.
Self-hosted vs hosted Guard mix: Share of Guard calls running on the 0.6B, 1.7B or 4B open-weight model versus the hosted API.
Guarded payload audit trail: Per-payload log of input, verdict, redaction and the action the agent ended up taking.
MCP Server traffic: Calls flowing through the Superagent MCP Server from Claude Code, Claude Desktop and other MCP clients.
Compliance evidence pack: Per-customer export of guarded interactions, suitable for a procurement-review or SOC 2 evidence request.

Real business questions

Answers you will finally get.

Are prompt injections reaching our agents, and which feature is the most exposed?

Guard verdicts per feature, per model and per tenant on a daily curve, with the suspicious payloads themselves available behind a click. Surfaces the support-ticket summarisation feature whose user-supplied content carries an order of magnitude more injection attempts than the internal copilot, and the one tenant whose documents have started arriving with payloads that smell like targeted attacks instead of generic web noise.

Is PII being redacted before it reaches the model?

Redacted-token volume per data source and per pipeline, broken down by the categories that fired (email, phone, national-ID, secret keys, health terms). Catches the new ETL pipeline that piped a customer-message field straight into the prompt without going through Redact, and shows the data sources where the redaction is doing the heaviest lifting so the warehouse owners know which contracts and which retention policies matter.

Did this release weaken the guardrails on our agents?

Red-team Test pass rate per release against the previous baseline, with the failing scenarios listed. Wires into CI so a release that broke a guardrail fails the build instead of failing in production a week later, and gives security a per-deploy timeline they can hand to an auditor without reconstructing it after the fact.
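
Wired into CI, the gate itself is a comparison against a stored baseline; the sketch below returns a non-zero exit code on any regression. Field names and the baseline format are assumptions, not Superagent's Test output:

```python
def release_gate(current: dict, baseline: dict) -> list:
    """Return the red-team scenarios that passed at baseline but fail now.
    current/baseline map scenario name -> True (passed) / False (failed)."""
    return sorted(
        name for name, passed_before in baseline.items()
        if passed_before and not current.get(name, False)
    )

def ci_exit_code(current: dict, baseline: dict) -> int:
    """Non-zero exit fails the build; CI runners treat that as a failed step."""
    regressions = release_gate(current, baseline)
    for name in regressions:
        print(f"REGRESSION: scenario '{name}' passed last release, fails now")
    return 1 if regressions else 0
```
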

Value for everyone in the organisation

Where each function gets value.

For finance leaders

Self-hosted versus hosted Guard mix and the latency that comes with each, so the cost of running the 4B open-weight model on your own GPU instead of the hosted API becomes a number the budget owner can argue, not a hand-wave.

For sales leaders

Per-customer evidence packs of guarded inputs and outputs, ready to drop into a procurement review or a SOC 2 evidence request, so the security questionnaire stops being a deal blocker and becomes a paragraph the AE answers from the CRM.

For operations

Guard verdicts, redaction events and red-team Test results sitting on the same timeline as the AI feature usage that produced them. Security and platform engineering follow guardrail behaviour as a curve, instead of waiting for the incident that finally makes it visible.

Your existing tools

Your data lands in a warehouse. Your BI tools read from it.

You keep the reporting tool you already have. We connect it to the warehouse where your Superagent data lives.

Power BI Microsoft
Fabric Microsoft
Snowflake Data warehouse
BigQuery Google
Tableau Visualisation
Excel Sheets & pivots

Three steps

From Superagent to answers in three steps.

01

Connect securely

OAuth authentication. Read-only by default. We sign a DPA and your admin keeps the keys.

02

Land in your warehouse

Data flows into your warehouse on your schedule. Near real time or nightly, your call. You own the data.

03

Reporting, automation, AI

We build the first dashboard, workflow or AI feature with you, then hand over the keys. Or we stay on for ongoing delivery.

Two ways to work with us

Pick the track that fits how you work.

Track 01

Self-serve

We set up the foundation. Your team builds on top.

  • Superagent connector configured and running
  • Warehouse set up in your cloud account
  • Clean access for your Power BI, Fabric or Tableau team
  • Documentation on what's in the data model
  • Sync monitoring so you're warned before reports break

Best fit: Teams that already have a BI analyst or data engineer and want to own the build.

Track 02

Done for you

We build the whole thing, end to end.

  • Everything in Self-serve
  • Dashboards built to the questions your team actually asks
  • Automations between your systems
  • AI workflows scoped to real tasks your team runs
  • Custom apps where a dashboard does not cut it
  • Ongoing delivery at a pace that fits your team

Best fit: Teams without in-house BI or dev capacity. You tell us what you need and we deliver it.

Before you book

Frequently asked questions.

Who owns the data?

You do. It lands in your warehouse, on your cloud account. We don't resell or aggregate it. If you stop working with us, the warehouse stays yours and keeps running.

How fresh is the data?

Near real time for most operational systems. For heavier sources we schedule hourly or nightly. You pick based on what the reports need.

Do I need a warehouse already?

No. If you don't have one, we help you pick one and set it up as part of the first delivery. Common starting points are Snowflake, Microsoft Fabric, or a small Postgres instance.

What does the Superagent SDK do at runtime?

Four methods you call on inputs, outputs or intermediate steps of an agent. Guard classifies a payload for prompt injection, jailbreak, malicious instructions and unsafe tool calls. Redact strips PII, PHI and secrets from text. Scan analyses repositories and files (including PDFs, images and URLs) for AI-targeted attacks like repo poisoning. Test runs red-team scenarios against a deployed agent. The SDK is open-source under the MIT licence at github.com/superagent-ai/superagent and ships in TypeScript and Python, with a CLI and an MCP Server alongside.

Does Superagent run on its own infrastructure, or do calls go to a hosted API?

Both options ship out of the box. The hosted API requires no key for the default Guard model, which makes the SDK easy to start with. Open-weight Guard models are published in 0.6B, 1.7B and 4B parameter sizes for self-hosted deployment, with 50 to 100 ms latency once loaded. Teams that cannot send tenant content to a third-party endpoint run the open-weight model behind their own perimeter and keep every payload local.

Is Superagent the same thing as LangChain or AutoGen?

No. Superagent does not orchestrate agents, route tools or hold memory. It sits next to whatever framework or model you already use (OpenAI, Anthropic, Google, Bedrock, Groq, Fireworks and others) and inspects what flows in and out. The SDK is model-agnostic on purpose, so the same Guard, Redact and Scan logic protects an agent regardless of which framework or which provider builds it.

GDPR-compliant
Data stays in the EU
You own the warehouse

A first deliverable live in four to six weeks.

We review your Superagent setup and the systems around it. Together we pick the first thing worth building.