About Mistral AI
Open-weights AI built in Paris.
Mistral AI was founded in April 2023 in Paris by Arthur Mensch (CEO, ex-DeepMind), Guillaume Lample and Timothée Lacroix (both ex-Meta FAIR). The company shipped its first open-weights model within months of starting and has held to that pattern: many of its generalist models are released under permissive licenses, with weights you can download, self-host and fine-tune. The positioning is European and sovereignty-aware, with hybrid deployment as a first-class option rather than an afterthought. That is why financial-services, public-sector and healthcare buyers in the EU end up shortlisting Mistral when data residency is a hard constraint.
The current lineup spans several families. The generalist tier is Mistral Large 3 (the open-weights multimodal flagship), Mistral Medium 3.1 (premier multimodal), Mistral Small 4 (hybrid instruct/reasoning/coding) and the Ministral 3 series at 14B, 8B and 3B for edge and low-latency work. Magistral Medium 1.2 covers reasoning workloads. Codestral and Devstral 2 handle code completion and coding agents respectively. Mistral Embed and Codestral Embed produce vectors for retrieval. OCR 3 powers the Document AI stack, processing up to two thousand pages per minute on a single node and returning text, tables, equations and images in reading order. Voxtral Mini Transcribe 2 handles speech-to-text.

The whole stack runs on La Plateforme at api.mistral.ai, on AWS Bedrock, Azure AI Foundry, Google Vertex AI, Snowflake Cortex and IBM watsonx, or fully self-hosted in a customer VPC or on premises. Le Chat Enterprise sits on top as the assistant layer for end users.
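For a sense of what calling La Plateforme looks like, here is a minimal sketch of a chat completion request against api.mistral.ai using only the Python standard library. It assumes the OpenAI-style `/v1/chat/completions` endpoint with Bearer-token auth and a `MISTRAL_API_KEY` environment variable; the model alias used below is illustrative, so check the current model list before relying on it.

```python
import json
import os
import urllib.request

# La Plateforme chat completion endpoint (assumed OpenAI-style schema).
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated POST request for a single-turn chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Key read from the environment; empty string if unset.
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        },
        method="POST",
    )

# Uncomment to actually send the request (requires a valid MISTRAL_API_KEY):
# with urllib.request.urlopen(build_chat_request("mistral-small-latest", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works unchanged whether the endpoint is the hosted La Plateforme URL or a self-hosted deployment behind a different base URL, which is the point of the hybrid-deployment story above.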