# Choosing a Model
Tracore supports four AI providers and a handful of models per provider. The right choice depends on your data residency requirements, schema complexity, and run volume.
## Quick comparison
| Provider | Default model | Allowlist | Residency | Strengths | Cost class |
|---|---|---|---|---|---|
| Mistral | `pixtral-large-latest` | `pixtral-large-latest`, `pixtral-12b-2409`, `mistral-large-latest` | EU | EU residency, multimodal, low cost | $ |
| Anthropic | `claude-haiku-4-5` | `claude-haiku-4-5`, `claude-sonnet-4-6`, `claude-opus-4-7` | US | Best schema adherence, deep reasoning | $$ / $$$ |
| OpenAI | `gpt-4o-mini` | `gpt-4o-mini`, `gpt-4o` | US | Wide ecosystem, fast structured output | $ / $$ |
| Google | `gemini-2.0-flash` | `gemini-2.0-flash`, `gemini-1.5-pro` | US | Long context, low latency | $ |
Cost class is a relative ranking among Tracore’s allowlisted models — $ cheapest, $$$ priciest. Refer to each provider’s pricing page for current per-token rates.
## When to pick each provider
### Start here: System fallback (Mistral)
If you don’t have a key yet, just leave the environment unassigned. Extractions use Tracore’s system Mistral key (`pixtral-large-latest`, EU). Plan page limits still apply, but you don’t pay the provider directly. Good enough to build, demo, and test most schemas.
### Mistral (BYOK)
Pick this when you need EU data residency and want the lowest per-document cost. `pixtral-large-latest` handles invoices, receipts, IDs, and other moderately structured documents well.
### Anthropic
Pick this when schema fidelity matters most. Claude is the strongest model for complex schemas — discriminated unions, deep nesting, optional polymorphic fields, conditional validation.
- `claude-haiku-4-5` — default. Fast, cheap, surprisingly good at structured output. Use it as your first BYOK upgrade from the system fallback.
- `claude-sonnet-4-6` — middle tier. Pick it when Haiku misses fields on long documents (>20 pages) or schemas with many optional sections.
- `claude-opus-4-7` — premium. Reserve it for production runs of high-stakes documents (contracts, compliance filings).
### OpenAI
Pick this when you have existing OpenAI billing or want to consolidate spend.
- `gpt-4o-mini` — competitive with Haiku on simple-to-moderate schemas, often cheaper.
- `gpt-4o` — strong on long-context documents; matches Sonnet on most schemas.
### Google

Pick this when you need very long context (>100 pages per document) or want to test Gemini’s multimodal pipeline.
- `gemini-2.0-flash` — low latency, cheap, good for batch processing.
- `gemini-1.5-pro` — long-context champion when individual documents are huge.
## Decision flow
- Just exploring? Use the system fallback. Don’t paste any keys yet.
- Need EU residency? Connect Mistral. Use `pixtral-large-latest` for most schemas.
- Schema with deep nesting / discriminated unions? Connect Anthropic. Start on Haiku.
- Already on OpenAI billing? Connect OpenAI. Start on `gpt-4o-mini`.
- Documents over 100 pages? Connect Google. Start on `gemini-1.5-pro`.
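The decision flow above can be sketched as a small helper. Note that `pickModel` and its option names are purely illustrative, not part of any Tracore SDK:

```typescript
// Hypothetical helper mirroring the decision flow above.
// The interface and function are illustrative only.
interface Needs {
  exploring?: boolean;     // just testing, no provider key yet
  euResidency?: boolean;   // data must stay in the EU
  complexSchema?: boolean; // deep nesting / discriminated unions
  openaiBilling?: boolean; // want to consolidate spend on OpenAI
  hugeDocuments?: boolean; // individual documents over ~100 pages
}

function pickModel(needs: Needs): string {
  if (needs.exploring) return "system-fallback"; // Tracore's system Mistral key
  if (needs.euResidency) return "mistral/pixtral-large-latest";
  if (needs.complexSchema) return "anthropic/claude-haiku-4-5";
  if (needs.openaiBilling) return "openai/gpt-4o-mini";
  if (needs.hugeDocuments) return "google/gemini-1.5-pro";
  return "system-fallback";
}
```

The checks run in the same order as the bullets above, so a workspace that needs both EU residency and a complex schema lands on Mistral first.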
You can mix per environment. A common pattern:
- `development` — system fallback (free-as-in-no-provider-spend, lets you iterate fast)
- `staging` — Anthropic / Haiku (production-grade quality at low cost)
- `production` — Anthropic / Sonnet or Opus (high-quality runs on real data)
See Environments for assigning a key + model per environment, and Provider keys for adding the keys themselves.
## Per-run override
If you want to A/B a model on a specific run without changing the env assignment, pass `model` to `client.extract`:
```ts
const run = await client.extract(
  "my-workspace",
  "invoice",
  { documentId: "doc_abc123" },
  { model: "anthropic/claude-opus-4-7", poll: true }
);
```
Format is `<provider>/<model>`. Precedence (highest first): request `model` > env assignment > system fallback. If a request names a provider other than Mistral for which you have no key, it returns `412 no_key`.
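The precedence chain can be made concrete with a tiny resolver. Be aware that `resolveModel` and its parameters are hypothetical, written only to illustrate the ordering described above:

```typescript
// Hypothetical resolver illustrating Tracore's precedence chain:
// request model > environment assignment > system fallback.
const SYSTEM_FALLBACK = "mistral/pixtral-large-latest"; // Tracore's system key

function resolveModel(
  requestModel: string | undefined,  // `model` passed to client.extract, if any
  envAssignment: string | undefined, // model assigned to the environment, if any
): string {
  return requestModel ?? envAssignment ?? SYSTEM_FALLBACK;
}
```

A run with an explicit `model` always wins; an environment with no assignment and no request override falls through to the system Mistral key.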