FAQ

Which model should I pick?

Decision matrix: speed vs quality vs cost for each supported model.

Updated 2026-04-13
| Use case | Top pick | Runner-up |
| --- | --- | --- |
| Complex reasoning, long context, highest quality | Claude Opus 4.6 | GPT-5 (reasoning) |
| Most general-purpose prompt work | Claude Sonnet 4.6 | GPT-4.1 |
| Fast, cheap, high volume | Claude Haiku 4.5 | GPT-4.1 Mini or Nano |
| Multi-modal (images, audio) | Gemini 2.5 Pro | Claude Sonnet 4.6 (images only) |
| Cost-optimized inference at scale | GPT-4.1 Nano | Gemini 2.5 Flash |
| Hardest reasoning problems | OpenAI o3 | Gemini 2.5 Pro with thinking enabled |
| Latency-critical | Claude Haiku 4.5 | GPT-4.1 Mini |
Experiment in the playground
The playground's compare mode runs two models side by side with the same prompt. Pick a representative test case, run it against a few candidate models, and use the latency, token, and cost output to decide. Test-run history keeps the comparisons available for later review.
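As a rough sketch of what the compare mode automates, the snippet below times calls to two candidate models and derives a cost estimate from token counts. The `call_model` function, the per-token prices, and the response shape are all hypothetical placeholders for illustration, not this platform's API or published pricing.

```python
import time

# Hypothetical per-million-token prices (input, output) in USD.
# Illustrative placeholders only, not real published pricing.
PRICES = {
    "claude-haiku-4.5": (1.00, 5.00),
    "gpt-4.1-mini": (0.40, 1.60),
}

def call_model(model: str, prompt: str) -> dict:
    """Stub standing in for a real API call; returns fake token counts."""
    return {
        "output": f"[{model} reply]",
        "input_tokens": len(prompt.split()),
        "output_tokens": 50,
    }

def compare(prompt: str, models: list[str]) -> list[dict]:
    """Run the same prompt against each model and collect latency and cost."""
    results = []
    for model in models:
        start = time.perf_counter()
        resp = call_model(model, prompt)
        latency = time.perf_counter() - start
        in_price, out_price = PRICES[model]
        cost = (resp["input_tokens"] * in_price
                + resp["output_tokens"] * out_price) / 1_000_000
        results.append({"model": model, "latency_s": latency, "cost_usd": cost})
    return results

for row in compare("Summarize this ticket in one sentence.", list(PRICES)):
    print(f"{row['model']:>18}  {row['latency_s']:.3f}s  ${row['cost_usd']:.6f}")
```

The point of the loop is the shape of the decision, not the numbers: once latency and cost sit side by side per model, the trade-off in the matrix above becomes a simple comparison.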