Per-trace billing, with a twist.
Four places to land.
LangSmith Plus is $39 per seat per month, plus $2.50 per 1K base traces and $5.00 per 1K extended traces. Annotation queues, evaluators, and run-rule matches automatically upgrade affected traces to the extended tier. The cost curve goes superlinear exactly when the team starts taking quality seriously.
This page compares the four credible alternatives on platform fee, provider scope, where the inference traffic actually flows, and the prompt-export shape that lets you migrate cleanly.
Five rows. Five facts each.
Each row is a real option for a specific fit. The facts are sourced from each vendor's live pricing and docs as of May 2026, re-verified the morning the page shipped.
LangSmith
- Platform fee
- Developer free (5K base traces/mo, 14-day retention, 1 seat). Plus $39/seat/mo (10K base traces). Overage $2.50/1K base traces, $5.00/1K extended (400-day) traces. Annotation queues, evaluators, and run-rule matches automatically upgrade affected traces to the extended tier.
- Provider scope
- Multi-provider via Python and TypeScript SDKs. Deepest support for LangChain and LangGraph; other frameworks via OpenTelemetry wrappers.
- Inference path
- Direct to provider. Observability-first; LangSmith instruments your client SDK and ingests traces but does not sit in the inference request path.
- LangSmith export
- Prompts stored as LangChain PromptTemplate objects. Export via `langsmith-data-migration-tool` (Python CLI · datasets, experiments, prompts, charts). Bulk Data Export (Plus and Enterprise) writes Parquet to a customer S3 bucket.
- Best fit
- LangChain-native teams whose codebase already commits to LangChain abstractions and whose trace volume sits below the breakeven point where per-trace billing overtakes a flat per-seat fee.
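That breakeven is easy to estimate. A back-of-envelope sketch of the two bills, using the published numbers above; it assumes the 10K included allotment covers base traces only (whether extended traces also draw it down is a billing detail simplified away here):

```python
def langsmith_plus_monthly(seats: int, base_traces: int, extended_traces: int) -> float:
    """Estimated LangSmith Plus bill: $39/seat, 10K base traces included,
    $2.50 per 1K base overage, $5.00 per 1K extended (400-day) traces."""
    base_overage = max(base_traces - 10_000, 0)
    return 39 * seats + 2.50 * base_overage / 1_000 + 5.00 * extended_traces / 1_000

def flat_team_monthly(seats: int) -> float:
    """A flat $99/seat/mo plan, for comparison."""
    return 99 * seats

# Five seats, 200K base traces/mo, 50K of them upgraded to the extended tier:
print(langsmith_plus_monthly(5, 200_000, 50_000))  # 920.0
print(flat_team_monthly(5))                        # 495
```

At that volume the per-trace plan costs nearly double the flat one; below roughly 100K traces the ordering flips, which is the breakeven the best-fit note refers to.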
Langfuse
- Platform fee
- Hobby free (50K units/mo, 30-day retention) · Core $29/mo · Pro $199/mo · Enterprise $2,499/mo. Self-host free under MIT.
- Provider scope
- Provider-neutral via OpenTelemetry. Anthropic, OpenAI, Google all supported through adapter integrations.
- Inference path
- Direct to provider. Langfuse instruments your traffic via OTel; provider keys never leave your app for production calls.
- LangSmith export
- No first-class importer for LangSmith. Migration via Public API: pull prompts from LangSmith, transform, push into Langfuse.
- Best fit
- Observability-first teams comfortable self-hosting (or running ClickHouse) who want OTel-native tracing and no proxy in the request path.
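The transform step in that pull-transform-push migration is mostly placeholder syntax: LangChain templates use single-brace `{variable}` placeholders, while Langfuse prompts use mustache-style double braces. A minimal sketch of the conversion, which deliberately ignores edge cases like escaped braces in the source template:

```python
import re

def langchain_to_langfuse(template: str) -> str:
    # Rewrite LangChain-style {variable} placeholders as
    # Langfuse-style {{variable}} placeholders.
    return re.sub(r"\{(\w+)\}", r"{{\1}}", template)

print(langchain_to_langfuse("Summarize {doc} in at most {n} words."))
# Summarize {{doc}} in at most {{n}} words.
```

The converted string is then pushed through Langfuse's prompt-creation API; check the current SDK docs for the exact call and its versioning semantics.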
Helicone
- Platform fee
- Hobby free (10K requests/mo, 1GB storage). Pro $79/mo. Team $799/mo. Enterprise custom.
- Provider scope
- Multi-provider via the unified gateway URL or async-log mode. Anthropic, OpenAI, Google supported.
- Inference path
- Gateway-mode routes inference through Helicone before reaching the provider (proxy in the request path, observed latency overhead). Async-log mode keeps Helicone off the critical path; failures don't take your product down.
- LangSmith export
- No first-class LangSmith importer. Datasets and traces migrate via the Helicone API.
- Best fit
- Teams who want gateway-level caching, rate limits, and prompt routing alongside observability, and who accept proxied inference traffic for the gateway features.
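The gateway/async distinction is visible in the request itself. In gateway mode the only client-side change is the base URL plus a Helicone auth header; a stdlib sketch of the resulting request shape (the gateway URL shown is the commonly documented one and should be verified against Helicone's current docs):

```python
import json
from urllib.request import Request

def gateway_chat_request(provider_key: str, helicone_key: str, payload: dict) -> Request:
    """Build an OpenAI-shaped chat request routed through Helicone's gateway.
    In async-log mode you would keep the provider's own URL here and ship
    logs to Helicone separately, off the critical path."""
    return Request(
        "https://oai.helicone.ai/v1/chat/completions",  # gateway sits in the request path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {provider_key}",  # provider key, forwarded onward
            "Helicone-Auth": f"Bearer {helicone_key}",  # authenticates you to the gateway
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = gateway_chat_request("sk-...", "hk-...", {"model": "gpt-4o-mini", "messages": []})
print(req.full_url)  # https://oai.helicone.ai/v1/chat/completions
```

The one-URL swap is what puts the proxy on the critical path, and is also why switching to async-log mode is a small, reversible change.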
Braintrust
- Platform fee
- Starter free (1GB processed, 10K scores, 14-day retention). Pro $249/mo (5GB, 50K scores, 30-day retention). Enterprise custom.
- Provider scope
- Multi-provider via AI Gateway endpoint accepting OpenAI, Anthropic, and Google SDKs.
- Inference path
- BYOK supported, but inference flows through Braintrust's gateway. Provider keys you supply still pass through their proxy.
- LangSmith export
- Prompts are TypeScript-defined; LangSmith migration requires manual transformation through the SDK.
- Best fit
- Well-funded eval-driven teams who want a managed AI gateway plus observability, and who accept proxied inference traffic.
- Our entry
Prompt Assay
- Platform fee
- Free tier · Solo $49/mo · Team $99/seat/mo · Enterprise contact-sales. Flat per-seat; no per-trace tax, no annotation-driven retention upgrade.
- Provider scope
- Anthropic, OpenAI, Google with first-class adapters.
- Inference path
- Direct to provider. We never sit in the inference request path. Your bill stays with your provider.
- LangSmith export
- No first-class LangSmith importer; convert LangChain PromptTemplate prompts to provider-native format via SDK helpers, then paste the result into a new prompt as version 1.
- Best fit
- Teams who want a craft-forward workbench (six-dimension critique, two-version Compare, AI pair) for authoring and want flat-fee economics regardless of trace volume.
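What "converts to provider-native" means in practice: render the single-braced template and wrap it in the messages shape the provider SDKs expect. A hand-rolled sketch, not the SDK helpers themselves (the function name and the Anthropic-style top-level `system` field are illustrative assumptions; OpenAI puts the system prompt inside the messages list instead):

```python
def template_to_messages(system: str, template: str, **variables) -> dict:
    # Render a LangChain-style single-brace template and wrap it in an
    # Anthropic-style payload: system as a top-level field, user turn in messages.
    return {
        "system": system,
        "messages": [{"role": "user", "content": template.format(**variables)}],
    }

v1 = template_to_messages(
    "You are a terse editor.",
    "Tighten this paragraph: {text}",
    text="Our product is very good and also great.",
)
print(v1["messages"][0]["content"])
# Tighten this paragraph: Our product is very good and also great.
```

The rendered payload is what gets pasted in as version 1; from there, versioning happens at the prompt level.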
Verified 2026-05-01 · Read the full breakdown
Three reasons teams pick the fifth row.
Flat platform fee, no per-trace tax, no auto-upgrade.
LangSmith Plus charges $39 per seat plus $2.50 per 1K base traces and $5.00 per 1K extended traces. Annotation queues, evaluators, and run-rule matches automatically promote affected traces to the extended tier; the cost curve is not under your control. Prompt Assay Team is flat at $99 per seat per month. No per-trace overage. Provider inference stays direct to your provider account.
Author and assay in one surface; pair with a tracing tool if you still need one.
LangSmith is observability-first: traces, datasets, evaluators, and a prompt registry attached to the trace stream. Prompt Assay is the workbench half: six-dimension critique, two-version Compare with model-graded diff, AI pair, prompt-level versioning. The honest framing is that PA replaces the authoring experience and pairs with Langfuse, Helicone, or Phoenix for tracing if your stack still needs it.
Your inference traffic does not visit a third-party gateway.
BYOK at every paid tier. Provider keys are encrypted at rest with AES-256-GCM and decrypted only inside the LLM call you triggered. We never proxy provider traffic. Your bill stays with your provider account; compliance review reads cleanly.
Land the workbench half.
Free to start. Your keys, your bill, no demo call. Pair with a tracing tool if you still need one.