§.Migrate from LangSmith

The export tool exists.
The next workbench is the choice.

LangChain ships an official `langsmith-data-migration-tool` that writes datasets, experiments, annotation queues, project rules, prompts, and charts. Bulk Data Export hands you Parquet for the traces. The export is solved. The choice is the destination.

I.The procedure

Three steps. Then you ship.

  1. Station · 01

    Export

    Install LangChain's official `langsmith-data-migration-tool` (Python CLI, `uv tool install langsmith-data-migration-tool`). Run it against your source workspace with a Plus or Enterprise API key. The tool exports datasets, experiments, annotation queues, project rules, prompts, and charts. Plus and Enterprise plans also include Bulk Data Export, which writes Parquet trace data into a customer-owned S3 bucket for offline analysis.

  2. Station · 02

    Convert

    Prompts come out shaped as LangChain `PromptTemplate` (or `StructuredPrompt`) objects. The SDK has converters that emit OpenAI- or Anthropic-style chat messages directly. Run the conversion locally · the prompt body, variables, and model config survive cleanly. Evaluator configs, project rules, and Fleet metadata are LangSmith-specific and will not carry over; treat them as work to redo in the next tool.

  3. Station · 03

    Land

    Sign up for Prompt Assay. For each converted prompt, create a new prompt in the library and paste the converted body in as version one. The workbench surfaces the AI pair (Brainstorm, Critique, Improve, Rewrite, Compare) and prompt-level versioning from the first save. If you still need tracing, pair it with Langfuse or Helicone.
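The Convert step above can be sketched without the SDK. A minimal, dependency-free version of the same transformation, assuming a simplified exported shape (a list of role/template pairs; the real export is a LangChain `PromptTemplate`, so prefer the SDK converters when they are available):

```python
def to_chat_messages(exported_messages, variables):
    """Fill template variables and emit provider-style chat messages.

    `exported_messages` is an assumed simplified shape: (role, template)
    pairs pulled out of the exported prompt, not the SDK's own format.
    """
    return [
        {"role": role, "content": template.format(**variables)}
        for role, template in exported_messages
    ]

messages = to_chat_messages(
    [("system", "You are a support agent for {product}."),
     ("user", "{question}")],
    {"product": "Acme", "question": "How do I reset my password?"},
)
print(messages[0]["content"])  # You are a support agent for Acme.
```

The same dict-of-role-and-content shape is what OpenAI- and Anthropic-style chat APIs accept, which is why the conversion survives cleanly.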

II.The destinations

Five rows. Five facts each.

Each row is a real option. Pick the row that matches how your team works · then run the procedure above.

  • LangSmith

    Platform fee
    Developer free (5K base traces/mo, 14-day retention, 1 seat). Plus $39/seat/mo (10K base traces). Overage $2.50/1K base traces, $5.00/1K extended (400-day) traces. Annotation queues, evaluators, and run-rule matches automatically upgrade affected traces to the extended tier.
    Provider scope
    Multi-provider via Python and TypeScript SDKs. Deepest support for LangChain and LangGraph; other frameworks via OpenTelemetry wrappers.
    Inference path
    Direct to provider. Observability-first; LangSmith instruments your client SDK and ingests traces but does not sit in the inference request path.
    LangSmith export
    Prompts stored as LangChain PromptTemplate objects. Export via `langsmith-data-migration-tool` (Python CLI · datasets, experiments, prompts, charts). Bulk Data Export (Plus and Enterprise) writes Parquet to a customer S3 bucket.
    Best fit
    LangChain-native teams whose codebase already commits to LangChain abstractions and whose trace volume sits below the breakeven where per-trace billing overtakes a flat fee.
  • Langfuse

    Platform fee
    Hobby free (50K units/mo, 30-day retention) · Core $29/mo · Pro $199/mo · Enterprise $2,499/mo. Self-host free under MIT.
    Provider scope
    Provider-neutral via OpenTelemetry. Anthropic, OpenAI, Google all supported through adapter integrations.
    Inference path
    Direct to provider. Langfuse instruments your traffic via OTel; provider keys never leave your app for production calls.
    LangSmith export
    No first-class importer for LangSmith. Migration via Public API: pull prompts from LangSmith, transform, push into Langfuse.
    Best fit
    Observability-first teams comfortable self-hosting (or running ClickHouse) who want OTel-native tracing and no proxy in the request path.
  • Helicone

    Platform fee
    Hobby free (10K requests/mo, 1GB storage). Pro $79/mo. Team $799/mo. Enterprise custom.
    Provider scope
    Multi-provider via the unified gateway URL or async-log mode. Anthropic, OpenAI, Google supported.
    Inference path
    Gateway-mode routes inference through Helicone before reaching the provider (proxy in the request path, observed latency overhead). Async-log mode keeps Helicone off the critical path; failures don't take your product down.
    LangSmith export
    No first-class LangSmith importer. Datasets and traces migrate via the Helicone API.
    Best fit
    Teams who want gateway-level caching, rate limits, and prompt routing alongside observability, and who accept proxied inference traffic for the gateway features.
  • Braintrust

    Platform fee
    Starter free (1GB processed, 10K scores, 14-day retention). Pro $249/mo (5GB, 50K scores, 30-day retention). Enterprise custom.
    Provider scope
    Multi-provider via AI Gateway endpoint accepting OpenAI, Anthropic, and Google SDKs.
    Inference path
    BYOK supported, but inference flows through Braintrust's gateway. Provider keys you supply still pass through their proxy.
    LangSmith export
    Prompts are TypeScript-defined; LangSmith migration requires manual transformation through the SDK.
    Best fit
    Well-funded eval-driven teams who want a managed AI gateway plus observability, and who accept proxied inference traffic.
  • Our entry

    Prompt Assay

    Platform fee
    Free tier · Solo $49/mo · Team $99/seat/mo · Enterprise contact-sales. Flat per-seat; no per-trace tax, no annotation-driven retention upgrade.
    Provider scope
    Anthropic, OpenAI, Google with first-class adapters.
    Inference path
    Direct to provider. We never sit in the inference request path. Your bill stays with your provider.
    LangSmith export
    No first-class LangSmith importer; convert the LangChain PromptTemplate format to provider-native messages via SDK helpers, then paste the result into a new prompt as version one.
    Best fit
    Teams who want a craft-forward workbench (six-dimension critique, two-version Compare, AI pair) for authoring and want flat-fee economics regardless of trace volume.

Verified 2026-05-01 · Read the full breakdown

III.Marginalia · 5 questions

Frequently asked.

What does the LangSmith export actually carry over?
Datasets, experiments, annotation queues, project rules, prompts, and charts via langsmith-data-migration-tool. Bulk Data Export carries trace data as Parquet to your S3 bucket. Prompts come out as LangChain PromptTemplate objects. Evaluator configurations and Fleet deployment metadata are LangSmith-specific and will not carry over to a non-LangSmith destination.
Why are my LangSmith traces being upgraded to extended retention automatically?
LangSmith promotes a base trace to extended retention (400-day, $5.00 per 1K) when an automation rule matches a run inside the trace, when the trace is added to an annotation queue, or when an automated evaluator attached to the project adds feedback. The pricing page describes the base rate; the support article on extended retention names the three triggers. Once feedback signals are flowing, the bill curve is no longer flat.
Will my LangChain app keep working if I move authoring to Prompt Assay?
Yes. Prompt Assay is the authoring surface; the LangChain app stays where it is. Pull the prompt body and variables from Prompt Assay through the public REST API or SDK, and feed them to LangChain at runtime. Tracing is a separate decision · you can keep LangSmith as the trace destination, or move to Langfuse, Helicone, or Phoenix.
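The runtime integration can be sketched as a fetch-then-render pair. The endpoint URL and response keys below are assumptions for illustration (check Prompt Assay's API docs for the real shapes); only the rendering half is LangChain-agnostic stdlib:

```python
import json
import urllib.request

API = "https://api.promptassay.example/v1/prompts/{id}"  # hypothetical endpoint

def fetch_prompt(prompt_id: str, api_key: str) -> dict:
    """Pull the latest prompt version at runtime (response shape assumed)."""
    req = urllib.request.Request(
        API.format(id=prompt_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def render(prompt: dict, **values: str) -> str:
    """Fill the prompt body's {variables} before handing it to LangChain."""
    return prompt["body"].format(**values)

# Offline example with a stubbed response:
stub = {"body": "Summarize the ticket in {tone} tone:\n{ticket}", "version": 3}
print(render(stub, tone="neutral", ticket="Login fails on SSO."))
```

Whatever `render` returns is a plain string, so it drops into a LangChain prompt or message list without touching your tracing setup.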
Where does the breakeven sit between LangSmith and a flat-fee workbench?
It depends on annotation pressure. With zero annotation, LangSmith Plus at five seats and 50K traces a month is roughly $295. Once annotation queues or evaluators start firing on a meaningful fraction of traces, the auto-upgrade to the $5.00/1K extended tier compounds with seat count and trace volume; published vendor-pricing comparisons put a five-seat team at 5M traces with annotation in the low-five-figure-per-month range. The breakdown blog post (linked in the Comparison section above) walks the math at three workload sizes. Prompt Assay Team is flat at $99 per seat per month; provider inference stays direct to your provider account.
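The flat-case figure above checks out arithmetically, and the annotation effect can be modeled in a few lines. This is a rough sketch under two stated assumptions: included traces are per workspace, and an upgraded trace bills at the extended rate instead of (not on top of) the base rate:

```python
def monthly_bill(seats, traces, upgraded_frac=0.0,
                 seat_fee=39.00, included=10_000,
                 base_rate=2.50, ext_rate=5.00):
    """Rough LangSmith Plus bill model.

    Assumptions: `included` base traces are shared per workspace, and a
    trace upgraded by annotation/evaluators bills at `ext_rate` per 1K
    in place of `base_rate` per 1K.
    """
    overage = max(traces - included, 0)
    extended = overage * upgraded_frac
    base = overage - extended
    return seats * seat_fee + base / 1000 * base_rate + extended / 1000 * ext_rate

print(monthly_bill(5, 50_000))        # 295.0 — matches the flat figure above
print(monthly_bill(5, 50_000, 0.25))  # 320.0 — 25% annotated, same volume
```

The model makes the shape of the curve visible: seat fees are fixed, but the annotated fraction multiplies against trace volume, which is why the bill stops being flat once feedback signals flow.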
Can I keep using LangSmith for tracing while authoring in Prompt Assay?
Yes, this is the most common pattern for teams who like LangSmith's trace UI but not its per-trace cost on the authoring side. Author and version prompts in Prompt Assay; pull them at runtime via the public API; emit traces from the LangChain runtime to LangSmith as before. The mix tends to bring the trace bill down because the highest-cost prompt-iteration loop happens in the workbench, not in LangSmith's trace stream.
IV.Closing

Land the workbench half.

Free to start. Your keys, your bill, no demo call.