Humanloop sunset.
Three places to land.
On July 30, 2025, Humanloop wound down paid service. On September 8, the platform, API, and UI went permanently offline. Their official migration guide names two destinations. There is a third.
This page compares the three on the facts that matter for a migration: platform fee, provider scope, where the inference traffic actually flows, and whether the tool reads a `.prompt` file natively.
Four rows. Five facts each.
Humanloop is dimmed because there is nothing to migrate to inside it. The other three are real options. Pick the row that matches how your team works.
Humanloop
Sunset Sep 8 2025
- Platform fee
- Billing stopped Jul 30 2025. Platform, API, and UI offline Sep 8 2025.
- Provider scope
- Multi-provider (was)
- Inference path
- None since the shutdown. The team was acqui-hired into Anthropic.
- Humanloop import
- Export only · `.prompt` and `.agent` files via Humanloop CLI.
- Best fit
- Existing customers with a deadline. Their official guide names Langfuse and Braintrust as destinations.
Langfuse
- Platform fee
- Hobby free (50k units/mo, 30-day retention) · Core $29/mo · Pro $199/mo · Enterprise $2,499/mo. Self-host free under MIT.
- Provider scope
- Provider-neutral via OpenTelemetry. Anthropic, OpenAI, Google all supported through adapter integrations.
- Inference path
- Direct to provider. Langfuse instruments your traffic via OTel; provider keys never leave your app for production calls.
- Humanloop import
- No first-class importer. Humanloop JSON exports must be transformed and re-ingested through the Langfuse Public API.
- Best fit
- Observability-first teams comfortable self-hosting (Langfuse runs on ClickHouse) who want OTel-native tracing and no proxy in the request path.
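Because Langfuse has no first-class Humanloop importer, exported prompt JSON has to be reshaped before it can be sent to Langfuse's prompt endpoint. A minimal sketch of that transformation, where the input field names (`name`, `template`, `model`, `temperature`) are illustrative stand-ins for whatever your actual Humanloop export contains, not the real schema:

```python
import json

def humanloop_to_langfuse(export: dict) -> dict:
    """Map a Humanloop-style prompt export to a create-prompt payload.

    Input field names are illustrative -- inspect your actual export
    before relying on them. Output keys follow the common Langfuse
    prompt shape (name / prompt / config / labels)."""
    return {
        "name": export["name"],
        "prompt": export["template"],      # the prompt text itself
        "config": {                        # model settings travel in `config`
            "model": export.get("model"),
            "temperature": export.get("temperature"),
        },
        "labels": ["production"],          # pin the imported version
    }

exported = json.loads("""{
  "name": "support-triage",
  "template": "Classify the ticket: {{ticket}}",
  "model": "gpt-4o",
  "temperature": 0.2
}""")

payload = humanloop_to_langfuse(exported)
```

From here the payload would be POSTed to the Langfuse Public API with your project credentials; the transformation itself is the only Humanloop-specific step.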
Braintrust
- Platform fee
- Starter free (1GB data, 14-day retention) · Pro $249/mo · Enterprise custom.
- Provider scope
- Multi-provider via AI Gateway endpoint accepting OpenAI, Anthropic, and Google SDKs.
- Inference path
- BYOK supported, but inference flows through Braintrust's gateway. Provider keys you supply still pass through their proxy.
- Humanloop import
- No `.prompt` importer. Prompts are TypeScript-defined; Humanloop migration requires manual transformation.
- Best fit
- Well-funded eval-driven teams who want a managed AI gateway plus observability and accept proxied inference traffic.
- Our entry
Prompt Assay
- Platform fee
- Free tier · Solo $49/mo · Team $99/seat/mo · Enterprise contact sales.
- Provider scope
- Anthropic, OpenAI, Google with first-class adapters.
- Inference path
- Direct to provider. We never sit in the inference request path. Your bill stays with your provider.
- Humanloop import
- Native `.prompt` and `.agent` parser. Paste, confirm, land in your library.
- Best fit
- Multi-provider teams who want a craft-forward workbench with BYOK economics and prompt-level versioning.
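The native-import claim above can be illustrated with a toy parser. The frontmatter-style layout below is an assumption made for illustration only, not the actual Humanloop `.prompt` specification; a real importer would follow the documented format.

```python
def parse_prompt_file(text: str) -> dict:
    """Toy parser for a frontmatter-style prompt file.

    Assumes a `---`-delimited key: value header followed by the
    template body -- an illustrative layout, not Humanloop's spec."""
    _, header, body = text.split("---", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {"meta": meta, "template": body.strip()}

sample = """---
model: claude-sonnet-4-6
temperature: 0.3
---
Summarize the document in three bullets: {{document}}"""

parsed = parse_prompt_file(sample)
```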
Verified 2026-04-25 · Read the full six-vendor analysis
Three reasons teams pick the third row.
Pricing that does not scale linearly with your traffic.
LangSmith bills per trace: $2.50 per 1K traces above a 10K monthly floor. Prompt Assay charges a flat platform fee. Provider inference stays direct to your provider account; we never sit in the request path or mark it up.
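The linear-versus-flat claim is easy to check with arithmetic. Using the per-trace rate quoted above and an illustrative 500K traces per month:

```python
def per_trace_cost(traces: int, floor: int = 10_000, rate_per_1k: float = 2.50) -> float:
    """Metered cost under a per-trace model: traces above the floor
    are billable, charged per 1K at the rate quoted above."""
    billable = max(traces - floor, 0)
    return billable / 1_000 * rate_per_1k

# 500K traces -> 490K billable -> $1,225 for the month,
# versus a flat platform fee that does not grow with traffic.
metered = per_trace_cost(500_000)
```

The traffic figure is illustrative; the point is that metered cost grows with every trace while a flat fee does not.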
Claude is a first-class citizen, not an afterthought.
XML-tagged prompts, prompt caching, message-structure choices, and the latest Claude model IDs (Opus 4.7, Sonnet 4.6, Haiku 4.5) are first-class. Six-dimension critique tunes for Anthropic-style reasoning patterns out of the box.
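Prompt caching on Anthropic's Messages API works by marking a system block with `cache_control`. A minimal request-body sketch as a plain dict (no SDK; the XML-tagged rules text and helper name are illustrative):

```python
def cached_system_request(system_text: str, user_text: str, model: str) -> dict:
    """Build a Messages API request body whose system prompt is marked
    cacheable via an ephemeral `cache_control` block."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,
                "cache_control": {"type": "ephemeral"},  # cache this prefix
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

body = cached_system_request(
    "You are a support triage assistant. <rules>...</rules>",
    "Classify: printer on fire",
    model="claude-haiku-4-5",
)
```

On repeat calls, the cached system prefix is billed at the reduced cache-read rate rather than full input price, which is why caching matters for long XML-tagged system prompts.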
Your inference traffic does not visit a third-party gateway.
BYOK at every paid tier. Provider keys are encrypted at rest with AES-256-GCM and decrypted only inside the LLM call you triggered. We never proxy provider traffic. Compliance review reads cleanly.
Land your `.prompt` files in a workbench.
Free to start. Your keys, your bill, no demo call. The importer reads Humanloop format natively.