§.Migrate from Humanloop

Your `.prompt` files
have a home.

Humanloop wound down paid service on July 30, 2025; the platform, API, and UI went permanently offline on September 8. The exported `.prompt` files are durable. Land them in a workbench that reads them natively.

I.Preview · before signup

See what would import.

Paste a Humanloop `.prompt` file below. The parser runs the same code the in-app importer runs after signup. Nothing is stored.

Preview

Paste a file on the left and click `Preview the import`. The parsed result lands here.

Nothing is stored. Parser runs server-side, returns the preview, and forgets.
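In code terms, the route behaves like a pure function: text in, preview out, nothing persisted. A hedged sketch only; the `title:` extraction and the result field names are assumptions, not the real endpoint contract.

```typescript
// Illustrative sketch of a stateless preview route: accept the raw
// file text, parse it, return a result, persist nothing. Field names
// are assumptions, not the documented Prompt Assay API.
interface PreviewResult {
  ok: boolean;
  title?: string;
  error?: string;
}

function handlePreview(fileText: string): PreviewResult {
  // Pull a `title:` line out of the frontmatter if one is present.
  const match = fileText.match(/^title:\s*(.+)$/m);
  if (!match) {
    return { ok: false, error: "no title found in frontmatter" };
  }
  // No database write, no analytics event, no log line: the input
  // goes out of scope when this function returns.
  return { ok: true, title: match[1].trim() };
}

console.log(handlePreview("---\ntitle: Summarizer\n---\n<user>\nHi\n</user>"));
// ok: true, title: "Summarizer"
```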

II.The destinations

Four rows. Five facts each.

Humanloop's official migration guide names two destinations. There is a third. Pick the row that matches how your team works.

  • Humanloop

    Sunset Sep 8 2025
    Platform fee
    Billing stopped Jul 30 2025. Platform, API, and UI offline Sep 8 2025.
    Provider scope
    Multi-provider (was)
    Inference path
    Was proxied through Humanloop's infrastructure; the team was acqui-hired into Anthropic.
    Humanloop import
    Export only · `.prompt` and `.agent` files via Humanloop CLI.
    Best fit
    Existing customers with a deadline. Their official guide names Langfuse and Braintrust as destinations.
  • Langfuse

    Platform fee
    Hobby free (50k units/mo, 30-day retention) · Core $29/mo · Pro $199/mo · Enterprise $2,499/mo. Self-host free under MIT.
    Provider scope
    Provider-neutral via OpenTelemetry. Anthropic, OpenAI, Google all supported through adapter integrations.
    Inference path
    Direct to provider. Langfuse instruments your traffic via OTel; provider keys never leave your app for production calls.
    Humanloop import
    No first-class importer. Humanloop JSON export must be transformed through the Public API.
    Best fit
    Observability-first teams comfortable self-hosting and running ClickHouse, who want OTel-native tracing and no proxy in the request path.
  • Braintrust

    Platform fee
    Starter free (1GB data, 14-day retention) · Pro $249/mo · Enterprise custom.
    Provider scope
    Multi-provider via AI Gateway endpoint accepting OpenAI, Anthropic, and Google SDKs.
    Inference path
    BYOK supported, but inference flows through Braintrust's gateway. Provider keys you supply still pass through their proxy.
    Humanloop import
    No `.prompt` importer. Prompts are TypeScript-defined; Humanloop migration requires manual transformation.
    Best fit
    Well-funded eval-driven teams who want a managed AI gateway plus observability and accept proxied inference traffic.
  • Our entry

    Prompt Assay

    Platform fee
    Free tier · Solo $49/mo · Team $99/seat/mo · Enterprise contact sales.
    Provider scope
    Anthropic, OpenAI, Google with first-class adapters.
    Inference path
    Direct to provider. We never sit in the inference request path. Your bill stays with your provider.
    Humanloop import
    Native `.prompt` and `.agent` parser. Paste, confirm, land in your library.
    Best fit
    Multi-provider teams who want a craft-forward workbench with BYOK economics and prompt-level versioning.

Verified 2026-04-25 · Read the full six-vendor analysis

III.The procedure

Three steps. Then you ship.

  1. Station · 01

    Paste

    Drop your `.prompt` or `.agent` file into the textarea. The parser is the same one that runs after signup; it reads YAML frontmatter, message-tag bodies, and the legacy completion-endpoint format. No account required.

  2. Station · 02

    Confirm

    The preview shows the title, prompt type, target model, and any fields that did not transfer (temperature, max_tokens, tools). The importer announces the gap up front so the migration is not a silent drop.

  3. Station · 03

    Land

    Sign up. Your library opens with the import dialog already focused. Confirm the title, click Import, and the file lands as version one of a new prompt. Versioning, critique, and Compare are available from that screen.
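The formats Station 01 accepts can be sketched with a minimal splitter: YAML frontmatter between `---` fences, then `<system>` / `<user>` style message tags. This is a simplified illustration, not the production parser; it ignores the legacy completion-endpoint format and `.agent` specifics.

```typescript
// Simplified sketch of splitting a Humanloop-style `.prompt` file into
// frontmatter and message blocks. Illustration only.
const sample = `---
model: gpt-4o
temperature: 0.7
---
<system>
You are a helpful assistant.
</system>
<user>
Summarize {{article}}.
</user>`;

function splitPromptFile(text: string): {
  frontmatter: string;
  messages: { role: string; content: string }[];
} {
  // Frontmatter sits between the first pair of `---` fences.
  const m = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  const frontmatter = m ? m[1] : "";
  const body = m ? m[2] : text;

  // Collect <system>/<user>/<assistant> tagged blocks in order.
  const messages: { role: string; content: string }[] = [];
  const tag = /<(system|user|assistant)>\n?([\s\S]*?)\n?<\/\1>/g;
  let hit: RegExpExecArray | null;
  while ((hit = tag.exec(body)) !== null) {
    messages.push({ role: hit[1], content: hit[2].trim() });
  }
  return { frontmatter, messages };
}

const parsed = splitPromptFile(sample);
console.log(parsed.messages.map((msg) => msg.role).join(", ")); // system, user
```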

IV.Marginalia · 3 questions

Frequently asked.

Does Prompt Assay see my prompt content during the preview?
The parser runs server-side so the marketing preview and the in-app importer share one code path. Nothing is logged: the API route accepts the text, parses it, returns the preview, and forgets. No prompt content is written to a database, an analytics event, or a log line.
What does the importer drop?
Humanloop's `.prompt` files carry per-call sampling parameters (temperature, max_tokens, top_p), tools/functions, and provider-specific endpoint hints. Prompt Assay treats those as run-time settings in the Playground or eval suite, not as immutable metadata on the prompt record. The importer surfaces dropped fields as warnings so you can re-set them after import.
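As an illustration of how those warnings might surface, here is a hedged sketch of the preview payload and the up-front message it produces. Field names are assumptions, not the documented API.

```typescript
// Hypothetical shape of a preview result; names are illustrative.
interface ImportPreview {
  title: string;
  promptType: "chat" | "completion";
  model: string;
  droppedFields: string[]; // e.g. ["temperature", "max_tokens", "tools"]
}

// Turn the dropped-field list into the warning shown before import.
function describeGaps(p: ImportPreview): string {
  if (p.droppedFields.length === 0) return "All fields transferred.";
  return `Did not transfer: ${p.droppedFields.join(", ")}; re-set these after import.`;
}

const preview: ImportPreview = {
  title: "Summarizer",
  promptType: "chat",
  model: "gpt-4o",
  droppedFields: ["temperature", "max_tokens"],
};
console.log(describeGaps(preview));
// Did not transfer: temperature, max_tokens; re-set these after import.
```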
Why does BYOK matter for a migration?
Humanloop and Braintrust route inference traffic through their infrastructure. Prompt Assay does not. Your provider keys decrypt only inside the LLM call you triggered, the call goes directly to Anthropic / OpenAI / Google, and your bill stays on your provider account. There is no markup and no third-party DPA between you and the model.
V.Closing

Land your library.

Free to start. Your keys, your bill, no demo call.