§.Promptfoo alternatives

OSS license preserved.
The steward changed.

On March 9, 2026, OpenAI acquired Promptfoo. The MIT license is preserved per OpenAI’s announcement, and the integration target is OpenAI Frontier, OpenAI’s security-testing surface for AI agents. For teams that picked Promptfoo for provider neutrality, three credible 2026 alternatives cover the eval and red-teaming use cases without the OpenAI roadmap pressure.

This page compares Promptfoo with Braintrust, Langfuse, and Prompt Assay on platform fee, provider scope, where the inference traffic actually flows, and the YAML-config import path that lets you migrate cleanly.

I.The destinations

Four rows. Five facts each.

Promptfoo's row is not dimmed, because the project isn't sunset · the OSS license is preserved. The row carries the acquisition status as a fact rather than a verdict. Pick the row that matches how your team actually works.

  • Promptfoo
    Platform fee
    Open-source CLI free (MIT-licensed). Enterprise tier custom; pricing not publicly disclosed post-acquisition. Acquired by OpenAI on March 9, 2026; deal value not disclosed. OpenAI committed to 'continue building out' the open-source offering.
    Provider scope
    Multi-provider via YAML-defined providers in `promptfooconfig.yaml`. Historical scope: 250+ providers including Anthropic, OpenAI, Google, Bedrock, local models. Future non-OpenAI provider commitment is not publicly stated.
    Inference path
    Direct to provider. CLI runs locally; provider keys never leave your machine.
    Promptfoo import
    YAML-driven test cases. Prompt strings, eval assertions, and provider configs all in `promptfooconfig.yaml`. Migration is parsing the YAML and re-creating the prompt + eval suite in the new tool.
    Best fit
    AI security and red-teaming workflows that integrate into OpenAI Frontier post-acquisition. Multi-provider eval users now subject to OpenAI's roadmap.
  • Braintrust
    Platform fee
    Starter free (1GB processed, 10K scores). Pro $249/mo (5GB, 50K scores, 30-day retention). Enterprise custom. Closed-source, hosted-only on Pro; self-host is Enterprise-only.
    Provider scope
    Multi-provider via AI Gateway accepting OpenAI, Anthropic, and Google SDKs.
    Inference path
    BYOK supported, but inference flows through Braintrust's gateway. Provider keys still pass through their proxy.
    Promptfoo import
    Prompts are TypeScript-defined. Promptfoo migration requires manual transformation: parse YAML, re-author prompts in TS, port assertions to Braintrust's eval format.
    Best fit
    Well-funded eval-driven teams who want a managed gateway plus eval CI surface and accept proxied inference.
  • Langfuse
    Platform fee
    Hobby free (50K units/mo) · Core $29/mo · Pro $199/mo · Enterprise $2,499/mo. Self-host free under MIT.
    Provider scope
    Provider-neutral via OpenTelemetry. Multi-provider tracing and prompt management.
    Inference path
    Direct to provider. OTel instrumentation; no proxy in the inference path.
    Promptfoo import
    Datasets API accepts re-imported test cases. Promptfoo YAML must be parsed and pushed via Public API.
    Best fit
    Open-source-first teams who want OTel-native tracing alongside dataset and eval primitives, with self-host as a clean compliance fallback.
  • Prompt Assay · our entry
    Platform fee
    Free tier · Solo $49/mo · Team $99/seat/mo · Enterprise contact-sales. BYOK-mandatory at every paid tier with no inference markup.
    Provider scope
    Anthropic, OpenAI, Google with first-class adapters and no parent-company tilt.
    Inference path
    Direct to provider. We never sit in the inference request path. Your bill stays with your provider account.
    Promptfoo import
    No first-class YAML parser. Copy each Promptfoo prompt into a new Prompt Assay prompt; eval suites land in the workbench's evaluation surface (test cases + rubrics + LLM-as-a-judge).
    Best fit
    Multi-provider teams who picked Promptfoo for OSS neutrality and want a workbench that pairs authoring (six-dimension critique, two-version Compare) with the eval surface in a single tool, with no parent-company provider preference.

Verified 2026-05-01 · Read the full breakdown
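All three migration paths above start the same way: parse `promptfooconfig.yaml` and flatten it into records the destination tool can ingest. A minimal sketch, assuming a typical config shape (string prompts, tests with `vars` and `assert` entries, per Promptfoo's documented layout); the output field names here are illustrative, not any vendor's API:

```python
# Sketch: flatten a promptfooconfig.yaml into portable prompt + test-case
# records, the shared first step of any Promptfoo migration. The dict below
# mirrors what yaml.safe_load() would return for a small config.
config = {
    "prompts": ["Summarize the following text:\n{{text}}"],
    "providers": ["openai:gpt-4o-mini", "anthropic:messages:claude-3-5-sonnet-latest"],
    "tests": [
        {
            "vars": {"text": "The quick brown fox jumps over the lazy dog."},
            "assert": [
                {"type": "contains", "value": "fox"},
                {"type": "llm-rubric", "value": "Is this a faithful summary?"},
            ],
        },
    ],
}

def flatten(config: dict) -> list[dict]:
    """Cross prompts x tests into flat records for re-import elsewhere."""
    records = []
    for prompt in config.get("prompts", []):
        for test in config.get("tests", []):
            records.append({
                "prompt": prompt,                      # re-author in the new tool
                "vars": test.get("vars", {}),          # becomes dataset-item input
                "assertions": test.get("assert", []),  # port to the new eval format
            })
    return records

records = flatten(config)
print(len(records))                          # one prompt x one test -> 1
print(records[0]["assertions"][0]["type"])   # -> contains
```

From here the paths diverge: Langfuse takes the `vars` as dataset items via its public API, Braintrust needs the prompts re-authored in TypeScript, and Prompt Assay takes the prompts and assertions by hand into the workbench.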

II.Where Prompt Assay fits

Three reasons multi-provider teams pick the fourth row.

I · For provider-neutral teams

BYOK across Anthropic, OpenAI, Google · no parent-company tilt.

Prompt Assay is BYOK-mandatory at every paid tier. Provider keys connect directly to Anthropic, OpenAI, and Google with first-class adapters · no provider gets the next feature first because the parent company prefers it. We never sit in the inference request path. Your bill stays with your provider account.

II · For workbench-plus-eval teams

Authoring, critique, and eval suites in one surface.

Promptfoo's strongest lane · assertion-style evals · sits downstream of YAML-defined prompts that someone still has to author. Prompt Assay covers the authoring half · six-dimension critique, two-version Compare with model-graded structural diff, prompt-level versioning · alongside eval suites with test cases, rubrics, and LLM-as-a-judge graders. The two halves of the workflow live in the same workbench.
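The rubric-plus-judge pattern named above can be sketched in a few lines; the rubric wording, `judge` callable, and verdict format are all hypothetical stand-ins, not Prompt Assay's actual API:

```python
from typing import Callable

# Sketch of an LLM-as-a-judge grader: a rubric is rendered into a judge
# prompt, the judge model returns a verdict, and the verdict maps to
# pass/fail. The judge callable stands in for any provider SDK call
# (under BYOK you would call your own Anthropic/OpenAI/Google client).
RUBRIC = "Answer PASS if the response is a faithful, concise summary; otherwise FAIL."

def grade(response: str, judge: Callable[[str], str]) -> bool:
    judge_prompt = f"{RUBRIC}\n\nResponse under test:\n{response}\n\nVerdict:"
    verdict = judge(judge_prompt).strip().upper()
    return verdict.startswith("PASS")

# Stub judge for demonstration; swap in a real model call in practice.
def stub_judge(prompt: str) -> str:
    return "PASS" if "fox" in prompt else "FAIL"

print(grade("The fox jumped over the dog.", stub_judge))  # True
print(grade("Unrelated text.", stub_judge))               # False
```

The same `grade` function works unchanged across providers, which is the point of keeping the judge a plain callable rather than binding it to one SDK.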

III · For teams who want roadmap independence

Eval primitives that optimize for cross-model rigor.

Promptfoo's eval CI surface now lives under OpenAI's roadmap. Some buyers want eval primitives where the maintainers' day jobs aren't inside one of the providers being evaluated. Prompt Assay is independent of all three providers · Anthropic, OpenAI, and Google · and that independence is structural, not aspirational.

III.Closing

Land the workbench half.

Free to start. Your keys, your bill, no parent-company tilt. Pair with a security-testing tool of your choice for red-teaming.