The list endpoint is right there.
Use it once.
PromptLayer’s REST API enumerates every prompt template in your workspace. There is no bundled CLI · the migration is a short script: write it once, paste the exported prompts into the new library, and ship.
Three steps. Then you ship.
- Station · 01
List
Authenticate against PromptLayer's REST API with your workspace API key. Call `List Prompt Templates` to enumerate every prompt in the workspace · the response carries the prompt ID, name, and template metadata for each. The endpoint is paginated; iterate until the cursor is exhausted. The API reference at docs.promptlayer.com covers the schema; this is solved territory, not novel parsing.
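A minimal sketch of this station in Python, using `requests`. The endpoint path, the `X-API-KEY` header, the page-based pagination parameters, and the `items` response key are assumptions · confirm each against the List Prompt Templates reference at docs.promptlayer.com before running.

```python
import os

import requests

# Assumed base URL, header name, path, and pagination scheme · verify against
# the List Prompt Templates reference at docs.promptlayer.com.
BASE_URL = "https://api.promptlayer.com"
HEADERS = {"X-API-KEY": os.environ["PROMPTLAYER_API_KEY"]}


def list_prompt_templates(page_size: int = 30) -> list[dict]:
    """Walk the paginated list endpoint and return every template stub."""
    templates, page = [], 1
    while True:
        resp = requests.get(
            f"{BASE_URL}/prompt-templates",
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
        )
        resp.raise_for_status()
        batch = resp.json().get("items", [])  # assumed response key
        if not batch:
            break
        templates.extend(batch)  # each stub carries the prompt ID, name, metadata
        page += 1
    return templates


if __name__ == "__main__":
    stubs = list_prompt_templates()
    print(f"{len(stubs)} prompt templates found")
```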
- Station · 02
Export
For each prompt template, call `Get Prompt Template (Raw)` to pull the prompt body, variables, message structure, and current version metadata. Bundle the responses into a JSON file or directly into a Python list. Note the fields that won't carry over · A/B test traffic-split configurations, release labels, dynamic release labels, and threaded feedback comments are PromptLayer control-plane state and are intentionally left behind.
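Continuing the same sketch for this station: the path used for `Get Prompt Template (Raw)` and the response field names (`prompt_template`, `input_variables`, `version`) are assumptions to check against the API reference, and the prompt names in the `__main__` block are placeholders standing in for the Station 01 listing.

```python
import json
import os

import requests

BASE_URL = "https://api.promptlayer.com"  # assumed, as in the Station 01 sketch
HEADERS = {"X-API-KEY": os.environ["PROMPTLAYER_API_KEY"]}


def export_template(prompt_name: str) -> dict:
    """Fetch one raw template and keep only the fields that port cleanly."""
    # Assumed path and field names for Get Prompt Template (Raw).
    resp = requests.get(f"{BASE_URL}/prompt-templates/{prompt_name}", headers=HEADERS)
    resp.raise_for_status()
    raw = resp.json()
    return {
        "name": prompt_name,
        "template": raw.get("prompt_template"),        # body + message structure
        "input_variables": raw.get("input_variables"),
        "version": raw.get("version"),
        # Intentionally left behind: A/B traffic splits, release labels,
        # dynamic release labels, threaded feedback comments.
    }


if __name__ == "__main__":
    names = ["onboarding-email", "support-triage"]  # placeholders · use the Station 01 output
    with open("promptlayer_export.json", "w") as fh:
        json.dump([export_template(n) for n in names], fh, indent=2)
```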
- Station · 03
Land
Sign up for Prompt Assay. For each exported prompt, create a new prompt in the library and paste the body in as version one. The workbench surfaces the AI pair (Brainstorm, Critique, Improve, Rewrite, Compare) and prompt-level versioning from the first save. Provider keys connect directly · BYOK at every paid tier with no inference markup.
Four rows. Five facts each.
Each row is a real option for a specific fit. The procedure above is the same regardless of destination · only the paste target changes.
- The incumbent
PromptLayer
- Platform fee
- Free (2.5K req/mo, 5 users) · Pro $49/mo (5 users, $0.003/txn overage) · Team $500/mo flat for 25 users · Enterprise custom. Pro→Team is a 10× single-step jump; no middle tier between 5 users and Team.
- Provider scope
- Multi-provider via Python SDK and JavaScript SDK (server-side only). REST API for prompt and dataset access.
- Inference path
- Direct to provider. Observability-only; provider keys never reach PromptLayer's servers. Inference call is local.
- PromptLayer export
- REST API for export (List Prompt Templates, Get Raw, List Datasets). No bundled CLI export tool; migrations are scripted against the API.
- Best fit
- Non-technical collaborators (PMs, domain experts, prompt engineers who don't ship code) who want a visual editor and Notion-style UX for prompt iteration.
- The LangChain-native option
LangSmith
- Platform fee
- Developer free (5K base traces/mo). Plus $39/seat/mo. Overage $2.50/1K base, $5.00/1K extended; annotation queues and evaluators automatically upgrade traces to the extended tier.
- Provider scope
- Multi-provider via Python and TypeScript SDKs; framework-agnostic via OpenTelemetry.
- Inference path
- Direct to provider. Observability-first via SDK instrumentation.
- PromptLayer export
- No PromptLayer importer. Migration is scripted: export from PromptLayer's REST API, transform to LangChain PromptTemplate, push via SDK.
- Best fit
- LangChain-heavy codebases at modest trace volume who want native observability for chains and agents.
- The open-source option
Langfuse
- Platform fee
- Hobby free (50K units/mo, 30-day retention) · Core $29/mo · Pro $199/mo · Enterprise $2,499/mo. Self-host free under MIT.
- Provider scope
- Provider-neutral via OpenTelemetry. Anthropic, OpenAI, Google supported.
- Inference path
- Direct to provider. OTel instrumentation; no proxy in the inference request path.
- PromptLayer export
- No PromptLayer importer. Migration via Public API: export from PromptLayer, transform, push into Langfuse.
- Best fit
- Open-source-first teams who want self-host or hosted, OTel-native tracing, and a familiar observability surface alongside prompt management.
- Our entry
Prompt Assay
- Platform fee
- Free tier · Solo $49/mo · Team $99/seat/mo · Enterprise contact-sales. Linear per-seat scaling with no cliff at 5 users.
- Provider scope
- Anthropic, OpenAI, Google with first-class adapters.
- Inference path
- Direct to provider. We never sit in the inference request path. Your bill stays with your provider.
- PromptLayer export
- No first-class PromptLayer importer; copy-paste from the REST API export lands cleanly as version 1 in a new prompt.
- Best fit
- Engineers shipping production prompts who want a craft-forward workbench (six-dimension critique, two-version Compare with model-graded diff, AI pair) and prompt-level versioning over a no-code visual editor.
Verified 2026-05-01 · Read the full breakdown
Frequently asked.
- Does PromptLayer have a CLI export tool?
- Not as of May 2026. Migration is scripted: iterate `List Prompt Templates` and `Get Prompt Template (Raw)` against the REST API and reconstruct the schema client-side. The migration doc on docs.promptlayer.com covers moving into PromptLayer (the run() and log_request() helpers), not exporting out of it. The export path is real, but you write the script.
- Will my A/B test setup carry over?
- No. A/B test traffic-split configurations, release labels, dynamic release labels, and threaded feedback comments are PromptLayer-specific control-plane state. They will not survive a generic export. The prompt body, variables, message structure, and version metadata do carry over cleanly.
- Why migrate if PromptLayer is BYOK already?
- BYOK is not the wedge. PromptLayer's docs state that LLM API keys never reach their servers and inference stays local · the same is true of Prompt Assay. The wedge is audience and surface. PromptLayer is openly built for PMs and domain experts; the visual editor and blueprint UI optimize for non-technical collaboration. Engineering teams who want a craft-forward workbench (six-dimension critique, two-version Compare with model-graded diff, prompt-level versioning, AI pair) move because the editor and the version model fit how engineers actually work.
- Is Prompt Assay cheaper than PromptLayer Team?
- Past five users, no. PromptLayer Team is $500 per month flat for up to twenty-five users. Prompt Assay Team is $99 per seat per month, so a six-seat team lands at $594 vs $500. The friction PromptLayer customers cite is the cliff itself · Pro caps at five users with no middle tier before the 10× step to Team · not the absolute price past the cliff. The honest framing is that Prompt Assay is more expensive per seat at mid-team scale; the value trade is the workbench depth. The seat math is sketched after this list.
- Can I keep PromptLayer for the PMs while engineers move to Prompt Assay?
- Yes, this is a viable interim. Either tool can serve production prompts at runtime via its API; the question is which surface each role uses to author them. If the PM team is happy in PromptLayer's blueprint editor and the engineering team wants the workbench, run both for a quarter and converge once the workflow shape stabilizes.
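A quick sanity check of the seat math referenced above, using only the list prices quoted on this page (prices as of the verification date, not a quote):

```python
PROMPT_ASSAY_TEAM_PER_SEAT = 99  # $/seat/month, Prompt Assay Team list price above
PROMPTLAYER_TEAM_FLAT = 500      # $/month flat, PromptLayer Team, up to 25 users

for seats in range(1, 11):
    assay = seats * PROMPT_ASSAY_TEAM_PER_SEAT
    cheaper = "Prompt Assay" if assay < PROMPTLAYER_TEAM_FLAT else "PromptLayer Team"
    print(f"{seats:2d} seats: ${assay:>4} vs ${PROMPTLAYER_TEAM_FLAT} flat -> {cheaper}")

# 5 seats: $495, still under the flat fee. 6 seats: $594, and PromptLayer
# Team's flat $500 wins on price from there up to its 25-user cap.
```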
Land in the workbench.
Free to start. Your keys, your bill, no demo call.