Welcome to PromptAssay
What PromptAssay is, who it's for, and the core concepts you'll use every day.
Updated 2026-04-13
PromptAssay is a version-controlled workbench for building, refining, testing, and shipping LLM prompts. It runs entirely on your provider API keys — we don't resell tokens, we don't mediate your LLM traffic, and we don't see your spend. You bring Anthropic, OpenAI, and/or Google keys, and PromptAssay gives you the editor, version history, AI-assisted critique, evaluation suites, and collaboration tools around them.
Who it's for
- Prompt engineers shipping production prompts who need version control, diffing, and rollback.
- Teams sharing a library of prompts across a workspace with role-based access.
- Researchers running evaluation suites and judge-scored comparisons across models.
- Developers who want a REST API + TypeScript SDK to pull prompts at runtime.
What you get
- Full-featured editor with syntax highlighting, real-time token budget, cost estimation, and twelve lint rules (ten always-on plus two model-aware) for prompt quality issues.
- Version control with full history, diff viewer, restore, and branching from any prior version.
- AI assistant panels: Critique, Improve, Rewrite, Brainstorm, Compare, and Judge — all powered by your BYOK keys.
- Evaluation suites with test cases, rubrics, and automated judge-based scoring across six dimensions.
- Playground for live runs, variable substitution, and side-by-side version comparison.
- Reusable fragments with variable slots for composing prompts.
- Public REST API + TypeScript SDK for fetching prompts at runtime.
- Team workspaces with role-based access, invitations, and per-workspace BYOK keys.
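The runtime flow implied by the API and playground bullets above can be sketched in TypeScript. This is a minimal illustration, not the actual PromptAssay SDK: the endpoint URL, response shape, and `substituteVariables` helper are all assumptions, and the real `{{variable}}` slot syntax may differ.

```typescript
// Hypothetical shape of a prompt returned by the REST API (an assumption,
// not the documented schema).
interface FetchedPrompt {
  id: string;
  title: string;
  content: string; // may contain {{variable}} slots
  version: number;
}

// Minimal fetch wrapper; the base URL and auth header are illustrative guesses.
async function fetchPrompt(apiKey: string, promptId: string): Promise<FetchedPrompt> {
  const res = await fetch(`https://api.promptassay.example/v1/prompts/${promptId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  return (await res.json()) as FetchedPrompt;
}

// Fill {{name}} slots in a prompt body; slots with no matching value are
// left intact rather than erased.
function substituteVariables(content: string, vars: Record<string, string>): string {
  return content.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match,
  );
}
```

A typical runtime call would fetch the prompt once, substitute request-specific variables, and pass the result to your own LLM client, keeping the prompt text out of your codebase.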
Five concepts in two minutes
- Prompt
- The atomic unit of work. Has a title, content, prompt type (system / user / multi-turn / template), optional target model, and intent metadata.
- Version
- An immutable snapshot of a prompt's content at a point in time. Every edit creates a new version with a change source tag (manual / ai-improve / restore / branch / etc.).
- Workspace
- A container for prompts, folders, tags, fragments, BYOK keys, and billing. Every user has a personal workspace automatically; team workspaces are manually created and support collaboration.
- BYOK
- Bring Your Own Key. Per-workspace API keys for Anthropic, OpenAI, and Google. Required for the playground, AI assistant, evaluation judges, and any LLM-backed feature.
- Tier
- Your workspace's billing plan: free, solo, team, or enterprise. Tier controls member caps, monthly LLM call quota, and public API access. Each workspace has its own tier.
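The five concepts above can be modeled as TypeScript types. This is one plausible shape for the data model, not PromptAssay's actual schema; the field names are assumptions, and the `ChangeSource` union is open-ended per the "etc." in the Version description.

```typescript
// Illustrative data model for the core concepts; names are assumptions.
type PromptType = "system" | "user" | "multi-turn" | "template";
type ChangeSource = "manual" | "ai-improve" | "restore" | "branch"; // non-exhaustive
type Tier = "free" | "solo" | "team" | "enterprise";

interface Prompt {
  id: string;
  title: string;
  content: string;
  promptType: PromptType;
  targetModel?: string; // optional target model
}

interface Version {
  promptId: string;
  number: number;
  content: string;          // immutable snapshot of the prompt at this point
  changeSource: ChangeSource;
  createdAt: string;        // ISO-8601 timestamp
}

interface Workspace {
  id: string;
  name: string;
  personal: boolean; // every user gets one personal workspace automatically
  tier: Tier;        // each workspace carries its own billing tier
}
```

Note how the tier hangs off the workspace, not the user: a member of several workspaces can be on a different plan in each one.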
Next steps
- Read Core concepts for the one-page mental model.
- Follow Create your first prompt to get something into the editor.
- Configure your first BYOK key to unlock playground + AI features.
- Skim Keyboard shortcuts so you move fast from day one.