§.FAQ
Can I use a local or self-hosted model?
Today: Anthropic, OpenAI, and Google only, plus what's on the roadmap.
Updated 2026-04-13
Today: no. BYOK supports Anthropic, OpenAI, and Google only. There's no way to point the playground, AI assistant, or evaluation judge at a local Ollama, llama.cpp, or vLLM endpoint.
Workarounds
- Use PromptAssay as a prompt library: pull prompts via the public REST API or SDK and run them against your own infrastructure. The resolved-content endpoint returns the fully assembled prompt, ready to send to any provider.
- Target-only model entries: PromptAssay's model list includes entries for Llama 3.3 70B and Mistral Large. These are valid target models for metadata purposes (linter, token budget, export) but are NOT callable from the playground. They exist so you can manage prompts destined for self-hosted models.
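A minimal sketch of the second half of the first workaround: once you've pulled the assembled prompt text from the resolved-content endpoint, wrap it in an OpenAI-style chat body and POST it to your local server. The local base URL (Ollama's default OpenAI-compatible port), the model name, and the payload shape for the local side are illustrative assumptions, not documented PromptAssay behavior.

```python
import json
import urllib.request

# Ollama exposes an OpenAI-compatible API at /v1 by default (assumed local setup).
LOCAL_BASE = "http://localhost:11434/v1"


def build_chat_payload(resolved_prompt: str, model: str = "llama3.3:70b") -> dict:
    """Wrap an assembled prompt (as returned by the resolved-content
    endpoint) in an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": resolved_prompt}],
    }


def run_locally(resolved_prompt: str, model: str = "llama3.3:70b") -> str:
    """POST the payload to the local /chat/completions route and
    return the assistant's reply text."""
    req = urllib.request.Request(
        f"{LOCAL_BASE}/chat/completions",
        data=json.dumps(build_chat_payload(resolved_prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the local server speaks the OpenAI wire format, the same payload works unchanged against vLLM or llama.cpp's server mode; only `LOCAL_BASE` changes.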
Roadmap
We're watching demand for OpenAI-compatible endpoints as the canonical self-host protocol. If your org needs this today, email support: the shape of the feature matters less than how many teams have real production workloads gated on it.