Playground
The Playground is LLMx Prompt Studio’s interactive environment for designing, testing, and refining prompts. It provides a real-time interface to experiment with different models, parameters, and variable inputs before deploying prompts to production.
Key Features
- Multi-Model Support: Switch instantly between providers (OpenAI, Anthropic, Google, etc.) to compare performance.
- Variable Injection: Test prompts with dynamic data using `{{ variable }}` syntax.
- Tool Mocking: Simulate function calls and tool outputs to test complex agentic flows.
- Instant Feedback: Streaming responses allow you to see the model’s output as it’s generated.
- History & Versioning: Every run is captured, allowing you to iterate safely and revert if needed.
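The variable-injection idea above can be pictured as simple template substitution. The sketch below is a minimal illustration of the concept, not the Studio's actual implementation; the function name `render_prompt` and the error behavior for undefined variables are assumptions.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{ name }} placeholder with its value.

    Illustrative only: a stand-in for the substitution the
    Playground performs before sending the prompt to the model.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            # Assumed behavior: fail loudly on an undefined variable.
            raise KeyError(f"Undefined variable: {name}")
        return str(variables[name])

    # Matches {{ name }} with optional whitespace inside the braces.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = render_prompt(
    "Summarize the following review in a {{ tone }} tone:\n{{ review }}",
    {"tone": "neutral", "review": "Great product, shipped fast."},
)
```

Defining every variable before a run (as described under Getting Started) avoids the undefined-variable case entirely.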
Getting Started
- Select a Model: Choose from the dropdown menu in the header.
- Define Variables: If your prompt uses variables, define their values in the Variables section.
- Run: Click the Run button (or press Cmd/Ctrl + Enter) to execute the prompt.
- Analyze: View the output, latency, and token usage in the Results panel.
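The steps above amount to: inject variables, call the model, and record metrics. Here is a hedged sketch of that loop; `run_prompt` and `model_call` are hypothetical names standing in for whichever provider client you use, not the Studio's API.

```python
import time

def run_prompt(model_call, template: str, variables: dict) -> dict:
    """Mimic a Playground run: inject variables, execute, record metrics.

    model_call is any callable taking a prompt string and returning
    the model's output (an assumption for illustration).
    """
    # Step 2: define variables and substitute them into the prompt.
    prompt = template
    for name, value in variables.items():
        prompt = prompt.replace("{{ " + name + " }}", str(value))

    # Step 3: run, timing the call.
    start = time.perf_counter()
    output = model_call(prompt)
    latency_ms = (time.perf_counter() - start) * 1000

    # Step 4: surface the figures the Results panel would show.
    return {"output": output, "latency_ms": round(latency_ms, 1)}
```

For example, `run_prompt(client_fn, "Hi {{ name }}", {"name": "Ada"})` sends "Hi Ada" and returns the output alongside its latency.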
Sections
- Interface Guide: a complete tour of the Playground UI.
- Execution & Mocking: detailed explanation of how prompts are processed and how to mock tools.
- Evaluation: using the built-in “Judge” to automatically score output.