# Playground Interface
The Playground interface is designed to emulate a real-world chat application while providing powerful developer tools.
## Header Toolbar
The top toolbar controls the execution environment:
- Model Selector: Dropdown to select the LLM (e.g., `gpt-4o`, `claude-3-5-sonnet`). Only models with configured API keys are shown.
- Parameters (see the sketch after this list):
- Temperature: Controls randomness (0.0 to 2.0). Lower values are more deterministic.
- Max Tokens: Limits the response length.
- Save: Persists the current prompt state.
- Run: Executes the prompt.
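As a rough sketch of how these settings fit together, the values chosen in the toolbar amount to a run configuration like the one below. The `RunConfig` interface and its field names are illustrative assumptions, not the Playground's actual request schema:

```ts
// Illustrative only: this shape and its field names are assumptions,
// not the Playground's real schema.
interface RunConfig {
  model: string;       // chosen in the Model Selector, e.g. "gpt-4o"
  temperature: number; // 0.0 (more deterministic) to 2.0 (more random)
  maxTokens: number;   // upper bound on response length
}

const config: RunConfig = {
  model: "gpt-4o",
  temperature: 0.2,
  maxTokens: 1024,
};
```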
## Main Editor
The central area is where you craft your prompt. It supports:
- System Message: The core instruction for the AI.
- User Input: The simulated user query.
- Syntax Highlighting: For readability.
- Injections: Auto-complete for `{{ variable }}` and `[[ prompt_path ]]` (see the example after this list).
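To illustrate the two kinds of placeholders, here is a hypothetical system message written with the template syntax described above; `product_name` and `policies/escalation` are made-up names used only for this sketch, not prompts that ship with the Playground:

```ts
// Hypothetical template: {{ product_name }} is filled in from the Variables
// panel, and [[ policies/escalation ]] injects another prompt by its path.
const systemMessage = `
You are a support assistant for {{ product_name }}.
Follow the escalation policy below when the user asks for a refund:
[[ policies/escalation ]]
`;
```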
## Variables Panel (Left)
The Variables section allows you to define dynamic inputs for your prompt.
- Variables: Key-value pairs matching your `{{ variable }}` placeholders (see the example after this list).
- Tree View: Navigate your prompt library to reference other prompts via `[[ injection ]]`.
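Continuing the hypothetical template above, the Variables panel would hold key-value pairs such as the following; the key name is an assumption carried over from that sketch, not a required value:

```ts
// Hypothetical binding for the {{ product_name }} placeholder used above.
const variables: Record<string, string> = {
  product_name: "Acme Cloud",
};
```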
## Results Panel (Bottom)
After running a prompt, the results panel slides up to show:
- Output: The raw text response from the model.
- Metrics:
- Latency: Time to first token and total time.
- Tokens: Input and output token usage.
- Cost: Estimated cost of the run (if available).
- JSON View: Inspect the full raw JSON response from the provider, which is useful for debugging tool calls (see the sketch below).
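As a rough sketch of what the panel reports, a single run can be thought of as producing a structure like the one below. The field names and units are assumptions for illustration; the real values come from the provider's response:

```ts
// Illustrative result shape; the actual payload is whatever the provider
// returns, exposed verbatim in the JSON view.
interface RunResult {
  output: string; // raw text response from the model
  metrics: {
    timeToFirstTokenMs: number; // latency until the first token arrives
    totalTimeMs: number;        // end-to-end latency for the run
    inputTokens: number;
    outputTokens: number;
    estimatedCostUsd?: number;  // present only when pricing data is available
  };
  rawResponse: unknown; // full provider JSON, useful for debugging tool calls
}
```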