Lab Interface Guide
The Lab interface is divided into three main sections: Configuration, Input, and Output.
1. Configuration Panel
Located in the right sidebar, this panel controls the LLM execution parameters.
- Model Selection: Choose any model from the providers configured in your organization (e.g., `gpt-4o`, `claude-3-5-sonnet`).
- Temperature: Control randomness (0.0 to 2.0).
  - Recommendation: Use `0.0` for structured tasks (extraction, code) and `0.7+` for creative writing.
- Max Tokens: Limit the response length to manage costs and prevent runaway loops.
- Response Mode: Toggle between Text and JSON Mode (forces valid JSON output).
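Taken together, these settings correspond to the parameters of a typical chat-completion request. The sketch below is illustrative only; the field names follow common provider conventions and are not necessarily the Lab's exact schema.

```python
# Illustrative mapping of the Configuration Panel onto a chat-completion
# request payload. Field names are assumptions based on common provider
# conventions, not the Lab's exact schema.
config = {
    "model": "gpt-4o",                           # Model Selection
    "temperature": 0.0,                          # 0.0 = deterministic, 2.0 = max randomness
    "max_tokens": 512,                           # cap output length to manage cost
    "response_format": {"type": "json_object"},  # Response Mode: JSON
}
```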
2. Input Section
The central editor where you define your prompt scenario.
System & User Prompts
- System Prompt: Define the AI’s persona, constraints, and core logic.
- User Prompt: The specific task or query for this execution.
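For example, a prompt pair might look like this (contents are hypothetical):

```python
# Hypothetical prompt pair: the system prompt sets persona and constraints,
# the user prompt carries the specific task for this execution.
system_prompt = (
    "You are a support assistant for Acme Corp. "
    "Answer in two sentences or fewer, and never invent policy details."
)
user_prompt = "A customer asks whether refunds are available after 30 days."
```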
Variables
Inject dynamic data into your prompts using Handlebars syntax: `{{variable_name}}`.
- Define variable values in the Variables tab below the editor.
- Variables can be strings, numbers, or JSON objects.
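As a minimal sketch of how substitution works (the Lab's exact template engine is not documented here, and the template and variable values below are hypothetical):

```python
import json
import re

# Hypothetical template and variables; JSON values are serialized before
# substitution so they render as valid JSON inside the prompt.
template = "Classify this {{doc_type}} against the schema {{schema}}."
variables = {
    "doc_type": "support ticket",                      # string
    "schema": {"label": "string", "confidence": 0.0},  # JSON object
}

def render(value):
    return value if isinstance(value, str) else json.dumps(value)

# Replace each {{name}} placeholder with its rendered value.
prompt = re.sub(r"\{\{(\w+)\}\}", lambda m: render(variables[m.group(1)]), template)
print(prompt)
# Classify this support ticket against the schema {"label": "string", "confidence": 0.0}.
```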
Attachments
Upload images or text files to test multimodal capabilities (Vision/RAG) if the selected model supports them.
3. Output Section
Analyze the results of your execution.
- Completion: The raw text or JSON returned by the model.
- Latency: Time to first token (TTFT) and total generation time.
- Token Usage: Breakdown of Input, Output, and Total tokens for cost estimation (see the sketch after this list).
- Reasoning: If using reasoning models (e.g., `o1`), the internal thought process is displayed here.
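A quick way to turn the Token Usage breakdown into a cost estimate (the token counts and per-token rates below are placeholders; substitute your provider's actual pricing):

```python
# Hypothetical token counts and placeholder pricing; check your provider's
# published rates before relying on any estimate.
input_tokens, output_tokens = 1_200, 350
input_rate = 2.50 / 1_000_000    # USD per input token (placeholder)
output_rate = 10.00 / 1_000_000  # USD per output token (placeholder)

cost = input_tokens * input_rate + output_tokens * output_rate
print(f"Total tokens: {input_tokens + output_tokens}, estimated cost: ${cost:.6f}")
```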