
Execution & Mocking

Prompt Execution

When you click Run, LLMx Prompt Studio performs the following steps:

  1. Resolution:
    • Injections: [[ path/to/prompt ]] tags are replaced with the content of the referenced prompts.
    • Variables: {{ variable_name }} placeholders are substituted with values from the Variables panel.
  2. API Request:
    • The resolved prompt is sent to the backend.
    • The backend attaches the API key configured for the selected provider (OpenAI, Anthropic, etc.).
  3. Streaming:
    • The response is streamed back via Server-Sent Events (SSE).
    • Tokens appear in real time as they are generated.
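
To make the resolution step concrete, here is a minimal TypeScript sketch of the kind of substitution described above. The type and function names are illustrative, not part of LLMx Prompt Studio's API; it simply assumes injections are resolved before variables.

```ts
// Illustrative sketch only: the actual resolver is internal to LLMx Prompt Studio.

type PromptLibrary = Record<string, string>; // hypothetical: prompt path -> prompt content
type Variables = Record<string, string>;     // values from the Variables panel

function resolvePrompt(raw: string, library: PromptLibrary, vars: Variables): string {
  // 1. Injections: replace [[ path/to/prompt ]] tags with the referenced prompt content.
  const withInjections = raw.replace(
    /\[\[\s*([^\]]+?)\s*\]\]/g,
    (_match: string, path: string) => library[path] ?? `[[ ${path} ]]`, // leave unknown refs untouched
  );

  // 2. Variables: substitute {{ variable_name }} placeholders.
  return withInjections.replace(
    /\{\{\s*(\w+)\s*\}\}/g,
    (_match: string, name: string) => vars[name] ?? `{{ ${name} }}`,
  );
}

// Example:
// resolvePrompt("[[ greetings/hello ]] {{ user }}", { "greetings/hello": "Hi," }, { user: "Ada" })
// -> "Hi, Ada"
```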

Tool Mocking

One of the Playground’s most powerful features is the ability to mock external tools. This allows you to test how your prompt handles function calls without actually invoking real APIs.

Defining Mocks

In the Tools or Mocking section (if configured), you can define:

  • Function Name: The name of the tool (e.g., get_weather).
  • Description: Instructions for the model on when to use it.
  • Mock Return: The JSON that the tool should “return” to the model.
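
Conceptually, a mock definition bundles those three fields. The object below is a hypothetical TypeScript rendering of the form fields, not a documented LLMx Prompt Studio schema:

```ts
// Hypothetical shape; property names are illustrative.
const getWeatherMock = {
  functionName: "get_weather",
  description: "Get the current weather for a city. Use when the user asks about weather.",
  // The JSON the Playground hands back to the model instead of calling a real API:
  mockReturn: {
    city: "Berlin",
    temperature_c: 18,
    conditions: "partly cloudy",
  },
};
```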

Usage

  1. The Playground sends the tool definitions to the LLM.
  2. If the LLM decides to call a tool, the Playground intercepts the call.
  3. It injects the Mock Return value you specified.
  4. The LLM continues generation using that mock data.

This enables you to test complex “Reasoning -> Tool Call -> Final Answer” loops entirely within the Playground.
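
The sketch below outlines that intercept-and-inject loop, assuming a generic chat-completions style client with tool calling. `callModel`, the message shapes, and the `mocks` table are all illustrative, not the Playground's actual implementation.

```ts
// Conceptual sketch of the mock loop described above. All names are illustrative.

interface ToolCall { id: string; name: string; arguments: string; }
interface ModelTurn { content?: string; toolCalls?: ToolCall[]; }
type ChatMessage = Record<string, unknown>;
type ModelClient = (messages: ChatMessage[], tools: unknown[]) => Promise<ModelTurn>;

// Mock returns keyed by function name, as configured in the Mocking section.
const mocks: Record<string, unknown> = {
  get_weather: { city: "Berlin", temperature_c: 18, conditions: "partly cloudy" },
};

async function runWithMocks(
  callModel: ModelClient,
  messages: ChatMessage[],
  tools: unknown[],
): Promise<string> {
  for (;;) {
    const turn = await callModel(messages, tools);

    // No tool call: the model has produced its final answer.
    if (!turn.toolCalls || turn.toolCalls.length === 0) return turn.content ?? "";

    // Intercept each tool call and inject the configured Mock Return
    // instead of invoking a real API.
    for (const call of turn.toolCalls) {
      messages.push({ role: "assistant", tool_calls: [call] });
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(mocks[call.name] ?? { error: "no mock defined" }),
      });
    }
    // Loop: the model continues generation with the mock data in context.
  }
}
```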
