Prompt Library

The Prompt Library in Max.ai's Skill Studio is a centralized feature for creating, managing, and evaluating the prompts that skills send to Large Language Models (LLMs). It is designed to make prompt engineering structured, traceable, and reusable, supporting both development and evaluation workflows at scale.

This guide provides an in-depth overview of how to use the Prompt Library effectively, including creating prompts, configuring examples (K-shots), setting associated prompts, performing evaluations, and referencing prompts in code.


Understanding the Prompt Library

The Prompt Library serves as a single source of truth for prompts used in skills. By centralizing prompt management, teams can:

  • Reuse prompts across multiple skills for consistency.
  • Manage and update prompts in one place.
  • Enable more transparent and measurable prompt behavior with tracing and evaluation.
  • Configure model-level execution settings on a per-prompt basis.
  • Export all prompts in one action for easy migration or backup.
  • Version prompts, enabling rollback or promotion of specific versions as active.

Creating a Prompt

Steps to Add a New Prompt

  1. Navigate to Prompt Library

    • In Skill Studio, click the Prompt Library tab.
  2. Add a New Prompt

    • Click Add Prompt.
    • Enter a descriptive name for the prompt.
  3. Enter Prompt Content

    • Write the instruction content for the LLM.
    • Use placeholders like {{input}} for dynamic content (see the example after these steps).
  4. Configure K-Shot Parameters

    • K-Shot Count: Maximum number of examples to include.
    • K-Shot Threshold: The minimum similarity score (0–1) an example must meet to be included.
  5. Set Model (Optional)

    • Select the LLM model that should execute this prompt.
    • This allows for fine-grained control and performance optimization.
  6. Save the Prompt

    • Click Save to create the prompt.
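
For reference, a prompt's content might look like the following. The wording and the {{subject}} variable are purely illustrative; {{input}} is a placeholder that is filled with dynamic content when the prompt is executed.

You are a support assistant. Draft a concise, professional reply about {{subject}}.

Customer message:
{{input}}

Keep the tone friendly and end with clear next steps.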

Adding Examples (K-Shots) to a Prompt

K-Shots provide example inputs and outputs that guide the model to produce more accurate and structured responses.

Steps to Add Examples

  1. Open the prompt and go to the Examples tab.
  2. Click Add Example.
  3. Configure the example (a sample entry is shown after these steps):
    • Match Value – user query pattern.
    • Output Value – ideal model response.
    • Questions (optional) – additional guidance.
  4. Click Save.
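
For instance, an example for an email-response prompt might be configured as follows (the values are purely illustrative):

  • Match Value: "I was charged twice this month. How do I get a refund?"
  • Output Value: "I'm sorry about the duplicate charge. I've flagged it for our billing team, and you should see the refund within 5–7 business days."
  • Questions: "Which billing period does the charge apply to?"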

K-Shot Parameters Explained

K-Shot Count

  • Defines how many examples to include.
  • Larger counts provide more context but consume more tokens.
  • Smaller counts are leaner but offer less guidance.

K-Shot Threshold

  • Defines how closely the user’s input must match an example for inclusion.
  • Higher threshold = fewer, more relevant matches.
  • Lower threshold = broader matches, more examples included.
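
Together, the two parameters control which examples are attached at execution time: candidates are scored against the user's input, anything below the threshold is dropped, and at most the configured count of the best matches is kept. The Python sketch below is only a conceptual illustration of that behavior; the similarity function and the example field names are assumptions, not the platform's implementation.

from typing import Callable

def select_k_shots(
    examples: list[dict],                      # assumed shape: {"match_value": ..., "output_value": ...}
    user_input: str,
    similarity: Callable[[str, str], float],   # assumed scoring function returning 0-1
    k_shot_count: int = 3,
    k_shot_threshold: float = 0.8,
) -> list[dict]:
    """Conceptual K-shot selection: keep the closest matches above the
    threshold, up to the configured count."""
    scored = [(similarity(user_input, ex["match_value"]), ex) for ex in examples]
    # Discard examples below the similarity threshold
    eligible = [pair for pair in scored if pair[0] >= k_shot_threshold]
    # Most similar first, then cap at the K-shot count
    eligible.sort(key=lambda pair: pair[0], reverse=True)
    return [ex for _, ex in eligible[:k_shot_count]]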

Prompt Execution Tracing

Each prompt execution is automatically traced, capturing:

  • The input prompt text
  • K-shots used
  • Model execution metadata
  • The resulting model response

This full execution history is visible in the Prompt Library. Traces can be:

  • Used for debugging and refinement
  • Searched and filtered to identify edge cases
  • Directly converted into evaluation cases for structured testing

Evaluation Sets and Prompt Evaluation

Evaluation Sets

Prompts can be added to Evaluation Sets, which allow teams to test prompt behavior under structured, repeatable scenarios.

  • Evaluation sets can be run through the Test experience.
  • Prompts in evaluation sets can be scored automatically using evaluation prompts.
  • This allows for objective performance tracking and iterative improvement.

Evaluation Prompts

Evaluation prompts are specialized prompts designed to score model outputs.

  • If an evaluation prompt evaluates other prompts, it is classified as Prompt Evaluation.
  • If it evaluates user questions directly, it is classified as Question Evaluation.
  • Evaluation prompts must return a JSON object in the following format:
{
  "explanation": "string",
  "pass": true
}
  • Multiple evaluation prompts can be associated with a single evaluation set to assess outputs from different perspectives.
  • Evaluation prompts can access the prompt's generated output through the {{output}} variable.
  • Evaluation prompts can access the inputs the prompt received through the {{input}} variable.
  • Evaluation prompts can also define their own variables, which can be set on each Evaluation Case, so every case can carry custom expectations for the evaluation prompt to check.
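
For illustration only, an evaluation prompt for the email-drafting example might look like the following. The wording and the {{expected_points}} variable are hypothetical; {{input}}, {{output}}, and the JSON format follow the rules above.

You are grading a generated email reply.

Original request:
{{input}}

Generated reply:
{{output}}

The reply should address the following expectations: {{expected_points}}

Return only a JSON object with an "explanation" string and a boolean "pass" field.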

Managing Evaluation Cases

Evaluation cases can be added in several ways:

  • Manually through the evaluation set interface.
  • From traces by selecting a recorded execution.
  • From chat diagnostics when reviewing conversation behavior.

This makes it easy to capture and re-test problematic scenarios over time.


Playground Integration

All prompt executions and evaluations can be viewed in the Playground. This environment is ideal for:

  • Iterating quickly on prompts
  • Comparing outputs across models or prompt versions
  • Experimenting with evaluation criteria

Setting Associated Prompts for a Skill

Prompts from the library can be linked to specific skills. This allows developers to reference and execute prompts without hardcoding text inside skill logic.

  1. Obtain the prompt’s UUID from its URL.
  2. Open the skill in Skill Studio.
  3. Go to Set Associated Prompts (may require debug mode).
  4. Add a key and paste the Prompt ID.
  5. Save the skill.

Referencing Prompts in Code

import uuid

# `copilot_skill`, `sp`, `user_input`, and `llm` are assumed to be provided
# by the surrounding skill code.

# Access the prompts associated with this skill
list_prompt_config = copilot_skill.associated_prompts

# Create a mapping of keys to prompt IDs
prompt_map = {}
for prompt_config in list_prompt_config:
    prompt_map[prompt_config["key"]] = prompt_config["promptId"]

# Look up the prompt ID by the key configured under Set Associated Prompts
prompt_id = prompt_map["email_response_prompt"]

# Get the prompt content, with variables resolved and K-shots selected for the user input
pr = sp.ctx.client.config.get_prompt(
    uuid.UUID(prompt_id),
    {"subject": "Billing Inquiry"},  # Variables for the prompt
    user_input
).prompt_response

prompt_for_model = pr["prompt"]   # Final prompt text to send to the model
k_shots_used = pr["k_shots"]      # Examples selected for this input

# Use the prompt with the LLM
response = llm.generate(prompt_for_model)

Migrating Prompts Between Environments

  • Export All Prompts: Export the entire prompt library in a single action for easy migration or backup.
  • Export Individual Prompts: Prompts can also be exported individually from the kebab menu.
  • Import: Prompts can be imported into the library through the Add Prompt > Import Prompt option.
  • Prompts are also exported automatically as dependencies of an assistant.

Prompt Versioning

Prompts can be versioned to support iterative development and controlled releases.

  • Commit changes with a descriptive comment.
  • Publish the committed version as the active version for the environment.
  • Switch back to older versions if needed.
  • Review version history to track changes over time.

Versioning helps teams manage prompt evolution and roll back quickly in case of regressions.


Best Practices

  • Define clear match values that reflect real user input.
  • Balance K-shot parameters for accuracy vs. token cost.
  • Use prompt tracing to identify gaps or unexpected behavior.
  • Incorporate evaluation sets into development workflows.
  • Leverage Playground for testing and iteration.
  • Assign models carefully per prompt for performance optimization.
  • Version prompts thoughtfully to maintain stability across environments.

Conclusion

The Prompt Library in Max.ai’s Skill Studio is more than a storage location for text—it is a complete prompt engineering and evaluation framework.

With support for prompt reuse, tracing, structured evaluation, model selection, versioning, and bulk export, it enables teams to build more reliable and adaptable AI capabilities while maintaining visibility and control over performance.

For teams building scalable assistants, a well-structured Prompt Library is a cornerstone of quality, consistency, and continuous improvement.