Prompt Best Practices in Max

Master the art of configuring prompts in Max to optimize your assistant's performance across all pipeline stages

Max's system prompt is intelligently auto-generated using a powerful template system that pulls variables at runtime. This guide provides comprehensive best practices for configuring prompts to optimize your assistant's behavior throughout the conversation pipeline.

📘

Version Note

This templated system prompt approach is the default for all new Assistants starting with version 25.01.

Chat Pipeline Architecture

Max processes conversations through three distinct stages, each utilizing specific prompt components:

Pipeline Stages

  1. Skill Selection - Determines which skill to execute based on user input
  2. Parameter Selection - Identifies and validates parameters for the selected skill
  3. Final Response - Generates the assistant's response to the user
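
To make the stage boundaries concrete, here is a minimal Python sketch of this flow. All names are hypothetical and illustrative only; this is not the actual Max implementation or SDK:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str                   # "Name for LLM"
    description: str            # "Description for LLM"
    run: Callable[[dict], str]  # executes the skill with the chosen parameters

@dataclass
class PromptConfig:
    persona: str                       # visible at every stage
    skill_selection_guidance: str      # visible only during Stage 1
    parameter_selection_guidance: str  # visible only during Stage 2
    answer_guidance: str               # visible only during Stage 3

def handle_message(message: str, skills: list[Skill], cfg: PromptConfig) -> str:
    # Stage 1: Skill Selection -- the LLM sees ALL skills plus the
    # Skill Selection Response Guidance. Stubbed here as "pick the first".
    skill = skills[0]

    # Stage 2: Parameter Selection -- only the SELECTED skill is visible,
    # along with the Parameter Selection Response Guidance. Stubbed values.
    params = {"metric": "sales", "time_period": "latest"}

    # Stage 3: Final Response -- the skill output is shaped into the reply
    # using the Answer Response Guidance.
    result = skill.run(params)
    return f"{result} (formatted per the Answer Response Guidance)"
```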

Prompt Component Visibility by Stage

| Section | Stage 1: Skill Selection | Stage 2: Parameter Selection | Stage 3: Final Response |
| --- | --- | --- | --- |
| Today's Date | ✓ | ✓ | ✓ |
| Persona | ✓ | ✓ | ✓ |
| Data | ✓ | ✓ | ✓ |
| Skills | ✓ (All skills) | ✓ (Selected skill only) | |
| Response Guidance | ✓ (Skill Selection) | ✓ (Parameter Selection) | ✓ (Answer) |

⚠️

Important

Response Guidance sections are only visible during their specific pipeline stage. For example, Skill Selection Response Guidance is not available during Parameter Selection.

Configuring Prompt Variables

Navigate to Skill Studio → Settings → Prompt Variables to configure these essential components:

1. Persona

Defines the assistant's identity, expertise, and communication style.

Best Practices:

  • Define Clear Identity: Specify the assistant's role and name (e.g., "You are Max, a pricing analyst for Kimberly-Clark")
  • Establish Communication Style: Set tone, formality, and response format (e.g., "Provide clear, concise insights that are relevant for the user")
  • Set Knowledge Boundaries: Specify data specialization and limitations (e.g., "Rely exclusively on the information shared during the conversation")

Example:

You are Max, an expert data analyst designed to support business decisions. You provide insights based solely on the data analytics toolset.
Your expertise is in turning complex data into actionable business insights. You communicate in a clear, professional manner.
To ensure accuracy, rely exclusively on the information shared during the conversation, and deliver clear, actionable insights that align with strategic business goals.

2. Skill Selection Response Guidance

Guides the LLM in choosing the appropriate skill or determining when no skill is needed.

Best Practices:

  • Clear Selection Criteria: "Choose the most relevant function from the list. If the user's question is not clear, ask for clarification"
  • Formatting Restrictions: "Avoid using markdown-style table formatting or visualizations in chat. If a user asks for a chart or table, run a Skill"
  • Communication Guidelines: "Explain to the user what you are doing and why as you run the skill"
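
For instance, these practices can be combined into a single guidance block (illustrative wording):

Example:

Choose the most relevant function from the list; if the user's question is not clear, ask for clarification. Avoid markdown-style table formatting or visualizations in chat; if the user asks for a chart or table, run a Skill. As you run the skill, explain to the user what you are doing and why.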

📝

Note

After this stage, the assistant can no longer ask the user for additional information before executing the skill.

3. Parameter Selection Response Guidance

Instructions for parameter identification and handling missing information.

Best Practices:

  • Default Handling: "When the user does not provide a time frame, use the most recent relevant time frame available in the data"
  • Autonomous Decision Making: "If the capabilities and sample questions above indicate a clear choice, proceed with the tool"
  • Parameter Priority: Guide which parameters to prioritize when multiple options exist
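
For instance (illustrative wording):

Example:

When the user does not provide a time frame, use the most recent relevant time frame available in the data. If the capabilities and sample questions above indicate a clear choice, proceed with the tool rather than asking the user.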

4. Answer Response Guidance

Controls the final response formatting and user communication.

Best Practices:

  • Transparency: "Clarify any adjustments made to expected parameters. For example, if the user asked for 'this year' but only last year's data was available, be clear about the adjusted time period"
  • Next Steps: Include conversational suggestions for follow-up analysis
  • Context Preservation: Ensure responses acknowledge the full conversation context
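
For instance (illustrative wording):

Example:

Clarify any adjustments made to expected parameters; for example, if the user asked for "this year" but only last year's data was available, state the adjusted time period clearly. Close with a brief, conversational suggestion for a follow-up analysis.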

Skill Configuration

Navigate to Skill Studio → Skills → Properties to configure skill-level properties visible to the LLM.

Skill-Level Properties

1. Name for LLM

A unique, descriptive identifier for the skill.

2. Description for LLM

Clear explanation of the skill's capabilities and intended use.
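
An illustrative description (hypothetical skill; substitute your own capabilities):

Example: "Analyzes sales performance by brand, region, or time period and returns a chart with key insights."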


3. Capabilities

  • Types of analysis supported
  • Visualization formats available
  • Insight categories provided

4. Limitations

Explicit statement of what the skill cannot do.

5. Example Questions

  • Use placeholders with square brackets instead of actual values
    • Prevents LLM bias toward example values
  • Format: "Show me [metric] by [dimension] for [time period]"
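
For example, "Show me Sales by Brand for 2024" as a sample question can anchor the LLM to those exact values, while the placeholder form above avoids that bias.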

6. Parameter Guidance

  • Clarify parameter hierarchy and priority
    • Example: "Use 'brand_family' unless user specifically mentions individual brands, then use 'brand'"

Variable-Level Properties

1. Variable Name

  • Use descriptive, consistent naming (standard variables: 'metrics', 'breakouts', 'growth_type')
  • Give the same variable name an identical definition in every skill that uses it

2. LLM Description

  • Clear explanation of the variable's purpose and expected values
    • Include format requirements and constraints
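
An illustrative variable definition (the name and values are placeholders, not defaults):

Variable Name: metrics
LLM Description: One or more measures to analyze, such as [sales] or [units], provided as a comma-separated list. Values must match metric names defined in the dataset.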

Dataset Configuration

Configure dataset properties in Data → Dataset → Columns:

Column Properties

1. Name

  • The default name from the database can be overridden with a clearer one
  • Use business-friendly terminology

2. Description

  • Explain the metric/dimension's business meaning
  • Include calculation methods if relevant
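
An illustrative description (hypothetical column):

Example: "Net sales in USD, excluding returns and discounts; aggregated weekly."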

3. Sample Limit (Dimensions only)

  • Controls the maximum number of example values shown to the LLM
  • Recommendation: show all values when a dimension has fewer than 50 distinct values
  • The limit can be raised if the dataset contains few high-cardinality dimensions
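
As a rough illustration of that recommendation (an assumed heuristic, not Max's actual logic):

```python
def choose_sample_limit(distinct_values: int, default_limit: int = 50) -> int:
    """Decide how many example values of a dimension to show the LLM."""
    # Fewer than 50 distinct values: show them all, so the LLM can match
    # user phrasing to exact member names.
    if distinct_values < 50:
        return distinct_values
    # Otherwise cap the examples; raise the cap only if the dataset has
    # few high-cardinality dimensions competing for prompt space.
    return default_limit
```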

Testing Strategy

Skip Skill Runs Feature

Test prompt changes without functional skills using the Test Suite's "Skip Skill Runs" feature. This enables rapid iteration on prompt configurations.


📘

Limitation

Answer Response Guidance cannot be effectively tested using Skip Skill Runs.

Recommended Test Collections

1. Functional Questions

Very specific queries for baseline testing (e.g., "Sales by Brand in 2025")

2. Business-Provided Questions

Exact questions from actual business users

3. Past Failures

Previously failed questions that have been resolved

4. Period Testing

Tests for temporal interpretation challenges

5. Smoke Test

Small, representative set for post-migration validation

6. Full Regression

Comprehensive test running all collections

Conversation Testing

📝

New Feature (as of June 2025)

Test collections now include full conversation history, enabling testing of multi-turn conversations and context retention.

Use the main Chat UI for conversational testing to:

  • Validate context utilization
  • Test follow-up questions
  • Ensure natural conversation flow
  • Verify the assistant doesn't ask the user unnecessary clarifying questions

Common Issues and Solutions

1. Charts/Markdown in Chat Response

Solution: Configure in Skill Selection Response Guidance and Answer Response Guidance

2. Breakout and Filter on Same Dimension

Solution: Add clarification in Parameter Guidance

3. Ambiguous Metric Requests

Solution: Define defaults in Parameter Guidance and Parameter Description

4. Skill Limitations/Guardrails

Solution: Clearly define in Skill Limitations and Skill Response Guidance

5. Period Interpretation

Solution: Specify handling in Parameter Description and Parameter Guidance

6. Dimension Priority with High Cardinality

Solution: Guide priority in Parameter Guidance

7. Required Filters

Solution: Document in Parameter Guidance at skill level

8. Preventing Calculation Attempts

Solution: Block in Skill Selection and Answer Response Guidance

9. Ensuring Appropriate Skill Execution

Solution: Clear Capabilities and Limitations definitions

10. Out-of-Scope Questions

Solution: Handle in Skill Response Guidance

11. Fiscal Calendar Logic

Solution: Document in Parameter Guidance for the skill
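
For instance (illustrative guidance, assuming a July fiscal year start):

Example: "The fiscal year begins July 1. Interpret 'FY2025' as July 1, 2024 through June 30, 2025, and 'this fiscal year' as the fiscal year containing today's date."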

Implementation Checklist

Stage One: Initial Setup

Can begin once customer information is provided

  • Configure LLM Keys (Chat and Embeddings) with the latest approved models

  • Establish data connections (live connection, CSV, etc.)

Stage Two: Data Configuration

Can begin once data is accessible in Max

  • Create dataset with descriptive names and descriptions for all columns

  • Determine example limits for each dimension

  • Populate all LLM-visible fields:
    • Prompt Variables
    • Skill Properties
    • Dataset Properties

  • Create question collections with asserted skills

  • Design landing page with relevant starter questions

  • Create placeholder skills if skill code is not yet complete

Stage Three: Testing and Refinement

Can begin once skills are configured in the Assistant

  • Execute test collections and iterate using diagnostics

  • Perform conversational testing in chat

  • Validate both prompt efficacy and skill accuracy

  • Review and refine based on test results

Best Practices Summary

  1. Stage-Specific Guidance: Remember that response guidance is only visible during its specific pipeline stage
  2. Clear Examples: Use placeholders in example questions to prevent bias
  3. Comprehensive Testing: Include both individual question and conversational testing
  4. Iterative Refinement: Use diagnostics to continuously improve prompt configuration
  5. Business Alignment: Incorporate actual business user questions in testing
  6. Documentation: Maintain clear descriptions for all configurable elements

Troubleshooting


If you encounter issues with prompt behavior:

  1. Check pipeline stage visibility using diagnostics
  2. Verify all relevant fields are populated
  3. Test with Skip Skill Runs for rapid iteration
  4. Review conversation context in chat testing
  5. Consult diagnostics for each pipeline stage

For additional support or to suggest template modifications, contact the Product team.