System Prompt
System prompt management provides granular control over the system prompts used throughout the conversation pipeline, enabling precise tuning of AI behavior at each interaction stage.
Architecture Overview
The system prompt management feature operates across three distinct phases of the conversation pipeline:
- Function Selection
- Parameter Selection
- Answer Generation
Each phase can be independently customized to optimize the AI's behavior for specific use cases and requirements.
Available Variables
Temporal Variables
- {{copilot_dataset_end_date}}: Dataset cutoff date in YYYY-MM-DD format
- {{today}}: Current date in YYYY-MM-DD format
- {{quarter}}: Current calendar quarter (1-4)
Context Variables
- {{hints}}: Contextual hints for the current question
- {{user__full_query}}: Original user question
- {{user_chat_question_with_context}}: Contextualized version of the user question
- {{stage}}: Current pipeline stage ("Function Selection" or "Parameter Selection")
System State Variables
- {{active_tool}}: Current skill information
  - Name
  - Description
- {{datasets}}: Available dataset information
  - Name
  - Description
  - Dimensions
  - Sample values
  - Mapped values
  - Metrics
- {{pipeline.tools}}: Available skills
  - Name
  - Description
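The sketch below shows how these variables might be substituted into a prompt template. It assumes simple Mustache-style {{name}} replacement, and every value in the sample context is invented for illustration; the platform's actual templating engine and runtime values may differ.

```python
import re

# Hypothetical sample context -- the variable names come from the lists above,
# but every value here is made up purely for illustration. System state
# variables such as {{datasets}} and {{pipeline.tools}} are omitted for brevity.
sample_context = {
    "copilot_dataset_end_date": "2024-03-31",
    "today": "2024-04-15",
    "quarter": 2,
    "hints": "Revenue figures are reported in USD.",
    "user__full_query": "How did revenue trend last quarter?",
    "user_chat_question_with_context": "How did revenue trend in Q1 2024?",
    "stage": "Function Selection",
}

def render_prompt(template: str, context: dict) -> str:
    """Replace {{variable}} placeholders with values from the context.

    A simplified stand-in for whatever templating engine the platform
    actually uses; it only handles flat {{name}} substitution.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        return str(context.get(name, match.group(0)))  # leave unknown names untouched

    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

print(render_prompt("Current date: {{today}} (Q{{quarter}})", sample_context))
# -> Current date: 2024-04-15 (Q2)
```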
Pipeline Stages
1. Function Selection
Purpose
Customize how the system interprets user queries and selects appropriate functions based on available skills.
Example Usage
During {{stage}}, evaluate {{user__full_query}} against available tools:
{{pipeline.tools}}
Consider these hints for context:
{{hints}}
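As a companion to the example above, the sketch below shows one way to sanity-check a selected function against the skills exposed through {{pipeline.tools}}. The tool names and data shape are assumptions, not the platform's actual structures.

```python
# Hypothetical check that a selected tool is actually one of the available skills.
available_tools = [
    {"Name": "trend_analysis", "Description": "Analyze metric trends over time"},
    {"Name": "top_n", "Description": "Rank entities by a chosen metric"},
]

def is_valid_selection(selected_tool_name: str, tools: list[dict]) -> bool:
    """Return True if the chosen tool exists among the available skills."""
    return any(tool["Name"] == selected_tool_name for tool in tools)

assert is_valid_selection("trend_analysis", available_tools)
assert not is_valid_selection("forecasting", available_tools)
```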
2. Parameter Selection
Purpose
Define specific prompting for parameter identification and validation based on the selected function.
Example Usage
For {{active_tool.Name}}, identify required parameters from:
{{user_chat_question_with_context}}
Available datasets:
{{datasets}}
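The following sketch illustrates the kind of validation this stage supports: checking extracted parameters against a dataset's dimensions and metrics. The dataset definition mirrors the {{datasets}} fields listed earlier, but all concrete names and values are hypothetical.

```python
# Hypothetical validation of extracted parameters against a dataset schema.
dataset = {
    "Name": "sales",
    "Description": "Monthly sales by region and product",
    "Dimensions": ["region", "product"],
    "Metrics": ["revenue", "units_sold"],
}

extracted_params = {"dimension": "region", "metric": "revenue"}

def validate_params(params: dict, ds: dict) -> list[str]:
    """Collect human-readable errors for parameters that don't match the dataset."""
    errors = []
    if params.get("dimension") not in ds["Dimensions"]:
        errors.append(f"Unknown dimension: {params.get('dimension')!r}")
    if params.get("metric") not in ds["Metrics"]:
        errors.append(f"Unknown metric: {params.get('metric')!r}")
    return errors

print(validate_params(extracted_params, dataset))  # -> []
```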
3. Answer Generation
Purpose
Control how responses are formulated and presented to users.
Example Usage
Generate response using data until {{copilot_dataset_end_date}}
Current quarter: {{quarter}}
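The sketch below shows how the temporal variables relate to one another and how a response can be kept within the dataset cutoff. The cutoff date is a made-up example; in practice {{copilot_dataset_end_date}} is supplied by the pipeline.

```python
from datetime import date

# Illustrative computation of the temporal variables, plus a guard that keeps
# requested dates within the dataset cutoff. The cutoff value is invented.
copilot_dataset_end_date = date.fromisoformat("2024-03-31")  # {{copilot_dataset_end_date}}
today = date.today()                                         # {{today}}
quarter = (today.month - 1) // 3 + 1                         # {{quarter}}

def clamp_to_cutoff(requested_end: date, cutoff: date) -> date:
    """Never answer with data beyond the dataset cutoff."""
    return min(requested_end, cutoff)

print(f"Today: {today}, quarter: {quarter}")
print(f"Effective end date: {clamp_to_cutoff(today, copilot_dataset_end_date)}")
```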
Dynamic Content Generation
The system automatically adapts prompts based on:
- Available Skills
  - Dynamically updates available functions
  - Maintains consistency with system capabilities
- Connected Datasets
  - Incorporates current data context
  - Ensures responses align with available data
- Context Requirements
  - Adapts to conversation flow
  - Maintains contextual relevance
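To make the idea concrete, here is a rough sketch of how the {{pipeline.tools}} and {{datasets}} sections could be assembled from structured system state before rendering. The data shapes and names are assumptions for illustration only.

```python
# Sketch of dynamic prompt assembly from structured system state.
tools = [
    {"Name": "trend_analysis", "Description": "Analyze metric trends over time"},
    {"Name": "top_n", "Description": "Rank entities by a chosen metric"},
]
datasets = [
    {"Name": "sales", "Description": "Monthly sales by region and product"},
]

def format_entries(items: list[dict]) -> str:
    """Render each entry as a 'Name: Description' line for the prompt."""
    return "\n".join(f"- {item['Name']}: {item['Description']}" for item in items)

prompt_fragment = (
    "Available skills:\n" + format_entries(tools)
    + "\n\nConnected datasets:\n" + format_entries(datasets)
)
print(prompt_fragment)
```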
Best Practices
Variable Usage
- Always validate temporal variables against dataset limitations
- Use context variables to maintain conversation coherence
- Leverage system state variables for accurate function selection
Prompt Design
- Keep prompts focused and specific to each pipeline stage
- Include relevant context without overwhelming the system
- Use clear, unambiguous language
Performance Optimization
- Minimize redundant variable usage (see the sketch after this list)
- Structure prompts for efficient processing
- Regularly test and refine prompt effectiveness
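One possible way to act on these practices is to compare the placeholders a stage template references with the variables the pipeline will actually supply, flagging anything unused or unresolvable. The variable set and template below are illustrative.

```python
import re

# Compare referenced placeholders against supplied variables (illustrative sets).
supplied_variables = {"stage", "user__full_query", "pipeline.tools", "hints"}

template = """During {{stage}}, evaluate {{user__full_query}} against available tools:
{{pipeline.tools}}"""

referenced = set(re.findall(r"\{\{\s*([\w.]+)\s*\}\}", template))

print("Supplied but unused:", supplied_variables - referenced)
print("Referenced but never supplied:", referenced - supplied_variables)
```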
Implementation Example
Here's a complete example of stage-specific prompt configuration:
# Function Selection Stage
WHEN {{stage}} == "Function Selection":
  Given {{user__full_query}}, select the most appropriate tool from:
  {{pipeline.tools}}
  Consider data availability until {{copilot_dataset_end_date}}

# Parameter Selection Stage
WHEN {{stage}} == "Parameter Selection":
  For {{active_tool.Name}}, extract parameters from:
  {{user_chat_question_with_context}}
  Using available datasets:
  {{datasets}}
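A rough Python equivalent of this configuration, keyed by the {{stage}} value, is sketched below. It illustrates the dispatch idea only and is not the platform's actual configuration format.

```python
# Hypothetical mapping of pipeline stage to its prompt template.
STAGE_PROMPTS = {
    "Function Selection": (
        "Given {{user__full_query}}, select the most appropriate tool from:\n"
        "{{pipeline.tools}}\n"
        "Consider data availability until {{copilot_dataset_end_date}}"
    ),
    "Parameter Selection": (
        "For {{active_tool.Name}}, extract parameters from:\n"
        "{{user_chat_question_with_context}}\n"
        "Using available datasets:\n"
        "{{datasets}}"
    ),
}

def prompt_for_stage(stage: str) -> str:
    """Pick the template that matches the current pipeline stage."""
    return STAGE_PROMPTS[stage]

print(prompt_for_stage("Function Selection"))
```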
Troubleshooting
Common issues and their solutions:
- Inconsistent Function Selection
  - Review the function selection prompt
  - Verify tool descriptions are clear
  - Check hint relevance
- Parameter Extraction Issues
  - Validate dataset availability
  - Verify parameter format requirements
  - Check context preservation
- Response Generation Problems
  - Confirm temporal variable accuracy
  - Verify data freshness
  - Review context handling
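A quick first diagnostic for many of these issues is to confirm that every placeholder was resolved before the prompt reached the model, as in the hypothetical sketch below.

```python
import re

# The rendered prompt below is a fabricated example of a failure case.
rendered_prompt = "For {{active_tool.Name}}, extract parameters from: show me Q1 revenue"

unresolved = re.findall(r"\{\{\s*[\w.]+\s*\}\}", rendered_prompt)
if unresolved:
    print("Unresolved variables:", unresolved)  # e.g. ['{{active_tool.Name}}']
```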