Question Collections, Monitoring, and Testing Best Practices

From Monitoring to Collections

Seamless workflow (see the code sketch after this list):

  1. Identify issues: Use the monitoring dashboard to spot problematic classifications
  2. Investigate scope: Click "View All Questions" to see the complete question history
  3. Select related questions: Use filters and multi-select to identify affected questions
  4. Create test collections: Group questions into themed collections for follow-up testing
  5. Validate fixes: Use collections for systematic verification after issue resolution
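
If you prefer to script this workflow, the sketch below walks the same five steps over a REST API. It is illustrative only: the base URL, auth header, endpoints, and response fields are hypothetical assumptions, not documented API surface; map them onto your platform's actual API or SDK.

```python
import requests

BASE = "https://api.example.com/v1"            # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

# 1. Identify issues: pull the classifications surfaced by monitoring
classifications = requests.get(
    f"{BASE}/monitoring/classifications", headers=HEADERS
).json()

# 2. Investigate scope: fetch the full question history for one classification
auth = next(c for c in classifications if c["name"] == "Authentication Issues")
questions = requests.get(
    f"{BASE}/questions",
    params={"classification_id": auth["id"]},
    headers=HEADERS,
).json()

# 3. Select related questions: keep the ones your upcoming fix should resolve
#    (the "status" field is an assumed attribute, used only for illustration)
selected = [q["id"] for q in questions if q.get("status") != "resolved"]

# 4. Create a themed test collection for follow-up testing
collection = requests.post(
    f"{BASE}/collections",
    json={"name": "Auth Fix Validation - March 2024", "question_ids": selected},
    headers=HEADERS,
).json()

# 5. Validate fixes: run the collection after the fix ships and check results
run = requests.post(
    f"{BASE}/collections/{collection['id']}/runs", headers=HEADERS
).json()
print(f"{run['passed']}/{run['total']} questions passed")
```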

Example scenario:

  • Monitoring shows 15 questions classified as "Authentication Issues"
  • Click into the classification to see all affected questions
  • Multi-select the questions that should be resolved by your upcoming fix
  • Create an "Auth Fix Validation - March 2024" collection
  • Schedule regular tests on this collection to verify the fix (see the sketch below)
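
If scheduling is also exposed through the API, the last step might look like the snippet below. The `/collections/{id}/schedules` endpoint, the collection id, and the `cron` field are all hypothetical and shown only to make the idea concrete.

```python
import requests

BASE = "https://api.example.com/v1"            # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme
collection_id = "col_auth_fix_2024_03"         # hypothetical collection id

# Schedule a weekly run of "Auth Fix Validation - March 2024" so any
# regression surfaces automatically once the fix has shipped.
requests.post(
    f"{BASE}/collections/{collection_id}/schedules",
    json={"cron": "0 9 * * MON"},  # every Monday at 09:00
    headers=HEADERS,
)
```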

Advanced Filtering for Collection Building

Groups-based collections:

  • Filter questions by user groups (admin, UAT, training groups)
  • Create collections specific to user privilege levels
  • Test how different user types experience your assistant

Time-based collections:

  • Use date filtering to focus on recent issues
  • Build collections from specific incident time periods
  • Compare question patterns before and after changes

Classification-based collections:

  • Build collections from questions with specific classifications
  • Create validation suites for particular issue types
  • Organize testing around functional areas or problem categories (all three filter types are sketched below)
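
Built programmatically, all three approaches reduce to different query parameters on the same question search. The sketch below assumes hypothetical endpoints and parameter names (`groups`, `from_date`, `to_date`, `classification`); substitute your platform's actual query options.

```python
import requests

BASE = "https://api.example.com/v1"            # assumed base URL
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

def build_collection(name, **filters):
    """Create a collection from whatever question filters are supplied.

    The filter names used below are hypothetical; map them onto the
    query options your platform actually exposes.
    """
    questions = requests.get(
        f"{BASE}/questions", params=filters, headers=HEADERS
    ).json()
    return requests.post(
        f"{BASE}/collections",
        json={"name": name, "question_ids": [q["id"] for q in questions]},
        headers=HEADERS,
    ).json()

# Groups-based: how UAT users experience the assistant
build_collection("UAT Group Regression Tests", groups="UAT")

# Time-based: questions from a specific incident window
build_collection(
    "Login Outage - 12 March 2024",
    from_date="2024-03-12T00:00:00Z",
    to_date="2024-03-13T00:00:00Z",
)

# Classification-based: a validation suite for one issue type
build_collection(
    "Authentication Issues - March 2024",
    classification="Authentication Issues",
)
```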

Collection Organization Best Practices

Naming Conventions

Descriptive, purposeful names:

  • ✅ "Authentication Issues - March 2024"
  • ✅ "Post-Login-Fix Validation"
  • ✅ "UAT Group Regression Tests"
  • ❌ "Test Collection 1"
  • ❌ "Random Questions"

Include context that helps with the following (a small naming helper is sketched after this list):

  • Issue type or functional area
  • Time period or version relevance
  • User group or testing scope
  • Purpose (validation, regression, exploration)
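
One way to keep names consistent is to generate them from these same context fields. The helper below is only an illustrative sketch of that convention, not a platform feature.

```python
from datetime import date

def collection_name(area: str, purpose: str, scope: str | None = None,
                    period: date | None = None) -> str:
    """Compose a descriptive collection name from its context fields."""
    parts = [area]
    if scope:
        parts.append(scope)
    parts.append(purpose)
    parts.append(f"{period or date.today():%B %Y}")
    return " - ".join(parts)

print(collection_name("Authentication Issues", "Validation",
                      scope="UAT", period=date(2024, 3, 1)))
# -> Authentication Issues - UAT - Validation - March 2024
```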

Strategic Collection Types

Issue-Specific Collections:

  • Group questions by the type of problem they represent
  • Useful for focused testing after fixes
  • Easy to schedule for regular regression testing

User-Journey Collections:

  • Organize questions that represent complete user workflows
  • Test end-to-end experiences across your assistant
  • Validate that complex interactions work as expected

Validation Collections:

  • Questions specifically chosen to verify fixes or improvements
  • Pre- and post-fix comparison sets
  • Critical path testing for important functionality

Exploratory Collections:

  • Questions that represent edge cases or unusual requests
  • Help identify new potential issues
  • Support ongoing assistant improvement efforts