Billing & Usage

Q: How does billing work? A: You are only charged when you run a simulation. Each simulation run counts as one execution. We recommend starting with simple scenarios and gradually scaling up to understand costs.

Q: How can I monitor my usage? A: You can view your usage details on the Settings page, including:

  • Number of simulations run
  • Total execution time
  • Evaluator usage
  • Current billing period stats

Q: How can I control costs? A: We recommend:

  1. Starting with simple scenarios before scaling up
  2. Using test runs to validate your setup
  3. Monitoring your usage in Settings

Scenarios & Configuration

Q: How many personas should I include in a scenario? A: Start with 1-2 personas that represent your most common use cases. Add more personas once you’re comfortable with the results.

Q: What makes a good scenario description? A: The best scenario descriptions are specific and clear about what you’re testing. For example: “Test if the agent can correctly schedule a follow-up appointment while verifying patient information.”

Q: Can I reuse personas across different scenarios? A: Yes! You can use the same personas in multiple scenarios to keep your testing consistent across different situations.

Data Fields & Edge Cases

Q: How many data field variants should I include? A: Start with 2-3 common variants for each field. For example, for dates: “March 15, 2023”, “3/15/23”, and “next Wednesday”.
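
If it helps to see the idea laid out, here is a purely illustrative Python sketch (data fields are configured in the app itself; the field name and structure below are hypothetical, not the product’s format):

  # Illustrative only: the field name and list structure are hypothetical,
  # not Autoblocks' actual configuration format.
  appointment_date_variants = [
      "March 15, 2023",   # fully written out
      "3/15/23",          # numeric shorthand
      "next Wednesday",   # relative phrasing
  ]

  for variant in appointment_date_variants:
      print(f"Testing with appointment_date = {variant!r}")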

Q: Which edge cases should I test first? A: Begin with common edge cases like:

  • Interruptions
  • Unclear responses
  • Background noise
  • Multiple questions at once

Evaluators

Q: Do I need an OpenAI API key to use evaluators? A: Yes, you need to configure your OpenAI API key in Settings before using evaluators.

Q: What makes good evaluation criteria? A: Good evaluation criteria should be:

  • Specific and measurable
  • Focused on one aspect of behavior
  • Clear about what constitutes success

For example: “The agent confirms the patient’s full name and date of birth before sharing any appointment details.”

Q: Why did my evaluation fail? A: Check that:

  1. Your OpenAI API key is valid (a quick way to check this is shown below)
  2. Your criteria are clear and specific
  3. The conversation transcript is complete
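
For the first item, a quick way to confirm the key itself works outside the product is a minimal call with the official openai Python package (v1 or later), roughly like this:

  # Minimal sketch: confirms an OpenAI API key can authenticate by listing models.
  # Assumes the openai package (v1+) is installed; the key value is a placeholder.
  from openai import OpenAI

  client = OpenAI(api_key="sk-...")  # use the same key you entered in Settings
  try:
      client.models.list()
      print("API key is valid.")
  except Exception as exc:
      print(f"API key check failed: {exc}")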

Analysis & Results

Q: Can I share simulation results with my team? A: Yes, you can share specific executions and test runs with team members who have access to your organization.

Q: How can I search through my simulation results? A: Use our search syntax to find specific results:

  • output:text - Find executions whose output contains the given text
  • duration>100 - Filter executions by duration
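
For example (the values here are illustrative; substitute your own):

  • output:appointment - Executions whose output contains “appointment”
  • duration>120 - Executions whose duration is greater than 120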

Troubleshooting

Q: My simulation isn’t starting. What should I check? A: Verify that:

  1. Your scenario is completely configured
  2. All required fields are filled out
  3. You have sufficient permissions
  4. You’re within your usage limits

Q: The agent isn’t responding as expected. What can I do? A: Try:

  1. Reviewing your scenario description
  2. Checking data field variants
  3. Simplifying edge cases
  4. Running a test simulation with basic settings

Q: My evaluator keeps failing. How can I fix it? A: Ensure:

  1. Your OpenAI API key is properly configured
  2. Your evaluation criteria are clear and specific
  3. The scenario is properly configured
  4. You have sufficient API credits

Getting Help

Q: How can I get support? A: You can:

  1. Check these docs for guidance
  2. Email support@autoblocks.ai
  3. Request a shared Slack channel with your team