Billing & Usage
Q: How does billing work?
A: You are only charged when you run a simulation. Each simulation run counts as one execution. We recommend starting with simple scenarios and gradually scaling up to understand costs.

Q: How can I monitor my usage?
A: You can view your usage details in the Settings page, including:
- Number of simulations run
- Total execution time
- Evaluator usage
- Current billing period stats
Q: How can I keep costs under control?
A: A few tips:
- Start with simple scenarios before scaling up
- Use test runs to validate your setup
- Monitor usage in Settings
Scenarios & Configuration
Q: How many personas should I include in a scenario?
A: Start with 1-2 personas that represent your most common use cases. Add more personas once you’re comfortable with the results.

Q: What makes a good scenario description?
A: The best scenario descriptions are specific and clear about what you’re testing. For example: “Test if the agent can correctly schedule a follow-up appointment while verifying patient information.”

Q: Can I reuse personas across different scenarios?
A: Yes! You can use the same personas across multiple scenarios to ensure consistent testing across different situations.

Data Fields & Edge Cases
Q: How many data field variants should I include?
A: Start with 2-3 common variants for each field. For example, for dates: “March 15, 2023”, “3/15/23”, and “next Wednesday”.

Q: Which edge cases should I test first?
A: Begin with common edge cases like:
- Interruptions
- Unclear responses
- Background noise
- Multiple questions at once
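The variant and edge-case guidance above can be sketched as a simple data structure. The field names and overall shape here are illustrative assumptions, not the platform’s actual configuration format:

```python
# Hypothetical scenario configuration: 2-3 variants per data field,
# plus a starter set of common edge cases. Names are illustrative only.
scenario_config = {
    "data_fields": {
        "appointment_date": ["March 15, 2023", "3/15/23", "next Wednesday"],
        "phone_number": ["(555) 123-4567", "555-123-4567"],
    },
    "edge_cases": [
        "interruptions",
        "unclear responses",
        "background noise",
        "multiple questions at once",
    ],
}

# Sanity check: each field stays within the recommended 2-3 variants.
for field, variants in scenario_config["data_fields"].items():
    assert 2 <= len(variants) <= 3, field
```

Keeping variants in one place like this makes it easy to see at a glance whether a field is over- or under-covered before scaling the scenario up.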
Evaluators
Q: Do I need an OpenAI API key to use evaluators?
A: Yes, you need to configure your OpenAI API key in Settings before using evaluators.

Q: What makes good evaluation criteria?
A: Good evaluation criteria should be:
- Specific and measurable
- Focused on one aspect of behavior
- Clear about what constitutes success
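Because evaluators are LLM-based, a criterion ultimately becomes part of a judging prompt. The sketch below shows that idea with a criterion that is specific, single-aspect, and clear about success; the helper function and prompt wording are hypothetical, not the platform’s internal implementation:

```python
def build_evaluation_prompt(criterion: str, transcript: str) -> str:
    """Compose an LLM-as-judge prompt from one evaluation criterion.

    Hypothetical helper for illustration; the real evaluator's prompt
    format is internal to the platform.
    """
    return (
        "You are evaluating an agent conversation.\n"
        f"Criterion (pass/fail): {criterion}\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Answer 'pass' or 'fail' and briefly explain why."
    )

# A criterion meeting the checklist above: specific, measurable,
# focused on one aspect of behavior, clear about success.
criterion = "The agent verifies the patient's date of birth before scheduling."
prompt = build_evaluation_prompt(
    criterion, "Agent: Can you confirm your date of birth? ..."
)
```

A vague criterion like “the agent is helpful” would leave the judge guessing; a pass/fail condition tied to one observable behavior does not.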
Q: My evaluator isn’t working as expected. What should I check?
A: Verify that:
- Your OpenAI API key is valid
- Your criteria are clear and specific
- The conversation transcript is complete
Analysis & Results
Q: Can I share simulation results with my team?
A: Yes, you can share specific executions and test runs with team members who have access to your organization.

Q: How can I search through my simulation results?
A: Use our search syntax to find specific results:
- `output:text` to find specific output content
- `duration>100` to filter by duration
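To make the semantics of the two documented operators concrete, here is a minimal client-side sketch of how such a query could filter results. The parsing logic and result shape are illustrative assumptions, not the platform’s actual search implementation:

```python
def matches(query: str, result: dict) -> bool:
    """Return True if a result satisfies a single search term.

    Illustrates the two documented operators:
      output:text   -> substring match on the result's output
      duration>100  -> numeric threshold on the result's duration
    The result shape ({"output": str, "duration": float}) is an assumption.
    """
    if query.startswith("output:"):
        return query[len("output:"):] in result["output"]
    if query.startswith("duration>"):
        return result["duration"] > float(query[len("duration>"):])
    raise ValueError(f"unsupported query: {query}")

results = [
    {"output": "appointment scheduled", "duration": 42.0},
    {"output": "call dropped", "duration": 130.5},
]
slow = [r for r in results if matches("duration>100", r)]
```

Here `slow` contains only the second result, since 130.5 is the only duration above the threshold.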
Troubleshooting
Q: My simulation isn’t starting. What should I check?
A: Verify that:
- Your scenario is completely configured
- All required fields are filled out
- You have sufficient permissions
- You’re within your usage limits
Q: My simulation results don’t look right. What can I try?
A: Try:
- Reviewing your scenario description
- Checking data field variants
- Simplifying edge cases
- Running a test simulation with basic settings
Q: My evaluators aren’t running. What should I check?
A: Verify that:
- Your OpenAI API key is properly configured
- Your evaluation criteria are clear and specific
- The scenario is properly configured
- You have sufficient API credits
Getting Help
Q: How can I get support?
A: You can:
- Check these docs for guidance
- Email support@autoblocks.ai
- Request a shared Slack channel with your team