Creating Agents - Troubleshooting
Model Configurations
Choosing the appropriate AI model for your agent is essential to balancing performance, response quality, and speed for your specific use case and its complexity. The right model keeps your agent efficient, with neither unnecessary overhead nor insufficient capability, and delivers good results for your users while keeping response times manageable.
What to do: Use GPT-4o Mini or GPT-4o for lightweight agents with one or two actions and minimal complexity; these models provide fast, efficient responses for straightforward tasks. Switch to OpenAI o1-mini when your agent involves more complex actions, multi-step workflows, or sophisticated objectives that require more processing. Deploy o1 (the full reasoning model) when your agent needs advanced reasoning, complex problem-solving, or deep analytical thinking. For most general-purpose applications, start with GPT-4o and set the response variability (temperature) between 0.6 and 0.7 to achieve the best balance of creativity and consistency. Test your agent with different models to find the optimal fit for your specific requirements.
What not to do: Avoid using advanced reasoning models like o1 or o1-mini for simple tasks where they are unnecessary; more capable reasoning models respond more slowly, which can hurt the user experience.
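As a rough illustration, the sketch below shows how these model and temperature choices might look when calling a model directly through the OpenAI Python SDK. Your agent platform may expose the same settings in its own configuration screen instead; the system prompt and user message here are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Lightweight agent: a small, fast model with a moderate temperature
# (0.6-0.7 balances creativity and consistency, per the guidance above).
response = client.chat.completions.create(
    model="gpt-4o-mini",   # or "gpt-4o" for general-purpose agents
    temperature=0.7,
    messages=[
        # Placeholder prompts for illustration only.
        {"role": "system", "content": "You are a support agent for an online store."},
        {"role": "user", "content": "Where can I find my invoices?"},
    ],
)
print(response.choices[0].message.content)

# Note: reasoning models such as o1 and o1-mini may not accept a custom
# temperature value, so this setting mainly applies to the GPT-4o family.
```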
Clear Prompts
Crafting clear prompts is critical to ensuring the AI works as expected and delivers accurate, relevant results. Clear prompts reduce ambiguity, save time by minimizing back-and-forth clarification, and help the AI understand your specific needs and context.
What to do: Be specific about what you want, provide relevant context and background information, break complex requests into smaller steps, specify the desired format or structure of the response, and include examples when possible to illustrate your expectations. Use precise language and define any specialized terms or acronyms that might be ambiguous.
What not to do: Avoid vague or overly broad requests that could be interpreted multiple ways, don't assume the AI knows unstated context or previous conversations (unless they happened in the same session), refrain from using pronouns without clear antecedents, and don't combine too many unrelated requests in a single prompt.
Additionally, avoid conflicting instructions or requirements that contradict each other, as this will confuse the AI and lead to suboptimal results.
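To make the contrast concrete, here is a hypothetical before-and-after pair. The report, region, and word limit are invented purely for illustration.

```python
# Vague: could be interpreted many ways and assumes unstated context.
vague_prompt = "Write something about the report."

# Clear: specific goal, context, scope, and output format.
clear_prompt = (
    "Summarize the attached Q3 sales report for a non-technical audience. "
    "Context: the report covers our EMEA region only. "
    "Requirements: highlight the three largest revenue changes versus Q2 "
    "and keep the summary under 150 words. "
    "Format: a short paragraph followed by a bulleted list of the three changes."
)
```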
Formatting and Styling
Formatting and styling your prompts strategically can significantly improve how the AI interprets and prioritizes different parts of your request. Visual formatting helps create hierarchy, draws attention to critical information, and organizes complex instructions into digestible sections, making it easier for the AI to parse your intent and deliver structured responses.
What to do: Use bold text to emphasize key requirements, critical constraints, or the most important parts of your request. Employ CAPITALS sparingly for words or phrases that require absolute attention or represent non-negotiable requirements. Use separators like dashes (---), asterisks (***), or section headers to break your prompt into distinct parts such as "Context," "Requirements," "Constraints," and "Desired Output." Leverage bullet points or numbered lists to organize multiple items, steps, or criteria. Use quotation marks to specify exact phrases or terminology you want included or referenced. Consider using line breaks to create white space and improve readability for longer prompts.
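The sketch below shows one way these formatting techniques might be combined in a single prompt. The section names, requirements, and subject matter are only examples, not a required template.

```python
formatted_prompt = """
--- Context ---
You are drafting release notes for a **developer-facing** CLI tool.

--- Requirements ---
1. List new features first, then bug fixes.
2. Use the exact phrase "breaking change" for anything that alters existing behavior.
3. Keep each bullet under 20 words.

--- Constraints ---
- Do NOT include internal ticket numbers.
- Output MUST be valid Markdown.

--- Desired Output ---
A Markdown document with the headings "Features" and "Fixes".
"""
```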