An automatic agent optimization framework for generating, evaluating, and selecting top-performing agents.
Define your task, and let Meta Loop discover the best-performing agent for it.
Explore the Docs » · Report a Bug »
Meta Loop is an imperative, define-by-run framework designed to simplify agent development. It generates candidate agent "probes" with varied configurations (16 in the examples below), evaluates them against customizable metrics, and selects the best-performing agent for your task. Whether you're analyzing data, predicting outcomes, or automating workflows, Meta Loop optimizes the process for you.
- Automated Agent Generation
- Creates 16 unique agent "probes" with diverse configurations tailored to your task.
- Customizable Evaluation
- Scores probes using built-in metrics (e.g., accuracy, speed) or your own custom evaluation function.
- Framework Flexibility
- Built with Pydantic-AI by default, with seamless support for frameworks like AutoGen or CrewAI.
- Extensible Design
- Easily adapt agent roles, tools, and evaluation criteria to suit your specific needs.
Install Meta Loop with a single command:
pip install meta_loop
Get started in just a few lines of code. The examples below showcase Meta Loop's capabilities.
Basic Usage
Generate an agent to analyze customer reviews and predict sentiment:
import meta_loop
# Build and optimize an agent
best_agent = meta_loop.build_agent(
    instruction="Create an agent to analyze customer reviews and predict sentiment.",
    probe_count=16,
    framework="crewai"
)
# View the selected agent's details
print(best_agent.details)
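Because Meta Loop is built on Pydantic-AI by default, the framework argument can also be left out; the sketch below assumes that omitting it simply falls back to the Pydantic-AI backend.
import meta_loop

# Minimal sketch: no `framework` argument, assuming the default Pydantic-AI backend is used
best_agent = meta_loop.build_agent(
    instruction="Create an agent to analyze customer reviews and predict sentiment.",
    probe_count=16
)
print(best_agent.details)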
Define your own scoring logic to evaluate probes:
import meta_loop
import random
# Custom evaluation function
def custom_eval(trial: meta_loop.Trial) -> float:
    return random.random()  # Replace with your own metric
# Build an agent with custom evaluation
best_agent = meta_loop.build_agent(
    instruction="Create an agent to analyze customer reviews and predict sentiment.",
    probe_count=16,
    framework="crewai",
    eval_fn=custom_eval
)
print(best_agent.details)
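The random score above is only a placeholder. A real evaluation function would inspect the trial's result; the sketch below assumes a hypothetical trial.output attribute holding the probe's predicted sentiment score, since the Trial fields are not documented here.
import meta_loop

# Sketch of a more meaningful metric. `trial.output` is a hypothetical attribute;
# check the Trial API for the actual field names before using this.
def sentiment_error_eval(trial: meta_loop.Trial) -> float:
    expected = 0.9                          # expected score for a known review (example value)
    predicted = float(trial.output)         # hypothetical: the probe's predicted sentiment
    return 1.0 - abs(expected - predicted)  # higher is better; 1.0 means a perfect match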
Provide a dataset to evaluate agents against specific inputs and expected outputs:
import meta_loop
# Build an agent with a test dataset
best_agent = meta_loop.build_agent(
    instruction="Create an agent to analyze customer reviews and predict sentiment.",
    probe_count=16,
    framework="crewai",
    test_dataset=meta_loop.dataset(
        ("Great product, love it!", 0.9),  # (input, expected_score)
        ("Terrible service, very disappointed.", 0.2),
        ("It's okay, nothing special.", 0.5)
    )
)
print(best_agent.details)
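Since eval_fn and test_dataset are both ordinary keyword arguments to build_agent, they can presumably be combined, with your metric scoring each probe's behaviour on the dataset. The sketch below treats that combination as an assumption rather than documented behaviour.
import meta_loop
import random

def custom_eval(trial: meta_loop.Trial) -> float:
    return random.random()  # placeholder; replace with your own metric

# Assumption: eval_fn and test_dataset may be supplied together, with the custom
# metric used to score each probe against the dataset entries.
best_agent = meta_loop.build_agent(
    instruction="Create an agent to analyze customer reviews and predict sentiment.",
    probe_count=16,
    framework="crewai",
    eval_fn=custom_eval,
    test_dataset=meta_loop.dataset(
        ("Great product, love it!", 0.9),
        ("Terrible service, very disappointed.", 0.2)
    )
)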
- Define Your Task: Provide a task description (e.g., "analyze customer reviews").
- Generate Probes: Meta Loop creates the requested number of agent configurations (probe_count) with varying parameters.
- Evaluate Performance: Each probe is scored based on your chosen metrics or dataset.
- Select the Best: The top-performing agent is returned, ready for use (see the sketch below).
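What "ready for use" looks like depends on the underlying framework. The sketch below assumes a hypothetical run method on the returned agent; check the Meta Loop docs or the backing framework (Pydantic-AI, AutoGen, CrewAI) for the actual invocation API.
# Hypothetical usage of the selected agent; `run` is an assumed method name,
# not part of the documented meta_loop API.
result = best_agent.run("The checkout flow was confusing, but support was helpful.")
print(result)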
For more details, check out the official documentation (coming soon!).
We welcome contributions! Please see our contribution guidelines and feel free to submit issues or pull requests.
Have questions? Join our community or reach out:
- Discord (coming soon!)
- GitHub Issues