OpenManus + Requesty: Your Gateway to 150+ Models
Mar 10, 2025

There’s a new wave of AI tools that promise incredible flexibility without gatekeeping or invite codes—and OpenManus sits right at that sweet spot. Born in just three hours of rapid prototyping (and growing every single day), OpenManus aims to be the most accessible AI agent system, letting you bring any idea to life with minimal friction.
Meanwhile, Requesty is quickly becoming the aggregator for AI APIs—think of it as a single router that unlocks 150+ different language models (from GPT variants to Claude, and beyond). Put the two together, and you’ve got a ridiculously powerful, easy-to-deploy AI agent that can flex across dozens of model backends.
If you’ve been hearing a ton of hype around “Manus” lately—an AI agent from China that’s reportedly automating everything from SNS analysis to financial transactions—OpenManus is the open-source, no-invite version that you can spin up for yourself, hooking into the large language models (LLMs) of your choice. With Requesty + OpenManus, you become a one-person AI powerhouse:
No gatekeepers.
Access to 150+ model flavors.
A single config file to rule them all.
Below, we’ll walk you through how to get started, show you the config snippet that makes it all possible, and share why we think the synergy between OpenManus and Requesty is about to blow up.
1. Why OpenManus?
OpenManus is a straightforward, hackable AI agent built by a handful of contributors from MetaGPT. It’s intentionally designed to be simple yet extensible. Here’s what sets it apart:
No Invite Code: A major plus if you’ve been waiting for access to specialized GPT-style or “Manus-level” tools. Just clone the repo, configure your environment, and run.
Fast Prototyping: The dev team built the prototype in 3 hours, and they continue shipping new features nearly every day. It’s not a behemoth with complicated enterprise gates—this is open-source, iterative, and community-driven.
Agent Autonomy: Inspired by the original Manus concept, OpenManus aims for sophisticated multi-step reasoning and self-correction. It’s not just a chatbot; it’s an AI that can plan, break down tasks, research, and iterate.
Reinforcement Learning on the Roadmap: The team is collaborating with UIUC on OpenManus-RL for RL-based tuning. That means more robust, specialized agents that can learn from their own successes and failures over time.
But powerful as it is, OpenManus is only as good as the LLM behind it. That’s where Requesty swoops in.
2. Why Hook Into Requesty?
Requesty is a smart LLM router that abstracts away individual provider APIs—OpenAI, Anthropic, local models, and more—under a single endpoint: https://router.requesty.ai/v1. With one Requesty API key, you tap into 150+ models.
Key highlights:
One Base URL, Many Models: Instead of juggling different endpoints for different LLM providers, you just set a different model parameter in your Requesty calls.
Transparent Rate Limiting: Requesty helps you gracefully handle rate limits with built-in fallback or queueing. You can also set your own usage constraints.
Cost Management: Because Requesty aggregates multiple providers, you can easily pivot to more cost-effective (or more powerful) models without rewriting your code.
No More “New API Release Frenzy”: Whenever a new LLM emerges, Requesty often integrates it behind the scenes. You can experiment immediately through the same router endpoint.
Put differently, OpenManus is your agent “brain,” while Requesty is the switchboard that routes your tasks to the best large language model for the job.
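As a sketch of what that looks like in practice, here's a raw call to the router (the model name is just an example and $REQUESTY_API_KEY is a placeholder for your key; any model Requesty exposes works the same way):

```shell
# One endpoint for every model: only the "model" field changes.
curl https://router.requesty.ai/v1/chat/completions \
  -H "Authorization: Bearer $REQUESTY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "anthropic/claude-3-5-sonnet",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```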
3. Quickstart Guide: OpenManus + Requesty
Ready to spin up your own agent?
Step 1: Clone & Install OpenManus
You can use conda or uv for your environment setup. We’ll show the recommended uv method:
Step 2: Configure Your config.toml for Requesty
In the config directory, copy config.example.toml to config.toml:
Open config.toml in your favorite editor and update the [llm] section:
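Here's a minimal example. The field names follow OpenManus's config.example.toml; the model name and key below are placeholders you'll replace with your own:

```toml
[llm]
model = "openai/gpt-4o"                      # any model Requesty exposes
base_url = "https://router.requesty.ai/v1"   # the Requesty router endpoint
api_key = "YOUR_REQUESTY_API_KEY"            # your Requesty key
max_tokens = 4096
temperature = 0.0
```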
By switching out model, you can effortlessly move between different LLMs. Want a local model? Requesty may support that too—just pick the relevant model name. It's basically infinite variety through one endpoint.
Step 3: Run OpenManus
Now, just do:
This starts a simple prompt-based interface in your terminal. Type in your question or instruction and watch OpenManus use the LLM you configured in config.toml—routed by Requesty.
Voilà! You’re running a fully open, fully flexible AI agent that can plug into 150+ potential models.
4. What Can You Do With an OpenManus Agent?
Multi-Step Tasking
OpenManus isn’t just a single prompt-response cycle. It’s an agent that can break down tasks. For instance, if you say:
“Research the top five trending technologies for 2025, summarize them, and then propose a marketing plan for launching a product in that space.”
OpenManus can chain sub-requests under the hood:
Figure out “top trending technologies” (talk to the LLM).
Summarize them (ask LLM again).
Propose a marketing plan (final step).
All of this is handled seamlessly behind the scenes. You get the final result in a single cohesive answer.
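The chaining above can be sketched against Requesty's OpenAI-compatible HTTP API. This is an illustration, not OpenManus's internal code; the model name and the ask helper are assumptions, and the key comes from your environment:

```python
import os

import requests

REQUESTY_URL = "https://router.requesty.ai/v1/chat/completions"
API_KEY = os.environ.get("REQUESTY_API_KEY", "YOUR_KEY")  # placeholder


def ask(prompt: str, model: str = "openai/gpt-4o") -> str:
    """Send one chat request through the Requesty router."""
    resp = requests.post(
        REQUESTY_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # The three sub-steps from the example above, chained in sequence
    trends = ask("List the top five trending technologies for 2025.")
    summary = ask(f"Summarize these technologies:\n{trends}")
    plan = ask(f"Propose a marketing plan for a product in this space:\n{summary}")
    print(plan)
```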
Run Complex Automations
If you connect the agent to other tools or APIs, it can read your emails, generate responses, handle scheduling, or parse CSV files. With a bit of Python glue code, it can even orchestrate real-world tasks like sending data to CRMs or performing batch data analysis.
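As a toy illustration of that glue code (the summarize_csv helper is hypothetical, not an OpenManus API), here's the kind of small tool you might expose to the agent:

```python
import csv
from io import StringIO


def summarize_csv(raw: str) -> dict:
    """Toy glue: parse a CSV string and total each numeric column,
    returning the kind of structured result you'd hand back to an agent."""
    rows = list(csv.DictReader(StringIO(raw)))
    totals: dict[str, float] = {}
    for row in rows:
        for key, value in row.items():
            try:
                totals[key] = totals.get(key, 0.0) + float(value)
            except ValueError:
                pass  # skip non-numeric columns
    return {"rows": len(rows), "totals": totals}


data = "region,sales\nnorth,100\nsouth,250\n"
print(summarize_csv(data))  # {'rows': 2, 'totals': {'sales': 350.0}}
```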
Switch LLM Providers Instantly
Sometimes you need the nuance of Anthropic's Claude. Other times, an image-based GPT model is your jam. Or maybe you want local inference for data privacy. Because OpenManus uses your [llm] config to route calls, all you do is update the model name. No rewriting the agent logic, no hacking in a new library. Quick, modular, flexible.
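For example (model names here are illustrative; check Requesty's model list for exact identifiers), switching providers is a one-line change in config.toml:

```toml
[llm]
# model = "openai/gpt-4o"
model = "anthropic/claude-3-5-sonnet"        # swap providers with one line
base_url = "https://router.requesty.ai/v1"   # unchanged
api_key = "YOUR_REQUESTY_API_KEY"            # unchanged
```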
5. Why the Hype? (And Why It’s Deserved)
The AI world is buzzing about "Manus" in China—an advanced agent that apparently automates 50 tasks out-of-the-box, from financial transactions to SNS analytics. It's rumored to be more accurate than some mainstream LLM solutions, and people talk about it like it's an unstoppable productivity engine.
OpenManus aspires to deliver that same multi-function agent model—but fully open-source. No locked-off invites. No endless waitlists. If you know your way around pip install, you can use it right now.
By hooking it up to Requesty, you unlock an entire menu of models at your disposal, from GPT-3.5/4 derivatives to specialized open models for code, text, or images. In effect, OpenManus becomes the “glue” that unifies everything.
6. Next Steps & How to Contribute
Experiment: Try a wide range of tasks in your local environment—something as mundane as generating email drafts or as epic as building a personal knowledge hub.
Report Issues or Ideas: If you find bugs, jump onto the OpenManus GitHub issues page. The dev team is all about collaboration.
Extend the Agent: Do you have custom tools? Want to add browser automation or voice recognition? Check out the app directory in OpenManus to see how you can integrate your own modules.
Join the Community: The OpenManus team has an active Feishu group (see the repo for instructions), or you can chat with them on Discord.
7. A Glimpse at the Future
Between the unstoppable wave of new LLMs and the community-driven approach behind OpenManus, the future is bright—and it’s wide open. We’re seeing glimpses of an era where multi-agent intelligence truly becomes the standard: your AI doesn’t just respond to requests, it orchestrates entire processes.
In the midst of that shift, hooking into a universal LLM router like Requesty is a no-brainer. Why lock yourself to a single model or service? With a single config change, you can pivot to whichever model suits your use case best—whether it’s GPT-4 or the hottest new RLHF-enabled system tomorrow.
So if you want to harness the unstoppable “Manus wave” while staying vendor-agnostic and invite-free, OpenManus + Requesty is the killer combo. Fire it up, feed it your wildest to-do list, and watch your new agent co-pilot get it done.
Final Thoughts
The AI community has been hunting for an agent that’s both advanced and truly open. OpenManus fits that bill. And by merging it with Requesty’s router, you get a best-of-all-worlds synergy—rapid iteration, a buffet of model options, and a future-proof approach to AI development.
Go forth, experiment, and create something incredible.
With OpenManus + Requesty, there’s no limit to the AI tasks you can automate.
Further Reading & Links
OpenManus GitHub – official repo, docs, and updates.
Requesty Docs – check your usage, rate limits, and advanced features.
Feishu Group – join the official group to collaborate, share experiences, or show off that cool new use case.
OpenManus-RL – keep an eye on the upcoming Reinforcement Learning branch for next-level agent tuning.