Supercharge OpenWebUI with Requesty (An Alternative to OpenRouter)

Mar 7, 2025

Try the Requesty Router and get $6 free credits 🔀

Join the Discord


OpenWebUI is a user-friendly interface that makes it easy to interact with AI models from your browser. If you’ve been using OpenRouter, or are looking for a reliable alternative that supports a wide range of models and multiple providers, Requesty is the perfect choice. We offer over 150 LLMs (like Anthropic Claude, Qwen, DeepSeek, and more) via a single endpoint—no complicated setup required.

Table of Contents

  1. Why Integrate Requesty with OpenWebUI?

  2. Sign Up for Requesty

  3. Configuring OpenWebUI

  4. Creating & Using Your API Key

  5. Verifying the Integration

  6. Monitoring Usage & Costs

  7. Wrap-Up

1. Why Integrate Requesty with OpenWebUI?

  • Access 150+ Models Instantly: From GPT to Claude and specialized open-source LLMs—all through one platform. Perfect for those who need more variety than what you might find on other services like OpenRouter.

  • Simple Setup: Just paste one URL and one API key; no extra installs.

  • Cost & Usage Tracking: Monitor token usage and spending across all your queries from the Requesty dashboard.

  • Logging & Configuration: Enable or disable request logs. Toggle caching or advanced features to optimize performance and reduce costs.

  • Provider-Agnostic: Requesty integrates with many providers and helps you avoid issues like partial outages that might occur elsewhere (e.g., “OpenRouter partial outage”).

2. Sign Up for Requesty

  1. Create an Account: Go to app.requesty.ai/sign-up and sign up if you haven’t already.

  2. Onboarding Page: Once you’re in, you’ll see an onboarding page with integrations, including OpenWebUI.

3. Configuring OpenWebUI

  1. Launch OpenWebUI: If you haven’t already, start your OpenWebUI server and open it in your browser.

  2. Go to “Admin Panel” → “Settings”: You’ll typically find the Admin Panel near the top-right or in the sidebar.

  3. Navigate to “Connections”: Look for a section that lets you configure “OpenAI” or “Custom” API URLs.

If you’re coming from an OpenRouter setup, simply replace the references to OpenRouter’s endpoint with Requesty’s.

4. Creating & Using Your API Key

  1. Create an API Key in Requesty

    • In the Requesty dashboard, go to “Manage API Keys.”

    • Click “Create API Key.” Name it, for example, openwebui-key.

    • Copy the API key (you can reset or delete it anytime).

  2. Paste API Key & URL in OpenWebUI

    • In OpenWebUI’s “Connections” or “Configure” panel, you’ll see fields for API URL and API key.

    • URL: https://router.requesty.ai/v1

    • API Key: Paste the key you just created in Requesty.

    • Save your settings.

With this done, OpenWebUI can tap into lots of models from multiple providers through Requesty’s router—no more juggling separate credentials.
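If you’d like to sanity-check the URL and key outside OpenWebUI first, here’s a minimal sketch using only Python’s standard library. It assumes the Requesty router exposes the usual OpenAI-compatible `/models` endpoint behind the base URL above; the placeholder key is hypothetical—substitute the key you just created.

```python
import json
import urllib.request

REQUESTY_BASE_URL = "https://router.requesty.ai/v1"   # same URL you pasted into OpenWebUI
API_KEY = "YOUR_REQUESTY_API_KEY"                     # placeholder: use your openwebui-key

def build_models_request() -> urllib.request.Request:
    """Build a GET request for the router's model list (OpenAI-style /models endpoint)."""
    return urllib.request.Request(
        f"{REQUESTY_BASE_URL}/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

# To actually run the check (requires a valid key and network access):
# with urllib.request.urlopen(build_models_request()) as resp:
#     models = json.load(resp)["data"]
#     print(f"{len(models)} models available")
```

If the request returns a long list of model IDs, your key and URL are good, and OpenWebUI will show the same models in its picker.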

5. Verifying the Integration

  1. Start a New Chat

    • In OpenWebUI, click “New Chat.”

    • You’ll notice you can now pick from many of the models that Requesty supports, including GPT-4, Claude, DeepSeek, and more.

  2. Test a Prompt

    • Try “Who was Napoleon?” with a model like openai/gpt-3.5-turbo or anthropic/claude-3-7.

    • Watch as the model responds via Requesty’s router, all inside OpenWebUI.
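The same test prompt can be sent to the router directly, which is handy when debugging. This is a minimal sketch assuming the standard OpenAI-style `/chat/completions` endpoint; the key is a hypothetical placeholder, and it uses only Python’s standard library.

```python
import json
import urllib.request

REQUESTY_BASE_URL = "https://router.requesty.ai/v1"   # same URL as in OpenWebUI
API_KEY = "YOUR_REQUESTY_API_KEY"                     # placeholder: use your own key

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the Requesty router."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{REQUESTY_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send the request (requires a valid key and network access):
# req = build_chat_request("openai/gpt-3.5-turbo", "Who was Napoleon?")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the router speaks the OpenAI API shape, any OpenAI-compatible client library should work the same way once you point it at the base URL.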

6. Monitoring Usage & Costs

Back in your Requesty dashboard:

  • Manage API Keys: Find the key you created for OpenWebUI.

  • Logs: You can view each prompt/response (unless you’ve disabled logging).

  • Usage & Costs: Track how many tokens you’ve used and how much you’ve spent, broken down by each provider.

You can also disable logs for privacy if needed. But many users find it helpful to see exactly what’s happening with each request, especially when testing lots of different models.

7. Wrap-Up

Connecting OpenWebUI with Requesty is a fast, flexible way to experiment with many providers and loads of models—a great alternative or supplement to OpenRouter. You’ll have:

  • A single endpoint for all your LLMs.

  • Transparent usage analytics and cost monitoring.

  • Quick fallback and load-balancing options if any single provider is slow or unavailable.

Ready to get started?

  • Head to app.requesty.ai to sign up or log in.

  • Configure OpenWebUI in minutes, and unlock 150+ AI models without ever leaving your browser.

For questions, join our Discord or check the Requesty Docs. We’re excited to see how you’ll use all these LLMs in OpenWebUI—enjoy exploring your options!


© Requesty Ltd 2025