DeepSeek + OpenWebUI

Jan 15, 2025

Try the Requesty Router and get $6 free credits 🔀

Ever wish you could chat with multiple AI models—all from one interface—without juggling multiple accounts, keys, or rate limits? Meet DeepSeek and OpenWebUI, supercharged by Requesty Router. This combo lets you seamlessly switch between DeepSeek and dozens of other language models within the same interface, making it easier than ever to compare performance and manage costs.

Why DeepSeek + OpenWebUI?

DeepSeek is a powerful, highly-tuned LLM capable of advanced reasoning, code generation, and multi-domain conversation. OpenWebUI offers a sleek, web-based UI where you can run multiple chat sessions side by side to see how each model handles the same prompts. When you add Requesty Router to the mix, you get:

  1. Single API Key for Multiple Models
    No more “key chaos.” Requesty Router aggregates 50+ LLMs—including DeepSeek—under a single API credential.

  2. Effortless Model Comparison
    Spin up parallel chats in OpenWebUI, instantly switching between DeepSeek, GPT-4, Claude, or any other model. See how each handles the same query or coding problem.

  3. Centralized Cost Management
    Track all your LLM usage (input/output tokens, total tokens, monthly billing) from one place, making it easy to stay on budget.

Step-by-Step Setup in OpenWebUI

Getting DeepSeek (and all your other models) up and running in OpenWebUI is straightforward:

1. Go to Settings

Open your web browser and go to your local OpenWebUI instance. Usually, that’s:

http://localhost:8080/

Locate and click on Settings.

2. Open Admin Settings

Navigate to:

http://localhost:8080/admin/settings

This is where you’ll configure your API connections.

3. Add OpenAI API Connection

Click Add or New Connection (the wording might vary depending on your OpenWebUI version). Even though it’s labeled “OpenAI,” it’s compatible with any API that follows the OpenAI-style protocol—including Requesty Router.
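Under the hood, "OpenAI-style" just means the connection speaks the same request shape as OpenAI's chat-completions API. As a minimal sketch (the model identifier below is a placeholder; use whatever names your router actually exposes):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body.

    Any OpenAI-compatible gateway (Requesty Router included) accepts
    this shape, which is why OpenWebUI's "OpenAI" connection works.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# "deepseek-chat" is illustrative; check your router's model list
# for the exact identifiers it serves.
body = build_chat_request("deepseek-chat", "Hello!")
print(json.dumps(body, indent=2))
```

OpenWebUI builds and sends this payload for you; the sketch is only to show why one connection type can front many different providers.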

4. Configure Using Requesty Router

  1. Paste Your Requesty API Key: You only need this one key to access DeepSeek and dozens of other models.

  2. Set Endpoint: Point to the Requesty Router endpoint. (Check your Requesty documentation for the exact URL.)

  3. Save Settings.

5. Automatic Model Loading

Once you’ve saved your Requesty config, OpenWebUI will automatically load all available models from the router—DeepSeek, GPT-4, Claude, Phi-4, you name it!
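The model dropdown is populated from the standard OpenAI-style model-listing endpoint. A small sketch of what that discovery step looks like (the sample payload and model IDs are illustrative; the real list depends on your router account):

```python
def model_ids(models_response: dict) -> list[str]:
    """Extract model identifiers from an OpenAI-style /v1/models payload."""
    return sorted(item["id"] for item in models_response.get("data", []))

# Illustrative payload in the shape an OpenAI-style endpoint returns.
sample = {
    "object": "list",
    "data": [
        {"id": "deepseek-chat", "object": "model"},
        {"id": "gpt-4", "object": "model"},
    ],
}
print(model_ids(sample))  # → ['deepseek-chat', 'gpt-4']
```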

6. Start Chatting

  • Pick Your Model: In the OpenWebUI chat panel, choose DeepSeek from the dropdown.

  • Compare Side-by-Side: Open a second chat tab with GPT-4, a third with Claude, etc.

  • Observe Outputs: See how each model tackles the same question or code snippet.

How to Compare DeepSeek to Other Models

One of the standout features of OpenWebUI is split or parallel chat:

  1. Split Screen: Run two chats side-by-side.

  2. Same Prompt, Different Model: Ask each model the same question—for instance, “Generate a Python function to parse a CSV file”—and watch how outputs differ.

  3. Immediate Feedback: Evaluate answer quality, style, correctness, and token usage in real time.
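If you want to go beyond eyeballing the split screen, the same comparison can be scripted. A sketch, assuming OpenAI-style response dicts (the sample response below is illustrative, not real router output):

```python
def summarize(model: str, response: dict) -> dict:
    """Pull the answer text and token counts out of an OpenAI-style
    chat-completion response, so two models can be compared at a glance."""
    usage = response.get("usage", {})
    return {
        "model": model,
        "answer": response["choices"][0]["message"]["content"],
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
    }

# Illustrative response; a real one comes back from the router.
deepseek_resp = {
    "choices": [{"message": {"content": "def parse_csv(path): ..."}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 40},
}
print(summarize("deepseek-chat", deepseek_resp))
```

Run the same prompt against each model, collect one summary per response, and you have a side-by-side table of answers and token costs.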

This is unbeatable for quickly figuring out which model excels at your specific tasks, whether it’s summarizing research papers, debugging code, or brainstorming marketing copy.

Centralized Cost Management

Because everything flows through Requesty Router, you get:

  • One Billing Dashboard: All usage is aggregated—DeepSeek, GPT-4, Claude, etc.

  • Real-Time Alerts: Set cost or token usage thresholds to avoid unwanted surprises.

  • Simple Budgeting: No more cross-referencing separate invoices or dealing with multiple subscription tiers.
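The budgeting math itself is simple once all usage flows through one place. A minimal sketch, assuming you have per-call token counts and a per-model price table (the rates below are placeholders; use your router's actual pricing):

```python
def total_cost(calls: list[dict], prices: dict[str, tuple[float, float]]) -> float:
    """Sum spend across calls.

    calls  : [{"model", "prompt_tokens", "completion_tokens"}, ...]
    prices : model -> (input $/1K tokens, output $/1K tokens)
    """
    cost = 0.0
    for c in calls:
        inp, out = prices[c["model"]]
        cost += c["prompt_tokens"] / 1000 * inp
        cost += c["completion_tokens"] / 1000 * out
    return round(cost, 6)

# Placeholder rates for illustration only.
prices = {"deepseek-chat": (0.001, 0.002)}
calls = [{"model": "deepseek-chat", "prompt_tokens": 500, "completion_tokens": 1000}]
print(total_cost(calls, prices))  # 0.5 * 0.001 + 1.0 * 0.002 = 0.0025
```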

Real-World Benefits

1. Productivity on Steroids

Flip between DeepSeek and other models seamlessly, picking the best for each task—no more copy/pasting between different platforms.

2. Unified Interface

OpenWebUI keeps everything in a single browser tab, so your workflow remains clutter-free and you can focus on creating great content or code.

3. Precise Model Evaluation

Need to see which model handles a tricky medical Q&A better? Or which nails a legal contract summary? Run both side-by-side and judge for yourself.

4. Cost Transparency & Control

With Requesty Router’s unified billing, you’ll always know exactly how much you’re spending—even if you’re juggling five different models in one day.

Ready to Dive In?

Follow the steps above to add Requesty Router and DeepSeek to your existing OpenWebUI installation. In just a few minutes, you’ll have a robust, multi-model AI playground at your fingertips.

  1. Explore OpenWebUI – If you haven’t installed it yet, download or update to the latest version.

  2. Sign up for Requesty Router – Grab your single API key and unlock 50+ models!

  3. Configure & Compare – Enjoy easy side-by-side chats and centralized cost management.

DeepSeek + OpenWebUI is the perfect duo for maximizing productivity, controlling costs, and discovering which AI model truly shines for your unique needs. Give it a spin—you’ll never go back to juggling separate keys and windows again!

Try the Requesty Router and get free credits 🔀


© Requesty Ltd 2025