DeepSeek-R1 + OpenWebUI + Requesty

Jan 21, 2025

DeepSeek-R1: Reinforcement Learning for Powerful Reasoning

DeepSeek-R1 is the new open-source Large Language Model (LLM) from DeepSeek-AI, further pushing the limits of reasoning-first models. It builds on DeepSeek-R1-Zero’s success in harnessing reinforcement learning (RL) to achieve advanced chain-of-thought capabilities. Now, DeepSeek-R1 refines these methods through multi-stage training and carefully curated data. The result is a model that excels at math, coding, knowledge-intensive question-answering, and more—ready to stand toe-to-toe with other top-tier solutions.

Why DeepSeek-R1?

  1. Pure RL Foundations

    • DeepSeek-R1-Zero proved that massive RL runs, even without supervised fine-tuning, can yield self-evolving chain-of-thought strategies.

    • DeepSeek-R1 refines this approach with a small “cold-start” dataset and multi-phase RL to optimize both reasoning accuracy and user alignment.

  2. Multi-Stage Reinforcement

    • Two RL phases to systematically enhance reasoning and alignment.

    • Additional supervised fine-tuning (SFT) phases to integrate broader capabilities: writing, factual QA, coding, etc.

  3. Reasoning Distillation

    • Distilled variants (from 1.5B up to 70B parameters) ensure smaller projects can also benefit from R1’s reasoning quality, balancing performance with resource costs.

  4. Consistent Accuracy & Readability

    • Achieves near state-of-the-art on competitive benchmarks (MMLU, GPQA, Codeforces).

    • Outputs are structured with a “reasoning process” and “answer” section, offering transparent chain-of-thought while keeping final answers concise and user-friendly.
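A minimal sketch of how a client application might separate those two sections, assuming the reasoning trace arrives wrapped in `<think>...</think>` tags (the exact delimiter, or whether reasoning is returned in a separate field, depends on how your provider exposes it):

```python
import re


def split_reasoning(raw_output: str) -> tuple[str, str]:
    """Split a DeepSeek-R1 style response into (reasoning, answer).

    Assumes the chain-of-thought is wrapped in <think>...</think> tags;
    adjust the pattern if your provider returns reasoning separately.
    """
    match = re.search(r"<think>(.*?)</think>", raw_output, flags=re.DOTALL)
    if not match:
        return "", raw_output.strip()          # no explicit reasoning block
    reasoning = match.group(1).strip()
    answer = raw_output[match.end():].strip()  # everything after the tags
    return reasoning, answer
```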

DeepSeek-R1 + OpenWebUI via Requesty Router

With Requesty Router, you can access DeepSeek-R1 (plus 50+ other LLMs) with just one API key. By integrating it into OpenWebUI, you get a clean, centralized interface to:

  • Compare DeepSeek-R1 to any other LLM (GPT-4, Claude, or your existing models).

  • Track all token usage and billing in one place, saving you from juggling multiple credentials.

  • Run parallel or split-screen chats to evaluate performance side-by-side.

Example Configuration Parameters

  • API Provider: OpenAI Compatible

  • Base URL: https://router.requesty.ai/v1

  • Model ID: cline/deepseek-reasoner
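Outside of OpenWebUI, the same parameters work with any OpenAI-compatible client. A minimal sketch using the `openai` Python package (the `REQUESTY_API_KEY` environment variable is an assumption; store your key however you prefer):

```python
import os

from openai import OpenAI

# Point the standard OpenAI client at Requesty Router instead of api.openai.com.
client = OpenAI(
    api_key=os.environ["REQUESTY_API_KEY"],    # your single Requesty key
    base_url="https://router.requesty.ai/v1",  # OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="cline/deepseek-reasoner",  # DeepSeek-R1 via the router
    messages=[{"role": "user", "content": "Explain the chain rule in one paragraph."}],
)
print(response.choices[0].message.content)
```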

Step-by-Step Setup in OpenWebUI

1. Go to Settings

Open your local OpenWebUI instance in your browser, typically at:


http://0.0.0.0:8080/

Click on Settings in the navigation bar.

2. Open Admin Settings

Navigate to:


http://0.0.0.0:8080/admin/settings

This is where you’ll set up your API connections and model endpoints.

3. Add an OpenAI-Compatible Connection

Click Add or New Connection. Even if it says “OpenAI,” you can use it to connect to any service that follows the OpenAI API structure—including Requesty Router.

4. Configure Your Requesty Router Details

  1. API Key: Paste your Requesty Router key—just one key for all your LLMs.

  2. Endpoint / Base URL: https://router.requesty.ai/v1

  3. Model ID: Add "cline/deepseek-reasoner:alpha" (or whichever model you want to default to).

  4. Save your settings.
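Before relying on the connection inside OpenWebUI, you can verify the key and base URL directly. A quick sketch, assuming the router follows the standard OpenAI-style model-listing route:

```python
import os

import requests

# List the models your Requesty key can access (OpenAI-compatible /models route).
resp = requests.get(
    "https://router.requesty.ai/v1/models",
    headers={"Authorization": f"Bearer {os.environ['REQUESTY_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model.get("id"))
```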

5. Auto-Load Models

After saving, OpenWebUI will fetch the available models from Requesty Router. You’ll see a list that includes:

  • DeepSeek-R1

  • GPT-4o

  • Claude

  • Phi-4
    …or any other model you have access to.

6. Start Chatting

  • Pick Your Model: From the dropdown in the main OpenWebUI chat interface, select DeepSeek-R1.

  • Compare Side-by-Side: Open new tabs for GPT-4, Claude, or whichever other models you want to compare.

  • Evaluate: Give them the same prompt (e.g., “Solve this complex algebraic equation”) and watch how each model’s reasoning differs.

How to Compare DeepSeek-R1 to Other Models in OpenWebUI

One of the best features of OpenWebUI is its split or parallel chat ability:

  1. Split Screen: Open two or more chat tabs side-by-side.

  2. Same Prompt, Different Models: Pose identical questions or coding tasks to DeepSeek-R1 and GPT-4, for instance, and see how each responds.

  3. Real-Time Feedback: Judge clarity, depth of reasoning, correctness, and token usage. You’ll know quickly which model excels at your specific challenge.
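If you prefer to script the comparison rather than use split-screen chats, a small sketch along these lines sends one prompt to two models through the router and prints both answers. The GPT-4o model ID shown is an assumption; copy the exact IDs from your Requesty model list:

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["REQUESTY_API_KEY"],
    base_url="https://router.requesty.ai/v1",
)

PROMPT = "Solve for x: 3x^2 - 12x + 9 = 0. Show your reasoning."

# Model IDs are assumptions; use the IDs that appear in your Requesty model list.
for model_id in ("cline/deepseek-reasoner", "openai/gpt-4o"):
    reply = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"=== {model_id} ===")
    print(reply.choices[0].message.content)
    print()
```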

Centralized Cost & Token Management

All model usage flows through Requesty Router, giving you:

  1. Unified Billing Dashboard: Track total usage for DeepSeek-R1, GPT-4, Claude, etc.—all in one place.

  2. Usage Alerts: Configure token or cost alerts so you never exceed your budget.

  3. Single Subscription: No more juggling different tiers and platform fees. Pay once, manage across dozens of models.

Real-World Benefits

  1. Reinforced Reasoning

    • DeepSeek-R1 uses advanced RL and chain-of-thought to tackle complex tasks—math, coding, research Q&A, and more.

  2. Productivity Multiplied

    • No toggling between separate tools or copying data from one interface to another. Everything sits neatly in OpenWebUI.

  3. Swift Evaluation

    • Instantly switch between models, run the same queries in parallel, and pick the best performer for your requirements.

  4. Streamlined Budgeting

    • Unified Requesty billing means you always know where your money is going and can easily switch to cheaper or more capable models at will.

Ready to Explore DeepSeek-R1 in OpenWebUI?

  1. Update or Install OpenWebUI

    • Make sure you’re on the latest build for the best features.

    • GitHub Project Link (if you need to clone or update directly).

  2. Get a Requesty Router Account

    • You’ll get a single API key to unlock DeepSeek-R1 plus 50+ other LLMs.

  3. Configure & Compare

    • Follow the steps above, set up your Key & Endpoint in OpenWebUI, and start exploring DeepSeek-R1’s advanced RL-driven capabilities.

  4. Share Your Findings

    • Whether you’re doing complex code generation or knowledge-based QA, compare how various models handle the same tasks and discover which is best for you.

DeepSeek-R1 supercharges your workflow with transparent, reinforced reasoning—perfect for coding, research, or any domain requiring rigorous problem-solving. Through OpenWebUI and Requesty Router, you’ll streamline your entire AI ecosystem, eliminating separate sign-ups and scattered billing. Give it a spin today to experience how Reinforcement Learning can elevate large language models to new heights of accuracy and clarity!
