
gpt-4.1-nano

For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT‑4.1 series. Despite its small size, it delivers exceptional performance with a 1 million token context window, scoring 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding, all higher than GPT‑4o mini. It's ideal for tasks like classification or autocompletion.

Vision · Reasoning · Tool calling · Caching

Specifications

Context window: 1.0M tokens
Max output: 33K tokens
API type: chat
Added: Apr 14, 2025
Model ID: azure/gpt-4.1-nano@uksouth
Data retention: No
Used for training: No
Provider location: 🇺🇸 US / 🇪🇺 EU

Benchmarks

Released 2025-04

SWE-Bench Verified (coding): 42.5%

Resolving real GitHub issues from 12 popular Python repositories.

GPQA Diamond (reasoning): 47.3%

Graduate-level physics, chemistry & biology questions designed to resist Googling.

MMLU Pro (knowledge): 68.4%

A harder, reasoning-heavy extension of the Massive Multitask Language Understanding benchmark, spanning 14 academic disciplines.

Scores are sourced from official model cards, Artificial Analysis, and public leaderboards. Benchmarks measure specific skills and do not capture every aspect of model quality — always test on your own workload.

Pricing

Input / 1M: $0.10
Output / 1M: $0.40
Cache write / 1M: $0.10
Cache read / 1M: $0.02

Estimated cost
100K input + 10K output: $0.0140
1M input + 100K output: $0.14
10M input + 1M output: $1.40
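
The estimated-cost rows follow directly from the per-million rates above. A minimal Python sketch of the arithmetic (token counts are illustrative and cache discounts are ignored):

# Per-million-token rates from the pricing table above (USD).
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    # Linear per-token billing; no cache or routing discounts applied.
    return (input_tokens / 1_000_000) * INPUT_PER_M + (output_tokens / 1_000_000) * OUTPUT_PER_M

print(round(estimate_cost(100_000, 10_000), 4))       # 0.014
print(round(estimate_cost(1_000_000, 100_000), 4))    # 0.14
print(round(estimate_cost(10_000_000, 1_000_000), 4)) # 1.4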

Requesty charges exactly what the upstream provider charges — no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to azure/gpt-4.1-nano@uksouth.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="azure/gpt-4.1-nano@uksouth",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
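
The same setup also covers tool calling. A minimal sketch, assuming the deployment passes OpenAI-style tool definitions through unchanged as the capability badges indicate; the get_weather function below is hypothetical and only illustrates the schema:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition; replace with your own function schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="azure/gpt-4.1-nano@uksouth",
    messages=[{"role": "user", "content": "What's the weather in London right now?"}],
    tools=tools,
)

# If the model decides to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)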


Frequently asked questions

How much does gpt-4.1-nano cost?
gpt-4.1-nano is priced at $0.10 per million input tokens and $0.40 per million output tokens when accessed via Requesty. Prompt caching is supported: cache reads are billed at $0.02 per million tokens, which can cut effective input cost by up to 80% on repeated context. Requesty charges exactly what the upstream provider charges, with no added markup.
What is the context window of gpt-4.1-nano?
gpt-4.1-nano has a context window of 1.0M tokens, with a maximum output of 33K tokens per response. That's roughly 750,000 words of input you can fit in a single prompt.
How does gpt-4.1-nano perform on benchmarks?
gpt-4.1-nano scores 68.4% on MMLU Pro, 47.3% on GPQA Diamond, and 42.5% on SWE-Bench Verified, and also reports 79.1% on HumanEval and 73.6% on MATH. See the benchmark section above for details.
What can gpt-4.1-nano do?
gpt-4.1-nano supports vision input, tool calling, extended reasoning, and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url to Requesty; a short vision-input sketch follows this FAQ.
How do I use gpt-4.1-nano with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "azure/gpt-4.1-nano@uksouth". The Quickstart above shows a working Python snippet.
What region is this deployment?
This variant of gpt-4.1-nano is deployed in uksouth. Region-specific endpoints matter for data residency, latency to your users, and compliance requirements (GDPR, HIPAA). Other regions for the same model may be listed on the Microsoft Azure AI provider page.
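
As noted in the capabilities answer above, the model accepts image input through the standard OpenAI content-parts format. A minimal sketch; the image URL is a placeholder:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="azure/gpt-4.1-nano@uksouth",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                # Placeholder URL; any publicly reachable image or a base64 data URL also works.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)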

Access gpt-4.1-nano through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.