MiniMax

Creator of the MiniMax M-series models built for real-world agentic productivity. Requesty routes to 5 MiniMax models starting at $0.30 per 1M input tokens with context windows up to 200K tokens. One API key, OpenAI-compatible SDK, no markup.
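Because the router is OpenAI-compatible, any OpenAI-style client can target it. Below is a minimal sketch using only the Python standard library; the base URL (`https://router.requesty.ai/v1`) and model slug (`minimax/MiniMax-M2`) are assumptions for illustration, so confirm both in your Requesty dashboard before use.

```python
# Minimal sketch of an OpenAI-style chat completion request routed
# through Requesty. Stdlib only; the base URL and model slug are
# assumptions -- verify both against your Requesty dashboard.
import json
import os
import urllib.request

BASE_URL = "https://router.requesty.ai/v1"   # assumed router endpoint
MODEL = "minimax/MiniMax-M2"                 # assumed model slug

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat request."""
    payload = {"model": MODEL, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('REQUESTY_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def send(req: urllib.request.Request) -> str:
    """POST the request and return the assistant's reply text."""
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

req = build_chat_request("Write a one-line haiku about routing.")
print(req.full_url)  # https://router.requesty.ai/v1/chat/completions
```

Switching to a different MiniMax model is just a change to `MODEL`; the request shape stays the same.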

MMLU Pro: 79.2%
GPQA: 62.5%
HumanEval: 88.1%
SWE-Bench: 69.3%

All MiniMax models

| Model | Context | Max Output | Input/1M | Output/1M | Capabilities | SWE-Bench |
|---|---|---|---|---|---|---|
| MiniMax-M2.7 | 200K | 128K | $0.30 | $1.20 | 👁 🧠 🔧 ⚡ | — |
| MiniMax-M2.7-highspeed | 200K | 128K | $0.60 | $2.40 | 👁 🧠 🔧 ⚡ | — |
| MiniMax-M2.5-highspeed | 200K | 128K | $0.60 | $2.40 | 👁 🧠 🔧 ⚡ | — |
| MiniMax-M2.5 | 200K | 128K | $0.30 | $1.20 | 👁 🧠 🔧 ⚡ | — |
| MiniMax-M2 | 200K | 128K | $0.30 | $1.20 | 🔧 | 69% |
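The listed rates make per-request cost easy to estimate. Here is a small sketch using the $0.30 / $1.20 per-1M rates from the MiniMax-M2 row (hardcoded for illustration; actual billing follows the live pricing synced from the provider):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float = 0.30, output_per_m: float = 1.20) -> float:
    """Estimate one request's cost in USD at per-1M-token rates."""
    return input_tokens / 1e6 * input_per_m + output_tokens / 1e6 * output_per_m

# A 10K-token prompt with a 2K-token reply costs about half a cent:
print(round(request_cost(10_000, 2_000), 4))  # 0.0054
```

Pass the higher rates from a `-highspeed` row as `input_per_m` / `output_per_m` to compare tiers.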

About MiniMax on Requesty

How many MiniMax models are available through Requesty?
Requesty routes to 5 MiniMax models, including high-speed variants, with pricing synced in real time to the upstream provider.
What is the cheapest MiniMax model?
The cheapest MiniMax models start at $0.30 per million input tokens. See the Input/1M column in the table above for full per-model rates.
Does Requesty add markup on MiniMax pricing?
No. Requesty passes through exactly what MiniMax charges. You pay the same per-token rates as going direct, and you also get smart routing, caching, analytics, and one unified API for 400+ models.
Is my data used to train MiniMax models?
MiniMax's training policy varies by product and tier. See their privacy policy for specifics, and contact Requesty for enterprise-grade data controls.
Where are MiniMax models hosted?
MiniMax models are hosted in 🇸🇬 Singapore. Some models are available in additional regions through AWS Bedrock, Azure, or Google Vertex AI; filter by region on the MiniMax rows in the models explorer.