grok-4 vs kimi-k2
Side-by-side comparison of grok-4 and kimi-k2 — benchmarks, pricing, context window and capabilities. Both are accessible through Requesty's unified API. grok-4 outperforms kimi-k2 on 7 of 7 shared benchmarks.
grok-4 (xAI Corp.)
- Input / 1M: $3.00
- Output / 1M: $15.00
- Context: 256K
- Model ID: xai/grok-4

kimi-k2 (Google LLC, Vertex AI)
- Input / 1M: $0.60
- Output / 1M: $2.50
- Context: 262K
- Model ID: vertex/kimi-k2
Benchmark comparison
| Benchmark | Category | grok-4 | kimi-k2 |
|---|---|---|---|
| MMLU Pro | knowledge | 87.5% | 82.3% |
| GPQA Diamond | reasoning | 87.5% | 70.0% |
| HumanEval | coding | 93.8% | 89.9% |
| SWE-Bench Verified | coding | 72.5% | 65.8% |
| MATH | math | 94.1% | 89.2% |
| AIME 2024 | math | 90.1% | 80.1% |
| MMMU | multimodal | 77.9% | — |
| LiveBench | reasoning | 75.4% | 68.3% |
Scores sourced from official model cards, Artificial Analysis, and public leaderboards. Benchmarks measure specific skills and don't capture every aspect of model quality.
Pricing & specifications
| | grok-4 | kimi-k2 |
|---|---|---|
| Input price / 1M | $3.00 | $0.60 |
| Output price / 1M | $15.00 | $2.50 |
| Context window | 256K tokens | 262K tokens |
| Max output | — | 262K tokens |
| Vision input | Yes | Yes |
| Tool calling | Yes | Yes |
| Reasoning | — | Yes |
| Prompt caching | Yes | Yes |
| Computer use | Yes | — |
| Provider | xAI Corp. | Google LLC (Vertex AI) |
Questions people ask
Is grok-4 better than kimi-k2?
On these benchmarks, yes: grok-4 outperforms kimi-k2 on all 7 shared benchmarks (MMMU has no published kimi-k2 score), with the widest margin on GPQA Diamond (87.5% vs 70.0%). Benchmarks measure specific skills, though, so weigh grok-4's performance lead against kimi-k2's substantially lower price.
Which is cheaper — grok-4 or kimi-k2?
kimi-k2 is significantly cheaper: $0.60/$2.50 per 1M input/output tokens versus $3.00/$15.00 for grok-4, i.e. 5x less on input and 6x less on output.
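To make the difference concrete, here is a short cost estimate for a hypothetical workload of 50M input and 10M output tokens per month. The workload numbers are made up for illustration; only the per-token prices come from the table above:

```python
# Published per-1M-token prices in USD (from the pricing table above)
PRICES = {
    "xai/grok-4":     {"input": 3.00, "output": 15.00},
    "vertex/kimi-k2": {"input": 0.60, "output": 2.50},
}

def monthly_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    """Estimate the USD cost for a given monthly token volume."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# Hypothetical workload: 50M input tokens, 10M output tokens per month
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50e6, 10e6):,.2f}")
# xai/grok-4:     50 * 3.00 + 10 * 15.00 = $300.00
# vertex/kimi-k2: 50 * 0.60 + 10 *  2.50 = $55.00
```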
Can I use grok-4 and kimi-k2 through the same API?
Yes. Requesty provides a single OpenAI-compatible API that routes to both. Change just the "model" parameter, either "xai/grok-4" or "vertex/kimi-k2", to switch; no other code changes are needed, as the sketch below shows.
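A minimal sketch of that switch using the official openai Python client. The base URL and the REQUESTY_API_KEY variable name are assumptions; confirm the exact values in Requesty's documentation:

```python
import os
from openai import OpenAI

# Base URL is an assumption -- check Requesty's docs for the exact endpoint.
client = OpenAI(
    base_url="https://router.requesty.ai/v1",
    api_key=os.environ["REQUESTY_API_KEY"],  # hypothetical env var name
)

for model in ("xai/grok-4", "vertex/kimi-k2"):
    resp = client.chat.completions.create(
        model=model,  # the only parameter that changes between models
        messages=[{"role": "user", "content": "In one sentence, what is prompt caching?"}],
    )
    print(f"{model}: {resp.choices[0].message.content}")
```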
What are the context windows?
grok-4 supports up to 256K tokens of context. kimi-k2 supports up to 262K tokens. Longer context means you can feed larger documents or codebases in a single prompt, though quality often degrades past 128K for most models.
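If you want a rough pre-flight check before sending a large document, a sketch like the one below works. Two caveats: neither model publishes a tiktoken encoding, so cl100k_base counts are only an approximation, and the listed limits (256K and 262K) are treated here as round thousands, which may differ slightly from the exact values:

```python
import tiktoken

# cl100k_base is only a proxy; grok-4 and kimi-k2 use their own tokenizers.
enc = tiktoken.get_encoding("cl100k_base")

# Approximate context limits from the table above (exact values may differ).
LIMITS = {"xai/grok-4": 256_000, "vertex/kimi-k2": 262_000}

def fits(model: str, text: str, reserve_for_output: int = 4_000) -> bool:
    """True if the prompt plausibly fits, leaving headroom for the reply."""
    return len(enc.encode(text)) + reserve_for_output <= LIMITS[model]

with open("large_document.txt") as f:
    doc = f.read()

for model in LIMITS:
    print(model, "fits" if fits(model, doc) else "too large")
```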
Switch between grok-4 and kimi-k2 with one line of code
Requesty provides a single OpenAI-compatible API for 400+ models. Change the model parameter, not your code.
Get started free