---
id: finish-reason-april-2026
slug: finish-reason-mix-by-provider-april-2026
title: "finish_reason mix per provider, April 2026"
topic: agentic
period: Apr 2026
updated: 2026-05-09
license: CC BY 4.0
canonical: https://requesty.ai/data/finish-reason-mix-by-provider-april-2026
---

# finish_reason mix per provider, April 2026

> Which AI providers serve the most agentic traffic? In April 2026 Anthropic-direct returned `finish_reason = tool_calls` on 52% of successful completions on the Requesty gateway, about 2× the next provider and roughly 16× higher than OpenAI direct. OpenAI Responses (26%), Vertex (Claude) (23%) and Azure (23%) formed a clear second tier. Splitting Vertex into Gemini and Claude cohorts shows the gap inside that route: Vertex (Claude) 23% vs Vertex (Gemini) 13%.

*Topic: Agentic workloads. Period: Apr 2026. Last updated 2026-05-09.*

## Why it matters

`finish_reason = tool_calls` is the cleanest signal that a model was driving an agent loop rather than answering a chat prompt. Providers cluster into clear agentic and non-agentic tiers, which has direct implications for routing: sending agent traffic to a non-agentic provider often means shorter effective context windows and worse tool-call adherence, without users realising why their agent feels "dumber".
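As a minimal sketch of how this metric is computed: given gateway log rows with a success flag and an OpenAI-style `finish_reason` field (the field names here are illustrative, not the published log schema), the agentic share is the fraction of successful completions that ended in a tool call.

```python
from collections import Counter

def agentic_share(completions):
    """Fraction of successful completions ending in finish_reason='tool_calls'.

    `completions` is assumed to be an iterable of dicts with a `successful`
    flag and an OpenAI-style `finish_reason` (hypothetical log shape).
    """
    reasons = Counter(
        c.get("finish_reason") or "blank"   # NULL finish_reason bucketed as blank
        for c in completions
        if c.get("successful")
    )
    total = sum(reasons.values())
    return reasons["tool_calls"] / total if total else 0.0

logs = [
    {"successful": True, "finish_reason": "tool_calls"},
    {"successful": True, "finish_reason": "stop"},
    {"successful": False, "finish_reason": None},   # failures are excluded
    {"successful": True, "finish_reason": "tool_calls"},
]
print(agentic_share(logs))  # 2 of 3 successful rows → 0.6666666666666666
```

Blank (NULL) finish_reason rows on failed requests are excluded by the `successful` filter, which is why the blank/error column in the table below tracks route reliability rather than model behaviour.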

## Questions this answers

- Which LLM provider is best for agentic workloads?
- What share of LLM traffic uses tool calls in 2026?
- Which AI providers are best for AI agents?
- Why does Anthropic dominate agent traffic vs OpenAI?

## Key findings

1. Anthropic-direct: 52% tool_calls, the highest agentic share on the platform.
2. OpenAI Responses (26%), Vertex (Claude) (23%) and Azure (23%) form a clear second tier.
3. Vertex (Claude) at 23% versus Vertex (Gemini) at 13%: same provider routing, but the Claude cohort carries nearly double the agentic share.
4. OpenAI direct is at 3% tool_calls, roughly 16× lower than Anthropic-direct.
5. Bedrock Claude (7%) versus Anthropic-direct Claude (52%): same model, very different workload mix.
6. A blank (NULL) finish_reason correlates with `successful = false`; Moonshot's 94% blank share is a reliability outlier on that route.

## Data

| Provider | tool_calls | stop | length | blank/error |
| --- | --- | --- | --- | --- |
| Anthropic | 52.20% | 42.60% | 1.40% | 3.80% |
| OpenAI Responses | 25.90% | 71.00% | 1.00% | 2.10% |
| Vertex (Claude) | 23.40% | 56.20% | 5.30% | 15.10% |
| Azure | 22.60% | 57.50% | 0.40% | 19.50% |
| Vertex (Gemini) | 13.50% | 79.00% | 3.30% | 4.20% |
| Bedrock | 6.70% | 88.50% | 0.50% | 4.30% |
| Moonshot | 4.60% | 1.40% | 0.10% | 93.90% |
| OpenAI | 3.30% | 94.20% | 0.60% | 1.90% |
| xAI | 2.90% | 96.20% | 0.20% | 0.70% |
| DeepSeek | 1.50% | 94.50% | 2.20% | 1.80% |
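As a sanity check, the four outcome columns for each provider should sum to 100%. The sketch below re-keys the table inline (the column names are this page's, not a published CSV schema) and verifies that.

```python
import csv
import io

# Inline copy of the table above; the downloadable CSV is assumed to carry
# the same four outcome columns (an assumption, not a documented schema).
DATA = """provider,tool_calls,stop,length,blank
Anthropic,52.20,42.60,1.40,3.80
OpenAI Responses,25.90,71.00,1.00,2.10
Vertex (Claude),23.40,56.20,5.30,15.10
Azure,22.60,57.50,0.40,19.50
Vertex (Gemini),13.50,79.00,3.30,4.20
Bedrock,6.70,88.50,0.50,4.30
Moonshot,4.60,1.40,0.10,93.90
OpenAI,3.30,94.20,0.60,1.90
xAI,2.90,96.20,0.20,0.70
DeepSeek,1.50,94.50,2.20,1.80
"""

for row in csv.DictReader(io.StringIO(DATA)):
    total = sum(float(row[k]) for k in ("tool_calls", "stop", "length", "blank"))
    # Each row should account for all outcomes to within rounding.
    assert abs(total - 100.0) < 0.5, (row["provider"], total)
```

Every row in the April 2026 table sums to 100.0 exactly, so the four finish_reason buckets partition all completions on each route.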

## Caveats

- Apr 2026 only. finish_reason was not populated for any 2025 row.
- Moonshot 94% blank/error is a reliability problem, not a labeling artefact (success rate 6.2%).

## Cite as

**APA.** Requesty (2026). finish_reason mix per provider, April 2026. Requesty Data. https://requesty.ai/data/finish-reason-mix-by-provider-april-2026

```bibtex
@misc{requesty_finish_reason_mix_by_provider_april_2026,
  author       = {{Requesty}},
  title        = {finish\_reason mix per provider, April 2026},
  year         = {2026},
  howpublished = {\url{https://requesty.ai/data/finish-reason-mix-by-provider-april-2026}},
  note         = {Requesty Data}
}
```

## Cited in

- [What the gateway saw in April 2026](https://requesty.ai/blog/provider-trends-april-2026-agentic-share-latency)

---

Downloads: [JSON](https://requesty.ai/data/finish-reason-mix-by-provider-april-2026/data.json) · [CSV](https://requesty.ai/data/finish-reason-mix-by-provider-april-2026/data.csv) · [Markdown](https://requesty.ai/data/finish-reason-mix-by-provider-april-2026/data.md)