Best Claude Haiku 4.5 Alternatives
Claude Haiku 4.5 by Anthropic is a budget model priced at $0.80 per 1M input tokens and $4.00 per 1M output tokens. Looking for a better deal or different capabilities? Here are the best options.
Claude Haiku 4.5 (Anthropic, budget tier)
- Input: $0.8/1M
- Output: $4/1M
- Context: 200K
- Max output: 8K
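Per-token prices like these translate into per-request cost by weighting input and output tokens separately. A minimal sketch, using the Haiku 4.5 rates quoted above (the function name and example token counts are illustrative, not from any provider SDK):

```python
# Token-cost calculator: a minimal sketch. Prices are taken from this page;
# verify current rates with each provider before relying on them.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Cost in USD for one request, given prices in $ per 1M tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Claude Haiku 4.5: $0.80 in / $4.00 out per 1M tokens.
cost = request_cost(10_000, 2_000, in_price=0.80, out_price=4.00)
print(f"${cost:.4f}")  # prints $0.0160 for a 10k-in / 2k-out request
```

Because output tokens cost 5x input tokens here, output-heavy workloads (long generations, chat) are far more sensitive to the output rate than to the input rate.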
Why Switch from Claude Haiku 4.5?
Top Alternatives
Gemini 3 Flash (Google)
27% cheaper, higher benchmark scores, 1M context (5x more).
- Input: $0.5/1M (38% cheaper)
- Output: $3/1M (25% cheaper)
- Context: 1M
- Max output: 66K
Mistral Medium 3 (Mistral)
50% cheaper, higher benchmark scores.
- Input: $0.4/1M (50% cheaper)
- Output: $2/1M (50% cheaper)
- Context: 128K
- Max output: 16K
Gemini 2.5 Flash (Google)
Dramatically cheaper (84% less), higher benchmark scores, 1M context (5x more).
- Input: $0.15/1M (81% cheaper)
- Output: $0.6/1M (85% cheaper)
- Context: 1M
- Max output: 66K
GPT-4o Mini (OpenAI)
Dramatically cheaper (84% less), comparable performance.
- Input: $0.15/1M (81% cheaper)
- Output: $0.6/1M (85% cheaper)
- Context: 128K
- Max output: 16K
o3 (OpenAI)
Dramatically cheaper (58% less), higher benchmark scores, 100K max output.
- Input: $0.4/1M (50% cheaper)
- Output: $1.6/1M (60% cheaper)
- Context: 200K
- Max output: 100K
GLM-4.7 (Zhipu AI)
42% cheaper, higher benchmark scores, 128K max output.
- Input: $0.6/1M (25% cheaper)
- Output: $2.2/1M (45% cheaper)
- Context: 200K
- Max output: 128K
GLM-5 (Zhipu AI)
12% cheaper, comparable performance, 128K max output.
- Input: $1/1M (25% more)
- Output: $3.2/1M (20% cheaper)
- Context: 200K
- Max output: 128K
o4-mini (OpenAI)
Higher benchmark scores, 100K max output, adds reasoning.
- Input: $1.1/1M (38% more)
- Output: $4.4/1M (10% more)
- Context: 200K
- Max output: 100K
Full Comparison Table
Percentages in parentheses are relative to Claude Haiku 4.5 ($0.80 in / $4.00 out per 1M tokens); benchmark deltas are percentage-point differences.

| Model | Vendor | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
|---|---|---|---|---|---|---|---|
| Gemini 3 Flash | Google | $0.50 (38% cheaper) | $3.00 (25% cheaper) | 1M | 78% (+8.6%) | 90% (+1.9%) | 94 |
| Mistral Medium 3 | Mistral | $0.40 (50% cheaper) | $2.00 (50% cheaper) | 128K | 76% (+6.6%) | 87% (−1.1%) | 85 |
| Gemini 2.5 Flash | Google | $0.15 (81% cheaper) | $0.60 (85% cheaper) | 1M | 76% (+6.6%) | 89.5% (+1.4%) | 84 |
| GPT-4o Mini | OpenAI | $0.15 (81% cheaper) | $0.60 (85% cheaper) | 128K | 68% (−1.4%) | 87.2% (−0.9%) | 83 |
| o3 | OpenAI | $0.40 (50% cheaper) | $1.60 (60% cheaper) | 200K | 87% (+17.6%) | 94.5% (+6.4%) | 79 |
| GLM-4.7 | Zhipu AI | $0.60 (25% cheaper) | $2.20 (45% cheaper) | 200K | 84.3% (+14.9%) | — | 73 |
| GLM-5 | Zhipu AI | $1.00 (25% more) | $3.20 (20% cheaper) | 200K | 70.4% (+1.0%) | 91% (+2.9%) | 72 |
| o4-mini | OpenAI | $1.10 (38% more) | $4.40 (10% more) | 200K | 85% (+15.6%) | 93.5% (+5.4%) | 69 |
| Mistral Large 3 | Mistral | $2.00 (150% more) | $5.00 (25% more) | 128K | 83% (+13.6%) | 91% (+2.9%) | 69 |
| Llama 4 Maverick | Meta | $0.31 (61% cheaper) | $0.85 (79% cheaper) | 1M | 80.5% (+11.1%) | 90.2% (+2.1%) | 68 |
| Llama 4 Scout | Meta | $0.18 (78% cheaper) | $0.63 (84% cheaper) | 10M | 74.2% (+4.8%) | 86% (−2.1%) | 68 |
| DeepSeek R1 | DeepSeek | $0.55 (31% cheaper) | $2.19 (45% cheaper) | 128K | 84% (+14.6%) | 92% (+3.9%) | 67 |
| GPT-5 | OpenAI | $1.25 (56% more) | $10.00 (150% more) | 128K | 88.5% (+19.1%) | 95% (+6.9%) | 65 |
| Gemini 3.1 Pro | Google | $2.00 (150% more) | $12.00 (200% more) | 1M | 91% (+21.6%) | 95% (+6.9%) | 65 |
| Gemini 3 Pro | Google | $2.00 (150% more) | $12.00 (200% more) | 1M | 89.8% (+20.4%) | 94% (+5.9%) | 65 |
| Gemini 2.5 Pro | Google | $1.25 (56% more) | $10.00 (150% more) | 1M | 87.5% (+18.1%) | 93.5% (+5.4%) | 65 |
| GPT-4o | OpenAI | $2.50 (213% more) | $10.00 (150% more) | 128K | 80.5% (+11.1%) | 91% (+2.9%) | 62 |
| Claude Opus 4.6 | Anthropic | $5.00 (525% more) | $25.00 (525% more) | 200K | 89.5% (+20.1%) | 95% (+6.9%) | 59 |
| Claude Sonnet 4.6 | Anthropic | $3.00 (275% more) | $15.00 (275% more) | 200K | 86% (+16.6%) | 94% (+5.9%) | 59 |
| Claude Sonnet 4.5 | Anthropic | $3.00 (275% more) | $15.00 (275% more) | 200K | 84.5% (+15.1%) | 93% (+4.9%) | 59 |
| GPT-5.3 Codex | OpenAI | $2.00 (150% more) | $16.00 (300% more) | 200K | 90% (+20.6%) | 96.5% (+8.4%) | 59 |
| GPT-5.2 Codex | OpenAI | $1.75 (119% more) | $14.00 (250% more) | 200K | 89% (+19.6%) | 95.5% (+7.4%) | 59 |
| MiniMax M2.5 | MiniMax | $0.30 (63% cheaper) | $1.20 (70% cheaper) | 200K | 82% (+12.6%) | 90% (+1.9%) | 57 |
| Grok 4 | xAI | $3.00 (275% more) | $15.00 (275% more) | 128K | 86% (+16.6%) | 93% (+4.9%) | 55 |
| DeepSeek V3 | DeepSeek | $0.14 (83% cheaper) | $0.28 (93% cheaper) | 164K | 78% (+8.6%) | 89% (+0.9%) | 47 |
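The "% cheaper / % more" figures in the table are plain relative differences against the Haiku 4.5 baseline prices. A short sketch of that arithmetic (the `delta` helper is illustrative, not part of any pricing API):

```python
# How the price deltas in the table are derived: relative difference of each
# price against the Claude Haiku 4.5 baseline ($0.80 in / $4.00 out per 1M).

HAIKU_IN, HAIKU_OUT = 0.80, 4.00  # $/1M tokens, from this page

def delta(price: float, baseline: float) -> str:
    """Format a price as '% cheaper' or '% more' versus a baseline price."""
    pct = (price - baseline) / baseline * 100
    return f"{abs(pct):.0f}% {'more' if pct > 0 else 'cheaper'}"

print(delta(0.50, HAIKU_IN))    # Gemini 3 Flash input   -> 38% cheaper
print(delta(25.00, HAIKU_OUT))  # Claude Opus 4.6 output -> 525% more
```

Note that input and output deltas usually differ (e.g. GLM-5 is 25% more on input but 20% cheaper on output), so a single "X% cheaper" headline depends on your workload's input/output token mix.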