Best Llama 4 Maverick Alternatives
Llama 4 Maverick by Meta is an open-source model priced at $0.31 (input) and $0.85 (output) per 1M tokens. It's already affordable, but you might want different strengths or features, such as a larger context window, lower prices, or reasoning and tool-use support.
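Throughout this page, "X% cheaper" and "X% more" figures are relative to Maverick's base rates. A minimal sketch of how those deltas can be derived (the function name is illustrative, not from any library):

```python
# Sketch: deriving the "X% cheaper / X% more" labels used on this page
# from Llama 4 Maverick's base rates ($0.31 in / $0.85 out per 1M tokens).

MAVERICK_IN, MAVERICK_OUT = 0.31, 0.85  # $/1M tokens

def delta_vs_maverick(price: float, base: float) -> str:
    """Return a rounded percentage-difference label relative to Maverick."""
    pct = round((price - base) / base * 100)
    if pct < 0:
        return f"{abs(pct)}% cheaper"
    return f"{pct}% more" if pct > 0 else "same"

# Llama 4 Scout ($0.18 in, $0.63 out):
print(delta_vs_maverick(0.18, MAVERICK_IN))   # → "42% cheaper"
print(delta_vs_maverick(0.63, MAVERICK_OUT))  # → "26% cheaper"
```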
Llama 4 Maverick (Meta, open source)
- Input: $0.31/1M
- Output: $0.85/1M
- Context: 1M
- Max Output: 32K
Top Alternatives
Llama 4 Scout (Meta)
30% cheaper, 10M context (10x more), open source and self-hostable.
- Input: $0.18/1M (42% cheaper)
- Output: $0.63/1M (26% cheaper)
- Context: 10M
- Max Output: 32K
DeepSeek V3 (DeepSeek)
Dramatically cheaper (64% less), comparable performance, adds reasoning.
- Input: $0.14/1M (55% cheaper)
- Output: $0.28/1M (67% cheaper)
- Context: 164K
- Max Output: 16K
MiniMax M2.5 (MiniMax)
Comparable performance, 128K max output, adds reasoning.
- Input: $0.30/1M (3% cheaper)
- Output: $1.20/1M (41% more)
- Context: 200K
- Max Output: 128K
Gemini 2.5 Flash (Google)
35% cheaper, 66K max output, adds tool use and reasoning.
- Input: $0.15/1M (52% cheaper)
- Output: $0.60/1M (29% cheaper)
- Context: 1M
- Max Output: 66K
o3 (OpenAI)
Comparable performance, 100K max output, adds tool use and reasoning.
- Input: $0.40/1M (29% more)
- Output: $1.60/1M (88% more)
- Context: 200K
- Max Output: 100K
GPT-4o Mini (OpenAI)
35% cheaper, adds tool use.
- Input: $0.15/1M (52% cheaper)
- Output: $0.60/1M (29% cheaper)
- Context: 128K
- Max Output: 16K
GLM-4.7 (Zhipu AI)
Comparable performance, 128K max output, adds reasoning.
- Input: $0.60/1M (94% more)
- Output: $2.20/1M (159% more)
- Context: 200K
- Max Output: 128K
Mistral Medium 3 (Mistral)
Adds tool use.
- Input: $0.40/1M (29% more)
- Output: $2.00/1M (135% more)
- Context: 128K
- Max Output: 16K
Full Comparison Table
Price and benchmark deltas are relative to Llama 4 Maverick ($0.31 in / $0.85 out per 1M tokens; MMLU-Pro 80.5%, HumanEval 90.2%).

| Model | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
|---|---|---|---|---|---|---|
| Llama 4 Scout (Meta) | $0.18 (42% cheaper) | $0.63 (26% cheaper) | 10M | 74.2% (-6.3%) | 86% (-4.2%) | 85 |
| DeepSeek V3 (DeepSeek) | $0.14 (55% cheaper) | $0.28 (67% cheaper) | 164K | 78% (-2.5%) | 89% (-1.2%) | 78 |
| MiniMax M2.5 (MiniMax) | $0.30 (3% cheaper) | $1.20 (41% more) | 200K | 82% (+1.5%) | 90% (-0.2%) | 75 |
| Gemini 2.5 Flash (Google) | $0.15 (52% cheaper) | $0.60 (29% cheaper) | 1M | 76% (-4.5%) | 89.5% (-0.7%) | 66 |
| o3 (OpenAI) | $0.40 (29% more) | $1.60 (88% more) | 200K | 87% (+6.5%) | 94.5% (+4.3%) | 63 |
| GPT-4o Mini (OpenAI) | $0.15 (52% cheaper) | $0.60 (29% cheaper) | 128K | 68% (-12.5%) | 87.2% (-3.0%) | 63 |
| GLM-4.7 (Zhipu AI) | $0.60 (94% more) | $2.20 (159% more) | 200K | 84.3% (+3.8%) | — | 61 |
| Mistral Medium 3 (Mistral) | $0.40 (29% more) | $2.00 (135% more) | 128K | 76% (-4.5%) | 87% (-3.2%) | 61 |
| DeepSeek R1 (DeepSeek) | $0.55 (77% more) | $2.19 (158% more) | 128K | 84% (+3.5%) | 92% (+1.8%) | 53 |
| Mistral Large 3 (Mistral) | $2.00 (545% more) | $5.00 (488% more) | 128K | 83% (+2.5%) | 91% (+0.8%) | 53 |
| GPT-5 (OpenAI) | $1.25 (303% more) | $10.00 (1076% more) | 128K | 88.5% (+8.0%) | 95% (+4.8%) | 50 |
| Gemini 2.5 Pro (Google) | $1.25 (303% more) | $10.00 (1076% more) | 1M | 87.5% (+7.0%) | 93.5% (+3.3%) | 50 |
| o4-mini (OpenAI) | $1.10 (255% more) | $4.40 (418% more) | 200K | 85% (+4.5%) | 93.5% (+3.3%) | 46 |
| Gemini 3 Flash (Google) | $0.50 (61% more) | $3.00 (253% more) | 1M | 78% (-2.5%) | 90% (-0.2%) | 46 |
| Claude Opus 4.6 (Anthropic) | $5.00 (1513% more) | $25.00 (2841% more) | 200K | 89.5% (+9.0%) | 95% (+4.8%) | 43 |
| GPT-5.3 Codex (OpenAI) | $2.00 (545% more) | $16.00 (1782% more) | 200K | 90% (+9.5%) | 96.5% (+6.3%) | 43 |
| GPT-5.2 Codex (OpenAI) | $1.75 (465% more) | $14.00 (1547% more) | 200K | 89% (+8.5%) | 95.5% (+5.3%) | 43 |
| Claude Haiku 4.5 (Anthropic) | $0.80 (158% more) | $4.00 (371% more) | 200K | 69.4% (-11.1%) | 88.1% (-2.1%) | 43 |
| Gemini 3.1 Pro (Google) | $2.00 (545% more) | $12.00 (1312% more) | 1M | 91% (+10.5%) | 95% (+4.8%) | 40 |
| Gemini 3 Pro (Google) | $2.00 (545% more) | $12.00 (1312% more) | 1M | 89.8% (+9.3%) | 94% (+3.8%) | 40 |
| GLM-5 (Zhipu AI) | $1.00 (223% more) | $3.20 (276% more) | 200K | 70.4% (-10.1%) | 91% (+0.8%) | 38 |
| Claude Sonnet 4.6 (Anthropic) | $3.00 (868% more) | $15.00 (1665% more) | 200K | 86% (+5.5%) | 94% (+3.8%) | 36 |
| Claude Sonnet 4.5 (Anthropic) | $3.00 (868% more) | $15.00 (1665% more) | 200K | 84.5% (+4.0%) | 93% (+2.8%) | 36 |
| Grok 4 (xAI) | $3.00 (868% more) | $15.00 (1665% more) | 128K | 86% (+5.5%) | 93% (+2.8%) | 33 |
| GPT-4o (OpenAI) | $2.50 (706% more) | $10.00 (1076% more) | 128K | 80.5% (same) | 91% (+0.8%) | 28 |
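Per-token prices only matter in proportion to your traffic. A minimal sketch of estimating total spend for a hypothetical workload, using prices from the table above (the token volumes are made-up assumptions, not benchmarks):

```python
# Sketch: estimating spend for a hypothetical workload using prices
# from the comparison table. Token volumes below are assumptions.

PRICES = {  # $ per 1M tokens: (input, output)
    "Llama 4 Maverick": (0.31, 0.85),
    "Llama 4 Scout": (0.18, 0.63),
    "DeepSeek V3": (0.14, 0.28),
    "Gemini 2.5 Flash": (0.15, 0.60),
}

def workload_cost(model: str, in_tokens_m: float, out_tokens_m: float) -> float:
    """Dollar cost for input/output token volumes given in millions."""
    p_in, p_out = PRICES[model]
    return in_tokens_m * p_in + out_tokens_m * p_out

# Assumed workload: 500M input tokens, 100M output tokens per month.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 500, 100):.2f}")
```

With this (assumed) input-heavy mix, Maverick comes to about $240/month, Scout $153, and DeepSeek V3 $98; an output-heavy workload would shift the ranking toward models with cheap output tokens.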