Best Llama 4 Scout Alternatives
Llama 4 Scout by Meta is an open-source model priced at $0.18/$0.63 per 1M tokens (input/output). It's already affordable, but you might want different strengths or features.
Llama 4 Scout (Meta, open source)
- Input: $0.18/1M
- Output: $0.63/1M
- Context: 10M
- Max Output: 32K
Top Alternatives
Llama 4 Maverick (Meta): higher benchmark scores, open-source and self-hostable.
- Input: $0.31/1M (72% more)
- Output: $0.85/1M (35% more)
- Context: 1M
- Max Output: 32K
DeepSeek V3 (DeepSeek): 48% cheaper, higher benchmark scores, adds reasoning.
- Input: $0.14/1M (22% cheaper)
- Output: $0.28/1M (56% cheaper)
- Context: 164K
- Max Output: 16K
MiniMax M2.5 (MiniMax): higher benchmark scores, 128K max output, adds reasoning.
- Input: $0.30/1M (67% more)
- Output: $1.20/1M (90% more)
- Context: 200K
- Max Output: 128K
Gemini 2.5 Flash (Google): 7% cheaper, higher benchmark scores, 66K max output.
- Input: $0.15/1M (17% cheaper)
- Output: $0.60/1M (5% cheaper)
- Context: 1M
- Max Output: 66K
GPT-4o Mini (OpenAI): 7% cheaper, adds tool-use.
- Input: $0.15/1M (17% cheaper)
- Output: $0.60/1M (5% cheaper)
- Context: 128K
- Max Output: 16K
Mistral Medium 3 (Mistral): comparable performance, adds tool-use.
- Input: $0.40/1M (122% more)
- Output: $2.00/1M (217% more)
- Context: 128K
- Max Output: 16K
o3 (OpenAI): higher benchmark scores, 100K max output, adds tool-use, reasoning.
- Input: $0.40/1M (122% more)
- Output: $1.60/1M (154% more)
- Context: 200K
- Max Output: 100K
GLM-4.7 (Zhipu AI): higher benchmark scores, 128K max output, adds reasoning.
- Input: $0.60/1M (233% more)
- Output: $2.20/1M (249% more)
- Context: 200K
- Max Output: 128K
Full Comparison Table

Percentage deltas are relative to Llama 4 Scout.

| Model | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
|---|---|---|---|---|---|---|
| Llama 4 Maverick (Meta) | $0.31 (+72%) | $0.85 (+35%) | 1M | 80.5% (+6.3%) | 90.2% (+4.2%) | 90 |
| DeepSeek V3 (DeepSeek) | $0.14 (-22%) | $0.28 (-56%) | 164K | 78% (+3.8%) | 89% (+3.0%) | 85 |
| MiniMax M2.5 (MiniMax) | $0.30 (+67%) | $1.20 (+90%) | 200K | 82% (+7.8%) | 90% (+4.0%) | 75 |
| Gemini 2.5 Flash (Google) | $0.15 (-17%) | $0.60 (-5%) | 1M | 76% (+1.8%) | 89.5% (+3.5%) | 73 |
| GPT-4o Mini (OpenAI) | $0.15 (-17%) | $0.60 (-5%) | 128K | 68% (-6.2%) | 87.2% (+1.2%) | 71 |
| Mistral Medium 3 (Mistral) | $0.40 (+122%) | $2.00 (+217%) | 128K | 76% (+1.8%) | 87% (+1.0%) | 68 |
| o3 (OpenAI) | $0.40 (+122%) | $1.60 (+154%) | 200K | 87% (+12.8%) | 94.5% (+8.5%) | 63 |
| GLM-4.7 (Zhipu AI) | $0.60 (+233%) | $2.20 (+249%) | 200K | 84.3% (+10.1%) | — | 58 |
| o4-mini (OpenAI) | $1.10 (+511%) | $4.40 (+598%) | 200K | 85% (+10.8%) | 93.5% (+7.5%) | 53 |
| Gemini 3 Flash (Google) | $0.50 (+178%) | $3.00 (+376%) | 1M | 78% (+3.8%) | 90% (+4.0%) | 53 |
| Mistral Large 3 (Mistral) | $2.00 (+1011%) | $5.00 (+694%) | 128K | 83% (+8.8%) | 91% (+5.0%) | 53 |
| Claude Haiku 4.5 (Anthropic) | $0.80 (+344%) | $4.00 (+535%) | 200K | 69.4% (-4.8%) | 88.1% (+2.1%) | 51 |
| DeepSeek R1 (DeepSeek) | $0.55 (+206%) | $2.19 (+248%) | 128K | 84% (+9.8%) | 92% (+6.0%) | 50 |
| GLM-5 (Zhipu AI) | $1.00 (+456%) | $3.20 (+408%) | 200K | 70.4% (-3.8%) | 91% (+5.0%) | 46 |
| Claude Opus 4.6 (Anthropic) | $5.00 (+2678%) | $25.00 (+3868%) | 200K | 89.5% (+15.3%) | 95% (+9.0%) | 43 |
| Claude Sonnet 4.6 (Anthropic) | $3.00 (+1567%) | $15.00 (+2281%) | 200K | 86% (+11.8%) | 94% (+8.0%) | 43 |
| Claude Sonnet 4.5 (Anthropic) | $3.00 (+1567%) | $15.00 (+2281%) | 200K | 84.5% (+10.3%) | 93% (+7.0%) | 43 |
| GPT-5.3 Codex (OpenAI) | $2.00 (+1011%) | $16.00 (+2440%) | 200K | 90% (+15.8%) | 96.5% (+10.5%) | 43 |
| GPT-5.2 Codex (OpenAI) | $1.75 (+872%) | $14.00 (+2122%) | 200K | 89% (+14.8%) | 95.5% (+9.5%) | 43 |
| GPT-5 (OpenAI) | $1.25 (+594%) | $10.00 (+1487%) | 128K | 88.5% (+14.3%) | 95% (+9.0%) | 40 |
| Gemini 3.1 Pro (Google) | $2.00 (+1011%) | $12.00 (+1805%) | 1M | 91% (+16.8%) | 95% (+9.0%) | 40 |
| Gemini 3 Pro (Google) | $2.00 (+1011%) | $12.00 (+1805%) | 1M | 89.8% (+15.6%) | 94% (+8.0%) | 40 |
| Gemini 2.5 Pro (Google) | $1.25 (+594%) | $10.00 (+1487%) | 1M | 87.5% (+13.3%) | 93.5% (+7.5%) | 40 |
| Grok 4 (xAI) | $3.00 (+1567%) | $15.00 (+2281%) | 128K | 86% (+11.8%) | 93% (+7.0%) | 40 |
| GPT-4o (OpenAI) | $2.50 (+1289%) | $10.00 (+1487%) | 128K | 80.5% (+6.3%) | 91% (+5.0%) | 36 |
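Because input and output tokens are priced differently, a headline figure like "48% cheaper" depends on your input/output token mix. A minimal sketch of that arithmetic (prices copied from the table above; the 50/50 split and the helper function are illustrative assumptions, not part of any vendor's API):

```python
# Blended $-per-1M-token cost for a chosen input/output token split.
# Prices are (input, output) in $ per 1M tokens, from the table above.
PRICES = {
    "Llama 4 Scout":    (0.18, 0.63),
    "DeepSeek V3":      (0.14, 0.28),
    "Gemini 2.5 Flash": (0.15, 0.60),
}

def blended_cost(model: str, input_share: float = 0.5) -> float:
    """$ per 1M tokens when `input_share` of your tokens are input."""
    inp, out = PRICES[model]
    return input_share * inp + (1 - input_share) * out

# At a 50/50 mix, Scout costs 0.5*0.18 + 0.5*0.63 = $0.405 per 1M tokens,
# DeepSeek V3 costs 0.5*0.14 + 0.5*0.28 = $0.21 — about 48% cheaper,
# matching the headline figure above.
scout = blended_cost("Llama 4 Scout")
v3 = blended_cost("DeepSeek V3")
print(f"DeepSeek V3 is {1 - v3 / scout:.0%} cheaper than Scout")
```

Shift `input_share` toward input-heavy workloads (long documents, short answers) and the gap narrows, since the two models' input prices are closer than their output prices.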