Best Gemini 2.5 Pro Alternatives

Gemini 2.5 Pro by Google is a mid-tier model priced at $1.25 per 1M input tokens and $10 per 1M output tokens. It's on the expensive side; there are cheaper options with similar quality.

Gemini 2.5 Pro

Google · Mid-Tier

Input: $1.25/1M · Output: $10/1M · Context: 1M · Max Output: 66K
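
To make the per-token prices concrete, here is a minimal sketch of what a single request costs at these rates. The token counts are hypothetical; the cost is just tokens divided by one million, times the per-million rate.

```python
# Cost of a single request at Gemini 2.5 Pro's listed rates.
# The token counts below are hypothetical, chosen only for illustration.
INPUT_RATE = 1.25    # $ per 1M input tokens
OUTPUT_RATE = 10.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call: tokens / 1e6 * per-million rate, summed."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# A 2,000-token prompt with an 800-token reply:
print(f"${request_cost(2_000, 800):.4f}")  # $0.0105
```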

Why Switch from Gemini 2.5 Pro?

It is being superseded by Gemini 3 Pro.

Top Alternatives

#1 Mistral Large 3 (Mistral, Flagship)

About 38% cheaper on a blended 1:1 input/output basis (see the sketch after these cards), with comparable performance.

Input: $2/1M (60% more) · Output: $5/1M (50% cheaper) · Context: 128K · Max Output: 16K
MMLU-Pro: 83% (-4.5%) · HumanEval: 91% (-2.5%) · GPQA: —

#2 Claude Sonnet 4.6 (Anthropic, Mid-Tier)

Same category, different trade-offs.

Input: $3/1M (140% more) · Output: $15/1M (50% more) · Context: 200K · Max Output: 16K
MMLU-Pro: 86% (-1.5%) · HumanEval: 94% (+0.5%) · GPQA: 70% (-6.0%)

#3 Claude Sonnet 4.5 (Anthropic, Mid-Tier)

Same category, different trade-offs.

Input: $3/1M (140% more) · Output: $15/1M (50% more) · Context: 200K · Max Output: 16K
MMLU-Pro: 84.5% (-3.0%) · HumanEval: 93% (-0.5%) · GPQA: 68.2% (-7.8%)

#4 GPT-5 (OpenAI, Flagship)

Comparable performance.

Input: $1.25/1M (same price) · Output: $10/1M (same price) · Context: 128K · Max Output: 16K
MMLU-Pro: 88.5% (+1.0%) · HumanEval: 95% (+1.5%) · GPQA: 73.5% (-2.5%)

#5 Gemini 3.1 Pro (Google, Flagship)

Higher benchmark scores.

Input: $2/1M (60% more) · Output: $12/1M (20% more) · Context: 1M · Max Output: 64K
MMLU-Pro: 91% (+3.5%) · HumanEval: 95% (+1.5%) · GPQA: 94.3% (+18.3%)

#6 Gemini 3 Pro (Google, Flagship)

Comparable performance.

Input: $2/1M (60% more) · Output: $12/1M (20% more) · Context: 1M · Max Output: 66K
MMLU-Pro: 89.8% (+2.3%) · HumanEval: 94% (+0.5%) · GPQA: 77% (+1.0%)

#7 o4-mini (OpenAI, Reasoning)

Dramatically cheaper (about 51% on a blended 1:1 basis), with comparable performance.

Input: $1.1/1M (12% cheaper) · Output: $4.4/1M (56% cheaper) · Context: 200K · Max Output: 100K
MMLU-Pro: 85% (-2.5%) · HumanEval: 93.5% (same) · GPQA: 76% (same)

#8 GLM-4.7 (Zhipu AI, Mid-Tier)

Dramatically cheaper (about 75% on a blended 1:1 basis), with comparable performance.

Input: $0.6/1M (52% cheaper) · Output: $2.2/1M (78% cheaper) · Context: 200K · Max Output: 128K
MMLU-Pro: 84.3% (-3.2%) · HumanEval: — · GPQA: 85.7% (+9.7%)
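
The headline savings on the cards above match a simple average of the input and output rates, i.e. a 1:1 token blend; your real savings depend on your own input/output mix. A minimal sketch of that calculation, using rates from this page (the 1:1 blend is an assumption):

```python
# Blended savings vs. Gemini 2.5 Pro ($1.25 in / $10 out per 1M tokens),
# assuming a 1:1 input/output token blend. Rates are from the cards above.
BASELINE = (1.25, 10.00)

def blended_savings(input_rate: float, output_rate: float) -> float:
    """Percent saved on the simple average of input and output rates."""
    base = sum(BASELINE) / 2
    alt = (input_rate + output_rate) / 2
    return (1 - alt / base) * 100

for name, (inp, out) in {
    "Mistral Large 3": (2.00, 5.00),  # ~38% cheaper
    "o4-mini": (1.10, 4.40),          # ~51% cheaper
    "GLM-4.7": (0.60, 2.20),          # ~75% cheaper
}.items():
    print(f"{name}: {blended_savings(inp, out):.0f}% cheaper")
```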

Full Comparison Table

| Model | Vendor | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
|---|---|---|---|---|---|---|---|
| Mistral Large 3 | Mistral | $2.00 (60% more) | $5.00 (50% cheaper) | 128K | 83% (-4.5%) | 91% (-2.5%) | 80 |
| Claude Sonnet 4.6 | Anthropic | $3.00 (140% more) | $15.00 (50% more) | 200K | 86% (-1.5%) | 94% (+0.5%) | 78 |
| Claude Sonnet 4.5 | Anthropic | $3.00 (140% more) | $15.00 (50% more) | 200K | 84.5% (-3.0%) | 93% (-0.5%) | 78 |
| GPT-5 | OpenAI | $1.25 (same price) | $10.00 (same price) | 128K | 88.5% (+1.0%) | 95% (+1.5%) | 75 |
| Gemini 3.1 Pro | Google | $2.00 (60% more) | $12.00 (20% more) | 1M | 91% (+3.5%) | 95% (+1.5%) | 75 |
| Gemini 3 Pro | Google | $2.00 (60% more) | $12.00 (20% more) | 1M | 89.8% (+2.3%) | 94% (+0.5%) | 75 |
| o4-mini | OpenAI | $1.10 (12% cheaper) | $4.40 (56% cheaper) | 200K | 85% (-2.5%) | 93.5% (same) | 73 |
| GLM-4.7 | Zhipu AI | $0.60 (52% cheaper) | $2.20 (78% cheaper) | 200K | 84.3% (-3.2%) | — | 73 |
| Mistral Medium 3 | Mistral | $0.40 (68% cheaper) | $2.00 (80% cheaper) | 128K | 76% (-11.5%) | 87% (-6.5%) | 73 |
| Claude Opus 4.6 | Anthropic | $5.00 (300% more) | $25.00 (150% more) | 200K | 89.5% (+2.0%) | 95% (+1.5%) | 70 |
| GPT-5.3 Codex | OpenAI | $2.00 (60% more) | $16.00 (60% more) | 200K | 90% (+2.5%) | 96.5% (+3.0%) | 70 |
| GPT-5.2 Codex | OpenAI | $1.75 (40% more) | $14.00 (40% more) | 200K | 89% (+1.5%) | 95.5% (+2.0%) | 70 |
| GPT-4o | OpenAI | $2.50 (100% more) | $10.00 (same price) | 128K | 80.5% (-7.0%) | 91% (-2.5%) | 70 |
| o3 | OpenAI | $0.40 (68% cheaper) | $1.60 (84% cheaper) | 200K | 87% (-0.5%) | 94.5% (+1.0%) | 70 |
| GLM-5 | Zhipu AI | $1.00 (20% cheaper) | $3.20 (68% cheaper) | 200K | 70.4% (-17.1%) | 91% (-2.5%) | 65 |
| Gemini 3 Flash | Google | $0.50 (60% cheaper) | $3.00 (70% cheaper) | 1M | 78% (-9.5%) | 90% (-3.5%) | 63 |
| Claude Haiku 4.5 | Anthropic | $0.80 (36% cheaper) | $4.00 (60% cheaper) | 200K | 69.4% (-18.1%) | 88.1% (-5.4%) | 60 |
| MiniMax M2.5 | MiniMax | $0.30 (76% cheaper) | $1.20 (88% cheaper) | 200K | 82% (-5.5%) | 90% (-3.5%) | 60 |
| Grok 4 | xAI | $3.00 (140% more) | $15.00 (50% more) | 128K | 86% (-1.5%) | 93% (-0.5%) | 59 |
| Gemini 2.5 Flash | Google | $0.15 (88% cheaper) | $0.60 (94% cheaper) | 1M | 76% (-11.5%) | 89.5% (-4.0%) | 53 |
| Llama 4 Maverick | Meta | $0.31 (75% cheaper) | $0.85 (92% cheaper) | 1M | 80.5% (-7.0%) | 90.2% (-3.3%) | 53 |
| DeepSeek R1 | DeepSeek | $0.55 (56% cheaper) | $2.19 (78% cheaper) | 128K | 84% (-3.5%) | 92% (-1.5%) | 53 |
| DeepSeek V3 | DeepSeek | $0.14 (89% cheaper) | $0.28 (97% cheaper) | 164K | 78% (-9.5%) | 89% (-4.5%) | 43 |
| GPT-4o Mini | OpenAI | $0.15 (88% cheaper) | $0.60 (94% cheaper) | 128K | 68% (-19.5%) | 87.2% (-6.3%) | 40 |
| Llama 4 Scout | Meta | $0.18 (86% cheaper) | $0.63 (94% cheaper) | 10M | 74.2% (-13.3%) | 86% (-7.5%) | 35 |
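
To slice the table differently, for example finding the cheapest alternatives that stay within a few points of Gemini 2.5 Pro on MMLU-Pro, the same blended-price arithmetic applies. A sketch over a handful of rows transcribed from the table (the 1:1 blend and the 5-point cutoff are arbitrary choices for illustration):

```python
# Filter and sort a few table rows: models within 5 MMLU-Pro points of
# Gemini 2.5 Pro, ordered by blended (1:1) price. Rows are transcribed
# from the table above; the blend ratio and cutoff are assumptions.
rows = [
    # (model, input $/1M, output $/1M, MMLU-Pro %)
    ("Mistral Large 3", 2.00, 5.00, 83.0),
    ("o4-mini", 1.10, 4.40, 85.0),
    ("GLM-4.7", 0.60, 2.20, 84.3),
    ("DeepSeek R1", 0.55, 2.19, 84.0),
    ("GPT-4o Mini", 0.15, 0.60, 68.0),
]
BASELINE_MMLU = 87.5  # Gemini 2.5 Pro's score, as implied by the table's deltas

candidates = [r for r in rows if r[3] >= BASELINE_MMLU - 5]
for name, inp, out, mmlu in sorted(candidates, key=lambda r: (r[1] + r[2]) / 2):
    print(f"{name:<16} ${(inp + out) / 2:.2f}/1M blended, MMLU-Pro {mmlu}%")
```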
