Chatbot Arena

Attribution: LMSYS · April 9, 2025

This leaderboard is based on the following benchmarks.

  • Chatbot Arena - a crowdsourced, randomized battle platform for large language models (LLMs). We use 2.8M+ user votes to compute Elo ratings.
  • MMLU - a test to measure a model’s multitask accuracy on 57 tasks.
  • Arena-Hard-Auto - an automatic evaluation tool for instruction-tuned LLMs.


Best Open LM

| Model | Arena Elo | MMLU | License |
|---|---|---|---|
| DeepSeek-V3-0324 | 1370 | 88.5 | MIT |
| DeepSeek-R1 | 1359 | 90.8 | MIT |
| Gemma-3-27B-it | 1342 | | Gemma |
| QwQ-32B | 1315 | | Apache 2.0 |
| Llama-3.3-Nemotron-Super-49B-v1 | 1296 | 86 | Nvidia |

Full Leaderboard
| Model | Arena Elo | Coding | Vision | Arena Hard | MMLU | Votes | Organization | License |
|---|---|---|---|---|---|---|---|---|
| 🥇 Gemini-2.5-Pro-Exp-03-25 | 1437 | 1423 | 1315 | | | 7431 | Google | Proprietary |
| 🥇 ChatGPT-4o-latest (2025-03-26) | 1406 | 1416 | 1304 | | | 6612 | OpenAI | Proprietary |
| 🥇 Grok-3-Preview-02-24 | 1402 | 1409 | | 92.7 | | 13919 | xAI | Proprietary |
| 🥇 GPT-4.5-Preview | 1397 | 1402 | 1253 | | | 13443 | OpenAI | Proprietary |
| 🥈 Gemini-2.0-Pro-Exp-02-05 | 1380 | 1379 | 1240 | | | 20136 | Google | Proprietary |
| 🥈 Gemini-2.0-Flash-Thinking-Exp-01-21 | 1380 | 1365 | 1279 | | | 25266 | Google | Proprietary |
| 🥈 DeepSeek-V3-0324 | 1370 | 1394 | | | 88.5 | 4721 | DeepSeek | MIT |
| 🥈 DeepSeek-R1 | 1359 | 1368 | | | 90.8 | 15098 | DeepSeek | MIT |
| 🥈 Gemini-2.0-Flash-Exp | 1355 | 1353 | 1257 | | | 22518 | Google | Proprietary |
| 🥈 o1-2024-12-17 | 1350 | 1358 | 1227 | 90.4 | 91.8 | 27831 | OpenAI | Proprietary |
| 🥈 Gemma-3-27B-it | 1342 | 1312 | | | | 9147 | Google | Gemma |
| 🥈 Qwen2.5-Max | 1340 | 1345 | | | | 19995 | Alibaba | Proprietary |
| 🥈 o3-mini-high | 1325 | 1364 | | | | 16889 | OpenAI | Proprietary |
| 🥉 DeepSeek-V3 | 1318 | 1320 | | 85.5 | 88.5 | 22843 | DeepSeek | DeepSeek |
| 🥉 QwQ-32B | 1315 | 1323 | | | | 6729 | Alibaba | Apache 2.0 |
| 🥉 Qwen-Plus-0125 | 1310 | 1320 | | | | 6058 | Alibaba | Proprietary |
| 🥉 Gemini-2.0-Flash-Lite | 1310 | 1318 | 1156 | | | 20990 | Google | Proprietary |
| 🥉 GLM-4-Plus-0111 | 1310 | 1290 | | | | 6032 | Zhipu | Proprietary |
| 🥉 o3-mini | 1305 | 1348 | | | | 23693 | OpenAI | Proprietary |
| 🥉 Command A (03-2025) | 1305 | 1314 | | | | 6380 | Cohere | CC-BY-NC-4.0 |
| 🥉 Step-2-16K-Exp | 1305 | 1295 | | | | 5126 | StepFun | Proprietary |
| 🥉 o1-mini | 1304 | 1353 | | 92 | | 54967 | OpenAI | Proprietary |
| 🥉 Claude 3.7 Sonnet (thinking-32k) | 1303 | 1336 | | | | 7809 | Anthropic | Proprietary |
| 🥉 Hunyuan-TurboS-20250226 | 1302 | 1327 | | | | 2456 | Tencent | Proprietary |
| 🥉 Gemini-1.5-Pro-002 | 1302 | 1291 | 1222 | | | 58651 | Google | Proprietary |
| 🥉 Hunyuan-Turbo-0110 | 1296 | 1315 | | | | 2509 | Tencent | Proprietary |
| 🥉 Llama-3.3-Nemotron-Super-49B-v1 | 1296 | 1301 | | 88.3 | 86 | 2371 | Nvidia | Nvidia |
| 🥉 Claude 3.7 Sonnet | 1295 | 1332 | 1214 | | | 13107 | Anthropic | Proprietary |
| 🥉 Grok-2-08-13 | 1288 | 1282 | | | 87.5 | 67092 | xAI | Proprietary |
| 🥉 Yi-Lightning | 1287 | 1303 | | 81.5 | | 28971 | 01 AI | Proprietary |
| 🥉 GPT-4o-2024-05-13 | 1285 | 1293 | 1206 | 79.21 | 88.7 | 117768 | OpenAI | Proprietary |
| 🥉 Claude 3.5 Sonnet (20241022) | 1283 | 1326 | 1182 | 85.2 | 88.7 | 64098 | Anthropic | Proprietary |
| Deepseek-v2.5-1210 | 1279 | 1297 | | | | 7251 | DeepSeek | DeepSeek |
| Athene-v2-Chat-72B | 1275 | 1300 | | 85 | | 26094 | NexusFlow | NexusFlow |
| Llama-4-Maverick-17B-128E-Instruct | 1273 | 1299 | | | | 2662 | Meta | Llama 4 |
| Hunyuan-Large-2025-02-10 | 1272 | 1292 | | | | 3860 | Tencent | Proprietary |
| GPT-4o-mini-2024-07-18 | 1272 | 1283 | 1124 | 74.94 | 82 | 70818 | OpenAI | Proprietary |
| Gemini-1.5-Flash-002 | 1271 | 1254 | 1205 | | | 37026 | Google | Proprietary |
| Llama-3.1-405B-Instruct-bf16 | 1269 | 1280 | | | 88.6 | 42624 | Meta | Llama 3.1 |
| Llama-3.1-Nemotron-70B-Instruct | 1269 | 1271 | | 84.9 | | 7577 | Nvidia | Llama 3.1 |
| Llama-3.1-405B-Instruct-fp8 | 1267 | 1276 | | 69.3 | 88.6 | 63061 | Meta | Llama 3.1 |
| Grok-2-Mini-08-13 | 1266 | 1262 | | | | 55449 | xAI | Proprietary |
| Yi-Lightning-lite | 1264 | 1267 | | | | 17073 | 01 AI | Proprietary |
| Hunyuan-Standard-2025-02-10 | 1260 | 1268 | | | | 4014 | Tencent | Proprietary |
| Qwen2.5-72B-Instruct | 1257 | 1283 | | 78 | | 41535 | Alibaba | Qwen |
| Llama-3.3-70B-Instruct | 1257 | 1258 | | | | 37031 | Meta | Llama-3.3 |
| GPT-4-Turbo-2024-04-09 | 1256 | 1263 | 1151 | 82.63 | | 102160 | OpenAI | Proprietary |
| Mistral-Large-2407 | 1251 | 1269 | | 70.42 | | 48219 | Mistral | Mistral Research |
| GPT-4-1106-preview | 1250 | 1253 | | | | 103753 | OpenAI | Proprietary |
| Athene-70B | 1250 | 1253 | | 77.6 | | 20585 | NexusFlow | CC-BY-NC-4.0 |
| Mistral-Large-2411 | 1248 | 1265 | | | | 28444 | Mistral | MRL |
| Llama-3.1-70B-Instruct | 1248 | 1251 | | 55.73 | 86 | 58666 | Meta | Llama 3.1 |
| Claude 3 Opus | 1247 | 1250 | 1076 | 60.36 | 86.8 | 202710 | Anthropic | Proprietary |
| Amazon Nova Pro 1.0 | 1245 | 1261 | 1044 | | | 23635 | Amazon | Proprietary |
| GPT-4-0125-preview | 1245 | 1243 | | 77.96 | | 97077 | OpenAI | Proprietary |
| Llama-3.1-Tulu-3-70B | 1244 | 1233 | | | | 3015 | Ai2 | Llama 3.1 |
| Yi-Large-preview | 1240 | 1245 | | 71.48 | | 51646 | 01 AI | Proprietary |
| Claude 3.5 Haiku (20241022) | 1237 | 1264 | | | | 32053 | Anthropic | Proprietary |
| Reka-Core-20240904 | 1235 | 1221 | | | | 7938 | Reka AI | Proprietary |
| Reka-Core-20240722 | 1230 | 1208 | | | | 13286 | Reka AI | Proprietary |
| Qwen-Plus-0828 | 1227 | 1245 | | | | 14625 | Alibaba | Proprietary |
| Gemini-1.5-Flash-001 | 1227 | 1232 | 1072 | 49.61 | 78.9 | 65668 | Google | Proprietary |
| Jamba-1.5-Large | 1221 | 1227 | | | 81.2 | 9127 | AI21 Labs | Jamba Open |
| Deepseek-v2-API-0628 | 1220 | 1242 | | | | 19505 | DeepSeek AI | DeepSeek |
| Gemma-2-27B-it | 1220 | 1209 | | 57.51 | | 79528 | Google | Gemma license |
| Qwen2.5-Coder-32B-Instruct | 1217 | 1261 | | | | 5730 | Alibaba | Apache 2.0 |
| Amazon Nova Lite 1.0 | 1217 | 1235 | 1061 | | | 20655 | Amazon | Proprietary |
| Mistral-Small-24B-Instruct-2501 | 1216 | 1230 | | | | 13940 | Mistral | Apache 2.0 |
| Gemma-2-9B-it-SimPO | 1216 | 1196 | | | | 10552 | Princeton | MIT |
| Command R+ (08-2024) | 1215 | 1181 | | | | 10541 | Cohere | CC-BY-NC-4.0 |
| Deepseek-Coder-v2-0724 | 1214 | 1266 | | 62.3 | | 11728 | DeepSeek | Proprietary |
| Yi-Large | 1212 | 1220 | | 63.7 | | 16628 | 01 AI | Proprietary |
| Gemini-1.5-Flash-8B-001 | 1212 | 1208 | 1106 | | | 37697 | Google | Proprietary |
| Llama-3.1-Nemotron-51B-Instruct | 1211 | 1211 | | | | 3885 | Nvidia | Llama 3.1 |
| Nemotron-4-340B-Instruct | 1209 | 1198 | | | | 20613 | Nvidia | NVIDIA Open Model |
| Aya-Expanse-32B | 1209 | 1193 | | | | 28751 | Cohere | CC-BY-NC-4.0 |
| GLM-4-0520 | 1206 | 1216 | | 63.84 | | 10221 | Zhipu AI | Proprietary |
| Llama-3-70B-Instruct | 1206 | 1200 | | 46.57 | 82 | 163660 | Meta | Llama 3 |
| Reka-Flash-20240904 | 1205 | 1191 | | | | 8138 | Reka AI | Proprietary |
| Gemini-1.5-Flash-8B-Exp-0827 | 1205 | 1189 | 1112 | | | 25344 | Google | Proprietary |
| Phi-4 | 1204 | 1221 | | | | 24508 | Microsoft | MIT |
| Claude 3 Sonnet | 1201 | 1213 | 1048 | 46.8 | 79 | 113056 | Anthropic | Proprietary |
| Reka-Flash-20240722 | 1201 | 1187 | | | | 13729 | Reka AI | Proprietary |
| Reka-Core-20240501 | 1199 | 1190 | 1015 | | 83.2 | 62566 | Reka AI | Proprietary |
| Amazon Nova Micro 1.0 | 1198 | 1210 | | | | 20666 | Amazon | Proprietary |
| Gemma-2-9B-it | 1192 | 1173 | | | | 57197 | Google | Gemma license |
| Command R+ (04-2024) | 1190 | 1164 | | 33.07 | | 80856 | Cohere | CC-BY-NC-4.0 |
| Hunyuan-Standard-256K | 1189 | 1227 | | | | 2901 | Tencent | Proprietary |
| Qwen2-72B-Instruct | 1187 | 1187 | | 46.86 | 84.2 | 38884 | Alibaba | Qianwen LICENSE |
| GPT-4-0314 | 1186 | 1195 | | 50 | 86.4 | 55980 | OpenAI | Proprietary |
| Llama-3.1-Tulu-3-8B | 1185 | 1179 | | | | 3075 | Ai2 | Llama 3.1 |
| GLM-4-0116 | 1183 | 1191 | | 55.72 | | 7576 | Zhipu AI | Proprietary |
| Qwen-Max-0428 | 1183 | 1189 | | | | 25694 | Alibaba | Proprietary |
| Ministral-8B-2410 | 1182 | 1201 | | | | 5111 | Mistral | MRL |
| Aya-Expanse-8B | 1180 | 1165 | | | | 10391 | Cohere | CC-BY-NC-4.0 |
| Claude 3 Haiku | 1179 | 1189 | 1000 | 41.47 | 75.2 | 122305 | Anthropic | Proprietary |
| Command R (08-2024) | 1179 | 1161 | | | | 10849 | Cohere | CC-BY-NC-4.0 |
| DeepSeek-Coder-V2-Instruct | 1178 | 1239 | | | | 15753 | DeepSeek AI | DeepSeek License |
| Llama-3.1-8B-Instruct | 1176 | 1186 | | 21.34 | 73 | 52599 | Meta | Llama 3.1 |
| Jamba-1.5-Mini | 1176 | 1181 | | | 69.7 | 9269 | AI21 Labs | Jamba Open |
| Reka-Flash-Preview-20240611 | 1165 | 1155 | 1024 | | | 20430 | Reka AI | Proprietary |
| GPT-4-0613 | 1163 | 1167 | | 37.9 | | 91639 | OpenAI | Proprietary |
| Qwen1.5-110B-Chat | 1161 | 1175 | | | 80.4 | 27441 | Alibaba | Qianwen LICENSE |
| Mistral-Large-2402 | 1157 | 1170 | | 37.71 | 81.2 | 64915 | Mistral | Proprietary |
| Yi-1.5-34B-Chat | 1157 | 1162 | | | 76.8 | 25136 | 01 AI | Apache-2.0 |
| Reka-Flash-21B-online | 1156 | 1147 | | | | 16031 | Reka AI | Proprietary |
| QwQ-32B-Preview | 1153 | 1147 | | | | 3413 | Alibaba | Apache 2.0 |
| Llama-3-8B-Instruct | 1152 | 1146 | | 20.56 | 68.4 | 109102 | Meta | Llama 3 |
| InternLM2.5-20B-chat | 1149 | 1158 | | | | 10595 | InternLM | Other |
| Claude-1 | 1149 | 1136 | | | 77 | 21151 | Anthropic | Proprietary |
| Command R (04-2024) | 1149 | 1123 | | 17.02 | | 56382 | Cohere | CC-BY-NC-4.0 |
| Mistral Medium | 1148 | 1152 | | 31.9 | 75.3 | 35559 | Mistral | Proprietary |
| Qwen1.5-72B-Chat | 1147 | 1160 | | 36.12 | 77.5 | 40669 | Alibaba | Qianwen LICENSE |
| Mixtral-8x22b-Instruct-v0.1 | 1147 | 1153 | | 36.36 | 77.8 | 53767 | Mistral | Apache 2.0 |
| Reka-Flash-21B | 1147 | 1141 | | | 73.5 | 25813 | Reka AI | Proprietary |
| Gemma-2-2b-it | 1144 | 1107 | | | 51.3 | 48906 | Google | Gemma license |
| Granite-3.1-8B-Instruct | 1142 | 1173 | | | | 3293 | IBM | Apache 2.0 |
| Claude-2.0 | 1132 | 1135 | | 23.99 | 78.5 | 12763 | Anthropic | Proprietary |
| Gemini-1.0-Pro-001 | 1131 | 1103 | | | 71.8 | 18802 | Google | Proprietary |
| Zephyr-ORPO-141b-A35b-v0.1 | 1127 | 1124 | | | | 4860 | HuggingFace | Apache 2.0 |
| Qwen1.5-32B-Chat | 1125 | 1149 | | | 73.4 | 22762 | Alibaba | Qianwen LICENSE |
| Mistral-Next | 1124 | 1132 | | 27.37 | | 12376 | Mistral | Proprietary |
| Phi-3-Medium-4k-Instruct | 1123 | 1125 | | 33.37 | 78 | 26111 | Microsoft | MIT |
| Granite-3.1-2B-Instruct | 1119 | 1147 | | | | 3382 | IBM | Apache 2.0 |
| Starling-LM-7B-beta | 1119 | 1129 | | 23.01 | | 16670 | Nexusflow | Apache-2.0 |
| Claude-2.1 | 1118 | 1132 | | 22.77 | | 37695 | Anthropic | Proprietary |
| GPT-3.5-Turbo-0613 | 1117 | 1135 | | 24.82 | | 38958 | OpenAI | Proprietary |
| Mixtral-8x7B-Instruct-v0.1 | 1114 | 1114 | | 23.4 | 70.6 | 76138 | Mistral | Apache 2.0 |
| Claude-Instant-1 | 1111 | 1109 | | | 73.4 | 20625 | Anthropic | Proprietary |
| Yi-34B-Chat | 1111 | 1106 | | 23.15 | 73.5 | 15919 | 01 AI | Yi License |
| Gemini Pro | 1111 | 1091 | | 17.8 | 71.8 | 6557 | Google | Proprietary |
| Qwen1.5-14B-Chat | 1109 | 1126 | | | 67.6 | 18691 | Alibaba | Qianwen LICENSE |
| GPT-3.5-Turbo-0314 | 1107 | 1115 | | 18.05 | 70 | 5639 | OpenAI | Proprietary |
| GPT-3.5-Turbo-0125 | 1106 | 1124 | | 23.34 | | 68869 | OpenAI | Proprietary |
| WizardLM-70B-v1.0 | 1106 | 1071 | | | 63.7 | 8382 | Microsoft | Llama 2 |
| DBRX-Instruct-Preview | 1103 | 1118 | | 24.63 | 73.7 | 33733 | Databricks | DBRX LICENSE |
| Llama-3.2-3B-Instruct | 1103 | 1080 | | | | 8400 | Meta | Llama 3.2 |
| Phi-3-Small-8k-Instruct | 1102 | 1107 | | 29.77 | 75.7 | 18472 | Microsoft | MIT |
| Tulu-2-DPO-70B | 1099 | 1093 | | 14.99 | | 6659 | AllenAI/UW | AI2 ImpACT Low-risk |
| Granite-3.0-8B-Instruct | 1093 | 1097 | | | | 7000 | IBM | Apache 2.0 |
| Llama-2-70B-chat | 1093 | 1072 | | 11.55 | 63 | 39608 | Meta | Llama 2 |
| OpenChat-3.5-0106 | 1091 | 1102 | | | 65.8 | 12990 | OpenChat | Apache-2.0 |
| Vicuna-33B | 1091 | 1067 | | 8.63 | 59.2 | 22945 | LMSYS | Non-commercial |
| Snowflake Arctic Instruct | 1090 | 1077 | | 17.61 | 67.3 | 34184 | Snowflake | Apache 2.0 |
| Starling-LM-7B-alpha | 1088 | 1080 | | 12.8 | 63.9 | 10416 | UC Berkeley | CC-BY-NC-4.0 |
| Gemma-1.1-7B-it | 1084 | 1084 | | 12.09 | 64.3 | 25086 | Google | Gemma license |
| Nous-Hermes-2-Mixtral-8x7B-DPO | 1084 | 1079 | | | | 3838 | NousResearch | Apache-2.0 |
| NV-Llama2-70B-SteerLM-Chat | 1080 | 1023 | | | 68.5 | 3635 | Nvidia | Llama 2 |
| pplx-70B-online | 1078 | 1028 | | | | 6895 | Perplexity AI | Proprietary |
| DeepSeek-LLM-67B-Chat | 1077 | 1079 | | | 71.3 | 4987 | DeepSeek AI | DeepSeek License |
| OpenChat-3.5 | 1076 | 1054 | | | 64.3 | 8107 | OpenChat | Apache-2.0 |
| Granite-3.0-2B-Instruct | 1074 | 1088 | | | | 7187 | IBM | Apache 2.0 |
| OpenHermes-2.5-Mistral-7B | 1074 | 1058 | | | | 5088 | NousResearch | Apache-2.0 |
| Mistral-7B-Instruct-v0.2 | 1072 | 1074 | | 12.57 | | 20065 | Mistral | Apache-2.0 |
| Qwen1.5-7B-Chat | 1070 | 1089 | | | 61 | 4871 | Alibaba | Qianwen LICENSE |
| Phi-3-Mini-4K-Instruct-June-24 | 1070 | 1082 | | | 70.9 | 12808 | Microsoft | MIT |
| GPT-3.5-Turbo-1106 | 1068 | 1095 | | 18.87 | | 17032 | OpenAI | Proprietary |
| Phi-3-Mini-4k-Instruct | 1066 | 1086 | | | 68.8 | 21090 | Microsoft | MIT |
| Llama-2-13b-chat | 1063 | 1051 | | | 53.6 | 19719 | Meta | Llama 2 |
| SOLAR-10.7B-Instruct-v1.0 | 1062 | 1047 | | | 66.2 | 4287 | Upstage AI | CC-BY-NC-4.0 |
| Dolphin-2.2.1-Mistral-7B | 1062 | 1025 | | | | 1713 | Cognitive Computations | Apache-2.0 |
| WizardLM-13b-v1.2 | 1059 | 1026 | | | 52.7 | 7175 | Microsoft | Llama 2 |
| Llama-3.2-1B-Instruct | 1054 | 1046 | | | | 8519 | Meta | Llama 3.2 |
| Qwen2.5-VL-32B-Instruct | | | 1218 | | | | Alibaba | Apache 2.0 |
| Step-1o-Vision-32k (highres) | | | 1187 | | | | StepFun | Proprietary |
| Qwen2.5-VL-72B-Instruct | | | 1172 | | | | Alibaba | Qwen |
| Pixtral-Large-2411 | | | 1154 | | | | Mistral | MRL |
| Qwen-VL-Max-1119 | | | 1128 | | | | Alibaba | Proprietary |
| Step-1V-32K | | | 1111 | | | | StepFun | Proprietary |
| Qwen2-VL-72b-Instruct | | | 1110 | | | | Alibaba | Qwen |
| Molmo-72B-0924 | | | 1076 | | | | AI2 | Apache 2.0 |
| Pixtral-12B-2409 | | | 1072 | | | | Mistral | Apache 2.0 |
| Llama-3.2-90B-Vision-Instruct | | | 1069 | | | | Meta | Llama 3.2 |
| Aya-Vision-8B | | | 1069 | | | | Cohere | CC-BY-NC-4.0 |
| InternVL2-26B | | | 1067 | | | | OpenGVLab | MIT |
| Hunyuan-Standard-Vision-2024-12-31 | | | 1066 | | | | Tencent | Proprietary |
| Aya-Vision-32B | | | 1057 | | | | Cohere | CC-BY-NC-4.0 |
| Qwen2-VL-7B-Instruct | | | 1054 | | | | Alibaba | Apache 2.0 |
| Yi-Vision | | | 1045 | | | | 01 AI | Proprietary |
| Llama-3.2-11B-Vision-Instruct | | | 1032 | | | | Meta | Llama 3.2 |

If you want to see more models, please help us add them.

💻 Code: The Arena Elo ratings are computed by this notebook. The MT-bench scores (single-answer grading on a scale of 10) are computed by fastchat.llm_judge. The MMLU scores are computed by InstructEval. Higher values are better for all benchmarks. Empty cells mean the value is not available. The latest and most detailed leaderboard is here.

More Statistics for Chatbot Arena

🔗 Arena Statistics

Transition from online Elo rating system to Bradley-Terry model

We have used the Elo rating system to rank models since the launch of the Arena. It has been useful for transforming pairwise human preferences into Elo ratings that serve as a predictor of the win rate between models. Specifically, if player A has a rating of $R_A$ and player B a rating of $R_B$, the probability of player A winning is

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}.$$
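As a minimal Python sketch of this formula (the function name is ours, for illustration; it is not part of the Arena codebase):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Win probability of player A over player B implied by their Elo ratings."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# A 100-point rating gap implies roughly a 64% win rate
# for the higher-rated model:
print(elo_expected_score(1300, 1200))  # ~0.64
```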

The Elo rating system has been used by the international chess community to rank players for over 60 years. Standard Elo rating systems assume a player's performance changes over time, so an online algorithm is needed to capture such dynamics: recent games should weigh more than older ones. Specifically, after each game, a player's rating is updated according to the difference between the predicted outcome and the actual outcome.

$$R_A' = R_A + K \cdot (S_A - E_A).$$
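A minimal sketch of this update rule, again in illustrative Python (the default K here is a common chess choice, not the Arena's setting):

```python
def elo_update(r_a: float, r_b: float, s_a: float, k: float = 32.0) -> float:
    """One online Elo update for player A after a game against player B.

    s_a is the actual outcome for A: 1 for a win, 0 for a loss, 0.5 for a tie.
    k is the K-factor discussed below; 32 is an illustrative default.
    """
    e_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))  # expected score for A
    return r_a + k * (s_a - e_a)

# An underdog (1200) beating a favorite (1300) gains about 0.64 * K points:
print(elo_update(1200, 1300, 1.0))  # ~1220.5
```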

This algorithm has two distinct features:

  1. It can be computed asynchronously by players around the world.
  2. It allows a player's performance to change dynamically; it does not assume a fixed, unknown value for the player's rating.

This ability to adapt is governed by the parameter $K$, which controls the magnitude of rating changes. A larger $K$ essentially puts more weight on recent games, which may make sense for new players whose performance improves quickly. However, as players become more senior and their performance "converges", a smaller value of $K$ is more appropriate. As a result, the USCF adopted a $K$ based on the number of games and tournaments completed by the player (reference). That is, the Elo rating of a senior player changes more slowly than that of a new player.

When we launched the Arena, we noticed considerable variability in the ratings produced by the classic online algorithm. We tried to tune $K$ so that ratings were sufficiently stable while still allowing new models to move up the leaderboard quickly. We ultimately decided to adopt a bootstrap-like technique: shuffle the data and sample Elo scores from 1000 permutations of the online plays. You can find the details in this notebook. This provided consistent, stable scores and allowed us to incorporate new models quickly. The same effect was also observed in recent work by Cohere. However, we used the same samples to estimate confidence intervals, which were therefore too wide (effectively CIs for the original online Elo estimates).
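The following is a minimal sketch of that bootstrap, assuming a simple battle-log format (a plain-Python illustration on our part; the linked notebook is the authoritative implementation and differs in details such as the K-factor, tie handling, and the aggregation used):

```python
import random
from statistics import median

def bootstrap_online_elo(battles, num_permutations=1000, k=4.0, init=1000.0):
    """Replay the battle log in random orders and aggregate the resulting
    Elo scores across permutations.

    battles: list of (model_a, model_b, s_a) tuples, with s_a in {1, 0, 0.5}.
    Returns the median final rating per model over all permutations.
    """
    samples = {}  # model -> list of final ratings, one per permutation
    for _ in range(num_permutations):
        ratings = {}
        order = list(battles)
        random.shuffle(order)  # game order is randomized in each replay
        for a, b, s_a in order:
            r_a = ratings.get(a, init)
            r_b = ratings.get(b, init)
            e_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
            ratings[a] = r_a + k * (s_a - e_a)
            ratings[b] = r_b + k * (e_a - s_a)  # symmetric update for B
        for model, r in ratings.items():
            samples.setdefault(model, []).append(r)
    return {model: median(rs) for model, rs in samples.items()}
```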

In the context of LLM ranking, there are two important differences from the classic Elo chess ranking system. First, we have access to the entire history of all games for all models and so we don’t need a decentralized algorithm. Second, most models are static (we have access to the weights) and so we don’t expect their performance to change. However, it is worth noting that the hosted proprietary models may not be static and their behavior can change without notice. We try our best to pin specific model API versions if possible.

To improve the quality of our rankings and their confidence estimates, we are adopting another widely used rating system called the Bradley–Terry (BT) model. This model is in fact the maximum likelihood estimate (MLE) of the underlying Elo model, assuming a fixed but unknown pairwise win rate. Like the Elo rating, the BT model derives player ratings from pairwise comparisons in order to estimate win rates between players. The core differences between the BT model and the online Elo system are that the BT model assumes a player's performance does not change (i.e., game order does not matter) and that the computation takes place in a centralized fashion.
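As a rough sketch of what such a centralized, order-independent computation looks like, here is the classic MM (Zermelo) iteration for BT strengths. This is our illustration under simplified assumptions (ties omitted, every model having at least one win); the Arena's actual pipeline fits the BT model with its own estimator and handles ties and confidence intervals separately.

```python
from collections import defaultdict

def fit_bradley_terry(battles, iters=200):
    """Fit Bradley-Terry strengths with the MM (Zermelo) iteration.

    battles: list of (winner, loser) pairs; ties are omitted for brevity.
    Returns strengths p such that P(i beats j) = p[i] / (p[i] + p[j]).
    Note that the order of the battles never enters the computation.
    """
    wins = defaultdict(float)   # total wins per model
    games = defaultdict(float)  # games played per unordered pair
    models = set()
    for w, l in battles:
        wins[w] += 1.0
        games[frozenset((w, l))] += 1.0
        models.update((w, l))
    p = {m: 1.0 for m in models}
    for _ in range(iters):
        new_p = {}
        for i in models:
            denom = 0.0
            for pair, n in games.items():
                if i in pair:
                    (j,) = pair - {i}  # the opponent in this pair
                    denom += n / (p[i] + p[j])
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())
        p = {m: v * len(models) / total for m, v in new_p.items()}  # fix scale
    return p

# Example: A beats B more often than not, and both beat C.
battles = [("A", "B"), ("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"), ("A", "C")]
print(fit_bradley_terry(battles))
```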