DeepSeek-V2


We introduce DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token. Compared with DeepSeek 67B, DeepSeek-V2 achieves stronger performance while saving 42.5% of training costs, reducing the KV cache by 93.3%, and boosting the maximum generation throughput to more than 5 times that of DeepSeek 67B.


Model Download | Evaluation Results | Model Architecture | API Platform | License | Citation

1. Introduction

We pretrained DeepSeek-V2 on a diverse and high-quality corpus comprising 8.1 trillion tokens. This comprehensive pretraining was followed by a process of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unleash the model’s capabilities. The evaluation results validate the effectiveness of our approach as DeepSeek-V2 achieves remarkable performance on both standard benchmarks and open-ended generation evaluation.

2. News

  • 2024.05.16: We released DeepSeek-V2-Lite.
  • 2024.05.06: We released DeepSeek-V2.

3. Model Downloads

| Model | #Total Params | #Activated Params | Context Length | Download |
| :---- | :----: | :----: | :----: | :----: |
| DeepSeek-V2-Lite | 16B | 2.4B | 32k | 🤗 HuggingFace |
| DeepSeek-V2-Lite-Chat (SFT) | 16B | 2.4B | 32k | 🤗 HuggingFace |
| DeepSeek-V2 | 236B | 21B | 128k | 🤗 HuggingFace |
| DeepSeek-V2-Chat (RL) | 236B | 21B | 128k | 🤗 HuggingFace |

Due to the constraints of Hugging Face Transformers, the open-source code currently runs slower on GPUs than our internal codebase. To facilitate efficient execution of our model, we offer a dedicated vLLM solution that optimizes inference performance.

4. Evaluation Results

Base Model

Standard Benchmark (Models larger than 67B)

| Benchmark | Domain | LLaMA3 70B | Mixtral 8x22B | DeepSeek-V1 (Dense-67B) | DeepSeek-V2 (MoE-236B) |
| :---- | :----: | :----: | :----: | :----: | :----: |
| MMLU | English | 78.9 | 77.6 | 71.3 | 78.5 |
| BBH | English | 81.0 | 78.9 | 68.7 | 78.9 |
| C-Eval | Chinese | 67.5 | 58.6 | 66.1 | 81.7 |
| CMMLU | Chinese | 69.3 | 60.0 | 70.8 | 84.0 |
| HumanEval | Code | 48.2 | 53.1 | 45.1 | 48.8 |
| MBPP | Code | 68.6 | 64.2 | 57.4 | 66.6 |
| GSM8K | Math | 83.0 | 80.3 | 63.4 | 79.2 |
| Math | Math | 42.2 | 42.5 | 18.7 | 43.6 |

Standard Benchmark (Models smaller than 16B)

| Benchmark | Domain | DeepSeek 7B (Dense) | DeepSeekMoE 16B | DeepSeek-V2-Lite (MoE-16B) |
| :---- | :----: | :----: | :----: | :----: |
| Architecture | - | MHA+Dense | MHA+MoE | MLA+MoE |
| MMLU | English | 48.2 | 45.0 | 58.3 |
| BBH | English | 39.5 | 38.9 | 44.1 |
| C-Eval | Chinese | 45.0 | 40.6 | 60.3 |
| CMMLU | Chinese | 47.2 | 42.5 | 64.3 |
| HumanEval | Code | 26.2 | 26.8 | 29.9 |
| MBPP | Code | 39.0 | 39.2 | 43.2 |
| GSM8K | Math | 17.4 | 18.8 | 41.1 |
| Math | Math | 3.3 | 4.3 | 17.1 |

For more evaluation details, such as few-shot settings and prompts, please check our paper.

Context Window

Evaluation results on the Needle In A Haystack (NIAH) tests. DeepSeek-V2 performs well across all context window lengths up to 128K.

Chat Model

Standard Benchmark (Models larger than 67B)

| Benchmark | Domain | QWen1.5 72B Chat | Mixtral 8x22B | LLaMA3 70B Instruct | DeepSeek-V1 Chat (SFT) | DeepSeek-V2 Chat (SFT) | DeepSeek-V2 Chat (RL) |
| :---- | :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| MMLU | English | 76.2 | 77.8 | 80.3 | 71.1 | 78.4 | 77.8 |
| BBH | English | 65.9 | 78.4 | 80.1 | 71.7 | 81.3 | 79.7 |
| C-Eval | Chinese | 82.2 | 60.0 | 67.9 | 65.2 | 80.9 | 78.0 |
| CMMLU | Chinese | 82.9 | 61.0 | 70.7 | 67.8 | 82.4 | 81.6 |
| HumanEval | Code | 68.9 | 75.0 | 76.2 | 73.8 | 76.8 | 81.1 |
| MBPP | Code | 52.2 | 64.4 | 69.8 | 61.4 | 70.4 | 72.0 |
| LiveCodeBench (0901-0401) | Code | 18.8 | 25.0 | 30.5 | 18.3 | 28.7 | 32.5 |
| GSM8K | Math | 81.9 | 87.9 | 93.2 | 84.1 | 90.8 | 92.2 |
| Math | Math | 40.6 | 49.8 | 48.5 | 32.6 | 52.7 | 53.9 |

Standard Benchmark (Models smaller than 16B)

| Benchmark | Domain | DeepSeek 7B Chat (SFT) | DeepSeekMoE 16B Chat (SFT) | DeepSeek-V2-Lite 16B Chat (SFT) |
| :---- | :----: | :----: | :----: | :----: |
| MMLU | English | 49.7 | 47.2 | 55.7 |
| BBH | English | 43.1 | 42.2 | 48.1 |
| C-Eval | Chinese | 44.7 | 40.0 | 60.1 |
| CMMLU | Chinese | 51.2 | 49.3 | 62.5 |
| HumanEval | Code | 45.1 | 45.7 | 57.3 |
| MBPP | Code | 39.0 | 46.2 | 45.8 |
| GSM8K | Math | 62.6 | 62.2 | 72.0 |
| Math | Math | 14.7 | 15.2 | 27.9 |

English Open Ended Generation Evaluation

We evaluate our model on AlpacaEval 2.0 and MTBench, showing the competitive performance of DeepSeek-V2-Chat-RL on English conversation generation.

Chinese Open Ended Generation Evaluation

AlignBench (https://arxiv.org/abs/2311.18743)

| Model | Open/Closed Source | Overall | Chinese Reasoning | Chinese Language |
| :---- | :----: | :----: | :----: | :----: |
| gpt-4-1106-preview | Closed | 8.01 | 7.73 | 8.29 |
| DeepSeek-V2 Chat (RL) | Open | 7.91 | 7.45 | 8.36 |
| erniebot-4.0-202404 (文心一言) | Closed | 7.89 | 7.61 | 8.17 |
| DeepSeek-V2 Chat (SFT) | Open | 7.74 | 7.30 | 8.17 |
| gpt-4-0613 | Closed | 7.53 | 7.47 | 7.59 |
| erniebot-4.0-202312 (文心一言) | Closed | 7.36 | 6.84 | 7.88 |
| moonshot-v1-32k-202404 (月之暗面) | Closed | 7.22 | 6.42 | 8.02 |
| Qwen1.5-72B-Chat (通义千问) | Open | 7.19 | 6.45 | 7.93 |
| DeepSeek-67B-Chat | Open | 6.43 | 5.75 | 7.11 |
| Yi-34B-Chat (零一万物) | Open | 6.12 | 4.86 | 7.38 |
| gpt-3.5-turbo-0613 | Closed | 6.08 | 5.35 | 6.71 |
| DeepSeek-V2-Lite 16B Chat | Open | 6.01 | 4.71 | 7.32 |

Coding Benchmarks

We evaluate our model on LiveCodeBench (0901-0401), a benchmark designed for live coding challenges. As illustrated, DeepSeek-V2 demonstrates considerable proficiency in LiveCodeBench, achieving a Pass@1 score that surpasses several other sophisticated models. This performance highlights the model’s effectiveness in tackling live coding tasks.

5. Model Architecture

DeepSeek-V2 adopts innovative architectures to guarantee economical training and efficient inference:

  • For attention, we design MLA (Multi-head Latent Attention), which utilizes low-rank key-value joint compression to eliminate the bottleneck of the inference-time key-value cache, thereby supporting efficient inference (a minimal illustrative sketch follows this list).
  • For Feed-Forward Networks (FFNs), we adopt DeepSeekMoE architecture, a high-performance MoE architecture that enables training stronger models at lower costs.
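
The key point of MLA is that only a small per-token latent vector has to be cached; per-head keys and values are reconstructed from it by up-projections. Below is a minimal, illustrative sketch of this low-rank joint compression, not the actual DeepSeek-V2 module: the dimensions are hypothetical, and details such as the decoupled RoPE keys and causal masking are omitted (see the paper for the real formulation).

import torch
import torch.nn as nn

class SimplifiedMLA(nn.Module):
    """Toy illustration of low-rank key-value joint compression (hypothetical dimensions)."""
    def __init__(self, d_model=1024, n_heads=8, d_head=128, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.w_q = nn.Linear(d_model, n_heads * d_head)
        # Down-projection: this small latent is the only per-token state that gets cached
        self.w_dkv = nn.Linear(d_model, d_latent)
        # Up-projections reconstruct per-head keys and values from the cached latent
        self.w_uk = nn.Linear(d_latent, n_heads * d_head)
        self.w_uv = nn.Linear(d_latent, n_heads * d_head)
        self.w_o = nn.Linear(n_heads * d_head, d_model)

    def forward(self, h, latent_cache=None):
        # h: (batch, new_tokens, d_model)
        b, t, _ = h.shape
        c_kv = self.w_dkv(h)                                     # (b, t, d_latent) -> KV cache entry
        if latent_cache is not None:                             # append to previously cached latents
            c_kv = torch.cat([latent_cache, c_kv], dim=1)
        q = self.w_q(h).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.w_uk(c_kv).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.w_uv(c_kv).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.w_o(out), c_kv                               # c_kv is carried forward as the cache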

6. Chat Website

You can chat with DeepSeek-V2 on DeepSeek’s official website: chat.deepseek.com

7. API Platform

We also provide an OpenAI-compatible API at the DeepSeek Platform: platform.deepseek.com. Sign up to get millions of free tokens, or use pay-as-you-go at an unbeatable price.
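
Because the endpoint is OpenAI-compatible, the official openai Python SDK can be pointed at it directly. A minimal sketch (the base URL and model name follow the LangChain example later in this README; substitute your own API key):

import openai

# Minimal, illustrative call against the OpenAI-compatible DeepSeek endpoint.
client = openai.OpenAI(base_url="https://api.deepseek.com/v1", api_key="<your-deepseek-api-key>")
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)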

8. How to run locally

To utilize DeepSeek-V2 in BF16 format for inference, 80GB*8 GPUs are required.
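
This requirement follows from a rough back-of-the-envelope estimate of the weight memory alone (BF16 uses 2 bytes per parameter; activations and the KV cache add further overhead):

# Rough estimate of BF16 weight memory for the 236B-parameter model (weights only).
total_params = 236e9
bytes_per_param = 2            # BF16
weight_gib = total_params * bytes_per_param / 1024**3
print(f"~{weight_gib:.0f} GiB of weights, spread across 8 x 80 GB GPUs")   # ~440 GiB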

Inference with Huggingface’s Transformers

You can directly employ Huggingface’s Transformers for model inference.

Text Completion

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# `max_memory` should be set based on your devices
max_memory = {i: "75GB" for i in range(8)}
# `device_map` cannot be set to `auto`
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)

Chat Completion

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# `max_memory` should be set based on your devices
max_memory = {i: "75GB" for i in range(8)}
# `device_map` cannot be set to `auto`
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Write a piece of quicksort code in C++"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)

The complete chat template can be found in tokenizer_config.json, located in the Hugging Face model repository.

An example of the chat template is shown below:

<|begin▁of▁sentence|>User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:

You can also add an optional system message:

<|begin▁of▁sentence|>{system_message}

User: {user_message_1}

Assistant: {assistant_message_1}<|end▁of▁sentence|>User: {user_message_2}

Assistant:
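
To check the exact prompt string that a list of messages produces, the tokenizer can render the template without tokenizing. A minimal sketch (assuming the DeepSeek-V2-Chat tokenizer loaded as in the example above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V2-Chat", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# `tokenize=False` returns the formatted prompt string instead of token ids,
# so you can inspect how the special tokens and role tags are laid out.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))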

Inference with SGLang

SGLang currently supports MLA optimizations, FP8 (W8A8), FP8 KV cache, and Torch Compile, offering the best latency and throughput among open-source frameworks. Here are some example commands to launch an OpenAI-API-compatible server:

# BF16, tensor parallelism = 8
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V2-Chat --tp 8 --trust-remote-code

# BF16, w/ torch.compile (The compilation can take several minutes)
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V2-Lite-Chat --trust-remote-code --enable-torch-compile

# FP8, tensor parallelism = 8, FP8 KV cache
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V2-Chat --tp 8 --trust-remote-code --quant fp8 --kv-cache-dtype fp8_e5m2

After launching the server, you can query it with the OpenAI API:

import openai
client = openai.Client(
    base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

# Chat completion
response = client.chat.completions.create(
    model="default",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)
print(response)

Inference with vLLM

To utilize vLLM for model inference, please merge this Pull Request into your vLLM codebase: https://github.com/vllm-project/vllm/pull/4650.

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 8192, 8
model_name = "deepseek-ai/DeepSeek-V2-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True, enforce_eager=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you?"}],
    [{"role": "user", "content": "Translate the following content into Chinese directly: DeepSeek-V2 adopts innovative architectures to guarantee economical training and efficient inference."}],
    [{"role": "user", "content": "Write a piece of quicksort code in C++."}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)

LangChain Support

Since our API is compatible with OpenAI’s, you can easily use it in LangChain. Here is an example:

from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key='<your-deepseek-api-key>',  # replace with your actual API key
    openai_api_base='https://api.deepseek.com/v1',
    temperature=0.85,
    max_tokens=8000)
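
The resulting client can then be used like any other LangChain chat model; for example, an illustrative invocation of the object constructed above:

# Invoke the chat model once and print the reply text.
response = llm.invoke("Briefly explain what a Mixture-of-Experts model is.")
print(response.content)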

9. License

This code repository is licensed under the MIT License. The use of DeepSeek-V2 Base/Chat models is subject to the Model License. DeepSeek-V2 series (including Base and Chat) supports commercial use.

10. Citation

@misc{deepseekv2,
      title={DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model}, 
      author={DeepSeek-AI},
      year={2024},
      eprint={2405.04434},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

11. Contact

If you have any questions, please raise an issue or contact us at service@deepseek.com.