DeepSeek: DeepSeek V3 Base
Note that this is a base model intended mostly for testing; you need to provide detailed prompts for the model to return useful responses. DeepSeek-V3 Base is a 671B-parameter open Mixture-of-Experts (MoE) language model with 37B active parameters per forward pass and a context length of 128K tokens. Trained on 14.8T tokens using FP8 mixed precision, it achieves high training efficiency and stability, with strong performance across language, reasoning, math, and coding tasks. DeepSeek-V3 Base is the pre-trained model behind [DeepSeek V3](/deepseek/deepseek-chat-v3).
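Because a base model has no instruction tuning, completion-style prompting (a few-shot pattern or a detailed lead-in for the model to continue) tends to work better than chat-style requests. Below is a minimal sketch assuming an OpenAI-compatible completions endpoint and the model slug `deepseek/deepseek-v3-base`; the base URL and slug are assumptions, not confirmed by this page.

```python
# Minimal sketch: prompting a base (non-instruct) model through an
# OpenAI-compatible completions endpoint. The base_url and model slug
# are assumptions, not confirmed by this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

# Base models continue text rather than follow instructions, so supply
# a detailed pattern for the model to complete.
prompt = (
    "Translate English to French.\n"
    "English: The weather is nice today.\n"
    "French: Il fait beau aujourd'hui.\n"
    "English: Where is the nearest train station?\n"
    "French:"
)

completion = client.completions.create(
    model="deepseek/deepseek-v3-base",  # assumed slug
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,
    stop=["\n"],  # cut off after the completed answer line
)
print(completion.choices[0].text.strip())
```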
Parameters: 671B total (37B active)
Context Window: 163,840 tokens
Input Price: $0.199919 per 1M tokens
Output Price: $0.800064 per 1M tokens
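Cost scales linearly with token counts at the per-1M-token rates above. A minimal sketch of the arithmetic, with the prices hard-coded from this page (actual billing by the provider may differ):

```python
# Cost arithmetic using the per-1M-token rates listed above;
# actual provider billing may differ.
INPUT_PRICE_PER_M = 0.199919   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.800064  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.6f}")  # ~$0.000800
```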
Capabilities
Model capabilities and supported modalities:

Performance:
- Excellent reasoning capabilities with strong logical analysis
- Strong mathematical capabilities; handles complex calculations well
Modalities:
- Input: text
- Output: text