DeepSeek API Launches V4-Pro and V4-Flash Simultaneously

Golden Finance reports that on April 24, DeepSeek announced that V4-Pro and V4-Flash are now simultaneously available through the DeepSeek API, supporting both the OpenAI ChatCompletions interface and the Anthropic interface. To access the new models, the base_url remains unchanged; only the model parameter needs to be changed to deepseek-v4-pro or deepseek-v4-flash. Both V4-Pro and V4-Flash have a maximum context length of 1M and support a non-thinking mode and a thinking mode. In thinking mode, the reasoning_effort parameter sets the thinking intensity (high or max). For complex agent scenarios, DeepSeek recommends using thinking mode with the intensity set to max. (Dongxin News Agency)
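Based on the details above, a request to the new models through the OpenAI-compatible ChatCompletions interface might look like the following sketch. The model names and the high/max intensity values come from the announcement; the base_url value and the exact placement of the reasoning_effort field in the request body are assumptions here.

```python
# Sketch: assembling a ChatCompletions-style request body for the
# new models. BASE_URL and the reasoning_effort field placement are
# assumptions; model names and "high"/"max" come from the announcement.
import json

BASE_URL = "https://api.deepseek.com"  # unchanged per the announcement (assumed value)

def build_request(prompt: str, model: str = "deepseek-v4-pro",
                  thinking: bool = True, effort: str = "max") -> dict:
    """Build an OpenAI-ChatCompletions-compatible request body."""
    body = {
        "model": model,  # "deepseek-v4-pro" or "deepseek-v4-flash"
        "messages": [{"role": "user", "content": prompt}],
    }
    if thinking:
        # Thinking intensity: "high" or "max"; "max" is recommended
        # for complex agent scenarios.
        body["reasoning_effort"] = effort
    return body

req = build_request("Summarize this document", model="deepseek-v4-flash")
print(json.dumps(req))
```

Because the base_url does not change, existing OpenAI-SDK integrations would only need the model (and, in thinking mode, reasoning_effort) updated.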
