DeepSeek releases 671-billion-parameter open-source model focused on mathematical theorem proving
- 2025-05-10

PANews reported on April 30 that, according to community posts and its Hugging Face page, DeepSeek has open-sourced a new model, DeepSeek-Prover-V2-671B, focused on mathematical theorem proving. The model is built on a mixture-of-experts (MoE) architecture and is trained for formal reasoning in the Lean 4 framework. With 671B parameters, it combines reinforcement learning with large-scale synthetic data to substantially improve automated proof capability. The model is available on Hugging Face and supports local deployment and commercial use.
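For readers unfamiliar with Lean 4, the framework lets mathematical statements be written so that a proof can be machine-checked; a prover model like DeepSeek-Prover-V2 generates such proofs. Below is a minimal illustrative example (not taken from the model or its training data) of the kind of statement and proof involved:

```lean
-- A trivial Lean 4 theorem: addition of natural numbers is commutative.
-- `Nat.add_comm` is a lemma from Lean's core library; the `exact` tactic
-- closes the goal with it. A prover model's task is to generate proof
-- scripts like this one automatically from the theorem statement.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Lean checks the proof mechanically, which is what makes automatically generated proofs verifiable rather than merely plausible.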