PANews reported on April 30 that, according to community posts and its Hugging Face page, DeepSeek has open-sourced a new model, DeepSeek-Prover-V2-671B, focused on mathematical theorem proving. The model uses a mixture-of-experts (MoE) architecture with 671B parameters and is trained for formal reasoning in the Lean 4 framework, combining reinforcement learning with large-scale synthetic data to substantially improve automated proof capability. The model is available on Hugging Face and supports local deployment and commercial use.
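For readers unfamiliar with Lean 4, the framework mentioned above: a prover model of this kind is given a formal statement and must generate a machine-checkable proof. A minimal illustrative example (the theorem and proof below are our own sketch, not output from DeepSeek-Prover-V2):

```lean
-- A formal statement of commutativity of natural-number addition.
-- A theorem-proving model receives the statement up to `:= by`
-- and must produce the proof script that Lean then verifies.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Because Lean mechanically checks every proof, a model's output can be verified automatically, which is what makes large-scale reinforcement learning on synthetic proof data feasible.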
- 2025-05-11: DeepSeek releases 671-billion-parameter open-source model focused on mathematical theorem proving
- 2025-05-11: Futures liquidations across the network totaled $472 million in the past 24 hours, with both long and short positions liquidated
- 2025-05-11: Analysis: If BTC closes above $104,500 this week, it may begin a breakout
- 2025-05-11: Data: APT, ARB, AVAX and other tokens face large unlocks next week, with APT unlocking about $67.5 million
- 2025-05-11: Lido DAO launches emergency proposal to replace an oracle node suspected of a leaked private key
- 2025-05-11: Binance Alpha's trading volume reached $428.3 million yesterday, a new record high
- 2025-05-11: Ethereum's market capitalization surpasses Coca-Cola's, rising to 40th in the global asset ranking