DennisHuang648 committed
Commit 42afc22 · verified · 1 Parent(s): 85b2dbe

Update README.md

Files changed (1)
  1. README.md (+11 -0)
README.md CHANGED
@@ -19,6 +19,17 @@ library_name: transformers
  👋 Contact us in <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
  </p>
 
+ > [!NOTE]
+ > ### 🏆 2026 Sparse Operator Acceleration & Race (SOAR) is Now Live!
+ >
+ > **"The MiniCPM-SALA architecture is just the beginning. Realizing its full potential requires deep system-level synergy and cross-layer compilation optimization."**
+ >
+ > In collaboration with **SGLang** and **NVIDIA**, OpenBMB invites global geeks to push the boundaries of 9B-scale, 1M-token inference on **NVIDIA 6000D**.
+ >
+ > 💰 **Prize Pool: >$100,000 USD** (🥇 Top Prize: **$89,000**) | 🚀 **Challenge:** Single & Multi-batch Optimization
+ >
+ > 👉 **[Click Here to Join the Race @ soar.openbmb.cn](https://soar.openbmb.cn/)**
+
  ## What's New
  - [2026.02.11] **MiniCPM-SALA** is released! This is the first large-scale hybrid model to effectively integrate sparse and linear attention for million-token context modeling. The technical report is available [here](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf). 🔥🔥🔥
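For context on the architecture the diff references: a sparse/linear hybrid means some attention layers use a restricted (sparse) attention pattern while others use a kernelized, linear-time approximation, so no layer pays the full O(n²) cost at million-token lengths. The sketch below is a minimal PyTorch illustration of that idea only, not MiniCPM-SALA's actual implementation: the sliding-window pattern, the elu+1 feature map, the alternating layer mix, and all function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """O(n) kernelized attention (non-causal, for brevity).

    Assumes the phi(x) = elu(x) + 1 feature map from Katharopoulos et al.
    (2020); MiniCPM-SALA's actual linear-attention variant may differ.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum("nd,ne->de", k, v)      # fixed-size (d, d) key/value summary
    z = k.sum(dim=0)                          # normalizer accumulated over keys
    return torch.einsum("nd,de->ne", q, kv) / (q @ z).unsqueeze(-1)

def sliding_window_attention(q, k, v, window=8):
    """Sparse attention: each query attends only to the last `window` positions."""
    n, d = q.shape
    scores = (q @ k.T) / d ** 0.5
    offset = torch.arange(n)[:, None] - torch.arange(n)[None, :]
    mask = (offset >= 0) & (offset < window)  # causal, banded attention pattern
    scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v

# Toy "hybrid" stack: alternate sparse and linear layers. The real model's
# layer mix, head layout, and any gating are not reproduced here.
x = torch.randn(1024, 64)                     # (seq_len, head_dim) toy activations
for i in range(4):
    attn = sliding_window_attention if i % 2 == 0 else linear_attention
    x = x + attn(x, x, x)                     # residual connection around attention
print(x.shape)                                # torch.Size([1024, 64])
```

In this toy setup the sliding-window layers cost O(n·window) and the linear layers O(n·d²), so neither term grows quadratically with sequence length, which is the property the hybrid design is after.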