Update README.md
README.md CHANGED
@@ -19,6 +19,17 @@ library_name: transformers
 Contact us on <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
 </p>
 
+> [!NOTE]
+> ### 2026 Sparse Operator Acceleration & Race (SOAR) is Now Live!
+>
+> **"The MiniCPM-SALA architecture is just the beginning. Realizing its full potential requires deep system-level synergy and cross-layer compilation optimization."**
+>
+> In collaboration with **SGLang** and **NVIDIA**, OpenBMB invites global geeks to push the boundaries of 9B-scale, 1M-token inference on **NVIDIA 6000D**.
+>
+> **Prize Pool: >$100,000 USD** (Top Prize: **$89,000**) | **Challenge:** Single & Multi-batch Optimization
+>
+> **[Click Here to Join the Race @ soar.openbmb.cn](https://soar.openbmb.cn/)**
+
 ## What's New
 - [2026.02.11] **MiniCPM-SALA** is released! It is the first large-scale hybrid model to effectively integrate sparse and linear attention for million-token context modeling. You can find the technical report [here](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf).
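Since the model card's front matter declares `library_name: transformers`, loading the released checkpoint should follow the standard Transformers pattern. The sketch below is only illustrative: the repository id `openbmb/MiniCPM-SALA` is a hypothetical placeholder, and `trust_remote_code=True` is assumed by analogy with earlier MiniCPM releases rather than stated in this change.

```python
# Minimal loading sketch. Assumptions: the repo id "openbmb/MiniCPM-SALA" is
# hypothetical, and trust_remote_code=True mirrors earlier MiniCPM releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openbmb/MiniCPM-SALA"  # hypothetical Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps a ~9B model within a single GPU's memory
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain, in two sentences, why hybrid sparse and linear attention helps long contexts."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```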