Unknown Entity

unknownentity

AI & ML interests

None yet

Recent Activity

reacted to SeaWolf-AI's post with ❤️ about 22 hours ago
🧬 Introducing Darwin-9B-NEG: the first model with Native Entropy Gating (NEG)

🔗 Try it now: https://huggingface.co/FINAL-Bench/Darwin-9B-NEG
🔗 Q4 (4-bit) quant: https://huggingface.co/FINAL-Bench/Darwin-9B-MFP4

We're thrilled to release Darwin-9B-NEG, a 9B-parameter reasoning model that embeds an architecturally internalised sense of self-confidence directly into the transformer via our proprietary Native Entropy Gating (NEG) technology.

📊 GPQA Diamond (198 PhD-level questions):
▸ Baseline Darwin-9B (no NEG) → 51.01 %
▸ Pure NEG (greedy · 1× cost) → 63.64 % (🔄 +12.63 %p)
▸ + Permutation (4× cost) → 76.26 %
▸ + Ensemble Refinement (~20× cost) → 84.34 %

🏆 With only 9 billion parameters at 1× inference cost, Pure NEG gains +12.63 percentage points over the same model without NEG. Going all-in with ensemble refinement pushes it to 84.34 %, surpassing the published Qwen3.5-9B leaderboard score (81.7 %) by +2.64 %p.

🔬 What makes NEG different from Multi-Turn Iteration (MTI)? Classical MTI needs 3-8× extra inference passes. NEG instead lives *inside* the single decoding loop. Two tiny modules ride with the transformer: NEG-Head predicts per-token entropy from the last hidden state, and NEG-Gate conditionally restricts the top-k choice when confidence is low (see the sketch after this post). The gate activates on only 4.36 % of tokens, so it is essentially free at inference time.

✨ Key differentiators
• Architecturally internalised: the model file *is* the feature
• 1× inference cost (vs. 3-8× for MTI)
• Drop-in with vLLM / SGLang / TGI / transformers, no extra engine needed
• +12.63 %p on reasoning at zero latency overhead
• Single-file deployment, Apache 2.0 licensed

🧬 Lineage: Qwen/Qwen3.5-9B → Darwin-9B-Opus (V7 evolutionary merge) → Darwin-9B-NEG (V8 + NEG training)

#Darwin #NEG #NativeEntropyGating #GPQA #Reasoning #LLM #OpenSource #Apache2
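The mechanism the post describes (an entropy-predicting head plus a gate that narrows top-k under low confidence) maps onto a small amount of code. Below is a minimal, hypothetical sketch of one entropy-gated decoding step; the names NEGHead and neg_gate, the hidden/vocab sizes, the 0.5 entropy threshold, and the top-5 restriction are all illustrative assumptions, not FINAL-Bench's released implementation.

```python
# Hypothetical sketch of entropy-gated decoding in the spirit of NEG.
# All names (NEGHead, neg_gate), sizes, and thresholds are assumptions,
# not FINAL-Bench's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NEGHead(nn.Module):
    """Tiny head that predicts per-token entropy from the last hidden state."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the predicted entropy non-negative.
        return F.softplus(self.proj(hidden)).squeeze(-1)

def neg_gate(logits: torch.Tensor, predicted_entropy: float,
             threshold: float = 0.5, restricted_k: int = 5) -> torch.Tensor:
    """If predicted entropy exceeds the threshold (low confidence),
    mask everything outside the top-k before sampling; otherwise
    pass the logits through untouched."""
    if predicted_entropy <= threshold:
        return logits  # confident: gate stays open
    topk = torch.topk(logits, restricted_k, dim=-1)
    gated = torch.full_like(logits, float("-inf"))
    gated.scatter_(-1, topk.indices, topk.values)
    return gated

# One step of the single decoding loop (batch size 1).
hidden = torch.randn(1, 4096)    # stand-in for the transformer's last hidden state
logits = torch.randn(1, 32000)   # stand-in for the LM head's next-token logits
entropy = NEGHead(hidden_size=4096)(hidden).item()
probs = F.softmax(neg_gate(logits, entropy), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```

One caveat worth noting: masking everything outside the top-k leaves a pure greedy argmax unchanged, so a gate like this only alters sampled decoding; the sketch therefore samples from the gated distribution rather than taking the argmax.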
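The post also claims drop-in compatibility with transformers. If that holds, loading should look like any standard causal LM; whether the NEG modules ship as custom code requiring trust_remote_code is an assumption here.

```python
# Standard transformers loading for the repo named in the post.
# trust_remote_code=True is an assumption in case the NEG modules
# ship as custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FINAL-Bench/Darwin-9B-NEG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("What is the entropy of a fair coin flip?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```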
liked a model 1 day ago
deepseek-ai/DeepSeek-V4-Pro

Organizations

None yet