Yes, absolutely.
Even smaller language models (80B, 48B, 36B, or 20B parameters) can show metacognitive ability, usually in a weaker form, and FINAL BENCH can still measure it reliably.
Typical pattern for SLMs:
- MA (metacognitive awareness): they can often express uncertainty or notice they might be wrong.
- ER (error revision): actually revising and improving the answer is harder.
So with FINAL BENCH, you can quantify:
1. whether the model shows metacognitive signals at all,
2. how strong those signals are,
3. whether it only says "I might be wrong" but fails to fix the answer (high MA, low ER),
4. or whether it can genuinely self-correct (ER improves, especially with scaffolding).
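Assuming FINAL BENCH yields per-trial records of whether the model flagged uncertainty and whether its revision fixed the answer, the MA/ER split above could be scored roughly as follows. This is a hypothetical sketch: the `Trial` structure, field names, and scoring functions are illustrative assumptions, not the benchmark's actual API.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One hypothetical FINAL BENCH-style trial record."""
    flagged_uncertainty: bool  # did the model say "I might be wrong"?
    initial_correct: bool      # was the first answer correct?
    revised_correct: bool      # was the answer correct after revision?

def ma_score(trials):
    """MA: fraction of incorrect first answers the model flagged."""
    wrong = [t for t in trials if not t.initial_correct]
    if not wrong:
        return 0.0
    return sum(t.flagged_uncertainty for t in wrong) / len(wrong)

def er_score(trials):
    """ER: fraction of flagged-and-wrong answers actually fixed on revision."""
    flagged_wrong = [t for t in trials
                     if t.flagged_uncertainty and not t.initial_correct]
    if not flagged_wrong:
        return 0.0
    return sum(t.revised_correct for t in flagged_wrong) / len(flagged_wrong)

# Typical SLM pattern: flags its errors (high MA) but rarely fixes them (low ER).
trials = [
    Trial(flagged_uncertainty=True,  initial_correct=False, revised_correct=False),
    Trial(flagged_uncertainty=True,  initial_correct=False, revised_correct=True),
    Trial(flagged_uncertainty=False, initial_correct=False, revised_correct=False),
    Trial(flagged_uncertainty=True,  initial_correct=True,  revised_correct=True),
]
print(ma_score(trials))  # 2 of 3 wrong answers were flagged
print(er_score(trials))  # 1 of 2 flagged-wrong answers was fixed
```

Keeping MA and ER as separate ratios is what lets the benchmark distinguish case 3 (high MA, low ER) from case 4 (both improving).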