aquif-3.6-1B

Summary

aquif-3.6-1B is a hybrid reasoning model that automatically determines when and how deeply to think based on query complexity. Built on aquif-3.5-Nano-1B with AutoThink RL data, it achieves 28% better token efficiency and a 4% average performance improvement across benchmarks.


Automatic Thinking

aquif-3.6-1B is a hybrid reasoning model that dynamically decides whether and how much to think based on query complexity. It adopts aquif-3.6-8B's automatic-thinking approach, trained with AutoThink RL data on top of aquif-3.5-Nano-1B, and uses the following output format:

<judge>
[analyzes whether thinking is needed]
</judge>

<think_on/off>
<think>
[thinking content, present only when thinking is on]
</think>

<answer>
[final answer]
</answer>

This is the same format as aquif-3.6-8B. Unlike aquif-3.5-Plus's toggleable reasoning, which requires manual control via thinking_on/off, aquif-3.6's judge autonomously allocates reasoning depth, adapting its cognitive effort to each task. A parsing sketch for this format follows below.
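
Downstream code typically needs to separate the judge verdict, any thinking, and the final answer. The sketch below shows one way to do this with regular expressions; the tag names come from the format above, while the parse_response helper, the sample completion, and the judge/thinking text are illustrative assumptions, not an official API.

import re

def parse_response(text: str) -> dict:
    """Split a raw completion into its judge, thinking, and answer sections."""
    sections = {}
    for tag in ("judge", "think", "answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections

# Hypothetical completion in which the judge turned thinking on:
raw = (
    "<judge>Multi-step arithmetic; thinking helps.</judge>\n"
    "<think_on>\n"
    "<think>2 + 3 = 5 and 5 * 4 = 20.</think>\n"
    "<answer>20</answer>"
)
parts = parse_response(raw)
print(parts["answer"])             # -> 20
print(parts["think"] is not None)  # True: the model chose to think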

Key Features

  • 🧠 Dynamic Reasoning: Automatically determines when and how deeply to think
  • ⚡ 28% More Efficient: Significant token reduction while improving performance
  • 📈 Better Performance: 4% average improvement across benchmarks
  • 🎯 Smart Resource Allocation: 19% lower thinking ratio on average (see Thinking Ratio table)

Performance

Benchmark        aquif-3.6-1B   Qwen3-1.7B   Improvement
AIME 2025            75.0           39.4        +35.6%
LiveCodeBench        57.5           33.2        +24.3%
GPQA Diamond         52.8           40.1        +12.7%
Average              61.8           37.6        +24.2%

Token Efficiency

Benchmark        aquif-3.6-1B   Qwen3-1.7B   Reduction
AIME 2025           13,670         18,450       -26%
LiveCodeBench       10,270         13,890       -26%
GPQA Diamond         6,870         12,100       -43%
Average             10,270         14,813       -32%
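
For reference, the Reduction column lines up as the relative drop in tokens versus Qwen3-1.7B, with the Average row being the mean of the three per-benchmark reductions. A quick sketch using the numbers above:

# Numbers from the Token Efficiency table: (aquif-3.6-1B, Qwen3-1.7B) tokens.
tokens = {
    "AIME 2025":     (13_670, 18_450),
    "LiveCodeBench": (10_270, 13_890),
    "GPQA Diamond":  ( 6_870, 12_100),
}

# Each Reduction entry is 1 - aquif / qwen; the Average row is the mean
# of the per-benchmark reductions, not the reduction of the averages.
reductions = [1 - a / q for a, q in tokens.values()]
for (name, _), r in zip(tokens.items(), reductions):
    print(f"{name}: -{r:.0%}")                                # -26%, -26%, -43%
print(f"Average: -{sum(reductions) / len(reductions):.0%}")   # -32%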

Thinking Ratio

Benchmark        aquif-3.6-1B   Qwen3-1.7B   Reduction
AIME 2025            84.0%         100.0%       -16%
LiveCodeBench        78.0%         100.0%       -22%
GPQA Diamond         81.0%         100.0%       -19%
Average              81.0%         100.0%       -19%

Benchmark Highlights

  • AIME 2025: 26% fewer tokens, +35.6% performance, -16% thinking ratio
  • LiveCodeBench: 26% fewer tokens, +24.3% performance, -22% thinking ratio
  • GPQA Diamond: 43% fewer tokens, +12.7% performance, -19% thinking ratio

Model Details

  • Base Model: aquif-3.5-Nano-1B (1.7B parameters)
  • Architecture: Hybrid reasoning with dynamic thinking allocation
  • Context Length: 40K tokens
  • License: Apache 2.0

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "aquif-ai/aquif-3.6-1B"

# Load the tokenizer and model; device_map="auto" places the weights on the
# available accelerator and torch_dtype="auto" keeps the checkpoint's dtype.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Solve this problem: What is the sum of all prime numbers between 1 and 100?"}
]

# Build the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    temperature=0.7,
    do_sample=True
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
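
Note that the completion includes the judge block and, when the judge enables it, the thinking block, so you will usually want to post-process the output, for example with the parse_response sketch from the Automatic Thinking section:

parts = parse_response(response)    # helper sketched in "Automatic Thinking"
print(parts["answer"] or response)  # fall back to raw text if no tag matched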

Built by aquif-ai
