IQ2_K Ling 1T Discussion
Just wanted to add my findings so far. Hardware: Xeon 8480+ ES, 512 GB DDR5-5600, ASUS W790E Sage, 2x RTX 3090.
The 2-bit model is working really well. I couldn't get -ger to run as of today without crashing.
- SVG design: somewhere below GLM 4.6 Q6_K_XL (might be due to quantization), but better than DeepSeek 3.1 at IQ4.
- HTML webpage: one-shot Pokedex on par with GLM 4.6, or even a bit better I'd say. Really nice UI and transparency; 703 lines of code.
- Math problem solving: very good. Nailed some logic problems with remarkably little token output, which blew me away: only ~1.1k tokens used where other models take more than 4k.
- Logic: very, very good. Passed the overfit riddle: "A surgeon who is the boy's father says 'I cannot operate on this boy, he is my son.' Who is the surgeon to the boy?"
✅ Answer: The surgeon is the boy's father.
Overall this is looking like one of the top local models, just not for SVG at IQ2.
Prompt
- Tokens: 436
- Time: 10887.88 ms
- Speed: 40.0 t/s
Generation
- Tokens: 173
- Time: 17144.38 ms
- Speed: 10.1 t/s
What I'm running (same box: Xeon 8480+ ES, 512 GB DDR5-5600, ASUS W790E Sage, 2x RTX 3090):
numactl -N 0 -m 0 \
../build/bin/llama-server \
    --model "Ling-1T-IQ2_K-00001-of-00008.gguf" \
    --alias ubergarm/Ling-1T-GGUF \
    --ctx-size 32768 \
    -fa -fmoe -amb 1024 \
    -ctk q4_0 -ctv q4_0 \
    -ub 4096 -b 4096 \
    -ngl 99 \
    -ot "blk\.(4)\.ffn_.*=CUDA0" \
    -ot "blk\.(5|6)\.ffn_.*=CUDA1" \
    -ot exps=CPU \
    --parallel 1 \
    --threads 90 \
    --host 127.0.0.1 \
    --port 8080 \
    --split-mode layer --tensor-split 1,1 \
    --mirostat 2 --mirostat-ent 5 \
    --mirostat-lr 0.1 \
    --no-display-prompt

(--threads 90 seems to be the sweet spot with the 8480+; the Mirostat settings are subjectively really helping out with quality on all models.)
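A quick sketch of how those -ot tensor overrides route things, for anyone new to the flag. This assumes (as I understand ik_llama.cpp's behavior) that the part before `=` is an unanchored regex matched against each tensor name, with the first matching override winning; the trailing `.*` in the blk patterns is my reading of the markdown-mangled originals:

```python
import re

# Hypothetical re-creation of the -ot overrides from the command above,
# in command-line order. Each entry is (regex, target backend).
overrides = [
    (r"blk\.(4)\.ffn_.*", "CUDA0"),
    (r"blk\.(5|6)\.ffn_.*", "CUDA1"),
    (r"exps", "CPU"),
]

def backend_for(tensor_name):
    # First matching pattern wins, mirroring command-line order.
    for pattern, backend in overrides:
        if re.search(pattern, tensor_name):
            return backend
    return None  # no override: default placement (-ngl)

print(backend_for("blk.4.ffn_gate_exps.weight"))  # CUDA0
print(backend_for("blk.7.ffn_up_exps.weight"))    # CPU (caught by "exps")
print(backend_for("blk.7.attn_q.weight"))         # None
```

So layers 4-6 keep their routed experts on the GPUs, and every other experts tensor falls through to the CPU override, matching the "buffer type overriden to CPU" lines in the log further down.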

Hey, thanks for putting the model through its paces and reporting your findings! So it still seems promising. I really wonder why the aider folks had trouble getting it to benchmark the polyglot test properly...
A few thoughts:
I couldn't get -ger to run as of today without crashing.
That is a very new feature. When it crashes, does it give some kind of ASSERT printout spam? If you could post the command and crash message on this PR, that could be useful: https://github.com/ikawrakow/ik_llama.cpp/pull/838, along with the exact version tested, as ik made some optimizations in the past 24 hours (./build/bin/llama-server --version and git rev-parse --short HEAD, etc.).
-amb 1024
This is only needed for MLA-arch models like DeepSeek and Kimi-K2. It doesn't hurt to have it, but it isn't doing anything here.
-ctk q4_0 -ctv q4_0
While this does save VRAM at longer context, if you have the space, consider trying q4_1 or q6_0, both of which should give a little better quality while still being smaller than q8_0.
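For a rough sense of the trade-off, here's a back-of-the-envelope KV-cache size calculation using the model dimensions from the log below (80 layers, 1024-dim K and V per token). The per-block byte counts are my assumption about the usual GGUF block layouts, but the q8_0 figure reproduces the 680 MiB the log reports at 4096 context, which is a decent sanity check:

```python
# Assumed bytes per 32-value quant block (q4_0 = 18 B, q4_1 = 20 B,
# q6_0 = 26 B, q8_0 = 34 B). Model dims from the server log:
# n_layer = 80, n_embd_k_gqa = n_embd_v_gqa = 1024.
BYTES_PER_BLOCK = {"q4_0": 18, "q4_1": 20, "q6_0": 26, "q8_0": 34}

def kv_cache_mib(ctx, qtype, n_layer=80, n_embd_kv=1024):
    per_token = 2 * n_layer * n_embd_kv * BYTES_PER_BLOCK[qtype] / 32  # K + V
    return ctx * per_token / (1024 ** 2)

print(f"{kv_cache_mib(4096, 'q8_0'):.0f} MiB")    # matches the log's 680 MiB
print(f"{kv_cache_mib(32768, 'q6_0'):.0f} MiB")   # q6_0 at full 32k context
print(f"{kv_cache_mib(32768, 'q4_0'):.0f} MiB")   # q4_0 at full 32k context
```

So at 32k context the q6_0 vs q4_0 difference is on the order of a gigabyte across both GPUs, which is often worth it for the quality.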
--mirostat 2 --mirostat-ent 5 \ (subjectively really helping out with quality on all models)
--mirostat-lr 0.1
Interesting, I haven't played with Mirostat sampling myself, but cool that it seems to help out. I might give it a go sometime!
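For anyone else curious what those flags do: mirostat v2 truncates the distribution to tokens below a running "surprise" threshold and nudges that threshold toward a target entropy (--mirostat-ent is the target, --mirostat-lr the step size). A minimal sketch of one sampling step, not the exact llama.cpp implementation:

```python
import math
import random

def mirostat_v2_sample(probs, mu, tau=5.0, eta=0.1):
    """One mirostat-v2 step (sketch). probs: token probabilities sorted
    descending; mu: current max-surprise threshold (start at 2*tau);
    tau ~ --mirostat-ent; eta ~ --mirostat-lr."""
    # Keep only tokens whose surprise -log2(p) is below the threshold mu.
    allowed = [p for p in probs if -math.log2(p) < mu]
    if not allowed:
        allowed = probs[:1]  # always keep the top token
    # Sample from the renormalized truncated distribution.
    r, acc, idx = random.random() * sum(allowed), 0.0, 0
    for i, p in enumerate(allowed):
        acc += p
        if r <= acc:
            idx = i
            break
    observed = -math.log2(allowed[idx])  # surprise of the chosen token
    mu -= eta * (observed - tau)         # steer threshold toward target
    return idx, mu
```

The feedback loop is plausibly why it helps shaky low-bit quants: when the model starts emitting high-surprise tokens, the threshold tightens instead of letting the output spiral.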
Thanks again!
Thanks for your work. I'll try your suggestions later today. I wish more people used Mirostat. I can't tell if it's just me, but low quants used to just break for me after 5k tokens, and now they run to 64k without issues.
version: 3919 (5ae87f6)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
Here is the log:

xeon@xeon-System-Product-Name:~/ik_llama.cpp/models$ numactl -N 0 -m 0 \
../build/bin/llama-server \
    --model "Ling-1T-IQ2_K-00001-of-00008.gguf" \
    --alias ubergarm/Ling-1T-GGUF \
    -ctk q8_0 -ctv q8_0 \
    --ctx-size 4096 \
    -fa -fmoe -ger \
    -ub 4096 -b 4096 \
    -ngl 99 \
    -ot exps=CPU \
    --parallel 1 \
    --threads 48 \
    --threads-batch 56 \
    --host 127.0.0.1 \
    --port 8080 \
    --mirostat 2 --mirostat-ent 5 \
    --mirostat-lr 0.1 \
    --no-display-prompt \
    --verbosity 1 \
    --metrics \
    --log-test
ggml_backend_register: registered backend CPU
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
ggml_backend_register: registered backend CUDA0
ggml_backend_register: registered backend CUDA1
03 Hello World to both default output and stderr!
[1761007161] 04 Hello World to stderr!
[1761007161] 05 Hello World TEE with double printing to stderr prevented!
[1761007161] 07 Hello World to stdout!
INFO [ main] build info | tid="128123511042048" timestamp=1761007162 build=3919 commit="5ae87f6c"
INFO [ main] system info | tid="128123511042048" timestamp=1761007162 n_threads=48 n_threads_batch=56 total_threads=112 system_info="AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "
llama_model_loader: additional 7 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 50 key-value pairs and 1103 tensors from Ling-1T-IQ2_K-00001-of-00008.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bailingmoe2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Ling 1T
llama_model_loader: - kv 3: general.basename str = Ling
llama_model_loader: - kv 4: general.size_label str = 1T
llama_model_loader: - kv 5: bailingmoe2.block_count u32 = 80
llama_model_loader: - kv 6: bailingmoe2.context_length u32 = 32768
llama_model_loader: - kv 7: bailingmoe2.embedding_length u32 = 8192
llama_model_loader: - kv 8: bailingmoe2.feed_forward_length u32 = 18432
llama_model_loader: - kv 9: bailingmoe2.attention.head_count u32 = 64
llama_model_loader: - kv 10: bailingmoe2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 11: bailingmoe2.rope.freq_base f32 = 600000.000000
llama_model_loader: - kv 12: bailingmoe2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 13: bailingmoe2.expert_used_count u32 = 8
llama_model_loader: - kv 14: bailingmoe2.attention.key_length u32 = 128
llama_model_loader: - kv 15: bailingmoe2.attention.value_length u32 = 128
llama_model_loader: - kv 16: general.file_type u32 = 138
llama_model_loader: - kv 17: bailingmoe2.rope.dimension_count u32 = 64
llama_model_loader: - kv 18: bailingmoe2.rope.scaling.type str = none
llama_model_loader: - kv 19: bailingmoe2.leading_dense_block_count u32 = 4
llama_model_loader: - kv 20: bailingmoe2.vocab_size u32 = 157184
llama_model_loader: - kv 21: bailingmoe2.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 22: bailingmoe2.expert_shared_feed_forward_length u32 = 2048
llama_model_loader: - kv 23: bailingmoe2.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 24: bailingmoe2.expert_count u32 = 256
llama_model_loader: - kv 25: bailingmoe2.expert_shared_count u32 = 1
llama_model_loader: - kv 26: bailingmoe2.expert_group_count u32 = 8
llama_model_loader: - kv 27: bailingmoe2.expert_group_used_count u32 = 4
llama_model_loader: - kv 28: bailingmoe2.expert_weights_norm bool = true
llama_model_loader: - kv 29: bailingmoe2.expert_gating_func u32 = 2
llama_model_loader: - kv 30: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 31: tokenizer.ggml.pre str = bailingmoe2
llama_model_loader: - kv 32: tokenizer.ggml.tokens arr[str,157184] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 33: tokenizer.ggml.token_type arr[i32,157184] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 34: tokenizer.ggml.merges arr[str,156635] = ["Ġ Ġ", "Ġ t", "i n", "Ġ a", "h e...
llama_model_loader: - kv 35: tokenizer.ggml.bos_token_id u32 = 156891
llama_model_loader: - kv 36: tokenizer.ggml.eos_token_id u32 = 156895
llama_model_loader: - kv 37: tokenizer.ggml.padding_token_id u32 = 156892
llama_model_loader: - kv 38: tokenizer.ggml.cls_token_id u32 = 156893
llama_model_loader: - kv 39: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 40: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 41: tokenizer.chat_template str = {% set thinking_option = 'off' %}\n{{-...
llama_model_loader: - kv 42: general.quantization_version u32 = 2
llama_model_loader: - kv 43: quantize.imatrix.file str = /mnt/data/models/ubergarm/Ling-1T-GGU...
llama_model_loader: - kv 44: quantize.imatrix.dataset str = ubergarm-imatrix-calibration-corpus-v...
llama_model_loader: - kv 45: quantize.imatrix.entries_count i32 = 705
llama_model_loader: - kv 46: quantize.imatrix.chunks_count i32 = 864
llama_model_loader: - kv 47: split.no u16 = 0
llama_model_loader: - kv 48: split.count u16 = 8
llama_model_loader: - kv 49: split.tensors.count i32 = 1103
llama_model_loader: - type f32: 473 tensors
llama_model_loader: - type q8_0: 400 tensors
llama_model_loader: - type iq2_k: 152 tensors
llama_model_loader: - type iq3_k: 76 tensors
llama_model_loader: - type iq4_k: 1 tensors
llama_model_loader: - type iq6_k: 1 tensors
init_tokenizer: initializing tokenizer for type 2
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 156892 ('<|endoftext|>')
load: - 156895 ('<|role_end|>')
load: special tokens cache size = 262
load: token to piece cache size = 1.0010 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bailingmoe2
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 18432
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = none
llm_load_print_meta: freq_base_train = 600000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = IQ2_K - 2.375 bpw
llm_load_print_meta: model params = 999.705 B
llm_load_print_meta: model size = 330.923 GiB (2.843 BPW)
llm_load_print_meta: repeating layers = 329.255 GiB (2.836 BPW, 997.130 B parameters)
llm_load_print_meta: general.name = Ling 1T
llm_load_print_meta: n_layer_dense_lead = 4
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_ff_shexp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: n_expert_groups = 8
llm_load_print_meta: n_group_used = 4
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: nextn_predict_layers = 0
print_info: vocab type = BPE
print_info: n_vocab = 157184
print_info: n_merges = 156635
print_info: BOS token = 156891 '<|startoftext|>'
print_info: EOS token = 156895 '<|role_end|>'
print_info: EOT token = 156892 '<|endoftext|>'
print_info: PAD token = 156892 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 156892 '<|endoftext|>'
print_info: EOG token = 156895 '<|role_end|>'
print_info: max token length = 154
llm_load_tensors: ggml ctx size = 1.42 MiB
Tensor blk.4.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.4.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.4.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.5.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.5.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.5.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.6.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.6.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.6.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.7.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.7.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.7.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.8.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.8.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.8.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.9.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.9.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.9.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.10.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.10.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.10.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.11.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.11.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.11.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.12.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.12.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.12.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.13.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.13.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.13.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.14.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.14.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.14.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.15.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.15.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.15.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.16.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.16.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.16.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.17.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.17.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.17.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.18.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.18.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.18.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.19.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.19.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.19.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.20.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.20.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.20.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.21.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.21.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.21.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.22.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.22.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.22.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.23.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.23.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.23.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.24.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.24.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.24.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.25.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.25.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.25.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.26.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.26.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.26.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.27.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.27.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.27.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.28.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.28.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.28.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.29.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.29.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.29.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.30.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.30.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.30.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.31.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.31.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.31.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.32.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.32.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.32.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.33.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.33.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.33.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.34.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.34.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.34.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.35.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.35.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.35.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.36.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.36.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.36.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.37.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.37.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.37.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.38.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.38.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.38.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.39.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.39.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.39.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.40.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.40.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.40.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.41.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.41.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.41.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.42.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.42.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.42.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.43.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.43.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.43.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.44.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.44.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.44.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.45.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.45.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.45.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.46.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.46.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.46.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.47.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.47.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.47.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.48.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.48.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.48.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.49.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.49.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.49.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.50.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.50.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.50.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.51.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.51.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.51.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.52.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.52.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.52.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.53.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.53.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.53.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.54.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.54.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.54.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.55.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.55.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.55.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.56.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.56.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.56.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.57.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.57.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.57.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.58.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.58.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.58.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.59.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.59.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.59.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.61.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.61.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.61.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.62.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.62.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.62.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.63.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.63.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.63.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.64.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.64.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.64.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.65.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.65.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.65.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.66.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.66.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.66.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.67.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.67.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.67.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.68.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.68.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.68.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.69.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.69.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.69.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.70.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.70.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.70.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.71.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.71.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.71.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.72.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.72.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.72.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.73.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.73.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.73.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.74.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.74.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.74.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.75.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.75.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.75.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.76.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.76.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.76.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.77.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.77.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.77.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.78.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.78.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.78.ffn_up_exps.weight buffer type overriden to CPU
Tensor blk.79.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.79.ffn_down_exps.weight buffer type overriden to CPU
Tensor blk.79.ffn_up_exps.weight buffer type overriden to CPU
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 40496.61 MiB
llm_load_tensors: CPU buffer size = 42790.64 MiB
llm_load_tensors: CPU buffer size = 42824.64 MiB
llm_load_tensors: CPU buffer size = 42280.64 MiB
llm_load_tensors: CPU buffer size = 42824.64 MiB
llm_load_tensors: CPU buffer size = 42824.64 MiB
llm_load_tensors: CPU buffer size = 42246.64 MiB
llm_load_tensors: CPU buffer size = 38669.32 MiB
llm_load_tensors: CPU buffer size = 690.75 MiB
llm_load_tensors: CUDA0 buffer size = 10082.57 MiB
llm_load_tensors: CUDA1 buffer size = 9499.55 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: n_batch = 4096
llama_new_context_with_model: n_ubatch = 4096
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: mla_attn = 0
llama_new_context_with_model: attn_max_b = 0
llama_new_context_with_model: fused_moe = 1
llama_new_context_with_model: grouped er = 1
llama_new_context_with_model: fused_up_gate = 1
llama_new_context_with_model: ser = -1, 0
llama_new_context_with_model: freq_base = 600000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 340.02 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 340.02 MiB
llama_new_context_with_model: KV self size = 680.00 MiB, K (q8_0): 340.00 MiB, V (q8_0): 340.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 1.20 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=1)
ggml_gallocr_reserve_n: reallocating CUDA0 buffer from size 0.00 MiB to 4008.14 MiB
ggml_gallocr_reserve_n: reallocating CUDA1 buffer from size 0.00 MiB to 2584.00 MiB
ggml_gallocr_reserve_n: reallocating CUDA_Host buffer from size 0.00 MiB to 160.05 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 4008.14 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 2584.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 160.05 MiB
llama_new_context_with_model: graph nodes = 3369
llama_new_context_with_model: graph splits = 195
XXXXXXXXXXXXXXXXXXXXX Setting only active experts offload
ggml_backend_sched_alloc_splits: failed to allocate graph, reserving (backend_ids_changed = 1)
Segmentation fault (core dumped)
xeon@xeon-System-Product-Name:~/ik_llama.cpp/models$
