TeleAI-AI-Flow committed
Commit d764a88 · verified · 1 Parent(s): 9c777d5

Support Qwen3.5 evaluations

Files changed (3):
  1. README.md +28 -28
  2. calc_ic.py +7 -3
  3. flops.py +149 -1
README.md CHANGED
@@ -9,9 +9,10 @@
 <img src="assets/ic_mixed.png" width="700" />
 </p>
 
-**Information Capacity** evaluates an LLM's **efficiency** based on text compression performance relative to computational complexity, harnessing the inherent correlation between **compression** and **intelligence**.
+**Information Capacity** evaluates an LLM's **efficiency** based on text compression performance relative to computational complexity, leveraging the inherent correlation between **compression** and **intelligence**.
 Larger models can predict the next token more accurately, leading to higher compression gains but at increased computational costs.
 Consequently, a series of models with varying sizes exhibits **consistent** information capacity, which can be used to compare model capability across model series and predict model performance within a series.
+This consistency opens up the possibility of cross-scale performance prediction before actual pretraining, offering a computationally efficient alternative to conventional scaling-law fitting.
 It also facilitates dynamic routing of different-sized models for efficient handling of tasks with varying difficulties, which is especially relevant to the device-edge-cloud infrastructure detailed in the **AI Flow** framework.
 With the rapid evolution of edge intelligence, we believe that this hierarchical network will replace the mainstream cloud-centric computing scheme in the near future.
@@ -19,30 +20,29 @@ Compared to existing metrics on LLM efficiency, a key difference of information
 An effective tokenizer can represent a given text with fewer tokens, thus reducing both the input and output token counts.
 This reduction not only lowers computational costs and inference delay but also facilitates long-context memory and in-depth reasoning.
 Tokenizer efficiency exhibits growing significance in light of the exploding input length and the widespread usage of test-time scaling, but is often **neglected** in LLM evaluations.
-We assess the information capacity of 52 models across 5 heterogeneous datasets and find consistent evidence regarding the influences of tokenizer efficiency, pretraining data, and the mixture-of-experts (MoE) architecture.
+We assess the information capacity of 56 models across 5 heterogeneous datasets and find consistent evidence regarding the influences of tokenizer efficiency, pretraining data, and the mixture-of-experts (MoE) architecture.
 
-## Data
+## Method
 
-Previous studies have established that the correlation between compression and intelligence weakens when the evaluation corpus significantly deviates from the domain of downstream tasks.
-Thus, we construct five heterogeneous datasets to provide a holistic assessment of LLM capabilities: Mixed text, FinePDFs-en, Ch-FineWeb-Edu, FineWeb-Edu, and NextCoder.
-The Mixed text dataset is collected by us, while the other datasets are sampled from publicly available open-source datasets.
-
-* **Mixed text**: We compile a multilingual text corpus from diverse sources, including books, webpages, code, and published papers, to facilitate a comprehensive evaluation of LLMs' compression efficiency.
-* **FinePDFs-en**: The FinePDFs dataset consists of about 3T tokens sourced exclusively from publicly available PDF files. We select only from the English subset to better examine the influence of the corpus distribution. <a href="https://huggingface.co/datasets/HuggingFaceFW/finepdfs"> [Huggingface] </a>
-* **Ch-FineWeb-Edu**: The Chinese Fineweb Edu dataset is a high-quality Chinese pretraining corpus of 90 million samples in the education domain, selected by a strategy similar to that of FineWeb-Edu. <a href="https://huggingface.co/datasets/opencsg/chinese-fineweb-edu"> [Huggingface] </a>
-* **FineWeb-Edu**: The FineWeb-Edu dataset contains 1.3T tokens of educational English webpages filtered from the FineWeb dataset, based on annotations generated by Llama-3-70B-Instruct. <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu"> [Huggingface] </a>
-* **NextCoder**: The NextCoder dataset consists of 127K unique code samples generated by GPT-4o and Llama-3.3-70B-Instruct across 8 programming languages: Python, Java, C++, C, Rust, JavaScript, Go, and Kotlin. <a href="https://huggingface.co/datasets/microsoft/NextCoderDataset"> [Huggingface] </a>
+Model intelligence is measured by the data size savings achieved through the LLM's probability predictions.
+The original size of a text sample in the given dataset is denoted as $C$; the sample is transformed into a sequence of $L$ tokens by the tokenizer of an LLM $M$.
+The symbol length of the $i$-th token derived from entropy coding is approximately $-\log p(x_i \mid x_{<i}; M)$, and the compression gain is the difference between the original data size and the summed symbol length of all tokens.
+The computational complexity is measured by the inference floating-point operations (FLOPs) $N_M$ on a logarithmic scale, following the scaling law.
+We introduce a negative bias $b$ in the numerator so that different-sized models in a series have nearly identical information capacities, enabling convenient comparison across different model sizes and architectures.
+
+In summary, the information capacity is defined as:
+$\text{Information Capacity} = \frac{C + b - \sum_{i} \left(-\log p(x_i \mid x_{<i}; M)\right)}{\log N_M}$.
 
 ## Usage
 
-Step 1. Setup an environment viable for model inference.
+Step 1. Set up an environment that can run model inference, for example:
 ```sh
 pip install numpy torch transformers tqdm flash_attn huggingface_hub
 ```
 
 Step 2. Clone this repo.
 ```sh
-GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/TeleAI-AI-Flow/InformationCapacity
+git clone https://github.com/TeleAI-AI-Flow/InformationCapacity.git
 cd InformationCapacity
 ```
@@ -69,23 +69,23 @@ python calc_ic.py -m path/to/model -d datasets/mixed_text.jsonl -l 1024 -b 1
 url={https://arxiv.org/abs/2511.08066},
 }
 
-@misc{an2025aiflowperspectivesscenarios,
-title={AI Flow: Perspectives, Scenarios, and Approaches},
+@article{an2026aiflowperspectivesscenarios,
+title={{AI} Flow: perspectives, scenarios, and approaches},
 author={Hongjun An and Wenhan Hu and Sida Huang and Siqi Huang and Ruanjun Li and Yuanzhi Liang and Jiawei Shao and Yiliang Song and Zihan Wang and Cheng Yuan and Chi Zhang and Hongyuan Zhang and Wenhao Zhuang and Xuelong Li},
-year={2025},
-eprint={2506.12479},
-archivePrefix={arXiv},
-primaryClass={cs.AI},
-url={https://arxiv.org/abs/2506.12479},
+journal={Vicinagearth},
+year={2026},
+volume={3},
+number={1},
+pages={1-32},
 }
 
-@misc{shao2025aiflownetworkedge,
-title={AI Flow at the Network Edge},
-author={Jiawei Shao and Xuelong Li},
-year={2025},
-eprint={2411.12469},
-archivePrefix={arXiv},
-primaryClass={eess.SP},
-url={https://arxiv.org/abs/2411.12469},
+@article{shao2026aiflownetworkedge,
+title={{AI} Flow at the Network Edge},
+author={Shao, Jiawei and Li, Xuelong},
+journal={IEEE Network},
+year={2026},
+volume={40},
+number={1},
+pages={330-336},
 }
 ```
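The information-capacity definition in the README's new Method section can be sketched numerically. A minimal illustration, assuming per-token log2-probabilities are already available; the function name and arguments here are hypothetical, not part of the repo:

```python
import math

def information_capacity(orig_size_bits, token_log2_probs, total_flops, bias=0.0):
    """Compression gain (bits saved) over log inference FLOPs.

    orig_size_bits:   original text size C, in bits
    token_log2_probs: log2 p(x_i | x_<i; M) for each token (negative values)
    total_flops:      theoretical inference FLOPs N_M
    bias:             the (negative) numerator bias b from the README
    """
    # Summed symbol length from entropy coding: sum_i -log2 p(x_i | x_<i; M)
    symbol_bits = sum(-lp for lp in token_log2_probs)
    compression_gain = orig_size_bits + bias - symbol_bits
    return compression_gain / math.log(total_flops)

# Toy numbers: an 8000-bit sample, 1000 tokens at ~2 bits each, 1e12 FLOPs
ic = information_capacity(8000, [-2.0] * 1000, 1e12)
```

The log bases are illustrative; the README leaves the base of $\log N_M$ unspecified, and the repo's actual computation lives in `calc_ic.py`.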
calc_ic.py CHANGED
@@ -3,7 +3,7 @@ import torch
 from math import log2
 from text_size import calculate_text_size_per_token
 from likelihood import calculate_negative_log_likelihood
-from flops import gqa_model_theoretical_flops, mla_model_theoretical_flops
+from flops import gqa_model_theoretical_flops, mla_model_theoretical_flops, qwen3_5_theoretical_flops
 
 def calculate_information_capacity(
     model_path: str,
@@ -14,10 +14,12 @@ def calculate_information_capacity(
     attention_mechanism: str = None,
 ) -> float:
     if attention_mechanism is None:
-        attention_mechanism = "mla" if "deepseek" in model_path.lower() else "gqa"
+        if "deepseek" in model_path.lower(): attention_mechanism = "mla"
+        elif "qwen3.5" in model_path.lower(): attention_mechanism = "qwen3_5"
+        else: attention_mechanism = "gqa"
     else:
         attention_mechanism = attention_mechanism.lower()
-    if attention_mechanism != "gqa" and attention_mechanism != "mla":
-        raise NotImplementedError("attention_mechanism argument should be either gqa or mla")
+    if attention_mechanism not in ("gqa", "mla", "qwen3_5"):
+        raise NotImplementedError("attention_mechanism argument should be one of gqa, mla, or qwen3_5")
 
     if numerator_bias is None:
@@ -39,6 +41,8 @@ def calculate_information_capacity(
         flops_results = gqa_model_theoretical_flops(cfg_path, gen_len=max_sample_length)
     elif attention_mechanism == "mla":
         flops_results = mla_model_theoretical_flops(cfg_path, gen_len=max_sample_length)
+    elif attention_mechanism == "qwen3_5":
+        flops_results = qwen3_5_theoretical_flops(cfg_path, gen_len=max_sample_length)
     per_token_flops = flops_results["decode_total_TFLOPs"] * 1e12 / max_sample_length
     for k, v in flops_results.items(): print(f"{k}: {v}")
 
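The model-name dispatch added in calc_ic.py can be isolated as a pure function for testing. A small mirror of that logic, assuming the same substring conventions (the helper name is hypothetical):

```python
def pick_attention_mechanism(model_path: str) -> str:
    """Mirror the auto-detection in calc_ic.py: DeepSeek models use MLA,
    Qwen3.5 models use the hybrid linear-attention estimator, all others GQA."""
    name = model_path.lower()
    if "deepseek" in name:
        return "mla"
    if "qwen3.5" in name:
        return "qwen3_5"
    return "gqa"
```

Substring matching on the model path is brittle (a rename silently falls back to GQA), which is why the script also accepts an explicit `attention_mechanism` argument.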
flops.py CHANGED
@@ -436,4 +436,152 @@ def mla_model_theoretical_flops(
         "avg_decode_TFLOPs_per_token": toT(decode_total / max(T, 1)),
     }
 
-    return results
+    return results
+
+
+def qwen3_5_theoretical_flops(
+    config_path: Union[str, Path],
+    seq_len: int = 0,
+    gen_len: int = 1024,
+    batch_size: int = 1,
+    prefill_logits: str = "all",  # "all" | "last" | "none"
+) -> Dict[str, float]:
+    # ---- load config ----
+    cfg_path = Path(config_path)
+    if cfg_path.is_dir():
+        cfg_path = cfg_path / "config.json"
+    with open(cfg_path, "r") as f:
+        cfg = json.load(f)
+    if "Qwen3.5" in str(config_path):  # Qwen3.5 nests the LM config under text_config
+        cfg = cfg["text_config"]
+
+    # ---- required hyperparams ----
+    d_model = int(cfg["hidden_size"])
+    n_layers = int(cfg.get("num_hidden_layers", cfg.get("n_layer")))
+    n_heads = int(cfg.get("num_attention_heads", cfg.get("n_head")))
+    n_kv_heads = int(cfg.get("num_key_value_heads", n_heads))
+    d_ff = cfg.get("intermediate_size")
+    if d_ff is None:  # MoE variants store the per-expert width instead
+        d_ff = cfg.get("moe_intermediate_size") * cfg.get("num_experts_per_tok")
+    d_ff = int(d_ff)
+    vocab_size = int(cfg["vocab_size"])
+
+    linear_num_key_heads = cfg.get("linear_num_key_heads")
+    linear_num_value_heads = cfg.get("linear_num_value_heads")
+    linear_key_head_dim = cfg.get("linear_key_head_dim")
+    linear_value_head_dim = cfg.get("linear_value_head_dim")
+
+    # per-head dimension (assume divisible)
+    d_k = d_model // n_heads
+    kv_dim = n_kv_heads * d_k
+
+    B = batch_size
+    L = seq_len
+    T = gen_len
+
+    # ---- helpers (FLOPs, not TFLOPs) ----
+    # Projections per layer for a sequence of length L:
+    #   Q: 2 * B * L * d_model * d_model, O: same
+    #   K, V: 2 * B * L * d_model * kv_dim each
+    def proj_flops(L_tokens: int) -> int:
+        q = 2 * B * L_tokens * d_model * d_model
+        o = 2 * B * L_tokens * d_model * d_model
+        k = 2 * B * L_tokens * d_model * kv_dim
+        v = 2 * B * L_tokens * d_model * kv_dim
+        return q + k + v + o
+
+    # Attention core per layer:
+    #   Prefill (quadratic): QK^T + (softmax @ V) ≈ 4 * B * n_heads * L^2 * d_k
+    #   Decode (one step over cache length C): ≈ 4 * B * n_heads * C * d_k
+    def attn_core_prefill_flops(L_tokens: int) -> int:
+        return 4 * B * n_heads * (L_tokens ** 2) * d_k
+
+    def attn_core_decode_flops(cache_len: int) -> int:
+        return 4 * B * n_heads * cache_len * d_k
+
+    # MLP per layer:
+    #   two up matmuls + one down: 6 * B * L * d_model * d_ff
+    def mlp_flops(L_tokens: int) -> int:
+        if "gpt-oss" in str(config_path):
+            # gpt-oss has no gate matmul (6 -> 4); cfg stores the per-expert width
+            return 4 * B * L_tokens * d_model * d_ff * int(cfg["num_experts_per_tok"])
+        elif "Llama-4" in str(config_path):
+            # Llama-4 adds a 2-layer ungated MLP before the main gated MLP
+            return B * L_tokens * d_model * (6 * d_ff + 4 * int(cfg["intermediate_size"]))
+        else:
+            return 6 * B * L_tokens * d_model * d_ff
+
+    # LM head (final linear to vocab) for N tokens: 2 * B * N * d_model * vocab_size
+    def lm_head_flops(num_tokens: int) -> int:
+        return 2 * B * num_tokens * d_model * vocab_size
+
+    # ---- prefill (length L) ----
+    proj_prefill_per_layer = proj_flops(L)
+    attn_prefill_per_layer = attn_core_prefill_flops(L)
+    mlp_prefill_per_layer = mlp_flops(L)
+
+    # Only 1/4 of the layers use full attention
+    stack_prefill = n_layers // 4 * (proj_prefill_per_layer + attn_prefill_per_layer)
+
+    def linear_layer_flops(L_tokens):
+        # proj + gated DeltaNet, excluding MLP
+        return L_tokens * d_model * (
+            linear_num_key_heads * linear_key_head_dim * 4
+            + linear_num_value_heads * linear_value_head_dim * 6
+            + 4 * linear_num_key_heads
+        )
+
+    stack_prefill += n_layers // 4 * 3 * linear_layer_flops(L)  # remaining 3/4 linear-attention layers
+    stack_prefill += n_layers * mlp_prefill_per_layer  # MLP is the same across all layers
+
+    if prefill_logits == "all":
+        lm_prefill = lm_head_flops(L)
+    elif prefill_logits == "last":
+        lm_prefill = lm_head_flops(1)
+    elif prefill_logits == "none":
+        lm_prefill = 0
+    else:
+        raise ValueError("prefill_logits must be one of {'all','last','none'}")
+
+    prefill_total = stack_prefill + lm_prefill
+
+    # ---- decode (T steps) ----
+    # Each step runs projections/MLP for one new token.
+    proj_decode_per_layer_per_step = proj_flops(1)
+    mlp_decode_per_layer_per_step = mlp_flops(1)
+
+    # Attention core sums over growing cache lengths L, L+1, ..., L+T-1:
+    #   sum_{t=0..T-1} 4 * B * n_heads * (L + t) * d_k
+    attn_decode_per_layer_total = 4 * B * n_heads * d_k * (T * L + (T * (T - 1)) // 2)
+
+    stack_decode = n_layers // 4 * (
+        T * proj_decode_per_layer_per_step + attn_decode_per_layer_total
+    )
+    stack_decode += n_layers // 4 * 3 * linear_layer_flops(T)
+    stack_decode += n_layers * T * mlp_decode_per_layer_per_step
+
+    # Logits at each decode step
+    lm_decode = lm_head_flops(T)
+
+    decode_total = stack_decode + lm_decode
+
+    # ---- packing results (TFLOPs) ----
+    toT = lambda x: x / 1e12
+
+    results = {
+        # Inputs
+        "batch_size": B,
+        "seq_len": L,
+        "gen_len": T,
+        "hidden_size": d_model,
+        "num_layers": n_layers,
+        "num_heads": n_heads,
+        "num_kv_heads": n_kv_heads,
+        "intermediate_size": d_ff,
+        "vocab_size": vocab_size,
+        "prefill_logits_mode": prefill_logits,
+
+        "prefill_total_TFLOPs": toT(prefill_total),
+        "decode_total_TFLOPs": toT(decode_total),
+
+        # Totals
+        "request_total_TFLOPs": toT(prefill_total + decode_total),
+        "avg_decode_TFLOPs_per_token": toT(decode_total / max(T, 1)),
+    }
+    return results
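The accounting above follows the standard 2·m·n·k FLOPs convention for an m×k by k×n matmul. As a sanity check, the per-layer attention-projection cost can be recomputed in isolation; this is a standalone re-derivation of the diff's `proj_flops` helper with made-up toy hyperparameters, not code from the repo:

```python
def proj_flops(batch: int, tokens: int, d_model: int, n_heads: int, n_kv_heads: int) -> int:
    """Per-layer attention projection FLOPs under the 2*m*n*k convention.

    Q and O projections are d_model x d_model; K and V are d_model x kv_dim,
    where kv_dim = n_kv_heads * (d_model // n_heads) under GQA.
    """
    d_k = d_model // n_heads
    kv_dim = n_kv_heads * d_k
    q = o = 2 * batch * tokens * d_model * d_model
    k = v = 2 * batch * tokens * d_model * kv_dim
    return q + o + k + v

# Toy config: d_model=1024, 16 heads, 4 KV heads, one 512-token sequence
total = proj_flops(1, 512, 1024, 16, 4)  # 2_684_354_560 FLOPs
```

With 4 KV heads instead of 16, the K/V terms shrink to a quarter of the Q/O terms, which is exactly the GQA saving the estimator models.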