dataset string | model_name string | model_links list | paper_title string | paper_date timestamp[ns] | paper_url string | code_links list | metrics string | table_metrics list | prompts string | paper_text string | compute_hours float64 | num_gpus int64 | reasoning string | trainable_single_gpu_8h string | verified string | modality string | paper_title_drop string | paper_date_drop string | code_links_drop string | num_gpus_drop int64 | dataset_link string | time_and_compute_verification string | link_to_colab_notebook string | run_possible string | notes string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PDBbind | BAPULM | [] | BAPULM: Binding Affinity Prediction using Language Models | 2024-11-06T00:00:00 | https://arxiv.org/abs/2411.04150v1 | [
"https://github.com/radh55sh/BAPULM"
] | {'RMSE': '0.898±0.0172'} | [
"RMSE"
] | Given the following paper and codebase:
Paper: BAPULM: Binding Affinity Prediction using Language Models
Codebase: https://github.com/radh55sh/BAPULM
Improve the BAPULM model on the PDBbind dataset. The result
should improve on the following metrics: {'RMSE': '0.898±0.0172'}. You must use only the codebase provided.
| BAPULM: Binding Affinity Prediction using Language Models Radheesh Sharma Meda† and Amir Barati Farimani∗,‡,¶,†,§ †Department of Chemical Engineering, Carnegie Mellon University, 15213, USA ‡Department of Mechanical Engineering, Carnegie Mellon University, 15213, USA ¶Department of Biomedical Engineering, Carnegie Mellon University, 15213, USA §Machine Learning Department, Carnegie Mellon University, 15213, USA E-mail: barati@cmu.edu Abstract Identifying drug-target interactions is essential for developing effective therapeutics. Binding affinity quantifies these interactions, and traditional approaches rely on computationally intensive 3D structural data. In contrast, language models can efficiently process sequential data, offering an alternative approach to molecular representation. In the current study, we introduce BAPULM, an innovative sequence-based framework that leverages the chemical latent representations of proteins via ProtT5-XL-U50 and ligands through MolFormer, eliminating reliance on complex 3D configurations. Our approach was validated extensively on benchmark datasets, achieving scoring power (R) values of 0.925 ±0.043, 0.914 ±0.004, and 0.8132 ±0.001 on benchmark1k2101, Test2016 290, and CSAR-HiQ 36, respectively. These findings indicate the robustness and accuracy of BAPULM across diverse datasets and underscore the potential of sequence-based models in in-silico drug discovery, offering a scalable alternative to 3D-centric methods for screening potential ligands. 
arXiv:2411.04150v1 [q-bio.QM] 6 Nov 2024 Introduction Developing novel therapeutics is essential for addressing extant diseases, newly emerging or untreated diseases, and future potential disorders that have yet to be identified.1 The recent COVID-19 pandemic has underscored the critical importance of rapid and innovative drug development to combat these unforeseen global challenges.2,3 In this pursuit, drugs, typically organic molecules composed of carbon-catenated structures (ligands), are stereoselectively designed to interact with specific amino acid motifs of their target proteins.4,5 These interactions are often mediated by non-covalent forces such as hydrogen bonds, van der Waals interactions, and electrostatic forces.6 Understanding the strength of these protein-ligand interactions, often represented by the equilibrium dissociation constant (Kd), is crucial to advance therapeutic development.7 Spectroscopic techniques, including FTIR, NMR, UV-visible spectroscopy, and fluorescence, are employed to test potential ligands for specific proteins.8–11 These methods capture conformational transitions within the secondary structure through vibrational bands, structural modifications through chemical changes, changes in absorbance due to the electronic environment, and alterations in fluorescence intensity upon protein-ligand binding, respectively.12,13 In addition to these experimental approaches, computational methods such as molecular docking and molecular dynamics (MD) simulations have revolutionized affinity prediction by offering physical interpretability.14,15 While MD simulations accurately estimate binding affinities at the expense of higher compute power, molecular docking enables the exploration of large libraries of potential ligands, offering rapid virtual screening capabilities albeit with reduced accuracy. 
Despite their limitations, these techniques laid the foundation for in silico methods in drug discovery, paving the way for the adoption of deep learning models, which have achieved considerably higher predictive accuracy. Alongside molecular docking and simulations, 3D structure-based deep learning models adeptly capture the complex spatial features of protein-ligand interactions; however, they are inherently constrained by the dependence on high-resolution crystallographic data. In contrast, the emergence of large-scale datasets featuring sequential 1D representations of proteins and ligands enables the examination of the sequential molecular latent space for the screening of potential ligands.15–17 With the availability of large-scale sequential datasets, researchers have developed advanced models such as transformers to leverage these data to produce more accurate affinity predictions. The transformer architecture inherently relies on the attention mechanism, which excels at comprehending sequential data. Language models leverage this architecture, using unsupervised pretraining to capture nuanced and comprehensive relationships within the data while encoding the sequences.17–19 Elnaggar et al. pioneered the development of protein sequence-based language models such as ProtBERT, ProtAlbert, ProtElectra, and ProtT5, trained on the expansive datasets UniRef and BFD, comprising up to 393 billion amino acids. Interestingly, these models excel at attending to sequences that are spatially proximal, highlighting the importance of nearby amino acids over more distant ones.20 Subsequently, ligand-specific encoder models such as ChemBerta and Molformer were engineered to encode the SMILES representation of organic molecules. Building on these advancements, PLAPT successfully integrates BERT-based encoders for protein and ligand sequences to improve affinity predictions.21 However, the multimodal framework designed by Xu et al. 
demonstrates superior performance by incorporating additional binding pocket information through a residue graph network and employing cross-attention between the sequential and structural modalities. Yet, there remains an essential requirement for configurations that can achieve better predictive capabilities without the complications associated with the extensive data and computational demands of the MFE framework. The current study aims to address this research gap by exploring the synergistic utilization of pre-trained language models as a compelling alternative in the realm of protein-ligand binding affinity prediction. We present binding affinity prediction using language models (BAPULM), a framework that capitalizes on the integrated strengths of the ProtT5-XL-U5018 and Molformer22 encoder models to effectively estimate binding affinity with a predictive feedforward network. By utilizing these unsupervised pre-trained language models, BAPULM achieves high accuracy in binding affinity prediction while maintaining computational efficiency. BAPULM captures stereochemical molecular space and efficiently screens potential ligands, achieving state-of-the-art performance in predicting the binding affinity. Methods BAPULM was developed to utilize the functionality of encoder-based language models, which require simple 1D string expressions as input, such as protein amino acid sequences and ligand SMILES representation, to predict affinity as shown in Figure 1. Figure 1: The overview of the BAPULM framework, which integrates ProtT5-XL-U50 for protein sequences and Molformer for ligand SMILES in the feature extraction module while encoding the sequences. These embeddings are aligned through projection layers and fed into a feed-forward predictive network to predict binding affinity. 
Datasets The dataset employed to train BAPULM is the Binding Affinity Dataset23 from the Hugging Face platform, which includes a curated set of 1.9M unique protein-ligand complexes with experimentally determined binding affinity pKd. BAPULM operates on the subset of the first 100k amino acid sequences, canonical SMILES, and binding affinity (pKd). Figure 2 illustrates the distribution of (a) protein sequence length, with only a tiny portion (0.2%) of the sequences having a length greater than 3200, and (b) ligand SMILES, with a small fraction (0.3%) greater than 278. A dataset of protein-ligand feature embeddings, pKd, and normalized binding affinity was generated before model training using the encoder models described in Section 2.3. A split ratio of 90:10 was used to build training and validation sets, similar to the percentage employed in the previous work.21 Furthermore, the following benchmark datasets were acquired from the various works of literature: Benchmark1k2101,21 Test2016 290,24 and CSAR-HiQ 36,25 to evaluate BAPULM. Every benchmark dataset was meticulously examined to ensure no overlap with the training dataset.21 Figure 2: Distribution of (a) Protein sequence lengths range from 13 to 7073 amino acids, showing a skewed distribution with most sequences concentrated under 1000 amino acids. (b) Ligand SMILES string lengths range from 4 to 547 characters, also displaying a skewed distribution with the majority of strings being shorter than 100 characters. PreProcessing Proteins are macromolecules built from the same set of 20 amino acid repeating units arranged into unique sequences. 
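The 90:10 train/validation split described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code; the function name and the seed (taken from the hyperparameter table elsewhere in this record) are assumptions:

```python
import numpy as np

def train_val_split(n_samples, val_fraction=0.1, seed=2102):
    """Shuffle sample indices and split them 90:10 into train/validation sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(n_samples * val_fraction)
    return idx[n_val:], idx[:n_val]  # (train indices, validation indices)

train_idx, val_idx = train_val_split(100_000)
print(len(train_idx), len(val_idx))  # 90000 10000
```

Splitting on shuffled indices (rather than on the arrays themselves) keeps the protein embeddings, ligand embeddings, and pKd labels aligned across the split.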
As a part of preprocessing, the protein sequences were separated by spaces into single characters (A-Z) describing the monomeric residues, and to standardize the input sequences, the non-essential amino acids Asparagine (B), Selenocysteine (U), Glutamic acid (Z), and Pyrrolysine (O) were replaced by employing the substitution code 'X'.18,21 The canonical SMILES captures the structural stereochemistry of the organic micro/macro molecules, ensuring a unique expression for every individual molecule, enabling a standardized representation. Model Architecture BAPULM's architecture consists of two robust components that are synergistically utilized to predict pKd. Primarily, the feature encoding module harnessed the potency of ProtT5-XL-U50 for protein sequences and Molformer for ligand SMILES to generate consolidated vectors in latent space that constitute all the characteristic information about the proteins and ligands, known as feature embeddings, which were subsequently utilized in the forthcoming module. Protein-ligand feature embedding The BAPULM model integrates the ProtT5-XL-U50 model, which is founded on the T5 model,26 and differentiates itself from BERT by employing a unified transformer architecture (both encoder and decoder) while capturing the biophysical features of amino acids and the language of life.18,26 The preprocessed sequences are transformed into tokens following a comprehensive tokenization procedure, as mentioned in ProtTrans.18 This method involves padding and truncating the sequence to a maximum length of 3200, also a norm followed by previous work,21 generating a list of token IDs and their corresponding attention mask. Subsequently, the tokens were passed to the encoder, and a mean pooling operation was performed on the last layer to generate fixed 1024-dimensional feature embeddings, enabling a comprehensive understanding of the protein sequences with variable lengths. 
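The preprocessing step above (mapping the rare/ambiguous residue codes B, U, Z, O to 'X' and space-separating residues, the input convention for ProtT5-style tokenizers) can be sketched as follows. This is an illustrative re-implementation, not the authors' exact code:

```python
import re

def preprocess_protein(sequence: str) -> str:
    """Replace rare/ambiguous residue codes with 'X' and space-separate residues."""
    sequence = re.sub(r"[BUZO]", "X", sequence.upper())
    return " ".join(sequence)

print(preprocess_protein("MKBUZ"))  # M K X X X
```

The space-separated string is then what would be handed to the tokenizer before padding/truncation to the maximum length of 3200.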
BAPULM further leverages Molformer, a state-of-the-art transformer-based encoder model, which effectively captures the spatial connection between the atoms in the SMILES sequence.22 The canonical SMILES of ligands were tokenized while processed through padding and truncating to a maximum length of 278, including micro and macromolecule ligands. The mean pooler output from the encoder was a 768-dimensional embedding vector containing the stereochemical features of the ligand molecule. A detailed breakdown of the lengths of the protein-ligand sequences is available in Supporting Information Tables 3 and 4. Therefore, the protein sequence was encoded into a 1024-dimensional embedding space while the ligand SMILES into a 768-dimensional vector. To hereafter utilize these in the prediction module, both sets of feature vectors were then separately projected onto a lower-dimensional (512) latent space through a linear transformation employing ReLU (rectified linear unit) activation. These consolidated 512-dimensional feature vectors were concatenated to form a 1024-dimensional input vector to the feed-forward network. Feed-Forward Predictive Network The concatenated 1024-dimensional combined feature vector was passed through four ReLU-activated linear layers, as shown in Figure 1. Before passing through the linear layers, the mini-batches of combined feature embeddings underwent batch normalization to improve training stability by reducing the internal covariate shift.27 Dropout was also applied to avert overfitting and create a robust model. The last layer output of the model yielded a normalized scalar value of the binding affinity (pKd). 
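The shapes in the two paragraphs above (1024-d protein embedding, 768-d ligand embedding, two 512-d projections, a 1024-d concatenation, and a feed-forward head ending in a scalar) can be traced with a numpy forward-pass sketch. The hidden widths of the head, the random placeholder weights, and the omission of batch normalization and dropout are all assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def linear(x, d_out):
    """Randomly initialized linear layer (placeholder weights, shape illustration only)."""
    w = rng.normal(scale=0.02, size=(x.shape[-1], d_out))
    return x @ w

def bapulm_forward(protein_emb, ligand_emb):
    # Project each modality onto a shared 512-d latent space with ReLU activation.
    p = relu(linear(protein_emb, 512))   # (batch, 512)
    l = relu(linear(ligand_emb, 512))    # (batch, 512)
    x = np.concatenate([p, l], axis=-1)  # (batch, 1024) combined feature vector
    # Feed-forward head; these hidden widths are assumptions, not from the paper.
    for d in (512, 256, 128):
        x = relu(linear(x, d))
    return linear(x, 1)                  # normalized scalar pKd per sample

out = bapulm_forward(np.zeros((4, 1024)), np.zeros((4, 768)))
print(out.shape)  # (4, 1)
```

Each batch row yields one normalized pKd, which the paper says is denormalized back to the experimental scale before computing metrics.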
Training and Evaluation Metrics The previously generated feature dataset was utilized to train BAPULM, employing Mean Squared Error (MSE) as a loss function, which estimates the average squared difference between the actual pKd and predicted affinity as shown below: MSE = \frac{1}{n}\sum_{i=1}^{n}\left(pK_{d,\mathrm{true},i} - pK_{d,\mathrm{pred},i}\right)^{2} (1) This loss function was optimized utilizing the Adam optimizer to update the model's weights. The training process was executed on an Nvidia RTX 2080 Ti with 11GB of memory and completed in approximately four minutes. Additionally, the training hyperparameters are provided in the Supporting Information Table 5. To estimate the efficacy of BAPULM in predicting the negative log of the binding affinity dissociation constant (pKd) between protein-ligand complexes, we used the following evaluation metrics: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Pearson correlation coefficient (R), as shown in equations 2, 3, 4, where pK_{d,\mathrm{true}} and pK_{d,\mathrm{pred}} correspond to the experimental and predicted affinities. MAE = \frac{1}{n}\sum_{i=1}^{n}\left|pK_{d,\mathrm{true},i} - pK_{d,\mathrm{pred},i}\right| (2) RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(pK_{d,\mathrm{true},i} - pK_{d,\mathrm{pred},i}\right)^{2}} (3) R = \frac{\sum_{i=1}^{n}\left(pK_{d,\mathrm{true},i} - \mu_{pK_{d,\mathrm{true}}}\right)\left(pK_{d,\mathrm{pred},i} - \mu_{pK_{d,\mathrm{pred}}}\right)}{\sqrt{\sum_{i=1}^{n}\left(pK_{d,\mathrm{true},i} - \mu_{pK_{d,\mathrm{true}}}\right)^{2}\sum_{i=1}^{n}\left(pK_{d,\mathrm{pred},i} - \mu_{pK_{d,\mathrm{pred}}}\right)^{2}}} (4) These metrics are widely adopted in regression studies and were established in published literature.12,15,24,28 In particular, the Pearson correlation coefficient (R) was considered as one of the scoring power metrics in evaluating the performance.15 Again, both RMSE and MAE were employed to provide a comprehensive understanding of performance, as RMSE is optimal for errors with a normal distribution. In contrast, MAE is better suited for errors with a Laplacian distribution.29 Since these metrics evaluate predicted and experimental pKd values, the model's output was denormalized onto the same scale as the experimental affinity to assess the performance. 
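The four metrics in equations (1)–(4) map directly onto a few numpy calls; a minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, MAE, RMSE, and Pearson R between experimental and predicted pKd."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    r = np.corrcoef(y_true, y_pred)[0, 1]  # Pearson correlation coefficient
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R": r}

m = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
print(m["MAE"], m["RMSE"], m["R"])
```

As the paper notes, these must be computed on denormalized predictions so that both arguments share the experimental pKd scale.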
Results and Discussion BAPULM's unique ability to predict binding affinity originates from the inherent nature of its architecture, which effectively captures the intricate features of protein sequences and ligand molecular structures. As shown in Table 1, BAPULM consistently displayed an improvement in each metric compared to PLAPT,21 demonstrating its exceptional performance. Notably, BAPULM achieved a higher Pearson correlation coefficient (R) with an increase of 9.6% (0.970) and 40.7% (0.960) on training and validation datasets, respectively, indicating a robust correlation between predicted and experimental pKd values. Also, the consolidated clustering of points along the identity line in the parity plots, as displayed in Figure 3(a,b), corroborates the higher correlation coefficient. Table 1: Evaluation Metrics for BAPULM and PLAPT on Training and Validation Datasets
Dataset      Model                 R ↑    MSE ↓  RMSE ↓  MAE ↓
Train        BAPULM (this study)   0.970  0.157  0.397   0.245
Train        PLAPT                 0.886  0.586  0.765   0.756
Validation   BAPULM (this study)   0.960  0.177  0.421   0.248
Validation   PLAPT                 0.683  1.466  1.211   0.949
Furthermore, BAPULM exhibited remarkably lower error metrics, with a drop of 73.2%, 48.1%, and 67.6% in MSE (0.157), RMSE (0.397), and MAE (0.245), respectively, on the training data. Similarly, on the validation data, the model showed a decline of 87.9% in MSE (0.177), 65.3% in RMSE (0.421), and 73.9% in MAE (0.248), underscoring its predictive capability. This significant improvement across both training and validation datasets demonstrated the ability of the model to comprehensively capture the underlying interactions between the proteins and ligands, facilitating accurate predictions. Moreover, BAPULM's predictive ability was further validated on three distinct benchmark datasets, where it was compared to current state-of-the-art models, as shown in Table 2. 
The evaluation metrics in Table 2 are computed as the mean and standard deviation, estimated using different seed values (2102, 256, 42), to accurately reflect the model's performance during inference on test datasets with the trained model weights. Figure 3: Evaluation of BAPULM on multiple datasets, where the scatter plots depict the correlation between predicted and experimental pKd values. The datasets represented include the (a) Training, (b) Validation, (c) Benchmark1k2101, (d) Test2016 290, and (e) CSAR-HiQ 36. Accordingly, on the benchmark1k2101 dataset, BAPULM demonstrates improved evaluated values compared to PLAPT, with an increase in the R-value of 4.76% and a drop in RMSE and MAE by 19.1% and 37.2%. Table 2: Model Performance on Various Benchmark Datasets
benchmark1k2101:
  BAPULM (this study)   seq + canonical SMILES   ProtT5-XL-U50 + Molformer   R 0.925 ±0.043   RMSE 0.745 ±0.236   MAE 0.432 ±0.013
  PLAPT                 seq + canonical SMILES   ProtBert + ChemBerta   R 0.883   RMSE 0.922   MAE 0.688
Test2016 290:
  BAPULM (this study)   seq + canonical SMILES   ProtT5-XL-U50 + Molformer   R 0.914 ±0.004   RMSE 0.898 ±0.0172   MAE 0.645 ±0.0166
  MFE                   protein seq, 3D structure + ligand graph   Multimodal between seq, structure + ligand graph   R 0.851   RMSE 1.151   MAE 0.882
  PLAPT                 seq + canonical SMILES   ProtBert + ChemBerta   R 0.845   RMSE 1.196   MAE 0.906
  CAPLA                 protein seq, ligand SMILES + binding pocket   1D convolution block + Cross attention (pocket/ligand)   R 0.843   RMSE 1.200   MAE 0.966
  DeepDTAF              protein seq, ligand SMILES + binding pocket   1D Conv, 1D Conv + 3 Conv layers for binding pocket   R 0.789   RMSE 1.355   MAE 1.073
  OnionNet              protein-ligand 3D grid   3D Conv + Neural Attention   R 0.816   RMSE 1.278   MAE 0.984
CSAR-HiQ 36:
  BAPULM (this study)   seq + canonical SMILES   ProtT5-XL-U50 + Molformer   R 0.8132 ±0.012   RMSE 1.328 ±0.020   MAE 1.029 ±0.022
  affinity pred         -   -   R 0.774   RMSE 1.484   MAE 1.176
  PLAPT                 seq + canonical SMILES   ProtBert + ChemBerta   R 0.731   RMSE 1.349   MAE 1.157
  CAPLA                 seq, SMILES + binding pocket   1D convolution block + Cross attention (pocket/ligand)   R 0.704   RMSE 1.454   MAE 1.160 
  DeepDTAF              seq + SMILES   1D CNN on seq and SMILES   R 0.543   RMSE 2.765   MAE 2.318
Xu et al.28 developed a multimodal feature extraction (MFE) framework that employed the following feature extraction module involving 1D protein sequence, binding pocket surface through point cloud, 3D structural features, and the ligand molecular graph. It slightly outperformed PLAPT on the Test2016 dataset with a 0.6% improvement in correlation coefficient (R) while reducing the RMSE and MAE by 3.8% and 2.6%, becoming the current state-of-the-art affinity prediction model. However, BAPULM, leveraging ProtT5-XL-U50 and Molformer, substantially outperformed MFE's performance by 7.4%, 21.8%, and 26.7% in R (0.914), RMSE (0.898), and MAE (0.642), respectively. Additionally, BAPULM surpassed both sequence and structure-based models on every metric. It outperformed CAPLA24 by 8.4% in R, 25.2% in RMSE, and 32.2% in MAE. Against DeepDTAF,7 BAPULM showed a higher linear correlation value with an increase of 15.9%, reduced RMSE by 33.7%, and decreased MAE by 39.9%. Furthermore, compared to OnionNet,15 it achieved a 12% higher R-value, and a lower RMSE and MAE by 29.7% and 34.5%, respectively. This implies that BAPULM was successfully able to capture the linear relationship between pKd (experimental) and pKd (predicted), alongside being more accurate by achieving lower RMSE and MAE values. Finally, on the CSAR-HiQ 36 dataset, BAPULM yet again proved its exceptional predictive ability. Unlike PLAPT, BAPULM was able to capture the identity relationship between predicted and actual binding affinity, besides being accurate.21 BAPULM achieved a notable scoring power value of 0.813, denoting an 11.2% improvement over PLAPT and 5.1% against affinity pred.2 Similarly, the percentage improvement on the other two metrics was greater (MAE: 12.5%, RMSE: 10.5%) than PLAPT's advancement over affinity pred (MAE: 1.62%, RMSE: 9.10%). 
Additionally, BAPULM outperforms other sequence-based models on R, RMSE, and MAE: against CAPLA by 15.25%, 8.67%, and 11.29%, and over DeepDTAF by 48.7%, 51.96%, and 55.59%, respectively. Furthermore, to gain insights into BAPULM's excellent correlation capabilities, features from the penultimate layer were extracted and utilized to generate t-distributed Stochastic Neighbor Embedding (t-SNE) visualizations. t-SNE is a statistical method that maps high-dimensional data to a lower-dimensional space, conserving the local structure and enabling the visualization in a lower dimension.30 To understand the influence of encoder-based language models in predicting binding affinity, we employed the combination of transformer encoders, such as ProtBERT, ChemBERTa, and Molformer, within the same model architecture, assessing their ability to capture the binding affinity between protein-ligand complexes effectively. Figure 4: Embedding visualizations of protein-ligand binding affinity mapped onto features extracted from (a) BAPULM, (b) ProtBert & Molformer, and (c) ProtBert & ChemBerta, illustrating the latent space representations of each configuration on the train dataset. BAPULM demonstrates a clear and distinct gradient transition in the t-SNE visualization, indicating a strong correlation between the latent representations of protein-ligand complexes and their binding affinities. In contrast, the distribution for the ProtBERT and MolFormer models is more dispersed, with less noticeable separation of embeddings based on pKd values. Similarly, the t-SNE visualization for ProtBERT and ChemBERTa shows a partial gradient transition but with some overlap between high-affinity and low-affinity complexes. Although both ProtBERT & MolFormer and ProtBERT & ChemBERTa exhibit some clustering of complexes according to pKd, the clustering is much more prominent in BAPULM. 
This is attributed to the use of rotary positional embeddings in Molformer during pretraining, enabling it to learn spatial relationships within the ligand. The synergistic combination of Molformer with ProtT5-XL-U50 in BAPULM effectively captured the binding affinity correlation, resulting in a clear and distinct separation of protein-ligand complexes in the t-SNE visualization. This separation is characterized by a smooth color gradient, indicating BAPULM's ability to distinguish between complexes with varying binding affinities. Conclusion This study introduces a sequence-based machine-learning model, BAPULM, that leverages the transformer-based language models ProtT5-XL-U50 and Molformer to predict protein-ligand binding affinity. BAPULM effectively captures the latent features of protein-ligand complexes without relying on structural data, enabling a robust representation by harnessing the inherent information in biochemical sequences. This approach significantly enhances predictive accuracy while reducing computational complexity. The integration of Molformer with rotary positional encoding enhanced BAPULM's ability to comprehend the stereochemistry of ligands without requiring detailed 3D configurations, demonstrating superior performance across diverse benchmarks. Our t-SNE visualizations reveal that the synergistic integration of these encoders displayed a distinct clustering of complexes according to binding affinity, substantiating BAPULM's predictive capability. This framework presents an efficient alternative to conventional structure-based models, demonstrating the potential of using sequence-based models for rapid virtual screening. Data and Software Availability The necessary code and data used in this study can be accessed here: https://github.com/radh55sh/BAPULM.git Acknowledgement We acknowledge the contributions of various individuals and organizations that have made this study possible. 
This includes the providers of the datasets used in our research, the developers of PyTorch, and the teams behind ProtT5-XL-U50 and Molformer.
References
(1) Mollaei, P.; Guntuboina, C.; Sadasivam, D.; Farimani, A. B. IDP-Bert: Predicting Properties of Intrinsically Disordered Proteins (IDP) Using Large Language Models. 2024.
(2) Blanchard, A. E.; Gounley, J.; Bhowmik, D.; Chandra Shekar, M.; Lyngaas, I.; Gao, S.; Yin, J.; Tsaris, A.; Wang, F.; Glaser, J. Language models for the prediction of SARS-CoV-2 inhibitors. International Journal of High Performance Computing Applications 2022, 36, 587–602.
(3) Patil, S.; Mollaei, P.; Farimani, A. B. Forecasting COVID-19 New Cases Using Transformer Deep Learning Model. medRxiv 2023, 2023.11.02.23297976.
(4) Mollaei, P.; Barati Farimani, A. Unveiling Switching Function of Amino Acids in Proteins Using a Machine Learning Approach. Journal of Chemical Theory and Computation 2023, 19, 8472–8480.
(5) Du, X.; Li, Y.; Xia, Y. L.; Ai, S. M.; Liang, J.; Sang, P.; Ji, X. L.; Liu, S. Q. Insights into Protein–Ligand Interactions: Mechanisms, Models, and Methods. International Journal of Molecular Sciences 2016, 17.
(6) Adhav, V. A.; Saikrishnan, K. The Realm of Unconventional Noncovalent Interactions in Proteins: Their Significance in Structure and Function. 2024, 14, 22.
(7) Wang, K.; Zhou, R.; Li, Y.; Li, M. DeepDTAF: a deep learning method to predict protein-ligand binding affinity. Briefings in Bioinformatics 22, 1–15.
(8) Kötting, C.; Gerwert, K. Monitoring protein-ligand interactions by time-resolved FTIR difference spectroscopy. Methods in Molecular Biology 2013, 1008, 299–323.
(9) Dalvit, C.; Gmür, I.; Rößler, P.; Gossert, A. D. Affinity measurement of strong ligands with NMR spectroscopy: Limitations and ways to overcome them. Progress in Nuclear Magnetic Resonance Spectroscopy 2023, 138-139, 52–69.
(10) Nienhaus, K.; Nienhaus, G. U. Probing Heme Protein-Ligand Interactions by UV/Visible Absorption Spectroscopy. Methods in Molecular Biology 2005, 305, 215–241.
(11) Rossi, A. M.; Taylor, C. W. Analysis of protein-ligand interactions by fluorescence polarization. Nature Protocols 2011, 6, 365–387.
(12) Zhang, X.; Gu, Y.; Xu, G.; Li, Y.; Wang, J.; Yang, Z. HaPPy: Harnessing the Wisdom from Multi-Perspective Graphs for Protein-Ligand Binding Affinity Prediction (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence 2023, 37, 16384–16385.
(13) Qi, C.; Mankinen, O.; Telkki, V. V.; Hilty, C. Measuring Protein-Ligand Binding by Hyperpolarized Ultrafast NMR. Journal of the American Chemical Society 2024, 146, 5063–5066.
(14) Zhao, J.; Cao, Y.; Zhang, L. Exploring the computational methods for protein-ligand binding site prediction. Computational and Structural Biotechnology Journal 2020, 18, 417.
(15) Zheng, L.; Fan, J.; Mu, Y. OnionNet: A Multiple-Layer Intermolecular-Contact-Based Convolutional Neural Network for Protein-Ligand Binding Affinity Prediction. ACS Omega 2019, 4, 15956–15965.
(16) Wang, H.; Liu, H.; Ning, S.; Zeng, C.; Zhao, Y. DLSSAffinity: protein–ligand binding affinity prediction via a deep learning model. Physical Chemistry Chemical Physics 2022, 24, 10124–10133.
(17) Guntuboina, C.; Das, A.; Mollaei, P.; Kim, S.; Barati Farimani, A. PeptideBERT: A Language Model Based on Transformers for Peptide Property Prediction. Journal of Physical Chemistry Letters 2023, 14, 10427–10434.
(18) Elnaggar, A.; Heinzinger, M.; Dallago, C.; Rehawi, G.; Wang, Y.; Jones, L.; Gibbs, T.; Feher, T.; Angerer, C.; Steinegger, M.; Bhowmik, D.; Rost, B. ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 2021, 14.
(19) Kuan, D.; Farimani, A. B. AbGPT: De Novo Antibody Design via Generative Language Modeling. 2024.
(20) Vig, J.; Madani, A.; Varshney, L. R.; Xiong, C.; Socher, R.; Rajani, N. F. BERTology Meets Biology: Interpreting Attention in Protein Language Models.
(21) Rose, T.; Anand, N.; Shen, T. PLAPT: Protein-Ligand Binding Affinity Prediction Using Pre-trained Transformers.
(22) Ross, J.; Belgodere, B.; Chenthamarakshan, V.; Padhi, I.; Mroueh, Y.; Das, P. Large-Scale Chemical Language Representations Capture Molecular Structure and Properties. Nature Machine Intelligence 2021, 4, 1256–1264.
(23) Glaser, J. Binding Affinity Dataset. https://huggingface.co/datasets/jglaser/binding_affinity, 2022.
(24) Jin, Z.; Wu, T.; Chen, T.; Pan, D.; Wang, X.; Xie, J.; Quan, L.; Lyu, Q. CAPLA: improved prediction of protein-ligand binding affinity by a deep learning approach based on a cross-attention mechanism. Bioinformatics (Oxford, England) 2023, 39.
(25) Dunbar, J. B.; Smith, R. D.; Yang, C. Y.; Ung, P. M. U.; Lexa, K. W.; Khazanov, N. A.; Stuckey, J. A.; Wang, S.; Carlson, H. A. CSAR benchmark exercise of 2010: selection of the protein-ligand complexes. Journal of Chemical Information and Modeling 2011, 51, 2036–2046.
(26) Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P. J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 2020, 21, 1–67.
(27) Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015.
(28) Xu, S.; Shen, L.; Zhang, M.; Jiang, C.; Zhang, X.; Xu, Y.; Liu, J.; Liu, X. Surface-based multimodal protein–ligand binding affinity prediction. Bioinformatics 2024, 40.
(29) Hodson, T. O. Root-mean-square error (RMSE) or mean absolute error (MAE): when to use them or not. Geoscientific Model Development 2022, 15, 5481–5487.
(30) Badrinarayanan, S.; Guntuboina, C.; Mollaei, P.; Farimani, A. B. Multi-Peptide: Multimodality Leveraged Language-Graph Learning of Peptide Properties. 
2024.
Supporting Information
Sequence Distributions
Tables 3 and 4 present the detailed length distributions of protein sequences and ligand molecules in our dataset.
Table 3: Distribution of Protein Sequences by Length Range
Length Range   Number of Protein Sequences
1–1000         88,485
1001–2000      10,598
2001–3200      706
3201–4000      123
4001–7073      88
Table 4: Distribution of Ligand Molecules by Length Range
Length Range   Number of Ligand Molecules
1–100          94,831
101–200        4,085
201–278        753
279–478        330
479–547        1
Hyperparameters
Table 5 summarizes the key hyperparameters, detailing essential configurations utilized for training the model.
Table 5: BAPULM model hyperparameters
Hyperparameter       Value
Seed                 2102
Loss Function        MSE
Optimizer            Adam
Learning Rate        1e-3
Batch size           256
Epochs               60
Scheduler            ReduceLROnPlateau
Scheduler Patience   5
Scheduler Factor     0.2
| 1 | 1 | The model uses ProtT5-XL-U50 and MolFormer architectures, which are large transformer-based models. Given that training on an Nvidia RTX 2080 Ti took approximately 4 minutes, and assuming training occurs over a reduced dataset of 100k sequences with a complex architecture having a moderate number of parameters, a single GPU can efficiently handle the workload and complete the training process in under 8 hours. The choice of MSE as the loss function indicates a regression approach, which is generally faster. Considering everything, a conservative training time of about 1 hour is estimated as feasible on a single GPU. | yes | Yes | Bioinformatics | BAPULM: Binding Affinity Prediction using Language Models | 2024-11-06 0:00:00 | https://github.com/radh55sh/BAPULM | 1 | https://huggingface.co/datasets/radh25sh/BAPULM/resolve/main/prottrans_molformer_tensor_dataset100k.json?download=true | 16sec * 60 epochs = 16 minutes | https://colab.research.google.com/drive/1--rNlCN01wUgN_6cTTuiVcusqSP9vGlG?usp=sharing | Yes | -- no PDBbind dataset. Specifies to use ProtTrans Molformer |
Digital twin-supported deep learning for fault diagnosis | DANN | [] | A domain adaptation neural network for digital twin-supported fault diagnosis | 2025-05-27T00:00:00 | https://arxiv.org/abs/2505.21046v1 | [
"https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis"
] | {'Accuracy': '80.22'} | [
"Accuracy"
] | Given the following paper and codebase:
Paper: A domain adaptation neural network for digital twin-supported fault diagnosis
Codebase: https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis
Improve the DANN model on the Digital twin-supported deep learning for fault diagnosis dataset. The result
should improve on the following metrics: {'Accuracy': '80.22'}. You must use only the codebase provided.
| A domain adaptation neural network for digital twin-supported fault diagnosis Zhenling Chen CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France; Haiwei Fu CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France; Zhiguo Zeng Chair on Risk and Resilience of Complex Systems, Laboratoire Génie Industriel, CentraleSupélec, Université Paris-Saclay, 91190, Gif-sur-Yvette, France. Abstract—Digital twins offer a promising solution to the lack of sufficient labeled data in deep learning-based fault diagnosis by generating simulated data for model training. However, discrepancies between simulation and real-world systems can lead to a significant drop in performance when models are applied in real scenarios. To address this issue, we propose a fault diagnosis framework based on Domain-Adversarial Neural Networks (DANN), which enables knowledge transfer from simulated (source domain) to real-world (target domain) data. We evaluate the proposed framework using a publicly available robotics fault diagnosis dataset, which includes 3,600 sequences generated by a digital twin model and 90 real sequences collected from physical systems. The DANN method is compared with commonly used lightweight deep learning models such as CNN, TCN, Transformer, and LSTM. Experimental results show that incorporating domain adaptation significantly improves the diagnostic performance. For example, applying DANN to a baseline CNN model improves its accuracy from 70.00% to 80.22% on real-world test data, demonstrating the effectiveness of domain adaptation in bridging the sim-to-real gap.1 Index Terms—predictive maintenance, fault diagnosis, digital failure twin, domain adaptation neural network (DANN)

I. INTRODUCTION

Fault diagnosis aims at identifying the cause of a failure from observational sensor data [1]. One of the major challenges in fault diagnosis is that state-of-the-art deep learning-based models often require large amounts of data.
It is, however, often difficult to obtain these data in practice [2]. Digital twin technology combines a physical entity with its digital representation. It can accurately reproduce scenes from the physical world in a virtual environment, providing great convenience for the analysis, optimization, and control of the physical system [3]. Using digital twins to generate simulated failure data and train a deep learning model for fault diagnosis has become a promising approach to solve the data insufficiency issue of fault diagnosis. There are already some existing works applying digital twins to fault diagnosis. For example, Jain et al. [4] proposed a digital twin-based fault diagnosis framework that utilizes the digital twin model to simulate system behavior and identify fault patterns in distributed photovoltaic systems. Wang et al. [5] proposed a digital twin-based fault diagnosis framework that integrates sensor data and physical models to detect and diagnose faults in rotating machinery within smart manufacturing systems. Yang et al. [6] proposed a digital twin-driven fault diagnosis method that combines virtual and real data to diagnose composite faults, where the digital twin generates virtual samples to compensate for the scarcity of fault samples in real systems. (1Code and datasets available at: https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis) Most of these existing works assume that condition-monitoring data are available on the same level as the component being diagnosed. In practice, however, deploying sensors at the component level is often difficult. One has to rely on system-level condition-monitoring data to infer the component-level failure modes [7]. In one of our previous works [8], we developed a digital twin model of a robot and used it to generate simulated failure data for fault diagnosis. Testing data are collected from a real robot with different injected failures to test the performance of the developed model.
The existing works share a common assumption: the digital twin model can accurately predict the actual behavior of the component under test. However, in practice, the digital twin model is not always accurate. The fault diagnosis model trained on simulation data then often suffers from poor performance when applied to real data, due to the imprecision of the simulation model. To address this issue, we propose a Domain Adversarial Neural Network (DANN)-based framework for digital twin-supported fault diagnosis. Through the DANN [9], the developed model is able to learn useful features from the simulated data even when the simulation does not exactly match reality. We also performed a benchmark study comparing the performance of the developed model with other state-of-the-art deep learning models, including LSTM [10], Transformer [11], CNN [12], and TCN [13]. The main contributions of this paper are: •We propose a novel DANN-based framework for digital twin-supported fault diagnosis. •We present an open-source dataset for digital twin-supported fault diagnosis. The dataset includes simulated training data and real test data. •We conducted a detailed benchmark study in which the performance of the developed model is compared with four other state-of-the-art deep learning models.

II. DIGITAL TWIN MODEL AND DATASET DESCRIPTION

In this paper, we consider the open-source dataset for digital twin-supported fault diagnosis we developed previously in [8]. The dataset is created based on the digital failure twin model of a robot, as shown in Fig. 1. Fig. 1: The fault diagnosis in digital twin for robot [8]. A digital twin model is a simulation model used to simulate the failure behavior of the robot and connect to the physical entity to reflect its real-time states. The robot comprises six motors. We monitor the trajectory of the end-effector and the control commands of each motor.
The goal of the fault diagnosis is to use the condition-monitoring data to infer the failure modes of four of the six motors. Each motor might be subject to two failure modes, i.e., stuck and steady-state error. The digital failure twin is built as a two-layer model. On the motor level, we model the dynamics of the motor and its controller. Then, the response of each motor is fed into a forward kinematics model, which allows simulating the end-effector trajectory from the positions of the motors. The stuck and steady-state error failures can be simulated by changing the response of each motor, as shown in Fig. 1. To generate the training dataset, we generate 400 random trajectories and simulate the 9 classes (one normal state and eight failure states where each motor could be in either one of the two failure modes) under each trajectory. Each sample contains records spanning 1000 time steps. Then, we collect test data from 90 randomly generated trajectories following the same protocols. In the original work [8], an LSTM was trained on the simulation dataset and applied to diagnose the failures on the real robot. The results showed that, although the trained model performed well on the validation set (separated from the training data, but still from simulation), it performs poorly on the real testing dataset (96% vs. 69%). The main reason is that the simulation model does not exactly match the behavior of the real robot. In this paper, we intend to address this issue through transfer learning.

III. EXISTING TRANSFER LEARNING MODELS

Prevalent deep learning-based models show great success in both academia and industry [14]. For example, Convolutional Neural Networks (CNNs) are used in automated fault detection for machinery vibrations [15], and Recurrent Neural Networks (RNNs), for example LSTMs, have proven useful in diagnosing faults based on time-series data [16]. In recent work, Plakias et al.
combined dense convolutional blocks with an attention mechanism to develop a new attentive dense CNN for fault diagnosis [17]. Although these methods can achieve high performance in fault diagnosis, they are usually applied under the assumption that test data and training data come from the same distribution. That is, current deep learning-based models assume independent and identically distributed (i.i.d.) data. As discussed before, the data generated from a digital twin might not exactly match the actual behavior of the physical entity. As a result, the distributions of the training and testing datasets cannot be assumed i.i.d., due to steady-state errors caused by friction or other mechanical effects, and real-time faults that can strongly impact the results. In this paper, we use transfer learning methods to align the source-domain and target-domain data distributions in the digital twin setting. To solve the issue of data distribution discrepancy, various domain adaptation techniques in transfer learning have been introduced for diagnosing bearing faults [18–20]. Transfer learning can also be used to learn knowledge from a source domain for fault diagnosis on a different target domain. Applications of transfer learning in fault diagnosis include representation adaptation [21–24], parameter transfer [25–27], and adversarial-based domain adaptation [28, 29]. One of the most often used domain adaptation methods is representation adaptation, which aligns the distributions of the representations from the source and target domains by reducing the distribution discrepancy. Some neural networks are built for this, such as the feature-based transfer neural network (FTNN) [24] and the deep convolutional transfer learning network (DCTLN) [21]. Shao et al. proposed a CNN-based machine fault diagnosis framework using parameter transfer [27], and experimental results show that DCTLN can achieve an average accuracy of 86.3%.
Experimental results illustrate that the proposed method can achieve test accuracy near 100% on three mechanical datasets; on the gearbox dataset, the accuracy reaches 99.64%. In adversarial-based domain adaptation, Cheng et al. proposed Wasserstein distance based deep transfer learning (WD-DTL) [28], which uses a CNN as the pre-trained model. Experimental results show that the transfer accuracy of WD-DTL reaches 95.75% on average. Lu et al. developed a domain adaptation combined with deep convolutional generative adversarial network (DADCGAN)-based methodology for diagnosing DC arc faults [29]. With DADCGAN, a robust and reliable fault diagnosis scheme based on a lightweight CNN classifier can be achieved for the target domain. In this paper, we choose the DANN architecture to develop a framework for digital twin-supported fault diagnosis. The main reason is that its architecture is simple and can efficiently capture the features from the source domain and generalize well on the target domain. Moreover, DANN's adversarial training mechanism enables the model to learn domain-invariant features, making it particularly effective in reducing the distribution discrepancy between source and target domains. Furthermore, DANN performs well with limited labeled data from the target domain, addressing the common challenge of insufficient fault data in practical applications. Its ability to handle complex and nonlinear relationships in data makes DANN a reliable and scalable solution for fault diagnosis.

IV. DANN MODEL ARCHITECTURE

We use the Domain Adversarial Neural Network (DANN) model [9], originally developed for domain adaptation in transfer learning, and extend its application to digital twin-supported maintenance prediction in robotics. The architecture of DANN is shown in Figure 2. Let us assume the input samples are represented by x ∈ X, where X is some input space, with labels (outputs) y from the label space Y.
We assume that there exist two distributions S(x, y) and T(x, y) on X ⊗ Y, which will be referred to as the source domain and the target domain. Our goal is to predict labels y given the input x for the target domain. We denote by d_i the binary variable (domain label) for the i-th example, which indicates whether x_i comes from the source domain (x_i ∼ S(x) if d_i = 0) or from the target distribution (x_i ∼ T(x) if d_i = 1). We assume that the input x is first mapped by a feature extractor G_f to a d-dimensional feature vector f ∈ R^d; we denote the vector of parameters of all layers in this mapping as θ_f, so that f = G_f(x; θ_f). Then, the feature vector f is mapped by the label predictor G_y to the label y, with parameters θ_y. Finally, the same feature vector f is mapped to the domain label d by the domain classifier G_d with parameters θ_d. For model learning, we minimize the label prediction loss on the annotated (i.e., source) part of the training set: the parameters of both the feature extractor and the label predictor are optimized to minimize the empirical loss on the source-domain samples. This ensures the discriminativeness of the features f and the overall good prediction performance of the combination of the feature extractor and the label predictor on the source domain. At the same time, we want to make the features f domain-invariant, i.e., we need the distributions S(f) = {G_f(x; θ_f) | x ∼ S(x)} and T(f) = {G_f(x; θ_f) | x ∼ T(x)} to be similar [30]. To measure the dissimilarity of the distributions S(f) and T(f), which are constantly changing as learning progresses, we look at the loss of the domain classifier G_d, provided that the parameters θ_d of the domain classifier have been trained to discriminate between the two feature distributions.
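As a concrete illustration, the three mappings G_f, G_y, and G_d can be sketched as small PyTorch modules. This is a minimal sketch, not the paper's code: the backbone follows the stated CNN feature extractor with two convolutional layers, kernel size 3, and 64 filters, while the pooling layer, linear heads, and input shape (6 features × 1000 time steps, 9 classes) are assumptions drawn from the dataset description.

```python
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(               # G_f(x; theta_f)
    nn.Conv1d(6, 64, kernel_size=3, padding=1),  # two conv layers, kernel 3, 64 filters
    nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                     # pooling choice is an assumption
    nn.Flatten(),                                # f in R^64
)
label_predictor = nn.Linear(64, 9)    # G_y(f; theta_y): 9 fault classes
domain_classifier = nn.Linear(64, 2)  # G_d(f; theta_d): source vs. target

x = torch.randn(4, 6, 1000)           # (batch, features, time steps)
f = feature_extractor(x)
print(label_predictor(f).shape, domain_classifier(f).shape)
```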
In training, to obtain domain-invariant features we seek the parameters θ_f of the feature extractor that maximize the loss of the domain classifier (making the two feature distributions as similar as possible), while simultaneously seeking the parameters θ_d of the domain classifier that minimize that same loss. In addition, we seek to minimize the loss of the label predictor. The objective function is:

$$E(\theta_f, \theta_y, \theta_d) = \sum_{i=1}^{N} L_y\big(G_y(G_f(x_i; \theta_f); \theta_y), y_i\big) - \lambda \sum_{i=1}^{N} L_d\big(G_d(G_f(x_i; \theta_f); \theta_d), d_i\big) = \sum_{i=1}^{N} L_y^i(\theta_f, \theta_y) - \lambda \sum_{i=1}^{N} L_d^i(\theta_f, \theta_d) \quad (1)$$

where L_y is the loss for label prediction, L_d is the loss for domain classification, and L_y^i, L_d^i denote the corresponding loss functions evaluated at the i-th training example. We seek the parameters θ̂_f, θ̂_y, θ̂_d by solving the following optimization problem:

$$(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d) \quad (2)$$

We then optimize via backpropagation to find the parameters θ̂_f, θ̂_y, θ̂_d of the class classifier and the domain classifier. A gradient reversal layer (GRL) is used to implement the −λ factor in (1): during backpropagation, the partial derivatives of the loss downstream of the GRL (i.e., L_d) with respect to the layer parameters upstream of the GRL (i.e., θ_f) get multiplied by −λ (i.e., ∂L_d/∂θ_f is effectively replaced with −λ ∂L_d/∂θ_f). The GRL has forward and backward functions R_λ(x):

$$R_\lambda(x) = x \quad (3)$$

$$\frac{dR_\lambda}{dx} = -\lambda I \quad (4)$$

where I is the identity matrix. For the feature extractor, we use a CNN: in our baseline experiments the CNN model gave the best results, so we use the CNN architecture and its representation for feature extraction. The CNN has two convolutional layers, with kernel size 3 and 64 filters.

V. EXPERIMENTS

A. Dataset. In this case study, we work on the dataset originally reported in [8].
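The gradient reversal layer of Eqs. (3)–(4) above is typically implemented as a custom autograd function: identity on the forward pass, gradient scaled by −λ on the backward pass. A minimal PyTorch sketch (the class and function names are our own, not the paper's code):

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in forward; multiplies the incoming gradient by -lambda in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient for x is -lambda * grad_output; lambd itself gets no gradient.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Check: the gradient through the layer is scaled by -lambda.
x = torch.ones(3, requires_grad=True)
grad_reverse(x, lambd=0.5).sum().backward()
print(x.grad)  # each entry is -0.5
```

Placing this layer between the feature extractor and the domain classifier lets a single ordinary backward pass realize the min-max objective of Eq. (2).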
As in [8], we retained the desired and realized trajectory coordinates (x, y, z) and introduced a derived feature set representing the residuals between the desired and realized trajectories. As a result, the final feature set comprises six features: the desired trajectory coordinates (x, y, z) and the corresponding residuals (x, y, z).

Fig. 2: DANN Architecture [9].

The source domain dataset generated by the digital twin consists of 3,600 samples across 9 distinct labels, with each label containing 400 samples. The real-world measurements are treated as the target domain; we have 90 samples in the target domain. We split the source domain dataset into training and validation sets with a 9-to-1 ratio, and the target domain dataset is used as the test set. The DANN described in Sect. IV is used to train a fault diagnosis model using the source domain data. Only the measured features of the target domain, not its labels, are used in the training process of the DANN to learn domain-invariant features. Then, the trained DANN is applied to predict the failure labels of the target domain.

B. Evaluation Metrics. The performance of all methods is evaluated using Accuracy and F1 Score, which are defined as follows: a) Accuracy:

$$\text{Accuracy} = \frac{\text{Number of Correct Predictions}}{\text{Total Number of Predictions}} = \frac{TP + TN}{TP + TN + FP + FN} \quad (5)$$

where TP, TN, FP, and FN represent the number of true positives, true negatives, false positives, and false negatives, respectively. b) F1 Score: The F1 Score is the harmonic mean of precision and recall:

$$\text{F1 Score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (6)$$

where:

$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN} \quad (7)$$

These metrics provide a balanced evaluation of the model's performance.

C. Benchmarked models. We use four currently prevalent deep learning models as baselines: •LSTM [10] Long Short-Term Memory (LSTM) handles time-series data in deep learning and is often used to prevent gradient vanishing and gradient explosion.
LSTM is a special type of recurrent neural network (RNN) and can effectively capture and process long-term dependencies in sequence data by introducing memory units and gating mechanisms. •Transformer [11] The Transformer is better at modeling context dependency and is very versatile, especially in multimodal settings. Its ability to dynamically focus on relevant parts of the input is a key reason why Transformer models excel at processing sequence data. •CNN [12] Convolutional Neural Networks (CNNs) are mainly used as visual neural networks, extracting features layer by layer through multiple deep convolutions. •TCN [13] The TCN is a deep learning model specifically designed to process sequential data, combining the parallel processing capabilities of convolutional neural networks (CNNs) with the long-term dependency modeling capabilities of recurrent neural networks (RNNs).

D. Implementation Details. The implementation of the DANN is carried out using PyTorch. The experiments are conducted on an NVIDIA RTX 3060 GPU with the following parameter settings: learning rate 0.001, batch size 32, number of epochs 250, Adam optimizer, and

$$\alpha = \frac{2}{1 + e^{-10p}} - 1 \quad (8)$$

where

$$p = \frac{\text{epoch}}{\text{max\_epoch}} \quad (9)$$

VI. RESULTS AND DISCUSSIONS

A. Average accuracy and F1 score over all methods. In this subsection, we systematically compare the results of the DANN with the four benchmarked models. We conduct experiments to evaluate the accuracy of the models on the training set, validation set, and real test set, as shown in Table I. Additionally, we record the F1 score for each of the nine classes, as shown in Table II. Due to the randomness of deep learning models, each experiment is conducted five times, and both the average values and standard deviations of the performance metrics are calculated. From Table I, it can be seen that the four benchmarked deep learning models do not perform well, especially on the test set.
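The adaptation-weight schedule of Eqs. (8)–(9) ramps α smoothly from 0 toward 1 over training; a direct transcription:

```python
import math

def dann_alpha(epoch: int, max_epoch: int) -> float:
    """Alpha schedule from Eqs. (8)-(9): alpha = 2 / (1 + exp(-10 p)) - 1."""
    p = epoch / max_epoch
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

# Starts at 0 and saturates near 1 by the end of the 250-epoch run.
print(dann_alpha(0, 250))    # 0.0
print(dann_alpha(250, 250))  # ~0.9999
```

Keeping α small early in training lets the label predictor stabilize before the adversarial domain loss dominates.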
The performance on the test set drops significantly compared to the training and validation sets. This can be explained by the imprecision of the simulation model used to generate the training data. The DANN, on the other hand, achieves much better performance on the test set. This is because, through domain adaptation, the DANN is able to extract domain-invariant features and generalize them to the target domain. It is observed from Table II that most of the benchmarked models exhibit very low classification accuracy for the healthy state. This is because the healthy state is very similar to the states where one motor has steady-state errors. When the simulation model is not accurate, the generated training data make it even more difficult to distinguish between healthy and steady-state-error states. The DANN, on the other hand, performs well in classifying the healthy state. This is because, after domain adaptation, the healthy state becomes well-separated from the other states in the extracted feature space. In summary, among the commonly used deep learning models in our experiments, the model that combines a deeper and wider CNN backbone with the DANN structure is the relatively optimal choice.

B. Ablation study for Digital Twin. To demonstrate the necessity of using a digital twin model for this task, we conduct an ablation experiment. We train the model using only the real test set, excluding the training and validation sets generated entirely by the digital twin model. In the real test data, we split the dataset into training and testing sets at a ratio of 7:3. Our dataset contains only 90 real data points, and it is clear that most deep learning models struggle to fit on such a small dataset. The results, recorded in Table III, indicate that, with such a limited amount of data, common methods cannot make accurate predictions.
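The metrics of Eqs. (5)–(7) can be computed directly from predicted and true labels; a minimal sketch, with the F1 score computed per category in a one-vs-rest fashion as reported in Table II:

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions, Eq. (5)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_for_class(y_true, y_pred, cls):
    """One-vs-rest F1 for one category, Eqs. (6)-(7)."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy 3-class example (labels are illustrative, not the paper's data).
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(accuracy(y_true, y_pred))         # 4/6 ≈ 0.667
print(f1_for_class(y_true, y_pred, 1))  # precision 2/3, recall 1 → ≈ 0.8
```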
Using the digital twin model to generate simulation data, on the other hand, clearly improves performance, as the generated simulation data help the deep learning model better learn the relevant features.

VII. CONCLUSIONS AND FUTURE WORKS

In this paper, we proposed a new deep learning baseline for fault diagnosis using an existing digital twin dataset. We applied commonly used lightweight deep learning models and demonstrated that the Domain-Adversarial Neural Network (DANN) approach with a CNN backbone, as a transfer learning method, achieves higher accuracy compared to other models. Furthermore, our experiments validate that combining digital twin simulation with domain adaptation techniques can effectively address the issue of limited real-world data in fault diagnosis tasks. We selected lightweight models such as CNN, TCN, Transformer, and LSTM due to their wide adoption in time-series fault diagnosis, ease of training, and relatively low computational cost. Although these models serve as strong baselines, we acknowledge that more advanced architectures, such as pre-trained large-scale models or graph-based neural networks, may offer improved generalization and performance. Exploring these alternatives remains a promising direction for future research. However, several limitations remain. First, the DANN framework requires more computational resources and deep learning expertise, which may pose challenges for practical deployment, particularly in resource-constrained industrial settings. Second, the inevitable discrepancies between the digital twin and the real-world system limit the performance of the model, as current simulations cannot fully capture complex physical dynamics. Third, while DANN improves generalization, the deep learning models used in this study still have room for improvement.
Future work could explore more robust and generalizable models, such as those pre-trained on large-scale datasets, or more advanced domain adaptation methods.

ACKNOWLEDGMENT

The research of Zhiguo Zeng is supported by ANR-22-CE10-0004 and the chair on Risk and Resilience of Complex Systems (Chaire EDF, Orange and SNCF). Haiwei Fu and Zhenling Chen participated in this project as a lab project in their master curriculum at CentraleSupélec. The authors would like to thank Dr. Myriam Tami for managing this project.

TABLE I: Performance Comparison of Baseline Models
| Model | Training Accuracy (%) | Validation Accuracy (%) | Test Accuracy (%) |
|---|---|---|---|
| LSTM | 96.06±5.57 | 92.22±4.60 | 56.00±4.59 |
| Transformer | 97.73±0.33 | 75.94±1.52 | 48.44±2.29 |
| TCN | 87.96±0.86 | 67.67±0.65 | 44.22±1.63 |
| CNN | 99.94±0.11 | 96.78±0.76 | 70.00±1.99 |
| DANN | 99.29±0.67 | 95.28±0.72 | 80.22±1.78 |

TABLE II: Performance Comparison on Each Category (F1 Score)
| Category | LSTM | Transformer | TCN | CNN | DANN |
|---|---|---|---|---|---|
| Healthy | 0.00±0.00 | 0.00±0.00 | 0.07±0.09 | 0.07±0.09 | 0.67±0.04 |
| Motor 1 Stuck | 0.86±0.06 | 0.63±0.05 | 0.65±0.04 | 0.81±0.03 | 0.84±0.04 |
| Motor 1 Steady state error | 0.55±0.14 | 0.67±0.09 | 0.46±0.04 | 0.85±0.03 | 0.90±0.05 |
| Motor 2 Stuck | 0.72±0.05 | 0.65±0.14 | 0.36±0.03 | 0.73±0.07 | 0.79±0.04 |
| Motor 2 Steady state error | 0.53±0.16 | 0.40±0.05 | 0.46±0.08 | 0.90±0.05 | 0.87±0.02 |
| Motor 3 Stuck | 0.55±0.05 | 0.54±0.08 | 0.48±0.05 | 0.63±0.09 | 0.80±0.03 |
| Motor 3 Steady state error | 0.63±0.11 | 0.38±0.10 | 0.62±0.10 | 0.91±0.03 | 0.91±0.06 |
| Motor 4 Stuck | 0.49±0.06 | 0.42±0.08 | 0.40±0.07 | 0.59±0.06 | 0.78±0.04 |
| Motor 4 Steady state error | 0.43±0.05 | 0.41±0.07 | 0.28±0.02 | 0.53±0.02 | 0.62±0.08 |

TABLE III: Performance Ablation Study
| Model | Only Real Data Accuracy (%) | Digital twin-supported deep learning (%) |
|---|---|---|
| LSTM | 14.92±4.09 | 56.00±4.59 |
| Transformer | 18.10±2.58 | 48.44±2.29 |
| TCN | 15.24±1.62 | 44.22±1.63 |
| CNN | 13.97±2.54 | 70.00±1.99 |
| DANN | 15.87±4.71 | 80.22±1.78 |

REFERENCES
[1] Y. Zhang, J. Ji, Z. Ren, Q. Ni, F. Gu, K. Feng, K. Yu, J. Ge, Z. Lei, and Z.
Liu, “Digital twin-driven partial domain adaptation network for intelligent fault diagnosis of rolling bearing,” Reliability Engineering & System Safety, vol. 234, p. 109186, 2023.
[2] D. Zhong, Z. Xia, Y. Zhu, and J. Duan, “Overview of predictive maintenance based on digital twin technology,” Heliyon, vol. 9, no. 4, 2023.
[3] M. G. Juarez, V. J. Botti, and A. S. Giret, “Digital twins: Review and challenges,” Journal of Computing and Information Science in Engineering, vol. 21, no. 3, p. 030802, 2021.
[4] P. Jain, J. Poon, J. P. Singh, C. Spanos, S. R. Sanders, and S. K. Panda, “A digital twin approach for fault diagnosis in distributed photovoltaic systems,” IEEE Transactions on Power Electronics, vol. 35, no. 1, pp. 940–956, 2019.
[5] J. Wang, L. Ye, R. X. Gao, C. Li, and L. Zhang, “Digital twin for rotating machinery fault diagnosis in smart manufacturing,” International Journal of Production Research, vol. 57, no. 12, pp. 3920–3934, 2019.
[6] C. Yang, B. Cai, Q. Wu, C. Wang, W. Ge, Z. Hu, W. Zhu, L. Zhang, and L. Wang, “Digital twin-driven fault diagnosis method for composite faults by combining virtual and real data,” Journal of Industrial Information Integration, vol. 33, p. 100469, 2023.
[7] Y. Ran, X. Zhou, P. Lin, Y. Wen, and R. Deng, “A survey of predictive maintenance: Systems, purposes and approaches,” arXiv preprint arXiv:1912.07383, pp. 1–36, 2019.
[8] K. M. Court, X. M. Court, S. Du, and Z. Zeng, “Use digital twins to support fault diagnosis from system-level condition-monitoring data,” arXiv preprint arXiv:2411.01360, 2024.
[9] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in International Conference on Machine Learning, pp. 1180–1189, PMLR, 2015.
[10] S. Hochreiter, “Long short-term memory,” Neural Computation, MIT Press, 1997.
[11] A. Vaswani, “Attention is all you need,” Advances in Neural Information Processing Systems, 2017.
[12] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W.
Hubbard, and L. Jackel, “Handwritten digit recognition with a back-propagation network,” Advances in Neural Information Processing Systems, vol. 2, 1989.
[13] S. Bai, J. Z. Kolter, and V. Koltun, “An empirical evaluation of generic convolutional and recurrent networks for sequence modeling,” arXiv preprint arXiv:1803.01271, 2018.
[14] M. He and D. He, “Deep learning based approach for bearing fault diagnosis,” IEEE Transactions on Industry Applications, vol. 53, no. 3, pp. 3057–3065, 2017.
[15] M. Xia, T. Li, L. Xu, L. Liu, and C. W. De Silva, “Fault diagnosis for rotating machinery using multiple sensors and convolutional neural networks,” IEEE/ASME Transactions on Mechatronics, vol. 23, no. 1, pp. 101–110, 2017.
[16] J. Shi, D. Peng, Z. Peng, Z. Zhang, K. Goebel, and D. Wu, “Planetary gearbox fault diagnosis using bidirectional-convolutional LSTM networks,” Mechanical Systems and Signal Processing, vol. 162, p. 107996, 2022.
[17] S. Plakias and Y. S. Boutalis, “Fault detection and identification of rolling element bearings with attentive dense CNN,” Neurocomputing, vol. 405, pp. 208–217, 2020.
[18] W. Li, R. Huang, J. Li, Y. Liao, Z. Chen, G. He, R. Yan, and K. Gryllias, “A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges,” Mechanical Systems and Signal Processing, vol. 167, p. 108487, 2022.
[19] H. Zhiyi, S. Haidong, J. Lin, C. Junsheng, and Y. Yu, “Transfer fault diagnosis of bearing installed in different machines using enhanced deep auto-encoder,” Measurement, vol. 152, p. 107393, 2020.
[20] H. Cao, H. Shao, X. Zhong, Q. Deng, X. Yang, and J. Xuan, “Unsupervised domain-share CNN for machine fault transfer diagnosis from steady speeds to time-varying speeds,” Journal of Manufacturing Systems, vol. 62, pp. 186–198, 2022.
[21] L. Guo, Y. Lei, S. Xing, T. Yan, and N.
Li, “Deep convolutional transfer learning network: A new method for intelligent fault diagnosis of machines with unlabeled data,” IEEE Transactions on Industrial Electronics, vol. 66, no. 9, pp. 7316–7325, 2018.
[22] S. Pang and X. Yang, “A cross-domain stacked denoising autoencoders for rotating machinery fault diagnosis under different working conditions,” IEEE Access, vol. 7, pp. 77277–77292, 2019.
[23] D. Xiao, Y. Huang, L. Zhao, C. Qin, H. Shi, and C. Liu, “Domain adaptive motor fault diagnosis using deep transfer learning,” IEEE Access, vol. 7, pp. 80937–80949, 2019.
[24] B. Yang, Y. Lei, F. Jia, and S. Xing, “An intelligent fault diagnosis approach based on transfer learning from laboratory bearings to locomotive bearings,” Mechanical Systems and Signal Processing, vol. 122, pp. 692–706, 2019.
[25] Z. He, H. Shao, X. Zhang, J. Cheng, and Y. Yang, “Improved deep transfer auto-encoder for fault diagnosis of gearbox under variable working conditions with small training samples,” IEEE Access, vol. 7, pp. 115368–115377, 2019.
[26] H. Kim and B. D. Youn, “A new parameter repurposing method for parameter transfer with small dataset and its application in fault diagnosis of rolling element bearings,” IEEE Access, vol. 7, pp. 46917–46930, 2019.
[27] S. Shao, S. McAleer, R. Yan, and P. Baldi, “Highly accurate machine fault diagnosis using deep transfer learning,” IEEE Transactions on Industrial Informatics, vol. 15, no. 4, pp. 2446–2455, 2018.
[28] C. Cheng, B. Zhou, G. Ma, D. Wu, and Y. Yuan, “Wasserstein distance based deep adversarial transfer learning for intelligent fault diagnosis with unlabeled or insufficient labeled data,” Neurocomputing, vol. 409, pp. 35–45, 2020.
[29] S. Lu, T. Sirojan, B. T. Phung, D. Zhang, and E. Ambikairajah, “DA-DCGAN: An effective methodology for DC series arc fault diagnosis in photovoltaic systems,” IEEE Access, vol. 7, pp. 45831–45840, 2019.
[30] H.
Shimodaira, “Improving predictive inference under covariate shift by weighting the log-likelihood function,” Journal of Statistical Planning and Inference, vol. 90, no. 2, pp. 227–244, 2000. | 2 | 1 | The DANN model employs a CNN architecture with two convolutional layers. Given the specified batch size of 32 and 250 training epochs on a dataset with 3,600 samples (400 samples per class for 9 distinct labels, plus a significantly smaller test set of 90 samples), the total iterations required for training would be (3600 / 32) * 250 = approximately 28,125 iterations. Based on similar CNN models, training on a single high-performance GPU like the NVIDIA RTX 3060 could complete 28,125 iterations in about 2 hours, assuming decent parallel computation and efficient training techniques. The small size of the dataset (only 3,600 samples) and the limited amount of training data needed due to the domain adaptation should also decrease training time. Thus, it is feasible to conclude that the DANN model could be trained in under 8 hours on a single GPU, given its relatively simple architecture and the specified dataset. | yes | Yes | Time Series | A domain adaptation neural network for digital twin-supported fault diagnosis | 2025-05-27T00:00:00.000Z | [https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis] | 1 | Included in Repo | 3 Hours | Copy of train_ai_pytorch_DANN.ipynb | Yes | It starts and runs successfully |
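The iteration count quoted in the reasoning field above follows directly from the stated batch size and epoch count:

```python
samples, batch_size, epochs = 3600, 32, 250

steps_per_epoch = samples / batch_size      # 112.5 (113 if the final partial batch is kept)
total_iterations = steps_per_epoch * epochs
print(total_iterations)  # 28125.0
```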
MNIST | GatedGCN+ | [] | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00 | https://arxiv.org/abs/2502.09263v1 | [
"https://github.com/LUOyk1999/GNNPlus"
] | {'Accuracy': '98.712 ± 0.137'} | [
"Accuracy"
] | Given the following paper and codebase:
Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
Codebase: https://github.com/LUOyk1999/GNNPlus
Improve the GatedGCN+ model on the MNIST dataset. The result
should improve on the following metrics: {'Accuracy': '98.712 ± 0.137'}. You must use only the codebase provided.
| Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1,2, Lei Shi*1, Xiao-Ming Wu*2 (1Beihang University, 2The Hong Kong Polytechnic University) Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in capturing long-range dependencies, while Graph Transformers (GTs) are considered superior due to their global attention mechanisms. Literature frequently suggests that GTs outperform GNNs, particularly in graph-level tasks such as graph classification and regression. In this study, we explore the untapped potential of GNNs through an enhanced framework, GNN+, which integrates six widely used techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic evaluation of three classic GNNs—GCN, GIN, and GatedGCN—enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results show that, contrary to the prevailing belief, classic GNNs excel in graph-level tasks, securing top three rankings across all datasets and achieving first place in eight, while also demonstrating greater efficiency than GTs. This highlights the potential of simple GNN architectures, challenging the belief that complex mechanisms in GTs are essential for superior graph-level performance. Our source code is available at https://github.com/LUOyk1999/tunedGNN-G. 1. Introduction Graph machine learning addresses both graph-level tasks and node-level tasks, as illustrated in Figure 1. These tasks fundamentally differ in their choice of the basic unit for dataset composition, splitting, and training, with graph-level tasks focusing on the entire graph, while node-level tasks focus on individual nodes. Graph-level tasks (Dwivedi et al.,
*Corresponding authors: Lei Shi <{leishi, luoyk}@buaa.edu.cn>, Xiao-Ming Wu <xiao-ming.wu@polyu.edu.hk>. Preprint.
Figure 1. Differences between graph-level and node-level tasks.
2023; Hu et al., 2020; Luo et al., 2023b;a) often involve the classification of relatively small molecular graphs in chemistry (Morris et al., 2020) or the prediction of protein properties in biology (Dwivedi et al., 2022). In contrast, node-level tasks typically involve large social networks (Tang et al., 2009) or citation networks (Yang et al., 2016), where the primary goal is node classification. This distinction in the fundamental unit of the dataset leads to differences in methodologies, training strategies, and application domains. Message-passing Graph Neural Networks (GNNs) (Gilmer et al., 2017), which iteratively aggregate information from local neighborhoods to learn node representations, have become the predominant approach for both graph-level and node-level tasks (Niepert et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2018; Bresson & Laurent, 2017; Wu et al., 2020). Despite their widespread success, GNNs exhibit several inherent limitations, including restricted expressiveness (Xu et al., 2018; Morris et al., 2019), over-smoothing (Li et al., 2018; Chen et al., 2020), over-squashing (Alon & Yahav, 2020), and a limited capacity to capture long-range dependencies (Dwivedi et al., 2022).
A prevalent perspective is that Graph Transformers (GTs) (Müller et al., 2023; Min et al., 2022; Hoang et al., 2024), as an alternative to GNNs, leverage global attention mechanisms that enable each node to attend to all others (Yun et al., 2019; Dwivedi & Bresson, 2020), effectively modeling long-range interactions and addressing issues such as over-smoothing, over-squashing, and limited expressiveness (Kreuzer et al., 2021; Ying et al., 2021; Zhang et al., 2023; Luo et al., 2023c; 2024b). However, the quadratic complexity of global attention mechanisms limits the scalability of GTs in large-scale, real-world applications (Behrouz & Hashemi, 2024; Sancak et al., 2024; Ding et al., 2024). Moreover, it has been noted that many state-of-the-art GTs (Chen et al., 2022; Rampášek et al., 2022; Shirzad et al., 2023; Ma et al., 2023) still rely—either explicitly or implicitly—on the message passing mechanism of GNNs to learn local node representations, thereby enhancing performance. Recent studies (Luo et al., 2024a; 2025a;b) have shown that, contrary to common belief, classic GNNs such as GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017) can achieve performance comparable to, or even exceeding, that of state-of-the-art GTs for node-level tasks. However, a similar conclusion has not yet been established for graph-level tasks. While Tönshoff et al. (2023) conducted pioneering research demonstrating that tuning a few hyperparameters can significantly enhance the performance of classic GNNs, their results indicate that these models still do not match the overall performance of GTs. Furthermore, their investigation is limited to the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022).
This raises an important question: “Can classic GNNs also excel in graph-level tasks?” To thoroughly investigate this question, we introduce GNN+, an enhanced GNN framework that incorporates established techniques into the message-passing mechanism, to effectively address graph-level tasks. As illustrated in Fig. 2, GNN+ integrates six widely used techniques: the incorporation of edge features (Gilmer et al., 2017), normalization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014), residual connections (He et al., 2016), feed-forward networks (FFN) (Vaswani et al., 2017), and positional encoding (Vaswani et al., 2017). Each technique serves as a hyperparameter that can be tuned to optimize performance. We systematically evaluate 3 classic GNNs—GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017)—enhanced by the GNN+ framework across 14 well-known graph-level datasets from GNN Benchmark (Dwivedi et al., 2023), LRGB (Dwivedi et al., 2022), and OGB (Hu et al., 2020). The results demonstrate that the enhanced versions of classic GNNs match or even outperform state-of-the-art (SOTA) GTs, achieving rankings in the top three, including first place in eight datasets, while exhibiting superior efficiency. These findings provide a positive answer to the previously posed question, suggesting that the true potential of GNNs for graph-level applications has been previously underestimated, and the GNN+ framework effectively unlocks this potential while addressing their inherent limitations. Our ablation study also highlights the importance of each technique used in GNN+ and offers valuable insights for future research. 2. Classic GNNs for Graph-level Tasks Define a graph as $G = (\mathcal{V}, \mathcal{E}, X, E)$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. The node feature matrix is $X \in \mathbb{R}^{|\mathcal{V}| \times d_V}$, where $|\mathcal{V}|$ is the number of nodes and $d_V$ is the dimension of the node features.
The edge feature matrix is $E \in \mathbb{R}^{|\mathcal{E}| \times d_E}$, where $|\mathcal{E}|$ is the number of edges and $d_E$ is the dimension of the edge features. Let $A \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ denote the adjacency matrix of $G$. Message-passing Graph Neural Networks (GNNs) compute node representations $h_v^l$ at each layer $l$ via a message-passing mechanism, defined by Gilmer et al. (2017): $h_v^l = \mathrm{UPDATE}^l\big(h_v^{l-1}, \mathrm{AGG}^l\big(\{h_u^{l-1} \mid u \in \mathcal{N}(v)\}\big)\big)$, (1) where $\mathcal{N}(v)$ represents the neighboring nodes adjacent to $v$, $\mathrm{AGG}^l$ is the message aggregation function, and $\mathrm{UPDATE}^l$ is the update function. Initially, each node $v$ is assigned a feature vector $h_v^0 = x_v \in \mathbb{R}^d$. The function $\mathrm{AGG}^l$ is then used to aggregate information from the neighbors of $v$ to update its representation. The output of the last layer $L$, i.e., $\mathrm{GNN}(v, A, X) = h_v^L$, is the representation of $v$ produced by the GNN. In this work, we focus on three classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017), which differ in their approach to learning the node representation $h_v^l$. Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), the vanilla GCN model, is formulated as: $h_v^l = \sigma\big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l\big)$, (2) where $\hat{d}_v = 1 + \sum_{u \in \mathcal{N}(v)} 1$, $\sum_{u \in \mathcal{N}(v)} 1$ denotes the degree of node $v$, $W^l$ is the trainable weight matrix in layer $l$, and $\sigma$ is the activation function, e.g., $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. Graph Isomorphism Networks (GIN) (Xu et al., 2018) learn node representations through a different approach: $h_v^l = \mathrm{MLP}^l\big((1 + \epsilon) \cdot h_v^{l-1} + \sum_{u \in \mathcal{N}(v)} h_u^{l-1}\big)$, (3) where $\epsilon$ is a constant, typically set to 0, and $\mathrm{MLP}^l$ denotes a multi-layer perceptron, which usually consists of 2 layers.
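To make Eqs. (2) and (3) concrete, here is a pure-Python sketch of one GCN update and one GIN update on a toy path graph, using scalar node features and scalar weights; all names are illustrative and not taken from the GNNPlus codebase:

```python
import math

def gcn_layer(adj, h, w):
    """Eq. (2): symmetric-normalized sum over N(v) ∪ {v}, followed by ReLU."""
    d_hat = {v: 1 + len(adj[v]) for v in adj}  # \hat{d}_v = 1 + deg(v)
    return {v: max(0.0, sum(h[u] * w / math.sqrt(d_hat[u] * d_hat[v])
                            for u in adj[v] | {v}))
            for v in adj}

def gin_layer(adj, h, mlp, eps=0.0):
    """Eq. (3): (1 + eps) * h_v plus an unnormalized neighbor sum, then an MLP."""
    return {v: mlp((1 + eps) * h[v] + sum(h[u] for u in adj[v])) for v in adj}

# Toy 3-node path graph 0–1–2 with scalar features.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
h = {0: 1.0, 1: 2.0, 2: 3.0}
h_gcn = gcn_layer(adj, h, w=1.0)
h_gin = gin_layer(adj, h, mlp=lambda x: 2.0 * x)  # stand-in for a 2-layer MLP
print(h_gin)  # {0: 6.0, 1: 12.0, 2: 10.0}
```

The structural difference is visible even at this scale: GCN rescales each neighbor by degree before summing, while GIN keeps the raw sum so that different multisets of neighbor features stay distinguishable.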
Residual Gated Graph Convolutional Networks (GatedGCN) (Bresson & Laurent, 2017) enhance traditional graph convolutions by incorporating gating mechanisms, improving adaptability and expressiveness: $h_v^l = h_v^{l-1} W_1^l + \sum_{u \in \mathcal{N}(v)} \eta_{v,u} \odot h_u^{l-1} W_2^l$, (4) where $\eta_{v,u} = \sigma(h_v^{l-1} W_3^l + h_u^{l-1} W_4^l)$ is the gating function, and $\sigma$ denotes the sigmoid activation function. This gating function determines how much each neighboring node contributes to updating the representation of the current node. The matrices $W_1^l, W_2^l, W_3^l, W_4^l$ are trainable weight matrices specific to the layer $l$. Graph-level tasks treat the entire graph, rather than individual nodes or edges, as the fundamental unit for dataset composition, splitting, and training. Formally, given a labeled graph dataset $\Gamma = \{(G_i, y_i)\}_{i=1}^n$, each graph $G_i$ is associated with a label vector $y_i$, representing either categorical labels for classification or continuous values for regression. Next, the dataset $\Gamma$ is typically split into training, validation, and test sets, denoted as $\Gamma = \Gamma_{\mathrm{train}} \cup \Gamma_{\mathrm{val}} \cup \Gamma_{\mathrm{test}}$. Graph-level tasks encompass inductive prediction tasks that operate on entire graphs, as well as on individual nodes or edges (Dwivedi et al., 2022), with each corresponding to a distinct label vector $y_i$. Each type of task requires a tailored graph readout function $\mathrm{R}$, which aggregates the output representations to compute the readout result, expressed as: $h_i^{\mathrm{readout}} = \mathrm{R}\big(\{h_v^L : v \in \mathcal{V}_i\}\big)$, (5) where $\mathcal{V}_i$ represents the set of nodes in the graph $G_i$. For example, for graph prediction tasks, which aim to make predictions about the entire graph, the readout function $\mathrm{R}$ often operates as a global mean pooling function. Finally, for any graph $G_i$, the readout result is passed through a prediction head $g(\cdot)$ to obtain the predicted label $\hat{y}_i = g(h_i^{\mathrm{readout}})$. The training objective is to minimize the total loss $\mathcal{L}(\theta) = \sum_{G_i \in \Gamma_{\mathrm{train}}} \ell(\hat{y}_i, y_i)$ w.r.t.
all graphs in the training set $\Gamma_{\mathrm{train}}$, where $y_i$ represents the ground-truth label of $G_i$ and $\theta$ denotes the trainable GNN parameters. 3. GNN+: Enhancing Classic GNNs for Graph-level Tasks We propose an enhancement to classic GNNs for graph-level tasks by incorporating six popular techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks (FFN), and positional encoding. The enhanced framework, GNN+, is illustrated in Figure 2. 3.1. Edge Feature Integration Edge features were initially incorporated into some GNN frameworks (Gilmer et al., 2017; Hu et al., 2019) by directly integrating them into the message-passing process to enhance information propagation between nodes. Following this practice, GraphGPS (Rampášek et al., 2022) and subsequent GTs encode edge features within their local modules to enrich node representations.
Figure 2. The architecture of GNN+.
Taking GCN (Eq. 2) as an example, the edge features are integrated into the message-passing process as follows: $h_v^l = \sigma\big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\big)$, (6) where $W_e^l$ is the trainable weight matrix in layer $l$, and $e_{uv}$ is the feature vector of the edge between $u$ and $v$. 3.2. Normalization Normalization techniques play a critical role in stabilizing the training of GNNs by mitigating the effects of covariate shift, where the distribution of node embeddings changes across layers during training. By normalizing node embeddings at each layer, the training process becomes more stable, enabling the use of higher learning rates and achieving faster convergence (Cai et al., 2021). Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Layer Normalization (LN) (Ba et al., 2016) are widely used techniques, typically applied to the output of each layer before the activation function $\sigma(\cdot)$. Here, we use BN: $h_v^l = \sigma\big(\mathrm{BN}\big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\big)\big)$. (7) 3.3.
Dropout Dropout (Srivastava et al., 2014), a technique widely used in convolutional neural networks (CNNs) to address overfitting by reducing co-adaptation among hidden neurons (Hinton et al., 2012; Yosinski et al., 2014), has also been found to be effective in addressing similar issues in GNNs (Shu et al., 2022), where the co-adaptation effects propagate and accumulate via message passing among different nodes. Typically, dropout is applied to the embeddings after activation: $h_v^l = \mathrm{Dropout}\big(\sigma\big(\mathrm{BN}\big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\big)\big)\big)$. (8) 3.4. Residual Connection Residual connections (He et al., 2016) significantly enhance CNN performance by directly connecting the input of a layer to its output, thus alleviating the problem of vanishing gradients. They were first adopted by the vanilla GCN (Kipf & Welling, 2017) and have since been incorporated into subsequent works such as GatedGCN (Bresson & Laurent, 2017) and DeepGCNs (Li et al., 2019). Formally, residual connections can be integrated into GNNs as follows: $h_v^l = \mathrm{Dropout}\big(\sigma\big(\mathrm{BN}\big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\big)\big)\big) + h_v^{l-1}$. (9) While deeper networks, such as deep CNNs (He et al., 2016; Huang et al., 2017), are capable of extracting more complex features, GNNs encounter challenges like over-smoothing (Li et al., 2018), where deeper models lead to indistinguishable node representations. Consequently, most GNNs are shallow, typically with 2 to 5 layers. However, by incorporating residual connections, we show that deeper GNNs, ranging from 3 to 20 layers, can achieve strong performance. 3.5. Feed-Forward Network GTs incorporate a feed-forward network (FFN) as a crucial component within each of their layers. The FFN enhances the model’s ability to perform complex feature transformations and introduces non-linearity, thereby increasing the network’s expressive power.
Inspired by this, we propose appending a fully-connected FFN at the end of each layer of GNNs, defined as: $\mathrm{FFN}(h) = \mathrm{BN}\big(\sigma(h W_{\mathrm{FFN}_1}^l) W_{\mathrm{FFN}_2}^l + h\big)$, (10) where $W_{\mathrm{FFN}_1}^l$ and $W_{\mathrm{FFN}_2}^l$ are the trainable weight matrices of the FFN at the $l$-th GNN layer. The node embeddings output by the FFN are then computed as: $h_v^l = \mathrm{FFN}\big(\mathrm{Dropout}\big(\sigma\big(\mathrm{BN}\big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\big)\big)\big) + h_v^{l-1}\big)$. (11) 3.6. Positional Encoding Positional encoding (PE) was introduced in the Transformer model (Vaswani et al., 2017) to represent the positions of tokens within a sequence for language modeling.
Table 1. Overview of the datasets used for graph-level tasks.
Dataset | # graphs | Avg. # nodes | Avg. # edges | Task Type
ZINC | 12,000 | 23.2 | 24.9 | Graph regression
MNIST | 70,000 | 70.6 | 564.5 | Graph classification
CIFAR10 | 60,000 | 117.6 | 941.1 | Graph classification
PATTERN | 14,000 | 118.9 | 3,039.3 | Inductive node cls.
CLUSTER | 12,000 | 117.2 | 2,150.9 | Inductive node cls.
Peptides-func | 15,535 | 150.9 | 307.3 | Graph classification
Peptides-struct | 15,535 | 150.9 | 307.3 | Graph regression
PascalVOC-SP | 11,355 | 479.4 | 2,710.5 | Inductive node cls.
COCO-SP | 123,286 | 476.9 | 2,693.7 | Inductive node cls.
MalNet-Tiny | 5,000 | 1,410.3 | 2,859.9 | Graph classification
ogbg-molhiv | 41,127 | 25.5 | 27.5 | Graph classification
ogbg-molpcba | 437,929 | 26.0 | 28.1 | Graph classification
ogbg-ppa | 158,100 | 243.4 | 2,266.1 | Graph classification
ogbg-code2 | 452,741 | 125.2 | 124.2 | Graph classification
In GTs, PE is used to incorporate graph positional or structural information. The encodings are typically added or concatenated to the input node features $x_v$ before being fed into the GTs.
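Sections 3.1–3.5 fix a specific order of operations inside each enhanced layer, summarized by Eq. (11). A structural sketch with stand-in callables (an illustration of the composition, not the paper's actual implementation):

```python
def gnn_plus_layer(h_prev, conv, bn, act, dropout, ffn):
    """One GNN+ layer in the order of Eq. (11):
    conv (with edge features) -> BN -> activation -> dropout -> residual -> FFN."""
    h = conv(h_prev)   # message passing with edge features, Eq. (6)
    h = act(bn(h))     # normalize before the nonlinearity, Eq. (7)
    h = dropout(h)     # dropout after activation, Eq. (8)
    h = h + h_prev     # residual connection, Eq. (9)
    return ffn(h)      # per-layer feed-forward block, Eq. (10)

# With identity stand-ins, only the residual path acts and the input is doubled.
out = gnn_plus_layer(1.5,
                     conv=lambda x: x, bn=lambda x: x, act=lambda x: x,
                     dropout=lambda x: x, ffn=lambda x: x)
print(out)  # 3.0
```

Each stand-in corresponds to one tunable technique of GNN+: disabling a technique amounts to replacing its callable with the identity, which is exactly what the ablations in Section 5.2 measure.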
Various PE methods have been proposed, such as Laplacian Positional Encoding (LapPE) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), Weisfeiler-Lehman Positional Encoding (WLPE) (Zhang et al., 2020), Random Walk Structural Encoding (RWSE) (Li et al., 2020; Dwivedi et al., 2021; Rampášek et al., 2022), Learnable Structural and Positional Encodings (LSPE) (Dwivedi et al., 2021), and Relative Random Walk Probabilities (RRWP) (Ma et al., 2023). Following the practice, we use RWSE, one of the most efficient PE methods, to improve the performance of GNNs as follows: $x_v = [x_v \,\|\, x_v^{\mathrm{RWSE}}] W_{\mathrm{PE}}$, (12) where $[\cdot \,\|\, \cdot]$ denotes concatenation, $x_v^{\mathrm{RWSE}}$ represents the RWSE of node $v$, and $W_{\mathrm{PE}}$ is the trainable weight matrix. 4. Assessment: Experimental Setup Datasets, Table 1. We use widely adopted graph-level datasets in our experiments, including ZINC, MNIST, CIFAR10, PATTERN, and CLUSTER from the GNN Benchmark (Dwivedi et al., 2023); Peptides-func, Peptides-struct, PascalVOC-SP, COCO-SP, and MalNet-Tiny from the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021); and ogbg-molhiv, ogbg-molpcba, ogbg-ppa, and ogbg-code2 from the Open Graph Benchmark (OGB) (Hu et al., 2020). We follow their respective standard evaluation protocols including the splits and metrics. For further details, refer to Appendix A.2. Baselines. Our main focus lies on classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018; Hu et al., 2019), GatedGCN (Bresson & Laurent, 2017), the SOTA GTs: GT (2020), GraphTrans (2021), SAN (2021), Graphormer (2021), SAT (2022), EGT (2022), GraphGPS (2022; 2023), GRPE (2022), Graphormer-URPE (2022), Graphormer-GD (2023), Specformer (2023), LGI-GT (2023), GPTrans-Nano (2023b), Graph ViT/MLP-Mixer (2023), NAGphormer (2023a), DIFFormer (2023), MGT
Table 2. Test performance on five benchmarks from (Dwivedi et al., 2023) (%).
Shown is the mean ± s.d. of 5 runs with different random seeds. + denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for ZINC, PATTERN, and CLUSTER, and ∼100K for MNIST and CIFAR10. The top 1st, 2nd, and 3rd results are highlighted.
Dataset | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER
# graphs | 12,000 | 70,000 | 60,000 | 14,000 | 12,000
Avg. # nodes | 23.2 | 70.6 | 117.6 | 118.9 | 117.2
Avg. # edges | 24.9 | 564.5 | 941.1 | 3039.3 | 2150.9
Metric | MAE↓ | Accuracy↑ | Accuracy↑ | Accuracy↑ | Accuracy↑
GT (2020) | 0.226±0.014 | 90.831±0.161 | 59.753±0.293 | 84.808±0.068 | 73.169±0.622
SAN (2021) | 0.139±0.006 | – | – | 86.581±0.037 | 76.691±0.650
Graphormer (2021) | 0.122±0.006 | – | – | – | –
SAT (2022) | 0.094±0.008 | – | – | 86.848±0.037 | 77.856±0.104
EGT (2022) | 0.108±0.009 | 98.173±0.087 | 68.702±0.409 | 86.821±0.020 | 79.232±0.348
GraphGPS (2022) | 0.070±0.004 | 98.051±0.126 | 72.298±0.356 | 86.685±0.059 | 78.016±0.180
GRPE (2022) | 0.094±0.002 | – | – | 87.020±0.042 | –
Graphormer-URPE (2022) | 0.086±0.007 | – | – | – | –
Graphormer-GD (2023) | 0.081±0.009 | – | – | – | –
Specformer (2023) | 0.066±0.003 | – | – | – | –
LGI-GT (2023) | – | – | – | 86.930±0.040 | –
GPTrans-Nano (2023b) | – | – | – | 86.731±0.085 | –
Graph ViT/MLP-Mixer (2023) | 0.073±0.001 | 98.460±0.090 | 73.960±0.330 | – | –
Exphormer (2023) | – | 98.414±0.038 | 74.754±0.194 | 86.734±0.008 | –
GRIT (2023) | 0.059±0.002 | 98.108±0.111 | 76.468±0.881 | 87.196±0.076 | 80.026±0.277
GRED (2024) | 0.077±0.002 | 98.383±0.012 | 76.853±0.185 | 86.759±0.020 | 78.495±0.103
GEAET (2024) | – | 98.513±0.086 | 76.634±0.427 | 86.993±0.026 | –
TIGT (2024) | 0.057±0.002 | 98.231±0.132 | 73.963±0.361 | 86.681±0.062 | 78.025±0.223
Cluster-GT (2024a) | 0.071±0.004 | – | – | – | –
GMN (2024) | – | 98.391±0.182 | 74.560±0.381 | 87.090±1.260 | –
Graph-Mamba (2024) | – | 98.420±0.080 | 73.700±0.340 | 86.710±0.050 | 76.800±0.360
GCN | 0.367±0.011 | 90.705±0.218 | 55.710±0.381 | 71.892±0.334 | 68.498±0.976
GCN+ | 0.076±0.009 (79.3%↓) | 98.382±0.095 (8.5%↑) | 69.824±0.413 (25.4%↑) | 87.021±0.095 (21.1%↑) | 77.109±0.872 (12.6%↑)
GIN | 0.526±0.051 | 96.485±0.252 | 55.255±1.527 | 85.387±0.136 | 64.716±1.553
GIN+ | 0.065±0.004 (87.6%↓) | 98.285±0.103 (1.9%↑) | 69.592±0.287 (25.9%↑) | 86.842±0.048 (1.7%↑) | 74.794±0.213 (15.6%↑)
GatedGCN | 0.282±0.015 | 97.340±0.143 | 67.312±0.311 | 85.568±0.088 | 73.840±0.326
GatedGCN+ | 0.077±0.005 (72.7%↓) | 98.712±0.137 (1.4%↑) | 77.218±0.381 (14.7%↑) | 87.029±0.037 (1.7%↑) | 79.128±0.235 (7.1%↑)
Time (epoch) of GraphGPS | 21s | 76s | 64s | 32s | 86s
Time (epoch) of GCN+ | 7s | 60s | 40s | 19s | 29s
(2023), DRew (2023), Exphormer (2023), GRIT (2023), GRED (2024), GEAET (2024), Subgraphormer (2024), TIGT (2024), GECO (2024), GPNN (2024), Cluster-GT (2024a), and the SOTA graph state space models (GSSMs): GMN (2024), Graph-Mamba (2024), GSSC (2024b). Furthermore, various other GTs exist in related surveys (Hoang et al., 2024; Shehzad et al., 2024; Müller et al., 2023), empirically shown to be inferior to the GTs we compared against for graph-level tasks. We report the performance results of baselines primarily from (Rampášek et al., 2022; Tönshoff et al., 2023), with the remaining obtained from their respective original papers or official leaderboards whenever possible, as those results are obtained by well-tuned models. Hyperparameter Configurations. We conduct hyperparameter tuning on 3 classic GNNs, consistent with the hyperparameter search space of GraphGPS (Rampášek et al., 2022; Tönshoff et al., 2023). Specifically, we utilize the AdamW optimizer (Loshchilov, 2017) with a learning rate from {0.0001, 0.0005, 0.001} and an epoch limit of 2000. As discussed in Section 3, we focus on whether to use the edge feature module, normalization (BN), residual connections, FFN, and PE (RWSE), with dropout rates from {0.05, 0.1, 0.15, 0.2, 0.3} and the number of layers from 3 to 20. Considering the large number of hyperparameters and datasets, we do not perform an exhaustive search. Additionally, we retrain baseline GTs using the same hyperparameter search space and training environments as the classic GNNs.
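The RWSE features tuned above (Section 3.6, Eq. (12)) can be computed as the return probabilities of k-step random walks, i.e. the diagonals of $P^k$ with $P = D^{-1}A$. A pure-Python toy version, not the codebase's implementation:

```python
def rwse(adj, K):
    """x_v^RWSE = [(P^1)_{vv}, ..., (P^K)_{vv}] with P = D^{-1} A."""
    n = len(adj)
    P = [[1.0 / len(adj[v]) if u in adj[v] else 0.0 for u in range(n)]
         for v in range(n)]
    enc = [[] for _ in range(n)]
    Pk = [row[:] for row in P]  # Pk holds P^k, starting at k = 1
    for _ in range(K):
        for v in range(n):
            enc[v].append(Pk[v][v])  # k-step return probability of node v
        Pk = [[sum(Pk[i][m] * P[m][j] for m in range(n)) for j in range(n)]
              for i in range(n)]  # Pk <- Pk @ P
    return enc

# Triangle graph: all three nodes are structurally identical,
# so their encodings coincide.
enc = rwse({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, K=3)
print(enc[0])  # [0.0, 0.5, 0.25]
```

Because RWSE depends only on the graph structure, it can be precomputed once per graph before training, which is one reason it is among the cheaper PE choices.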
Since the retrained results did not surpass those in their original papers, we present the results from those sources. GNN+ denotes the enhanced version. We report mean scores and standard deviations after 5 independent runs with different random seeds. Detailed hyperparameters are provided in Appendix A. 5. Assessment: Results and Findings 5.1. Overall Performance We evaluate the performance of the enhanced versions of 3 classic GNNs across 14 well-known graph-level datasets. The enhanced versions of classic GNNs achieved state-of-the-art performance, ranking in the top three across 14 datasets, including first place in 8 of them, while also demonstrating superior efficiency. This suggests that the GNN+ framework effectively harnesses the potential of classic GNNs for graph-level tasks and successfully mitigates their inherent limitations.
Table 3. Test performance on five datasets from Long-Range Graph Benchmarks (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). + denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for all. Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny # graphs 15,535 15,535 11,355 123,286 5,000 Avg. # nodes 150.9 150.9 479.4 476.9 1,410.3 Avg. # edges 307.3 307.3 2,710.5 2,693.7 2,859.9 Metric Avg.
Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ GT (2020) 0.6326 ±0.0126 0.2529 ±0.0016 0.2694 ±0.0098 0.2618 ±0.0031 – SAN (2021) 0.6439 ±0.0075 0.2545 ±0.0012 0.3230 ±0.0039 0.2592 ±0.0158 – GraphGPS (2022) 0.6535 ±0.0041 0.2500 ±0.0005 0.3748 ±0.0109 0.3412 ±0.0044 0.9350 ±0.0041 GraphGPS (2023) 0.6534 ±0.0091 0.2509 ±0.0014 0.4440 ±0.0065 0.3884 ±0.0055 0.9350 ±0.0041 NAGphormer (2023a) – – 0.4006 ±0.0061 0.3458 ±0.0070 – DIFFormer (2023) – – 0.3988 ±0.0045 0.3620 ±0.0012 – MGT (2023) 0.6817 ±0.0064 0.2453 ±0.0025 – – – DRew (2023) 0.7150 ±0.0044 0.2536 ±0.0015 0.3314 ±0.0024 – – Graph ViT/MLP-Mixer (2023) 0.6970 ±0.0080 0.2449 ±0.0016 – – – Exphormer (2023) 0.6258 ±0.0092 0.2512 ±0.0025 0.3446 ±0.0064 0.3430 ±0.0108 0.9402 ±0.0021 GRIT (2023) 0.6988 ±0.0082 0.2460 ±0.0012 – – – Subgraphormer (2024) 0.6415 ±0.0052 0.2475 ±0.0007 – – – GRED (2024) 0.7133 ±0.0011 0.2455 ±0.0013 – – – GEAET (2024) 0.6485 ±0.0035 0.2547 ±0.0009 0.3933 ±0.0027 0.3219 ±0.0052 – TIGT (2024) 0.6679 ±0.0074 0.2485 ±0.0015 – – – GECO (2024) 0.6975 ±0.0025 0.2464 ±0.0009 0.4210 ±0.0080 0.3320 ±0.0032 – GPNN (2024) 0.6955 ±0.0057 0.2454 ±0.0003 – – – Graph-Mamba (2024) 0.6739 ±0.0087 0.2478 ±0.0016 0.4191 ±0.0126 0.3960 ±0.0175 0.9340 ±0.0027 GSSC (2024b) 0.7081 ±0.0062 0.2459 ±0.0020 0.4561 ±0.0039 – 0.9406 ±0.0064 GCN 0.6860 ±0.0050 0.2460 ±0.0007 0.2078 ±0.0031 0.1338 ±0.0007 0.8100 ±0.0081 GCN+0.7261 ±0.0067 5.9%↑0.2421 ±0.0016 1.6%↓0.3357 ±0.0087 62.0%↑0.2733 ±0.0041 104.9% ↑0.9354 ±0.0045 15.5%↑ GIN 0.6621 ±0.0067 0.2473 ±0.0017 0.2718 ±0.0054 0.2125 ±0.0009 0.8898 ±0.0055 GIN+0.7059 ±0.0089 6.6%↑0.2429 ±0.0019 1.8%↓0.3189 ±0.0105 17.3%↑0.2483 ±0.0046 16.9%↑ 0.9325 ±0.0040 4.8%↑ GatedGCN 0.6765 ±0.0047 0.2477 ±0.0009 0.3880 ±0.0040 0.2922 ±0.0018 0.9223 ±0.0065 GatedGCN+0.7006 ±0.0033 3.6%↑0.2431 ±0.0020 1.9%↓0.4263 ±0.0057 9.9%↑ 0.3802 ±0.0015 30.1%↑ 0.9460 ±0.0057 2.6%↑ Time (epoch) of GraphGPS 6s 6s 17s 213s 46s Time (epoch) of GCN+6s 6s 12s 162s 6s GNN Benchmark, Table 2. 
We observe that our GNN+ implementation substantially enhances the performance of classic GNNs, with the most significant improvements on ZINC, PATTERN, and CLUSTER. On MNIST and CIFAR10, GatedGCN+ outperforms SOTA models such as GEAET and GRED, securing top rankings. Long-Range Graph Benchmark (LRGB), Table 3. The results reveal that classic GNNs can achieve strong performance across LRGB datasets. Specifically, GCN+ excels on the Peptides-func and Peptides-struct datasets. On the other hand, GatedGCN+ achieves the highest accuracy on MalNet-Tiny. Furthermore, on PascalVOC-SP and COCO-SP, GatedGCN+ significantly improves performance, securing the third-best model ranking overall. These results highlight the potential of classic GNNs in capturing long-range interactions in graph-level tasks. Open Graph Benchmark (OGB), Table 4. Finally, we test our method on four OGB datasets. As shown in Table 4, GatedGCN+ consistently ranks among the top three models and achieves top performance on three out of the four datasets. On ogbg-ppa, GatedGCN+ shows an improvement of approximately 9%, ranking first on the OGB leaderboard. On ogbg-molhiv and ogbg-molpcba, GatedGCN+ even matches the performance of Graphormer and EGT pre-trained on other datasets. Additionally, on ogbg-code2, GatedGCN+ secures the third-highest performance, underscoring the potential of GNNs for large-scale OGB datasets. 5.2. Ablation Study To examine the unique contributions of the different techniques used in GNN+, we conduct a series of ablation analyses by selectively removing elements such as the edge feature module (Edge.), normalization (Norm), dropout, residual connections (RC), FFN, and PE from GCN+, GIN+, and GatedGCN+. The effect of these ablations is assessed across GNN Benchmark (see Table 5), LRGB, and OGB (see Table 6) datasets.
Our ablation study demonstrates that each module incorporated in GNN+—including edge feature integration, normalization, dropout, residual connections, FFN, and PE—is indispensable; the removal of any single component results in a degradation of overall performance. Observation 1: The integration of edge features is particularly effective in molecular and image superpixel datasets, where these features carry critical information. In molecular graphs such as ZINC and ogbg-molhiv, edge features represent chemical bond information, which is essential for molecular properties. Removing this module leads to a significant performance drop. In the protein networks of ogbg-ppa, edges represent normalized associations between proteins. Removing the edge feature module results in a substantial accuracy decline, ranging from 0.5083 to 0.7310 for classic GNNs. Similarly, in image superpixel datasets like CIFAR-10, PascalVOC-SP, and COCO-SP, edge features encode spatial relationships between superpixels, which are crucial for maintaining image coherence. However, in code graphs such as ogbg-code2 and MalNet-Tiny, where edges represent call types, edge features are less relevant to the prediction tasks, and their removal has minimal impact.
Table 4. Test performance in four benchmarks from Open Graph Benchmark (OGB) (Hu et al., 2020). + denotes the enhanced version, while the baseline results were obtained from their respective original papers. † indicates the use of additional pretraining datasets, included here for reference only and excluded from ranking.
Dataset | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2
# graphs | 41,127 | 437,929 | 158,100 | 452,741
Avg. # nodes | 25.5 | 26.0 | 243.4 | 125.2
Avg. # edges | 27.5 | 28.1 | 2,266.1 | 124.2
Metric | AUROC↑ | Avg. Precision↑ | Accuracy↑ | F1 score↑
GT (2020) | – | – | 0.6454±0.0033 | 0.1670±0.0015
GraphTrans (2021) | – | 0.2761±0.0029 | – | 0.1830±0.0024
SAN (2021) | 0.7785±0.2470 | 0.2765±0.0042 | – | –
Graphormer (pre-trained) (2021) | 0.8051±0.0053† | – | – | –
SAT (2022) | – | – | 0.7522±0.0056 | 0.1937±0.0028
EGT (pre-trained) (2022) | 0.8060±0.0065† | 0.2961±0.0024† | – | –
GraphGPS (2022) | 0.7880±0.0101 | 0.2907±0.0028 | 0.8015±0.0033 | 0.1894±0.0024
Specformer (2023) | 0.7889±0.0124 | 0.2972±0.0023 | – | –
Graph ViT/MLP-Mixer (2023) | 0.7997±0.0102 | – | – | –
Exphormer (2023) | 0.7834±0.0044 | 0.2849±0.0025 | – | –
GRIT (2023) | 0.7835±0.0054 | 0.2362±0.0020 | – | –
Subgraphormer (2024) | 0.8038±0.0192 | – | – | –
GECO (2024) | 0.7980±0.0200 | 0.2961±0.0008 | 0.7982±0.0042 | 0.1915±0.0020
GSSC (2024b) | 0.8035±0.0142 | – | – | –
GCN | 0.7606±0.0097 | 0.2020±0.0024 | 0.6839±0.0084 | 0.1507±0.0018
GCN+ | 0.8012±0.0124 (5.4%↑) | 0.2721±0.0046 (34.7%↑) | 0.8077±0.0041 (18.1%↑) | 0.1787±0.0026 (18.6%↑)
GIN | 0.7835±0.0125 | 0.2266±0.0028 | 0.6892±0.0100 | 0.1495±0.0023
GIN+ | 0.7928±0.0099 (1.2%↑) | 0.2703±0.0024 (19.3%↑) | 0.8107±0.0053 (17.7%↑) | 0.1803±0.0019 (20.6%↑)
GatedGCN | 0.7687±0.0136 | 0.2670±0.0020 | 0.7531±0.0083 | 0.1606±0.0015
GatedGCN+ | 0.8040±0.0164 (4.6%↑) | 0.2981±0.0024 (11.6%↑) | 0.8258±0.0055 (9.7%↑) | 0.1896±0.0024 (18.1%↑)
Time (epoch/s) of GraphGPS | 96s | 196s | 276s | 1919s
Time (epoch/s) of GCN+ | 16s | 91s | 178s | 476s
Table 5. Ablation study on GNN Benchmark (Dwivedi et al., 2023) (%). - indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance.
Dataset | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER
Metric | MAE↓ | Accuracy↑ | Accuracy↑ | Accuracy↑ | Accuracy↑
GCN+ | 0.076±0.009 | 98.382±0.095 | 69.824±0.413 | 87.021±0.095 | 77.109±0.872
(-) Edge. | 0.135±0.004 | 98.153±0.042 | 68.256±0.357 | 86.854±0.054 | –
(-) Norm | 0.107±0.011 | 97.886±0.066 | 60.765±0.829 | 52.769±0.874 | 16.563±0.134
(-) Dropout | – | 97.897±0.071 | 65.693±0.461 | 86.764±0.045 | 74.926±0.469
(-) RC | 0.159±0.016 | 95.929±0.169 | 58.186±0.295 | 86.059±0.274 | 16.508±0.615
(-) FFN | 0.132±0.021 | 97.174±0.063 | 63.573±0.346 | 86.746±0.088 | 72.606±1.243
(-) PE | 0.127±0.010 | – | – | 85.597±0.241 | 75.568±1.147
GIN+ | 0.065±0.004 | 98.285±0.103 | 69.592±0.287 | 86.842±0.048 | 74.794±0.213
(-) Edge. | 0.122±0.009 | 97.655±0.075 | 68.196±0.107 | 86.714±0.036 | 65.895±3.425
(-) Norm | 0.096±0.006 | 97.695±0.065 | 64.918±0.059 | 86.815±0.855 | 72.119±0.359
(-) Dropout | – | 98.214±0.064 | 66.638±0.873 | 86.836±0.053 | 73.316±0.355
(-) RC | 0.137±0.031 | 97.675±0.175 | 64.910±0.102 | 86.645±0.125 | 16.800±0.088
(-) FFN | 0.104±0.003 | 11.350±0.008 | 60.582±0.395 | 58.511±0.016 | 62.175±2.895
(-) PE | 0.123±0.014 | – | – | 86.592±0.049 | 73.925±0.165
GatedGCN+ | 0.077±0.005 | 98.712±0.137 | 77.218±0.381 | 87.029±0.037 | 79.128±0.235
(-) Edge. | 0.119±0.001 | 98.085±0.045 | 72.128±0.275 | 86.879±0.017 | 76.075±0.845
(-) Norm | 0.088±0.003 | 98.275±0.045 | 71.995±0.445 | 86.942±0.023 | 78.495±0.155
(-) Dropout | 0.089±0.003 | 98.225±0.095 | 70.383±0.429 | 86.802±0.034 | 77.597±0.126
(-) RC | 0.106±0.002 | 98.442±0.067 | 75.149±0.155 | 86.845±0.025 | 16.670±0.307
(-) FFN | 0.098±0.005 | 98.438±0.151 | 76.243±0.131 | 86.935±0.025 | 78.975±0.145
(-) PE | 0.174±0.009 | – | – | 85.595±0.065 | 77.515±0.265
Observation 2: Normalization tends to have a greater impact on larger-scale datasets, whereas its impact is less significant on smaller datasets. For large-scale datasets such as CIFAR10, COCO-SP, and the OGB datasets, removing normalization leads to significant performance drops. Specifically, on ogbg-ppa, which has 158,100 graphs, ablating normalization results in an accuracy drop of around 15% for the three classic GNNs. This result is consistent with Luo et al. (2024a), who found that normalization is more important for GNNs in node classification on large graphs. In such datasets, where node feature distributions are more complex, normalizing node embeddings is essential for stabilizing the training process.

Observation 3: Dropout proves advantageous for most datasets, with a very low dropout rate being sufficient and optimal. Our analysis highlights the crucial role of dropout in maintaining the performance of classic GNNs on the GNN Benchmark, LRGB, and large-scale OGB datasets, with its ablation causing significant declines: for instance, an 8.8% relative decrease for GatedGCN+ on CIFAR-10 and a 20.4% relative decrease on PascalVOC-SP. This trend continues in large-scale OGB datasets, where removing dropout results in a 5–13% performance drop across the three classic GNNs on ogbg-molpcba. Notably, 97% of the optimal dropout rates are ≤0.2, and 64% are ≤0.1, indicating that a very low dropout rate is both sufficient and optimal for graph-level tasks. Interestingly, this finding for graph-level tasks contrasts with Luo et al. (2024a)'s observations for node-level tasks, where a higher dropout rate is typically required.

Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence

Table 6. Ablation study on LRGB and OGB datasets. – indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance.

| Ablation | Peptides-func AP ↑ | Peptides-struct MAE ↓ | PascalVOC-SP F1 ↑ | COCO-SP F1 ↑ | MalNet-Tiny Acc. ↑ | ogbg-molhiv AUROC ↑ | ogbg-molpcba AP ↑ | ogbg-ppa Acc. ↑ | ogbg-code2 F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| GCN+ | 0.7261 ±0.0067 | 0.2421 ±0.0016 | 0.3357 ±0.0087 | 0.2733 ±0.0041 | 0.9354 ±0.0045 | 0.8012 ±0.0124 | 0.2721 ±0.0046 | 0.8077 ±0.0041 | 0.1787 ±0.0026 |
| (-) Edge. | 0.7191 ±0.0036 | – | 0.2942 ±0.0043 | 0.2219 ±0.0060 | 0.9292 ±0.0034 | 0.7714 ±0.0204 | 0.2628 ±0.0019 | 0.2994 ±0.0062 | 0.1785 ±0.0033 |
| (-) Norm | 0.7107 ±0.0027 | 0.2509 ±0.0026 | 0.1802 ±0.0111 | 0.2332 ±0.0079 | 0.9236 ±0.0054 | 0.7753 ±0.0049 | 0.2528 ±0.0016 | 0.6705 ±0.0104 | 0.1679 ±0.0027 |
| (-) Dropout | 0.6748 ±0.0055 | 0.2549 ±0.0025 | 0.3072 ±0.0069 | 0.2601 ±0.0046 | – | 0.7431 ±0.0185 | 0.2405 ±0.0047 | 0.7893 ±0.0052 | 0.1641 ±0.0043 |
| (-) RC | – | – | 0.2734 ±0.0036 | 0.1948 ±0.0096 | 0.8916 ±0.0048 | – | – | 0.7520 ±0.0157 | 0.1785 ±0.0029 |
| (-) FFN | – | – | 0.2786 ±0.0068 | 0.2314 ±0.0073 | 0.9118 ±0.0078 | 0.7432 ±0.0052 | 0.2621 ±0.0019 | 0.7672 ±0.0071 | 0.1594 ±0.0020 |
| (-) PE | 0.7069 ±0.0093 | 0.2447 ±0.0015 | – | – | – | 0.7593 ±0.0051 | 0.2667 ±0.0034 | – | – |
| GIN+ | 0.7059 ±0.0089 | 0.2429 ±0.0019 | 0.3189 ±0.0105 | 0.2483 ±0.0046 | 0.9325 ±0.0040 | 0.7928 ±0.0099 | 0.2703 ±0.0024 | 0.8107 ±0.0053 | 0.1803 ±0.0019 |
| (-) Edge. | 0.7033 ±0.0015 | 0.2442 ±0.0028 | 0.2956 ±0.0047 | 0.2259 ±0.0053 | 0.9286 ±0.0049 | 0.7597 ±0.0103 | 0.2702 ±0.0021 | 0.2789 ±0.0031 | 0.1752 ±0.0020 |
| (-) Norm | 0.6934 ±0.0077 | 0.2444 ±0.0015 | 0.2707 ±0.0037 | 0.2244 ±0.0063 | 0.9322 ±0.0025 | 0.7874 ±0.0114 | 0.2556 ±0.0026 | 0.6484 ±0.0246 | 0.1722 ±0.0034 |
| (-) Dropout | 0.6384 ±0.0094 | 0.2531 ±0.0030 | 0.3153 ±0.0113 | – | – | – | 0.2545 ±0.0068 | 0.7673 ±0.0059 | 0.1730 ±0.0018 |
| (-) RC | 0.6975 ±0.0038 | 0.2527 ±0.0015 | 0.2350 ±0.0044 | 0.1741 ±0.0085 | 0.9150 ±0.0047 | 0.7733 ±0.0122 | 0.1454 ±0.0061 | – | 0.1617 ±0.0026 |
| (-) FFN | – | – | 0.2393 ±0.0049 | 0.1599 ±0.0081 | 0.8944 ±0.0074 | – | 0.2534 ±0.0033 | 0.6676 ±0.0039 | 0.1491 ±0.0016 |
| (-) PE | 0.6855 ±0.0027 | 0.2455 ±0.0019 | 0.3141 ±0.0031 | – | – | 0.7791 ±0.0268 | 0.2601 ±0.0023 | – | – |
| GatedGCN+ | 0.7006 ±0.0033 | 0.2431 ±0.0020 | 0.4263 ±0.0057 | 0.3802 ±0.0015 | 0.9460 ±0.0057 | 0.8040 ±0.0164 | 0.2981 ±0.0024 | 0.8258 ±0.0055 | 0.1896 ±0.0024 |
| (-) Edge. | 0.6882 ±0.0028 | 0.2466 ±0.0018 | 0.3764 ±0.0117 | 0.3172 ±0.0109 | 0.9372 ±0.0062 | 0.7831 ±0.0157 | 0.2951 ±0.0028 | 0.0948 ±0.0000 | 0.1891 ±0.0021 |
| (-) Norm | 0.6733 ±0.0026 | 0.2474 ±0.0015 | 0.3628 ±0.0043 | 0.3527 ±0.0051 | 0.9326 ±0.0056 | 0.7879 ±0.0178 | 0.2748 ±0.0012 | 0.6864 ±0.0165 | 0.1743 ±0.0026 |
| (-) Dropout | 0.6695 ±0.0101 | 0.2508 ±0.0014 | 0.3389 ±0.0066 | 0.3393 ±0.0051 | – | – | 0.2582 ±0.0036 | 0.8088 ±0.0062 | 0.1724 ±0.0027 |
| (-) RC | – | 0.2498 ±0.0034 | 0.4075 ±0.0052 | 0.3475 ±0.0064 | 0.9402 ±0.0054 | 0.7833 ±0.0177 | 0.2897 ±0.0016 | 0.8099 ±0.0053 | 0.1844 ±0.0025 |
| (-) FFN | – | – | – | 0.3508 ±0.0049 | 0.9364 ±0.0059 | – | 0.2875 ±0.0022 | – | 0.1718 ±0.0024 |
| (-) PE | 0.6729 ±0.0084 | 0.2461 ±0.0025 | 0.4052 ±0.0031 | – | – | 0.7771 ±0.0057 | 0.2813 ±0.0022 | – | – |

Observation 4: Residual connections are generally essential, except in shallow GNNs applied to small graphs. Removing residual connections generally leads to significant performance drops across datasets, with the only exceptions being found in the peptide datasets. Although similar in the number of nodes to CLUSTER and PATTERN, peptide datasets involve GNNs with only 3–5 layers, while the others use deeper networks with over 10 layers. For shallow networks on small graphs, residual connections may not be as beneficial and can even hurt performance by disrupting feature flow. In contrast, deeper networks on larger graphs rely on residual connections to maintain gradient flow and enable stable, reliable long-range information exchange.
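The components ablated above all slot into one per-layer update. The following is a schematic sketch, not the authors' code: the ordering is assumed, and the centering/scaling function, per-coordinate FFN weights, and deterministic dropout rescaling are toy stand-ins for BatchNorm, a learned two-layer MLP, and stochastic dropout. It only illustrates where normalization, dropout, the residual connection, and the FFN sit relative to message aggregation.

```python
def norm(x):
    # stand-in for BatchNorm: center and scale one feature vector
    m = sum(x) / len(x)
    var = sum((xi - m) ** 2 for xi in x) / len(x)
    return [(xi - m) / (var ** 0.5 + 1e-5) for xi in x]

def ffn(x, w1=2.0, w2=0.5):
    # toy two-layer feed-forward net: linear, ReLU, linear (per coordinate)
    return [max(w1 * xi, 0.0) * w2 for xi in x]

def gnn_plus_update(h, neighbors, p_drop=0.1, training=False):
    # 1) aggregate neighbor states (mean aggregation over neighbor vectors)
    agg = [sum(n[i] for n in neighbors) / len(neighbors) for i in range(len(h))]
    # 2) normalization stabilizes training, especially on large graphs (Obs. 2)
    z = norm(agg)
    # 3) dropout: only the inverted-dropout rescaling, kept deterministic (Obs. 3)
    if training:
        z = [zi / (1.0 - p_drop) for zi in z]
    # 4) residual connection preserves the layer input (Obs. 4)
    z = [hi + zi for hi, zi in zip(h, z)]
    # 5) FFN transforms node features, again with a residual (Obs. 5)
    return [zi + fi for zi, fi in zip(z, ffn(z))]

# Node with state [1, -1] and two neighbors:
print(gnn_plus_update([1.0, -1.0], [[2.0, 0.0], [0.0, 2.0]]))  # [2.0, -1.0]
```

Dropping any step collapses the update toward plain mean aggregation, which is the structural content of the "(-)" rows in Tables 5 and 6.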
Observation 5: FFN is crucial for GIN+ and GCN+, greatly impacting their performance across datasets. Ablating the FFN leads to substantial performance declines for GIN+ and GCN+ across almost all datasets, highlighting its essential role in graph-level tasks. Notably, on MNIST, removing the FFN leads to an 88% relative accuracy drop for GIN+. This is likely because the architectures of GIN+ and GCN+ rely heavily on the FFN for learning complex node feature representations. In contrast, GatedGCN+ uses gating mechanisms to adaptively adjust the importance of neighboring nodes' information, reducing the need for additional feature transformations. The only exceptions are observed on the peptide datasets, where the FFN is not used in any of the three models. This may be due to the shallow GNN architecture, where complex feature transformations are less necessary.

Observation 6: PE is particularly effective for small-scale datasets, but negligible for large-scale datasets. Removing PE significantly reduces performance for classic GNNs on small-scale datasets like ZINC, PATTERN, CLUSTER, Peptides-func, and ogbg-molhiv, which only contain 10,000–40,000 graphs. By contrast, on large-scale datasets like ogbg-code2, ogbg-molpcba, ogbg-ppa, and COCO-SP (over 100,000 graphs), the impact of PE is less pronounced. This may be because smaller datasets rely more on PE to capture graph structure, whereas larger datasets benefit from the abundance of data, reducing the need for PE.

6. Conclusion

This study highlights the often-overlooked potential of classic GNNs in tackling graph-level tasks. By integrating six widely used techniques into a unified GNN+ framework, we enhance three classic GNNs for graph-level tasks. Evaluations on 14 benchmark datasets reveal that these enhanced GNNs match or outperform GTs, while also demonstrating greater efficiency.
These findings challenge the prevailing belief that GTs are inherently superior, reaffirming the capability of simple GNN structures as powerful models.

Impact Statements

This paper presents work whose goal is to advance the field of Graph Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205, 2020.
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bar-Shalom, G., Bevilacqua, B., and Maron, H. Subgraphormer: Unifying subgraph GNNs and graph transformers via graph products. arXiv preprint arXiv:2402.08450, 2024.
Behrouz, A. and Hashemi, F. Graph mamba: Towards learning on graphs with state space models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 119–130, 2024.
Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. arXiv preprint arXiv:2303.01028, 2023.
Bresson, X. and Laurent, T. Residual gated graph convnets. arXiv preprint arXiv:1711.07553, 2017.
Cai, T., Luo, S., Xu, K., He, D., Liu, T.-Y., and Wang, L. GraphNorm: A principled approach to accelerating graph neural network training. In International Conference on Machine Learning, pp. 1204–1215. PMLR, 2021.
Chen, D., Lin, Y., Li, W., Li, P., Zhou, J., and Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3438–3445, 2020.
Chen, D., O'Bray, L., and Borgwardt, K. Structure-aware transformer for graph representation learning. In International Conference on Machine Learning, pp. 3469–3489. PMLR, 2022.
Chen, J., Gao, K., Li, G., and He, K. NAGphormer: A tokenized graph transformer for node classification in large graphs. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=8KYeilT3Ow.
Chen, Z., Tan, H., Wang, T., Shen, T., Lu, T., Peng, Q., Cheng, C., and Qi, Y. Graph propagation transformer for graph representation learning. arXiv preprint arXiv:2305.11424, 2023b.
Choi, Y. Y., Park, S. W., Lee, M., and Woo, Y. Topology-informed graph transformer. arXiv preprint arXiv:2402.02005, 2024.
Ding, Y., Orvieto, A., He, B., and Hofmann, T. Recurrent distance-encoding neural networks for graph representation learning, 2024. URL https://openreview.net/forum?id=lNIj5FdXsC.
Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020.
Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Graph neural networks with learnable structural and positional representations. In International Conference on Learning Representations, 2021.
Dwivedi, V. P., Rampášek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph benchmark. arXiv preprint arXiv:2206.08164, 2022.
Dwivedi, V. P., Joshi, C. K., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. Journal of Machine Learning Research, 24(43):1–48, 2023.
Fey, M. and Lenssen, J. E. Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428, 2019.
Freitas, S. and Dong, Y. A large-scale database for graph representation learning. Advances in Neural Information Processing Systems, 2021.
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. PMLR, 2017.
Gutteridge, B., Dong, X., Bronstein, M. M., and Di Giovanni, F. Drew: Dynamically rewired message passing with delay. In International Conference on Machine Learning, pp. 12252–12267. PMLR, 2023.
Hamilton, W., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y., and Bresson, X. A generalization of ViT/MLP-Mixer to graphs. In International Conference on Machine Learning, pp. 12724–12745. PMLR, 2023.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Hoang, V. T., Lee, O., et al. A survey on structure-preserving graph transformers. arXiv preprint arXiv:2401.16176, 2024.
Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265, 2019.
Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133, 2020.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.
Huang, S., Song, Y., Zhou, J., and Lin, Z. Cluster-wise graph transformer with dual-granularity kernelized attention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://openreview.net/forum?id=3j2nasmKkP.
Huang, Y., Miao, S., and Li, P. What can we learn from state space models for machine learning on graphs? arXiv preprint arXiv:2406.05815, 2024b.
Hussain, M. S., Zaki, M. J., and Subramanian, D. Global self-attention as a replacement for graph convolution. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 655–665, 2022.
Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR, 2015.
Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.
Kreuzer, D., Beaini, D., Hamilton, W., Létourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34:21618–21629, 2021.
Li, G., Muller, M., Thabet, A., and Ghanem, B. DeepGCNs: Can GCNs go as deep as CNNs? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9267–9276, 2019.
Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33:4465–4478, 2020.
Li, Q., Han, Z., and Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Liang, J., Chen, M., and Liang, J. Graph external attention enhanced transformer. arXiv preprint arXiv:2405.21061, 2024.
Lin, C., Ma, L., Chen, Y., Ouyang, W., Bronstein, M. M., and Torr, P. Understanding graph transformers by generalized propagation, 2024. URL https://openreview.net/forum?id=JfjduOxrTY.
Loshchilov, I. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Luo, S., Li, S., Zheng, S., Liu, T.-Y., Wang, L., and He, D. Your transformer may not be as powerful as you expect. Advances in Neural Information Processing Systems, 35:4301–4315, 2022.
Luo, Y., Shi, L., and Thost, V. Improving self-supervised molecular representation learning using persistent homology. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a. URL https://openreview.net/forum?id=wEiUGpcr0M.
Luo, Y., Shi, L., Xu, M., Ji, Y., Xiao, F., Hu, C., and Shan, Z. Impact-oriented contextual scholar profiling using self-citation graphs. arXiv preprint arXiv:2304.12217, 2023b.
Luo, Y., Thost, V., and Shi, L. Transformers over directed acyclic graphs. In Thirty-seventh Conference on Neural Information Processing Systems, 2023c. URL https://openreview.net/forum?id=g49s1N5nmO.
Luo, Y., Shi, L., and Wu, X.-M. Classic GNNs are strong baselines: Reassessing GNNs for node classification. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024a. URL https://openreview.net/forum?id=xkljKdGe4E.
Luo, Y., Thost, V., and Shi, L. Transformers over directed acyclic graphs. Advances in Neural Information Processing Systems, 36, 2024b.
Luo, Y., Li, H., Liu, Q., Shi, L., and Wu, X.-M. Node identifiers: Compact, discrete representations for efficient graph learning. In The Thirteenth International Conference on Learning Representations, 2025a. URL https://openreview.net/forum?id=t9lS1lX9FQ.
Luo, Y., Wu, X.-M., and Zhu, H. Beyond random masking: When dropout meets graph convolutional networks. In The Thirteenth International Conference on Learning Representations, 2025b. URL https://openreview.net/forum?id=PwxYoMvmvy.
Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P., and Lim, S.-N. Graph inductive biases in transformers without message passing. arXiv preprint arXiv:2305.17589, 2023.
Min, E., Chen, R., Bian, Y., Xu, T., Zhao, K., Huang, W., Zhao, P., Huang, J., Ananiadou, S., and Rong, Y. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022.
Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019.
Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M. TUDataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020.
Müller, L., Galkin, M., Morris, C., and Rampášek, L. Attending to graph transformers. arXiv preprint arXiv:2302.04181, 2023.
Ngo, N. K., Hy, T. S., and Kondor, R. Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics, 159(3), 2023.
Niepert, M., Ahmed, M., and Kutzkov, K. Learning convolutional neural networks for graphs. In International Conference on Machine Learning, pp. 2014–2023. PMLR, 2016.
Park, W., Chang, W., Lee, D., Kim, J., and Hwang, S.-w. GRPE: Relative positional encoding for graph transformer. arXiv preprint arXiv:2201.12787, 2022.
Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. arXiv preprint arXiv:2205.12454, 2022.
Sancak, K., Hua, Z., Fang, J., Xie, Y., Malevich, A., Long, B., Balin, M. F., and Çatalyürek, Ü. V. A scalable and effective alternative to graph transformers. arXiv preprint arXiv:2406.12059, 2024.
Shehzad, A., Xia, F., Abid, S., Peng, C., Yu, S., Zhang, D., and Verspoor, K. Graph transformers: A survey. arXiv preprint arXiv:2407.09777, 2024.
Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. arXiv preprint arXiv:2303.06147, 2023.
Shu, J., Xi, B., Li, Y., Wu, F., Kamhoua, C., and Ma, J. Understanding dropout for graph neural networks. In Companion Proceedings of the Web Conference 2022, pp. 1128–1138, 2022.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
Tang, J., Sun, J., Wang, C., and Yang, Z. Social influence analysis in large-scale networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 807–816, 2009.
Tönshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? Reassessing the long-range graph benchmark. arXiv preprint arXiv:2309.00367, 2023.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., and Bengio, Y. Graph attention networks. In International Conference on Learning Representations, 2018.
Wang, C., Tsepa, O., Ma, J., and Wang, B. Graph-Mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv preprint arXiv:2402.00789, 2024.
Wu, Q., Yang, C., Zhao, W., He, Y., Wipf, D., and Yan, J. DIFFormer: Scalable (graph) transformers induced by energy constrained diffusion. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=j6zUzrapY3L.
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020.
Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems, 34:13266–13279, 2021.
Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Yang, Z., Cohen, W., and Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40–48. PMLR, 2016.
Yin, S. and Zhong, G. LGI-GT: Graph transformers with local and global operators interleaving. 2023.
Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T.-Y. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877–28888, 2021.
Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. How transferable are features in deep neural networks? Advances in Neural Information Processing Systems, 27, 2014.
Yun, S., Jeong, M., Kim, R., Kang, J., and Kim, H. J. Graph transformer networks. Advances in Neural Information Processing Systems, 32, 2019.
Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of GNNs via graph biconnectivity. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=r9hNv76KoT3.
Zhang, J., Zhang, H., Xia, C., and Sun, L. Graph-BERT: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.

A. Datasets and Experimental Details

A.1. Computing Environment

Our implementation is based on PyG (Fey & Lenssen, 2019). The experiments are conducted on a single workstation with 8 RTX 3090 GPUs.

A.2. Datasets

Table 7 presents a summary of the statistics and characteristics of the datasets.
• GNN Benchmark (Dwivedi et al., 2023). ZINC contains molecular graphs with node features representing atoms and edge features representing bonds. The task is to regress the constrained solubility (logP) of the molecule. MNIST and CIFAR10 are adapted from image classification datasets, where each image is represented as an 8-nearest-neighbor graph of SLIC superpixels, with nodes representing superpixels and edges representing spatial relationships. The 10-class classification tasks follow the original image classification tasks. PATTERN and CLUSTER are synthetic datasets sampled from the Stochastic Block Model (SBM) for inductive node classification, with tasks involving subgraph pattern recognition and cluster ID inference. For all datasets, we adhere to the respective training protocols and standard evaluation splits (Dwivedi et al., 2023).

• Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). Peptides-func and Peptides-struct are atomic graphs of peptides from SATPdb, with tasks of multi-label graph classification into 10 peptide functional classes and graph regression for 11 3D structural properties, respectively. PascalVOC-SP and COCO-SP are node classification datasets derived from the Pascal VOC and MS COCO images by SLIC superpixelization, where each superpixel node belongs to a particular object class. We did not use PCQM-Contact from (Dwivedi et al., 2022) as its download link was no longer valid. MalNet-Tiny (Freitas & Dong, 2021) is a subset of MalNet with 5,000 function call graphs (FCGs) from Android APKs, where the task is to predict the software type from structure alone. For each dataset, we follow standard training protocols and splits (Dwivedi et al., 2022; Freitas & Dong, 2021).
• Open Graph Benchmark (OGB) (Hu et al., 2020). We also consider a collection of larger-scale datasets from OGB, containing graphs in the range of hundreds of thousands to millions: ogbg-molhiv and ogbg-molpcba are molecular property prediction datasets from MoleculeNet. ogbg-molhiv involves binary classification of HIV inhibition, while ogbg-molpcba predicts results of 128 bioassays in a multi-task setting. ogbg-ppa contains protein-protein association networks, where nodes represent proteins and edges encode normalized associations between them; the task is to classify the origin of the network among 37 taxonomic groups. ogbg-code2 consists of abstract syntax trees (ASTs) from Python source code, with the task of predicting the first 5 subtokens of the function's name. We maintain all the OGB standard evaluation settings (Hu et al., 2020).

Table 7. Overview of the datasets used for graph-level tasks (Dwivedi et al., 2023; 2022; Hu et al., 2020; Freitas & Dong, 2021).

| Dataset | # graphs | Avg. # nodes | Avg. # edges | # node/edge feats | Prediction level | Prediction task | Metric |
|---|---|---|---|---|---|---|---|
| ZINC | 12,000 | 23.2 | 24.9 | 28/1 | graph | regression | MAE |
| MNIST | 70,000 | 70.6 | 564.5 | 3/1 | graph | 10-class classif. | Accuracy |
| CIFAR10 | 60,000 | 117.6 | 941.1 | 5/1 | graph | 10-class classif. | Accuracy |
| PATTERN | 14,000 | 118.9 | 3,039.3 | 3/1 | inductive node | binary classif. | Accuracy |
| CLUSTER | 12,000 | 117.2 | 2,150.9 | 7/1 | inductive node | 6-class classif. | Accuracy |
| Peptides-func | 15,535 | 150.9 | 307.3 | 9/3 | graph | 10-task classif. | Avg. Precision |
| Peptides-struct | 15,535 | 150.9 | 307.3 | 9/3 | graph | 11-task regression | MAE |
| PascalVOC-SP | 11,355 | 479.4 | 2,710.5 | 14/2 | inductive node | 21-class classif. | F1 score |
| COCO-SP | 123,286 | 476.9 | 2,693.7 | 14/2 | inductive node | 81-class classif. | F1 score |
| MalNet-Tiny | 5,000 | 1,410.3 | 2,859.9 | 5/1 | graph | 5-class classif. | Accuracy |
| ogbg-molhiv | 41,127 | 25.5 | 27.5 | 9/3 | graph | binary classif. | AUROC |
| ogbg-molpcba | 437,929 | 26.0 | 28.1 | 9/3 | graph | 128-task classif. | Avg. Precision |
| ogbg-ppa | 158,100 | 243.4 | 2,266.1 | 1/7 | graph | 37-task classif. | Accuracy |
| ogbg-code2 | 452,741 | 125.2 | 124.2 | 2/2 | graph | 5 token sequence | F1 score |

A.3. Hyperparameters and Reproducibility

Please note that we mainly follow the experiment settings of GraphGPS (Rampášek et al., 2022; Tönshoff et al., 2023). For the hyperparameter selections of classic GNNs, in addition to what we have covered, we list other settings in Tables 8, 9, 10, 11, 12, and 13. Further details regarding hyperparameters can be found in our code. In all experiments, we use the validation set to select the best hyperparameters. GNN+ denotes the enhanced implementation of the GNN model. Our code is available under the MIT License.

Table 8. Hyperparameter settings of GCN+ on benchmarks from (Dwivedi et al., 2023).

| Hyperparameter | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # GNN Layers | 12 | 6 | 5 | 12 | 12 |
| Edge Feature Module | True | True | True | True | False |
| Normalization | BN | BN | BN | BN | BN |
| Dropout | 0.0 | 0.15 | 0.05 | 0.05 | 0.1 |
| Residual Connections | True | True | True | True | True |
| FFN | True | True | True | True | True |
| PE | RWSE-32 | False | False | RWSE-32 | RWSE-20 |
| Hidden Dim | 64 | 60 | 65 | 90 | 90 |
| Graph Pooling | add | mean | mean | – | – |
| Batch Size | 32 | 16 | 16 | 32 | 16 |
| Learning Rate | 0.001 | 0.0005 | 0.001 | 0.001 | 0.001 |
| # Epochs | 2000 | 200 | 200 | 200 | 100 |
| # Warmup Epochs | 50 | 5 | 5 | 5 | 5 |
| Weight Decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| # Parameters | 260,177 | 112,570 | 114,345 | 517,219 | 516,674 |
| Time (epoch) | 7.6s | 60.1s | 40.2s | 19.5s | 29.7s |

Table 9. Hyperparameter settings of GCN+ on LRGB and OGB datasets.
| Hyperparameter | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|---|---|---|---|---|
| # GNN Layers | 3 | 5 | 14 | 18 | 8 | 4 | 10 | 4 | 4 |
| Edge Feature Module | True | False | True | True | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN | BN | BN | BN | BN |
| Dropout | 0.2 | 0.2 | 0.1 | 0.05 | 0.0 | 0.1 | 0.2 | 0.2 | 0.2 |
| Residual Connections | False | False | True | True | True | False | False | True | True |
| FFN | False | False | True | True | True | True | True | True | True |
| PE | RWSE-32 | RWSE-32 | False | False | False | RWSE-20 | RWSE-16 | False | False |
| Hidden Dim | 275 | 255 | 85 | 70 | 110 | 256 | 512 | 512 | 512 |
| Graph Pooling | mean | mean | – | – | max | mean | mean | mean | mean |
| Batch Size | 16 | 32 | 50 | 50 | 16 | 32 | 512 | 32 | 32 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0001 | 0.0005 | 0.0003 | 0.0001 |
| # Epochs | 300 | 300 | 200 | 300 | 150 | 100 | 100 | 400 | 30 |
| # Warmup Epochs | 5 | 5 | 10 | 10 | 10 | 5 | 5 | 10 | 2 |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-6 |
| # Parameters | 507,351 | 506,127 | 520,986 | 460,611 | 494,235 | 1,407,641 | 13,316,700 | 5,549,605 | 23,291,826 |
| Time (epoch) | 6.9s | 6.6s | 12.5s | 162.5s | 6.6s | 16.3s | 91.4s | 178.2s | 476.3s |

Table 10. Hyperparameter settings of GIN+ on benchmarks from (Dwivedi et al., 2023).

| Hyperparameter | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # GNN Layers | 12 | 5 | 5 | 8 | 10 |
| Edge Feature Module | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN |
| Dropout | 0.0 | 0.1 | 0.05 | 0.05 | 0.05 |
| Residual Connections | True | True | True | True | True |
| FFN | True | True | True | True | True |
| PE | RWSE-20 | False | False | RWSE-32 | RWSE-20 |
| Hidden Dim | 80 | 60 | 60 | 100 | 90 |
| Graph Pooling | sum | mean | mean | – | – |
| Batch Size | 32 | 16 | 16 | 32 | 16 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.0005 |
| # Epochs | 2000 | 200 | 200 | 200 | 100 |
| # Warmup Epochs | 50 | 5 | 5 | 5 | 5 |
| Weight Decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| # Parameters | 477,241 | 118,990 | 115,450 | 511,829 | 497,594 |
| Time (epoch) | 9.4s | 56.8s | 46.3s | 18.5s | 20.5s |

Table 11. Hyperparameter settings of GIN+ on LRGB and OGB datasets.
| Hyperparameter | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|---|---|---|---|---|
| # GNN Layers | 3 | 5 | 16 | 16 | 5 | 3 | 16 | 5 | 4 |
| Edge Feature Module | True | True | True | True | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN | BN | BN | BN | BN |
| Dropout | 0.2 | 0.2 | 0.1 | 0.0 | 0.0 | 0.0 | 0.3 | 0.15 | 0.1 |
| Residual Connections | True | True | True | True | True | True | True | False | True |
| FFN | False | False | True | True | True | False | True | True | True |
| PE | RWSE-32 | RWSE-32 | RWSE-32 | False | False | RWSE-20 | RWSE-16 | False | False |
| Hidden Dim | 240 | 200 | 70 | 70 | 130 | 256 | 300 | 512 | 512 |
| Graph Pooling | mean | mean | – | – | max | mean | mean | mean | mean |
| Batch Size | 16 | 32 | 50 | 50 | 16 | 32 | 512 | 32 | 32 |
| Learning Rate | 0.0005 | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0001 | 0.0005 | 0.0003 | 0.0001 |
| # Epochs | 300 | 250 | 200 | 300 | 150 | 100 | 100 | 300 | 30 |
| # Warmup Epochs | 5 | 5 | 10 | 10 | 10 | 5 | 5 | 10 | 2 |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-6 |
| # Parameters | 506,126 | 518,127 | 486,039 | 487,491 | 514,545 | 481,433 | 8,774,720 | 8,173,605 | 24,338,354 |
| Time (epoch) | 7.4s | 6.1s | 14.8s | 169.2s | 5.9s | 10.9s | 89.2s | 213.9s | 489.8s |

Table 12. Hyperparameter settings of GatedGCN+ on benchmarks from (Dwivedi et al., 2023).

| Hyperparameter | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # GNN Layers | 9 | 10 | 10 | 12 | 16 |
| Edge Feature Module | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN |
| Dropout | 0.05 | 0.05 | 0.15 | 0.2 | 0.2 |
| Residual Connections | True | True | True | True | True |
| FFN | True | True | True | True | True |
| PE | RWSE-20 | False | False | RWSE-32 | RWSE-20 |
| Hidden Dim | 70 | 35 | 35 | 64 | 56 |
| Graph Pooling | sum | mean | mean | – | – |
| Batch Size | 32 | 16 | 16 | 32 | 16 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0005 |
| # Epochs | 2000 | 200 | 200 | 200 | 100 |
| # Warmup Epochs | 50 | 5 | 5 | 5 | 5 |
| Weight Decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| # Parameters | 413,355 | 118,940 | 116,490 | 466,001 | 474,574 |
| Time (epoch) | 10.5s | 137.9s | 115.0s | 32.6s | 34.1s |

Table 13. Hyperparameter settings of GatedGCN+ on LRGB and OGB datasets.
| Hyperparameter | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|---|---|---|---|---|
| # GNN Layers | 5 | 4 | 12 | 20 | 6 | 3 | 10 | 4 | 5 |
| Edge Feature Module | True | True | True | True | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN | BN | BN | BN | BN |
| Dropout | 0.05 | 0.2 | 0.15 | 0.05 | 0.0 | 0.0 | 0.2 | 0.15 | 0.2 |
| Residual Connections | False | True | True | True | True | True | True | True | True |
| FFN | False | False | False | True | True | False | True | False | True |
| PE | RWSE-32 | RWSE-32 | RWSE-32 | False | False | RWSE-20 | RWSE-16 | False | False |
| Hidden Dim | 135 | 145 | 95 | 52 | 100 | 256 | 256 | 512 | 512 |
| Graph Pooling | mean | mean | – | – | max | mean | mean | mean | mean |
| Batch Size | 16 | 32 | 32 | 50 | 16 | 32 | 512 | 32 | 32 |
| Learning Rate | 0.0005 | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0001 | 0.0005 | 0.0003 | 0.0001 |
| # Epochs | 300 | 300 | 200 | 300 | 150 | 100 | 100 | 300 | 30 |
| # Warmup Epochs | 5 | 5 | 10 | 10 | 10 | 5 | 5 | 10 | 2 |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-6 |
| # Parameters | 521,141 | 492,897 | 559,094 | 508,589 | 550,905 | 1,076,633 | 6,016,860 | 5,547,557 | 29,865,906 |
| Time (epoch) | 17.3s | 8.0s | 21.3s | 208.8s | 8.9s | 15.1s | 85.1s | 479.8s | 640.1s |
| 4 | 1 | The GNN models (GCN, GIN, and GatedGCN) enhanced with GNN+ have approximately 500K parameters each, which is moderate for graph neural networks. The datasets vary in size, with the largest of those mentioned (the OGB datasets) containing roughly 500K graphs. Since training these models typically takes 5 to 40 seconds per epoch depending on the dataset's characteristics and model complexity, it is reasonable to assume a total training time well under 8 hours. Although the authors report up to 2000 epochs on some datasets and do not detail total training times, training times from comparable GNN implementations indicate that a single GPU can handle this workload adequately in under 8 hours. These models can be run on a high-memory GPU, allowing flexible batch-size choices to ensure memory constraints are met. 
Overall, a single GPU configuration is likely sufficient given the moderate size of the models and datasets involved. Therefore, this model can be trained in under 8 hours on a single GPU. | yes | Yes | Graph | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00.000Z | [https://github.com/LUOyk1999/GNNPlus] | 1 | https://data.pyg.org/datasets/benchmarking-gnns/MNIST_v2.zip | 9 hour approx - ( 200 epochs * avg 157.2 sec) | https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing | Yes | null |
ogbg-molhiv | GatedGCN+ | [] | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00 | https://arxiv.org/abs/2502.09263v1 | [
"https://github.com/LUOyk1999/GNNPlus"
] | {'Test ROC-AUC': '0.8040 ± 0.0164', 'Validation ROC-AUC': '0.8329 ± 0.0158', 'Number of params': '1076633', 'Ext. data': 'No'} | [
"Test ROC-AUC",
"Ext. data",
"Validation ROC-AUC",
"Number of params"
] | Given the following paper and codebase:
Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
Codebase: https://github.com/LUOyk1999/GNNPlus
Improve the GatedGCN+ model on the ogbg-molhiv dataset. The result
should improve on the following metrics: {'Test ROC-AUC': '0.8040 ± 0.0164', 'Validation ROC-AUC': '0.8329 ± 0.0158', 'Number of params': '1076633', 'Ext. data': 'No'}. You must use only the codebase provided.
| Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
Yuankai Luo (1,2), Lei Shi* (1), Xiao-Ming Wu* (2); (1) Beihang University, (2) The Hong Kong Polytechnic University.
Abstract
Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in capturing long-range dependencies, while Graph Transformers (GTs) are considered superior due to their global attention mechanisms. Literature frequently suggests that GTs outperform GNNs, particularly in graph-level tasks such as graph classification and regression. In this study, we explore the untapped potential of GNNs through an enhanced framework, GNN+, which integrates six widely used techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic evaluation of three classic GNNs (GCN, GIN, and GatedGCN) enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results show that, contrary to the prevailing belief, classic GNNs excel in graph-level tasks, securing top-three rankings across all datasets and achieving first place in eight, while also demonstrating greater efficiency than GTs. This highlights the potential of simple GNN architectures, challenging the belief that complex mechanisms in GTs are essential for superior graph-level performance. Our source code is available at https://github.com/LUOyk1999/tunedGNN-G.
1. Introduction
Graph machine learning addresses both graph-level tasks and node-level tasks, as illustrated in Figure 1. These tasks fundamentally differ in their choice of the basic unit for dataset composition, splitting, and training, with graph-level tasks focusing on the entire graph, while node-level tasks focus on individual nodes. Graph-level tasks (Dwivedi et al., 
*Corresponding authors: Lei Shi <{leishi, luoyk}@buaa.edu.cn>, Xiao-Ming Wu <xiao-ming.wu@polyu.edu.hk>. Preprint.
Figure 1. Differences between graph-level and node-level tasks.
2023; Hu et al., 2020; Luo et al., 2023b;a) often involve the classification of relatively small molecular graphs in chemistry (Morris et al., 2020) or the prediction of protein properties in biology (Dwivedi et al., 2022). In contrast, node-level tasks typically involve large social networks (Tang et al., 2009) or citation networks (Yang et al., 2016), where the primary goal is node classification. This distinction in the fundamental unit of the dataset leads to differences in methodologies, training strategies, and application domains.
Message-passing Graph Neural Networks (GNNs) (Gilmer et al., 2017), which iteratively aggregate information from local neighborhoods to learn node representations, have become the predominant approach for both graph-level and node-level tasks (Niepert et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2018; Bresson & Laurent, 2017; Wu et al., 2020). Despite their widespread success, GNNs exhibit several inherent limitations, including restricted expressiveness (Xu et al., 2018; Morris et al., 2019), over-smoothing (Li et al., 2018; Chen et al., 2020), over-squashing (Alon & Yahav, 2020), and a limited capacity to capture long-range dependencies (Dwivedi et al., 2022). 
A prevalent perspective is that Graph Transformers (GTs) (Müller et al., 2023; Min et al., 2022; Hoang et al., 2024), as an alternative to GNNs, leverage global attention mechanisms that enable each node to attend to all others (Yun et al., 2019; Dwivedi & Bresson, 2020), effectively modeling long-range interactions and addressing issues such as over-smoothing, over-squashing, and limited expressiveness (Kreuzer et al., 2021; Ying et al., 2021; Zhang et al., 2023; Luo et al., 2023c; 2024b). However, the quadratic complexity of global attention mechanisms limits the scalability of GTs in large-scale, real-world applications (Behrouz & Hashemi, 2024; Sancak et al., 2024; Ding et al., 2024). Moreover, it has been noted that many state-of-the-art GTs (Chen et al., 2022; Rampášek et al., 2022; Shirzad et al., 2023; Ma et al., 2023) still rely, either explicitly or implicitly, on the message-passing mechanism of GNNs to learn local node representations, thereby enhancing performance.
Recent studies (Luo et al., 2024a; 2025a;b) have shown that, contrary to common belief, classic GNNs such as GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017) can achieve performance comparable to, or even exceeding, that of state-of-the-art GTs for node-level tasks. However, a similar conclusion has not yet been established for graph-level tasks. While Tönshoff et al. (2023) conducted pioneering research demonstrating that tuning a few hyperparameters can significantly enhance the performance of classic GNNs, their results indicate that these models still do not match the overall performance of GTs. Furthermore, their investigation is limited to the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022). 
This raises an important question: "Can classic GNNs also excel in graph-level tasks?"
To thoroughly investigate this question, we introduce GNN+, an enhanced GNN framework that incorporates established techniques into the message-passing mechanism to effectively address graph-level tasks. As illustrated in Fig. 2, GNN+ integrates six widely used techniques: the incorporation of edge features (Gilmer et al., 2017), normalization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014), residual connections (He et al., 2016), feed-forward networks (FFN) (Vaswani et al., 2017), and positional encoding (Vaswani et al., 2017). Each technique serves as a hyperparameter that can be tuned to optimize performance.
We systematically evaluate 3 classic GNNs, GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017), enhanced by the GNN+ framework across 14 well-known graph-level datasets from the GNN Benchmark (Dwivedi et al., 2023), LRGB (Dwivedi et al., 2022), and OGB (Hu et al., 2020). The results demonstrate that the enhanced versions of classic GNNs match or even outperform state-of-the-art (SOTA) GTs, achieving rankings in the top three, including first place in eight datasets, while exhibiting superior efficiency. These findings provide a positive answer to the previously posed question, suggesting that the true potential of GNNs for graph-level applications has been previously underestimated, and that the GNN+ framework effectively unlocks this potential while addressing their inherent limitations. Our ablation study also highlights the importance of each technique used in GNN+ and offers valuable insights for future research.
2. Classic GNNs for Graph-level Tasks
Define a graph as $G = (\mathcal{V}, \mathcal{E}, X, E)$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. The node feature matrix is $X \in \mathbb{R}^{|\mathcal{V}| \times d_V}$, where $|\mathcal{V}|$ is the number of nodes and $d_V$ is the dimension of the node features. 
The edge feature matrix is $E \in \mathbb{R}^{|\mathcal{E}| \times d_E}$, where $|\mathcal{E}|$ is the number of edges and $d_E$ is the dimension of the edge features. Let $A \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ denote the adjacency matrix of $G$.
Message-passing Graph Neural Networks (GNNs) compute node representations $h_v^l$ at each layer $l$ via a message-passing mechanism, defined by Gilmer et al. (2017):

$h_v^l = \mathrm{UPDATE}^l\big(h_v^{l-1}, \mathrm{AGG}^l\big(\{h_u^{l-1} \mid u \in \mathcal{N}(v)\}\big)\big), \quad (1)$

where $\mathcal{N}(v)$ represents the neighboring nodes adjacent to $v$, $\mathrm{AGG}^l$ is the message aggregation function, and $\mathrm{UPDATE}^l$ is the update function. Initially, each node $v$ is assigned a feature vector $h_v^0 = x_v \in \mathbb{R}^d$. The function $\mathrm{AGG}^l$ is then used to aggregate information from the neighbors of $v$ to update its representation. The output of the last layer $L$, i.e., $\mathrm{GNN}(v, A, X) = h_v^L$, is the representation of $v$ produced by the GNN. In this work, we focus on three classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017), which differ in their approach to learning the node representation $h_v^l$.
Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), the vanilla GCN model, is formulated as:

$h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l\Big), \quad (2)$

where $\hat{d}_v = 1 + \sum_{u \in \mathcal{N}(v)} 1$, $\sum_{u \in \mathcal{N}(v)} 1$ denotes the degree of node $v$, $W^l$ is the trainable weight matrix in layer $l$, and $\sigma$ is the activation function, e.g., $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$.
Graph Isomorphism Networks (GIN) (Xu et al., 2018) learn node representations through a different approach:

$h_v^l = \mathrm{MLP}^l\Big((1 + \epsilon) \cdot h_v^{l-1} + \sum_{u \in \mathcal{N}(v)} h_u^{l-1}\Big), \quad (3)$

where $\epsilon$ is a constant, typically set to 0, and $\mathrm{MLP}^l$ denotes a multi-layer perceptron, which usually consists of 2 layers. 
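The two update rules above can be made concrete with a minimal dense-adjacency sketch in NumPy (illustrative only, not the paper's codebase; `mlp` is a hypothetical one-layer stand-in for GIN's usual 2-layer MLP):

```python
import numpy as np

def gcn_layer(A, H, W):
    """GCN update (Eq. 2): symmetric degree-normalized aggregation
    over neighbors plus a self-loop, followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)                          # degrees d_hat_v
    D = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D @ A_hat @ D @ H @ W, 0.0)  # sigma = ReLU

def gin_layer(A, H, mlp, eps=0.0):
    """GIN update (Eq. 3): sum aggregation with a (1 + eps) weighting
    of the central node, then an MLP (eps is typically 0)."""
    return mlp((1.0 + eps) * H + A @ H)

# Tiny 3-node path graph with 4-dimensional features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.random.default_rng(0).normal(size=(3, 4))
W = np.random.default_rng(1).normal(size=(4, 4))
mlp = lambda X: np.maximum(X @ W, 0.0)  # one-layer stand-in
assert gcn_layer(A, H, W).shape == (3, 4)
assert gin_layer(A, H, mlp).shape == (3, 4)
```

Both layers keep the node dimension fixed and only mix information along edges, which is why the normalization and residual tricks discussed later compose cleanly with either rule.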
Residual Gated Graph Convolutional Networks (GatedGCN) (Bresson & Laurent, 2017) enhance traditional graph convolutions by incorporating gating mechanisms, improving adaptability and expressiveness:

$h_v^l = h_v^{l-1} W_1^l + \sum_{u \in \mathcal{N}(v)} \eta_{v,u} \odot h_u^{l-1} W_2^l, \quad (4)$

where $\eta_{v,u} = \sigma(h_v^{l-1} W_3^l + h_u^{l-1} W_4^l)$ is the gating function, and $\sigma$ denotes the sigmoid activation function. This gating function determines how much each neighboring node contributes to updating the representation of the current node. The matrices $W_1^l, W_2^l, W_3^l, W_4^l$ are trainable weight matrices specific to the layer $l$.
Graph-level tasks treat the entire graph, rather than individual nodes or edges, as the fundamental unit for dataset composition, splitting, and training. Formally, given a labeled graph dataset $\Gamma = \{(G_i, y_i)\}_{i=1}^n$, each graph $G_i$ is associated with a label vector $y_i$, representing either categorical labels for classification or continuous values for regression. Next, the dataset $\Gamma$ is typically split into training, validation, and test sets, denoted as $\Gamma = \Gamma_{\mathrm{train}} \cup \Gamma_{\mathrm{val}} \cup \Gamma_{\mathrm{test}}$. Graph-level tasks encompass inductive prediction tasks that operate on entire graphs, as well as on individual nodes or edges (Dwivedi et al., 2022), each corresponding to a distinct label vector $y_i$. Each type of task requires a tailored graph readout function $\mathrm{R}$, which aggregates the output representations to compute the readout result, expressed as:

$h_i^{\mathrm{readout}} = \mathrm{R}\big(\{h_v^L : v \in \mathcal{V}_i\}\big), \quad (5)$

where $\mathcal{V}_i$ represents the set of nodes in the graph $G_i$. For example, for graph prediction tasks, which aim to make predictions about the entire graph, the readout function $\mathrm{R}$ often operates as a global mean pooling function. Finally, for any graph $G_i$, the readout result is passed through a prediction head $g(\cdot)$ to obtain the predicted label $\hat{y}_i = g(h_i^{\mathrm{readout}})$. The training objective is to minimize the total loss $\mathcal{L}(\theta) = \sum_{G_i \in \Gamma_{\mathrm{train}}} \ell(\hat{y}_i, y_i)$ w.r.t. 
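The gated update (Eq. 4) and a mean-pooling readout (Eq. 5) can be sketched the same way; this is a hedged illustrative loop over a dense adjacency matrix, not the repository's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gatedgcn_layer(A, H, W1, W2, W3, W4):
    """GatedGCN update (Eq. 4): each neighbor message h_u W2 is scaled
    elementwise by the gate eta_{v,u} = sigmoid(h_v W3 + h_u W4)."""
    out = H @ W1
    for v in range(A.shape[0]):
        for u in np.nonzero(A[v])[0]:
            eta = sigmoid(H[v] @ W3 + H[u] @ W4)
            out[v] = out[v] + eta * (H[u] @ W2)
    return out

def mean_readout(H_final):
    """Graph readout R (Eq. 5) as global mean pooling over all nodes."""
    return H_final.mean(axis=0)

# With identity W1/W2 and zero W3/W4, every gate is sigmoid(0) = 0.5,
# so each node keeps its feature plus half the sum of its neighbors'.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.ones((3, 2))
I, Z = np.eye(2), np.zeros((2, 2))
out = gatedgcn_layer(A, H, I, I, Z, Z)
assert np.allclose(out[0], 1.5) and np.allclose(out[1], 2.0)
assert np.allclose(mean_readout(out), 5.0 / 3.0)
```

The readout vector would then be fed to the prediction head $g(\cdot)$ to produce $\hat{y}_i$.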
all graphs in the training set $\Gamma_{\mathrm{train}}$, where $y_i$ represents the ground-truth label of $G_i$ and $\theta$ denotes the trainable GNN parameters.
3. GNN+: Enhancing Classic GNNs for Graph-level Tasks
We propose an enhancement to classic GNNs for graph-level tasks by incorporating six popular techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks (FFN), and positional encoding. The enhanced framework, GNN+, is illustrated in Figure 2.
3.1. Edge Feature Integration
Edge features were initially incorporated into some GNN frameworks (Gilmer et al., 2017; Hu et al., 2019) by directly integrating them into the message-passing process to enhance information propagation between nodes. Following this practice, GraphGPS (Rampášek et al., 2022) and subsequent GTs encode edge features within their local modules to enrich node representations.
Figure 2. The architecture of GNN+.
Taking GCN (Eq. 2) as an example, the edge features are integrated into the message-passing process as follows:

$h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big), \quad (6)$

where $W_e^l$ is the trainable weight matrix in layer $l$, and $e_{uv}$ is the feature vector of the edge between $u$ and $v$.
3.2. Normalization
Normalization techniques play a critical role in stabilizing the training of GNNs by mitigating the effects of covariate shift, where the distribution of node embeddings changes across layers during training. By normalizing node embeddings at each layer, the training process becomes more stable, enabling the use of higher learning rates and achieving faster convergence (Cai et al., 2021).
Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Layer Normalization (LN) (Ba et al., 2016) are widely used techniques, typically applied to the output of each layer before the activation function $\sigma(\cdot)$. Here, we use BN:

$h_v^l = \sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big). \quad (7)$

3.3. 
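The edge-feature and BN variants (Eqs. 6-7) can be sketched as follows. This is a dense illustrative toy, not the repository code; the BN here is a bare inference-style normalization without learned scale/shift, and self-loops are assumed to carry no edge term:

```python
import numpy as np

def bn(X, eps=1e-5):
    """Simplified batch normalization over the node dimension."""
    return (X - X.mean(axis=0)) / np.sqrt(X.var(axis=0) + eps)

def gcn_edge_bn_layer(A, H, E, W, We):
    """Eq. 7: sigma(BN(sum_{u in N(v) U {v}} h_u W / sqrt(d_u d_v) + e_uv W_e)).
    E[v, u] holds the feature vector of the edge between u and v."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    msg = np.zeros((A.shape[0], W.shape[1]))
    for v in range(A.shape[0]):
        for u in np.nonzero(A_hat[v])[0]:
            msg[v] += (H[u] @ W) / np.sqrt(d[u] * d[v])
            if A[v, u] > 0:                      # real edge, not the self-loop
                msg[v] += E[v, u] @ We
    return np.maximum(bn(msg), 0.0)              # sigma = ReLU

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H, E = rng.normal(size=(3, 4)), rng.normal(size=(3, 3, 2))
W, We = rng.normal(size=(4, 4)), rng.normal(size=(2, 4))
out = gcn_edge_bn_layer(A, H, E, W, We)
assert out.shape == (3, 4) and (out >= 0).all()
```

Projecting $e_{uv}$ to the hidden dimension with $W_e^l$ is what lets bond features (ZINC, ogbg-molhiv) or superpixel geometry (PascalVOC-SP) enter the aggregation directly.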
Dropout
Dropout (Srivastava et al., 2014), a technique widely used in convolutional neural networks (CNNs) to address overfitting by reducing co-adaptation among hidden neurons (Hinton et al., 2012; Yosinski et al., 2014), has also been found to be effective in addressing similar issues in GNNs (Shu et al., 2022), where the co-adaptation effects propagate and accumulate via message passing among different nodes. Typically, dropout is applied to the embeddings after activation:

$h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big). \quad (8)$

3.4. Residual Connection
Residual connections (He et al., 2016) significantly enhance CNN performance by directly connecting the input of a layer to its output, thus alleviating the problem of vanishing gradients. They were first adopted by the vanilla GCN (Kipf & Welling, 2017) and have since been incorporated into subsequent works such as GatedGCN (Bresson & Laurent, 2017) and DeepGCNs (Li et al., 2019). Formally, residual connections can be integrated into GNNs as follows:

$h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big) + h_v^{l-1}. \quad (9)$

While deeper networks, such as deep CNNs (He et al., 2016; Huang et al., 2017), are capable of extracting more complex features, GNNs encounter challenges like over-smoothing (Li et al., 2018), where deeper models lead to indistinguishable node representations. Consequently, most GNNs are shallow, typically with 2 to 5 layers. However, by incorporating residual connections, we show that deeper GNNs, ranging from 3 to 20 layers, can achieve strong performance.
3.5. Feed-Forward Network
GTs incorporate a feed-forward network (FFN) as a crucial component within each of their layers. The FFN enhances the model's ability to perform complex feature transformations and introduces non-linearity, thereby increasing the network's expressive power. 
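Dropout and the residual connection (Eqs. 8-9) compose around any of the message rules above. A minimal sketch, assuming inverted dropout and a hypothetical `message_fn` standing in for the normalized message pass of Eq. 7 (BN is folded into it for brevity):

```python
import numpy as np

def dropout(X, p, rng, training=True):
    """Inverted dropout: zero entries with probability p and rescale
    survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return X
    mask = rng.random(X.shape) >= p
    return X * mask / (1.0 - p)

def layer_with_residual(h_prev, message_fn, p, rng):
    """Eq. 9: Dropout(sigma(message)) + h^{l-1}, i.e. the skip path lets
    deep stacks (3-20 layers) train without washing out node identity."""
    return dropout(np.maximum(message_fn(h_prev), 0.0), p, rng) + h_prev

# With an identity message and p = 0, the layer just doubles positive inputs.
h = np.ones((3, 4))
out = layer_with_residual(h, lambda x: x, p=0.0, rng=np.random.default_rng(0))
assert np.allclose(out, 2.0)
```

At evaluation time dropout is disabled (`training=False`), matching the standard train/eval distinction.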
Inspired by this, we propose appending a fully-connected FFN at the end of each layer of GNNs, defined as:

$\mathrm{FFN}(h) = \mathrm{BN}\big(\sigma(h W_{\mathrm{FFN}_1}^l) W_{\mathrm{FFN}_2}^l + h\big), \quad (10)$

where $W_{\mathrm{FFN}_1}^l$ and $W_{\mathrm{FFN}_2}^l$ are the trainable weight matrices of the FFN at the $l$-th GNN layer. The node embeddings output by the FFN are then computed as:

$h_v^l = \mathrm{FFN}\Big(\mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big) + h_v^{l-1}\Big). \quad (11)$

3.6. Positional Encoding
Positional encoding (PE) was introduced in the Transformer model (Vaswani et al., 2017) to represent the positions of tokens within a sequence for language modeling. In GTs, PE is used to incorporate graph positional or structural information. The encodings are typically added or concatenated to the input node features $x_v$ before being fed into the GTs.

Table 1. Overview of the datasets used for graph-level tasks.

| Dataset | # graphs | Avg. # nodes | Avg. # edges | Task Type |
|---|---|---|---|---|
| ZINC | 12,000 | 23.2 | 24.9 | Graph regression |
| MNIST | 70,000 | 70.6 | 564.5 | Graph classification |
| CIFAR10 | 60,000 | 117.6 | 941.1 | Graph classification |
| PATTERN | 14,000 | 118.9 | 3,039.3 | Inductive node cls. |
| CLUSTER | 12,000 | 117.2 | 2,150.9 | Inductive node cls. |
| Peptides-func | 15,535 | 150.9 | 307.3 | Graph classification |
| Peptides-struct | 15,535 | 150.9 | 307.3 | Graph regression |
| PascalVOC-SP | 11,355 | 479.4 | 2,710.5 | Inductive node cls. |
| COCO-SP | 123,286 | 476.9 | 2,693.7 | Inductive node cls. |
| MalNet-Tiny | 5,000 | 1,410.3 | 2,859.9 | Graph classification |
| ogbg-molhiv | 41,127 | 25.5 | 27.5 | Graph classification |
| ogbg-molpcba | 437,929 | 26.0 | 28.1 | Graph classification |
| ogbg-ppa | 158,100 | 243.4 | 2,266.1 | Graph classification |
| ogbg-code2 | 452,741 | 125.2 | 124.2 | Graph classification |
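The FFN sub-block (Eq. 10) and the order of operations in the full GNN+ layer (Eq. 11) can be sketched as below; this is an illustrative simplification (inference-style BN without learned parameters), not the paper's implementation:

```python
import numpy as np

def bn(X, eps=1e-5):
    """Simplified batch normalization over the node dimension."""
    return (X - X.mean(axis=0)) / np.sqrt(X.var(axis=0) + eps)

def ffn(H, W1, W2):
    """Eq. 10: BN(sigma(H W1) W2 + H) -- the FFN carries its own
    residual connection inside the BN."""
    return bn(np.maximum(H @ W1, 0.0) @ W2 + H)

# Eq. 11 then composes a full GNN+ layer as:
#   h^l = FFN( Dropout(sigma(BN(message + edge term))) + h^{l-1} )
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 4))
assert ffn(H, W1, W2).shape == (3, 4)
```

As in Transformer blocks, the inner dimension (8 here) is typically wider than the hidden dimension, giving each layer an extra non-linear transformation after message passing.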
Various PE methods have been proposed, such as Laplacian Positional Encoding (LapPE) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), Weisfeiler-Lehman Positional Encoding (WLPE) (Zhang et al., 2020), Random Walk Structural Encoding (RWSE) (Li et al., 2020; Dwivedi et al., 2021; Rampášek et al., 2022), Learnable Structural and Positional Encodings (LSPE) (Dwivedi et al., 2021), and Relative Random Walk Probabilities (RRWP) (Ma et al., 2023). Following this practice, we use RWSE, one of the most efficient PE methods, to improve the performance of GNNs as follows:

$x_v = [\, x_v \,\|\, x_v^{\mathrm{RWSE}} \,] W_{\mathrm{PE}}, \quad (12)$

where $[\cdot \| \cdot]$ denotes concatenation, $x_v^{\mathrm{RWSE}}$ represents the RWSE of node $v$, and $W_{\mathrm{PE}}$ is the trainable weight matrix.
4. Assessment: Experimental Setup
Datasets, Table 1. We use widely adopted graph-level datasets in our experiments, including ZINC, MNIST, CIFAR10, PATTERN, and CLUSTER from the GNN Benchmark (Dwivedi et al., 2023); Peptides-func, Peptides-struct, PascalVOC-SP, COCO-SP, and MalNet-Tiny from the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021); and ogbg-molhiv, ogbg-molpcba, ogbg-ppa, and ogbg-code2 from the Open Graph Benchmark (OGB) (Hu et al., 2020). We follow their respective standard evaluation protocols, including the splits and metrics. For further details, refer to Appendix A.2.
Baselines. Our main focus lies on classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018; Hu et al., 2019), GatedGCN (Bresson & Laurent, 2017), the SOTA GTs: GT (2020), GraphTrans (2021), SAN (2021), Graphormer (2021), SAT (2022), EGT (2022), GraphGPS (2022; 2023), GRPE (2022), Graphormer-URPE (2022), Graphormer-GD (2023), Specformer (2023), LGI-GT (2023), GPTrans-Nano (2023b), Graph ViT/MLP-Mixer (2023), NAGphormer (2023a), DIFFormer (2023), MGT
Table 2. Test performance on five benchmarks from (Dwivedi et al., 2023) (%). 
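Since RWSE is just the diagonal of powers of the random-walk transition matrix, it can be computed in a few lines. This is an illustrative dense sketch (RWSE-K in the hyperparameter tables means K such steps, e.g. K = 20 or 32), not the repository's precompute routine:

```python
import numpy as np

def rwse(A, K):
    """RWSE: for each node v, the k-step return probability
    diag((D^{-1} A)^k)_v for k = 1..K, stacked into a (|V|, K) matrix.
    The result x_v^RWSE is concatenated to x_v and projected by W_PE (Eq. 12)."""
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    feats, Pk = [], np.eye(A.shape[0])
    for _ in range(K):
        Pk = Pk @ P
        feats.append(np.diag(Pk).copy())
    return np.stack(feats, axis=1)

# Triangle graph: no 1-step returns, 2-step return probability is 1/2.
A = np.ones((3, 3)) - np.eye(3)
enc = rwse(A, 2)
assert enc.shape == (3, 2)
assert np.allclose(enc[:, 0], 0.0) and np.allclose(enc[:, 1], 0.5)
```

Because the encoding depends only on graph structure, it can be precomputed once per graph before training.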
Shown is the mean ±s.d. of 5 runs with different random seeds.+denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for ZINC, PATTERN, and CLUSTER, and ∼100K for MNIST and CIFAR10. The top 1st,2ndand3rdresults are highlighted. ZINC MNIST CIFAR10 PATTERN CLUSTER # graphs 12,000 70,000 60,000 14,000 12,000 Avg. # nodes 23.2 70.6 117.6 118.9 117.2 Avg. # edges 24.9 564.5 941.1 3039.3 2150.9 Metric MAE↓ Accuracy ↑ Accuracy ↑ Accuracy ↑ Accuracy ↑ GT (2020) 0.226 ±0.014 90.831 ±0.161 59.753 ±0.293 84.808 ±0.068 73.169 ±0.622 SAN (2021) 0.139 ±0.006 – – 86.581 ±0.037 76.691 ±0.650 Graphormer (2021) 0.122 ±0.006 – – – – SAT (2022) 0.094 ±0.008 – – 86.848 ±0.037 77.856 ±0.104 EGT (2022) 0.108 ±0.009 98.173 ±0.087 68.702 ±0.409 86.821 ±0.020 79.232 ±0.348 GraphGPS (2022) 0.070 ±0.004 98.051 ±0.126 72.298 ±0.356 86.685 ±0.059 78.016 ±0.180 GRPE (2022) 0.094 ±0.002 – – 87.020 ±0.042 – Graphormer-URPE (2022) 0.086 ±0.007 – – – – Graphormer-GD (2023) 0.081 ±0.009 – – – – Specformer (2023) 0.066 ±0.003 – – – – LGI-GT (2023) – – – 86.930 ±0.040 – GPTrans-Nano (2023b) – – – 86.731 ±0.085 – Graph ViT/MLP-Mixer (2023) 0.073 ±0.001 98.460 ±0.090 73.960 ±0.330 – – Exphormer (2023) – 98.414 ±0.038 74.754 ±0.194 86.734 ±0.008 – GRIT (2023) 0.059 ±0.002 98.108 ±0.111 76.468 ±0.881 87.196 ±0.076 80.026 ±0.277 GRED (2024) 0.077 ±0.002 98.383 ±0.012 76.853 ±0.185 86.759 ±0.020 78.495 ±0.103 GEAET (2024) – 98.513 ±0.086 76.634 ±0.427 86.993 ±0.026 – TIGT (2024) 0.057 ±0.002 98.231 ±0.132 73.963 ±0.361 86.681 ±0.062 78.025 ±0.223 Cluster-GT (2024a) 0.071 ±0.004 – – – – GMN (2024) – 98.391 ±0.182 74.560 ±0.381 87.090 ±1.260 – Graph-Mamba (2024) – 98.420 ±0.080 73.700 ±0.340 86.710 ±0.050 76.800 ±0.360 GCN 0.367 ±0.011 90.705 ±0.218 55.710 ±0.381 71.892 ±0.334 68.498 ±0.976 GCN+0.076 ±0.00979.3%↓98.382 ±0.0958.5%↑69.824 ±0.41325.4%↑87.021 ±0.09521.1%↑77.109 ±0.87212.6%↑ GIN 0.526 ±0.051 96.485 ±0.252 55.255 ±1.527 85.387 
±0.136 64.716 ±1.553 GIN+0.065 ±0.00487.6%↓98.285 ±0.1031.9%↑69.592 ±0.28725.9%↑86.842 ±0.0481.7%↑ 74.794 ±0.21315.6%↑ GatedGCN 0.282 ±0.015 97.340 ±0.143 67.312 ±0.311 85.568 ±0.088 73.840 ±0.326 GatedGCN+0.077 ±0.00572.7%↓98.712 ±0.1371.4%↑77.218 ±0.38114.7%↑87.029 ±0.0371.7%↑ 79.128 ±0.2357.1%↑ Time (epoch) of GraphGPS 21s 76s 64s 32s 86s Time (epoch) of GCN+7s 60s 40s 19s 29s (2023), DRew (2023), Exphormer (2023), GRIT (2023), GRED (2024), GEAET (2024), Subgraphormer (2024), TIGT (2024), GECO (2024), GPNN (2024), Cluster-GT (2024a), and the SOTA graph state space models (GSSMs): GMN (2024), Graph-Mamba (2024), GSSC (2024b). Fur- thermore, various other GTs exist in related surveys (Hoang et al., 2024; Shehzad et al., 2024; M ¨uller et al., 2023), empir- ically shown to be inferior to the GTs we compared against for graph-level tasks. We report the performance results of baselines primarily from (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023), with the remaining obtained from their re- spective original papers or official leaderboards whenever possible, as those results are obtained by well-tuned models. Hyperparameter Configurations. We conduct hyperpa- rameter tuning on 3 classic GNNs, consistent with the hy- perparameter search space of GraphGPS (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023). Specifically, we utilize the AdamW optimizer (Loshchilov, 2017) with a learning rate from{0.0001,0.0005,0.001}and an epoch limit of 2000. As discussed in Section 3, we focus on whether to use the edge feature module, normalization (BN), residual connections, FFN, PE (RWSE), and dropout rates from {0.05,0.1,0.15,0.2,0.3}, the number of layers from 3 to 20. Considering the large number of hyperparameters anddatasets, we do not perform an exhaustive search. Addition- ally, we retrain baseline GTs using the same hyperparam- eter search space and training environments as the classic GNNs. 
Since the retrained results did not surpass those in their original papers, we present the results from those sources .GNN+denotes the enhanced version. We report mean scores and standard deviations after 5 independent runs with different random seeds. Detailed hyperparameters are provided in Appendix A. 5. Assessment: Results and Findings 5.1. Overall Performance We evaluate the performance of the enhanced versions of 3 classic GNNs across 14 well-known graph-level datasets. The enhanced versions of classic GNNs achieved state- of-the-art performance, ranking in the top three across 14 datasets , including first place in 8 of them , while also demonstrating superior efficiency . This suggests that the GNN+framework effectively harnesses the po- tential of classic GNNs for graph-level tasks and suc- cessfully mitigates their inherent limitations. 5 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 3. Test performance on five datasets from Long-Range Graph Benchmarks (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). +denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for all. Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny # graphs 15,535 15,535 11,355 123,286 5,000 Avg. # nodes 150.9 150.9 479.4 476.9 1,410.3 Avg. # edges 307.3 307.3 2,710.5 2,693.7 2,859.9 Metric Avg. 
Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ GT (2020) 0.6326 ±0.0126 0.2529 ±0.0016 0.2694 ±0.0098 0.2618 ±0.0031 – SAN (2021) 0.6439 ±0.0075 0.2545 ±0.0012 0.3230 ±0.0039 0.2592 ±0.0158 – GraphGPS (2022) 0.6535 ±0.0041 0.2500 ±0.0005 0.3748 ±0.0109 0.3412 ±0.0044 0.9350 ±0.0041 GraphGPS (2023) 0.6534 ±0.0091 0.2509 ±0.0014 0.4440 ±0.0065 0.3884 ±0.0055 0.9350 ±0.0041 NAGphormer (2023a) – – 0.4006 ±0.0061 0.3458 ±0.0070 – DIFFormer (2023) – – 0.3988 ±0.0045 0.3620 ±0.0012 – MGT (2023) 0.6817 ±0.0064 0.2453 ±0.0025 – – – DRew (2023) 0.7150 ±0.0044 0.2536 ±0.0015 0.3314 ±0.0024 – – Graph ViT/MLP-Mixer (2023) 0.6970 ±0.0080 0.2449 ±0.0016 – – – Exphormer (2023) 0.6258 ±0.0092 0.2512 ±0.0025 0.3446 ±0.0064 0.3430 ±0.0108 0.9402 ±0.0021 GRIT (2023) 0.6988 ±0.0082 0.2460 ±0.0012 – – – Subgraphormer (2024) 0.6415 ±0.0052 0.2475 ±0.0007 – – – GRED (2024) 0.7133 ±0.0011 0.2455 ±0.0013 – – – GEAET (2024) 0.6485 ±0.0035 0.2547 ±0.0009 0.3933 ±0.0027 0.3219 ±0.0052 – TIGT (2024) 0.6679 ±0.0074 0.2485 ±0.0015 – – – GECO (2024) 0.6975 ±0.0025 0.2464 ±0.0009 0.4210 ±0.0080 0.3320 ±0.0032 – GPNN (2024) 0.6955 ±0.0057 0.2454 ±0.0003 – – – Graph-Mamba (2024) 0.6739 ±0.0087 0.2478 ±0.0016 0.4191 ±0.0126 0.3960 ±0.0175 0.9340 ±0.0027 GSSC (2024b) 0.7081 ±0.0062 0.2459 ±0.0020 0.4561 ±0.0039 – 0.9406 ±0.0064 GCN 0.6860 ±0.0050 0.2460 ±0.0007 0.2078 ±0.0031 0.1338 ±0.0007 0.8100 ±0.0081 GCN+0.7261 ±0.0067 5.9%↑0.2421 ±0.0016 1.6%↓0.3357 ±0.0087 62.0%↑0.2733 ±0.0041 104.9% ↑0.9354 ±0.0045 15.5%↑ GIN 0.6621 ±0.0067 0.2473 ±0.0017 0.2718 ±0.0054 0.2125 ±0.0009 0.8898 ±0.0055 GIN+0.7059 ±0.0089 6.6%↑0.2429 ±0.0019 1.8%↓0.3189 ±0.0105 17.3%↑0.2483 ±0.0046 16.9%↑ 0.9325 ±0.0040 4.8%↑ GatedGCN 0.6765 ±0.0047 0.2477 ±0.0009 0.3880 ±0.0040 0.2922 ±0.0018 0.9223 ±0.0065 GatedGCN+0.7006 ±0.0033 3.6%↑0.2431 ±0.0020 1.9%↓0.4263 ±0.0057 9.9%↑ 0.3802 ±0.0015 30.1%↑ 0.9460 ±0.0057 2.6%↑ Time (epoch) of GraphGPS 6s 6s 17s 213s 46s Time (epoch) of GCN+6s 6s 12s 162s 6s GNN Benchmark, Table 2. 
We observe that our GNN+ implementation substantially enhances the performance of classic GNNs, with the most significant improvements on ZINC, PATTERN, and CLUSTER. On MNIST and CIFAR, GatedGCN+outperforms SOTA models such as GEAET and GRED, securing top rankings. Long-Range Graph Benchmark (LRGB), Table 3. The results reveal that classic GNNs can achieve strong perfor- mance across LRGB datasets. Specifically, GCN+excels on the Peptides-func and Peptides-struct datasets. On the other hand, GatedGCN+achieves the highest accuracy on MalNet-Tiny. Furthermore, on PascalVOC-SP and COCO- SP, GatedGCN+significantly improves performance, se- curing the third-best model ranking overall. These results highlight the potential of classic GNNs in capturing long- range interactions in graph-level tasks. Open Graph Benchmark (OGB), Table 4. Finally, we test our method on four OGB datasets. As shown in Table 4, GatedGCN+consistently ranks among the top three mod- els and achieves top performance on three out of the four datasets. On ogbg-ppa, GatedGCN+shows an improve- ment of approximately 9%, ranking first on the OGB leader- board. On ogbg-molhiv and ogbg-molpcba, GatedGCN+ even matches the performance of Graphormer and EGT pre-trained on other datasets. Additionally, on ogbg-code2, GatedGCN+secures the third-highest performance, under-scoring the potential of GNNs for large-scale OGB datasets. 5.2. Ablation Study To examine the unique contributions of different technique used in GNN+, we conduct a series of ablation analysis by selectively removing elements such as edge feature module (Edge.), normalization (Norm), dropout, residual connec- tions (RC), FFN, PE from GCN+, GIN+, and GatedGCN+. The effect of these ablations is assessed across GNN Bench- mark (see Table 5), LRGB, and OGB (see Table 6) datasets. 
Our ablation study demonstrates that each module incor- porated in GNN+—including edge feature integration, normalization, dropout, residual connections, FFN, and PE—is indispensable ; the removal of any single com- ponent results in a degradation of overall performance. Observation 1: The integration of edge features is par- ticularly effective in molecular and image superpixel datasets, where these features carry critical information. In molecular graphs such as ZINC and ogbg-molhiv, edge features represent chemical bond information, which is es- sential for molecular properties. Removing this module leads to a significant performance drop. In protein networks ogbg-ppa, edges represent normalized associations between proteins. Removing the edge feature module results in a sub- 6 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 4. Test performance in four benchmarks from Open Graph Benchmark (OGB) (Hu et al., 2020).+denotes the enhanced version, while the baseline results were obtained from their respective original papers.†indicates the use of additional pretraining datasets, included here for reference only and excluded from ranking. ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # graphs 41,127 437,929 158,100 452,741 Avg. # nodes 25.5 26.0 243.4 125.2 Avg. # edges 27.5 28.1 2,266.1 124.2 Metric AUROC ↑ Avg. 
Precision ↑ Accuracy ↑ F1 score ↑
GT (2020) – – 0.6454 ±0.0033 0.1670 ±0.0015
GraphTrans (2021) – 0.2761 ±0.0029 – 0.1830 ±0.0024
SAN (2021) 0.7785 ±0.2470 0.2765 ±0.0042 – –
Graphormer (pre-trained) (2021) 0.8051 ±0.0053† – – –
SAT (2022) – – 0.7522 ±0.0056 0.1937 ±0.0028
EGT (pre-trained) (2022) 0.8060 ±0.0065† 0.2961 ±0.0024† – –
GraphGPS (2022) 0.7880 ±0.0101 0.2907 ±0.0028 0.8015 ±0.0033 0.1894 ±0.0024
Specformer (2023) 0.7889 ±0.0124 0.2972 ±0.0023 – –
Graph ViT/MLP-Mixer (2023) 0.7997 ±0.0102 – – –
Exphormer (2023) 0.7834 ±0.0044 0.2849 ±0.0025 – –
GRIT (2023) 0.7835 ±0.0054 0.2362 ±0.0020 – –
Subgraphormer (2024) 0.8038 ±0.0192 – – –
GECO (2024) 0.7980 ±0.0200 0.2961 ±0.0008 0.7982 ±0.0042 0.1915 ±0.0020
GSSC (2024b) 0.8035 ±0.0142 – – –
GCN 0.7606 ±0.0097 0.2020 ±0.0024 0.6839 ±0.0084 0.1507 ±0.0018
GCN+ 0.8012 ±0.0124 (5.4%↑) 0.2721 ±0.0046 (34.7%↑) 0.8077 ±0.0041 (18.1%↑) 0.1787 ±0.0026 (18.6%↑)
GIN 0.7835 ±0.0125 0.2266 ±0.0028 0.6892 ±0.0100 0.1495 ±0.0023
GIN+ 0.7928 ±0.0099 (1.2%↑) 0.2703 ±0.0024 (19.3%↑) 0.8107 ±0.0053 (17.7%↑) 0.1803 ±0.0019 (20.6%↑)
GatedGCN 0.7687 ±0.0136 0.2670 ±0.0020 0.7531 ±0.0083 0.1606 ±0.0015
GatedGCN+ 0.8040 ±0.0164 (4.6%↑) 0.2981 ±0.0024 (11.6%↑) 0.8258 ±0.0055 (9.7%↑) 0.1896 ±0.0024 (18.1%↑)
Time (epoch/s) of GraphGPS 96s 196s 276s 1919s
Time (epoch/s) of GCN+ 16s 91s 178s 476s

Table 5. Ablation study on GNN Benchmark (Dwivedi et al., 2023) (%). – indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance.

ZINC MNIST CIFAR10 PATTERN CLUSTER
Metric MAE↓ Accuracy ↑ Accuracy ↑ Accuracy ↑ Accuracy ↑
GCN+ 0.076 ±0.009 98.382 ±0.095 69.824 ±0.413 87.021 ±0.095 77.109 ±0.872
(-) Edge.
0.135 ±0.004 98.153 ±0.042 68.256 ±0.357 86.854 ±0.054 –
(-) Norm 0.107 ±0.011 97.886 ±0.066 60.765 ±0.829 52.769 ±0.874 16.563 ±0.134
(-) Dropout – 97.897 ±0.071 65.693 ±0.461 86.764 ±0.045 74.926 ±0.469
(-) RC 0.159 ±0.016 95.929 ±0.169 58.186 ±0.295 86.059 ±0.274 16.508 ±0.615
(-) FFN 0.132 ±0.021 97.174 ±0.063 63.573 ±0.346 86.746 ±0.088 72.606 ±1.243
(-) PE 0.127 ±0.010 – – 85.597 ±0.241 75.568 ±1.147
GIN+ 0.065 ±0.004 98.285 ±0.103 69.592 ±0.287 86.842 ±0.048 74.794 ±0.213
(-) Edge. 0.122 ±0.009 97.655 ±0.075 68.196 ±0.107 86.714 ±0.036 65.895 ±3.425
(-) Norm 0.096 ±0.006 97.695 ±0.065 64.918 ±0.059 86.815 ±0.855 72.119 ±0.359
(-) Dropout – 98.214 ±0.064 66.638 ±0.873 86.836 ±0.053 73.316 ±0.355
(-) RC 0.137 ±0.031 97.675 ±0.175 64.910 ±0.102 86.645 ±0.125 16.800 ±0.088
(-) FFN 0.104 ±0.003 11.350 ±0.008 60.582 ±0.395 58.511 ±0.016 62.175 ±2.895
(-) PE 0.123 ±0.014 – – 86.592 ±0.049 73.925 ±0.165
GatedGCN+ 0.077 ±0.005 98.712 ±0.137 77.218 ±0.381 87.029 ±0.037 79.128 ±0.235
(-) Edge. 0.119 ±0.001 98.085 ±0.045 72.128 ±0.275 86.879 ±0.017 76.075 ±0.845
(-) Norm 0.088 ±0.003 98.275 ±0.045 71.995 ±0.445 86.942 ±0.023 78.495 ±0.155
(-) Dropout 0.089 ±0.003 98.225 ±0.095 70.383 ±0.429 86.802 ±0.034 77.597 ±0.126
(-) RC 0.106 ±0.002 98.442 ±0.067 75.149 ±0.155 86.845 ±0.025 16.670 ±0.307
(-) FFN 0.098 ±0.005 98.438 ±0.151 76.243 ±0.131 86.935 ±0.025 78.975 ±0.145
(-) PE 0.174 ±0.009 – – 85.595 ±0.065 77.515 ±0.265

stantial accuracy decline, ranging from 0.5083 to 0.7310 for the classic GNNs. Similarly, in image superpixel datasets like CIFAR10, PascalVOC-SP, and COCO-SP, edge features encode spatial relationships between superpixels, which are crucial for maintaining image coherence. However, in code graphs such as ogbg-code2 and MalNet-Tiny, where edges represent call types, edge features are less relevant to the prediction tasks, and their removal has minimal impact.
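The effect in Observation 1 comes down to how edge features enter message passing. The sketch below is a scalar toy of GINE-style edge integration (the edge value is added to the neighbor state before the nonlinearity); it is illustrative only, not the paper's PyG implementation, and the `propagate` name is our own:

```python
def propagate(x, edges, edge_attr, use_edge_features=True):
    """One aggregation step: node v sums relu(h_u + e_uv) over incoming edges (u, v)."""
    agg = [0.0] * len(x)
    for (u, v), e in zip(edges, edge_attr):
        msg = x[u] + (e if use_edge_features else 0.0)  # ablating Edge. drops e_uv
        agg[v] += max(msg, 0.0)  # ReLU on each message
    return agg

x = [1.0, 2.0, 0.5]                # toy node states (e.g., atom embeddings)
edges = [(0, 1), (2, 1)]           # two bonds into node 1
edge_attr = [0.5, 0.25]            # toy bond features
propagate(x, edges, edge_attr)                           # [0.0, 2.25, 0.0]
propagate(x, edges, edge_attr, use_edge_features=False)  # [0.0, 1.5, 0.0]
```

The two results differ exactly by the bond contribution, which is the information the Edge. ablation discards.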
Observation 2: Normalization tends to have a greater impact on larger-scale datasets, whereas its impact is less significant on smaller datasets.

For large-scale datasets such as CIFAR10, COCO-SP, and the OGB datasets, removing normalization leads to significant performance drops. Specifically, on ogbg-ppa, which has 158,100 graphs, ablating normalization results in an accuracy drop of around 15% for the three classic GNNs. This result is consistent with Luo et al. (2024a), who found that normalization is more important for GNNs in node classification on large graphs. In such datasets, where node feature distributions are more complex, normalizing node embeddings is essential for stabilizing the training process.

Observation 3: Dropout proves advantageous for most datasets, with a very low dropout rate being sufficient and optimal.

Our analysis highlights the crucial role of dropout in maintaining the performance of classic GNNs on the GNN Benchmark, LRGB, and large-scale OGB datasets, with its ablation causing significant declines, for instance an 8.8% relative decrease for GatedGCN+ on CIFAR10 and a 20.4% relative decrease on PascalVOC-SP. This trend continues in

Table 6. Ablation study on LRGB and OGB datasets. – indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance.

Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2
Metric Avg. Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ AUROC ↑ Avg. Precision ↑ Accuracy ↑ F1 score ↑
GCN+ 0.7261 ±0.0067 0.2421 ±0.0016 0.3357 ±0.0087 0.2733 ±0.0041 0.9354 ±0.0045 0.8012 ±0.0124 0.2721 ±0.0046 0.8077 ±0.0041 0.1787 ±0.0026
(-) Edge.
0.7191 ±0.0036 – 0.2942 ±0.0043 0.2219 ±0.0060 0.9292 ±0.0034 0.7714 ±0.0204 0.2628 ±0.0019 0.2994 ±0.0062 0.1785 ±0.0033
(-) Norm 0.7107 ±0.0027 0.2509 ±0.0026 0.1802 ±0.0111 0.2332 ±0.0079 0.9236 ±0.0054 0.7753 ±0.0049 0.2528 ±0.0016 0.6705 ±0.0104 0.1679 ±0.0027
(-) Dropout 0.6748 ±0.0055 0.2549 ±0.0025 0.3072 ±0.0069 0.2601 ±0.0046 – 0.7431 ±0.0185 0.2405 ±0.0047 0.7893 ±0.0052 0.1641 ±0.0043
(-) RC – – 0.2734 ±0.0036 0.1948 ±0.0096 0.8916 ±0.0048 – – 0.7520 ±0.0157 0.1785 ±0.0029
(-) FFN – – 0.2786 ±0.0068 0.2314 ±0.0073 0.9118 ±0.0078 0.7432 ±0.0052 0.2621 ±0.0019 0.7672 ±0.0071 0.1594 ±0.0020
(-) PE 0.7069 ±0.0093 0.2447 ±0.0015 – – – 0.7593 ±0.0051 0.2667 ±0.0034 – –
GIN+ 0.7059 ±0.0089 0.2429 ±0.0019 0.3189 ±0.0105 0.2483 ±0.0046 0.9325 ±0.0040 0.7928 ±0.0099 0.2703 ±0.0024 0.8107 ±0.0053 0.1803 ±0.0019
(-) Edge. 0.7033 ±0.0015 0.2442 ±0.0028 0.2956 ±0.0047 0.2259 ±0.0053 0.9286 ±0.0049 0.7597 ±0.0103 0.2702 ±0.0021 0.2789 ±0.0031 0.1752 ±0.0020
(-) Norm 0.6934 ±0.0077 0.2444 ±0.0015 0.2707 ±0.0037 0.2244 ±0.0063 0.9322 ±0.0025 0.7874 ±0.0114 0.2556 ±0.0026 0.6484 ±0.0246 0.1722 ±0.0034
(-) Dropout 0.6384 ±0.0094 0.2531 ±0.0030 0.3153 ±0.0113 – – – 0.2545 ±0.0068 0.7673 ±0.0059 0.1730 ±0.0018
(-) RC 0.6975 ±0.0038 0.2527 ±0.0015 0.2350 ±0.0044 0.1741 ±0.0085 0.9150 ±0.0047 0.7733 ±0.0122 0.1454 ±0.0061 – 0.1617 ±0.0026
(-) FFN – – 0.2393 ±0.0049 0.1599 ±0.0081 0.8944 ±0.0074 – 0.2534 ±0.0033 0.6676 ±0.0039 0.1491 ±0.0016
(-) PE 0.6855 ±0.0027 0.2455 ±0.0019 0.3141 ±0.0031 – – 0.7791 ±0.0268 0.2601 ±0.0023 – –
GatedGCN+ 0.7006 ±0.0033 0.2431 ±0.0020 0.4263 ±0.0057 0.3802 ±0.0015 0.9460 ±0.0057 0.8040 ±0.0164 0.2981 ±0.0024 0.8258 ±0.0055 0.1896 ±0.0024
(-) Edge.
0.6882 ±0.0028 0.2466 ±0.0018 0.3764 ±0.0117 0.3172 ±0.0109 0.9372 ±0.0062 0.7831 ±0.0157 0.2951 ±0.0028 0.0948 ±0.0000 0.1891 ±0.0021
(-) Norm 0.6733 ±0.0026 0.2474 ±0.0015 0.3628 ±0.0043 0.3527 ±0.0051 0.9326 ±0.0056 0.7879 ±0.0178 0.2748 ±0.0012 0.6864 ±0.0165 0.1743 ±0.0026
(-) Dropout 0.6695 ±0.0101 0.2508 ±0.0014 0.3389 ±0.0066 0.3393 ±0.0051 – – 0.2582 ±0.0036 0.8088 ±0.0062 0.1724 ±0.0027
(-) RC – 0.2498 ±0.0034 0.4075 ±0.0052 0.3475 ±0.0064 0.9402 ±0.0054 0.7833 ±0.0177 0.2897 ±0.0016 0.8099 ±0.0053 0.1844 ±0.0025
(-) FFN – – – 0.3508 ±0.0049 0.9364 ±0.0059 – 0.2875 ±0.0022 – 0.1718 ±0.0024
(-) PE 0.6729 ±0.0084 0.2461 ±0.0025 0.4052 ±0.0031 – – 0.7771 ±0.0057 0.2813 ±0.0022 – –

large-scale OGB datasets, where removing dropout results in a 5–13% performance drop across the three classic GNNs on ogbg-molpcba. Notably, 97% of the optimal dropout rates are ≤0.2, and 64% are ≤0.1, indicating that a very low dropout rate is both sufficient and optimal for graph-level tasks. Interestingly, this finding for graph-level tasks contrasts with the observations of Luo et al. (2024a) for node-level tasks, where a higher dropout rate is typically required.

Observation 4: Residual connections are generally essential, except in shallow GNNs applied to small graphs.

Removing residual connections generally leads to significant performance drops across datasets, with the only exceptions found in the peptide datasets. Although similar in the number of nodes to CLUSTER and PATTERN, the peptide datasets involve GNNs with only 3-5 layers, while the others use deeper networks with over 10 layers. For shallow networks on small graphs, residual connections may not be as beneficial and can even hurt performance by disrupting feature flow. In contrast, deeper networks on larger graphs rely on residual connections to maintain gradient flow and enable stable, reliable long-range information exchange.
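The intuition behind Observation 4, that deep stacks need skip connections to keep signal (and gradients) flowing, can be seen with a one-line contractive toy layer. This is a didactic sketch, not a GNN:

```python
def deep_stack(x, num_layers, residual=True):
    """Repeatedly apply a toy layer that shrinks its input by 10x."""
    h = x
    for _ in range(num_layers):
        out = 0.1 * h                     # stand-in for a contractive layer transform
        h = h + out if residual else out  # residual: h <- h + f(h)
    return h

deep_stack(1.0, 16, residual=False)  # 1e-16: after 16 layers the signal has all but vanished
deep_stack(1.0, 16, residual=True)   # 1.1**16, about 4.6: the signal survives
```

With only 3-5 layers, as in the peptide datasets, the contraction never gets a chance to compound, which is consistent with residuals mattering less (or even hurting) there.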
Observation 5: FFN is crucial for GIN+ and GCN+, greatly impacting their performance across datasets.

Ablating FFN leads to substantial performance declines for GIN+ and GCN+ across almost all datasets, highlighting its essential role in graph-level tasks. Notably, on MNIST, removing FFN leads to an 88% relative accuracy drop for GIN+. This is likely because the architectures of GIN+ and GCN+ rely heavily on FFN for learning complex node feature representations. In contrast, GatedGCN+ uses gating mechanisms to adaptively adjust the importance of neighboring nodes' information, reducing the need for additional feature transformations. The only exceptions are observed in the peptide datasets, where FFN is not used in any of the three models. This may be due to the shallow GNN architectures, where complex feature transformations are less necessary.

Observation 6: PE is particularly effective for small-scale datasets, but negligible for large-scale datasets.

Removing PE significantly reduces performance for classic GNNs on small-scale datasets like ZINC, PATTERN, CLUSTER, Peptides-func, and ogbg-molhiv, which only contain 10,000-40,000 graphs. By contrast, on large-scale datasets like ogbg-code2, ogbg-molpcba, ogbg-ppa, and COCO-SP (over 100,000 graphs), the impact of PE is less pronounced. This may be because smaller datasets rely more on PE to capture graph structure, whereas larger datasets benefit from the abundance of data, reducing the need for PE.

6. Conclusion

This study highlights the often-overlooked potential of classic GNNs in tackling graph-level tasks. By integrating six widely used techniques into a unified GNN+ framework, we enhance three classic GNNs for graph-level tasks. Evaluations on 14 benchmark datasets reveal that these enhanced GNNs match or outperform GTs, while also demonstrating greater efficiency.
These findings challenge the prevailing belief that GTs are inherently superior, reaffirming the capability of simple GNN structures as powerful models.

Impact Statements

This paper presents work whose goal is to advance the field of Graph Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205, 2020.
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bar-Shalom, G., Bevilacqua, B., and Maron, H. Subgraphormer: Unifying subgraph gnns and graph transformers via graph products. arXiv preprint arXiv:2402.08450, 2024.
Behrouz, A. and Hashemi, F. Graph mamba: Towards learning on graphs with state space models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 119–130, 2024.
Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. arXiv preprint arXiv:2303.01028, 2023.
Bresson, X. and Laurent, T. Residual gated graph convnets. arXiv preprint arXiv:1711.07553, 2017.
Cai, T., Luo, S., Xu, K., He, D., Liu, T.-Y., and Wang, L. Graphnorm: A principled approach to accelerating graph neural network training. In International Conference on Machine Learning, pp. 1204–1215. PMLR, 2021.
Chen, D., Lin, Y., Li, W., Li, P., Zhou, J., and Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3438–3445, 2020.
Chen, D., O'Bray, L., and Borgwardt, K. Structure-aware transformer for graph representation learning. In International Conference on Machine Learning, pp. 3469–3489.
PMLR, 2022. Chen, J., Gao, K., Li, G., and He, K. NAGphormer: A tokenized graph transformer for node classification in large graphs. In The Eleventh International Confer- ence on Learning Representations , 2023a. URL https: //openreview.net/forum?id=8KYeilT3Ow. Chen, Z., Tan, H., Wang, T., Shen, T., Lu, T., Peng, Q., Cheng, C., and Qi, Y . Graph propagation trans- former for graph representation learning. arXiv preprint arXiv:2305.11424 , 2023b.Choi, Y . Y ., Park, S. W., Lee, M., and Woo, Y . Topology-informed graph transformer. arXiv preprint arXiv:2402.02005 , 2024. Ding, Y ., Orvieto, A., He, B., and Hofmann, T. Recurrent distance-encoding neural networks for graph representa- tion learning, 2024. URL https://openreview.net/forum? id=lNIj5FdXsC. Dwivedi, V . P. and Bresson, X. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699 , 2020. Dwivedi, V . P., Luu, A. T., Laurent, T., Bengio, Y ., and Bres- son, X. Graph neural networks with learnable structural and positional representations. In International Confer- ence on Learning Representations , 2021. Dwivedi, V . P., Ramp ´aˇsek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph bench- mark. arXiv preprint arXiv:2206.08164 , 2022. Dwivedi, V . P., Joshi, C. K., Luu, A. T., Laurent, T., Ben- gio, Y ., and Bresson, X. Benchmarking graph neural networks. Journal of Machine Learning Research , 24 (43):1–48, 2023. Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428 , 2019. Freitas, S. and Dong, Y . A large-scale database for graph representation learning. Advances in neural information processing systems , 2021. Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chem- istry. In International conference on machine learning , pp. 1263–1272. PMLR, 2017. Gutteridge, B., Dong, X., Bronstein, M. M., and Di Gio- vanni, F. 
Drew: Dynamically rewired message pass- ing with delay. In International Conference on Machine Learning , pp. 12252–12267. PMLR, 2023. Hamilton, W., Ying, Z., and Leskovec, J. Inductive repre- sentation learning on large graphs. Advances in neural information processing systems , 30, 2017. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learn- ing for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 770–778, 2016. He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y ., and Bresson, X. A generalization of vit/mlp-mixer to graphs. InInternational conference on machine learning , pp. 12724–12745. PMLR, 2023. 9 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 , 2012. Hoang, V . T., Lee, O., et al. A survey on structure-preserving graph transformers. arXiv preprint arXiv:2401.16176 , 2024. Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V ., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265 , 2019. Hu, W., Fey, M., Zitnik, M., Dong, Y ., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems , 33:22118–22133, 2020. Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 4700–4708, 2017. Huang, S., Song, Y ., Zhou, J., and Lin, Z. Cluster-wise graph transformer with dual-granularity kernelized at- tention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024a. URL https://openreview.net/forum?id=3j2nasmKkP. Huang, Y ., Miao, S., and Li, P. 
What can we learn from state space models for machine learning on graphs? arXiv preprint arXiv:2406.05815 , 2024b. Hussain, M. S., Zaki, M. J., and Subramanian, D. Global self-attention as a replacement for graph convolution. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining , pp. 655–665, 2022. Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. InInternational conference on machine learning , pp. 448– 456. pmlr, 2015. Kipf, T. N. and Welling, M. Semi-supervised classifica- tion with graph convolutional networks. In International Conference on Learning Representations , 2017. URL https://openreview.net/forum?id=SJU4ayYgl. Kreuzer, D., Beaini, D., Hamilton, W., L ´etourneau, V ., and Tossou, P. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems , 34:21618–21629, 2021. Li, G., Muller, M., Thabet, A., and Ghanem, B. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF international conference on computer vision , pp. 9267–9276, 2019.Li, P., Wang, Y ., Wang, H., and Leskovec, J. Distance en- coding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems , 33:4465–4478, 2020. Li, Q., Han, Z., and Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI conference on artificial intelligence , 2018. Liang, J., Chen, M., and Liang, J. Graph external attention enhanced transformer. arXiv preprint arXiv:2405.21061 , 2024. Lin, C., Ma, L., Chen, Y ., Ouyang, W., Bronstein, M. M., and Torr, P. Understanding graph transformers by gen- eralized propagation, 2024. URL https://openreview.net/ forum?id=JfjduOxrTY. Loshchilov, I. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. Luo, S., Li, S., Zheng, S., Liu, T.-Y ., Wang, L., and He, D. 
Your transformer may not be as powerful as you expect. Advances in Neural Information Processing Systems , 35: 4301–4315, 2022. Luo, Y ., Shi, L., and Thost, V . Improving self-supervised molecular representation learning using persistent homol- ogy. In Thirty-seventh Conference on Neural Information Processing Systems , 2023a. URL https://openreview.net/ forum?id=wEiUGpcr0M. Luo, Y ., Shi, L., Xu, M., Ji, Y ., Xiao, F., Hu, C., and Shan, Z. Impact-oriented contextual scholar profiling using self-citation graphs. arXiv preprint arXiv:2304.12217 , 2023b. Luo, Y ., Thost, V ., and Shi, L. Transformers over directed acyclic graphs. In Thirty-seventh Conference on Neural Information Processing Systems , 2023c. URL https:// openreview.net/forum?id=g49s1N5nmO. Luo, Y ., Shi, L., and Wu, X.-M. Classic GNNs are strong baselines: Reassessing GNNs for node classification. In The Thirty-eight Conference on Neural Information Pro- cessing Systems Datasets and Benchmarks Track , 2024a. URL https://openreview.net/forum?id=xkljKdGe4E. Luo, Y ., Thost, V ., and Shi, L. Transformers over directed acyclic graphs. Advances in Neural Information Process- ing Systems , 36, 2024b. Luo, Y ., Li, H., Liu, Q., Shi, L., and Wu, X.-M. Node identifiers: Compact, discrete representations for effi- cient graph learning. In The Thirteenth International Conference on Learning Representations , 2025a. URL https://openreview.net/forum?id=t9lS1lX9FQ. 10 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Luo, Y ., Wu, X.-M., and Zhu, H. Beyond random masking: When dropout meets graph convolutional networks. In The Thirteenth International Conference on Learning Representations , 2025b. URL https://openreview.net/ forum?id=PwxYoMvmvy. Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P., and Lim, S.-N. Graph inductive biases in transformers without message passing. arXiv preprint arXiv:2305.17589 , 2023. 
Min, E., Chen, R., Bian, Y ., Xu, T., Zhao, K., Huang, W., Zhao, P., Huang, J., Ananiadou, S., and Rong, Y . Trans- former for graphs: An overview from architecture per- spective. arXiv preprint arXiv:2202.08455 , 2022. Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In Pro- ceedings of the AAAI conference on artificial intelligence , volume 33, pp. 4602–4609, 2019. Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M. Tudataset: A collection of bench- mark datasets for learning with graphs. arXiv preprint arXiv:2007.08663 , 2020. M¨uller, L., Galkin, M., Morris, C., and Ramp ´aˇsek, L. Attending to graph transformers. arXiv preprint arXiv:2302.04181 , 2023. Ngo, N. K., Hy, T. S., and Kondor, R. Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics , 159(3), 2023. Niepert, M., Ahmed, M., and Kutzkov, K. Learning con- volutional neural networks for graphs. In International conference on machine learning , pp. 2014–2023. PMLR, 2016. Park, W., Chang, W., Lee, D., Kim, J., and Hwang, S.-w. Grpe: Relative positional encoding for graph transformer. arXiv preprint arXiv:2201.12787 , 2022. Ramp ´aˇsek, L., Galkin, M., Dwivedi, V . P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scal- able graph transformer. arXiv preprint arXiv:2205.12454 , 2022. Sancak, K., Hua, Z., Fang, J., Xie, Y ., Malevich, A., Long, B., Balin, M. F., and C ¸ataly ¨urek, ¨U. V . A scalable and effective alternative to graph transformers. arXiv preprint arXiv:2406.12059 , 2024. Shehzad, A., Xia, F., Abid, S., Peng, C., Yu, S., Zhang, D., and Verspoor, K. Graph transformers: A survey. arXiv preprint arXiv:2407.09777 , 2024.Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. 
Exphormer: Sparse transformers for graphs. arXiv preprint arXiv:2303.06147 , 2023. Shu, J., Xi, B., Li, Y ., Wu, F., Kamhoua, C., and Ma, J. Understanding dropout for graph neural networks. In Companion Proceedings of the Web Conference 2022 , pp. 1128–1138, 2022. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research , 15(1):1929–1958, 2014. Tang, J., Sun, J., Wang, C., and Yang, Z. Social influence analysis in large-scale networks. In Proceedings of the 15th ACM SIGKDD international conference on Knowl- edge discovery and data mining , pp. 807–816, 2009. T¨onshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? reassessing the long-range graph benchmark. arXiv preprint arXiv:2309.00367 , 2023. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. At- tention is all you need. Advances in neural information processing systems , 30, 2017. Veliˇckovi ´c, P., Cucurull, G., Casanova, A., Romero, A., Li`o, P., and Bengio, Y . Graph attention networks. In International Conference on Learning Representations , 2018. Wang, C., Tsepa, O., Ma, J., and Wang, B. Graph-mamba: Towards long-range graph sequence modeling with se- lective state spaces. arXiv preprint arXiv:2402.00789 , 2024. Wu, Q., Yang, C., Zhao, W., He, Y ., Wipf, D., and Yan, J. DIFFormer: Scalable (graph) transformers induced by en- ergy constrained diffusion. In The Eleventh International Conference on Learning Representations , 2023. URL https://openreview.net/forum?id=j6zUzrapY3L. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y . A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning sys- tems, 32(1):4–24, 2020. Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. 
Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems , 34:13266– 13279, 2021. Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 , 2018. 11 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yang, Z., Cohen, W., and Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning , pp. 40–48. PMLR, 2016. Yin, S. and Zhong, G. Lgi-gt: Graph transformers with local and global operators interleaving. 2023. Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y ., and Liu, T.-Y . Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems , 34:28877–28888, 2021. Yosinski, J., Clune, J., Bengio, Y ., and Lipson, H. How trans- ferable are features in deep neural networks? Advances in neural information processing systems , 27, 2014. Yun, S., Jeong, M., Kim, R., Kang, J., and Kim, H. J. Graph transformer networks. Advances in neural information processing systems , 32, 2019. Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of GNNs via graph biconnectivity. In The Eleventh International Conference on Learning Rep- resentations , 2023. URL https://openreview.net/forum? id=r9hNv76KoT3. Zhang, J., Zhang, H., Xia, C., and Sun, L. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140 , 2020. 12 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence A. Datasets and Experimental Details A.1. Computing Environment Our implementation is based on PyG (Fey & Lenssen, 2019). The experiments are conducted on a single workstation with 8 RTX 3090 GPUs. A.2. Datasets Table 7 presents a summary of the statistics and characteristics of the datasets. 
•GNN Benchmark (Dwivedi et al., 2023). ZINC contains molecular graphs with node features representing atoms and edge features representing bonds. The task is to regress the constrained solubility (logP) of the molecule. MNIST and CIFAR10 are adapted from image classification datasets, where each image is represented as an 8-nearest-neighbor graph of SLIC superpixels, with nodes representing superpixels and edges representing spatial relationships. The 10-class classification tasks follow the original image classification tasks. PATTERN and CLUSTER are synthetic datasets sampled from the Stochastic Block Model (SBM) for inductive node classification, with tasks involving subgraph pattern recognition and cluster ID inference. For all datasets, we adhere to the respective training protocols and standard evaluation splits (Dwivedi et al., 2023).

•Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). Peptides-func and Peptides-struct are atomic graphs of peptides from SATPdb, with tasks of multi-label graph classification into 10 peptide functional classes and graph regression of 11 3D structural properties, respectively. PascalVOC-SP and COCO-SP are node classification datasets derived from the Pascal VOC and MS COCO images by SLIC superpixelization, where each superpixel node belongs to a particular object class. We did not use PCQM-Contact from (Dwivedi et al., 2022) as its download link was no longer valid. MalNet-Tiny (Freitas & Dong, 2021) is a subset of MalNet with 5,000 function call graphs (FCGs) from Android APKs, where the task is to predict the software type based on structure alone. For each dataset, we follow standard training protocols and splits (Dwivedi et al., 2022; Freitas & Dong, 2021).
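The superpixel graphs above are built by linking each SLIC superpixel to its nearest neighbors. A minimal sketch of that k-NN construction from 2D centroids (illustrative only; the benchmark's actual pipeline also attaches superpixel intensity features and edge weights, and the `knn_graph` helper here is our own):

```python
import math

def knn_graph(coords, k=8):
    """Directed k-nearest-neighbor edge list over 2D points, as in the
    8-NN superpixel graphs of MNIST/CIFAR10 (each node links to its k closest)."""
    edges = []
    for i, (xi, yi) in enumerate(coords):
        by_dist = sorted((math.hypot(xi - xj, yi - yj), j)
                         for j, (xj, yj) in enumerate(coords) if j != i)
        edges.extend((i, j) for _, j in by_dist[:k])
    return edges

# Five toy centroids: a cluster of three near the origin, a pair far away.
centroids = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5)]
edges = knn_graph(centroids, k=2)  # 5 nodes x 2 neighbors = 10 directed edges
```

Note that the resulting graph is directed (k-NN relations are not symmetric); the benchmark graphs are then used with standard message passing over these edges.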
•Open Graph Benchmark (OGB) (Hu et al., 2020). We also consider a collection of larger-scale datasets from OGB, containing graphs in the range of hundreds of thousands to millions: ogbg-molhiv and ogbg-molpcba are molecular property prediction datasets from MoleculeNet. ogbg-molhiv involves binary classification of HIV inhibition, while ogbg-molpcba predicts the results of 128 bioassays in a multi-task setting. ogbg-ppa contains protein-protein association networks, where nodes represent proteins and edges encode normalized associations between them; the task is to classify the origin of the network among 37 taxonomic groups. ogbg-code2 consists of abstract syntax trees (ASTs) from Python source code, with the task of predicting the first 5 subtokens of the function's name. We maintain all the OGB standard evaluation settings (Hu et al., 2020).

Table 7. Overview of the datasets used for graph-level tasks (Dwivedi et al., 2023; 2022; Hu et al., 2020; Freitas & Dong, 2021).

Dataset # graphs Avg. # nodes Avg. # edges # node/edge feats Prediction level Prediction task Metric
ZINC 12,000 23.2 24.9 28/1 graph regression MAE
MNIST 70,000 70.6 564.5 3/1 graph 10-class classif. Accuracy
CIFAR10 60,000 117.6 941.1 5/1 graph 10-class classif. Accuracy
PATTERN 14,000 118.9 3,039.3 3/1 inductive node binary classif. Accuracy
CLUSTER 12,000 117.2 2,150.9 7/1 inductive node 6-class classif. Accuracy
Peptides-func 15,535 150.9 307.3 9/3 graph 10-task classif. Avg. Precision
Peptides-struct 15,535 150.9 307.3 9/3 graph 11-task regression MAE
PascalVOC-SP 11,355 479.4 2,710.5 14/2 inductive node 21-class classif. F1 score
COCO-SP 123,286 476.9 2,693.7 14/2 inductive node 81-class classif. F1 score
MalNet-Tiny 5,000 1,410.3 2,859.9 5/1 graph 5-class classif. Accuracy
ogbg-molhiv 41,127 25.5 27.5 9/3 graph binary classif. AUROC
ogbg-molpcba 437,929 26.0 28.1 9/3 graph 128-task classif. Avg. Precision
ogbg-ppa 158,100 243.4 2,266.1 1/7 graph 37-task classif.
Accuracy
ogbg-code2 452,741 125.2 124.2 2/2 graph 5 token sequence F1 score

A.3. Hyperparameters and Reproducibility

Please note that we mainly follow the experiment settings of GraphGPS (Rampášek et al., 2022; Tönshoff et al., 2023). For the hyperparameter selections of classic GNNs, in addition to what we have covered, we list other settings in Tables 8, 9, 10, 11, 12, and 13. Further details regarding hyperparameters can be found in our code. In all experiments, we use the validation set to select the best hyperparameters. GNN+ denotes the enhanced implementation of the corresponding GNN model. Our code is available under the MIT License.

Table 8. Hyperparameter settings of GCN+ on benchmarks from (Dwivedi et al., 2023).

Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER
# GNN Layers 12 6 5 12 12
Edge Feature Module True True True True False
Normalization BN BN BN BN BN
Dropout 0.0 0.15 0.05 0.05 0.1
Residual Connections True True True True True
FFN True True True True True
PE RWSE-32 False False RWSE-32 RWSE-20
Hidden Dim 64 60 65 90 90
Graph Pooling add mean mean – –
Batch Size 32 16 16 32 16
Learning Rate 0.001 0.0005 0.001 0.001 0.001
# Epochs 2000 200 200 200 100
# Warmup Epochs 50 5 5 5 5
Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5
# Parameters 260,177 112,570 114,345 517,219 516,674
Time (epoch) 7.6s 60.1s 40.2s 19.5s 29.7s

Table 9. Hyperparameter settings of GCN+ on LRGB and OGB datasets.
Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # GNN Layers 3 5 14 18 8 4 10 4 4 Edge Feature Module True False True True True True True True True Normalization BN BN BN BN BN BN BN BN BN Dropout 0.2 0.2 0.1 0.05 0.0 0.1 0.2 0.2 0.2 Residual Connections False False True True True False False True True FFN False False True True True True True True True PE RWSE-32 RWSE-32 False False False RWSE-20 RWSE-16 False False Hidden Dim 275 255 85 70 110 256 512 512 512 Graph Pooling mean mean – – max mean mean mean mean Batch Size 16 32 50 50 16 32 512 32 32 Learning Rate 0.001 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001 # Epochs 300 300 200 300 150 100 100 400 30 # Warmup Epochs 5 5 10 10 10 5 5 10 2 Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6 # Parameters 507,351 506,127 520,986 460,611 494,235 1,407,641 13,316,700 5,549,605 23,291,826 Time (epoch) 6.9s 6.6s 12.5s 162.5s 6.6s 16.3s 91.4s 178.2s 476.3s 14 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 10. Hyperparameter settings of GIN+on benchmarks from (Dwivedi et al., 2023). Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER # GNN Layers 12 5 5 8 10 Edge Feature Module True True True True True Normalization BN BN BN BN BN Dropout 0.0 0.1 0.05 0.05 0.05 Residual Connections True True True True True FFN True True True True True PE RWSE-20 False False RWSE-32 RWSE-20 Hidden Dim 80 60 60 100 90 Graph Pooling sum mean mean – – Batch Size 32 16 16 32 16 Learning Rate 0.001 0.001 0.001 0.001 0.0005 # Epochs 2000 200 200 200 100 # Warmup Epochs 50 5 5 5 5 Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5 # Parameters 477,241 118,990 115,450 511,829 497,594 Time (epoch) 9.4s 56.8s 46.3s 18.5s 20.5s Table 11. Hyperparameter settings of GIN+on LRGB and OGB datasets. 
Columns: Peptides-func / Peptides-struct / PascalVOC-SP / COCO-SP / MalNet-Tiny / ogbg-molhiv / ogbg-molpcba / ogbg-ppa / ogbg-code2
# GNN Layers: 3 / 5 / 16 / 16 / 5 / 3 / 16 / 5 / 4
Edge Feature Module: True / True / True / True / True / True / True / True / True
Normalization: BN / BN / BN / BN / BN / BN / BN / BN / BN
Dropout: 0.2 / 0.2 / 0.1 / 0.0 / 0.0 / 0.0 / 0.3 / 0.15 / 0.1
Residual Connections: True / True / True / True / True / True / True / False / True
FFN: False / False / True / True / True / False / True / True / True
PE: RWSE-32 / RWSE-32 / RWSE-32 / False / False / RWSE-20 / RWSE-16 / False / False
Hidden Dim: 240 / 200 / 70 / 70 / 130 / 256 / 300 / 512 / 512
Graph Pooling: mean / mean / – / – / max / mean / mean / mean / mean
Batch Size: 16 / 32 / 50 / 50 / 16 / 32 / 512 / 32 / 32
Learning Rate: 0.0005 / 0.001 / 0.001 / 0.001 / 0.0005 / 0.0001 / 0.0005 / 0.0003 / 0.0001
# Epochs: 300 / 250 / 200 / 300 / 150 / 100 / 100 / 300 / 30
# Warmup Epochs: 5 / 5 / 10 / 10 / 10 / 5 / 5 / 10 / 2
Weight Decay: 0.0 / 0.0 / 0.0 / 0.0 / 1e-5 / 1e-5 / 1e-5 / 1e-5 / 1e-6
# Parameters: 506,126 / 518,127 / 486,039 / 487,491 / 514,545 / 481,433 / 8,774,720 / 8,173,605 / 24,338,354
Time (epoch): 7.4s / 6.1s / 14.8s / 169.2s / 5.9s / 10.9s / 89.2s / 213.9s / 489.8s

Table 12. Hyperparameter settings of GatedGCN+ on benchmarks from (Dwivedi et al., 2023).
Columns: ZINC / MNIST / CIFAR10 / PATTERN / CLUSTER
# GNN Layers: 9 / 10 / 10 / 12 / 16
Edge Feature Module: True / True / True / True / True
Normalization: BN / BN / BN / BN / BN
Dropout: 0.05 / 0.05 / 0.15 / 0.2 / 0.2
Residual Connections: True / True / True / True / True
FFN: True / True / True / True / True
PE: RWSE-20 / False / False / RWSE-32 / RWSE-20
Hidden Dim: 70 / 35 / 35 / 64 / 56
Graph Pooling: sum / mean / mean / – / –
Batch Size: 32 / 16 / 16 / 32 / 16
Learning Rate: 0.001 / 0.001 / 0.001 / 0.0005 / 0.0005
# Epochs: 2000 / 200 / 200 / 200 / 100
# Warmup Epochs: 50 / 5 / 5 / 5 / 5
Weight Decay: 1e-5 / 1e-5 / 1e-5 / 1e-5 / 1e-5
# Parameters: 413,355 / 118,940 / 116,490 / 466,001 / 474,574
Time (epoch): 10.5s / 137.9s / 115.0s / 32.6s / 34.1s

Table 13. Hyperparameter settings of GatedGCN+ on LRGB and OGB datasets.
Columns: Peptides-func / Peptides-struct / PascalVOC-SP / COCO-SP / MalNet-Tiny / ogbg-molhiv / ogbg-molpcba / ogbg-ppa / ogbg-code2
# GNN Layers: 5 / 4 / 12 / 20 / 6 / 3 / 10 / 4 / 5
Edge Feature Module: True / True / True / True / True / True / True / True / True
Normalization: BN / BN / BN / BN / BN / BN / BN / BN / BN
Dropout: 0.05 / 0.2 / 0.15 / 0.05 / 0.0 / 0.0 / 0.2 / 0.15 / 0.2
Residual Connections: False / True / True / True / True / True / True / True / True
FFN: False / False / False / True / True / False / True / False / True
PE: RWSE-32 / RWSE-32 / RWSE-32 / False / False / RWSE-20 / RWSE-16 / False / False
Hidden Dim: 135 / 145 / 95 / 52 / 100 / 256 / 256 / 512 / 512
Graph Pooling: mean / mean / – / – / max / mean / mean / mean / mean
Batch Size: 16 / 32 / 32 / 50 / 16 / 32 / 512 / 32 / 32
Learning Rate: 0.0005 / 0.001 / 0.001 / 0.001 / 0.0005 / 0.0001 / 0.0005 / 0.0003 / 0.0001
# Epochs: 300 / 300 / 200 / 300 / 150 / 100 / 100 / 300 / 30
# Warmup Epochs: 5 / 5 / 10 / 10 / 10 / 5 / 5 / 10 / 2
Weight Decay: 0.0 / 0.0 / 0.0 / 0.0 / 1e-5 / 1e-5 / 1e-5 / 1e-5 / 1e-6
# Parameters: 521,141 / 492,897 / 559,094 / 508,589 / 550,905 / 1,076,633 / 6,016,860 / 5,547,557 / 29,865,906
Time (epoch): 17.3s / 8.0s / 21.3s / 208.8s / 8.9s / 15.1s / 85.1s / 479.8s / 640.1s
 | 4 | 1 | The paper describes training across 14 well-known graph-level datasets with a mean parameter count of approximately 500K for classic GNNs, which is manageable for modern GPUs. Assuming training occurs over 2000 epochs, the time per epoch for the enhanced GNNs is reported to be less than that for SOTA GTs, suggesting it will be less than 100 seconds per epoch. With this estimate, approximately 4 hours of total training time can be expected on a single GPU, especially if optimized correctly. Given that the architecture incorporates techniques that increase training efficiency, including normalization and dropout, this further supports the viability of training within the proposed timeframe. 
| yes | Yes | Graph | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00.000Z | [https://github.com/LUOyk1999/GNNPlus] | 1 | http://snap.stanford.edu/ogb/data/graphproppred/csv_mol_download/hiv.zip | approx 40 min - ( 100 epochs * 22.8s) | https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing | Yes | null |
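The "approx 40 min" entry in the verification field above is plain arithmetic; a minimal sketch, where the 22.8 s/epoch cost is taken from that field as an assumption about the chosen configuration:

```python
# Rough wall-clock estimate for a fixed-epoch training run.
# seconds_per_epoch (22.8 s) comes from the row's verification field;
# treat it as an assumed per-epoch cost, not a measured constant.
def training_time_minutes(epochs: int, seconds_per_epoch: float) -> float:
    return epochs * seconds_per_epoch / 60.0

if __name__ == "__main__":
    est = training_time_minutes(100, 22.8)
    print(f"~{est:.0f} min")  # 100 epochs at 22.8 s/epoch = 38 min
```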
Fashion-MNIST | Continued fraction of straight lines | [] | Real-valued continued fraction of straight lines | 2024-12-16T00:00:00 | https://arxiv.org/abs/2412.16191v1 | [
"https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py"
] | {'Accuracy': '84.12', 'Trainable Parameters': '7870', 'NMI': '74.4'} | [
"Percentage error",
"Accuracy",
"Trainable Parameters",
"NMI",
"Power consumption"
] | Given the following paper and codebase:
Paper: Real-valued continued fraction of straight lines
Codebase: https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py
Improve the Continued fraction of straight lines model on the Fashion-MNIST dataset. The result
should improve on the following metrics: {'Accuracy': '84.12', 'Trainable Parameters': '7870', 'NMI': '74.4'}. You must use only the codebase provided.
| Real-valued continued fraction of straight lines

Vijay Prakash S, Alappuzha, Kerala, India. prakash.vijay.s@gmail.com

Abstract

In an unbounded plane, straight lines are used extensively for mathematical analysis. They are tools of convenience. However, those with high slope values become unbounded at a faster rate than the independent variable. So, in this work, straight lines are made bounded by introducing a positive parametric nonlinear term. The straight lines are transformed into bounded nonlinear curves that become unbounded at a much slower rate than the independent variable. This transforming equation can be expressed as a continued fraction of straight lines. The continued fraction is real-valued and converges to the solutions of the transforming equation. Following Euler's method, the continued fraction has been reduced to an infinite series. The usefulness of the bounding nature of the continued fraction is demonstrated by solving the problem of image classification. Parameters estimated on the Fashion-MNIST dataset of greyscale images using a continued fraction of regression lines have less variance, converge quickly, and are more accurate than the linear counterpart. Moreover, this multi-dimensional parametric estimation problem can be expressed on the xy-plane using the parameters of the continued fraction, and patterns emerge on planar plots.

1 Introduction

Nonlinearity is often expressed as various powers of the independent variable x on the xy-plane. Such expressions, especially those with high powers of x, make y sensitive to small changes in x and become unbounded. Perhaps the most convenient equation to plot on an xy-plane is y = x, which can be parameterized as y = mx. Any point on the xy-plane can be expressed with y = mx, which has the y-axis at the limit m → ±∞. However, even for m > 1, the dependent y-values become greater than the independent x-values and hence can become unbounded for large values of x. 
So, even a straight-line expression can lead to unbounded situations given m ∈ (−∞, ∞). To keep the dependent values bounded, we introduce a nonlinear term as follows:

(1 + a y²) y = m x.   (1)

In the above equation, a > 0 and the term a y² > 0. So, the y values are always less than the x values. Eqn. (1) is plotted in Fig. (1) for various a and m. It can be seen from Fig. (1) that y of Eqn. (1) is always bounded by the y = mx lines. In polar coordinates (r, θ), for x = r cos θ and y = r sin θ, Eqn. (1) becomes tan θ (1 + a r² sin² θ) = m. Choosing r = 1/√a, we get

tan θ (1 + sin² θ) = m.   (2)

Solving the above equation (Eqn. (2)) numerically, we get the polar plots shown in Fig. (2). The curves reach the origin in the asymptotic limit a → ∞. In the following section, we will discuss the real solution of Eqn. (1) and its properties. In Section 3, in order to demonstrate the bounding nature of Eqn. (1), we consider the problem of classifying greyscale images of the Fashion-MNIST dataset into 10 categories. Discussions on the results of classification are provided in Section 4. Finally, we conclude our work in Section 5.

[Figure 1: y for different a and m values. As a → ∞ we approach the x-axis.]
[Figure 2: Polar plots of Eqn. (1) with r = 1/√a. (a) a ∈ [1e−1, 1] and m ∈ [−200, 200]. (b) Reduced phase when compared to the linear case.]

2 Real solution and its properties

Eqn. (1) has two complex roots and one real root for all a > 0. The real solution is given by

y = −(1/3) t^(1/3) + (1/a) t^(−1/3),  where  t = −27mx/(2a) + √( (27mx/(2a))² + 27/a³ ).   (3)

The solution y is the sum of two components, i: −(1/3) t^(1/3) and ii: (1/a) t^(−1/3). The plots of these two components are shown in Fig. (3) for various values of a and m. 
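The closed form in Eqn. (3) can be sanity-checked numerically; a minimal sketch (the function name is ours, not from the paper's codebase — note that t is always positive, since the square root dominates the first term, so real cube roots suffice):

```python
import math

def real_root(a: float, m: float, x: float) -> float:
    """Real solution of (1 + a*y^2)*y = m*x via Eqn. (3)."""
    s = 27.0 * m * x / (2.0 * a)
    t = -s + math.sqrt(s * s + 27.0 / a ** 3)   # t > 0 for all inputs
    return -(t ** (1.0 / 3.0)) / 3.0 + (1.0 / a) * t ** (-1.0 / 3.0)

y = real_root(a=1.0, m=1.0, x=1.0)
print(y)                          # ~0.6823, the real root of y^3 + y = 1
print(abs(1.0 * y ** 3 + y - 1.0))  # residual of Eqn. (1), effectively 0
```

For a = m = x = 1 the root lies below the line y = mx = 1, illustrating the bounding property.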
The two components i and ii do not intersect on the xy-plane for various values of a and m. This is also shown in Fig. 4a. Since i and ii do not intersect, we can consider them as axes and obtain plots for various a and m values. This is shown in Fig. 4b. The origin of the (i, ii)-plane is at the limit a → ∞, as i → 0 and ii → 0.

[Figure 3: Plots of the individual components i: −(1/3) t^(1/3) and ii: (1/a) t^(−1/3) of y in Eqn. (3), for various a and m ∈ [−1, 1].]
[Figure 4: (a) Plots of i and ii for a = 0.1 and various m ∈ [−1, −0.6, −0.2, 0.2, 0.6, 1]. (b) With i and ii as axes, unique curves are obtained for each a ∈ [1e−3, 5e−3, 1e−2, 0.1, 1] and for various values of m.]

Properties:

1. As mentioned in the previous section, as a → ∞ the i + ii curves approach the x-axis on the xy-plane.

2. From Fig. (4b), with components i and ii as axes, it can be seen that we only have to determine a in order to obtain unique solutions on the (i, ii)-plane. So, in Section 3, we estimate the parameters of the problem based on the estimate of a. Once a converges, the other parameters converge.

3. Eqn. (1) can be written as y = mx / (1 + a y²), which can be expressed as a continued fraction

y = mx / (1 + a (mx / (1 + a (mx / (1 + ···))²))²).

Let us rewrite the above equation as

ŷ = 1 / (1 + â (1 / (1 + â (1 / (1 + ···))²))²),

where ŷ = y/(mx) and â = a m² x². Following Euler's method [1], we will find simple fractions that continuously approach ŷ. On the right-hand side, initially we have 1/(1 + â). We have the numerator A = 1 and the denominator A′ = 1 + â. 
Let us reduce the continued fraction into an infinite series [1], with the first fraction being A/A′.

[Figure 5: Continued fraction series (legend: C/C′, G/G′, 500 terms) converging to y = (1/a) t^(−1/3) − (1/3) t^(1/3), i.e., the y = i + ii values, for a = 1, m = 1. All the curves, including the truncated Eqn. (4), are bounded by the y = x line.]

Further we have

B/B′ = 1 / (1 + â (1/(1 + â))²) = (1 + â)² / ((1 + â)² + â) = A′² / (A′² + â),

C/C′ = 1 / (1 + â (1 / (1 + â (1/(1 + â))²))²) = B′² / (B′² + â B²),

D/D′ = C′² / (C′² + â C²),

and so on. The last of the above fractions is the closest to the value of ŷ. Now,

B/B′ − A/A′ = A′²/(A′² + â) − 1/A′ = (A′³ − (A′² + â)) / (A′(A′² + â))
⇒ B/B′ = 1/(1 + â) + (A′³ − (A′² + â)) / (A′(A′² + â)),

C/C′ − B/B′ = B′²/(B′² + âB²) − B/B′ = (B′³ − B(B′² + âB²)) / (B′(B′² + âB²))
⇒ C/C′ = 1/(1 + â) + (A′³ − (A′² + â)) / (A′(A′² + â)) + (B′³ − B(B′² + âB²)) / (B′(B′² + âB²)).

Similarly,

D/D′ = 1/(1 + â) + (A′³ − (A′² + â)) / (A′(A′² + â)) + (B′³ − B(B′² + âB²)) / (B′(B′² + âB²)) + (C′³ − C(C′² + âC²)) / (C′(C′² + âC²)),

and so on, each successive partial sum being closer to ŷ. Thus, we have reduced the continued fraction into an infinite series given by

ŷ = 1/(1 + â) + (A′³ − (A′² + â)) / (A′(A′² + â)) + (B′³ − B(B′² + âB²)) / (B′(B′² + âB²)) + (C′³ − C(C′² + âC²)) / (C′(C′² + âC²)) + ···

⇒ y/(mx) = 1/(1 + am²x²)
  + ((1 + am²x²)³ − ((1 + am²x²)² + am²x²)) / ((1 + am²x²)((1 + am²x²)² + am²x²))
  + (((1 + am²x²)² + am²x²)³ − (1 + am²x²)²(((1 + am²x²)² + am²x²)² + am²x²(1 + am²x²)⁴)) / (((1 + am²x²)² + am²x²)(((1 + am²x²)² + am²x²)² + am²x²(1 + am²x²)⁴))
  + ···   (4)

It can be seen from Fig. (5) that when boundedness is introduced to the y = mx lines through continued fractions, y converges to the i + ii curves.

4. When two y = i + ii curves intersect on the xy-plane, the curves have unique a and m values. Let us consider two intersecting curves. From Eqn. (1),

y = m1 x / (1 + a1 y²)  and  y = m2 x / (1 + a2 y²).

When the two curves intersect, we have

m1 x / (1 + a1 y²) = m2 x / (1 + a2 y²)
⇒ y² = (m2 − m1) / (m1 a2 − m2 a1).   (5)

When m1 = m2 we get y = 0, and when a1 = a2 we get y² to be a negative quantity, which is not possible in the xy-plane. Hence, when two i + ii curves intersect, they have unique a and m values. Also, from Fig. (4b), we obtain unique solutions for each a. 
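The truncated continued fraction of property 3 is just the repeated substitution y ← mx/(1 + a y²); a short sketch reproducing the convergence seen in Fig. (5) for a = m = x = 1 (names are ours):

```python
def truncated_cf(a: float, m: float, x: float, depth: int) -> float:
    """Evaluate y = mx / (1 + a*(mx/(1 + ...))^2) truncated at the
    given depth, starting from the innermost straight line y = m*x."""
    y = m * x
    for _ in range(depth):
        y = m * x / (1.0 + a * y * y)
    return y

# For a = m = x = 1 the approximants approach the real root of
# y^3 + y = 1 (~0.682328), and every level stays bounded by y = m*x.
for depth in (1, 5, 50, 500):
    print(depth, truncated_cf(1.0, 1.0, 1.0, depth))
```

The iteration is a contraction near the root here, so deeper truncations (e.g. the 500-term curve in Fig. 5) are indistinguishable from the closed-form solution.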
Hence, once we determine and fix a, the curves do not intersect except at y = 0. With a and m as the parameters of the real-valued continued fraction of straight lines, we will consider an image classification problem in the following section. This is because the problem has sufficient sample points to estimate a and m and to study their influence on the estimation of the linear parameters of regression (Chapter 3 of [2]). The classification is done based on the estimation of the parameters of regression using the gradient descent algorithm [3]. The additional parameters a and m play key roles in convergence and accuracy.

3 Image classification problem

In this section, we will solve an image classification problem using the continued fraction of regression lines and compare with the results obtained using regression lines. We will classify the 28x28 greyscale images of the Fashion-MNIST dataset [4] into 10 categories or classes by estimating the parameters that are used to find class probabilities. This dataset is more challenging in terms of achieving high classification accuracy [4]. We use the gradient-descent method [3] to estimate the parameters, including a and m, for each category.

Parametric representation: One of the 28x28 greyscale images is shown in Fig. 6, which belongs to one of the 10 categories present in the dataset. The dataset contains 60,000 such training samples and 10,000 test samples. Each pixel value is represented as a component of the input vector x. 
The representation is a linear combination:

w0 + Σ_{i=1}^{784} wi xi = wᵀx   (6)

[Figure 6: (a) One of the sample images of the Fashion-MNIST dataset, (b) a section of the sample image with its pixel values shown, and (c) the normalized values of (b).]

where the wi's are the parameters of regression, with x0 = 1 included in x for simpler notation, so that x has 28 × 28 = 784 pixel components plus x0, i.e., 784 + 1 components. The continued fraction of wᵀx converges to the real solution of Eqn. (1), which is now given by

a y³ + y = m (wᵀx)   (7)

⇒ y(wᵀx) = −(1/3) t^(1/3) + (1/a) t^(−1/3),  where now  t = −27m(wᵀx)/(2a) + √( (27m(wᵀx)/(2a))² + 27/a³ ).   (8)

y is then used to obtain class probabilities through the logistic or sigmoid function, so that we get outputs between 0 and 1. The sigmoid function is given by

σ(y) = 1 / (1 + exp(−y)),

which is compared with the logistic regression function

σ(wᵀx) = 1 / (1 + exp(−wᵀx)).

σ(wᵀx) is a special case of σ(y) with m = 1 and a at its asymptotic limit a → 0. The parameters are estimated using the provided outputs from the 60,000 training samples. The outputs are expressed as one-hot encoded vectors; for example, if the jth sample belongs to the 3rd category, then the output is encoded as p*_j = [0,0,0,1,0,0,0,0,0,0]. Therefore, p*_j is the optimal desired output of σ. 
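The forward pass of this section — linear score wᵀx, bounded response y from Eqn. (8), then a sigmoid — can be sketched as follows; the shapes follow the paper's 10-category, 784+1-input setup, but the helper names are ours, not from the provided codebase:

```python
import numpy as np

def real_root(a, z):
    """Real solution of a*y^3 + y = z (Eqn. (8) with z = m * w^T x)."""
    s = 27.0 * z / (2.0 * a)
    t = -s + np.sqrt(s * s + 27.0 / a ** 3)   # t > 0 elementwise
    return -np.cbrt(t) / 3.0 + (1.0 / a) * t ** (-1.0 / 3.0)

def forward(W, m, a, x):
    """Class probabilities sigma(y) for one normalized image x (785,)."""
    z = W @ x                   # (10,) linear scores; w0 enters via x[0] = 1
    y = real_root(a, m * z)     # bounded response per category
    return 1.0 / (1.0 + np.exp(-y))

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 785))            # 10 categories, 784 pixels + bias
x = np.concatenate(([1.0], rng.random(784)))  # x0 = 1, pixels normalized to [0, 1]
p = forward(W, m=np.ones(10), a=np.ones(10), x=x)
print(p.shape, float(p.min()), float(p.max()))
```

Setting m = 1 and letting a → 0 recovers plain logistic regression σ(wᵀx), as noted above.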
Using the training samples and their output vectors p*_j, the parameters are chosen to maximize the joint probability of the corresponding category in σ. The parametric estimation is measured with the following loss function

L(w0, w1, ..., w784, a, m) = −(1/n) Σ_j p*_j log σ_j,   (9)

which is the negative log-likelihood function over n samples. The σ(y) form and the σ(wᵀx) form of the loss function are used to estimate the wi's of y(wᵀx) and the wi's of wᵀx, respectively. The parameters are estimated using the gradient descent optimization algorithm. In order to minimize Eqn. (9), starting from initial values, the parameters a, m, and the wi's of Eqn. (6) are updated in the direction opposite to the gradient of the loss function:

wi = wi^(prev) − ∂L/∂wi,  m = m^(prev) − ∂L/∂m,  and  a = a^(prev) − α ∂L/∂a,   (10)

where wi^(prev), m^(prev) and a^(prev) are the previous values of the respective parameters and α is the step size. ∂L/∂wi, ∂L/∂m and ∂L/∂a are given by

∂L/∂wi = (σ − p*) ∂y/∂wi,  ∂L/∂m = (σ − p*) ∂y/∂m,  and  ∂L/∂a = (σ − p*) ∂y/∂a,

respectively. The derivatives with respect to y are obtained from Eqn. (7) as follows:

∂y/∂wi = m xi / (1 + 3ay²),  ∂y/∂m = wᵀx / (1 + 3ay²),  and  ∂y/∂a = −y³ / (1 + 3ay²).   (11)

Note that usually a step size is introduced for all the parameters estimated using the gradient descent algorithm [3]. But here we do not need a step size for m and the wi's, since the stepping is adaptive and bounded by the positive term 3ay² in Eqn. (11). The step size α is introduced so that the constraint a > 0 is satisfied while stepping.

Implementation: The following are the steps taken to estimate the parameters:

1. Pixel values in greyscale images vary between 0 and 255. Dividing by 255, the pixel values are normalized so that x ∈ [0, 1], as shown in Fig. (6).

2. The categories are represented as one-hot encoded vectors p*_j for the jth training sample.

3. The parameters w0, w1, ..., w784 are initialized as w_i^(0), a 10 × (784 + 1) random matrix representing the 784 pixel weights and the w0 for each of the 10 categories. 
m is initialized as a 10 × 1 null vector with component m^(0) = 0 for all categories.

4. a is initialized as a^(0) = 1 or a^(0) = 5 for all categories. So a is a 10 × 1 vector.

5. The 60,000 samples are divided into 100 batches of 600 samples, and the gradient-descent-based update of the parameters is carried out for each batch [3]. This way of updating is known as mini-batch gradient descent (Sec. 2.3 of [3]). Therefore, we have n = 600 in Eqn. (9).

6. At each batch of n samples:

(a) We compute y and update a as

a = a^(prev) − (α/n) Σ_{j=1}^{n=600} (σ_j − p*_j) (−y_j³ / (1 + 3a y_j²)).   (12)

(b) If a < 0 for any category, then we step back to a = a^(prev), reduce the step size as α = α/1.1, and reevaluate Eqn. (12). This is repeated until a > 0 for all categories.

(c) If a > 0 for all categories, then we recompute y using the updated a. a and y are then used to update m using Eqn. (10) as

m = m^(prev) − (1/n) Σ_{j=1}^{n=600} (σ_j − p*_j) ((wᵀx)_j / (1 + 3a y_j²)).   (13)

(d) We again recompute y using the updated a and m. Now, using this y and the updated a and m, we update the wi's using Eqn. (10) as

wi = wi^(prev) − (1/n) Σ_{j=1}^{n=600} (σ_j − p*_j) (m x_ij / (1 + 3a y_j²)).   (14)

(e) Substituting m = 1 and a = 0 in the above equation, we get the updates for the wi's of σ(wᵀx), which is the linear case.

7. The above steps are repeated for the 100 batches, completing one iteration over the entire 60,000 training samples.

8. After every iteration, the estimated parameters are used to classify the test image samples x_test as σ(y(wᵀx_test)) (or σ(wᵀx_test)), which are compared with the test outputs p*_test, and the classification accuracy is evaluated as the percentage of correct classifications.

9. After 50 iterations, the σ(y) form of the accuracy and loss values are plotted and compared with the results obtained using σ(wᵀx). These are shown in Figs. (7) and (8) for the initial conditions a^(0) = 1 and a^(0) = 5, respectively. The convergence of the a and m values is also provided in Figs. (7) and (8). Their final values are provided in Table 1. The estimated wi's of both σ(y) and σ(wᵀx) are provided in Fig. 
(9) for both initial conditions a^(0) = 1 and a^(0) = 5. The results are discussed in the following section.

[Figure 7: Estimation of the continued-fraction parameters a and m for each category (0th to 9th), and the accuracy and loss for both forms of σ. Initial conditions: a^(0) = 1 and m^(0) = 0; the w_i^(0) are random values from a standard normal distribution. The final values a^(50) and m^(50) are shown in Table 1.]

4 Discussions

4.1 Boundedness

In Figs. (1) and (2), the y values on the xy-plane and the θ values on the (r, θ)-plane are bounded by the linear version y = mx or tan θ = m, respectively. This can also be seen from Fig. (5).

[Figure 8: Estimation of the continued-fraction parameters a and m for each category (0th to 9th), and the accuracy and loss for both forms of σ. Initial conditions: a^(0) = 5 and m^(0) = 0; the w_i^(0) are random values from a standard normal distribution. The final values a^(50) and m^(50) are shown in Table 1.]

Even when the terms of the truncated continued fraction 
(such as C/C′ and G/G′) diverge from the i + ii curves, they are all bounded by y = x in Fig. (5). The bounding nature can also be seen in Fig. (9). The variance of the wi's is considerably decreased due to the bounding nature of the continued fraction of straight lines. Yet the variations in the wi's are captured, and the classification accuracy has been enhanced. In Figs. (9a), (9d) and (9g), the values of the wi's vary within the intervals (−700, 700), (−650, 400) and (−1000, 1500), respectively. These are the wi's of σ(wᵀx). The corresponding wi's of σ(y) in Fig. (9) vary in the range (−2, 2) for both a^(0) = 1 and a^(0) = 5. High values of the wi's make the model more sensitive to noise and also less reliable due to overfitting of the training data. This is also the reason for more fluctuations, across iterations, in the loss and accuracy using σ(wᵀx) compared to those using σ(y), as shown in Fig. (8c), (8d), (9c) and (9d).

4.2 Convergence

From Figs. (7a), (7b) and Figs. (8a), (8b), it can be seen that a and m converge to a constant value for each category. Once a and m reach almost constant values, the plots of loss and accuracy become smooth. For example, for a^(0) = 5, the plots of Fig. 8 become smoother after 40 iterations, after most of the a and m values converge. The plots of accuracy and loss of Fig. 7 for a^(0) = 1 are much smoother than those of a^(0) = 5. For a^(0) = 1, a and m converge sooner than with a^(0) = 5. Also, the a and m values converge simultaneously towards constant values in Figs. (7a), (7b), (8a) and (8b).

[Figure 9: wi's for the 0th, 1st and 2nd categories of both σ(wᵀx) and σ(y), with a^(0) = 1 and a^(0) = 5. Boundedness (ranges of the w_i^(50), i.e., the wi's after 50 iterations): (a) 0th class, σ(wᵀx): (−700, 700); (b) 0th class, σ(y), a^(0) = 1, m^(50) = 0.71364981: (−1.1, 1); (c) 0th class, σ(y), a^(0) = 5, m^(50) = 1.66316775: (−1.5, 1); (d) 1st class, σ(wᵀx): (−650, 400); (e) 1st class, σ(y), a^(0) = 1, m^(50) = 3.28946245: (−2.2, 1); (f) 1st class, σ(y), a^(0) = 5, m^(50) = −5.76584828: (−1.2, 2); (g) 2nd class, σ(wᵀx): (−1100, 1500); (h) 2nd class, σ(y), a^(0) = 1, m^(50) = 0.55567706: (−1.5, 2); (i) 2nd class, σ(y), a^(0) = 5, m^(50) = −1.16463021: (−2, 1.5). (a), (d) and (g) have more variance. The dark dots are the w_i^(50); lighter shaded regions are wi values from early iterations and darker regions are from later iterations.]

In Figs. (7a) and (7b), for the 6th category, a and m converge at the 29th iteration, and for the 9th category, a and m converge at the 12th iteration. Similarly, in Figs. (8a) and (8b), for the 2nd category (shown as '×'), a and m converge at the 45th iteration, while for the 4th, a and m converge at the 40th iteration. The a values for the 1st category (shown as '+') and the 9th fluctuate mildly in Fig. (8a); the corresponding m values in Fig. (8b) vary slowly. Smooth convergence of the estimated wi's of σ(y) can also be seen in Fig. (9). In Fig. (9), regions of lighter greyscale for each wi represent values of wi from the earlier iterations, the darker areas are from later iterations, and the dark dot represents the final value of a wi after 50 iterations. Values from the later stages of the iterations occupy the darker region. The wi's of Figs. (9b), (9c), (9e), (9f), (9h) and (9i) converge almost monotonically towards their final values, i.e., the dark dot for each wi sits at the edge of the shaded region as a final value, whereas those in Figs. (9a), (9d) and (9g) are more oscillatory, i.e., the dark dot moves back into the grey-shaded region. Therefore, other than boundedness, a smoother convergence is also achieved using y.

4.3 xy-plane representation

In Figs. (9e) and (9f), the variations of the wi's seem inverted for a^(0) = 1 and a^(0) = 5. 
This is because the final m values are of opposite sign for a^(0) = 1 and a^(0) = 5, given by m^(50) = 3.28946245 and −5.76584828, respectively, for the 1st category. This is also the case for the wi's in Figs. (9h) and (9i) for the 2nd category. Thus, m acts as a common scaling for all the wi's of each category. With the values of a and m obtained after 50 iterations, we can plot y (or the i + ii curves) for each category on the xy-plane. Although this is a multi-dimensional problem, xy-plots are possible because a and m converge to unique values for each category. The plots are shown in Fig. (10) for various initial conditions. Plots on the (i, ii)-plane (as in Fig. (4b)) are also shown in Fig. (10d). The plots in Fig. (10d) help us observe the categorical models that are more sensitive to changes in the initial conditions. Thus, plots on the (i, ii)-plane help us identify sensitive patterns within the classification model.

[Figure 10: For various initial conditions, the final values of a and m are shown as coordinates (a^(50), m^(50)) for each category: (a) a^(0) = 1, m^(0) = 0, wi's randomly initialized; (b) a^(0) = 5, m^(0) = 0, wi's randomly initialized; (c) a^(0) = 1, m^(0) = 0, wi's randomly initialized (different from above); (d) plots of (a) and (c) on the (i, ii)-plane with a^(0) = 1, x ∈ [−3, 3].]
Table 1: Final values of a and m after 50 iterations for a^(0) = 1 and a^(0) = 5.
Category: a^(50), m^(50) (initial condition a^(0) = 1, m^(0) = 0); a^(50), m^(50) (initial condition a^(0) = 5, m^(0) = 0)
0: 0.00933464, 0.71364981; 0.10750743, 1.66316775
1: 0.03415196, 3.28946245; 0.08268326, −5.76584828
2: 0.00873391, 0.55567706; 0.05923945, −1.16463021
3: 0.01116299, 0.89931317; 0.23828527, 2.89171539
4: 0.00546778, −0.60251737; 0.08513255, 1.01813209
5: 0.10757327, −3.93066244; 0.21727861, 4.67005625
6: 0.01494351, 0.5553683; 0.26259974, −1.68538874
7: 0.02986627, −1.74514672; 0.18564456, 4.07392969
8: 0.03357115, −2.04798275; 0.22733055, 4.98591511
9: 0.00750415, 1.61222652; 0.14469838, −3.8574718

5 Conclusions

The real-valued continued fraction of straight lines, in this work, converges to the real solution of a cubic equation in y. The nonlinear and input parts of the equation are parameterized with the parameters a and m, respectively. Parameters estimated using the real-valued continued fraction of the linear scale have less variance than those estimated with the linear scale. Moreover, with the two parameters a and m, the convergence of the parameters of regression is almost monotonic, and the step size is adaptive and bounded. Thus, the usefulness of the bounded behavior of the real-valued continued fraction has been demonstrated with the image classification problem. We have also represented a multi-dimensional problem on planar plots: the xy-plane or the (i, ii)-plane, using a and m. To conclude, a and m form a bounded nonlinear coordinate system with straight lines at its asymptotic limit a → 0. Each coordinate (a, m) is a curve on the xy-plane. Once a is fixed, the xy-plane can be parametrized with m in a bounded manner.

References

[1] Euler, L. (2004). On the Transformation of Infinite Series to Continued Fractions (D. W. File, Trans.). Reading Classics: Euler. https://people.math.osu.edu/sinnott.1/ReadingClassics/continuedfractions.pdf (Original work published 1785).

[2] Abu-Mostafa, Y.S., Magdon-Ismail, M., and Lin, H.T. (2012). 
Learning from Data (Vol. 4, p. 4). New York: AMLBook.

[3] Ruder, S. (2017). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.

[4] Xiao, H., Rasul, K., Vollgraf, R. (2017). Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. | 4 | 1 | The model is trained on the Fashion-MNIST dataset, which consists of 60,000 training images and 10,000 testing images, each of size 28x28 pixels (784 input features). The training procedure described in the paper involves mini-batch gradient descent with 100 batches of 600 samples each, for 50 iterations (or epochs). Thus, the total number of batch updates is 50 * 100 = 5,000. Given the complexity of the model using the continued fraction technique, which likely adds some computational overhead, I estimate around 4 hours of training time on a single GPU of reasonable specification (e.g., NVIDIA GTX 1080 or equivalent), as similar classification models trained with gradient descent can typically be completed in this range on moderate hardware. This model leverages a more efficient representation of the parameters and benefits from bounding behavior, which would aid convergence. | yes | Yes | CV | Real-valued continued fraction of straight lines | 2024-12-16T00:00:00.000Z | [https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py] | 1 | https://github.com/zalandoresearch/fashion-mnist | 20 min | https://colab.research.google.com/drive/1LNMCRLMIWN5U_9WDeRxYmcbnAgaNadSd?usp=sharing | Yes | Yes. Everything is running successfully |
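Steps 6(a)-(d) of the paper's implementation (update a with backtracking on α so that a stays positive, then m, then the wi's) can be sketched as below; this is our paraphrase of the procedure with our own names, not code from the linked repository, and the closed-form root solver of Eqn. (8) is assumed supplied as `solve_y(a, z)`:

```python
import numpy as np

def batch_update(W, m, a, alpha, X, P, solve_y):
    """One mini-batch update following steps 6(a)-(d).

    X: (n, 785) inputs with x0 = 1; P: (n, 10) one-hot targets;
    solve_y(a, z): real root of a*y^3 + y = z, applied elementwise.
    """
    n = X.shape[0]
    Z = X @ W.T                                   # (n, 10) scores w^T x
    sig = lambda y: 1.0 / (1.0 + np.exp(-y))

    # (a) update a; (b) backtrack on alpha so every category keeps a > 0
    Y = solve_y(a, m * Z)
    grad_a = np.mean((sig(Y) - P) * (-Y**3) / (1.0 + 3.0 * a * Y**2), axis=0)
    a_new = a - alpha * grad_a
    while np.any(a_new <= 0.0):
        alpha /= 1.1
        a_new = a - alpha * grad_a
    a = a_new

    # (c) recompute y with the updated a, then update m (no step size needed)
    Y = solve_y(a, m * Z)
    m = m - np.mean((sig(Y) - P) * Z / (1.0 + 3.0 * a * Y**2), axis=0)

    # (d) recompute y once more, then update the weights w_i
    Y = solve_y(a, m * Z)
    G = (sig(Y) - P) * m / (1.0 + 3.0 * a * Y**2)  # (n, 10)
    W = W - (G.T @ X) / n
    return W, m, a, alpha
```

Running this over 100 batches of 600 samples constitutes one iteration (step 7); 50 iterations reproduce the schedule assumed in the compute estimate above.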
Traffic | GLinear | [] | Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction | 2025-01-02T00:00:00 | https://arxiv.org/abs/2501.01087v3 | [
"https://github.com/t-rizvi/GLinear"
] | {'MSE ': '0.3222'} | [
"MSE "
] | Given the following paper and codebase:
Paper: Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction
Codebase: https://github.com/t-rizvi/GLinear
Improve the GLinear model on the Traffic dataset. The result
should improve on the following metrics: {'MSE': '0.3222'}. You must use only the codebase provided.
| IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE JOURNAL, 2025
Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction
Syed Tahir Hussain Rizvi1, Neel Kanwal1, Muddasar Naeem2, Alfredo Cuzzocrea3∗ and Antonio Coronato2
1Department of Electrical Engineering and Computer Science, University of Stavanger, Norway
2Research Center on ICT Technologies for Healthcare and Wellbeing, Università Telematica Giustino Fortunato, 82100 Benevento, Italy
3University of Calabria, Rende, Italy
* Corresponding author: alfredo.cuzzocrea@unical.it
Abstract—Time Series Forecasting (TSF) is an important application across many fields. There is a debate about whether Transformers, despite being good at understanding long sequences, struggle with preserving temporal relationships in time series data. Recent research suggests that simpler linear models might outperform or at least provide competitive performance compared to complex Transformer-based models for TSF tasks. In this paper, we propose a novel data-efficient architecture, GLinear, for multivariate TSF that exploits periodic patterns to provide better accuracy. It also provides better prediction accuracy by using a smaller amount of historical data compared to other state-of-the-art linear predictors. Four different datasets (ETTh1, Electricity, Traffic, and Weather) are used to evaluate the performance of the proposed predictor. A performance comparison with state-of-the-art linear architectures (such as NLinear, DLinear, and RLinear) and a transformer-based time series predictor (Autoformer) shows that GLinear, despite being parametrically efficient, significantly outperforms the existing architectures in most cases of multivariate TSF. We hope that the proposed GLinear opens new fronts of research and development of simpler and more sophisticated architectures for data- and computationally efficient time-series analysis.
The source code is publicly available on GitHub.
Index Terms—Multivariate, Time Series Forecasting, Predictors, Transformers, Linear Predictors, ETTh.
I. INTRODUCTION
Accurate forecasting has become increasingly valuable in today’s data-driven world, where computational intelligence is key to automated decision-making [10], [18]. Time Series Forecasting (TSF) tasks have various applications that impact diverse fields such as finance, healthcare, supply chain management, and climate science [12]. The ability to predict future trends based on historical data not only enhances decision-making processes, but also drives innovation and operational efficiency. Forecasting is typically categorized into short-term, medium-term, and long-term predictions, each serving distinct purposes and employing tailored methodologies [17]. Short-term forecasting focuses on immediate needs; medium-term forecasting assists in strategic planning, while long-term forecasting aids in vision-setting and resource allocation [13]. Historically, traditional statistical methods like exponential smoothing [13] and ARIMA [15] have dominated the forecasting arena for short-term TSF. Traditional methods capture trend and seasonality components in the data and perform well due to their simplicity and responsiveness to recent data [16], [18]. However, they struggle with medium- and long-term predictions, and this decline can be attributed to assumptions of linearity and stationarity, often leading to overfitting and limited adaptability [19], [20]. The emergence of advanced and hybrid methods that use Deep Learning (DL) has revolutionized predictive modeling by enabling the extraction of complex patterns within the data [34]. Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), and Transformer-based architectures have shown promising results in medium- and long-term forecasting without rigid assumptions [1], [21], [22].
Despite their sophistication, recent studies indicate that linear models can capture periodic patterns and provide competitive performance in certain contexts along with computational efficiency [1], [2]. Not all time-series data are suitable for precise predictions, particularly when it comes to long-term forecasting, which becomes especially difficult in chaotic systems [2]. Long-term forecasting has been shown to be most feasible for time series data when the data exhibits clear trends and periodic patterns [1], [2]. This brings up a question: How can we effectively integrate the simplicity of linear models with sophisticated techniques for capturing complex underlying patterns to further enhance medium- and long-term TSF? Among popular linear models, NLinear [1] often struggles with non-linear relationships in data, leading to suboptimal performance in complex forecasting scenarios. On the other hand, DLinear [1] is computationally intensive and may require large amounts of training data, which can hinder real-time application and scalability. Although RLinear [2] models are capable of capturing trends and seasonality, they often fall short in their ability to generalize across varying datasets. Inspired by long-term TSF linear models (NLinear, DLinear, and RLinear), we propose a novel data-efficient architecture, GLinear. GLinear is a simple model that does not have any complex components, functions, or blocks (like self-attention schemes, positional encoding blocks, etc.) like the previously mentioned linear models. It is capable of enhanced forecasting performance while maintaining simplicity. Furthermore, GLinear focuses on data efficiency and demonstrates the potential to perform robust forecasting without relying on extensive historical datasets, which is a common limitation of other TSF models.
Our contributions in this paper are outlined as follows:
• We propose a novel architecture, GLinear, that can deliver highly accurate TSF by leveraging periodic patterns, making it a promising solution for diverse forecasting applications.
• We rigorously validate the GLinear architecture through empirical experiments comparing its performance against state-of-the-art methods.
• We explore GLinear’s applicability across diverse sectors, assessing its impact on forecasting performance with varying data input lengths and prediction horizons.
The remainder of the paper is organized as follows: Section II presents related work on TSF using state-of-the-art transformer-based models and linear predictors along with their limitations. Section III presents the architecture of different linear models to provide a better understanding of the proposed method. Section IV presents the proposed GLinear model. Section V presents the details of the experimental setup, including the used datasets, implementation details, and evaluation metrics. Different experiments and their results are presented in Section VI. Finally, Section VII concludes our research with takeaways and future directions.
II. RELATED WORK
TSF has become increasingly crucial due to its wide range of real-world applications. Consequently, diverse methodologies have been developed to improve prediction accuracy and robustness. Recent research has explored both simpler (traditional) and complex (DL-based) approaches. One prominent direction leverages the power of multi-head attention mechanisms within Transformer architectures to capture intricate temporal dependencies [23]–[25]. In contrast, other studies [1], [2], [30] have demonstrated the effectiveness of simpler, computationally efficient models, such as single-layer linear models, for certain forecasting tasks.
A.
State-of-the-Art Transformers
Transformer architectures have demonstrated remarkable potential in time series forecasting by effectively capturing long-range dependencies, a critical aspect often overlooked by traditional methods. However, adapting Transformers for time series requires addressing inherent challenges such as computational complexity and the lack of inherent inductive biases for sequential data [23], [27]. This efficiency bottleneck is addressed by the Informer [24] model with the introduction of ProbSparse attention, which reduces complexity from O(L^2) to O(L log L) and enables efficient processing of long sequences [24]. They also employed a generative decoder, predicting long sequences in a single forward pass. Another version of the transformer model, Autoformer [25], was proposed to tackle the same complexity issue by replacing the dot-product attention with an auto-correlation mechanism to efficiently extract dominant periods in the time series. Their approach proved particularly effective for long-term forecasting on datasets like ETT and Electricity Transformer [25]. Furthermore, Wu et al. [26] incorporated time-specific inductive biases in their approach. Their proposed model, TimesNet, was introduced to treat time series as images and leverage 2D convolution operations across multiple time scales to capture intra- and inter-variable relationships, achieving state-of-the-art results on various long-term forecasting benchmarks [26]. The flexibility of Transformer-based models has facilitated their application across diverse forecasting horizons and domains. For short- and medium-term forecasting, adaptations focusing on computational efficiency and local pattern extraction have been explored. For instance, FEDformer [27] proposed frequency-enhanced attention and a mixture of expert decoders to capture both global and local patterns efficiently.
This approach has shown promising results in short-term load forecasting and other applications where capturing high-frequency components is crucial. For long-term forecasting, the ability of Transformers to model long-range dependencies becomes paramount. TimesNet has demonstrated remarkable performance in this domain [26]. Furthermore, some recent studies [28], [29] have incorporated external factors and contextual information into Transformer models, such as integrating weather data or economic indicators to improve forecasting accuracy in domains like energy consumption and financial markets. Additionally, probabilistic forecasting using Transformers is gaining traction, providing not only point predictions but also uncertainty quantification, which is essential for risk management in various applications [28], [29].
B. State-of-the-Art Linear Predictors
While Transformer architectures have demonstrated remarkable success, their substantial computational demands and memory footprint pose challenges for deployment in resource-constrained environments, such as edge devices [14]. This computational burden has motivated a resurgence of interest in simpler, more efficient models, particularly linear predictors, which offer a compelling balance between forecasting accuracy and computational cost [1], [2]. Research in this area can be broadly categorized into two main directions: enhancements to traditional linear models through advanced techniques and the development of novel, specialized linear architectures designed explicitly for time series data [30], [32]. One prominent research direction focuses on enhancing classical linear methods to better capture complex temporal dynamics. Traditional methods like Autoregressive (AR) [35] models and their variants, while computationally efficient, often struggle with non-linear patterns and long-range dependencies. Zheng et al.
[33] introduced a dynamic regression technique that allows the model coefficients to vary over time using adaptive filtering. This approach dynamically adjusts model parameters based on incoming data, improving the adaptability of these models to changing time series characteristics [33]. Other techniques utilizing Kalman filtering within a linear framework have demonstrated effectiveness in tracking evolving trends and seasonality [36]. Furthermore, it has been shown that the use of sparse linear models, such as LASSO regression, which select only the most relevant past observations for prediction, enhances both efficiency and interpretability [31]. These advancements aim to maximize the performance of established linear frameworks by integrating sophisticated techniques to mitigate their inherent limitations. Another research direction involves developing novel linear architectures specifically tailored for time series data. This includes models like DLinear, which decomposes the time series into trend and seasonal components and models them with simple linear layers, achieving surprisingly strong performance on long-term forecasting tasks [1]. Similarly, NLinear proposes a simple neural network with a single linear layer for forecasting, demonstrating competitive results while drastically reducing computational complexity [1]. Although linear predictors like NLinear, DLinear, and RLinear have achieved competitive accuracy with drastically reduced computational overhead, these models still exhibit some limitations, such as struggling to capture complex non-linear patterns or failing to effectively model specific time series characteristics, such as strong seasonality or abrupt changes in trend. Furthermore, these models also rely heavily on extensive historical data to achieve high prediction accuracy.
This dependency can limit their effectiveness in scenarios with limited data availability, such as newly established systems or rapidly changing environments. These issues motivate the development of novel architectures and improvements in linear models, which aim to address these shortcomings. GLinear achieves superior results by integrating two components: (1) a non-linear Gaussian Error Linear Unit (GeLU)-based transformation layer to capture intricate patterns, and (2) Reversible Instance Normalization (RevIN) to standardize data distributions across instances, ensuring consistent performance and adaptability across diverse datasets. This approach provides a more comprehensive and efficient solution for TSF.
III. ARCHITECTURE OF DIFFERENT LINEAR MODELS
Different state-of-the-art linear predictors are explained in this section to contrast the enhancements of the proposed GLinear [1]. The input of a time series predictor is the L timesteps of past input samples (also referred to as the lookup window or input sequence length). A predictor uses this input to predict T timesteps of future values (also referred to as the output horizon or prediction length). The details of different linear predictors are given below:
Fig. 1: Architecture of LTSF-Linear.
A. Linear Predictor (LTSF-Linear)
The first model is the LTSF-Linear predictor [1], where LTSF stands for long-term TSF. The architecture of the linear predictor can be visualized from Figure 1. This predictor is composed of a single fully connected linear layer, or dense layer. The LTSF-Linear predictor does not model any spatial correlations.
A single temporal linear layer directly regresses historical time series for future prediction via a weighted sum operation as follows:
T = WL + b   (1)
where L refers to past input samples depending on the defined input sequence length (can also be denoted as X), T refers to predicted future samples depending on the required prediction length (can also be denoted as X̂), W is the weight matrix, W ∈ R^(T×L), and b is an additive bias.
B. NLinear Predictor (Normalization-based Linear Model)
To boost the performance of LTSF-Linear, NLinear [1] performs normalization to tackle the distribution shift in the dataset. NLinear first subtracts the input by the last value of the sequence, as shown in Figure 2 (a). Then, the input goes through a linear layer, and the subtracted part is added back before making the final prediction. The subtraction and addition in NLinear are a simple normalization for the input sequence.
C. DLinear Predictor (Decomposition-based Linear Model)
DLinear [1] is a combination of a decomposition scheme used in Autoformer and FEDformer with linear layers. It first decomposes the raw data input into a trend component by a moving average kernel and a remainder (seasonal) component, as shown in Figure 2 (b). Then, two one-layer linear layers are applied to each component, and the two features are summed up to get the final prediction. By explicitly handling trend, DLinear enhances the performance of a vanilla linear model when there is a clear trend in the data.
D. RLinear Predictor (Reversible normalization-based Linear Model)
RLinear [2] combines a linear projection layer with RevIN to achieve competitive performance, as shown in Figure 3. The study reveals that RevIN enhances the model’s ability to handle distribution shifts and normalize input data effectively, leading to improved results even with a simpler architecture.
IV. METHODOLOGY
Existing linear models use different simple operations like normalization and decomposition.
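Eq. (1) and the NLinear trick described above can be sketched in a few lines. This is a minimal illustration with toy dimensions and random, untrained weights (the paper's experiments use e.g. L = 336, T up to 720); it is not the repository's implementation:

```python
import random

random.seed(0)
L, T = 8, 3   # toy lookup window and prediction length

# Eq. (1): a single temporal linear layer, T = WL + b, with W of shape (T, L).
W = [[random.uniform(-0.1, 0.1) for _ in range(L)] for _ in range(T)]
b = [0.0] * T

def linear(x):
    """LTSF-Linear: a plain weighted sum of the past L samples."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def nlinear(x):
    """NLinear: subtract the last value, apply the linear layer, add it back."""
    last = x[-1]
    y = linear([x_i - last for x_i in x])
    return [y_j + last for y_j in y]

x = [float(i) for i in range(L)]
print(len(linear(x)), len(nlinear(x)))  # 3 3
```

Note that for a constant input sequence NLinear's normalization makes the linear layer see all zeros, so the forecast is exactly the constant level, which is the point of the subtract/add normalization against distribution shift.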
It is worth noting that with the involvement of simpler mathematical operations, it is possible to build powerful linear predictors with some variations of activation functions to extract meaningful results for the required task [3]. Keeping these in mind, a new Gaussian-based linear predictor, GLinear, is proposed.
Fig. 2: (a) The architecture of the NLinear predictor, (b) the architecture of the DLinear predictor. The illustration of the architectures is created based on the description in the original paper [1].
Fig. 3: The schematic of the architecture of RLinear. The illustration of the architecture is created based on the description in the original paper [2].
Figure 4 shows the architecture of the GLinear predictor, which is composed of two fully-connected layers of the same input size having a GeLU nonlinearity in-between them. Different configurations of input and layer sizes were tested to lead to this final architecture. GeLU [5] is a non-linear activation function defined as follows:
GELU(x) = x · Φ(x)   (2)
where Φ(x) refers to the cumulative distribution function (defined in (3)) of the standard normal distribution, expressed via the error function (erf):
Φ(x) = (1/2) [1 + erf(x/√2)]   (3)
Furthermore, RevIN is applied to the input and output layers of the GLinear model [4]. This normalization layer transforms the original data distribution into a mean-centred distribution, where the distribution discrepancy between different instances is reduced. This normalized data is then applied as new input to the used predictor, and then the final output is denormalized at the last step to provide the final prediction. RevIN can be used with any existing model. It does not add any significant overhead in training time due to the low computational complexity of this normalization scheme.
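Eqs. (2) and (3) can be checked directly with the standard library's error function; a minimal sketch:

```python
import math

def phi(x):
    """Standard normal CDF, Eq. (3): Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu(x):
    """Exact GeLU, Eq. (2): GELU(x) = x * Phi(x)."""
    return x * phi(x)

print(gelu(0.0))             # 0.0
print(round(gelu(1.0), 4))   # 0.8413  (since Phi(1) ~ 0.8413)
print(round(gelu(-1.0), 4))  # -0.1587
```

Unlike ReLU, the negative branch is not clipped to zero but smoothly attenuated, which is what lets the GeLU layer between the two linear layers capture non-linear patterns.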
Time-series data usually suffer from a distribution shift problem due to changes in their mean and variance over time, which can degrade the performance of time series predictors. RevIN utilizes a learnable affine transformation to remove and restore the statistical information of a time-series instance, which can be helpful for handling datasets with a distribution shift problem. Some features of the GLinear model are:
• It is a simpler model; it is not made up of any complex components, functions, or blocks (like self-attention schemes, positional encoding blocks, etc.). It integrates two components: (1) a non-linear GeLU-based transformation layer to capture intricate patterns, and (2) Reversible Instance Normalization (RevIN).
• Due to its simple architecture, training of this model is very fast as compared to other transformer-based predictors.
• This proposed model provides comparable performance to other state-of-the-art predictors.
Fig. 4: Architecture of the proposed GLinear predictor.
V. EXPERIMENTAL SETUP
In this section, we present the details of the used datasets, experimental setup, and evaluation metrics.
A. Dataset
We conducted experiments on four different real-world datasets (ETTh1, Electricity, Weather, and Traffic). Table I provides a brief overview of these datasets.
1) ETTh1: The ETTh1 dataset is used for long-sequence TSF in electric power. It includes two years of data from two Chinese counties, focusing on Electricity Transformer Temperature (ETT) [6]. It’s designed for detailed exploration of forecasting problems. This dataset is crucial for analyzing transformer temperatures and power load features in the electric power sector. ETTh1 differs from ETTh2 in granularity, focusing on long sequences compared to ETTh2’s hourly forecasting.
The ETTh dataset serves the purpose of aiding research and analysis in the electric power sector, particularly for forecasting transformer temperatures and power load features. Applications of the ETTh dataset include research in long sequence time-series forecasting and studying power load features for better power deployment strategies. The ETTh1 dataset is a multivariate time series dataset having 7 different variables (channels). It contains data of 725.83 days with a granularity of 1 hour, meaning each timestamp represents a one-hour interval of data, which provides 17420 timestamp values for each variable (17420 / 24 hours per day = 725.83 days of data).
2) Electricity: The Electricity dataset [7] is also a multivariate time series dataset, having 321 channels. It contains data of 1096 days with a granularity of 1 hour, which provides 26304 timestamp values for each channel (26304 / 24 hours per day = 1096 days of data).
3) Weather: The Weather dataset [8] contains 52696 timestamp values collected in 365.86 days; each timestamp has 21 channels and a granularity of 10 minutes.
4) Traffic: The Traffic dataset [9] contains 731 days of data with a granularity of 1 hour. It provides data of 862 channels, each having 17544 timestamp values.
TABLE I: Overview of ETTh1, Electricity, Weather, and Traffic datasets.
Datasets     Timestamps  Variables (Channels)  Granularity
ETTh1        17420       7                     1 hour
Electricity  26304       321                   1 hour
Weather      52696       21                    10 minutes
Traffic      17544       862                   1 hour
B. Implementation Details
The GLinear model is implemented using Python and is sourced from the official PyTorch implementation of LTSF-Linear1. The respective code repository contains the training and evaluation protocols of Autoformer, NLinear, and DLinear. Similarly, the RLinear2 model is trained and evaluated using the same protocol to ensure a fair comparison across all models.
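The timestamps-to-days arithmetic in the dataset descriptions can be sanity-checked for the hourly datasets:

```python
# Sanity-check Table I for the hourly datasets: timestamps / 24 = days of data.
hourly = {
    # name: (timestamp count, days reported in the paper)
    "ETTh1":       (17420, 725.83),
    "Electricity": (26304, 1096.0),
    "Traffic":     (17544, 731.0),
}
for name, (timestamps, days_reported) in hourly.items():
    days = timestamps / 24  # one timestamp per hour
    print(f"{name}: {days:.2f} days (reported: {days_reported})")
```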
The same set of hyperparameters is used for training all linear models, such as using the Mean Squared Error (MSE) criterion, the Adam [11] optimizer, and a learning rate of 0.001.
C. Evaluation Metrics
The evaluation metrics used for comparison are MSE and Mean Absolute Error (MAE). These metrics are commonly used to assess the accuracy and performance of predictors [1], [12].
VI. EXPERIMENTS AND RESULTS
The experimental setup was designed to assess GLinear’s performance in both short-term and long-term forecasting, as well as to analyze the impact of varying historical data lengths, using two different experiments.
1 https://github.com/cure-lab/LTSF-Linear/
2 https://github.com/plumprc/RTSF/blob/main/models/RLinear.py
TABLE II: Multivariate forecasting performance comparison across datasets and models (lower MSE/MAE is better). Input Sequence Length for all experiments is 336 and a range of Prediction Lengths {12, 24, 48, 96, 192, 336, 720} is used. The top-performing results are marked in bold. The second-best results are underlined.
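A minimal sketch of the two evaluation metrics (the MSE criterion is also the training loss):

```python
def mse(y_true, y_pred):
    """Mean Squared Error: average of squared prediction errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error: average of absolute prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 2.0, 2.0, 4.0]
print(mse(y_true, y_pred))  # (0.25 + 0 + 1 + 0) / 4 = 0.3125
print(mae(y_true, y_pred))  # (0.5 + 0 + 1 + 0) / 4 = 0.375
```

MSE penalizes large errors quadratically while MAE weighs all errors linearly, which is why the two metrics can rank models differently on the same forecasts.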
Lookup Window (Input Sequence Length) = 336, Learning Rate = 0.001
Dataset / Output Horizon   Autoformer       NLinear          DLinear          RLinear          GLinear
(Prediction Length)        MSE     MAE      MSE     MAE      MSE     MAE      MSE     MAE      MSE     MAE
Electricity  12            0.1638  0.2872   0.1000  0.2006   0.0997  0.2009   0.0967  0.1973   0.0883  0.1860
             24            0.1711  0.2917   0.1103  0.2092   0.1099  0.2089   0.1060  0.2049   0.0988  0.1952
             48            0.1827  0.2990   0.1255  0.2232   0.1249  0.2231   0.1201  0.2180   0.1144  0.2101
             96            0.1960  0.3106   0.1409  0.2366   0.1401  0.2374   0.1358  0.2317   0.1313  0.2258
             192           0.2064  0.3182   0.1551  0.2488   0.1538  0.2505   0.1518  0.2455   0.1494  0.2423
             336           0.2177  0.3290   0.1717  0.2654   0.1693  0.2678   0.1688  0.2621   0.1651  0.2582
             720           0.2477  0.3528   0.2104  0.2977   0.2042  0.3005   0.2071  0.2940   0.2027  0.2906
ETTh1        12            0.3991  0.4422   0.3069  0.3564   0.2976  0.3494   0.2862  0.3412   0.2848  0.3448
             24            0.4759  0.4733   0.3474  0.3842   0.3194  0.3627   0.3090  0.3559   0.3142  0.3654
             48            0.5046  0.4831   0.3553  0.3845   0.3477  0.3803   0.3454  0.3766   0.3537  0.3869
             96            0.5392  0.4979   0.3731  0.3941   0.3705  0.3919   0.3901  0.4054   0.3820  0.4025
             192           0.4907  0.4906   0.4089  0.4157   0.4044  0.4128   0.4223  0.4279   0.4202  0.4269
             336           0.4805  0.4886   0.4324  0.4307   0.4553  0.4582   0.4417  0.4383   0.4915  0.4715
             720           0.6303  0.5930   0.4369  0.4527   0.4975  0.5087   0.4634  0.4686   0.5923  0.5372
Traffic      12            0.5624  0.3830   0.3623  0.2662   0.3610  0.2644   0.3762  0.2744   0.3222  0.2385
             24            0.5801  0.3786   0.3719  0.2682   0.3709  0.2672   0.3834  0.2775   0.3369  0.2471
             48            0.6060  0.3796   0.3945  0.2769   0.3932  0.2760   0.4041  0.2864   0.3630  0.2607
             96            0.6426  0.3998   0.4113  0.2820   0.4104  0.2829   0.4194  0.2921   0.3875  0.2718
             192           0.6425  0.3967   0.4245  0.2872   0.4229  0.2881   0.4323  0.2965   0.4056  0.2802
             336           0.6675  0.4088   0.4375  0.2943   0.4362  0.2961   0.4451  0.3027   0.4200  0.2871
             720           0.6570  0.4030   0.4657  0.3109   0.4660  0.3152   0.4733  0.3191   0.4488  0.3038
Weather      12            0.2010  0.2933   0.0784  0.1127   0.0783  0.1158   0.0706  0.0974   0.0716  0.0940
             24            0.2095  0.3033   0.1056  0.1453   0.1040  0.1519   0.0905  0.1247   0.0909  0.1247
             48            0.2397  0.3202   0.1357  0.1824   0.1367  0.1937   0.1138  0.1566   0.1163  0.1602
             96            0.3004  0.3776   0.1761  0.2264   0.1756  0.2386   0.1450  0.1936   0.1457  0.1966
             192           0.3916  0.4382   0.2164  0.2595   0.2160  0.2739   0.1878  0.2339   0.1883  0.2385
             336           0.3830  0.4171   0.2664  0.2966   0.2652  0.3192   0.2404  0.2743   0.2407  0.2764
             720           0.5420  0.5032   0.3339  0.3437   0.3275  0.3667   0.3159  0.3271   0.3200  0.3334
Top 1 Performing           0                2                2                9                15
Top 2 Performing           0                5                10               17               23
A. Evaluating Different Prediction Lengths for Fixed Input Length
In the first experiment, the length of the input sequence was fixed at 336 time steps, representing a historical window to learn the underlying patterns. The prediction lengths were varied across multiple time frames to assess the model’s ability to forecast different horizons, as displayed in Table II. Table II provides a comprehensive evaluation of all predictors, including Autoformer, NLinear, DLinear, RLinear, and the proposed GLinear, across four datasets. In an extensive series of experiments, we varied the prediction lengths in the range {12, 24, 48, 96, 192, 336, 720}. This set of experiments allows for an evaluation of the model performance in forecasting short-, medium-, and long-term future steps, helping to gauge the effectiveness of the model for various forecasting time horizons. We used a quantitative approach to score the best-performing candidate in all datasets, giving a point for the best and second-best results in each row. The results highlight that GLinear is the majority winner, with substantially improved results in both evaluation metrics. The performance of these models can also be compared using Figure 5. The results demonstrate that the proposed GLinear model outperforms other predictors in most cases (a lower MSE indicates better performance). Specifically, GLinear achieves the highest performance for the Electricity and Traffic datasets. For Weather forecasting, GLinear is the second-best model, with RLinear taking the top spot. However, the difference in MSE between the two models is minimal, as shown in Figure 5 (d).
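The scoring scheme above (a point for the best result in each row) can be sketched with the Traffic MSE rows of Table II, where GLinear has the lowest MSE at every horizon:

```python
# Tally best-per-row points, as in the paper's scoring scheme.
# Data: Traffic MSE rows from Table II, models in a fixed order.
models = ["Autoformer", "NLinear", "DLinear", "RLinear", "GLinear"]
traffic_mse = {
    12:  [0.5624, 0.3623, 0.3610, 0.3762, 0.3222],
    24:  [0.5801, 0.3719, 0.3709, 0.3834, 0.3369],
    48:  [0.6060, 0.3945, 0.3932, 0.4041, 0.3630],
    96:  [0.6426, 0.4113, 0.4104, 0.4194, 0.3875],
    192: [0.6425, 0.4245, 0.4229, 0.4323, 0.4056],
    336: [0.6675, 0.4375, 0.4362, 0.4451, 0.4200],
    720: [0.6570, 0.4657, 0.4660, 0.4733, 0.4488],
}
best = {m: 0 for m in models}
for row in traffic_mse.values():
    ranked = sorted(range(len(models)), key=lambda i: row[i])
    best[models[ranked[0]]] += 1  # one point for the lowest MSE in the row
print(best["GLinear"])  # 7: best on all seven Traffic horizons
```

The paper's "Top 1 Performing" row applies the same tally over all four datasets and both metrics.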
Fig. 5: Multivariate forecasting performance comparison across datasets and models. Input Sequence Length is 336 and values of Prediction Length are {12, 24, 48, 96, 192, 336, 720}. Datasets are (a) Electricity, (b) ETTh1, (c) Traffic, and (d) Weather.
For the ETTh1 dataset, no single model consistently outperforms others across all prediction lengths. GLinear ranks among the top-performing models for shorter prediction lengths (12, 24, and 48), but the best-performing model changes as the prediction length increases. Additionally, the performance of GLinear deteriorates with longer prediction lengths for this dataset. The next experiment aims to provide an explanation to better understand the underlying reasons.
B. Impact of Input Sequence Length on Forecasting Future Steps
In the second experiment, the length of the input sequence was varied to understand how much historical data is required to accurately forecast the 24 and 720 future time steps. The input sequence lengths were set to {48, 72, 96, 120, 144, 168, 192, 336, 504, 672, 720}. For each of these input lengths, the model was trained to predict 24 and 720 future steps. This experiment aims to investigate how different lengths of historical data affect the model’s ability to generate accurate predictions for both short-term (24 steps) and long-term (720 steps) forecasts. Figure 6 provides the result of the second experiment, which compares the performance of the proposed GLinear with other state-of-the-art linear predictors (NLinear, DLinear, and RLinear) under different input sequence lengths to understand how much historical data is enough for short- and long-term forecasting. MSE results were computed for two prediction lengths (24 and 720 time steps) to analyze short-term and long-term forecasting performance, respectively. It is worth noting that, similar to the previous experiment, the proposed GLinear model outperforms all other linear predictors for the Electricity and Traffic datasets. For the Weather dataset, both GLinear and RLinear show nearly identical forecasting performance. The performance comparison of predictors on the ETTh1 dataset reveals interesting insights about the proposed GLinear model. Specifically, GLinear’s performance declines as it uses more historical data. However, it delivers the best performance for both short- and long-term forecasting when using shorter input sequence lengths, making it data-efficient by requiring less historical data to produce accurate forecasts.
VII. CONCLUSION AND FUTURE DIRECTIONS
In this paper, we have introduced GLinear, a novel and data-efficient architecture for multivariate time series forecasting (TSF) that leverages periodic patterns to enhance prediction accuracy while requiring less historical data compared to existing linear models. Our experiments across four datasets (ETTh1, Electricity, Traffic, and Weather) demonstrate that GLinear not only achieves competitive performance but also outperforms state-of-the-art models such as NLinear, DLinear, and RLinear, as well as Transformer-based models like Autoformer, in various TSF scenarios. Overall, GLinear represents a promising step towards simpler, more efficient architectures for TSF tasks, achieving robust results with lower computational and data requirements.
We believe that this approach opens new avenues for the development of efficient models for time series analysis, offering both better accuracy and computational savings. Future work can explore applying the GLinear architecture to other time series-related tasks, such as anomaly detection or forecasting in different domains. Additionally, the periodic pattern extraction mechanism can be integrated into other deep learning models to enhance their efficiency and predictive performance. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE JOURNAL, 2025 8 100 200 300 400 500 600 7000.100.120.140.160.18MSE Electricity -- Output Horizon (Prediction Length) = 24 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 700 Input Sequence Length0.100.150.200.250.30MSE Electricity -- Output Horizon (Prediction Length) = 720 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 7000.300.320.340.36MSE ETTh1 -- Output Horizon (Prediction Length) = 24 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 700 Input Sequence Length0.300.350.400.450.50MSE ETTh1 -- Output Horizon (Prediction Length) = 720 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 7000.40.50.6MSE Traffic -- Output Horizon (Prediction Length) = 24 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 700 Input Sequence Length0.40.60.8MSE Traffic -- Output Horizon (Prediction Length) = 720 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 7000.090.100.110.120.13MSE Weather -- Output Horizon (Prediction Length) = 24 NLinear DLinear RLinear GLinear 100 200 300 400 500 600 700 Input Sequence Length0.10.20.30.4MSE Weather -- Output Horizon (Prediction Length) = 720 NLinear DLinear RLinear GLinear Fig. 
6: A performance comparison of all linear predictors (NLinear, DLinear, RLinear, and the proposed GLinear) with varying input sequence lengths in both long-term forecasting (720 time steps) and short-term forecasting (24 time steps) for different datasets, (a) Electricity, (b) ETTh1, (c) Traffic, and (d) Weather. Acknowledgment This research has been partially funded by the European Union - Next Generation EU through the Project of National Relevance "Innovative mathematical modeling for cell mechanics: global approach from micro-scale models to experimental validation integrated by reinforcement learning", financed by European Union-Next-GenerationEU-National Recovery and Resilience Plan-NRRP-M4C1-I 1.1, CALL PRIN 2022 PNRR D.D. 1409 14-09-2022 (Project code P2022MXCJ2, CUP F53D23010080001) granted by the Italian MUR. REFERENCES [1] Zeng, A., Chen, M., Zhang, L. and Xu, Q., 2023, June. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 9, pp. 11121-11128). [2] Li, Z., Qi, S., Li, Y. and Xu, Z., 2023. Revisiting long-term time series forecasting: An investigation on linear mapping. arXiv preprint arXiv:2305.10721. [3] Ni, R., Lin, Z., Wang, S. and Fanti, G., 2024, April. Mixture-of-Linear-Experts for long-term time series forecasting. In International Conference on Artificial Intelligence and Statistics (pp. 4672-4680). PMLR. [4] Kim, T., Kim, J., Tae, Y., Park, C., Choi, J.H. and Choo, J., 2021, May. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations. [5] Hendrycks, D. and Gimpel, K., 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415. [6] Zhou, H., Zhang, S., Peng, J., Zhang, S., Li, J., Xiong, H. and Zhang, W., 2021, May. Informer: Beyond efficient transformer for long sequence time-series forecasting.
In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 12, pp. 11106-11115). [7] Khan, Z.A., Hussain, T., Ullah, A., Rho, S., Lee, M. and Baik, S.W., 2020. Towards efficient electricity forecasting in residential and commercial buildings: A novel hybrid CNN with a LSTM-AE based framework. Sensors, 20(5), p.1399. [8] Angryk, R.A., Martens, P.C., Aydin, B., Kempton, D., Mahajan, S.S., Basodi, S., Ahmadzadeh, A., Cai, X., Filali Boubrahimi, S., Hamdi, S.M. and Schuh, M.A., 2020. Multivariate time series dataset for space weather data analytics. Scientific Data, 7(1), p.227. [9] Chen, C., Petty, K., Skabardonis, A., Varaiya, P. and Jia, Z., 2001. Freeway performance measurement system: mining loop detector data. Transportation Research Record, 1748(1), pp.96-102. [10] Buansing, T. S. T., Golan, A., & Ullah, A. (2020). An information-theoretic approach for forecasting interval-valued SP500 daily returns. International Journal of Forecasting, 36(3), 800–813. Elsevier. [11] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv. [12] Li, W., & Law, K. L. E. (2024). Deep learning models for time series forecasting: a review. IEEE Access. IEEE. [13] Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2018). Statistical and machine learning forecasting methods: Concerns and ways forward. PLoS One, 13(3), e0194889. Public Library of Science. [14] Kanwal, N., Eftestøl, T., Khoraminia, F., Zuiverloon, T. C. M., & Engan, K. (2023). Vision transformers for small histological datasets learned through knowledge distillation. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (pp. 167–179). Springer. [15] Ariyo, A. A., Adewumi, A. O., & Ayo, C. K. (2014). Stock price prediction using the ARIMA model. In 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation (pp. 106–112). IEEE.
[16] Abraham, G., Byrnes, G. B., & Bain, C. A. (2009). Short-term forecasting of emergency inpatient flow. IEEE Transactions on Information Technology in Biomedicine, 13(3), 380–388. IEEE. [17] Wazirali, R., Yaghoubi, E., Abujazar, M. S. S., Ahmad, R., & Vakili, A. H. (2023). State-of-the-art review on energy and load forecasting in microgrids using artificial neural networks, machine learning, and deep learning techniques. Electric Power Systems Research, 225, 109792. Elsevier. [18] Ouyang, T., He, Y., Li, H., Sun, Z., & Baek, S. (2019). Modeling and forecasting short-term power load with copula model and deep belief network. IEEE Transactions on Emerging Topics in Computational Intelligence, 3(2), 127–136. IEEE. [19] Zhou, S., Guo, S., Du, B., Huang, S., & Guo, J. (2022). A hybrid framework for multivariate time series forecasting of daily urban water demand using attention-based convolutional neural network and long short-term memory network. Sustainability, 14(17), 11086. MDPI. [20] Lin, Y., Koprinska, I., & Rana, M. (2021). Temporal convolutional attention neural networks for time series forecasting. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1–8). IEEE. [21] Dai, Z.-Q., Li, J., Cao, Y.-J., & Zhang, Y.-X. (2025). SALSTM: Segmented self-attention long short-term memory for long-term forecasting. The Journal of Supercomputing, 81(1), 115. Springer. [22] Tian, W., Luo, F., & Shen, K. (2024). PSRUNet: A recurrent neural network for spatiotemporal sequence forecasting based on parallel simple recurrent unit. Machine Vision and Applications, 35(3), 1–15. Springer. [23] Wu, N., Green, B., Ben, X., & O'Banion, S. (2020). Deep transformer models for time series forecasting: The influenza prevalence case. arXiv preprint arXiv:2001.08317. [24] Zhou, H., Zhang, S., Peng, J., Zhang, S., Li, J., Xiong, H., & Zhang, W. (2021, May). Informer: Beyond efficient transformer for long sequence time-series forecasting.
In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 12, pp. 11106-11115). [25] Wu, H., Xu, J., Wang, J., & Long, M. (2021). Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34, 22419-22430. [26] Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., & Long, M. (2022). TimesNet: Temporal 2D-variation modeling for general time series analysis. arXiv preprint arXiv:2210.02186. [27] Zhou, T., Ma, Z., Wen, Q., Wang, X., Sun, L., & Jin, R. (2022, June). FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning (pp. 27268-27286). PMLR. [28] Li, D., Tan, Y., Zhang, Y., Miao, S., & He, S. (2023). Probabilistic forecasting method for mid-term hourly load time series based on an improved temporal fusion transformer model. International Journal of Electrical Power & Energy Systems, 146, 108743. [29] Aizpurua, J. I., Stewart, B. G., McArthur, S. D., Penalba, M., Barrenetxea, M., Muxika, E., & Ringwood, J. V. (2022). Probabilistic forecasting informed failure prognostics framework for improved RUL prediction under uncertainty: A transformer case study. Reliability Engineering & System Safety, 226, 108676. [30] Wang, H., Zou, D., Zhao, B., Yang, Y., Liu, J., Chai, N., & Song, X. (2024, June). RDLinear: A novel time series forecasting model based on decomposition with RevIN. In 2024 International Joint Conference on Neural Networks (IJCNN) (pp. 1-7). IEEE. [31] O'Brien, C. M. (2016). Statistical learning with sparsity: the lasso and generalizations. Wiley Periodicals, Inc. [32] Ni, R., Lin, Z., Wang, S., & Fanti, G. (2024, April). Mixture-of-Linear-Experts for long-term time series forecasting. In International Conference on Artificial Intelligence and Statistics (pp. 4672-4680). PMLR. [33] Zhihao Zheng, V., Choi, S., & Sun, L. (2023).
Enhancing Deep Traffic Forecasting Models with Dynamic Regression. arXiv e-prints, arXiv-2301. [34] Qayyum, H., Rizvi, S.T.H., Naeem, M., Khalid, U.B., Abbas, M. and Coronato, A., 2024. Enhancing diagnostic accuracy for skin cancer and COVID-19 detection: A comparative study using a stacked ensemble method. Technologies, 12(9), p.142. [35] Kaur, J., Parmar, K. S., & Singh, S. (2023). Autoregressive models in environmental forecasting time series: a theoretical and application review. Environmental Science and Pollution Research, 30(8), 19617-19641. [36] Thu, N. T. H., Bao, P. Q., & Van, P. N. (2023). A hybrid model of decomposition, extended Kalman filter and autoregressive-long short-term memory network for hourly day ahead wind speed forecasting. J. Appl. Sci. Eng., 27, 3063-3071. | 4 | 1 | The GLinear model, being a simplified architecture without complex components like Transformers, should have a relatively low parameter count compared to complex models. The datasets used are of manageable sizes, with the largest having around 50,000 time steps and multiple channels, which is well within the processing capabilities of a single GPU in a standard training session. Given that similar architectures (like NLinear and DLinear) have shown good performance with reduced computational requirements, it is estimated that training may take around 4 hours on a single GPU, possibly with a batch size of around 32 or 64 to optimize memory usage. The training process is also likely to involve 50-100 epochs, which is standard for such models, further reducing the overall training time compared to complex models. Therefore, training can feasibly be completed in under 8 hours on a single GPU.
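To make the budget estimate above concrete, here is a minimal pure-Python stand-in for this family of one-layer linear forecasters (hypothetical illustration, not code from the GLinear repository; a synthetic sine series replaces the real datasets), which trains in seconds:

```python
import math
import random

# Toy sketch: fit a single linear map from a look-back window of length L
# to a forecast horizon of T by plain stochastic gradient descent,
# mirroring the NLinear/DLinear-style one-layer predictors discussed above.
random.seed(0)
L, T = 24, 8                                       # look-back and prediction length
series = [math.sin(0.3 * t) for t in range(300)]   # synthetic periodic signal

# Build (window, target) training pairs from the series.
pairs = [(series[i:i + L], series[i + L:i + L + T])
         for i in range(len(series) - L - T)]

W = [[0.0] * L for _ in range(T)]  # weight matrix: one row per horizon step
lr = 0.01
for epoch in range(30):            # a small fraction of the "50-100 epochs" above
    random.shuffle(pairs)
    for x, y in pairs:
        pred = [sum(W[o][i] * x[i] for i in range(L)) for o in range(T)]
        # Gradient of 0.5 * squared error with respect to W.
        for o in range(T):
            err = pred[o] - y[o]
            for i in range(L):
                W[o][i] -= lr * err * x[i]

# Mean squared error over the training pairs; a sinusoid is exactly
# representable by a linear map of its lags, so this converges near zero.
mse = sum(sum((sum(W[o][i] * x[i] for i in range(L)) - y[o]) ** 2
              for o in range(T))
          for x, y in pairs) / (len(pairs) * T)
print(f"MSE: {mse:.4f}")
```

On the real datasets the same loop runs over mini-batches on a GPU, but the per-step cost is still just one matrix-vector product, which is why the single-GPU estimate above is comfortable.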
| yes | Yes | Time Series | Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series Prediction | 2025-01-02 0:00:00 | https://github.com/t-rizvi/GLinear | 1 | Inside the repo in dataset folder | 193 sec * 4 = 12.9 minutes | https://colab.research.google.com/drive/1sI72VSxjN4cyQR7UrueWfBXwoFi9Y9Qr?usp=sharing | Yes | -- Training on all datasets is included in the scripts/EXP-LookBackWindow_\&_LongForecasting/Linear_LookBackWindow.sh file. To run only the Traffic dataset, I have included the conda command. |
BTAD | URD | [] | Unlocking the Potential of Reverse Distillation for Anomaly Detection | 2024-12-10T00:00:00 | https://arxiv.org/abs/2412.07579v1 | [
"https://github.com/hito2448/urd"
] | {'Segmentation AUROC': '98.1', 'Detection AUROC': '93.9', 'Segmentation AUPRO': '78.5', 'Segmentation AP': '65.2'} | [
"Detection AUROC",
"Segmentation AUROC",
"Segmentation AP",
"Segmentation AUPRO"
] | Given the following paper and codebase:
Paper: Unlocking the Potential of Reverse Distillation for Anomaly Detection
Codebase: https://github.com/hito2448/urd
Improve the URD model on the BTAD dataset. The result
should improve on the following metrics: {'Segmentation AUROC': '98.1', 'Detection AUROC': '93.9', 'Segmentation AUPRO': '78.5', 'Segmentation AP': '65.2'}. You must use only the codebase provided.
| Unlocking the Potential of Reverse Distillation for Anomaly Detection Xinyue Liu1, Jianyuan Wang2*, Biao Leng1, Shuo Zhang3 1School of Computer Science and Engineering, Beihang University 2School of Intelligence Science and Technology, University of Science and Technology Beijing 3Beijing Key Lab of Traffic Data Analysis and Mining, School of Computer & Technology, Beijing Jiaotong University {liuxinyue7, lengbiao}@buaa.edu.cn, wangjianyuan@ustb.edu.cn, zhangshuo@bjtu.edu.cn Abstract Knowledge Distillation (KD) is a promising approach for unsupervised Anomaly Detection (AD). However, the student network's over-generalization often diminishes the crucial representation differences between teacher and student in anomalous regions, leading to detection failures. To address this problem, the widely accepted Reverse Distillation (RD) paradigm designs asymmetric teacher and student networks, using an encoder as teacher and a decoder as student. Yet, the design of RD does not ensure that the teacher encoder effectively distinguishes between normal and abnormal features or that the student decoder generates anomaly-free features. Additionally, the absence of skip connections results in a loss of fine details during feature reconstruction. To address these issues, we propose RD with Expert, which introduces a novel Expert-Teacher-Student network for simultaneous distillation of both the teacher encoder and student decoder. The added expert network enhances the student's ability to generate normal features and optimizes the teacher's differentiation between normal and abnormal features, reducing missed detections. Additionally, Guided Information Injection is designed to filter and transfer features from teacher to student, improving detail reconstruction and minimizing false positives. Experiments on several benchmarks prove that our method outperforms existing unsupervised AD methods under the RD paradigm, fully unlocking RD's potential.
Code — https://github.com/hito2448/URD Introduction Anomaly Detection (AD) is one of the key tasks in industry. Due to the difficulty in obtaining anomalous images and the high cost of labeling, unsupervised Anomaly Detection has been extensively studied. Unsupervised AD uses only normal images during training, enabling the model to detect and localize anomalies in the test images. In recent years, thanks to the application of techniques such as reconstruction models, diffusion models, normalizing flow, and knowledge distillation, unsupervised AD has seen rapid advancements. Knowledge distillation is one of the common paradigms for unsupervised AD (Bergmann et al. 2020). Similar to traditional knowledge distillation, KD-based AD methods typically rely on a teacher-student network, where an initialized student network is distilled by the pre-trained teacher network. *Corresponding author. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Anomaly localization examples. Our method reduces missed detections and false positives in RD. Since the student network is trained only on normal images, it is unable to obtain the teacher network's anomaly representation ability. Therefore, during inference, the difference in feature representations between the teacher and the student networks is used to determine whether there are anomalies. Early KD-based AD methods (Bergmann et al. 2020; Salehi et al. 2021; Wang et al. 2021; Zhou et al. 2022) use teacher and student networks with identical or similar architectures and data flow, leading to the student over-generalizing the teacher's anomaly representation ability. To tackle this shortfall, Reverse Distillation (RD) (Deng and Li 2022) innovatively combines the concept of knowledge distillation with feature reconstruction, using a pre-trained encoder as the teacher and a decoder as the student.
The asymmetric network architectures and the reverse data flow of RD result in better anomaly localization performance. Although RD is simple and effective for unsupervised AD, there exist some shortcomings in its architectural design: (1) RD's bottleneck module claims to filter out abnormal information so that the student decoder generates anomaly-free features. However, since there is only normal supervision during training, the anomaly filtering is not explicitly guaranteed. Thus, in some cases, the decoder still reconstructs features similar to the teacher encoder's, giving rise to missed detections. (2) To inject encoder features into the decoder for better detail reconstruction from high-level representations, most reconstruction networks introduce skip connections. To prevent anomaly leakage, RD, though a feature reconstruction network, discards skip connections, limiting its ability to reconstruct fine details and leading to false positives in normal regions. Figure 2: Schematic diagram of the framework and data flow of RD and its variants including our proposed method. To address the above issues of RD, recent methods propose improvements from two aspects: enriching supervision information and expanding reconstruction information. For the lack of anomaly supervision, some methods integrate the denoising concept with anomaly synthesis, as shown in Figure 2 (b) (Tien et al. 2023; Jiang, Cao, and Shen 2023). For instance, the representative method RD++ (Tien et al. 2023) trains the bottleneck to reconstruct the normal feature space from the teacher encoder's abnormal features, thereby ensuring that the student decoder outputs normal features. However, this denoising strategy is based on a strong hypothesis that the features generated by the teacher encoder in anomalous regions differ from the corresponding normal features reconstructed by the student decoder.
When the proportion of normal pixels in the receptive field is large, this hypothesis may not hold. For poor detail reconstruction, other methods (Guo et al. 2023; Gu et al. 2023) introduce a memory bank to store normal features of the teacher encoder, thus expanding the information available to the student decoder during reconstruction, as in Figure 2 (c). However, the memory bank brings additional storage requirements, and feature search and alignment also require additional computation. To unlock the potential of the RD paradigm for the unsupervised AD task, we continue to innovate in comprehensive supervision and detail reconstruction. First, to ensure the efficacy of RD to a great extent, our idea is to incorporate synthetic anomalies to explicitly denoise the student decoder's features while also distilling the teacher encoder's features. By enlarging the difference between features generated in normal and anomalous regions, the teacher's anomaly sensitivity is improved. Additionally, to better reconstruct detail information in lower-level features, our intuition is to employ similarity attention to directly transfer the teacher's feature information into the student, thereby straightforwardly enhancing detail reconstruction. Unlike the previous RD framework, as illustrated in Figure 2 (d), we propose Reverse Distillation with Expert (RD-E) based on an innovative Expert-Teacher-Student Network, which leverages a frozen expert encoder to simultaneously train the teacher encoder and student decoder, enhancing their feature anomaly sensitivity and denoising capability. In addition, considering that skip connections may cause anomaly leakage, we design Guided Information Injection, utilizing the teacher's selective information to aid the student decoder in reconstructing low-level feature details.
Experimental results on widely benchmarked AD datasets demonstrate that the anomaly detection and localization performance of our method surpasses that of RD and other mainstream KD-based methods, achieving SOTA. Related Works As one of the crucial tasks in industrial quality inspection (Bergmann et al. 2019), unsupervised Anomaly Detection (AD) has earned increasing attention in recent years. Early methods in unsupervised AD often rely on generative models (Bergmann et al. 2018; Akcay, Atapour-Abarghouei, and Breckon 2019; Tang et al. 2020; Liu et al. 2023a; Zhang et al. 2023a), where the models are trained on normal samples to learn how to reconstruct them and the reconstruction error is used for inference. Other methods employ parametric density estimation (Defard et al. 2021; Gudovskiy, Ishizaka, and Kozuka 2022; Hyun et al. 2024; Zhou et al. 2024), where the parameters of the normal distribution are calculated and anomalies are detected based on how well the samples fit this distribution. Additionally, many methods incorporate pre-trained models (Liu et al. 2023b; Li et al. 2023) and memory banks (Roth et al. 2022; Bae, Lee, and Kim 2023), comparing the input images to stored normal features. Recently, synthetic anomalies have become a hot topic (Li et al. 2021; Lin and Yan 2024), with external datasets (Zavrtanik, Kristan, and Škočaj 2021) or diffusion models (Zhang, Xu, and Zhou 2024) being used to generate anomalies similar to real-world scenarios, thus aiding unsupervised AD. Besides, knowledge distillation (KD) based on teacher-student networks has recently been applied to unsupervised AD (Bergmann et al. 2020; Li et al. 2024), using differences in representation between the two networks to identify anomalies. A major concern in KD-based AD is the student's over-generalization to the teacher's anomaly representations. Some methods (Salehi et al. 2021; Wang et al. 2021; Rudolph et al. 2023; Liu et al.
2024) address this by using asymmetric teacher and student networks to differentiate their representation abilities. Reverse Distillation (RD) (Deng and Li 2022) follows this idea, proposing a reverse network architecture and data flow. Recently, several improvements to RD have been explored (Tien et al. 2023; Guo et al. 2023; Gu et al. 2023; Jiang, Cao, and Shen 2023; Guo et al. 2024; Zhang, Suganuma, and Okatani 2024). Revisiting Reverse Distillation RD (Deng and Li 2022) is a widely adopted unsupervised AD paradigm based on KD. The primary components of RD include a pre-trained teacher encoder E, a one-class bottleneck embedding (OCBE) module, and a student decoder D. During training, only normal images are used as input. Figure 3: Overview of our proposed method. (a) shows the overall architecture and training process of our designed Expert-Teacher-Student Network, where the expert is frozen and the teacher and student are trainable. Our proposed Guided Information Injection module is inserted between the two blocks of the student. (b) shows how to distill the teacher and student with the expert. The impact of distillation on the teacher and student features is visually represented. Through two sub-tasks, making the teacher encoder more sensitive to anomalies and better denoising the student features, differences between teacher and student features are achieved in anomalous regions, while similarities in normal regions are maintained. The teacher network is frozen, while the bottleneck and student networks are trainable. The OCBE module compresses multi-scale patterns into a low-dimensional space. The compact embedding is then fed into the student network to reconstruct the teacher network's features. During inference, the teacher network can capture abnormal features that deviate from the distribution of normal samples.
The OCBE module, by generating compact features, prevents these abnormal perturbations from being input into the student network. Therefore, the student network can generate anomaly-free features regardless of whether normal or abnormal samples are input. The discrepancy in feature reconstruction, measured by their similarity, is used to detect and localize anomalies. However, the design of RD still has some limitations that affect its anomaly detection performance: (1) Missed Detection Issue: The effectiveness of RD relies on two key premises, each with specific requirements for the teacher and student. Firstly, the teacher should be able to capture anomalies by generating abnormal features that differ from normal ones in the anomalous regions. RD assumes this premise is always valid. However, in some cases, such as when the anomalous region is small and normal pixels dominate the receptive field, this assumption may not hold true. Secondly, the student is expected to generate anomaly-free features. Since OCBE essentially performs only a downsampling operation, the generated features used as student inputs are not guaranteed to be compact and may still contain abnormal information. Additionally, the multi-layer convolutional student decoder has strong generalization capabilities, which means that even if it is trained only on normal samples, it may still generate abnormal features similar to those of the teacher encoder due to over-generalization. Consequently, the inability to meet these two key premises results in insufficient difference between the teacher and student features in anomalous regions. Therefore, some anomalies are not detected. (2) False Positive Issue: Since the teacher encoder performs multi-step downsampling and multi-layer convolution, the output high-level features lose many details compared with low-level features.
Directly using high-level features for low-level feature reconstruction results in reconstruction errors. Most previous reconstruction networks utilize skip connections to directly pass encoder features to the corresponding decoder layers. However, for RD, this operation may introduce abnormal information from the teacher encoder into the student decoder, making it difficult to generate anomaly-free features. To overcome this challenge, RD designs MFF, which fuses multiple layers of encoder features as the input of the decoder. Although MFF allows low-level features to be included in generating the decoder input, it still downsamples these features to a smaller scale before feature fusion. Hence, some useful detail information is lost, which causes notable discrepancies between the student's reconstructed features and the teacher's features even in normal regions, thereby raising the false positive rate. Method The overall architecture of our proposed method is illustrated in Figure 3 (a). Based on the original teacher-student framework of RD, we design an Expert-Teacher-Student (E-T-S) Network, which retains the design of the teacher, bottleneck, and student from RD. The teacher encoder T is a WideResNet50 (Zagoruyko and Komodakis 2016) pre-trained on ImageNet (Deng et al. 2009). The bottleneck, named OCBE by RD, includes Multi-scale Feature Fusion (MFF) and One-Class Embedding (OCE) modules. The student decoder S is a network symmetric to T, differing in that it replaces downsampling with upsampling. Additionally, S includes Guided Information Injection (GII) to incorporate information from the encoder. Besides, we innovatively introduce an expert network E with the same architecture and initial parameters as T. During training, different from RD, the teacher, bottleneck, and student in the E-T-S Network are all trainable.
The teacher uses a separate optimizer, while the bottleneck and student share the same optimizer (with the bottleneck being considered part of the student in the following sections). During inference, the frozen teacher and student are used for anomaly detection and localization. Reverse Distillation with Expert The teacher network's ability to perceive anomalies and the student network's capacity to generate anomaly-free features are prerequisites for RD. Previous RD and its variants do not meet both of these two conditions, triggering the missed detection issue. To tackle this problem, we propose to introduce an expert network to distill both the teacher and the student at the same time, and ensure the distillation process covers enhancing the teacher's anomaly sensitivity and denoising the student's features, as in Figure 3 (b). The teacher network is optimized to be more sensitive to anomalies and capable of generating differentiated abnormal and normal features. Simultaneously, the student network is trained with a denoising strategy to ensure normal features are generated even when anomalous samples are input. This dual strategy, based on the introduction of the expert network, ensures that the features of teacher and student are similar in normal regions and dissimilar in anomalous regions, which enables effective anomaly detection and localization. For each normal image I_n in the training set, an anomaly synthesis operation is performed to generate a corresponding synthetic anomalous image I_a. Here, we follow DRÆM (Zavrtanik, Kristan, and Škočaj 2021) for synthesizing anomalies with a Perlin noise generator (Perlin 1985) and the Describable Textures Dataset (Cimpoi et al. 2014). The teacher T receives a pair of images I = {I_n, I_a} as input and outputs three layers of features: F^n_T = {F^n_{T1}, F^n_{T2}, F^n_{T3}} = T(I_n) and F^a_T = {F^a_{T1}, F^a_{T2}, F^a_{T3}} = T(I_a).
The student S takes the features from the teacher network as input and reconstructs the corresponding three features: F^n_S = {F^n_{S1}, F^n_{S2}, F^n_{S3}} = S(F^n_T) and F^a_S = {F^a_{S1}, F^a_{S2}, F^a_{S3}} = S(F^a_T). Figure 4: (a) Cosine distance maps between features of teacher and student. (b) Guided Information Injection. The expert E, which only takes normal images as input, produces the corresponding features F_E = {F^n_{E1}, F^n_{E2}, F^n_{E3}} = E(I_n). To enhance the teacher's sensitivity to anomalies, we explicitly guide the teacher's feature extraction process using ground truth anomaly masks M_gt. We maintain high cosine similarity for normal regions. In the meantime, by increasing the cosine distances between the teacher's abnormal features and the expert's normal features in anomalous regions, the teacher network is optimized to generate differentiated abnormal and normal features. The teacher's training loss L_TE is calculated using the L1 distance as

D^{n/a}_{TE_i}(h,w) = 1 - \frac{F^{n/a}_{T_i}(h,w)^{\top} F^{n}_{E_i}(h,w)}{\|F^{n/a}_{T_i}(h,w)\| \, \|F^{n}_{E_i}(h,w)\|}    (1)

L^{n/a}_{TE} = \sum_{i=1}^{3} \frac{1}{H_i W_i} \sum_{h=1}^{H_i} \sum_{w=1}^{W_i} \left| D^{n/a}_{TE_i}(h,w) - M^{i}_{gt} \right|    (2)

L_{TE} = L^{n}_{TE} + L^{a}_{TE}    (3)

where H_i and W_i represent the height and width of the output feature of the i-th encoding block, and M^i_gt is obtained by downsampling M_gt to align with the size of F^i_T. To denoise the student's features, we use both the teacher and expert networks to guide the student network, ensuring the student network generates normal features. To be specific, the student network aims at reconstructing the normal features of the teacher and expert networks whether the input images are normal or anomalous, and is optimized based on cosine similarity with L_S calculated as

f = \mathcal{F}(F)    (4)

L^{i}_{SE/ST} = \left(1 - \frac{{f^{n}_{S_i}}^{\top} f^{n}_{E/T_i}}{\|f^{n}_{S_i}\| \, \|f^{n}_{E/T_i}\|}\right) + \left(1 - \frac{{f^{a}_{S_i}}^{\top} f^{n}_{E/T_i}}{\|f^{a}_{S_i}\| \, \|f^{n}_{E/T_i}\|}\right)    (5)

L_S = \sum_{i=1}^{3} \left( L^{i}_{SE} + L^{i}_{ST} \right)    (6)

where \mathcal{F} is the flatten operation introduced in ReContrast (Guo et al. 2024).
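A toy rendering of the teacher-side objective (Eqs. 1-3) may help. This is a hypothetical pure-Python sketch at a single feature scale, with plain lists standing in for feature maps; it is not the authors' PyTorch implementation:

```python
import math

# Per-location cosine distance D(h, w) = 1 - cos(F_T(h, w), F_E(h, w));
# the loss is the L1 distance between this distance map and the anomaly
# mask M_gt, so normal pixels (M = 0) are pulled toward the expert's
# features and anomalous pixels (M = 1) are pushed one cosine unit away.

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-8
    nv = math.sqrt(sum(b * b for b in v)) or 1e-8
    return 1.0 - dot / (nu * nv)

def teacher_loss(F_T, F_E, M):
    """F_T, F_E: HxW grids of C-dim vectors; M: HxW anomaly mask in {0, 1}."""
    H, W = len(F_T), len(F_T[0])
    total = 0.0
    for h in range(H):
        for w in range(W):
            total += abs(cosine_distance(F_T[h][w], F_E[h][w]) - M[h][w])
    return total / (H * W)

# Identical features with an all-normal mask give zero loss ...
F = [[[1.0, 0.0], [0.0, 1.0]]]          # a 1x2 grid of 2-dim features
assert teacher_loss(F, F, [[0, 0]]) < 1e-6
# ... while an "anomalous" pixel whose teacher feature still matches the
# expert's normal feature is penalised by |0 - 1| = 1 at that location.
print(teacher_loss(F, F, [[1, 0]]))     # → 0.5 (averaged over 2 pixels)
```

The student-side loss (Eqs. 4-6) follows the same cosine pattern, but on flattened features and always against normal targets.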
Guided Information Injection Considering that: (1) Higher-level features contain less texture detail, making detail reconstruction less critical. (2) The shorter generation path for higher-level features naturally leads to better reconstruction quality. The distance maps calculated by the cosine similarity of higher-level features in Figure 4 (a) are therefore believed to effectively locate anomalies and highlight anomalous regions.

Table 1: Image-level anomaly detection results I-AUC/I-AP (%) on MVTec AD with the best in bold.
| Category | STPM (FD) | DeSTSeg (FD) | HypAD (FD) | RD | RD++ | THFR | MemKD | Ours |
| Texture Average | - | 99.1/- | - | 99.7/99.9 | 99.8/99.9 | 99.7/- | 99.8/- | 99.8/100 |
| Object Average | - | 98.3/- | - | 98.3/99.3 | 98.6/99.4 | 98.9/- | 99.5/- | 98.9/99.6 |
| Total Average | 95.5/- | 98.6/- | 99.2/99.5 | 98.8/99.5 | 99.0/99.6 | 99.2/- | 99.6/- | 99.2/99.7 |

Figure 5: Inference procedure of our proposed method. The expert is removed and both teacher and student are frozen.

Inspired by this, we propose Guided Information Injection (GII), as shown in Figure 4 (b). By leveraging similarity-based attention from higher layers to guide the information injection from encoder to decoder, GII not only directly addresses the issue of the lack of low-level information during reconstruction, but also filters out most of the anomalous information, preventing anomaly leakage. As a result, detail information is introduced into the decoder in a more controlled and softer manner compared to traditional skip connections. Specifically, GII is inserted before the student decoder blocks S_1 and S_2. For the GII module before S_i, the input consists of the output features F^i_T, F^{i+1}_T, and F^{i+1}_S from T_i, T_{i+1}, and S_{i+1}, respectively. First, F^i_T and F^{i+1}_T are adjusted in dimension and combined to obtain the multi-scale fused feature F^{i+1}_{Tfuse}.
Then, the cosine similarity between the higher-level features Sim(F^{i+1}_T, F^{i+1}_S) (hereafter referred to as Sim) is calculated, where smaller values indicate a higher likelihood of anomalies. Finally, Sim is used to control the proportion of the fused feature F^{i+1}_{Tfuse} from the teacher encoder, and a feature with enriched details F^{i+1}_{SSA} is obtained. The original feature F^{i+1}_S and the detail-enriched feature F^{i+1}_{SSA} are concatenated and passed to the decoder block S_i for subsequent reconstruction, outputting the final feature F^i_S with injected detail information. For more detailed calculations, see Figure 4 (b). Inference Figure 5 illustrates the inference process. During inference, the expert network E is removed, ensuring that our method does not increase storage and computational overhead. The approach for anomaly scoring follows RD. For anomaly localization, the score map is obtained by summing the cosine distance maps between the three-layer features of the teacher T and the student S, which are upsampled to the input image size. For anomaly detection, the image-level anomaly score is the maximum value in the score map. Experiments Experimental Setup Datasets We conducted our experiments primarily on MVTec AD (Bergmann et al. 2019) containing 5354 images across 15 categories, MPDD (Jezek et al. 2021) containing 1346 images across 6 categories, and BTAD (Mishra et al. 2021), which includes 2540 images across 3 categories. All datasets have only normal images in the training set, while both normal and anomalous images appear in the test set. Implementation Details A separate detection model is trained for each category. During both training and inference, all images are resized to 256×256. The training batch size is 16, with an early stopping strategy for a maximum of 10,000 iterations.
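The similarity-gated injection that GII performs (cosine similarity between higher-level teacher and student features controlling how much fused teacher detail reaches the decoder) can be illustrated with a toy sketch. This is hypothetical pure-Python code for the gating idea only, not the authors' module, and omits the dimension adjustment and concatenation steps:

```python
import math

# At locations where teacher and student agree (likely normal), teacher
# detail passes through; where they disagree (likely anomalous), the
# injected detail is suppressed, limiting anomaly leakage into the decoder.

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-8
    nv = math.sqrt(sum(b * b for b in v)) or 1e-8
    return dot / (nu * nv)

def guided_injection(F_T_fuse, F_T_hi, F_S_hi):
    """Gate fused teacher features by teacher-student similarity.

    F_T_fuse: per-location fused teacher feature vectors.
    F_T_hi, F_S_hi: higher-level teacher/student features at the same locations.
    Returns the detail-enriched feature to concatenate with the student's own.
    """
    out = []
    for f_fuse, f_t, f_s in zip(F_T_fuse, F_T_hi, F_S_hi):
        sim = max(0.0, cosine_sim(f_t, f_s))  # low similarity => likely anomaly
        out.append([sim * x for x in f_fuse])
    return out

# A location where teacher and student agree passes detail through almost
# unchanged; a location where they disagree is suppressed to zero.
normal = guided_injection([[2.0, 2.0]], [[1.0, 0.0]], [[1.0, 0.0]])
abnormal = guided_injection([[2.0, 2.0]], [[1.0, 0.0]], [[-1.0, 0.0]])
print(normal, abnormal)  # → [[2.0, 2.0]] [[0.0, 0.0]]
```

In the paper's module the gated feature is additionally concatenated with the student's own feature before the next decoder block, as described above.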
Consistent with RD, the student decoder is optimized using an Adam optimizer with a learning rate of 0.005, while the teacher encoder is trained with another one at a learning rate of 0.0001. During inference, the anomaly maps are smoothed using a Gaussian filter with σ = 4.

Evaluation Metrics For anomaly detection, the evaluation metrics used are the area under the receiver operating characteristic curve (AUROC) and average precision (AP). For anomaly localization, in addition to AUROC and AP, we also report per-region-overlap (PRO) (Bergmann et al. 2020).

Main Results

To demonstrate the superiority of our method, we compare it with various KD-based unsupervised AD methods, including STPM (Wang et al. 2021), DeSTSeg (Zhang et al. 2023b), and HypAD (Li et al. 2024) under the Forward Distillation (FD) paradigm, as well as RD (Deng and Li 2022), RD++ (Tien et al. 2023), THFR (Guo et al. 2023), and MemKD (Gu et al. 2023) under the Reverse Distillation paradigm. For a fairer comparison, we retrain the main comparison methods RD and RD++ in the same environment as our method.
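As a concrete reference for the detection metric, image-level AUROC can be computed from anomaly scores and binary labels with the pairwise (Mann-Whitney) formulation. This small pure-Python sketch is illustrative only and is not part of the paper's codebase:

```python
def auroc(scores, labels):
    """Area under the ROC curve via pairwise comparison: the probability
    that a randomly chosen anomalous sample scores higher than a randomly
    chosen normal one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # anomalous
    neg = [s for s, y in zip(scores, labels) if y == 0]  # normal
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUROC = 1.0;
# one inversion among the 2x2 score pairs gives 0.75.
print(auroc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))   # prints 1.0
print(auroc([0.9, 0.15, 0.2, 0.1], [1, 1, 0, 0]))  # prints 0.75
```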
| Category | STPM | DeSTSeg | HypAD | RD | RD++ | THFR | MemKD | Ours |
|---|---|---|---|---|---|---|---|---|
| Carpet | 98.8/-/95.8 | 96.1/72.8/93.6 | -/-/92.7 | 99.3/67.2/97.9 | 99.2/63.9/97.7 | 99.2/-/97.7 | 99.1/-/97.5 | 99.6/83.0/98.3 |
| Grid | 99.0/-/96.6 | 99.1/61.5/96.4 | -/-/99.7 | 99.3/50.2/97.7 | 99.3/49.5/97.7 | 99.3/-/97.7 | 99.2/-/96.9 | 99.4/50.1/97.5 |
| Leather | 99.3/-/98.0 | 99.7/75.6/99.0 | -/-/99.9 | 99.5/52.6/99.2 | 99.4/51.4/99.2 | 99.4/-/99.2 | 99.5/-/99.2 | 99.7/70.0/99.3 |
| Tile | 97.4/-/92.1 | 98.0/90.0/95.5 | -/-/99.8 | 95.8/53.8/91.1 | 96.4/56.2/92.1 | 95.5/-/90.8 | 95.7/-/91.1 | 99.2/94.8/96.8 |
| Wood | 97.2/-/93.6 | 97.7/81.9/96.1 | -/-/95.3 | 95.3/51.5/93.2 | 95.7/51.8/93.2 | 95.3/-/93.3 | 95.3/-/91.2 | 98.1/80.2/95.2 |
| Texture Average | 98.3/-/95.2 | 98.1/76.4/96.1 | -/-/97.5 | 97.8/55.1/95.8 | 98.0/54.6/96.0 | 97.7/-/95.7 | 97.8/-/95.2 | 99.2/75.6/97.4 |
| Bottle | 98.8/-/95.1 | 99.2/90.3/96.6 | -/-/100 | 98.8/78.4/96.9 | 98.7/80.0/96.9 | 98.9/-/97.2 | 98.8/-/97.1 | 99.3/91.6/97.9 |
| Cable | 95.5/-/87.7 | 97.3/60.4/86.4 | -/-/93.3 | 97.8/59.6/92.6 | 98.4/63.6/93.9 | 98.5/-/94.8 | 98.3/-/93.4 | 98.7/73.1/94.9 |
| Capsule | 98.3/-/92.2 | 99.1/56.3/94.2 | -/-/96.9 | 98.8/46.6/96.4 | 98.9/47.4/96.5 | 98.7/-/95.9 | 98.8/-/96.2 | 98.9/50.5/96.8 |
| Hazelnut | 98.5/-/94.3 | 99.6/88.4/97.6 | -/-/99.7 | 99.2/67.9/96.0 | 99.2/66.5/96.3 | 99.2/-/96.2 | 99.1/-/95.7 | 99.2/68.0/96.1 |
| Metal nut | 97.6/-/94.5 | 98.6/93.5/95.0 | -/-/98.0 | 97.5/81.8/93.3 | 98.0/83.9/93.2 | 97.4/-/90.5 | 97.2/-/90.8 | 98.4/83.7/93.7 |
| Pill | 97.8/-/96.5 | 98.7/83.1/95.3 | -/-/98.4 | 98.4/80.2/96.9 | 98.4/79.6/97.1 | 98.0/-/96.4 | 98.3/-/96.6 | 98.7/83.7/97.5 |
| Screw | 98.3/-/93.0 | 98.5/58.7/92.5 | -/-/95.6 | 99.6/54.9/98.4 | 99.6/55.5/98.3 | 99.5/-/98.2 | 99.6/-/98.2 | 99.6/48.8/98.3 |
| Toothbrush | 98.9/-/92.2 | 99.3/75.2/94.0 | -/-/99.9 | 99.1/53.1/94.6 | 99.1/56.3/94.5 | 99.2/-/94.7 | 98.9/-/92.2 | 99.3/68.5/95.5 |
| Transistor | 82.5/-/69.5 | 89.1/64.8/85.7 | -/-/100 | 93.1/55.9/79.6 | 94.4/58.3/82.8 | 95.9/-/85.9 | 96.4/-/85.3 | 97.5/70.3/90.2 |
| Zipper | 98.5/-/95.2 | 99.1/85.2/97.4 | -/-/94.7 | 98.9/61.5/96.8 | 98.9/60.5/96.4 | 98.7/-/96.6 | 98.5/-/95.9 | 98.9/69.3/96.8 |
| Object Average | 96.5/-/90.9 | 97.9/75.6/93.5 | -/-/97.6 | 98.1/64.0/94.2 | 98.4/65.2/94.6 | 98.4/-/94.6 | 98.4/-/94.1 | 98.9/70.8/95.8 |
| Total Average | 97.0/-/92.1 | 97.9/75.8/94.4 | 98.0/62.5/97.6 | 98.0/61.0/94.7 | 98.2/61.6/95.1 | 98.2/-/95.0 | 98.2/-/94.5 | 99.0/72.4/96.3 |

Table 2: Pixel-level anomaly localization results P-AUC/P-AP/P-PRO (%) on MVTec AD with the best KD-based results underlined and the best RD-based results in bold. (STPM, DeSTSeg, and HypAD follow the Forward Distillation paradigm; the remaining methods follow Reverse Distillation.)

| Category | RD | RD++ | MemKD | Ours |
|---|---|---|---|---|
| Bracket Black | 98.1/6.2/92.1 | 98.2/9.8/92.8 | 97.8/10.7/94.5 | 98.7/21.4/96.0 |
| Bracket Brown | 97.2/25.7/95.4 | 97.1/25.6/94.9 | 96.3/20.5/95.2 | 98.6/30.3/96.7 |
| Bracket White | 99.4/15.6/97.8 | 99.5/12.8/97.2 | 98.8/15.9/97.3 | 99.4/17.1/98.2 |
| Connector | 99.5/64.2/96.9 | 99.3/61.3/96.0 | 99.4/60.6/96.4 | 99.6/73.3/97.7 |
| Metal Plate | 99.1/93.9/96.2 | 99.1/93.3/96.1 | 99.1/94.2/95.2 | 99.2/95.3/96.7 |
| Tubes | 99.2/76.0/97.6 | 99.2/74.8/97.4 | 99.2/74.0/97.3 | 99.4/77.4/98.1 |
| Average | 98.7/46.9/96.0 | 98.7/46.3/95.7 | 98.4/46.1/95.9 | 99.2/52.5/97.2 |

Table 3: Pixel-level anomaly localization results P-AUC/P-AP/P-PRO (%) on MPDD with the best in bold.

| Category | RD | RD++ | Ours |
|---|---|---|---|
| Class 01 | 96.7/50.0/77.7 | 96.1/48.3/71.7 | 97.2/55.0/78.6 |
| Class 02 | 96.8/65.9/66.5 | 96.5/60.1/69.4 | 97.4/78.2/66.9 |
| Class 03 | 99.1/53.5/87.3 | 99.7/59.2/87.2 | 99.8/62.5/90.0 |
| Average | 97.5/56.5/77.2 | 97.4/55.9/76.1 | 98.1/65.2/78.5 |

Table 4: Pixel-level anomaly localization results P-AUC/P-AP/P-PRO (%) on BTAD with the best in bold.

Anomaly Detection Table 1 shows the image-level anomaly detection results on MVTec AD (detailed per-category results are provided in the supplementary materials). For AUC, our method is comparable to the leading KD-based AD methods in overall average. Regarding AP, our method achieves SOTA performance, with an average of 99.7% over all categories.

Anomaly Localization We conduct the quantitative comparison of anomaly localization results on MVTec AD in Table 2. Our method surpasses the previous KD-based SOTA in pixel-level AUC, achieving 99.0%.
While our method ranks second in pixel-level AP and PRO, with 72.4% and 96.3%, it represents the best performance within the RD paradigm. Qualitative visual results are shown in Figure 1.

Furthermore, we extend the quantitative comparison to the MPDD and BTAD datasets. Tables 3 and 4 respectively present the anomaly localization results over all categories on MPDD and BTAD. Our method achieves the best performance in all metrics on both datasets compared with other RD-based methods, further validating its localization capability.

Ablation Analysis

Ablation Study on Network Composition Our proposed method primarily includes two innovative components: Reverse Distillation with Expert (RD-E), which innovates on distillation supervision, and Guided Information Injection (GII), which optimizes detail handling in the network. To demonstrate the effectiveness and necessity of these components, we conduct ablation experiments on the MVTec AD, MPDD, and BTAD datasets, as shown in Table 5. The quantitative results indicate that when both RD-E and GII are applied simultaneously, the method achieves the best localization results.

In addition, Figure 6 illustrates the qualitative comparison results, where the baseline refers to the standard RD.

| Expert | GII | MVTec AD (P-AUC/P-AP/P-PRO) | MPDD (P-AUC/P-AP/P-PRO) | BTAD (P-AUC/P-AP/P-PRO) |
|---|---|---|---|---|
| - | - | 98.02/61.01/94.70 | 98.75/46.95/95.99 | 97.53/55.45/77.17 |
| ✓ | - | 98.69/71.80/96.11 | 98.76/45.75/96.06 | 97.97/63.63/77.78 |
| - | ✓ | 98.17/60.20/94.77 | 99.14/48.95/97.17 | 97.83/59.14/78.23 |
| ✓ | ✓ | 98.97/72.37/96.31 | 99.14/52.46/97.25 | 98.11/65.24/78.48 |

Table 5: Ablation localization results (%) of network composition on MVTec AD, MPDD, and BTAD.

Figure 6: Visualization of ablation study on network composition. From top to bottom: the input image, the ground truth masks, and the output anomaly maps of Baseline (RD), Baseline+RD-E, and Baseline+RD-E+GII (Ours).
| Network | Den | Sen | I-AUC | P-AUC | I-AP | P-AP | P-PRO |
|---|---|---|---|---|---|---|---|
| Teacher-Student | - | - | 98.77 | 98.02 | 99.52 | 61.01 | 94.70 |
| Teacher-Student | ✓ | - | 98.62 | 98.29 | 99.47 | 61.56 | 95.39 |
| Expert-Teacher-Student | ✓ | ✓ | 98.72 | 98.69 | 99.48 | 71.80 | 96.11 |

Table 6: Ablation study results (%) of RD-E on MVTec AD. (Den: Denoising. Sen: Sensitivity.)

It is evident that incorporating RD-E significantly enhances anomaly localization capabilities, reducing the missed detection rate. Furthermore, with the introduction of GII, background noise in the obtained anomaly maps is greatly reduced, leading to a lower false positive rate. These findings align well with our previous analysis.

Ablation Study on Reverse Distillation with Expert In Table 6, we compare the results on MVTec AD between using only the teacher encoder for the student decoder's feature denoising (Den) and introducing an expert network that enhances the teacher's anomaly sensitivity while also denoising the student's features (Sen+Den). The results show a significant improvement in anomaly localization when the expert network is added to aid the distillation.

Ablation Study on Guided Information Injection Table 7 presents the results of ablation experiments on the GII module on MVTec AD. The "+SC" row indicates the absence of similarity attention, where F_SSA^{i+1} = F_Tfuse^{i+1}. The "+SA" row shows the results when similarity attention is introduced to filter features. The overall results highlight the effectiveness of GII and underscore the importance of the similarity attention mechanism within it.

| | I-AUC | P-AUC | I-AP | P-AP | P-PRO |
|---|---|---|---|---|---|
| w/o GII | 98.72 | 98.69 | 99.48 | 71.80 | 96.11 |
| w/ GII + SC | 98.96 | 98.86 | 99.56 | 71.14 | 96.19 |
| w/ GII + SA | 99.22 | 98.97 | 99.74 | 72.37 | 96.31 |

Table 7: Ablation study results (%) of GII on MVTec AD. (SC: Naive skip connection. SA: Similarity attention.)

Conclusion

In this paper, we first improve Reverse Distillation with Expert for unsupervised AD.
Building on the RD paradigm, we introduce an expert network that distills both the teacher and student networks, ensuring the effectiveness of RD by enhancing the teacher's sensitivity to anomalies and maintaining the student's ability to produce normal features. Besides, to address the challenge of detail reconstruction, we design Guided Information Injection, which uses high-level feature similarity as attention to guide the injection of the teacher's features into the student. With these innovations, our method effectively reduces missed detections and false positives in RD, as confirmed by experimental results.

References

Akcay, S.; Atapour-Abarghouei, A.; and Breckon, T. P. 2019. Ganomaly: Semi-supervised anomaly detection via adversarial training. In Computer Vision–ACCV 2018: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part III 14, 622–637. Springer.

Bae, J.; Lee, J.-H.; and Kim, S. 2023. PNI: industrial anomaly detection using position and neighborhood information. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6373–6383.

Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9592–9600.

Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2020. Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4183–4192.

Bergmann, P.; Löwe, S.; Fauser, M.; Sattlegger, D.; and Steger, C. 2018. Improving unsupervised defect segmentation by applying structural similarity to autoencoders. arXiv preprint arXiv:1807.02011.

Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; and Vedaldi, A. 2014. Describing textures in the wild.
In Pro- ceedings of the IEEE conference on computer vision and pattern recognition , 3606–3613. Defard, T.; Setkov, A.; Loesch, A.; and Audigier, R. 2021. Padim: a patch distribution modeling framework for anomaly detection and localization. In International Con- ference on Pattern Recognition , 475–489. Springer. Deng, H.; and Li, X. 2022. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 9737–9746. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei- Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition , 248–255. Ieee. Gu, Z.; Liu, L.; Chen, X.; Yi, R.; Zhang, J.; Wang, Y .; Wang, C.; Shu, A.; Jiang, G.; and Ma, L. 2023. Remem- bering Normality: Memory-guided Knowledge Distillation for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, 16401–16409. Gudovskiy, D.; Ishizaka, S.; and Kozuka, K. 2022. Cflow- ad: Real-time unsupervised anomaly detection with local- ization via conditional normalizing flows. In Proceedings of the IEEE/CVF winter conference on applications of com- puter vision , 98–107. Guo, H.; Ren, L.; Fu, J.; Wang, Y .; Zhang, Z.; Lan, C.; Wang, H.; and Hou, X. 2023. Template-guided Hierarchical Fea- ture Restoration for Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, 6447–6458.Guo, J.; Jia, L.; Zhang, W.; Li, H.; et al. 2024. Recontrast: Domain-specific anomaly detection via contrastive recon- struction. Advances in Neural Information Processing Sys- tems, 36. Hyun, J.; Kim, S.; Jeon, G.; Kim, S. H.; Bae, K.; and Kang, B. J. 2024. ReConPatch: Contrastive patch representation learning for industrial anomaly detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Com- puter Vision , 2052–2061. 
Jezek, S.; Jonak, M.; Burget, R.; Dvorak, P.; and Skotak, M. 2021. Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In 2021 13th International congress on ultra modern telecommuni- cations and control systems and workshops (ICUMT) , 66– 71. IEEE. Jiang, Y .; Cao, Y .; and Shen, W. 2023. A masked reverse knowledge distillation method incorporating global and lo- cal information for image anomaly detection. Knowledge- Based Systems , 280: 110982. Li, C.-L.; Sohn, K.; Yoon, J.; and Pfister, T. 2021. Cutpaste: Self-supervised learning for anomaly detection and localiza- tion. In Proceedings of the IEEE/CVF conference on com- puter vision and pattern recognition , 9664–9674. Li, H.; Chen, Z.; Xu, Y .; and Hu, J. 2024. Hyperbolic Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 17511–17520. Li, H.; Hu, J.; Li, B.; Chen, H.; Zheng, Y .; and Shen, C. 2023. Target before shooting: Accurate anomaly detection and localization under one millisecond via cascade patch re- trieval. arXiv preprint arXiv:2308.06748 . Lin, J.; and Yan, Y . 2024. A Comprehensive Augmenta- tion Framework for Anomaly Detection. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 38, 8742–8749. Liu, T.; Li, B.; Du, X.; Jiang, B.; Geng, L.; Wang, F.; and Zhao, Z. 2023a. Fair: frequency-aware image restora- tion for industrial visual anomaly detection. arXiv preprint arXiv:2309.07068 . Liu, X.; Wang, J.; Leng, B.; and Zhang, S. 2024. Dual- modeling decouple distillation for unsupervised anomaly detection. In Proceedings of the 32nd ACM International Conference on Multimedia , 5035–5044. Liu, Z.; Zhou, Y .; Xu, Y .; and Wang, Z. 2023b. Simplenet: A simple network for image anomaly detection and localiza- tion. In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition , 20402–20411. 
Mishra, P.; Verk, R.; Fornasier, D.; Piciarelli, C.; and Foresti, G. L. 2021. VT-ADL: A vision transformer network for im- age anomaly detection and localization. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE) , 01–06. IEEE. Perlin, K. 1985. An image synthesizer. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques , SIGGRAPH ’85, 287–296. ISBN 0897911660. Roth, K.; Pemula, L.; Zepeda, J.; Sch ¨olkopf, B.; Brox, T.; and Gehler, P. 2022. Towards total recall in indus- trial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 14318–14328. Rudolph, M.; Wehrbein, T.; Rosenhahn, B.; and Wandt, B. 2023. Asymmetric student-teacher networks for industrial anomaly detection. In Proceedings of the IEEE/CVF winter conference on applications of computer vision , 2592–2602. Salehi, M.; Sadjadi, N.; Baselizadeh, S.; Rohban, M. H.; and Rabiee, H. R. 2021. Multiresolution knowledge distillation for anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , 14902–14912. Tang, T.-W.; Kuo, W.-H.; Lan, J.-H.; Ding, C.-F.; Hsu, H.; and Young, H.-T. 2020. Anomaly detection neural network with dual auto-encoders GAN and its industrial inspection applications. Sensors , 20(12): 3336. Tien, T. D.; Nguyen, A. T.; Tran, N. H.; Huy, T. D.; Duong, S.; Nguyen, C. D. T.; and Truong, S. Q. 2023. Revisiting reverse distillation for anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , 24511–24520. Wang, G.; Han, S.; Ding, E.; and Huang, D. 2021. Student- Teacher Feature Pyramid Matching for Anomaly Detection. In32nd British Machine Vision Conference 2021, BMVC 2021, Online, November 22-25, 2021 , 306. BMV A Press. Zagoruyko, S.; and Komodakis, N. 2016. Wide residual net- works. arXiv preprint arXiv:1605.07146 . Zavrtanik, V .; Kristan, M.; and Sko ˇcaj, D. 2021. 
Draem-a discriminatively trained reconstruction embedding for sur- face anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision , 8330–8339. Zhang, J.; Suganuma, M.; and Okatani, T. 2024. Contextual affinity distillation for image anomaly detection. In Proceed- ings of the IEEE/CVF Winter Conference on Applications of Computer Vision , 149–158. Zhang, X.; Li, N.; Li, J.; Dai, T.; Jiang, Y .; and Xia, S.-T. 2023a. Unsupervised surface anomaly detection with diffu- sion probabilistic model. In Proceedings of the IEEE/CVF International Conference on Computer Vision , 6782–6791. Zhang, X.; Li, S.; Li, X.; Huang, P.; Shan, J.; and Chen, T. 2023b. Destseg: Segmentation guided denoising student- teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 3914–3923. Zhang, X.; Xu, M.; and Zhou, X. 2024. RealNet: A feature selection network with realistic synthetic anomaly for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 16699–16708. Zhou, Q.; He, S.; Liu, H.; Chen, T.; and Chen, J. 2022. Pull & push: Leveraging differential knowledge distillation for efficient unsupervised anomaly detection and localization. IEEE Transactions on Circuits and Systems for Video Tech- nology .Zhou, Y .; Xu, X.; Song, J.; Shen, F.; and Shen, H. T. 2024. MSFlow: Multiscale Flow-Based Framework for Unsuper- vised Anomaly Detection. IEEE Transactions on Neural Networks and Learning Systems . Supplementary material: Unlocking the Potential of Reverse Distillation for Anomaly Detection Details of Guided Information Injection To selectively introduce the normal features from the teacher encoder into the student decoder and aid in detailed recon- struction, we propose Guided Information Injection (GII). In this section, the specific details of GII are provided, as shown in Algorithm 1. ⊙represents the element-wise mul- tiplication operation. 
The resulting F_SGII^{i+1} will replace the original F_S^{i+1} in the RD architecture and be fed into S_i to reconstruct the feature F_S^i.

Algorithm 1: Guided Information Injection before S_i
Input: teacher features F_T^i and F_T^{i+1}, student feature F_S^{i+1}
Output: output feature F_SGII^{i+1}
1: F_T^{i'} = Conv_1×1(DownSample(F_T^i))
2: F_Tfuse^{i+1} = Conv_3×3(F_T^{i'} + F_T^{i+1})
3: Sim^{i+1} = (F_T^{i+1} · F_S^{i+1}) / (∥F_T^{i+1}∥ ∥F_S^{i+1}∥)
4: F_SSA^{i+1} = Conv_3×3(F_Tfuse^{i+1} ⊙ Sim + F_S^{i+1} ⊙ (1 − Sim))
5: F_SGII^{i+1} = Conv_3×3({F_SSA^{i+1}, F_S^{i+1}})

Details of Anomaly Synthesis

Our method relies on synthetic anomalies for training the expert network to guide both the teacher and the student. The anomaly synthesis approach used in our method is based on DRÆM (Zavrtanik, Kristan, and Škočaj 2021). First, the synthetic masks are generated: random two-dimensional Perlin noise (Perlin 1985) is binarized to obtain the synthetic anomaly masks M_syn. Then, using M_syn, images from the external texture dataset DTD (Cimpoi et al. 2014) are overlaid onto normal images to create synthetic anomaly images. The corresponding synthetic anomalous image I_a for a given normal image I_n is expressed as

I_a = (1 − M_syn) ⊙ I_n + (1 − β)(M_syn ⊙ I_n) + β(M_syn ⊙ I_t)   (7)

where M_syn is the corresponding synthetic mask, β is the opacity parameter chosen from [0.15, 1] following DeSTSeg (Zhang et al. 2023b), and I_t is an image randomly obtained from the external dataset.

For object categories, we make a slight modification to the previous approach to let the synthetic anomalies be more similar to actual ones. Specifically, we restrict the anomaly regions to the object's foreground, which is achieved by using foreground masks to exclude background regions when generating the synthetic anomaly masks. The foreground masks are simply obtained by binarizing the images with a threshold.
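The similarity-gated fusion at the heart of Algorithm 1 (steps 3-4) can be illustrated with a small pure-Python sketch. The convolutions and multi-scale fusion are omitted, and the names are illustrative stand-ins, not identifiers from the released code:

```python
import math

def similarity_gated_fusion(f_teacher, f_student):
    """Per-position similarity attention (Algorithm 1, steps 3-4),
    with the 3x3 convolutions omitted for clarity.

    f_teacher stands in for the fused teacher feature F_Tfuse and
    f_student for the student feature F_S; each is a list of channel
    vectors, one per spatial position.
    """
    fused = []
    for t, s in zip(f_teacher, f_student):
        dot = sum(a * b for a, b in zip(t, s))
        sim = dot / ((math.sqrt(sum(a * a for a in t)) or 1e-8) *
                     (math.sqrt(sum(b * b for b in s)) or 1e-8))
        # High similarity (likely normal) -> inject more teacher detail;
        # low similarity (likely anomalous) -> keep the student feature,
        # which prevents anomalous information from leaking in.
        fused.append([sim * a + (1.0 - sim) * b for a, b in zip(t, s)])
    return fused

# Position 0: identical features (sim = 1) -> teacher feature passes through.
# Position 1: orthogonal features (sim = 0) -> student feature is kept.
out = similarity_gated_fusion([[1.0, 0.0], [1.0, 0.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
print(out)  # prints [[1.0, 0.0], [0.0, 1.0]]
```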
We provide some examples of synthetic anomalies, with texture categories shown in Figure 7 and object categories in Figure 8.

Experimental Setup

Details of Datasets

MVTec AD MVTec AD (Bergmann et al. 2019) is a commonly used benchmark for unsupervised AD. It consists of 15 categories of industrial images, including 5 texture categories and 10 object categories. The training set contains 3629 normal images, while the test set includes both normal and anomalous images, with a total of 1725 images.

MPDD MPDD (Jezek et al. 2021) is developed for the defect detection of metal parts, reflecting anomalies that occur on human-operated production lines. It includes a total of six categories of metal part images. The training set contains 888 normal images, while the test set includes 176 normal images and 282 defect images.

BTAD The BTAD (Mishra et al. 2021) dataset comprises 3 categories of industrial product images, with a total of 1,799 training images and 736 test images. Some misclassified images present in the original BTAD dataset were removed before our experiments.

VisA VisA (Zou et al. 2022) has 12 categories of images, which are grouped into three main types: Complex structure, Multiple instances, and Single instance. A total of 10,821 images are included in this dataset, with 9,621 normal images and 1,200 anomalous images. For the unsupervised AD task, we first divide VisA according to unsupervised standards, where the training set contains only normal images and the test set includes both normal and anomalous images.

Additional Implementation Details

Consistent with RD, we used the Adam optimizer with β = (0.5, 0.999). The backbone network we employ is WideResNet50 (Zagoruyko and Komodakis 2016), and the output anomaly maps are computed using the first three stages of the teacher and student networks. During inference, the ground truth anomaly masks are also resized to 256×256.
The code is implemented in PyTorch 2.0, and all experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU.

Evaluation Metrics

We introduce AUC and AP metrics for anomaly detection and localization. AUC measures the model's ability to distinguish between normal and anomalous samples across all thresholds, while AP emphasizes the balance between precision and recall, making it more suitable for imbalanced categories. For anomaly localization, considering that the size of anomalies may affect AUC, we also introduce PRO (Bergmann et al. 2020). PRO assesses the accuracy of anomaly localization by measuring the overlap between predicted and ground truth anomalous regions, treating anomalies of all sizes equally.

Figure 7: Anomaly synthesis of texture images in MVTec AD (normal image, synthetic mask, and anomalous image for the Carpet, Grid, Leather, Tile, and Wood categories).

Throughout the experimental results, we use I-AUC and I-AP to refer to image-level AUC and AP metrics, and P-AUC, P-AP, and P-PRO to refer to pixel-level AUC, AP, and PRO metrics.

Complete Anomaly Detection Results

This section presents the complete anomaly detection results for each category. Tables 8, 9, and 10 respectively show the detection results on the MVTec AD, MPDD, and BTAD datasets. Overall, our method achieves state-of-the-art results on AP compared with other advanced RD-based methods on the MVTec AD and BTAD datasets. Additionally, our method achieves the best detection performance in terms of AUC and AP across many categories. Although the anomaly detection performance on MPDD falls short of MemKD (Gu et al. 2023), it still surpasses the baseline method RD (Deng and Li 2022). Furthermore, in some categories such as Connector, Metal Plate, and Tubes, our method either exceeds or matches the performance of other RD-based competitors.
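The per-region averaging that distinguishes PRO from plain pixel-level overlap can be shown with a tiny pure-Python sketch. This evaluates a single threshold and takes the ground-truth regions as given (the full metric additionally averages over thresholds and extracts regions via connected-component labeling); all names here are illustrative:

```python
def pro_at_threshold(pred_mask, gt_regions):
    """Per-region overlap at one threshold: for each ground-truth
    anomalous region, compute the fraction of its pixels covered by the
    binary prediction, then average over regions so that small anomalies
    weigh as much as large ones.

    pred_mask: flat list of 0/1 predictions.
    gt_regions: list of flat 0/1 masks, one per connected anomaly
                (each assumed non-empty).
    """
    overlaps = []
    for region in gt_regions:
        area = sum(region)
        hit = sum(p & g for p, g in zip(pred_mask, region))
        overlaps.append(hit / area)
    return sum(overlaps) / len(overlaps)

# One large region fully covered, one small region half covered:
# plain pixel overlap would be 5/6, but PRO averages per region.
pred = [1, 1, 1, 1, 1, 0]
large = [1, 1, 1, 1, 0, 0]
small = [0, 0, 0, 0, 1, 1]
print(pro_at_threshold(pred, [large, small]))  # prints 0.75
```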
Experimental Results on VisA

This section shows the anomaly detection and localization results of our method on the VisA dataset, as shown in Table 11. The methods we compare with are RD-based, including RD (Deng and Li 2022) and MemKD (Gu et al. 2023). As shown in the table, our proposed method also outperforms other RD-based SOTA methods on VisA in P-AUC and P-AP, achieving 99.1% and 47.7%.

To clarify, we first downsample the ground truth masks to 256×256 and then test the model using the downsampled anomaly masks. In the VisA dataset, some anomalies are too small to be distinguishable after downsampling, which may result in deviations. To ensure a fairer comparison, we re-implement RD on VisA using the same testing approach as our method.

More Ablation Results

Ablation Study on Network Composition This section provides the qualitative ablation studies on the network composition conducted on the MPDD and BTAD datasets, as shown in Figure 9.

Ablation Study on Guided Information Injection We extend the ablation study on Guided Information Injection to the MPDD and BTAD datasets. Table 12 shows the anomaly localization results. The addition of the skip connection leads to improvements across all three localization metrics on both datasets. Additionally, after applying the proposed similarity attention, both AUC and AP metrics are further improved while PRO remains stable.

More Visualizations

Visual Analysis of Detection Errors in RD Figures 10 and 11 respectively illustrate examples of missed detections and false positives by RD on MVTec AD. As shown in Figure 10, missed detections often occur in the central region of large-scale anomalies. Our method effectively addresses this issue, yielding more complete anomaly

Figure 8: Anomaly synthesis of object images in MVTec AD (normal image, foreground mask, synthetic mask, and anomalous image for the Bottle, Cable, Capsule, Hazelnut, Metal nut, Pill, Screw, Transistor, and Zipper categories).
| Category | STPM | DeSTSeg | HypAD | RD | RD++ | THFR | MemKD | Ours |
|---|---|---|---|---|---|---|---|---|
| Carpet | - | 98.9/- | - | 100/100 | 100/100 | 99.8/- | 99.6/- | 99.5/99.9 |
| Grid | - | 99.7/- | - | 100/100 | 100/100 | 100/- | 100/- | 100/100 |
| Leather | - | 100/- | - | 100/100 | 100/100 | 100/- | 100/- | 100/100 |
| Tile | - | 100/- | - | 99.4/99.8 | 99.8/99.9 | 99.3/- | 100/- | 100/100 |
| Wood | - | 97.1/- | - | 99.2/99.8 | 99.3/99.8 | 99.2/- | 99.5/- | 99.6/99.9 |
| Texture Average | - | 99.1/- | - | 99.7/99.9 | 99.8/99.9 | 99.7/- | 99.8/- | 99.8/100 |
| Bottle | - | 100/- | - | 100/100 | 100/100 | 100/- | 100/- | 100/100 |
| Cable | - | 97.8/- | - | 97.1/98.5 | 97.8/98.9 | 99.2/- | 99.2/- | 98.6/99.2 |
| Capsule | - | 97.0/- | - | 98.0/99.6 | 97.5/99.4 | 97.5/- | 98.8/- | 98.5/99.7 |
| Hazelnut | - | 99.9/- | - | 100/100 | 100/100 | 100/- | 100/- | 100/100 |
| Metal nut | - | 99.5/- | - | 98.6/99.7 | 100/100 | 100/- | 100/- | 99.7/99.9 |
| Pill | - | 97.2/- | - | 96.5/99.4 | 97.3/99.5 | 97.8/- | 98.3/- | 98.5/99.7 |
| Screw | - | 93.6/- | - | 98.7/99.6 | 98.5/99.5 | 97.1/- | 99.1/- | 95.8/98.5 |
| Toothbrush | - | 99.9/- | - | 98.9/99.6 | 98.9/99.6 | 100/- | 100/- | 100/100 |
| Transistor | - | 98.5/- | - | 97.1/97.6 | 98.3/98.1 | 99.7/- | 100/- | 99.9/99.9 |
| Zipper | - | 100/- | - | 98.1/99.4 | 97.4/99.2 | 97.7/- | 99.3/- | 98.3/99.5 |
| Object Average | - | 98.3/- | - | 98.3/99.3 | 98.6/99.4 | 98.9/- | 99.5/- | 98.9/99.6 |
| Total Average | 95.5/- | 98.6/- | 99.2/99.5 | 98.8/99.5 | 99.0/99.6 | 99.2/- | 99.6/- | 99.2/99.7 |

Table 8: Image-level anomaly detection results I-AUC/I-AP (%) on MVTec AD with the best in bold. (STPM, DeSTSeg, and HypAD follow the Forward Distillation paradigm; the remaining methods follow Reverse Distillation.)

Figure 9: Visualization of ablation study on network composition on MPDD and BTAD. From top to bottom: the input image, the ground truth masks, and the output anomaly maps of Baseline (RD), Baseline+RD-E, and Baseline+RD-E+GII (Ours).
| Category | RD | RD++ | MemKD | Ours |
|---|---|---|---|---|
| Bracket Black | 90.3/93.7 | 89.5/92.8 | 91.2/94.4 | 89.9/94.0 |
| Bracket Brown | 92.5/95.1 | 92.4/95.6 | 95.2/97.3 | 91.9/95.2 |
| Bracket White | 89.2/91.7 | 90.3/91.9 | 92.7/93.8 | 88.6/92.4 |
| Connector | 99.8/99.5 | 99.5/99.0 | 100/100 | 100/100 |
| Metal Plate | 100/100 | 100/100 | 100/100 | 100/100 |
| Tubes | 96.7/98.7 | 92.6/97.1 | 96.2/98.5 | 97.6/99.0 |
| Average | 94.8/96.5 | 94.1/96.1 | 95.4/97.3 | 94.7/96.8 |

Table 9: Image-level anomaly detection results I-AUC/I-AP (%) on MPDD with the best in bold.

| Category | RD | RD++ | Ours |
|---|---|---|---|
| Class 01 | 97.9/99.3 | 97.3/98.9 | 96.0/98.2 |
| Class 02 | 86.0/97.7 | 87.3/97.9 | 85.8/97.7 |
| Class 03 | 99.7/94.2 | 99.7/95.9 | 99.8/97.0 |
| Average | 94.5/97.1 | 94.8/97.6 | 93.9/97.6 |

Table 10: Image-level anomaly detection results I-AUC/I-AP (%) on BTAD with the best in bold.

regions. In Figure 11, RD tends to produce false positives in regions such as textures and backgrounds, mistakenly identifying normal regions as anomalies. Our proposed method significantly reduces noise in the anomaly maps, mitigating these false positives.

Qualitative comparisons on more datasets

On Page 1 of the main text, we provide visualizations of localization results on MVTec AD. In this section, we show qualitative results of anomaly localization on three other datasets. Figures 12, 13, and 14 respectively show the visual results on the MPDD, BTAD, and VisA datasets. It is evident that, compared to RD and RD++ (Tien et al. 2023), our proposed method achieves more accurate anomaly localization while also reducing the likelihood of false positives in normal regions.

References

Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. MVTec AD–A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9592–9600.

Bergmann, P.; Fauser, M.; Sattlegger, D.; and Steger, C. 2020. Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , 4183–4192. Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; and Vedaldi, A. 2014. Describing textures in the wild. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition , 3606–3613. Deng, H.; and Li, X. 2022. Anomaly detection via reverse distillation from one-class embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 9737–9746.Gu, Z.; Liu, L.; Chen, X.; Yi, R.; Zhang, J.; Wang, Y .; Wang, C.; Shu, A.; Jiang, G.; and Ma, L. 2023. Remem- bering Normality: Memory-guided Knowledge Distillation for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, 16401–16409. Jezek, S.; Jonak, M.; Burget, R.; Dvorak, P.; and Skotak, M. 2021. Deep learning-based defect detection of metal parts: evaluating current methods in complex conditions. In 2021 13th International congress on ultra modern telecommuni- cations and control systems and workshops (ICUMT) , 66– 71. IEEE. Mishra, P.; Verk, R.; Fornasier, D.; Piciarelli, C.; and Foresti, G. L. 2021. VT-ADL: A vision transformer network for im- age anomaly detection and localization. In 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE) , 01–06. IEEE. Perlin, K. 1985. An image synthesizer. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques , SIGGRAPH ’85, 287–296. ISBN 0897911660. Tien, T. D.; Nguyen, A. T.; Tran, N. H.; Huy, T. D.; Duong, S.; Nguyen, C. D. T.; and Truong, S. Q. 2023. Revisiting reverse distillation for anomaly detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , 24511–24520. Zagoruyko, S.; and Komodakis, N. 2016. Wide residual net- works. arXiv preprint arXiv:1605.07146 . Zavrtanik, V .; Kristan, M.; and Sko ˇcaj, D. 2021. 
DRÆM–A discriminatively trained reconstruction embedding for surface anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 8330–8339.

Zhang, X.; Li, S.; Li, X.; Huang, P.; Shan, J.; and Chen, T. 2023. DeSTSeg: Segmentation guided denoising student-teacher for anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3914–3923.

Zou, Y.; Jeong, J.; Pemula, L.; Zhang, D.; and Dabeer, O. 2022. Spot-the-difference self-supervised pre-training for anomaly detection and segmentation. In European Conference on Computer Vision, 392–408. Springer.

| Group | Category | MemKD (I-AUC/I-AP/P-AUC/P-AP/P-PRO) | RD | Ours |
|---|---|---|---|---|
| Complex structure | PCB1 | 96.9/99.7/99.8/82.1/96.9 | 97.8/97.7/99.8/82.3/96.9 | 97.4/97.3/99.8/84.5/96.2 |
| Complex structure | PCB2 | 98.0/94.8/96.0/25.2/94.9 | 97.8/97.6/98.9/26.3/93.3 | 96.3/96.5/99.0/23.1/92.9 |
| Complex structure | PCB3 | 97.8/99.1/99.3/35.6/96.6 | 96.9/97.1/99.4/35.3/95.2 | 97.5/97.8/99.4/44.1/95.6 |
| Complex structure | PCB4 | 99.8/98.6/98.6/44.3/99.9 | 99.9/99.9/98.7/44.3/91.0 | 99.9/99.9/99.1/47.7/92.1 |
| Multiple instances | Capsules | 94.7/99.0/99.2/58.2/88.2 | 92.1/95.2/99.4/60.8/96.7 | 91.4/95.2/99.4/57.8/95.3 |
| Multiple instances | Candle | 95.9/99.1/99.0/23.1/93.8 | 95.0/95.4/99.1/23.2/95.6 | 97.7/98.0/99.3/32.9/95.0 |
| Multiple instances | Macaroni1 | 98.0/99.6/99.6/23.2/92.7 | 95.9/94.4/99.8/21.9/98.3 | 95.7/94.1/99.8/21.7/98.7 |
| Multiple instances | Macaroni2 | 92.0/99.2/99.5/13.0/84.8 | 89.2/86.5/99.8/12.8/98.9 | 90.1/87.2/99.7/13.1/99.0 |
| Single instance | Cashew | 99.4/98.7/96.6/58.2/97.5 | 96.8/98.6/96.2/54.5/94.5 | 96.6/98.4/98.2/63.5/95.4 |
| Single instance | Chewing gum | 99.8/99.1/98.6/60.3/98.8 | 97.1/98.6/99.3/66.7/92.9 | 99.5/99.7/99.0/76.3/89.4 |
| Single instance | Fryum | 98.8/97.0/96.9/49.3/96.6 | 96.7/98.5/96.9/48.4/93.9 | 96.7/98.6/97.0/48.3/94.7 |
| Single instance | Pipe fryum | 100/99.2/99.2/56.2/99.0 | 99.6/99.8/99.1/54.2/97.2 | 99.6/99.8/99.3/58.9/97.1 |
| | Total Average | 97.6/98.6/98.4/44.1/94.9 | 96.2/96.6/98.9/44.2/95.4 | 96.5/96.9/99.1/47.7/95.1 |

Table 11: Anomaly detection and localization results (%) on VisA with the best in bold.
Figure 10: Missed detections of RD.
Figure 11: False positives of RD.
Figure 12: Visualization of anomaly localization on MPDD.
Figure 13: Visualization of anomaly localization on BTAD.

| Variant | MPDD P-AUC | MPDD P-AP | MPDD P-PRO | BTAD P-AUC | BTAD P-AP | BTAD P-PRO |
|---|---|---|---|---|---|---|
| w/o GII | 98.76 | 45.75 | 96.06 | 97.97 | 63.63 | 77.78 |
| w/ GII + SC | 99.08 | 51.45 | 97.39 | 98.08 | 64.77 | 78.49 |
| + SA | 99.14 | 52.46 | 97.25 | 98.11 | 65.24 | 78.48 |

Table 12: Ablation study results (%) of GII on MPDD and BTAD. (SC: naive skip connection. SA: similarity attention.)

Figure 14: Visualization of anomaly localization on VisA. | 4 | 1 | The proposed method uses a WideResNet-50 teacher network, which typically has about 68 million parameters. With a dataset of around 5354 images and a training batch size of 16, the model would commonly be trained for roughly 100 epochs for this kind of task. At that budget, training on a single moderate GPU (e.g., an NVIDIA RTX 3090) should take approximately 4 hours, which keeps a full run comfortably under 8 hours on one GPU. | yes | Yes | CV | Unlocking the Potential of Reverse Distillation for Anomaly Detection | 2024-12-10 0:00:00 | https://github.com/hito2448/urd | 1 | https://www.mydrive.ch/shares/38536/3830184030e49fe74747669442f0f282/download/420938113-1629952094/mvtec_anomaly_detection.tar.xz; https://www.robots.ox.ac.uk/~vgg/data/dtd/download/dtd-r1.0.1.tar.gz | 8 hours for one folder. There are 11 folders. | https://drive.google.com/file/d/1OLbo3FifM1a7-wbCtfpjZrZLr0K5bS87/view?usp=sharing | Yes | Just need to change num_workers in train.py according to the system. |
York Urban Dataset | DT-LSD | [] | DT-LSD: Deformable Transformer-based Line Segment Detection | 2024-11-20T00:00:00 | https://arxiv.org/abs/2411.13005v1 | [
"https://github.com/SebastianJanampa/DT-LSD"
] | {'sAP5': '30.2', 'sAP10': '33.2', 'sAP15': '35.1'} | [
"sAP5",
"sAP10",
"sAP15",
"FH"
] | Given the following paper and codebase:
Paper: DT-LSD: Deformable Transformer-based Line Segment Detection
Codebase: https://github.com/SebastianJanampa/DT-LSD
Improve the DT-LSD model on the York Urban Dataset dataset. The result
should improve on the following metrics: {'sAP5': '30.2', 'sAP10': '33.2', 'sAP15': '35.1'}. You must use only the codebase provided.
| DT-LSD: Deformable Transformer-based Line Segment Detection
Sebastian Janampa, The University of New Mexico, sebasjr1966@unm.edu
Marios Pattichis, The University of New Mexico, pattichi@unm.edu

Abstract

Line segment detection is a fundamental low-level task in computer vision, and improvements in this task can impact more advanced methods that depend on it. Most new methods developed for line segment detection are based on Convolutional Neural Networks (CNNs). Our paper seeks to address challenges that prevent the wider adoption of transformer-based methods for line segment detection. More specifically, we introduce Deformable Transformer-based Line Segment Detection (DT-LSD), a novel model that supports cross-scale interactions, can be trained quickly, and addresses LETR's drawbacks. For faster training, we introduce Line Contrastive DeNoising (LCDN), a technique that stabilizes the one-to-one matching process and speeds up training by 34×. We show that DT-LSD is faster and more accurate than its predecessor transformer-based model (LETR) and outperforms all CNN-based models in terms of accuracy. On the Wireframe dataset, DT-LSD achieves 71.7 sAP10 and 73.9 sAP15; on the YorkUrban dataset, it achieves 33.2 sAP10 and 35.1 sAP15. Code available at: https://github.com/SebastianJanampa/DT-LSD.

1. Introduction

Line segment detection is a low-level vision task used for higher-level tasks such as 3D reconstruction, camera calibration, vanishing point estimation, and scene understanding. Despite its importance, this problem remains open. Additionally, unlike in other computer vision tasks (e.g., object detection, 3D estimation, camera calibration), transformer-based models are not popular for this challenge; LinE segment TRansformers (LETR) [21] is the only transformer model for line segment detection in the literature.
Most recent methods [23–25, 27, 28] use Convolutional Neural Networks (CNNs), despite the fact that CNNs require a post-processing step to obtain the final predictions. All models have a backbone that produces a set of hierarchical feature maps for further processing, as shown in Fig. 1. CNN-based models use a feature pyramid network (FPN) as an enhancing method following the HourglassNet method [17] (see Fig. 1a). This method demonstrates the importance of cross-scale interaction, since each new feature map is computed from contiguous feature maps, allowing global information to propagate from the highest-level feature map down to the lower-level ones. On the other hand, LETR produces an enhanced feature map using a single feature map, as depicted in Fig. 1b. LETR demonstrates the ability of the global attention mechanism [18] to capture long-term relationships between pixels of the same feature map (intra-scale processing), providing rich features.

This paper develops a new transformer-based model for line segment detection. First, we improve LETR's feature map-enhancing method. We choose the deformable attention mechanism [29] for its ability to combine both intra- and cross-scale processing. We illustrate our idea in Fig. 1c, where a deformable-attention encoder is used for feature map enhancement. The encoder receives a set of hierarchical feature maps¹ where, for each pixel, a fixed number of sampling offsets is generated for each given feature map. Second, we reduce the number of epochs required for training. Inspired by [10, 26], we propose Line Contrastive DeNoising (LCDN) as a training technique to accelerate the convergence of the training process. We show the efficiency of LCDN in Table 3, where we improve the metrics while keeping the same number of epochs.

Our contributions are summarized as follows:

1. We propose a novel end-to-end transformer-based framework that outperforms CNN-based line segment detectors.
This is achieved by using the deformable attention mechanism.
2. We introduce a highly efficient training technique, Line Contrastive DeNoising, to reduce the number of epochs. This technique allows DT-LSD to achieve convergence in a number of epochs similar to CNN-based models.
3. On two datasets (Wireframe [9] and YorkUrban [6]), our end-to-end transformer-based model presents a performance improvement over state-of-the-art methods on both structural and heat-map metrics.
4. Our work opens up opportunities for line segment detectors to remove hand-crafted post-processing by utilizing end-to-end transformer-based models.

¹Feature maps are pre-processed by a 1×1 conv to ensure all the inputs have the same number of channels.

arXiv:2411.13005v1 [cs.CV] 20 Nov 2024

Figure 1. Feature map enhancing: a) Feature Pyramid Network, b) Global Attention Encoder, c) Deformable Attention Encoder. All line segment detectors use a hierarchical backbone, but they differ from each other in their enhancing method. (a) CNN-based models use a feature pyramid network to combine two contiguous feature maps, allowing the propagation of global information to low-dimensional feature maps. However, no intra-scale interaction is applied to any feature map. (b) LETR [21] uses a global attention encoder for each processed feature map, promoting intra-scale interaction but not cross-scale interaction, since no information is passed between the two processed feature maps. (c) DT-LSD allows intra- and cross-scale (more than two feature maps) interactions by applying a deformable-attention encoder.

In what remains of this paper, we describe previous state-of-the-art methods in line segment detection, as well as the two attention mechanisms, in Sec. 2. Next, we describe the methodology for DT-LSD in Sec. 3. Then, we provide information about model parameters and training settings, comparisons against previous state-of-the-art models, and the ablation studies of DT-LSD in Sec. 4.
Finally, we summarize our findings in Sec. 5.

2. Background

2.1. Line Segment Detection

2.1.1 Traditional Approaches

The Hough Transform (HT) [7] remains an important method for line detection. First, the Hough Transform applies Canny edge detection [1] to obtain line segment candidates. Candidate lines are represented in polar form. Here, we note that candidate lines are evaluated based on the number of overlapping pixels between the lines and the detected edges. Variations include the use of the Radon Transform and the Revoting Hough Transform.

In regions dominated by a large density of edges, the HT can generate a large number of false positives. Grompone von Gioi et al. proposed a linear-time Line Segment Detector (LSD) [19] to address this problem. LSD uses line-support regions and line segment validation. The approach also reduced time complexity through the use of a pseudo-sorting algorithm based on gradient magnitudes. A fundamental advantage of traditional methods is that they do not require training for specific datasets.

2.1.2 Deep Learning Based Approaches

Learning-based line segment detectors have shown significant improvements compared to traditional approaches. The methods include different approaches that focus on line junctions, attraction field maps (AFM), transformers, and combinations of traditional approaches with deep learning techniques. The Holistically-Attracted Wireframe Parser (HAWP) [23] proposed a 4-dimensional attraction field map, and later HAWPv2 [24], a hybrid model of HAWP and self-supervision, was introduced. MLNet [25] and SACWP [27] incorporated cross-scale feature interaction into the HAWP model. In [22, 28], the authors developed a method for detecting line junctions, which were used to provide candidate line segments. Then, a classifier validated the candidates and produced the final set of predicted line segments.
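The Hough voting scheme described above can be sketched in a few lines of NumPy. This is a toy illustration (bin counts and resolution are arbitrary choices, not those of [7] or [19]): each edge pixel votes, for every angle θ, for the line ρ = x·cos θ + y·sin θ passing through it, and peaks in the accumulator correspond to detected lines.

```python
import numpy as np

def hough_vote(edges, n_theta=180, n_rho=100):
    """Accumulate Hough votes for lines in polar form rho = x*cos(t) + y*sin(t).

    edges: (H, W) boolean edge map (e.g., from a Canny detector).
    Returns the accumulator and the rho / theta bin centers.
    """
    H, W = edges.shape
    diag = np.hypot(H, W)                       # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # each edge pixel votes once per theta column
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# A perfect horizontal edge row: the strongest bin collects one vote per pixel.
edges = np.zeros((32, 32), dtype=bool)
edges[10, :] = True
acc, rhos, thetas = hough_vote(edges)
assert acc.max() == 32  # all 32 pixels agree on the line (rho=10, theta=pi/2)
```

A real detector would follow this with peak extraction and the edge-overlap validation step mentioned above; LSD avoids the dense voting entirely by growing line-support regions.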
LSDNet [18] used a CNN model to generate an angle field and a line mask that were used to detect line segments with the LSD method. HT-HAWP and HT-LCNN [14] added global geometric line priors through the Hough Transform to deep learning networks to address the lack of labeled data. However, the above methods require post-processing steps to produce the final output. In contrast, Line segment transformers (LETR) [21] remove post-processing steps by using an end-to-end transformer-based model that relies on a coarse-to-fine strategy with two encoder-decoder transformer blocks.

2.2. Transformers

2.2.1 High Complexity of Global Attention Models

One crucial factor in LETR's slow convergence is the global attention mechanism, which only performs intra-scale feature processing. In addition, global attention leads to very high computational complexity.

To understand the complexity requirements of LETR, we revisit the global attention mechanism. We define global attention using [18]:

\mathrm{GlobAtt}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d}}\right) V \quad (1)

where Q, K, V, and d represent the queries, keys, values, and the hidden dimension, respectively. For object and line segment detection, we define K = V \in \mathbb{R}^{HW \times d} as the flattened form of the feature map f \in \mathbb{R}^{H \times W \times d}, where H and W are the height and width, respectively. In the encoder, we have Q = K = V, resulting in a time complexity of O(H^2 W^2 d). Similarly, for the decoder, we have Q \in \mathbb{R}^{N \times d}, where N is the number of queries, producing a time complexity of O(HW d^2 + N HW d).

2.2.2 Deformable Attention Module

Based on our previous discussion, it is clear that the bottleneck of transformer-based models is the encoder, whose time complexity increases quadratically with the spatial size of the feature map. To address this issue, Zhu et al. [29] proposed the deformable attention mechanism, inspired by deformable convolution [3]. Unlike global attention, the deformable attention module only attends to a fixed number k of keys for each query (see Fig.
2 from [29] for a visual representation). Given an input feature map f \in \mathbb{R}^{H \times W \times d}, let q be the index of a query element with content feature z_q and a 2-D reference point p_q. The deformable attention for one attention head² is mathematically defined as

\mathrm{DeformAttn}(z_q, p_q, f) = \sum_{i=1}^{k} A_{qi} \cdot f(p_q + \Delta p_{qi}) \quad (2)

where i indexes the sampling keys and k is the total number of sampling keys (k \ll HW). The i-th sampling key is computed as p_q + \Delta p_{qi}, where \Delta p_{qi} is the sampling offset. A_{qi} is the i-th row of the attention weight A_q \in \mathbb{R}^{k \times d}. The weights A_{qi} satisfy \sum_{i=1}^{k} A_{qi} = 1.

Comparing Eq. (1) to Eq. (2), \mathrm{softmax}(QK^{\top}/\sqrt{d}) is replaced by A_q, and V by f(p_q + \Delta p_{qi}). So, the time complexity of the deformable encoder is O(HW d^2), which is linear in the spatial size. For the deformable decoder, the time complexity is O(kN d^2), where N is the total number of queries, and the spatial dimensions of f are irrelevant.

Apart from reducing the memory and time complexities, deformable attention has a variation called multi-scale deformable attention, \mathrm{MSDeformAttn}(z_q, \hat{p}_q, \{f_l\}_{l=1}^{L}), that allows cross-scale feature interaction and is defined as

\mathrm{MSDeformAttn}(z_q, \hat{p}_q, \{f_l\}_{l=1}^{L}) = \sum_{l=1}^{L} \sum_{i=1}^{k} A_{lqi} \cdot f_l(\phi_l(\hat{p}_q) + \Delta p_{lqi}) \quad (3)

where \{f_l\}_{l=1}^{L} is a set of L multi-scale feature maps and f_l \in \mathbb{R}^{H_l \times W_l \times d}. The normalized 2-D coordinates \hat{p}_q have values in the range [0, 1] and are re-scaled to the dimensions of the l-th-level feature map by the function \phi_l. As in Eq. (2), the attention weights A_{lqi} satisfy \sum_{l=1}^{L} \sum_{i=1}^{k} A_{lqi} = 1.

²The multi-head deformable attention equation is given in Sec. 4.1 (page 5) of [29].

3. Methodology

3.1. Overview

We present the architecture of DT-LSD, an end-to-end deformable transformer for line segment detection, in Fig. 2. First, we pass an RGB image to a backbone to produce a set of hierarchical feature maps. Second, a deformable encoder enhances the backbone's feature maps. Third, we apply query selection to choose the top-K queries³ as the initial 4-D dynamic line endpoints.
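Returning to Eq. (2): a minimal NumPy sketch of a single deformable-attention head follows. It is a simplified illustration, not the DT-LSD or Deformable-DETR implementation: it uses nearest-neighbour sampling instead of bilinear interpolation, and the offsets and attention logits are passed in directly rather than being predicted from the query content feature z_q by linear layers.

```python
import numpy as np

def deform_attn_head(f, p_q, offsets, logits):
    """Single-head deformable attention, Eq. (2): sum_i A_qi * f(p_q + dp_qi).

    f       : (H, W, d) feature map
    p_q     : (2,)      reference point (row, col) in pixel coordinates
    offsets : (k, 2)    sampling offsets dp_qi (in a real model, predicted
                        from the query content feature z_q by a linear layer)
    logits  : (k,)      unnormalised attention scores (likewise predicted)
    """
    H, W, d = f.shape
    A = np.exp(logits - logits.max())
    A /= A.sum()                          # weights satisfy sum_i A_qi = 1
    out = np.zeros(d)
    for A_qi, dp in zip(A, offsets):
        # attend to only k << H*W sampled locations (nearest-neighbour here)
        r = int(np.clip(round(p_q[0] + dp[0]), 0, H - 1))
        c = int(np.clip(round(p_q[1] + dp[1]), 0, W - 1))
        out += A_qi * f[r, c]
    return out

rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16, 8))
out = deform_attn_head(f, np.array([7.0, 7.0]),
                       rng.uniform(-2, 2, size=(4, 2)),
                       rng.standard_normal(4))
assert out.shape == (8,)
```

The multi-scale version of Eq. (3) simply repeats this sampling over the L feature levels (with the reference point re-scaled per level by φ_l) and normalizes the weights jointly across levels and points.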
Fourth, we feed the initial dynamic line endpoints and the static (learnable) content queries to the deformable decoder to promote interaction between the queries and the enhanced feature maps. Fifth, two independent multi-layer perceptron networks process the decoder's output queries to estimate the line segment endpoints and to classify whether a query contains a line. For the training process, we add an extra branch to perform line contrastive denoising, which does not affect the inference time. In this section, we do not describe the decoder and the one-to-one matching, since they are already described in [29] and [21].

3.2. Deformable Transformer Encoder

The encoder is a fundamental part of our network, since it enhances the backbone's feature maps. However, these feature maps do not have any dimensions in common. For this reason, it is important to pre-process them before passing them to the encoder. As shown in Fig. 3, given an RGB image of dimensions (H, W, 3), the backbone produces a set of hierarchical feature maps \{f_l\}_{l=1}^{5}, where f_l \in \mathbb{R}^{H_l \times W_l \times d_l}, H_l = H/2^{l+1}, and W_l = W/2^{l+1}. Since the maps in \{f_l\}_{l=1}^{5} do not have any dimension in common and we do not want to lose spatial resolution, we apply a 1×1 convolution to each feature map so that the whole set has the same number of channels, \{f'_l\}_{l=2}^{5}, where f'_l \in \mathbb{R}^{H_l \times W_l \times 256}. Next, we flatten their spatial dimensions, stack them together, and add the position encoding (PE), creating the vector

\hat{F} = \mathrm{stack}(\hat{f}_2, \hat{f}_3, \hat{f}_4, \hat{f}_5) \quad (4)

where \hat{f}_i = \mathrm{flatten}(f'_i + \mathrm{PE}(f'_i)), \hat{F} \in \mathbb{R}^{L \times 256}, and L = \sum_{i=2}^{5} H_i \cdot W_i.⁴

We pass \hat{F} to the encoder, where each of its stacked pixels is treated as a query q \in \mathbb{R}^{1 \times 256}. For each query, we

³In the encoder, feature map pixels are treated as queries.
⁴The stack and flatten functions are associative functions.

Figure 2. Framework of the proposed DT-LSD model. DT-LSD uses deformable encoder and deformable decoder layers.
Furthermore, it uses a set of mixed queries as a training strategy, which does not influence the inference time.

Figure 3. Feature maps pre-processing for the encoder.

produce a fixed number k = 4 of offsets per feature map, followed by applying Eq. (3). In total, we compute 16 offsets per attention head, where 4 of them are for intra-scale interaction and the other 12 are for cross-scale interaction. Besides combining both types of interactions, we address global attention's time complexity and the convolutional layers' kernel-space restriction.

3.3. Line Contrastive Denoising Technique

A problem for end-to-end transformer-based methods is the one-to-one matching technique, which removes the need for non-maximum suppression (NMS) but is unstable in matching queries with the ground truth. The main difference between one-to-one matching and NMS is that the former uses scores to do the matching, whereas the latter eliminates candidates depending on how much the bounding boxes of candidates belonging to the same class overlap.

In this section, we present Line Contrastive Denoising (LCDN), a training technique for stabilizing the matching process, inspired by [26]. While LCDN is used to speed up training, it is not part of the final inference model. LCDN facilitates the matching by teaching the Hungarian Matcher to accept queries whose predicted line endpoints lie on or are close to a ground-truth line, and to reject queries whose predicted line endpoints are far away from the ground-truth line. To achieve this, we create positive and negative queries by performing line length scaling and line rotation. The length scaling consists of varying the length of the line segment, with original length l, such that positive queries have a length in the range [0, l] and negative queries in (l, 2l). For line rotation, we rotate the line in a range of (−τ, τ) for positive queries, where τ is a fixed angle.
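A minimal NumPy sketch of this query-noising scheme, using the scale and rotation ranges described in this section (positive rotation within ±τ, negative rotation magnitude between τ and 2τ). This is a toy illustration of the idea; the function names and sampling details are assumptions, not the LCDN implementation.

```python
import numpy as np

def noised_line(p1, p2, scale, angle):
    """Scale a segment about its midpoint to length scale*|p2 - p1| and
    rotate it by `angle` radians. p1, p2: (2,) endpoints."""
    mid = (p1 + p2) / 2.0
    half = (p2 - p1) / 2.0 * scale
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return mid - R @ half, mid + R @ half

def lcdn_pair(p1, p2, tau, rng):
    """One positive and one negative denoising query for a ground-truth line.
    Positive: length scale in [0, 1), rotation in (-tau, tau).
    Negative: length scale in [1, 2), rotation magnitude in [tau, 2*tau)."""
    pos = noised_line(p1, p2, rng.uniform(0.0, 1.0), rng.uniform(-tau, tau))
    sign = rng.choice([-1.0, 1.0])
    neg = noised_line(p1, p2, rng.uniform(1.0, 2.0),
                      sign * rng.uniform(tau, 2.0 * tau))
    return pos, neg

rng = np.random.default_rng(0)
gt1, gt2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
(pos1, pos2), (neg1, neg2) = lcdn_pair(gt1, gt2, tau=np.deg2rad(7), rng=rng)
```

Positive queries are thus never longer than the ground-truth segment and stay nearly aligned with it, while negative queries are both longer and more rotated, giving the matcher a contrastive accept/reject signal.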
For negative queries, the rotation is in the (−2τ, −τ] ∪ [τ, 2τ) range. We present an example of our LCDN technique in Fig. 4b and a comparison against the Contrastive DeNoising (CDN) [26] technique in Table 3.

Since LCDN generates extra groups of denoising queries from ground-truth lines, it can harm the training process if the prediction queries interact with the denoising queries. The denoising queries contain ground-truth information, so if a matching query sees this information, it will start training with information that should not be known. In the other direction, we do want the denoising queries to see the information stored by the matching queries. We implement this manually by using an attention mask. Note that only queries from the same denoising group are allowed to interact with each other, but all denoising groups interact with the matching group.

Figure 4. Comparison between contrastive denoising techniques applied to line segments: a) CDN (DINO), b) LCDN (ours). We present two different line segments and their positive and negative queries (ground truth, positive query, negative query). We use solid and dashed lines to differentiate between line segment samples.

3.4. Loss Function

We choose the focal loss [12] for line classification because it can deal with class imbalance. The focal loss encourages training on uncertain samples by penalizing samples whose predicted probability \hat{p} is away from 0 or 1, as given by the per-sample loss:

\mathcal{L}^{(i)}_{\mathrm{class}} = -\left(\alpha_1 (1 - \hat{p}^{(i)})^{\gamma} \log \hat{p}^{(i)} + \alpha_2 (\hat{p}^{(i)})^{\gamma} \log(1 - \hat{p}^{(i)})\right) \quad (5)

with \alpha_1 = 1, \alpha_2 = 0.25, and \gamma = 2.

For each line candidate \hat{l}, we use the L1 loss to compute the distance from the ground-truth points. Let i denote the i-th line candidate. If the classifier accepts the line candidate, we get c_i = 1 for the classification output; else, c_i = 0. Let l^{(i)}_j denote the j-th endpoint component of the i-th ground-truth line. We have four endpoint components because we use two coordinate points to represent the line segment.
The line loss function is then given by:

\mathcal{L}^{(i)}_{\mathrm{line}} = \mathbb{1}_{\{c_i \neq \varnothing\}} \sum_{j=1}^{4} \left| l^{(i)}_j - \hat{l}^{(i)}_j \right| \quad (6)

where \mathbb{1}_{\{c_i \neq \varnothing\}} is the indicator function based on the classifier output c_i. The final loss \mathcal{L} is a linear combination of the two loss functions:

\mathcal{L} = \sum_{i=1}^{N} \lambda_{\mathrm{cls}} \mathcal{L}^{(i)}_{\mathrm{class}} + \lambda_{\mathrm{line}} \mathcal{L}^{(i)}_{\mathrm{line}} \quad (7)

where \lambda_{\mathrm{cls}} = 2, \lambda_{\mathrm{line}} = 5, and N is the total number of instances.

4. Results

4.1. Datasets

ShanghaiTech Wireframe Dataset. We used the ShanghaiTech Wireframe dataset for comparisons [9]. The dataset consists of 5462 (indoor and outdoor) images of man-made environments. The ground-truth line segments were manually labeled. The goal of the dataset was to provide line segments with meaningful geometric information about the scene. We split the dataset into 5000 images for training and 462 for testing.

| Parameter | Value |
|---|---|
| number of feature maps | 4 |
| number of encoder layers | 6 |
| encoder sampling points | 4 |
| number of decoder layers | 6 |
| decoder sampling points | 4 |
| hidden dim | 256 |
| feedforward dim | 1024 |
| number of heads | 8 |
| number of classes | 2 |
| number of queries | 900 |
| denoising number | 300 |
| label noise ratio | 0.5 |
| line scaling | 1.0 |
| line rotation | 7° |
| line loss weight | 5 |
| class loss weight | 2 |
| optimizer | AdamW |
| initial learning rate | 1e-4 |
| initial learning rate of backbone | 1e-5 |
| weight decay | 1e-4 |
| batch size | 2 |
| total number of epochs | 24 |
| learning rate drop | 21 |

Table 1. DT-LSD architecture parameters and training setup.

York Urban Dataset. We also use the York Urban dataset [6]. The dataset consists of 122 (45 indoor and 57 outdoor) images of size 640×480 pixels. Denis et al. generated ground-truth line segments using an interactive MATLAB program with sub-pixel precision. We only use this dataset for testing.

For the results shown in Sec. 4.3, we follow [2, 10, 15, 26, 29] and apply data augmentation to the training set. During training, we resize the input images such that the shortest side is at least 480 and at most 800 pixels, while the longest side is at most 1333.
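This resize rule can be sketched as follows, in the style of common DETR-like codebases. The helper name `sample_train_size` and its step-32 grid of candidate sizes are illustrative assumptions, not taken from the DT-LSD code.

```python
import random

def sample_train_size(w, h, short_choices=range(480, 801, 32), max_long=1333):
    """Pick a target (w, h): the shortest side is resized to a value sampled
    from [480, 800]; the result is scaled down if the longest side would
    exceed 1333 pixels."""
    target_short = random.choice(list(short_choices))
    short, long_ = min(w, h), max(w, h)
    scale = target_short / short
    if long_ * scale > max_long:          # cap the longest side at 1333
        scale = max_long / long_
    return round(w * scale), round(h * scale)

random.seed(0)
new_w, new_h = sample_train_size(1024, 768)
assert 480 <= min(new_w, new_h) <= 800
assert max(new_w, new_h) <= 1333
```

At evaluation time the same logic applies with a single fixed shortest side (640 pixels in Sec. 4.2) instead of a sampled one.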
At the evaluation stage, we resize the image with the shortest side at least 640 pixels.

4.2. Implementation

4.2.1 Network

We use SwinL [16] as the backbone for our deformable encoder-decoder transformer model. For the deformable transformer, we followed the recommendations of DINO [26]. We used 4 sampling offsets for the encoder and decoder, 900 queries to predict line segments, 6 stacked encoder layers, and 6 stacked decoder layers. We summarize the DT-LSD architecture parameters and training parameters in Table 1. We train DT-LSD on a single Nvidia RTX A5500 GPU with a batch size of 2.

4.3. Comparison to SOTA models

We compare DT-LSD to many state-of-the-art models in Table 2. Our approach gives the most accurate result in all of our comparisons, providing new state-of-the-art results in both datasets.

| Method | Epochs | W-sAP10 | W-sAP15 | W-sF10 | W-sF15 | W-APH | W-FH | Y-sAP10 | Y-sAP15 | Y-sF10 | Y-sF15 | Y-APH | Y-FH | FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Traditional methods | | | | | | | | | | | | | | |
| LSD [19] | / | / | / | / | / | 55.2 | 62.5 | / | / | / | / | 50.9 | 60.1 | 49.6 |
| CNN-based methods | | | | | | | | | | | | | | |
| DWP [9] | 120 | 5.1 | 5.9 | / | / | 67.8 | 72.2 | 2.1 | 2.6 | / | / | 51.0 | 61.6 | 2.24 |
| AFM [22] | 200 | 24.4 | 27.5 | / | / | 69.2 | 77.2 | 9.4 | 11.1 | / | / | 48.2 | 63.3 | 13.5 |
| L-CNN [28] | 16 | 62.9 | 64.9 | 61.3 | 62.4 | 82.8 | 81.3 | 26.4 | 27.5 | 36.9 | 37.8 | 59.6 | 65.3 | 10.3 |
| HAWP [23] | 30 | 66.5 | 68.2 | 64.9 | 65.9 | 86.1 | 83.1 | 28.5 | 29.7 | 39.7 | 40.5 | 61.2 | 66.3 | 30.3 |
| F-Clip [4] | 300 | 66.8 | 68.7 | / | / | 85.1 | 80.9 | 29.9 | 31.3 | / | / | 62.3 | 64.5 | 28.3 |
| ULSD [11] | 30 | 68.8 | 70.4 | / | / | / | / | 28.8 | 30.6 | / | / | / | / | 36.8 |
| HAWPv2 [24] | 30 | 68.6 | 70.2 | / | / | 86.7 | 81.5 | 29.1 | 30.4 | / | / | 61.6 | 64.4 | 14.0 |
| SACWP [27] | 30 | 70.0 | 71.6 | / | / | / | / | 30.0 | 31.8 | / | / | / | / | 34.8 |
| MLNET [25] | 30 | 69.1 | 70.8 | / | / | 86.7 | 81.4 | 32.1 | 33.5 | / | / | 63.5 | 65.1 | 12.6 |
| Transformer-based methods | | | | | | | | | | | | | | |
| LETR [21] | 825 | 65.2 | 67.7 | 65.8 | 67.1 | 86.3 | 83.3 | 29.4 | 31.7 | 40.1 | 41.8 | 62.7 | 66.9 | 5.8 |
| DT-LSD (ours) | 24 | 71.7 | 73.9 | 70.1 | 71.2 | 89.1 | 85.8 | 33.2 | 35.1 | 44.5 | 45.8 | 65.9 | 68.0 | 8.9 |

Table 2. Line segment detection results (W- prefix: Wireframe dataset; Y- prefix: YorkUrban dataset). Based on the models trained on the Wireframe dataset, we provide test results on both the YorkUrban and Wireframe datasets. The best results are given in boldface.
Underlines are used for the second best.

Figure 5. Precision-Recall (PR) curves: comparisons between L-CNN [28], LETR [21], and DT-LSD (ours) using the sAP10 and APH metrics on the Wireframe and YorkUrban datasets.

In the Wireframe dataset, DT-LSD performs better than SACWP (the second-best result) by around 2 points in the sAP metric. In the YorkUrban dataset, DT-LSD outperforms MLNET (the second-best result) by around 1 percentage point in the sAP metric and by 1.6 in APH. Compared to LETR, we show a significant gain in all metrics. Furthermore, we note that DT-LSD is trained with just 24 epochs, significantly fewer than the 825 epochs required for LETR. At the same time, DT-LSD runs at nearly 9 frames per second.

We show the Precision-Recall (PR) curves for sAP10 and APH in Fig. 5. DT-LSD has low performance for low recall values but surpasses LETR and L-CNN for recall values greater than 0.1. We also provide qualitative comparative results in Fig. 6. The images clearly show that the transformer-based approaches (LETR and DT-LSD) perform significantly better than LCNN and HAWP. Upon closer inspection, we can see that DT-LSD avoids noisy line detections that appear in LETR (e.g., inspect the center regions of the third-row images).

4.4. Ablation Studies

4.4.1 Line Contrastive Denoising

Our comparisons are based on the number of epochs and the sAP metric, as summarized in Table 3. For a fair comparison, we train all the DINO⁵ variations using ResNet50 [8] as the backbone. Compared to the 500 epochs required for vanilla DETR, DINO converges in just 36 epochs. Furthermore, plain DINO results in a maximum accuracy drop of 0.6, while our additions boost performance significantly (e.g., compare 66.3 and 68.8 versus 53.8 and 57.2). The performance improvement over vanilla DETR is because DINO uses the multi-scale deformable attention mechanism described in Eq. (3), which promotes the cross- and intra-scale interaction.
However, DINO has a lower performance than LETR. Fig. 4 shows that the contrastive denoising (CDN) technique from DINO does not work for line segments, because applying CDN to line segments results in scaling and translating both positive and negative queries. Therefore, these two operations lead to the model accepting non-line-segment candidates as potential candidates. We test our idea by removing the box scaling and notice an improvement of around 7 points for sAP10. We gain 3 extra points for sAP10 by adding the line scaling technique, reaching a score of 63.4. By combining the line scaling and the line rotation (our line contrastive denoising technique), we obtain the maximum of 66.3 in sAP10.

⁵We use DINO-ResNet50-4scale from [26].

Figure 6. Qualitative comparisons for line detection methods. From left to right: the columns provide example results from LCNN [28], HAWP [23], LETR [21], DT-LSD (ours), and the ground truth. The top rows provide examples from the Wireframe test set, and the bottom two rows provide examples from the YorkUrban test set.

| Model | Epochs | sAP5 | sAP10 | sAP15 |
|---|---|---|---|---|
| Vanilla DETR | 500 | - | 53.8 | 57.2 |
| LETR | 825 | - | 65.2 | 67.1 |
| DINO | 36 | 45.8 | 53.2 | 56.7 |
| - box scaling | 36 | 51.7 | 60.0 | 63.5 |
| + line scaling | 36 | 56.5 | 63.4 | 66.2 |
| + line rotation | 36 | 60.7 | 66.3 | 68.7 |

Table 3. Ablation study of the different components of LCDN. All models use ResNet50 as the backbone, except for LETR, which uses ResNet101.

Based on our experiments, we conclude that our enhancing method and training technique are effective, since DINO with LCDN outperforms LETR using a lower-parameter backbone and fewer training epochs. We want to highlight that the input size of the image for all DINO variations was 512×512 for both training and testing. On the other hand, vanilla DETR and LETR resize the images following the procedure described in Sec.
4.1 for training, and they resize the image with the shortest side of at least 1100 pixels for testing.

4.4.2 Feature maps

An important element of our model is the set of feature maps generated by the backbone. Here, we evaluate the effectiveness of different feature maps and backbones. We report the results in Table 4. All models are trained for 24 epochs. Adding the S1 feature map produces more precise line segments while slowing down inference (measured in frames per second (FPS)). Using SwinL as the backbone gives the best results but slows down the inference speed. For example, the configuration SwinL with 5 feature maps achieves a score of 69.3 in sAP10 at 10.5 FPS.

| Backbone | Feat. maps | sAP5 | sAP10 | sAP15 | FPS |
|---|---|---|---|---|---|
| ResNet50 [8] | S1–S5 | 60.2 | 65.5 | 68.1 | 13.0 |
| ResNet50 [8] | S2–S5 | 59.5 | 65.1 | 67.5 | 18.5 |
| SwinL [16] | S1–S5 | 63.8 | 69.3 | 71.7 | 10.5 |
| SwinL [16] | S2–S5 | 62.0 | 67.8 | 70.3 | 13.6 |

Table 4. Ablation study based on the number of feature maps and different backbones.

4.4.3 Image upsampling

Most transformer-based algorithms use upsampling techniques to improve their performance. To evaluate the effects of upsampling an image, we train and test our model at different resolutions for 24 epochs. As documented in Table 5, upsampling benefits both CNN- and transformer-based models.

| Model | Train size | Test size | sAP10 | sAP15 | FPS |
|---|---|---|---|---|---|
| HAWP | 512 | 512 | 65.7 | 67.4 | - |
| HAWP | 832 | 832 | 67.7 | 69.1 | - |
| HAWP | 832 | 1088 | 65.7 | 67.1 | - |
| DT-LSD | 512 | 512 | 67.8 | 70.3 | 13.6 |
| DT-LSD | 800* | 480† | 69.0 | 71.5 | 12.2 |
| DT-LSD | 800* | 520† | 69.5 | 71.8 | 11.8 |
| DT-LSD | 800* | 640† | 71.7 | 73.9 | 8.9 |
| DT-LSD | 800* | 800† | 72.3 | 74.3 | 6.4 |
| DT-LSD | 800* | 1100† | 72.2 | 74.2 | 4.7 |

Table 5. Ablation study on the effects of image upsampling. We used square images. For 800*, we process images with the smaller dimension between 480 and 800. For the test sizes, we use number† to denote that the smaller dimension of the resized image is given by number.

First, we train DT-LSD following popular CNN-based methods by resizing the original image to
For DT-LSD 512, we obtain the fastest inference time at 13.6 FPS among all DT-LSD variations. Motivated by DETR-based models [2, 10, 15, 20, 21, 26, 29], we also apply scale augmentation consisting of resizing the input image so that the shortest side is a minimum of 480 pixels and a maximum of 800 pixels, while the longest side is a maximum of 1333 pixels. Here, we choose five different testing sizes, 1)480, the minimum size used for training, 2), 512, the size used for CNN-based line segment detectors, 3) 640, the size use for YOLO detectors, 4)800, the maximum size used for training, and 5)1100, LETR’s [21] testing size. We note that this scaling technique improved our results over 512 ×512. For example, DT-LSD 480†produces bet- ter results than DT-LSD 512. As the testing size increases, the sAP metric improves while reducing inference speed (as measured by FPS). As a balance between speed and preci- sion, we choose DT-LSD 640†because it increases sAP10 and sAP15by around 2 points, while its FPS is only by 2.9 fps less than DT-LSD 520†. We did not choose DT-LSD 800†because the 0.4 improvement in sAP does not justify a drop of 2.5 fps in inference performance. 4.4.4 Training Epochs We report the results for training DT-LSD with 12, 24, and 36 epochs in Table 6. DT-LSD gets to competitive perfor- mance after training with just 12 epochs. At 12 epochs, DT-LSD achieves a sAP10of 68.4. There is very little dif- ference between 24 and 36 epochs. At 36 epochs, we get a minor increase of 0.3, 0.2, and 0.1 in the sAP5, sAP10, and sAP15, respectively.Number of Epochs sAP5sAP10sAP15 12 65.2 68.4 69.7 24 66.6 71.7 73.9 36 66.9 71.9 74.0 Table 6. Ablation study of the training schedule. DT-LSD trained using different numbers of epochs. Pretrained Weights sAP5sAP10sAP15 ImageNet-22k 10.8 12.7 15.6 COCO 66.6 71.7 73.9 Table 7. Ablation study for pre-training using different datasets. 
4.4.5 Transfer Learning

We report results based on pre-training on different datasets in Table 7. For our experiments, we use 24 epochs. In our first example, the backbone was pre-trained with the ImageNet-22k dataset [5]. In our second example, DINO was pre-trained using the COCO object detection dataset [13]. From the results, it is clear that it is essential to pre-train the entire network and not just the backbone.

5. Conclusion

We introduced DT-LSD, a transformer-based model for line segment detection. DT-LSD uses cross-scale interactions to speed up convergence and improve results. Our approach uses pre-training on the COCO dataset to learn low-level features. Our extensive experiments showed that an end-to-end transformer-based model can surpass CNN-based methods. Additionally, we opened new opportunities for line segment detection methods that do not require post-processing steps.

In future work, we will consider the development of specialized backbones for transformer-based models. Additionally, an important observation from this work is that DT-LSD needs the COCO pre-trained weights to achieve state-of-the-art results. Therefore, we will also focus on an implementation of the network that is trained from scratch.

References
[1] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679–698, 1986.
[2] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213–229. Springer, 2020.
[3] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764–773, 2017.
[4] Xili Dai, Haigang Gong, Shuai Wu, Xiaojun Yuan, and Yi Ma. Fully convolutional line parsing.
Neurocomputing, 506:1–11, 2022.
[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[6] Patrick Denis, James H. Elder, and Francisco J. Estrada. Efficient edge-based methods for estimating Manhattan frames in urban imagery. In Computer Vision–ECCV 2008, Proceedings, Part II, pages 197–210. Springer, 2008.
[7] Richard O. Duda and Peter E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1):11–15, Jan 1972.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[9] Kun Huang, Yifan Wang, Zihan Zhou, Tianjiao Ding, Shenghua Gao, and Yi Ma. Learning to parse wireframes in images of man-made environments. In CVPR, June 2018.
[10] Feng Li, Hao Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, and Lei Zhang. DN-DETR: Accelerate DETR training by introducing query denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13619–13627, 2022.
[11] Hao Li, Huai Yu, Jinwang Wang, Wen Yang, Lei Yu, and Sebastian Scherer. ULSD: Unified line segment detection across pinhole, fisheye, and spherical cameras. ISPRS Journal of Photogrammetry and Remote Sensing, 178:187–202, 2021.
[12] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980–2988, 2017.
[13] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick.
Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014, Proceedings, Part V, pages 740–755. Springer, 2014.
[14] Yancong Lin, Silvia L. Pintea, and Jan C. van Gemert. Deep Hough-transform line priors. 2020.
[15] Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, and Lei Zhang. DAB-DETR: Dynamic anchor boxes are better queries for DETR. In International Conference on Learning Representations, 2022.
[16] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.
[17] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In Computer Vision – ECCV 2016, Lecture Notes in Computer Science, pages 483–499. Springer, 2016.
[18] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[19] R. G. von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall. LSD: A fast line segment detector with a false detection control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4):722–732, Apr 2010.
[20] Yingming Wang, Xiangyu Zhang, Tong Yang, and Jian Sun. Anchor DETR: Query design for transformer-based detector. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2567–2575, 2022.
[21] Yifan Xu, Weijian Xu, David Cheung, and Zhuowen Tu. Line segment detection using transformers without edges. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4257–4266, 2021.
[22] Nan Xue, Song Bai, Fudong Wang, Gui-Song Xia, Tianfu Wu, and Liangpei Zhang. Learning attraction field representation for robust line segment detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[23] Nan Xue, Tianfu Wu, Song Bai, Fu-Dong Wang, Gui-Song Xia, Liangpei Zhang, and Philip H. S. Torr. Holistically-attracted wireframe parsing. In CVPR, 2020.
[24] Nan Xue, Tianfu Wu, Song Bai, Fu-Dong Wang, Gui-Song Xia, Liangpei Zhang, and Philip H. S. Torr. Holistically-attracted wireframe parsing: From supervised to self-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.
[25] Jian Yang, Yuan Rao, Qing Cai, Eric Rigall, Hao Fan, Junyu Dong, and Hui Yu. MLNet: A multi-scale line detector and descriptor network for 3D reconstruction. Knowledge-Based Systems, 289:111476, 2024.
[26] Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, and Heung-Yeung Shum. DINO: DETR with improved denoising anchor boxes for end-to-end object detection, 2022.
[27] Jiahui Zhang, Jinfu Yang, Fuji Fu, and Jiaqi Ma. Structural asymmetric convolution for wireframe parsing. Engineering Applications of Artificial Intelligence, 128:107410, 2024.
[28] Yichao Zhou, Haozhi Qi, and Yi Ma. End-to-end wireframe parsing. In ICCV, 2019.
[29] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai.
Deformable DETR: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020. | 4 | 1 | The proposed DT-LSD model has a relatively small batch size of 2 and uses a single Nvidia RTX A5500 GPU, which has sufficient memory (24 GB) to handle the model's parameters and intermediate activations. With a total of 24 epochs and leveraging the efficient Line Contrastive Denoising training technique, the training time is minimized. Given the complexity of the model and dataset sizes, a training duration of approximately 4 hours is a reasonable estimate. This is under 8 hours, hence it is feasible to train on a single GPU within this time. | yes | Yes | CV | DT-LSD: Deformable Transformer-based Line Segment Detection | 2024-11-20 0:00:00 | https://github.com/SebastianJanampa/DT-LSD | 1 | A script to download is provided in the Colab file. | Uses CPU to train for some reason; 8 hr per epoch. | https://colab.research.google.com/drive/1XPiW-hDq6q8HNZ4yVP0oAn-3a1_ay5rG?usp=sharing | Yes | -- Trains but uses CPU for some reason |
UCR Anomaly Archive | KAN | [] | KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks | 2024-11-01T00:00:00 | https://arxiv.org/abs/2411.00278v1 | [
"https://github.com/issaccv/KAN-AD"
] | {'AUC ROC ': '0.7489'} | [
"Average F1",
"AUC ROC "
] | Given the following paper and codebase:
Paper: KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks
Codebase: https://github.com/issaccv/KAN-AD
Improve the KAN model on the UCR Anomaly Archive dataset. The result
should improve on the following metrics: {'AUC ROC ': '0.7489'}. You must use only the codebase provided.
| KAN-AD: Time Series Anomaly Detection with Kolmogorov–Arnold Networks Quan Zhou*, Changhua Pei, Haiming Zhang, Gaogang Xie, Jianhui Li† Computer Network Information Center, Chinese Academy of Sciences zhouquan,chpei,hai,xie,lijh@cnic.cn Fei Sun Institute of Computing Technology, Chinese Academy of Sciences sunfei@ict.ac.cn Jing Han, Zhengwei Gao ZTE China han.jing28,gao.zhengwei@zte.com.cn Dan Pei Computer Science Department, Tsinghua University peidan@tsinghua.edu.cn

Abstract

Time series anomaly detection (TSAD) has become an essential component of large-scale cloud services and web systems because it can promptly identify anomalies, providing early warnings to prevent greater losses. Deep learning-based forecasting methods have become very popular in TSAD due to their powerful learning capabilities. However, accurate predictions don't necessarily lead to better anomaly detection. Due to the common occurrence of noise, i.e., local peaks and drops in time series, existing black-box learning methods can easily learn these unintended patterns, significantly affecting anomaly detection performance. Kolmogorov–Arnold Networks (KAN) offer a potential solution by decomposing complex temporal sequences into a combination of multiple univariate functions, making the training process more controllable. However, KAN optimizes univariate functions using spline functions, which are also susceptible to the influence of local anomalies. To address this issue, we present KAN-AD, which leverages the Fourier series to emphasize global temporal patterns, thereby mitigating the influence of local peaks and drops. KAN-AD improves both effectiveness and efficiency by transforming the existing black-box learning approach into learning the weights preceding univariate functions. Experimental results show that, compared to the current state-of-the-art, we achieved an accuracy increase of 15% while boosting inference speed by 55 times.
1 Introduction

Time Series Anomaly Detection (TSAD) has become an essential part of IT infrastructure [16, 24, 28, 30] and manufacturing [14, 38, 39, 48] because it can promptly identify potential anomalies, providing timely alerts or sufficient clues for fault localization. TSAD greatly enhances system reliability by identifying anomalies in key performance indicators, attracting significant research attention in recent years [10, 15, 18, 35].

Thanks to the powerful learning capabilities of neural networks and the proposal of self-supervised AD algorithms, TSAD based on deep learning [41, 45, 50] has gradually replaced rule-based methods [5, 33] and become the new state-of-the-art. These approaches aim to learn normal patterns from historical data, enabling the identification of anomalies by comparing predicted values to actual observations. Anomalies are detected when deviations between the predicted and actual values exceed a certain threshold.

*Also affiliated with University of Chinese Academy of Sciences. †Corresponding author.

Figure 1: Illustration of local drops and peaks.

Figure 2: The black curve represents the original sample, with local peaks and dips highlighted in blue. The red curve denotes the anomaly scores provided by the method; the actual anomalous sections are highlighted in pink. The upper part demonstrates that when trained on the clean normal sample, all three methods successfully detect the anomaly segment. The lower part shows that when trained on data containing anomalies, TimesNet and KAN are disrupted by the anomalous samples, rendering them unable to detect anomaly segments.

Although deep learning-based TSAD algorithms demonstrate excellent detection accuracy compared to rule-based methods, it is challenging to achieve further improvements in detection accuracy.
Our investigation shows, as illustrated in Figure 1, that local peaks and drops are quite common in time series data. Some clustering-based methods, such as SubLOF [5], can partially resist the effects of such noisy data, but their overall detection accuracy is not sufficient. Many deep learning methods [37, 41], on the other hand, are easily affected and tend to learn these noises to some extent, making it difficult to effectively model the expected normal patterns. As a result, they struggle to accurately identify anomalies when compared to the actual occurring time series.

Kolmogorov-Arnold Networks (KANs) [26, 27] offer a potential solution to this challenge by revisiting the modeling of complex data. Based on the Kolmogorov-Arnold representation theorem [4, 20, 21], KANs decompose complex objectives into a combination of multiple univariate functions which can be learned via neural networks using spline functions [7]. This design retains the learning capabilities of deep learning while allowing intervention in the learning process to prevent neural networks from being disturbed by local peaks or drops, making it particularly useful for TSAD. KAN's decomposition of complex objectives has shown promise in achieving accurate representations [1, 47]. However, directly applying KAN to TSAD poses significant challenges. From the upper part of Figure 2, it can be observed that models trained on clean training samples, such as TimesNet [41] (third column) and KAN (fourth column), successfully detect anomalies in the test samples. This is manifested by the increasing anomaly scores of these two algorithms on the anomalous time series segments.
When focusing on the bottom row of Figure 2, we find that even KAN fails to detect anomalies when the training samples contain noise. The main reason is that, although KAN can specify univariate functions, these functions are not specifically designed for time series and can still overfit local features, failing to completely eliminate the impact of local peaks or drops.

To address these challenges, we propose KAN-AD, a KAN specially designed for TSAD. Considering the characteristics of time series, we redesign KAN's univariate function set and adopt the Fourier series, which represents periodic functions as sums of sine and cosine components. Since its components possess local smoothness, we freeze the functions during training and learn only their coefficients, reducing the impact of local peaks and drops while preserving strong learning capabilities. Compared to the spline functions adopted by KAN, the Fourier series naturally extracts frequency domain information from the time series, which can better model the global pattern and effectively handle local fluctuations without being affected by them [8, 34].

Additionally, we enhance KAN-AD's ability to capture periodicity by designing a periodic univariate function. To eliminate the negative impact of trend components in TSAD, we design an extra bias univariate function together with a differencing mechanism. Through KAN-AD, TSAD is no longer a black box fitting the relationships between points in the time series. Instead, it efficiently learns the coefficients of specially designed univariate functions to model normal time series patterns. This design brings a dual enhancement in both efficiency and effectiveness to TSAD. Experimental results show that KAN-AD detects with an F1 accuracy that is 29% higher while being 36% faster than KAN.
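The forecasting-based detection principle underlying this line of work — flag a point when the deviation between forecast and observation exceeds a threshold — can be sketched in a few lines (an illustrative toy, not the paper's code; `anomaly_flags` is a hypothetical name):

```python
def anomaly_flags(observed, predicted, threshold):
    """Score each point by |observation - forecast| and flag points
    whose deviation exceeds the threshold as anomalous (1)."""
    return [1 if abs(o - p) > threshold else 0
            for o, p in zip(observed, predicted)]

observed  = [0.0, 0.1, -0.1, 4.0, 3.5, 0.0]
predicted = [0.0] * 6            # a forecaster expecting a flat "normal" pattern
flags = anomaly_flags(observed, predicted, threshold=1.0)
# flags -> [0, 0, 0, 1, 1, 0]
```

The whole design question is then how to keep the forecaster from learning local peaks and drops, so that deviations at true anomalies stay large.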
Our contributions are as follows:
• We are the first to introduce the Kolmogorov-Arnold representation theorem into the TSAD field. By reformulating the problem, detection accuracy is improved and the complexity of the model is significantly reduced.
• We propose KAN-AD, a novel TSAD method. By carefully designing univariate functions, KAN-AD not only achieves higher detection accuracy compared to KAN, but its inference speed is also faster.
• We conducted extensive experiments on four public datasets. Compared to the current state-of-the-art method, KAN-AD achieves an accuracy increase of 15% while boosting inference speed by 55 times.

2 Preliminaries and Problem Formulation

2.1 Problem Statement

This paper primarily addresses the issue of anomaly detection in single time series curves, also known as univariate time series (UTS). To elaborate on the problem more comprehensively, consider the following UTS observational data: $x_{0:t} = \{x_0, x_1, x_2, \dots, x_t\}$ and anomaly labels $C = \{c_0, c_1, c_2, \dots, c_t\}$, where $x_t \in \mathbb{R}$, $c_t \in \{0, 1\}$, and $t \in \mathbb{N}$. Here, $x_{0:t}$ represents the entire observed time series, and $C$ denotes the temporal anomaly labels. Given a UTS $x = [x_0, x_1, x_2, \dots, x_t]$, the objective of UTS anomaly detection is to utilize the data $[x_0, x_1, \dots, x_i]$ preceding each point $x_i$ to predict $c_i$.

2.2 Kolmogorov–Arnold Networks

Kolmogorov–Arnold Networks (KAN) [26, 27] is a novel neural network architecture that builds on the Kolmogorov–Arnold representation theorem [4, 20, 21]. This theorem provides a theoretical basis for decomposing continuous multivariate functions into combinations of multiple univariate functions.

2.2.1 Theoretical Foundation. The Kolmogorov–Arnold representation theorem demonstrates that any multivariate continuous function can be decomposed into a finite sum of univariate functions, as shown in Equation (1), where $\varphi_{q,p}$ are univariate functions that map each input variable $x_p$, and $\Phi_q$ are continuous functions.
$$f(x_1, x_2, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right) \quad (1)$$

$$\mathrm{KAN}(x) = (\Phi_{L-1} \circ \Phi_{L-2} \circ \cdots \circ \Phi_0)(x) \quad (2)$$

2.2.2 Network Architecture. KAN consists of a series of interconnected univariate sub-networks, each responsible for learning distinct features of the data. Unlike traditional multi-layer perceptrons (MLPs), which employ fixed activation functions at each node, KAN replaces each weight parameter with a univariate function. The resulting functional form for deeper KANs can be expressed as Equation (2), where each $\Phi_l$ represents a layer of univariate functions applied to the input or intermediate outputs.

2.2.3 B-spline Functions. The vanilla KAN [26, 27] utilizes B-spline functions [7] to optimize the representation of univariate functions. While B-spline functions demonstrate strong performance in approximating localized characteristics, particularly for non-linear data with local variations, this capability may also lead to fitting anomalous features in the data. Since anomalies often present themselves as localized patterns [45], B-splines may inadvertently capture these outliers, potentially reducing the robustness of the model. Table 1 lists commonly used polynomial functions for time series analysis. In KAN, the choice of univariate function has a significant impact on the model's performance. We further investigate the influence of different univariate functions on anomaly detection performance in Section 4.5.

Table 1: Commonly used univariate functions for time series approximation.
Name | $\Phi_n(x)$
Taylor Series | $x^n$
Fourier Series | $\cos(nx) + \sin(nx)$
Chebyshev Polynomial I | $\cos(n \arccos(x))$
Chebyshev Polynomial II | $\sin((n+1)\arccos(x)) / \sin(\arccos(x))$

3 Methodology

The primary challenge in anomaly detection within real-world time series is the accurate identification of "normal" patterns by learning the pattern of historical time series containing anomalies [25].
The prevailing method of directly modeling the relationship between the future value $x_{i+1}$ at time $i+1$ and the historical data $x_{0:i}$ from time $0$ to $i$ is often affected by local peaks and drops in the data.

To address this challenge, we propose KAN-AD, which integrates the Fourier series [9] with the Kolmogorov-Arnold representation theorem. KAN-AD reframes the task of learning normal patterns as the estimation of coefficients for univariate functions. By leveraging the global representation provided by the Fourier series, KAN-AD effectively reduces the influence of local drops and peaks in historical data. As shown in Figure 3b, the pipeline of KAN-AD consists of three main stages: mapping, reducing, and projection. During the mapping and reducing phases, the input time window is processed through univariate functions and weighted by coefficients to generate a representation robust to local anomalies. Finally, in the projection stage, the prediction for $x_{i+1}$ is made.

Specifically, in the mapping phase, we activate the univariate functions with the input time window $x_{0:i}$ to reconstruct the normal pattern. In the reducing phase, the reconstruction is formulated as a weighted sum of multiple univariate functions, where the weights are automatically learned from historical data. Finally, in the projection phase, the future time series value $x_{i+1}$ is predicted based on the reconstructed normal time series $x'_{0:i}$. Once the future normal pattern of the input time series is predicted, it can be compared with the future real-time time series to detect anomalies.

In the following part of this section, we first provide a high-level, mathematical explanation of the entire workflow and principles of KAN-AD. Subsequently, we delve into the specific implementations of the mapping, reducing, and projection phases.
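As a toy illustration of this reframing (a sketch under simplifying assumptions, not the paper's implementation): when only the coefficients of a few fixed sine/cosine univariate functions are estimated, a single local peak perturbs the reconstruction far less than it would perturb a point-wise fit, because the spike's energy is spread across the whole window. `fourier_fit` is a hypothetical helper name:

```python
import math

def fourier_fit(window, n_terms=3):
    """Project a window onto a truncated Fourier basis (mean plus
    n_terms sine/cosine pairs) and return the reconstruction.
    Only the coefficients are estimated; the basis stays fixed."""
    T = len(window)
    recon = [sum(window) / T] * T    # A_0: the window mean
    for n in range(1, n_terms + 1):
        a = 2 / T * sum(window[t] * math.cos(2 * math.pi * n * t / T) for t in range(T))
        b = 2 / T * sum(window[t] * math.sin(2 * math.pi * n * t / T) for t in range(T))
        for t in range(T):
            recon[t] += a * math.cos(2 * math.pi * n * t / T) + b * math.sin(2 * math.pi * n * t / T)
    return recon

clean = [math.sin(2 * math.pi * t / 128) for t in range(128)]
noisy = list(clean)
noisy[40] += 5.0                     # inject a local peak
recon = fourier_fit(noisy)           # reconstruction barely moves at the spike
```

With three basis pairs, the injected spike of amplitude 5 shifts the reconstruction at its own index by only a fraction of that amplitude, whereas a point-wise model could absorb it entirely.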
3.1 Design of KAN-AD

As shown in Figure 3a, compared with KAN, KAN-AD focuses on learning the coefficients of univariate functions on edges, rather than relying on spline functions to optimize the univariate functions themselves. In our approach, the neural network $\Theta$ no longer directly learns the temporal dependencies between a time window and the subsequent point. Instead, it focuses on capturing the relationship between the time window and the coefficients of univariate functions.

Formally, for KAN-AD, we employ the Fourier series for normal pattern representation. In this way, $f(x)$, the mapping function between the historical window $x_{0:i}$ and its normal pattern $x'_{0:i}$, can be expanded as the combination of multiple univariate functions, as shown in Equation (3). We posit that the normal pattern can be represented by the finite first $N$ terms of the series [4, 20, 21], denoted as $g(x)$. The terms beyond the $N$-th term encompass the remaining stochastic observational noise $\epsilon(x)$.

$$f(x) = A_0 + \underbrace{\sum_{n=1}^{N} \big( A_n \cos(nx) + B_n \sin(nx) \big)}_{g(x)} + \underbrace{\sum_{n=N+1}^{\infty} \big( A_n \cos(nx) + B_n \sin(nx) \big)}_{\epsilon(x)} \quad (3)$$

We call this decomposition the function deconstruction (FD) mechanism. Following the decomposition, we aim to learn the coefficients preceding the different univariate functions. The normal pattern $x'$ can then be expressed as in Equation (4), where $\mathbf{H}$ denotes the univariate function matrix.

$$\mathbf{H} = \mathrm{Stack}(\cos(x_{0:i}), \sin(x_{0:i}), \dots, \cos(n x_{0:i}), \sin(n x_{0:i}))$$
$$\Theta(x_{0:i}) = [A_1, B_1, A_2, B_2, \dots, A_n, B_n]$$
$$x'_{0:i} = A_0 + \Theta(x_{0:i}) \times \mathbf{H} \quad (4)$$

3.2 Mapping Phase

As shown in Figure 3b, the primary purpose of the mapping phase is to transform the original time series signal $x_{0:i} \in \mathbb{R}^T$ into multiple new sets of values in $\mathbb{R}^{T \times (N+N)}$ through a series of univariate functions. Here, $T$ is the size of the sliding window. The first $N$ is the number of sine-series univariate functions, and the other $N$ is the number of cosine-series univariate functions. The detailed calculation method is shown in Equation (3).
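A minimal sketch of this mapping (a hypothetical helper, not code from the KAN-AD repository): each value of the window is passed through $N$ sine and $N$ cosine univariate functions, yielding a $T \times 2N$ matrix:

```python
import math

def mapping_phase(window, N=4):
    """Map a window of length T to a T x 2N matrix: for each value v,
    evaluate the univariate functions sin(n*v) and cos(n*v), n = 1..N."""
    return [[f(n * v) for n in range(1, N + 1) for f in (math.sin, math.cos)]
            for v in window]

H = mapping_phase([0.0, 0.5, 1.0], N=4)   # 3 rows, 2*4 = 8 columns each
```

Downstream, only the per-column weights of this fixed matrix need to be learned, which is what keeps the parameter count small.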
Notably, besides the univariate function terms, an $A_0$ term representing the average value within the sliding window is also present, and it varies across different windows. To mitigate the impact of the fluctuating $A_0$ on coefficient fitting, a constant term elimination module is employed.

Constant Term Elimination: As shown in Equation (3), a constant term $A_0$ always exists during the approximation process. In the Fourier series, $A_0$ represents the mean value of the function. Although normalization ensures that the entire time series has a mean of zero, individual time windows may still exhibit significant fluctuations in their means due to the presence of a trend. These variations in the constant term ultimately affect the model's accurate estimation of Fourier coefficients, leading to biases in the construction of the normal pattern.

To mitigate the impact of mean fluctuations on the model's approximation of normal time series patterns, we employ first-order differencing during data preprocessing to minimize the residual trend component in the data and subsequently renormalize the differenced data. This strategy allows the model to focus on estimating the Fourier coefficients $A_{1:n}$ and $B_{1:n}$, thereby avoiding the need to learn frequently changing constant terms. After this differential

(a) Illustration of learning components in KAN and KAN-AD. KAN has learnable univariate functions on edges and fixed activation functions on nodes; KAN-AD (ours) has fixed univariate functions on edges with learnable weights, and sum operations on nodes. KAN-AD learns the coefficients on edges with fixed univariate functions, and performs weighted sum operations on nodes. Blue lines indicate edges with weights. (b) Illustration of the KAN-AD process using a sliding window approach.
During the mapping phase, raw time windows are transformed into multiple univariate functions. In the reducing phase, a one-dimensional convolutional kernel learns coefficients for these univariate functions, aggregating them into a normal pattern for the current time window. In the projection phase, a single-layer MLP predicts future normal patterns. Figure 3: Illustration of KAN-AD.

strategy, the normal pattern $x'$ can be expressed as in Equation (5).

$$x'_{0:i} \sim \Theta(x_{0:i}) \times \mathbf{H} \quad (5)$$

$$f[i] = \frac{1}{T} \sum_{n=0}^{T-1} a_t \cos\!\left(\frac{2\pi n i}{T}\right) + b_t \sin\!\left(\frac{2\pi n i}{T}\right) \quad (6)$$

$$X = x_{0:i}, \quad S_n = \{\sin(n x_{0:i}), \cos(n x_{0:i})\}, \quad P_n = \left\{\sin\!\left(\frac{2\pi n i}{T}\right), \cos\!\left(\frac{2\pi n i}{T}\right)\right\} \quad (7)$$

Periodic-Enhanced KAN-AD: Time series often exhibit latent dependencies that are difficult to capture directly from the time domain but become evident in the frequency domain [32, 40]. Previous methods relied on the Fast Fourier Transform (FFT) [3] to learn statistical features of different frequency components, neglecting the inherent approximation potential of the sine and cosine functions used as univariate functions in the inverse FFT (iFFT). To enhance KAN-AD's ability to represent periodicity, we also incorporate frequency domain information. As shown in Equation (6), the iFFT process involves a weighted summation of sine and cosine components for each frequency component. This can be viewed as a special form of univariate function, namely a series containing $\cos(\frac{2\pi n i}{T})$ and $\sin(\frac{2\pi n i}{T})$, where $i$ represents the index within the window. Consequently, it can be utilized for function approximation. We integrate these functions as part of the univariate functions and, similar to the aforementioned approximation strategy, employ a one-dimensional convolutional network to learn their coefficients. In our implementation, we ultimately employed the three univariate functions shown in Equation (7), namely the raw time variable $X$, the Fourier series $S_n$, and the sine-cosine wave $P_n$.
This strategy enables the combination of time-domain and frequency-domain univariate functions, thereby further improving KAN-AD's capability to capture normal patterns in time series.

3.3 Reducing Phase

Another major challenge in real-world time series anomaly detection is the high computational cost. Existing methods often overlook the increasing computational overhead when pursuing detection accuracy, making them impractical to deploy in resource-constrained or large-scale settings.

Due to the function deconstruction (FD) mechanism, modeling the normal pattern no longer requires fine-grained weight adjustments for every value within the entire time window; it only requires adjusting the coefficients corresponding to the univariate functions. Since the number of univariate functions is significantly smaller than the length of the time window, the FD mechanism reduces the model's parameter count compared to modeling dependencies between each time point.

To fully leverage the advantages of the FD mechanism and accurately estimate the univariate function coefficients, we employ a stacked one-dimensional convolutional neural network (1D CNN) to learn these coefficients. One-dimensional convolutions are well suited for sequence modeling tasks due to their ability to traverse data along the temporal dimension. Moreover, the convolutional kernels in a 1D CNN can capture the diverse features introduced by the FD mechanism. As shown in Equation (8), KAN-AD first computes the required univariate functions for a given time window and combines them into a univariate function matrix $\mathbf{H}^{(0)}$.
$$\mathbf{H}^{(0)} = \mathrm{Stack}(X, S_1, P_1, \dots, S_n, P_n) \quad \forall n \in [1, 2, \dots, N] \quad (8)$$

The matrix is processed using multiple stacked one-dimensional convolutional layers with a kernel size of 3, allowing us to learn the coefficient matrix and progressively approximate the normal pattern, as illustrated in Equation (9). In Equation (11), each one-dimensional convolution with $c$ channels applies a kernel $W_c$ to each channel $\mathbf{H}_c$. Here, the indices $m$ and $t$ denote positions within the convolutional kernel and the time window, respectively. We apply batch normalization [17] after each convolutional layer to ensure training stability and reduce internal covariate shift, as shown in Equation (10). Gaussian Error Linear Units (GELUs) [12] are utilized for activation:

$$\mathbf{H}^{(l)} = \mathrm{CNN}(\mathrm{CNN}(\mathbf{H}^{(l-1)})) \quad \forall l \in [1, 2, \dots, L] \quad (9)$$

where $L$ denotes the number of 2-layer CNN blocks, and the network $\mathrm{CNN}(\mathbf{H})$ and the convolution operation $\mathrm{Conv}(\mathbf{H})$ are represented as:

$$\mathrm{CNN}(\mathbf{H}) = \mathrm{GELU}(\mathrm{BN}(\mathrm{Conv}(\mathbf{H}))) \quad (10)$$

$$\mathrm{Conv}(\mathbf{H}) = \sum_{c=1}^{2N} \sum_{m=0}^{2} W_c[m] \cdot \mathbf{H}_c[i + m - 1] \quad (11)$$

At the final stage of the reducing phase, we employ a residual connection [11] between the hidden state $\mathbf{H}^{(L)}$ generated through the $L$ convolutional blocks and the original stacked input $\mathbf{H}^{(0)}$ to maintain the numerical stability of the univariate function matrix, as shown in Equation (12). Then, we use a convolutional kernel with a width of 1 to reduce the dimensionality of $\mathbf{H}^{(L)'}$ and generate the approximation of the normal mode within the current time window, $x'$:

$$\mathbf{H}^{(L)'} = \mathbf{H}^{(L)} + \mathbf{H}^{(0)} \quad (12)$$

$$x'_{0:i} = \mathrm{GELU}(\mathrm{BN}(\mathrm{DownConv}(\mathbf{H}^{(L)'}))) \quad (13)$$

Here, $\mathrm{DownConv}(\mathbf{H}) = \sum_{c=1}^{2N} W_c \cdot \mathbf{H}_c[i]$ denotes the convolution operation for reducing dimensions.

3.4 Projection Phase

After obtaining an approximation $x'$ of the current window's normal mode, we predict the future normal mode $x_{t+1}$.
Leveraging the accuracy of KAN-AD in approximating the normal mode, prediction becomes straightforward and can be achieved with a single-layer MLP:

$$x_{t+1} = W \cdot x'_{0:i} + b \quad (14)$$

where $W$ denotes the weight matrix of the linear layer, and $b$ represents its bias term.

4 Evaluation

In this section, we conduct comprehensive experiments primarily aimed at answering the following research questions.
RQ1: How does KAN-AD compare to state-of-the-art anomaly detection methods in performance and efficiency?
RQ2: How sensitive is KAN-AD to hyperparameters?
RQ3: How effective is each design choice in KAN-AD?
RQ4: How sensitive is KAN-AD to anomalies in the training data?

Table 2: Dataset Statistics.
Dataset | Curves | Train | Train Ano% | Test | Test Ano%
KPI | 29 | 3,073,567 | 2.70% | 3,073,556 | 1.85%
TODS | 15 | 75,000 | 5.32% | 75,000 | 6.38%
WSD | 210 | 3,829,373 | 2.43% | 3,829,537 | 0.76%
UCR | 203 | 3,572,316 | 0.00% | 7,782,539 | 0.47%

4.1 Experimental settings

This section details the datasets used for experimental comparison and outlines the experimental setup, including model training and dataset partitioning for the evaluation in this study.

4.1.1 Dataset. We evaluate the efficacy of the KAN-AD method for anomaly detection using four publicly available UTS datasets widely adopted in the field. These datasets span diverse domains and encompass a variety of anomaly characteristics, including KPI [6], TODS [22], WSD [49], and UCR [42]. Table 2 summarizes the characteristics of the datasets used, including the number of curves, dataset size, and anomaly rate. To further describe the distribution of anomaly interval lengths, we plot the cumulative distribution function (CDF) of interval lengths for each dataset in Figure 6. While most anomalies are relatively short, comprising fewer than 10 points, the WSD and UCR datasets exhibit a wide range of anomaly lengths, with the longest anomaly segments exceeding 300 points.
This diversity in anomaly lengths allows for a more comprehensive evaluation of models' anomaly detection capabilities. For details regarding the provenance and characteristics of the datasets used, please refer to Appendix A.

4.1.2 Model training and inference. In our anomaly detection experiments, we trained a dedicated model for each time series within each dataset. These models were subsequently employed to generate detection outputs on their corresponding test sets. A batch size of 1024 was used by default, with a learning rate of 0.01. Training for each time series was limited to a maximum of 100 epochs. To ensure model generalizability and mitigate overfitting, we incorporated validation sets into the training process. Specifically, for the UCR dataset, 20% of the training data was allocated as a validation set. For the other datasets, we adopted a 4:1:5 ratio for splitting the data into training, validation, and test sets, respectively. During model testing, we set the batch size to 1 to facilitate a more accurate comparison of inference efficiency across different methods. To enhance the reliability of our experimental results, we conducted five independent trials using different random seeds. The final results are reported as the mean and standard deviation across these trials, as presented in Table 3.

4.2 Baselines and metrics

This section details the SOTA methods employed for comparative analysis, followed by a description of the rigorous metrics used to evaluate detection accuracy. To ensure unbiased comparisons, we utilized publicly available source code implementations of the SOTA methods and excluded any post-processing operations, such as point adjustment (PA), prior to submitting prediction results.

Quan Zhou*, Changhua Pei, Haiming Zhang, Gaogang Xie, Jianhui Li†, Fei Sun, Jing Han, Zhengwei Gao, and Dan Pei

Table 3: Performance comparison.
Best scores are highlighted in bold, and second-best scores are highlighted in bold and underlined. Metrics include F1 (Best F1), F1e (Event F1), F1d (Delay F1), AUPRC (area under the precision–recall curve), and Avg F1e (average F1e score across the four datasets). Each dataset block lists F1 / F1e / F1d / AUPRC.

Method   | KPI                               | TODS                              | WSD                               | UCR                               | Avg F1e
SRCNN    | 0.4137 / 0.0994 / 0.2266 / 0.3355 | 0.6239 / 0.1918 / 0.4399 / 0.6076 | 0.4092 / 0.1185 / 0.1951 / 0.3080 | 0.5964 / 0.1369 / 0.1656 / 0.5109 | 0.1367
SAND     | 0.2710 / 0.0397 / 0.1097 / 0.2022 | 0.5372 / 0.1879 / 0.5103 / 0.5145 | 0.1761 / 0.0839 / 0.1267 / 0.1238 | 0.7044 / 0.5108 / 0.5116 / 0.6550 | 0.2056
AT       | 0.6103 / 0.3020 / 0.3623 / 0.5676 | 0.4875 / 0.1915 / 0.2918 / 0.4148 | 0.4348 / 0.2311 / 0.1517 / 0.3527 | 0.6135 / 0.1696 / 0.1084 / 0.5458 | 0.2236
TranAD   | 0.7553 / 0.5611 / 0.6399 / 0.7399 | 0.5035 / 0.2460 / 0.3619 / 0.4501 | 0.7570 / 0.6338 / 0.4158 / 0.7106 | 0.5278 / 0.1840 / 0.1554 / 0.4599 | 0.4062
SubLOF   | 0.7273 / 0.2805 / 0.4994 / 0.7015 | 0.7997 / 0.4795 / 0.7169 / 0.7809 | 0.8683 / 0.6585 / 0.4917 / 0.8353 | 0.8468 / 0.4772 / 0.4151 / 0.8001 | 0.4739
TimesNet | 0.8022 / 0.6363 / 0.6995 / 0.8166 | 0.6232 / 0.3327 / 0.4495 / 0.6031 | 0.9406 / 0.8444 / 0.6170 / 0.9376 | 0.5273 / 0.1805 / 0.1439 / 0.4536 | 0.4985
FITS     | 0.9083 / 0.6353 / 0.8175 / 0.9359 | 0.7773 / 0.5416 / 0.6312 / 0.7725 | 0.9732 / 0.8391 / 0.6535 / 0.9771 | 0.6664 / 0.2926 / 0.2912 / 0.5969 | 0.5772
OFA      | 0.8810 / 0.6150 / 0.7952 / 0.9009 | 0.6928 / 0.5811 / 0.5588 / 0.7206 | 0.9564 / 0.8344 / 0.6250 / 0.9615 | 0.6294 / 0.3176 / 0.1503 / 0.5699 | 0.5870
FCVAE    | 0.9398 / 0.7556 / 0.8624 / 0.9572 | 0.8652 / 0.6995 / 0.7482 / 0.8798 | 0.9650 / 0.8610 / 0.6583 / 0.9653 | 0.7651 / 0.3812 / 0.2857 / 0.7145 | 0.6743
LSTMAD   | 0.9376 / 0.7742 / 0.8782 / 0.9624 | 0.8633 / 0.6981 / 0.7655 / 0.8740 | 0.9866 / 0.9028 / 0.6743 / 0.9849 | 0.7040 / 0.3482 / 0.3121 / 0.6432 | 0.6808
KAN      | 0.9411 / 0.7816 / 0.8666 / 0.9664 | 0.8109 / 0.6466 / 0.7518 / 0.8286 | 0.9879 / 0.8939 / 0.6650 / 0.9881 | 0.8016 / 0.4120 / 0.3971 / 0.7489 | 0.6835
KAN-AD   | 0.9442 / 0.7989 / 0.8755 / 0.9693 | 0.9425 / 0.8940 / 0.8391 / 0.9716 | 0.9888 / 0.8927 / 0.6623 / 0.9868 | 0.8554 / 0.5335 / 0.5177 / 0.8188 | 0.7798
  (±)    | 0.0007 / 0.0054 / 0.0024 / 0.0008 | 0.0040 / 0.0022 / 0.0055 / 0.0035 | 0.0005 / 0.0025 / 0.0022 / 0.0009 | 0.0040 / 0.0046 / 0.0042 / 0.0041 | —

Table 4:
Efficiency comparison on the UCR dataset.

Method   | GPU Time | CPU Time | Parameters | F1e
OFA      | 220 s    | 3087 s   | 81.920 M   | 0.3176
AT       | 201 s    | 1152 s   | 4.752 M    | 0.1696
FCVAE    | 2327 s   | 1743 s   | 1.414 M    | 0.3812
TimesNet | 182 s    | 259 s    | 73,449     | 0.1805
LSTMAD   | 73 s     | 267 s    | 10,421     | 0.3482
KAN      | 66 s     | 34 s     | 1,360      | 0.4120
FITS     | 32 s     | 17 s     | 624        | 0.2926
TranAD   | 113 s    | 62 s     | 369        | 0.1840
KAN-AD   | 42 s     | 36 s     | 274        | 0.5335

4.2.1 Baselines. We conducted comparative experiments with eleven state-of-the-art time series anomaly detection methods: LSTMAD [29], FCVAE [40], SRCNN [32], FITS [46], TimesNet [41], OFA [50], TranAD [37], SubLOF [5], Anomaly Transformer [45] (abbreviated as AT in the tables), KAN [27], and SAND [2]. Detailed descriptions of these methods can be found in Appendix B. To ensure the reliability of our experimental results, we directly adopted the hyperparameter settings reported in the original baseline papers for the datasets used therein. For datasets not featured in the baseline literature, we meticulously tuned hyperparameters via grid search to optimize the performance of each baseline method on the respective evaluation metrics.

4.2.2 Evaluation metrics. In practical applications, operations teams are less concerned with point-wise anomalies (i.e., whether individual data points are classified as anomalous) and more focused on detecting sustained anomalous segments within time series data. Furthermore, due to the potential impact of such segments, early identification is crucial. Previous work [44] proposed the Best F1 metric, which iterates over all thresholds and applies a point adjustment strategy to calculate the F1 score. However, it has been criticized for performance inflation [23, 43]. To address this, we also adopt Delay F1 [32] and Event F1. Delay F1 is similar to Best F1 but uses a delayed point adjustment strategy. As shown in Figure 4, the second anomaly was missed because the detection delay exceeded the threshold of five time intervals.
In all experiments, a delay threshold of five was used across all datasets. Event F1, on the other hand, treats anomalies of varying lengths as anomalies with a length of 1, minimizing the performance inflation caused by excessively long anomalous segments. For convenience, unless otherwise stated, we use Event F1 as the primary metric, as it is better aligned with the need for real-time anomaly detection in real-world situations.

Figure 4: Illustration of the adjustment strategy. Point-wise PA gives an inflated score when some anomaly segments persist for a long duration. Event-wise PA treats each anomaly segment as an event, completely disregarding the length of the anomaly segment. k-delay PA considers only anomalies detected within the first k points after the anomaly onset, treating any detected later as undetected.

4.3 RQ1. Performance and Efficiency Comparison

We present the results of our time series anomaly detection experiments in Table 3. Table 4 provides inference times for models evaluated on samples from the UCR dataset. Notably, our KAN-AD model consistently achieves comparable or superior performance across all experiments. Despite the absence of anomalies in the UCR dataset's training set, the presence of significant periodic variations among the samples may reduce the baseline methods' ability to capture normal patterns. For TODS, since its training set includes a substantial number of anomalies, KAN-AD demonstrates a significant advantage over the baseline methods, highlighting the robustness of KAN-AD during the training process. Given the strong periodicity in the WSD and KPI datasets, KAN-AD achieves performance comparable to the baseline methods. Overall, KAN-AD achieves more than a 15% improvement in average Event F1 compared to the state-of-the-art methods.
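The event-wise and delay-adjusted scoring described in Section 4.2.2 can be sketched as follows, under simplifying assumptions: binary predictions at a fixed threshold (the paper's Best/Delay F1 additionally sweep thresholds) and point-wise counting of false positives outside true segments.

```python
def segments(label):
    # Contiguous anomalous segments as (start, end) pairs, end exclusive.
    segs, start = [], None
    for i, v in enumerate(list(label) + [0]):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segs.append((start, i))
            start = None
    return segs

def event_f1(label, pred, k=None):
    # Each true segment counts as one event; it is detected iff some point in it
    # is flagged. With the k-delay rule, the flag must fall within the first k
    # points of the segment (k = 5 in the paper's experiments).
    segs = segments(label)
    tp = sum(any(pred[s:min(s + k, e) if k else e]) for s, e in segs)
    fn = len(segs) - tp
    fp = sum(p for i, p in enumerate(pred) if not label[i])
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

label = [0, 1, 1, 0, 0, 1, 1, 1, 0]
pred  = [0, 1, 0, 0, 0, 0, 0, 0, 1]
print(round(event_f1(label, pred), 3))  # 0.5
```

In the example, the first segment is detected (one flag inside it), the second is missed, and the stray flag at the end counts as a false positive, giving precision = recall = 0.5.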
Figure 5: Case study on UCR InternalBleeding10. The black curve represents the original sample, the red curve represents the anomaly scores provided by each method, and the true anomaly segments are highlighted in pink. (Panels: Original Sample, SubLOF, FITS, FCVAE, KAN, TimesNet, TranAD, OFA, SAND, AnomalyTransformer, LSTMAD, KAN-AD (ours).)

As shown in Table 4, some baseline methods are absent compared to those presented in Table 3. This is because methods like SAND can only be executed on CPUs, while traditional approaches such as SubLOF do not leverage multi-core performance. To ensure consistency in our comparisons, we ultimately excluded these methods from the inference time evaluation. Among the models listed, parameter counts range from millions down to hundreds. Notably, large models such as OFA require a staggering number of parameters, reaching 81.92 M. Similarly, popular models like Anomaly Transformer, FCVAE, and TimesNet exhibit parameter counts ranging from 73 k to 4.75 M. In contrast, KAN-AD distinguishes itself as a highly efficient model, achieving impressive performance with only 274 parameters. Significantly, KAN-AD reduces the parameter count by 25% compared to the next smallest model, TranAD. These analyses highlight the remarkable efficiency of KAN-AD. Despite its small size, KAN-AD consistently achieves competitive results, positioning it as an attractive option for time series anomaly detection tasks. KAN-AD demonstrates that achieving state-of-the-art (SOTA) or near-SOTA performance while significantly reducing the parameter footprint is possible, making it an ideal choice for cost-sensitive or resource-constrained environments.

4.3.1 Case Study. We analyzed anomaly detection performance on UCR dataset samples to illustrate how various methods respond to identical anomalies, as shown in Figure 5.
The selected sample displayed pattern anomalies, marked by significant deviations from typical behavior. Both TranAD and TimesNet exhibit difficulty establishing normal temporal patterns. Minor variations among normal samples across cycles lead to periodic false alarms during normal segments, consistent with our observations in Figure 2. Among the methods listed, while OFA, LSTMAD, SubLOF, and FITS can detect anomalies, their high anomaly scores during normal segments indicate excessive sensitivity to minor fluctuations in normal data. In contrast, KAN-AD excels at identifying anomalies while maintaining minimal anomaly scores during normal segments.

4.4 RQ2. Hyperparameter sensitivity

The KAN-AD model incorporates two key hyperparameters: the number of univariate functions N and the window size T. To investigate the impact of these parameters on model performance, we conducted experiments on the UCR dataset while holding all other parameters constant.

Figure 6: Distribution of anomalous lengths. Figure 7: Model performance under different hyperparameters.

As the findings summarized in Figure 7 show, a larger sliding window facilitates more accurate learning of normal patterns when N is fixed, leading to improved performance. When T is fixed, an optimal number of univariate functions yields the best results. Insufficient univariate functions limit KAN-AD's expressive power, while an excessive N can lead to overfitting. Overall, KAN-AD achieved its best performance with T = 96 and N = 2. Notably, even with suboptimal hyperparameter settings like T = 16 and N = 1, we surpassed SOTA methods on the UCR dataset.

4.5 RQ3. Ablation Studies

To understand the contribution of each module in KAN-AD, we conducted ablation studies.
Specifically, we investigated the impact of the differencing module and the influence of different univariate function choices. Additionally, we explore the influence of the function deconstruction mechanism on the algorithm's performance in Appendix C.

4.5.1 Constant term elimination module. We employed a constant term elimination (CTE) module during data preprocessing to mitigate the influence of the offset term A_0 in Equation (3). Further experiments were conducted across all datasets to evaluate the impact of incorporating CTE within the preprocessing pipeline. As presented in Figure 8, the impact of CTE varies across datasets, reflecting inherent data characteristics. For datasets with pronounced periodicity or strong temporal stability (e.g., WSD), the benefits of CTE are less apparent. Conversely, for datasets exhibiting larger value fluctuations or trends (e.g., KPI, TODS, and UCR), CTE yields significant improvements.

Figure 8: Model performance under different preprocessing.

4.5.2 Selection of univariate functions. To comprehensively assess the ability of KAN-AD, we conducted experiments using the common univariate functions listed in Table 1. In our implementation, because input range requirements vary across univariate functions, appropriate normalization techniques are employed. Specifically, min–max scaling to the range x ∈ [−1, 1] was used for both types of Chebyshev polynomials, while z-score normalization was employed for the Taylor series and the Fourier series. The performance of all four univariate functions was compared using the same configuration. As the results presented in Figure 9 show, the Fourier series consistently ranked among the top two performers across all datasets. In contrast,
the Taylor series exhibited persistent bias due to non-zero function values in most cases, hindering optimal model performance. The primary objective of the two types of Chebyshev polynomials is to minimize the maximum error, which conflicts with the mean squared error loss function used by our model. This discrepancy contributed to their suboptimal performance.

Figure 9: Model performance under different univariate functions. Figure 10: Model performance under different anomaly ratios in training.

4.6 RQ4. Robustness to Anomalous Data

To evaluate KAN-AD's robustness to anomalies in the training set, we conducted further experiments using the TODS dataset. Specifically, we selected samples with spike anomalies as the test set while progressively increasing the proportion of spike anomalies in the anomaly-free training set. For a fair comparison, we excluded methods that do not involve a training process. As illustrated in Figure 10, KAN-AD demonstrates stable performance across all anomaly ratios. Popular methods such as LSTMAD, FCVAE, and KAN perform well at lower anomaly ratios but experience a significant decline as the ratio increases. Other approaches, like TimesNet and TranAD, fail to achieve optimal performance due to overfitting to fine-grained structures within the time series.

5 Related Work

Time Series Forecasting Methods: In recent years, methods that use temporal forecasting to establish normal patterns have dominated the field of time series anomaly detection. These methods evaluate anomalies by calculating the discrepancy between the predicted results and the observed results. An anomaly is considered to occur when this discrepancy exceeds a certain threshold.
These methods can be further divided into reconstruction-based and prediction-based approaches. Reconstruction-based methods, such as Donut [44], assume that normal modes exhibit low-rank properties and utilize this to denoise time series data. FCVAE [40] further enhances the Variational Autoencoder (VAE) [19] model's ability to capture normal patterns by incorporating bidirectional dependencies and frequency-domain information. With the exploration of Transformer models' potential, TranAD [37] leverages them as its backbone and employs adversarial learning to capture temporal dependencies within time series. Anomaly Transformer [45] uses the attention mechanism to calculate the association discrepancy. OFA [50] further advances the Transformer architecture by incorporating GPT-2 [31] as part of its backbone, enabling it to capture more intricate patterns through a large number of parameters. Similar to the approach of increasing parameters, TimesNet [41] utilizes a common computer vision (CV) backbone, Inception [36], to transform time series into a two-dimensional representation based on periodicity information for feature modeling. Prediction-based methods such as FITS [46] employ low-pass filtering and frequency-domain information to model low-frequency components, achieving efficient anomaly detection with minimal parameters. LSTMAD [29] utilizes Long Short-Term Memory (LSTM) networks [13] to model normal patterns within time series. By forgetting discrete values, LSTMAD achieves promising detection accuracy.

Pattern Change Detection Methods: Another direct approach to time series anomaly detection focuses on identifying differences between the current time window's pattern and historical patterns. Statistical approaches like SubLOF [5] quantify pattern differences by calculating distances between sequences in the current window and surrounding windows.
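The window-distance idea behind such statistical detectors can be sketched as follows — a generic nearest-neighbor distance score over sliding subsequences, not the full LOF density computation used by SubLOF:

```python
import numpy as np

def subsequence_scores(x, w=4, step=1):
    # Score each window by its Euclidean distance to the nearest other window;
    # windows covering a pattern change lie far from all historical subsequences.
    subs = np.array([x[i:i + w] for i in range(0, len(x) - w + 1, step)])
    scores = []
    for i, s in enumerate(subs):
        d = np.linalg.norm(subs - s, axis=1)
        d[i] = np.inf                      # exclude the trivial self-match
        scores.append(d.min())
    return np.array(scores)

# Periodic signal with an injected pattern anomaly.
x = np.sin(np.linspace(0, 8 * np.pi, 64))
x[40:44] += 3.0
scores = subsequence_scores(x)
print(int(scores.argmax()))  # index of a window overlapping the injected anomaly
```

Normal windows find a near-identical counterpart one period away and score near zero, while windows overlapping the injected segment have no close match.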
Deep learning models such as SRCNN [32] use supervised training to learn the relationship between anomaly labels and changes in the frequency domain, eventually outputting predicted labels. SAND [2] applies a statistical approach based on subsequence shape clustering to detect anomalies in a streaming fashion. Anomaly Transformer [45] constrains the attention mechanism to identify anomalous windows by outputting attention weights. TriAD [35] uses contrastive learning across the time domain, frequency domain, and residual domain to jointly detect pattern changes, achieving state-of-the-art performance on the UCR dataset.

6 Conclusion

Training time series anomaly detection models with datasets containing anomalies is essential for deployment in production environments. Existing algorithms often rely on carefully selected features and complex architectures to achieve minor accuracy gains, neglecting robustness during training. This paper introduces KAN-AD, a robust anomaly detection model rooted in the Kolmogorov–Arnold representation theorem. KAN-AD transforms the prediction of time points into the estimation of Fourier series coefficients, achieving strong performance with few parameters, significantly reducing costs while enhancing robustness to outliers. Our model includes a constant term elimination module to address temporal trends and leverages frequency-domain information for better performance. Compared to the SOTA models across four public datasets, KAN-AD achieves a 15% improvement in average Event F1 score while reducing the parameter count by 99% and accelerating inference by 55 times. With KAN-AD, a promising research direction is to explore whether normal patterns in time series can be represented more efficiently by leveraging additional data.
References

[1] Alexander Dylan Bodner, Antonio Santiago Tepsich, Jack Natan Spolski, and Santiago Pourteau. 2024. Convolutional Kolmogorov-Arnold Networks. arXiv:2406.13155 [cs.CV]. https://arxiv.org/abs/2406.13155
[2] Paul Boniol, John Paparrizos, Themis Palpanas, and Michael J. Franklin. 2021. SAND: streaming subsequence anomaly detection. Proceedings of the VLDB Endowment 14, 10 (2021), 1717–1729.
[3] Ronald Newbold Bracewell. 1986. The Fourier Transform and Its Applications. McGraw-Hill, New York.
[4] Jürgen Braun and Michael Griebel. 2009. On a constructive proof of Kolmogorov's superposition theorem. Constructive Approximation 30 (2009), 653–675.
[5] Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. 2000. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 93–104.
[6] AIOps Competition. 2018. KPI Dataset. https://github.com/iopsai/iops
[7] Carl de Boor. 1978. A Practical Guide to Splines. Springer-Verlag.
[8] Harry Dym and Henry P. McKean. 1972. Fourier Series and Integrals.
[9] Jean Baptiste Joseph Fourier. 1888. Théorie analytique de la chaleur. Vol. 1. Gauthier-Villars.
[10] Siho Han and Simon S. Woo. 2022. Learning Sparse Latent Graph Representations for Anomaly Detection in Multivariate Time Series. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22). Association for Computing Machinery, New York, NY, USA, 2977–2986. https://doi.org/10.1145/3534678.3539117
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[12] Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415 (2016).
[13] Sepp Hochreiter and Jürgen Schmidhuber. 1997.
Long short-term memory. Neural Computation 9, 8 (1997), 1735–1780.
[14] Ruei-Jie Hsieh, Jerry Chou, and Chih-Hsiang Ho. 2019. Unsupervised online anomaly detection on multivariate sensing time series data for smart manufacturing. In 2019 IEEE 12th Conference on Service-Oriented Computing and Applications (SOCA). IEEE, 90–97.
[15] Alexis Huet, Jose Manuel Navarro, and Dario Rossi. 2022. Local Evaluation of Time Series Anomaly Detection Algorithms. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22). Association for Computing Machinery, New York, NY, USA, 635–645. https://doi.org/10.1145/3534678.3539339
[16] Kyle Hundman, Valentino Constantinou, Christopher Laporte, Ian Colwell, and Tom Soderstrom. 2018. Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 387–395.
[17] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning. PMLR, 448–456.
[18] Tung Kieu, Bin Yang, Chenjuan Guo, Razvan-Gabriel Cirstea, Yan Zhao, Yale Song, and Christian S. Jensen. 2022. Anomaly detection in time series with robust variational quasi-recurrent autoencoders. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 1342–1354.
[19] Diederik P. Kingma and Max Welling. 2022. Auto-Encoding Variational Bayes. arXiv:1312.6114 [stat.ML]
[20] Andrei Nikolaevich Kolmogorov. 1957. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition. In Doklady Akademii Nauk, Vol. 114. Russian Academy of Sciences, 953–956.
[21] Andrei Nikolaevich Kolmogorov. 1961.
On the Representation of Continuous Functions of Several Variables by Superpositions of Continuous Functions of a Smaller Number of Variables. American Mathematical Society.
[22] Kwei-Herng Lai, Daochen Zha, Junjie Xu, Yue Zhao, Guanchu Wang, and Xia Hu. 2021. Revisiting time series outlier detection: Definitions and benchmarks. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
[23] Kwei-Herng Lai, Daochen Zha, Junjie Xu, Yue Zhao, Guanchu Wang, and Xia Hu. 2021. Revisiting time series outlier detection: Definitions and benchmarks. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
[24] Dan Li, Dacheng Chen, Baihong Jin, Lei Shi, Jonathan Goh, and See-Kiong Ng. 2019. MAD-GAN: Multivariate anomaly detection for time series data with generative adversarial networks. In International Conference on Artificial Neural Networks. Springer, 703–716.
[25] Zhihan Li, Youjian Zhao, Jiaqi Han, Ya Su, Rui Jiao, Xidao Wen, and Dan Pei. 2021. Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 3220–3230.
[26] Ziming Liu, Pingchuan Ma, Yixuan Wang, Wojciech Matusik, and Max Tegmark. 2024. KAN 2.0: Kolmogorov-Arnold Networks Meet Science. arXiv:2408.10205 [cs.LG]. https://arxiv.org/abs/2408.10205
[27] Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljačić, Thomas Y. Hou, and Max Tegmark. 2024. KAN: Kolmogorov-Arnold Networks. arXiv preprint arXiv:2404.19756 (2024).
[28] Pankaj Malhotra, Anusha Ramakrishnan, Gaurangi Anand, Lovekesh Vig, Puneet Agarwal, and Gautam Shroff. 2016. LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148 (2016).
[29] Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, Puneet Agarwal, et al. 2015.
Long Short Term Memory Networks for Anomaly Detection in Time Series. In ESANN, Vol. 2015. 89.
[30] Xinji Qu, Zhuo Liu, Chase Q. Wu, Aiqin Hou, Xiaoyan Yin, and Zhulian Chen. 2024. MFGAN: Multimodal Fusion for Industrial Anomaly Detection Using Attention-Based Autoencoder and Generative Adversarial Network. Sensors 24, 2 (2024), 637.
[31] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.
[32] Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. 2019. Time-series anomaly detection service at Microsoft. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 3009–3017.
[33] Bernhard Schölkopf, Robert C. Williamson, Alex Smola, John Shawe-Taylor, and John Platt. 1999. Support vector method for novelty detection. Advances in Neural Information Processing Systems 12 (1999).
[34] Elias M. Stein and Rami Shakarchi. 2011. Fourier Analysis: An Introduction. Vol. 1. Princeton University Press.
[35] Yuting Sun, Guansong Pang, Guanhua Ye, Tong Chen, Xia Hu, and Hongzhi Yin. 2024. Unraveling the 'Anomaly' in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution. In 2024 IEEE 40th International Conference on Data Engineering (ICDE). IEEE.
[36] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1–9.
[37] Shreshth Tuli, Giuliano Casale, and Nicholas R. Jennings. 2022. TranAD: deep transformer networks for anomaly detection in multivariate time series data. Proceedings of the VLDB Endowment 15, 6 (2022), 1201–1214.
[38] Xing Wang, Jessica Lin, Nital Patel, and Martin Braun. 2016.
A self-learning and online algorithm for time series anomaly detection, with application in CPU manufacturing. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management. 1823–1832.
[39] Yue Wang, Michael Perry, Dane Whitlock, and John W. Sutherland. 2022. Detecting anomalies in time series data from a manufacturing system using recurrent neural networks. Journal of Manufacturing Systems 62 (2022), 823–834.
[40] Zexin Wang, Changhua Pei, Minghua Ma, Xin Wang, Zhihan Li, Dan Pei, Saravan Rajmohan, Dongmei Zhang, Qingwei Lin, Haiming Zhang, Jianhui Li, and Gaogang Xie. 2024. Revisiting VAE for Unsupervised Time Series Anomaly Detection: A Frequency Perspective. In Proceedings of the ACM on Web Conference 2024 (WWW '24), Singapore. Association for Computing Machinery, New York, NY, USA, 3096–3105. https://doi.org/10.1145/3589334.3645710
[41] Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. 2023. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In International Conference on Learning Representations.
[42] Renjie Wu and Eamonn J. Keogh. 2021. Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress. IEEE Transactions on Knowledge and Data Engineering 35, 3 (2021), 2421–2429.
[43] Renjie Wu and Eamonn J. Keogh. 2021. Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress. IEEE Transactions on Knowledge and Data Engineering 35, 3 (2021), 2421–2429.
[44] Haowen Xu, Wenxiao Chen, Nengwen Zhao, Zeyan Li, Jiahao Bu, Zhihan Li, Ying Liu, Youjian Zhao, Dan Pei, Yang Feng, et al. 2018. Unsupervised anomaly detection via variational auto-encoder for seasonal KPIs in web applications. In Proceedings of the 2018 World Wide Web Conference. 187–196.
[45] Jiehui Xu, Haixu Wu, Jianmin Wang, and Mingsheng Long. 2022.
Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy. In International Conference on Learning Representations. https://openreview.net/forum?id=LzQQ89U1qm_
[46] Zhijian Xu, Ailing Zeng, and Qiang Xu. 2024. FITS: Modeling Time Series with 10k Parameters. In International Conference on Learning Representations (ICLR).
[47] Runpeng Yu, Weihao Yu, and Xinchao Wang. 2024. KAN or MLP: A fairer comparison. arXiv preprint arXiv:2407.16674 (2024).
[48] Peng Zhan, Shaokun Wang, Jun Wang, Leigang Qu, Kun Wang, Yupeng Hu, and Xueqing Li. 2021. Temporal anomaly detection on IIoT-enabled manufacturing. Journal of Intelligent Manufacturing 32 (2021), 1669–1678.
[49] Shenglin Zhang, Zhenyu Zhong, Dongwen Li, Qiliang Fan, Yongqian Sun, Man Zhu, Yuzhi Zhang, Dan Pei, Jiyan Sun, Yinlong Liu, et al. 2022. Efficient KPI anomaly detection through transfer learning for large-scale web services. IEEE Journal on Selected Areas in Communications 40, 8 (2022), 2440–2455.
[50] Tian Zhou, Peisong Niu, Liang Sun, Rong Jin, et al. 2023. One fits all: Power general time series analysis by pretrained LM. Advances in Neural Information Processing Systems 36 (2023), 43322–43355.

A Datasets

We selected four datasets from diverse domains, with samples originating from:

• KPI [6]: This dataset comprises service metrics collected from five major Internet companies: Sogou, eBay, Baidu, Tencent, and Alibaba. The data points are primarily recorded every 1–2 minutes, with some sections exhibiting a 5-minute interval.
• TODS [22]: TODS comprises artificially created time series, each designed to present specific types of anomalies. Its excellent interpretability and carefully constructed data distributions make it suitable for in-depth case studies.
• WSD [49]: This dataset consists of web server metrics collected from three companies providing large-scale web services: Baidu, Sogou, and eBay.
• UCR [42]: This archive contains data from multiple domains with a single anomalous segment in each time series. In addition to real anomalies, UCR also includes synthetic but highly plausible anomalies.

B Baselines

We selected the following baseline approaches to further elaborate on the performance differences between KAN-AD and SOTA methods:

• SubLOF [5] represents traditional outlier detection techniques based on distance metrics.
• SRCNN [32] is a supervised approach reliant on high-quality labeled data.
• LSTMAD [29] leverages Long Short-Term Memory (LSTM) networks [13] for deep learning-based anomaly detection.
• FITS [46] achieves parameter-efficient anomaly detection by upsampling frequency-domain information using a low-pass filter and simple linear layers.
• FCVAE [40] is an unsupervised reconstruction method based on the Variational Autoencoder (VAE) [19], designed to reconstruct normal patterns.
• Anomaly Transformer [45] employs the attention mechanism to compute the association discrepancy.
• TranAD [37] incorporates the principles of adversarial learning to develop a two-stage training framework while integrating the strengths of self-attention encoders to capture the temporal dependencies embedded in the time series.

Table 5: Model performance under different function deconstruction strategies.

Variation | F1e    | F1d    | AUPRC
KAN-AD    | 0.5335 | 0.5177 | 0.8188
w/o X     | 0.5153 | 0.4974 | 0.8066
w/o P     | 0.5081 | 0.4810 | 0.8007
w/o S     | 0.5056 | 0.5113 | 0.7998
w/o X&P   | 0.4737 | 0.4583 | 0.7872
w/o X&S   | 0.4698 | 0.4610 | 0.7767
w/o S&P   | 0.4561 | 0.4637 | 0.7595

• SAND [2] utilizes a novel statistical approach based on curve shape clustering for anomaly detection in a streaming fashion.
• TimesNet [41] leverages an Inception [36]-based computer vision backbone to enhance learning capabilities.
•OFA [50], with GPT-2 [31] as its backbone, improves its ability to capture point-to-point dependencies.
•KAN [27] leverages Kolmogorov-Arnold representation theory to decompose complex learning objectives into linear combinations of univariate functions.
These baseline methods encompass a variety of anomaly detection paradigms: shape-based SAND, subsequence distance-based SubLOF, Transformer-based approaches like OFA, TranAD, and Anomaly Transformer for modeling sequence relationships, and frequency-domain-enhanced methods FCVAE and FITS.
C Ablation on function deconstruction mechanism
To investigate the impact of the FD mechanism, we compared the model's detection capabilities under different univariate function combination strategies. For clarity, the specific definitions are provided in Equation (7).
As the results in Table 5 show, the model's detection performance improved notably as the number of univariate functions increased. Both Fourier series and cosine waves outperformed the raw input data, likely because their representations are smoother than the original signal, enabling higher detection accuracy. Combining different features, particularly those involving Fourier series and cosine waves, yielded significant performance gains as the feature count increased. Ultimately, KAN-AD achieved optimal detection performance by integrating all features. It is worth noting that even the variant of KAN-AD utilizing only the raw time series X outperforms KAN, clearly demonstrating the advantage of Fourier series over spline functions for optimizing univariate functions. | 4 | 1 | The KAN-AD model is based on a novel architecture that leverages Fourier series for anomaly detection in time series, which implies a moderate computational overhead given its 1D CNN architecture with stacked layers for coefficient learning.
The training dataset size varies per dataset, with the largest (KPI) containing over 3 million samples, and the training procedure requires several epochs to achieve meaningful results. However, because the architecture focuses on learning Fourier-series coefficients rather than highly complex patterns through deep layers, the model is designed to train efficiently. Given its improvements over existing methods and anecdotal results demonstrating effective learning, we estimate that training over these datasets, including pre-processing, takes around 4 hours on a single powerful GPU (e.g., an NVIDIA RTX 3090/3080). The 1D CNN design allows for efficient memory usage across batch processing, making this feasible. Based on these considerations, a single modern GPU setup can complete the training in under 8 hours. | yes | Yes | Time Series | KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks | 2024-11-01 0:00:00 | https://github.com/issaccv/KAN-AD | 1 | Downloaded when running prepeare_env.sh from the repository; uses the UTS dataset, https://github.com/CSTCloudOps/datasets | There are 5 folders. Training may take around 2 hours or more; the exact time was not specified, but training was progressing quickly. | https://colab.research.google.com/drive/1sE1mKwy3n9yameE-JG27Oa_HI-q8lFn9?usp=sharing | Yes | -- After installing the environment via environment.sh, I changed a line of code to make matplotlib run on Colab, and the typo in the .bin file needs to be fixed, as noted in the Colab notebook. Installing the environment with requirements on Colab takes about 10 minutes. |
Chameleon | CoED | [] | Improving Graph Neural Networks by Learning Continuous Edge Directions | 2024-10-18T00:00:00 | https://arxiv.org/abs/2410.14109v1 | [
"https://github.com/hormoz-lab/coed-gnn"
] | {'Accuracy': '79.69±1.35'} | [
"Accuracy"
] | Given the following paper and codebase:
Paper: Improving Graph Neural Networks by Learning Continuous Edge Directions
Codebase: https://github.com/hormoz-lab/coed-gnn
Improve the CoED model on the Chameleon dataset. The result
should improve on the following metrics: {'Accuracy': '79.69±1.35'}. You must use only the codebase provided.
| Preprint
IMPROVING GRAPH NEURAL NETWORKS BY LEARNING CONTINUOUS EDGE DIRECTIONS
Seong Ho Pahng1,2 & Sahand Hormoz3,2,4
1Department of Chemistry and Chemical Biology, Harvard University 2Department of Data Science, Dana-Farber Cancer Institute 3Department of Systems Biology, Harvard Medical School 4Broad Institute of MIT and Harvard
spahng@g.harvard.edu, sahand hormoz@hms.harvard.edu
ABSTRACT
Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism that resembles diffusion over undirected graphs, which often leads to homogenization of node features and reduced discriminative power in tasks such as node classification. Our key insight for addressing this limitation is to assign fuzzy edge directions—that can vary continuously from node $i$ pointing to node $j$ to vice versa—to the edges of a graph so that features can preferentially flow in one direction between nodes to enable long-range information transmission across the graph. We also introduce a novel complex-valued Laplacian for directed graphs with fuzzy edges where the real and imaginary parts represent information flow in opposite directions. Using this Laplacian, we propose a general framework, called Continuous Edge Direction (CoED) GNN, for learning on graphs with fuzzy edges and prove its expressivity limits using a generalization of the Weisfeiler-Leman (WL) graph isomorphism test for directed graphs with fuzzy edges. Our architecture aggregates neighbor features scaled by the learned edge directions and processes the aggregated messages from in-neighbors and out-neighbors separately alongside the self-features of the nodes. Since continuous edge directions are differentiable, they can be learned jointly with the GNN weights via gradient-based optimization.
CoED GNN is particularly well-suited for graph ensemble data where the graph structure remains fixed but multiple realizations of node features are available, such as in gene regulatory networks, web connectivity graphs, and power grids. We demonstrate through extensive experiments on both synthetic and real datasets that learning continuous edge directions significantly improves performance both for undirected and directed graphs compared with existing methods.
1 INTRODUCTION
Graph Neural Networks (GNNs) have emerged as a powerful tool for learning from data that is structured as graphs, with applications ranging from social network analysis to molecular chemistry (Kipf & Welling, 2017; Zhou et al., 2020; Gilmer et al., 2017). GNNs typically employ a message passing mechanism where nodes aggregate and then transform feature information from their neighbors at each layer, enabling them to learn node representations that capture both local and global graph structures. When the graph is undirected, the aggregation of node features mimics a diffusion process. Each node's representation becomes the averaged features of its immediate neighbors, leading to a homogenization of information across the graph. As depth increases, this diffusion of information culminates in a uniform state where node representations converge towards a constant value across all the nodes, which severely limits the discriminative power of GNNs, especially in tasks such as node classification (Rusch et al., 2023a; Oono & Suzuki, 2020; Cai & Wang, 2020; Li et al., 2018a; Keriven, 2022; Chen et al., 2020a; Wu et al., 2022; 2024).
Our code is available at https://github.com/hormoz-lab/coed-gnn .
arXiv:2410.14109v1 [cs.LG] 18 Oct 2024
Figure 1: (a) When edges are undirected, information diffuses across the graph and long-range transmission of information between nodes 1 and 2 is not possible.
(b) Once the optimal edge directions are learned, information can flow directly from node 1 to node 2.
Our key insight for improving the performance of GNNs is to alter the nature of information transmission between nodes from diffusion to flow. To do so, we add directions to the edges of a graph so that features can be propagated from node $v_i$ to its neighbor node $v_j$ without a reciprocal propagation of information from node $v_j$ to node $v_i$. Unlike diffusion, where information uniformly spreads across available paths, flow is directional and preserves the propagation of information across longer distances within a graph, as illustrated in Figure 1. In general, the optimal information propagation could require edges whose directions fall anywhere in the continuum from node $v_i$ pointing to node $v_j$ to vice versa. To capture such continuous edge directions, we propose a concept of 'fuzzy edges,' where the direction of an edge between any two nodes $v_i$ and $v_j$ is not a discrete but a continuous value. An edge's orientation can range continuously—from exclusively pointing from node $v_i$ to node $v_j$, through a fully bidirectional state, to exclusively pointing from node $v_j$ to node $v_i$. Therefore, the 'fuzzy' direction essentially controls the relative amount of information flow from node $v_i$ to node $v_j$ and the reciprocal flow from node $v_j$ to node $v_i$. To effectively model this directional flexibility, we introduce a complex-valued graph Laplacian called a fuzzy Laplacian. In this framework, the real part of the $ij$-th entry in the fuzzy Laplacian matrix quantifies the degree of information transmission from node $v_j$ to node $v_i$, while the imaginary part measures the flow from node $v_i$ to node $v_j$. Next, we introduce the Continuous Edge Direction (CoED) GNN architecture. At each layer, a node's neighbors' features are scaled by the directions of their connecting edges and aggregated. This aggregation is performed separately for incoming and outgoing edges, following Rossi et al.
(2024), resulting in distinct features for incoming and outgoing messages. Practically, this is implemented by applying the fuzzy Laplacian to the node features, where the real and imaginary parts correspond to the features aggregated from incoming neighbors and outgoing neighbors, respectively. These aggregated features are then affine-transformed using learnable weights and combined with the node's own transformed features. A nonlinear activation function is applied to obtain the updated node features. This process is repeated for each layer. The continuous edge directions have the added benefit that they are differentiable. During training, both the edge directions and weight matrices are learned simultaneously using gradient-based optimization to improve the learning objective.
Importantly, our approach is fundamentally different from methods such as Graph Attention Network (GAT) (Veličković et al., 2018) or graph transformers (Dwivedi & Bresson, 2021; Rampášek et al., 2022) that learn attention coefficients to assign weights to each edge of the graph based on the features (and potentially the positional encoding) of the nodes connected by that edge. While such an attention mechanism can capture asymmetric relationships by computing direction-specific attention weights based on node features, these methods do not learn edge directions as independent parameters. In these models, the attention coefficient from node $v_i$ to node $v_j$ is a function of the features of $v_i$ and $v_j$, and will change if node features change. In contrast, our approach introduces continuous edge directions as learnable parameters that are optimized end-to-end, independent of the node features. As we demonstrate empirically below, by directly learning edge directions, our method goes beyond what can be achieved through an attention mechanism, enabling long-range information flow on graphs and improved performance.
End-to-end learning of edge directions is most effective on graph ensemble data, where the graph structure remains fixed but multiple realizations of node features and targets (such as node labels) are available. This effectiveness arises from the ability to optimize information flow across all edges simultaneously without a need to mask parts of the graph for training and testing. Instead, training and testing splits are based on different feature realizations rather than on subsets of the graph. Graph ensemble data are increasingly common across various domains. For example, in biology, gene-regulatory networks are constant directed graphs where nodes represent genes and edges represent gene-gene interactions, while node features like gene expression levels vary across different cells. Similarly, in web connectivity, the network of websites remains relatively static, but traffic patterns change over time, providing different node feature sets on the same underlying graph. In power grids, the network of electrical components is fixed, while the steady-state operating points of these components vary under different conditions, yielding multiple observations on the same graph. In all these cases, a fixed graph is paired with numerous feature variations. By applying CoED GNN to these scenarios, we demonstrate that learning edge directions significantly improves performance for both directed and undirected graphs.
The main contributions of this paper are the following:
• We introduce a principled complex-valued graph Laplacian for graphs where edge directions can vary continuously and prove that it is more expressive than existing forms of Laplacians for directed graphs, such as the magnetic Laplacian.
• We propose an architecture called Continuous Edge Direction (CoED) GNN, which is a general framework for learning on directed graphs with fuzzy edges.
We prove that CoED GNN is as expressive as a weak form of the Weisfeiler-Leman (WL) graph isomorphism test for directed graphs with fuzzy edges.
• Using extensive experiments, we empirically show that learning edge directions significantly improves performance by applying CoED GNN to both synthetic and real graph ensemble data.
2 PRELIMINARIES
A graph is defined as a pair $G = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_1, v_2, \ldots, v_N\}$ is a set of $N$ nodes, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is a set of edges connecting pairs of nodes. Each node $v_i$ is associated with a feature vector $f_i \in \mathbb{R}^D$, where $D$ is the dimensionality of the feature space, and collectively these feature vectors form the node feature matrix $F \in \mathbb{R}^{N \times D}$. Additionally, each node is assigned a prediction target, such as a class label for classification tasks or a continuous value for regression tasks. The connectivity of a graph is encoded in an adjacency matrix $A \in \{0,1\}^{N \times N}$. If there is an undirected edge between $v_i$ and $v_j$, then both $A_{ij} = 1$ and $A_{ji} = 1$. For a directed edge, one of $A_{ij}$ or $A_{ji}$ is 1 while the other is 0, specifying the direction of information flow. A directed edge with $A_{ij} = 1$ and $A_{ji} = 0$ indicates that $v_j$ sends information to $v_i$. Hence, we refer to $v_j$ as the in-neighbor of $v_i$ and conversely to $v_i$ as the out-neighbor of $v_j$. If both $A_{ij} = 0$ and $A_{ji} = 0$, there is no edge between $v_i$ and $v_j$. Accordingly, in a directed graph, we define two distinct degree matrices: the in-degree matrix $D_{\mathrm{in}} = \mathrm{diag}(A\mathbf{1})$ and the out-degree matrix $D_{\mathrm{out}} = \mathrm{diag}(A^{\top}\mathbf{1})$.
In GNNs, node features $F$ are processed iteratively through a message-passing mechanism that leverages the structural information of the graph $G$. This process involves two main steps at each layer $l$:
1. Message Aggregation: For each node $v_i$, an aggregated message $m^{(l)}_{i,\mathcal{N}(i)}$ is computed from the features of its neighbors:
$$m^{(l)}_{i,\mathcal{N}(i)} = \mathrm{AGGREGATE}\left(\{\!\{(f^{(l-1)}_i, f^{(l-1)}_j) \mid j \in \mathcal{N}(i)\}\!\}\right)$$
Here, $\mathcal{N}(i)$ denotes the set of nodes $v_j$ that are connected to node $v_i$ by an edge.
2.
Feature Update: The feature vector of node $v_i$ is then updated using the aggregated message:
$$f^{(l)}_i = \mathrm{UPDATE}\left(f^{(l-1)}_i, m^{(l)}_{i,\mathcal{N}(i)}\right)$$
AGGREGATE and UPDATE are functions with learnable parameters, and their specific implementations define different GNN architectures (Gilmer et al., 2017).
3 FORMULATION OF GNN ON DIRECTED GRAPHS WITH FUZZY EDGES
3.1 CONTINUOUS EDGE DIRECTIONS AS PHASE ANGLES
To describe a continuously varying edge direction between node $v_i$ and node $v_j$, we assign an angle $\theta_{ij} \in [0, \pi/2]$ to the edge connecting $v_i$ to $v_j$. During aggregation of features from neighbors, features propagated from $v_j$ to $v_i$ are scaled by a factor of $\cos\theta_{ij}$. Conversely, the features that $v_j$ receives from $v_i$ are scaled by $\sin\theta_{ij}$. For example, when $\theta_{ij} = 0$, we have a directed edge where $v_j$ sends messages to $v_i$ but does not receive any messages from $v_i$. When $\theta_{ij} = \pi/4$, the edge is undirected and the same scaling is applied to the messages sent and received by $v_i$ to and from $v_j$, i.e., $\cos(\pi/4) = \sin(\pi/4) = 1/\sqrt{2}$. To ensure consistency, we require that the message received by $v_i$ from $v_j$ be equivalent to the message sent by $v_j$ to $v_i$. It follows that $\theta_{ji} = \pi/2 - \theta_{ij}$. We define the phase matrix $\Theta \in [0, \pi/2]^{N \times N}$ to describe the directions of all the edges in a graph. $(\Theta)_{ij}$ is only defined if there is an edge connecting nodes $v_i$ and $v_j$.
3.2 FUZZY GRAPH LAPLACIAN
To keep our message-passing GNN as expressive as possible, we define a Laplacian matrix that, during the aggregation step, propagates information along directed edges but keeps the aggregated features from in-neighbors and out-neighbors for each node distinct by assigning them to the real and imaginary parts of a complex number, respectively. For a given $\Theta$, we construct the corresponding fuzzy graph Laplacian $L_F$ as follows.
The diagonal entries $(L_F)_{ii}$ are zero, as we cannot define edge directions for self-loops, and off-diagonal entries are either zero or a phase value:
$$(L_F)_{ij} = \begin{cases} 0 & \text{if } A_{ij} = A_{ji} = 0 \\ \exp(i\theta_{ij}) & \text{otherwise} \end{cases} \quad (1)$$
Since $\theta_{ij}$ and $\theta_{ji}$ are related by $\theta_{ji} = \pi/2 - \theta_{ij}$, it follows that $L_F = i L_F^{\dagger}$, where $\dagger$ is the conjugate transpose. $\mathrm{Re}[L_F]$ thus encodes all $i \leftarrow j$ edges scaled by $\cos\theta_{ij}$, and $\mathrm{Im}[L_F]$ all $i \rightarrow j$ edges scaled by $\sin\theta_{ij}$. In Appendix D, we show the fuzzy graph Laplacian admits orthogonal eigenvectors with eigenvalues of the form $a + ia$ with $a \in \mathbb{R}$. Therefore, the eigenvectors of our Laplacian provide positional encodings that are informed by the directions of the edges in addition to their connectivities, as shown in Appendix C. In addition, we prove in Appendix E that a message-passing GNN whose aggregation step is done using the fuzzy Laplacian is as expressive as an extension of the Weisfeiler-Leman (WL) graph isomorphism test to directed graphs with fuzzy edges.
An alternative form of Laplacian for directed graphs is the magnetic Laplacian (Zhang et al., 2021; He et al., 2022), which is also complex-valued. For directed graphs with fuzzy edges, the magnetic Laplacian is not as expressive as the Laplacian proposed above. We provide a proof in Appendix F. Briefly, the magnetic Laplacian produces linear combinations of in- and out-neighbor messages at each node as both the real and imaginary parts of the aggregated features. In principle, GNNs should be able to disentangle these linear combinations to recover the in- and out-neighbor messages. However, the linear combinations depend on the local neighborhood of each node, which is distinct from one node to another, whereas GNN parameters are shared across all nodes. Therefore, in general, a GNN using the magnetic Laplacian loses the ability to disentangle the in- and out-neighbor messages at each node and thus has lower expressivity.
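The construction in Equation (1) and the identity $L_F = i L_F^{\dagger}$ can be checked numerically. Below is a minimal NumPy sketch (not from the CoED codebase; the toy graph, angles, and variable names are illustrative):

```python
import numpy as np

# Hypothetical 4-node path graph with fuzzy edges; theta[i, j] in [0, pi/2]
# scales the message i <- j by cos(theta_ij) and the reverse by sin(theta_ij).
N = 4
edges = [(0, 1), (1, 2), (2, 3)]
rng = np.random.default_rng(0)

theta = np.zeros((N, N))
for i, j in edges:
    t = rng.uniform(0, np.pi / 2)
    theta[i, j] = t
    theta[j, i] = np.pi / 2 - t          # consistency: theta_ji = pi/2 - theta_ij

# Fuzzy Laplacian: zero diagonal, exp(i * theta_ij) on existing edges (Eq. 1)
L = np.zeros((N, N), dtype=complex)
for i, j in edges:
    L[i, j] = np.exp(1j * theta[i, j])
    L[j, i] = np.exp(1j * theta[j, i])

# Structural identity stated in the text: L_F = i * L_F^dagger
assert np.allclose(L, 1j * L.conj().T)

# Real part encodes i <- j edges scaled by cos(theta); imaginary part i -> j
mask = np.abs(L) > 0
assert np.allclose(L.real, np.cos(theta) * mask)
assert np.allclose(L.imag, np.sin(theta) * mask)
```

Setting `t = 0` recovers a purely directed edge and `t = np.pi / 4` an undirected one, matching the phase-angle interpretation above.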
The Laplacian proposed above does not suffer from this limitation since, by construction, the real and imaginary values directly correspond to the in- and out-neighbor aggregated messages, respectively.
3.3 MODEL ARCHITECTURE: CONTINUOUS EDGE DIRECTION (COED) GNN
To ensure maximum expressivity, a message-passing mechanism on a directed graph should, for each node, separately aggregate the features of the in-neighbors and the out-neighbors, and independently process each of the aggregated features and the self-features to obtain an updated feature for each node. To this end, we define in- and out-edge weight matrices as $A_{\leftarrow} = \mathrm{Re}[L_F]$ and $A_{\rightarrow} = \mathrm{Im}[L_F]$, respectively. We compute the in- and out-degree matrices as $D_{\leftarrow} = \mathrm{diag}(A_{\leftarrow}\mathbf{1})$ and $D_{\rightarrow} = \mathrm{diag}(A_{\rightarrow}\mathbf{1})$. Following Rossi et al. (2024), but extending it to graphs with continuous edge directions, we define in- and out-fuzzy propagation matrices as
$$P_{\leftarrow} = D_{\leftarrow}^{-1/2} A_{\leftarrow} D_{\rightarrow}^{-1/2}, \qquad P_{\rightarrow} = D_{\rightarrow}^{-1/2} A_{\rightarrow} D_{\leftarrow}^{-1/2} \quad (2)$$
Using these matrices, we compute in- and out-messages at layer $l$ as
$$m^{(l)}_{\leftarrow} = P_{\leftarrow} F^{(l-1)}, \qquad m^{(l)}_{\rightarrow} = P_{\rightarrow} F^{(l-1)} \quad (3)$$
[Figure 2 schematic panels omitted: an undirected input graph with initial edge directions and the learned graph with learned edge directions; the model $f(X_b; w, \Theta) = \hat{Y}_b$ is trained over ensemble realizations with gradient steps using $\nabla_w \mathcal{L}$ and $\nabla_\Theta \mathcal{L}$.]
Figure 2: Schematic of training with graph ensemble data. The input graph is undirected (left box). The graph ensemble data contains multiple realizations of node features and corresponding target values, either at the node, edge, or graph level. The phase angle formulation allows continuous edge directions to be optimized alongside the GNN parameters in an end-to-end manner (middle box).
The learned edge directions (right box) enable long-range information transmission across the graph.
This defines the AGGREGATE function. Since self-loops are omitted from $L_F$, we include the current node features along with the two directional messages in the UPDATE function and update node features as
$$F^{(l)} = \sigma\left( F^{(l-1)} W^{(l)}_{\mathrm{self}} + m^{(l)}_{\leftarrow} W^{(l)}_{\leftarrow} + m^{(l)}_{\rightarrow} W^{(l)}_{\rightarrow} + B^{(l)} \right) \quad (4)$$
where $\sigma$ is an activation function, and $W^{(l)}_{\mathrm{self}/\leftarrow/\rightarrow}$ and $B^{(l)}$ are self/in/out weight matrices and a bias matrix, respectively. The features at the final layer are then transformed using a linear layer to obtain the output for a specific learning task, e.g., node classification or node regression. We use end-to-end gradient-based optimization to iteratively update both the phase matrix $\Theta$ and the GNN parameters $W^{(l)}_{\mathrm{self}/\leftarrow/\rightarrow}$ and $B^{(l)}$ at each layer, as illustrated in Figure 2. We allow for the option to learn a different set of edge directions at each layer, $\Theta^{(l)}$, just as we have distinct GNN parameters at each layer.
4 RELATED WORK
The issue of feature homogenization in GNNs, known as the oversmoothing problem, has been a significant concern. Early studies identified the low-pass filtering effect of GNNs (Defferrard et al., 2016; Wu et al., 2019), linking it to oversmoothing and loss of discriminative power (Li et al., 2018a; Oono & Suzuki, 2020). Proposed solutions include regularization techniques like edge dropout (Rong et al., 2020), feature masking (Hasanzadeh et al., 2020), layer normalization (Zhao & Akoglu, 2020), incorporating signed edges (Derr et al., 2018), adding residual connections (Chen et al., 2020b), gradient gating (Rusch et al., 2023b), and constraining the Dirichlet energy (Zhou et al., 2021). Dynamical systems approaches have also been explored, modifying message passing via nonlinear heat equations (Eliasof et al., 2021), coupled oscillators (Rusch et al., 2022), and interacting particle systems (Wang et al., 2022; Di Giovanni et al., 2023).
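Equations (2)-(4) can be exercised end-to-end on a toy graph. The following NumPy sketch of a single CoED-style layer uses random stand-in weights and a directed 3-cycle; it is an illustration of the equations, not the repository's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 3, 2

# Fuzzy Laplacian of a directed 3-cycle (theta = 0 on each i <- j edge)
L = np.zeros((N, N), dtype=complex)
for i, j in [(0, 1), (1, 2), (2, 0)]:
    L[i, j] = np.exp(1j * 0.0)           # receive: cos(0) = 1
    L[j, i] = np.exp(1j * np.pi / 2)     # consistent reverse entry

A_in, A_out = L.real, L.imag             # in-/out-edge weight matrices
d_in = np.maximum(A_in.sum(axis=1), 1e-12)    # guard against isolated nodes
d_out = np.maximum(A_out.sum(axis=1), 1e-12)

# Asymmetrically normalized fuzzy propagation matrices (Eq. 2)
P_in = np.diag(d_in ** -0.5) @ A_in @ np.diag(d_out ** -0.5)
P_out = np.diag(d_out ** -0.5) @ A_out @ np.diag(d_in ** -0.5)

F = rng.normal(size=(N, D))              # node features F^(l-1)
W_self, W_in, W_out = (rng.normal(size=(D, D)) for _ in range(3))
B = np.zeros(D)

# Eq. 3: directional messages; Eq. 4: update with a ReLU nonlinearity
m_in, m_out = P_in @ F, P_out @ F
F_next = np.maximum(F @ W_self + m_in @ W_in + m_out @ W_out + B, 0)
```

In the full model, `theta` (and hence `L`) would be a learnable parameter updated by gradient descent alongside the weight matrices.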
Other methods involve learning additional geometric structures, such as cellular sheaves (Bodnar et al., 2022). Extending GNNs to directed graphs has been addressed through various methods. GatedGNN (Li et al., 2016) processed messages from out-neighbors in directed graphs. Some works constructed symmetric matrices from directed adjacency matrices and their transposes to build standard Laplacians (Tong et al., 2020b; Kipf & Welling, 2017), while others (Ma et al., 2019; Tong et al., 2020a) developed Laplacians based on random walks and PageRank (Duhan et al., 2009). MagNet (Zhang et al., 2021) utilized the magnetic Laplacian to represent directed messages, a technique also applied in adapting transformers to directed graphs (Geisler et al., 2023). FLODE (Maskey et al., 2023) employed asymmetrically normalized adjacency matrices within a neural ODE framework. DirGNN (Rossi et al., 2024) separately processed the messages from in-neighbors and out-neighbors using asymmetrically normalized adjacency matrices, improving node classification on heterophilic graphs. A similar strategy was used in Koke & Cremers (2024), replacing the adjacency matrices with filters representing Faber polynomials. Recent graph PDE-based models (Eliasof et al., 2024; Zhao et al., 2023) introduced an advection term to model directional feature propagation alongside diffusion, assigning edge weights based on computed velocities between nodes, akin to attention coefficients in GAT. GNNs have been increasingly applied in domains relevant to our work. In single-cell biology, GNNs have been used to predict perturbation responses in gene expression data (Roohani et al., 2023; Molho et al., 2024), with datasets compiled in scPerturb (Peidli et al., 2024).
In web traffic analysis, a form of spatiotemporal data on graphs, GNNs often model temporal signals using recurrent neural networks on graphs (Li et al., 2018b; Chen et al., 2018; Sahili & Awad, 2023), with datasets and benchmarks provided by the PyTorch Geometric Temporal library (Rozemberczki et al., 2021). In power grids, GNNs have been applied to predict voltage values (Ringsquandl et al., 2021) and solve optimal power flow problems (Donon et al., 2020; Böttcher et al., 2023; Piloto et al., 2024), with datasets compiled by Lovett et al. (2024).
5 EXPERIMENTS
5.1 NODE CLASSIFICATION
We first benchmarked our method on the node classification task on five graphs with varying properties, without simultaneously learning the edge directions. While this is not the main goal of our paper, we include this analysis nonetheless to highlight the advantage of our Laplacian over the magnetic Laplacian as well as the benefit of processing self, in-neighbor aggregated, and out-neighbor aggregated features separately. Importantly, we do not learn edge directions in this case, and hence the phase value is either $0$ or $\pi/2$ for directed graphs and $\pi/4$ for undirected graphs. For comparison, we include the classical models: GCN (Kipf & Welling, 2017), SAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018); a heterophily-specific model, GGCN (Yan et al., 2021); directionality-aware models: MagNet (Zhang et al., 2021), FLODE (Maskey et al., 2023), DirGNN (Rossi et al., 2024); and a model that learns the geometric structure of the graph, Sheaf (Bodnar et al., 2022). We also include an MLP to highlight the effect of solely processing the nodes' self-features without aggregating features across the graph. As shown in Table 1, CoED demonstrates competitive performance across all five datasets, ranking within the top three in terms of test accuracy.
While all models exhibit comparable results on the Cora graph—which is undirected and homophilic—their performances differ significantly on the directed, heterophilic graphs. The classical models developed for undirected graphs particularly struggle on these datasets, with the exception of SAGE, which achieves over 80% accuracy on Texas and Wisconsin. This is because processing only the node's own features yields good performance, as evidenced by the MLP's results. In contrast, for the Squirrel and Chameleon datasets, processing directed messages along only one direction is crucial for good performance. Only FLODE, DirGNN, and CoED exhibit strong results on these datasets when configured accordingly. Specifically, for CoED, we introduce the $\alpha$ hyperparameter as in Rossi et al. (2024) to weigh the directional messages post-aggregation and make the transformation of self-features optional. The selected hyperparameters modify Equation 4 to $\sigma(m^{(l)}_{\leftarrow} W^{(l)}_{\leftarrow} + B^{(l)})$ for these datasets.
Our results highlight the advantage of the fuzzy Laplacian over the magnetic Laplacian. The magnetic Laplacian does not process the aggregated messages from out-neighbors and in-neighbors separately. Instead, it combines them into both the real and imaginary components of the aggregated feature vector, thus losing the opportunity to process the two separately. Moreover, during the Laplacian convolution, the directed messages further mix with self-features encoded in the real component. This results in poor performance by MagNet on Squirrel and Chameleon. Sheaf also suffers on these datasets despite expanding the feature dimensions via an object called a stalk, because the sheaf Laplacian is constrained to be symmetric, thereby losing the ability to process directed messages. FLODE and DirGNN, however, perform poorly on Texas and Wisconsin because they focus exclusively on directed messages and do not process the node's own features separately.

| | Texas | Wisconsin | Squirrel | Chameleon | Cora |
|---|---|---|---|---|---|
| Hom. level | 0.11 | 0.21 | 0.22 | 0.23 | 0.81 |
| Undirected | ✗ | ✗ | ✗ | ✗ | ✓ |
| MLP | 80.81±4.75 | 85.29±3.31 | 28.77±1.56 | 46.21±2.99 | 75.69±2.00 |
| GCN | 55.14±5.16 | 51.76±3.06 | 53.43±2.01 | 64.82±2.24 | 86.98±1.27 |
| SAGE | 82.43±6.14 | 81.18±5.56 | 41.61±0.74 | 58.73±1.68 | 86.90±1.04 |
| GAT | 52.16±6.63 | 49.41±4.09 | 40.72±1.55 | 60.26±2.50 | 86.33±0.48 |
| GGCN | 84.86±4.55 | 86.86±3.29 | 55.17±1.58 | 71.14±1.84 | 87.95±1.05 |
| MagNet | 83.3±6.1 | 85.7±3.2 | 39.01±1.93 | 58.22±2.87 | 82.63±1.80 |
| FLODE | 77.57±5.28 | 80.20±3.56 | 74.03±1.58 | 77.98±1.05 | 86.44±1.17 |
| Sheaf | 85.95±5.51 | 89.41±4.74 | 56.34±1.32 | 68.68±1.73 | 87.30±1.15 |
| DirGNN | 68.38±4.99 | 68.82±5.91 | 75.31±1.92 | 79.71±1.26 | 86.27±1.45 |
| CoED | 84.59±4.53 | 87.84±3.70 | 75.32±1.82 | 79.69±1.35 | 87.02±1.01 |

Table 1: Comparison of baseline models and CoED (without edge direction learning) for the node classification task across different types of graphs. In the original, the top three models in each column are colored First, Second, Third (coloring not preserved here). The reported numbers are the mean and standard deviation of test accuracies across different splits. The first two rows report the homophily ratios of the graphs and whether they are directed or undirected.

Taken together, our benchmarking demonstrates that CoED's ability to effectively process self-features and separately aggregate in-neighbor and out-neighbor messages using our fuzzy Laplacian enables it to achieve competitive performance across diverse datasets.
5.2 NODE REGRESSION ON GRAPH ENSEMBLE DATASET
Our key contribution is the joint learning of continuous edge directions alongside GNN parameters. This approach is most effective on graph ensemble data, where the graph structure remains fixed but multiple realizations of node features and targets exist. By learning edge directions for all edges without a need to mask parts of the graph, our method optimizes information flow across the entire graph.
We empirically demonstrate the benefits of learning continuous edge directions using both synthetic and real-world graph ensemble datasets.
5.2.1 SYNTHETIC DATASETS
Directed flow on triangular lattice. We begin by applying CoED GNN to a node regression problem constructed on a graph with continuous edge directions, where the target node features are obtained by directionally message-passing the input node features over long distances across the graph. To generate such a graph with continuous edge directions exhibiting long-range order, we created a two-dimensional triangular lattice, assigning each node a position in the 2D plane. We then defined a potential energy function $V$ on this plane, consisting of one peak and one valley (Figure 3(a)). The gradient of $V$ yields a vector field with long-range order, which we used to assign continuous edge directions to the edges of the triangular lattice (Figure 3(b)). Using this graph, we performed the message-passing step of Equation 4 iteratively 10 times—using random matrices $W_{\rightarrow}$, $W_{\leftarrow}$, and $W_{\mathrm{self}}$ that were shared across all 10 iterations—starting from the initial node features to obtain the target node values. We repeated this procedure 500 times for different random initial node features and generated an ensemble of input node features and corresponding target node values. During training, we provided all models with the undirected version of the triangular lattice graph (i.e., all $\theta_{ij} = \pi/4$ for CoED). The goal of the learning task is to predict the target node values from the input node features. Additionally, CoED GNN is expected to learn the underlying ground-truth continuous edge directions of the graph as part of its training. Further details on data generation are provided in Appendix A.2.1.
Gene Regulatory Network (GRN) dynamics. Gene Regulatory Networks (GRNs) are directed graphs where nodes represent genes and edges represent interactions between pairs of genes (Figure 3(c)).
In these networks, when two genes interact, one either activates or suppresses the other.

Preprint

[Figure 3 panels: (a) Potential and gradient vector field; (b) Edge directions of lattice graph; (c) Gene regulatory network (GRN), with edges marked as activation or suppression and nodes as genes or knockouts; (d) Data generation using GRN dynamics, dc/dt = GRN(c), with c_i = 0 ∀t if i ∈ {knockout genes}]

Figure 3: Synthetic datasets. (a-b) Triangular lattice graph with edge directions derived from the gradient of a 2D potential function V (shown in a), creating long-range flows across the graph. (c-d) Gene regulatory network (GRN) represented as a directed graph where nodes are genes and edges denote interactions. Steady-state gene expression levels are obtained from GRN dynamics, with perturbations simulated by setting the expression levels of specific genes to zero.

We used Hill functions with randomly chosen parameters to define the dynamics of these gene-gene interactions. We constructed a directed GRN graph with 200 nodes and randomly assigned interactions between them. Starting from random initial expression levels, we solved the system of nonlinear ordinary differential equations representing the GRN dynamics to obtain the steady-state expression levels of all genes. Next, we modeled gene perturbations by setting the expression levels of either one or two genes (the perturbed set of genes) to zero and recomputing the steady-state expression levels for all genes using the same GRN dynamics (Figure 3(d)). We performed this procedure for all single-gene perturbations and a subset of double-gene perturbations, resulting in 1,200 different realizations. Our learning task is to predict the steady-state expression levels of all genes following perturbation (target node values) given the initial steady-state expression levels, with the perturbed genes set to zero (input node features). We provided baseline models with the original graph and CoED with the undirected version of the graph.
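The gene-gene interaction dynamics described above can be sketched as follows. The specific Hill forms used here (exponent n = 2, activating c^n/(K^n + c^n), suppressing K^n/(K^n + c^n)) and all names are our assumptions; the paper states only that Hill functions with randomly chosen parameters were used.

```python
import numpy as np

def hill_act(c, K, n=2.0):
    """Assumed activating Hill function: rises from 0 to 1 as c grows past K."""
    return c**n / (K**n + c**n)

def hill_sup(c, K, n=2.0):
    """Assumed suppressing Hill function: falls from 1 to 0 as c grows past K."""
    return K**n / (K**n + c**n)

def grn_rhs(c, regulators, gamma, K, sign):
    """dc_i/dt for each gene i: Hill terms over its regulators j minus
    linear decay, in the spirit of Equation 5 in Appendix A.2.2.

    regulators[i] -> list of genes j regulating gene i
    sign[(i, j)]  -> +1 for an activating edge, -1 for a suppressing edge
    """
    dc = -c.copy()  # linear decay term
    for i, regs in regulators.items():
        for j in regs:
            f = hill_act if sign[(i, j)] > 0 else hill_sup
            dc[i] += gamma[(i, j)] * f(c[j], K[(i, j)])
    return dc
```

With no regulators the dynamics reduce to pure exponential decay, and each activating edge contributes a bounded positive term scaled by its magnitude gamma.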
Further details of the data generation are provided in Appendix A.2.2.

Results. As shown in Table 2, CoED achieves the best performance on both synthetic datasets by a considerable margin. For comparison, we include several baseline models: classical models, GCN and GAT; a transformer-based model with positional encoding, GraphGPS (Rampášek et al., 2022); a directionality-aware model, MagNet; a higher-order model, DRew (Gutteridge et al., 2023); and a combination of a directionality-aware and higher-order model, FLODE. Details of the training setup, hyperparameter search procedure, and selected hyperparameters are provided in Appendix A.2.2.

To identify which aspects of GNNs are particularly effective for learning on graph ensemble datasets, we analyze the baseline models' results in detail. On the undirected lattice graph, MagNet provides only a slight improvement over GCN, which is expected since MagNet reduces to ChebNet (Defferrard et al., 2016) on undirected graphs, and GCN is a first-order truncation of ChebNet. However, in the directed GRN experiments, MagNet shows substantial improvement over GCN. We also observe that higher-order GNNs like DRew and FLODE perform competitively on the undirected lattice graph. Notably, FLODE's instantaneous enhancement of connectivity via fractional powers of the graph Laplacian outperforms DRew's more gradual incorporation of higher-hop messages. However, both methods struggle on the directed GRN graph. On both synthetic graphs, attention-based models, GAT and GraphGPS, deliver strong performance, coming in just behind CoED. GraphGPS, in particular, seems to benefit from its final global attention step, similar to how FLODE benefits from densifying the graph. We also notice that increasing the dimension of the Laplacian positional encoding does not further enhance GraphGPS's performance. Interestingly, models that learn edge weights via attention mechanisms outperform MagNet on the directed GRN graph.
This is likely because MagNet's unitary evolution of complex-valued features does not resemble the actual feature propagation (i.e., the GRN dynamics), in addition to the shortcomings highlighted in the previous section. We then investigated whether CoED can recover the true continuous edge directions of the triangular lattice graph, given that the feature propagation steps during data generation closely match the message-passing operation of CoED. As shown in Figure 4, CoED correctly learns the true directions. Lastly, since both synthetic datasets are generated by propagating input features over multiple hops, we investigated how performance scales with model depth by training CoED and the second-best model with up to 10 layers. Figure 5 demonstrates that CoED continues to improve as depth increases, while the performance of the other models plateaus at a shallower depth.

          Lattice      GRN
GCN       77.56±0.47   69.38±0.62
GAT        9.41±0.05   12.07±1.50
GraphGPS   3.47±0.14   25.16±1.56
MagNet    75.06±0.03   43.42±4.34
DRew      28.55±0.02   69.92±0.15
FLODE      7.54±0.05   70.31±0.03
CoED       1.36±0.06    5.02±0.45

Table 2: Comparison of different models on the synthetic datasets. Values are test losses reported with a common factor of 10^-3 in both columns.

Figure 4: Learned theta vs. true theta for CoED applied to directed flow on the triangular lattice synthetic dataset.

[Figure 5 panels: (a) Test loss vs. depth on triangular lattice dataset; (b) Test loss vs. depth on GRN dataset]

Figure 5: Model performance as a function of depth.

5.2.2 REAL DATASETS

Single-cell Perturb-seq. Perturb-seq (Dixit et al., 2016) is a well-established experimental technique in single-cell biology that inspired the synthetic GRN experiment described earlier. In Perturb-seq experiments, one or more genes in a cell are knocked out, resulting in zero expression, as in our synthetic GRN dataset. The resulting changes in the expression levels of all other genes are then measured to elucidate gene-gene interactions.
For our study, we used the Replogle-gwps dataset (Replogle et al., 2022; Peidli et al., 2024), which includes 9,867 distinct single-gene perturbations, along with control measurements from cells without any perturbation to establish baseline gene expression levels. The learning task is again predicting the expression levels of all genes following perturbation given the initial steady state with the expression levels of the perturbed genes set to zero. Since there is no ground-truth gene regulatory network (GRN) available for this dataset, we constructed an undirected k-nearest neighbors graph to connect genes with highly correlated expression patterns. All models are trained using this heuristic graph. Details of the data processing procedure are provided in Appendix A.3.1.

Wikipedia web traffic. We also modeled the traffic flow between Wikipedia pages using the WikiMath dataset, which is classified as a "static graph with temporal signals" in the PyTorch Geometric Temporal library (Rozemberczki et al., 2021). In this dataset, each node corresponds to an article page on a popular mathematics topic, and each directed edge represents a link from one page to another. The node features are the daily visit counts of all pages over a period of 731 consecutive days. The learning task is node regression: predict the next day's visit counts across all pages given today's visit counts. We trained the baseline models using the true, directed graph, while CoED was trained starting from the undirected version of the graph. Additional details are provided in Appendix A.3.2.

Power grid. We applied CoED to the optimal power flow (OPF) problem using OPFData (Lovett et al., 2024) from the PyTorch Geometric library. In this dataset, a power grid is represented as a directed graph with nodes corresponding to buses (connection points for generators and loads) and edges representing transformers and AC lines.
Input features are the operating values of all components under specific load conditions, and the targets are the corresponding AC-OPF solution values at the generator nodes. To compare different models, we used a consistent architecture across components but substituted different model layers for message passing. For CoED, the edges were again converted to undirected edges. Additional details are provided in Appendix A.3.3.

         Perturb-seq  Web traffic  Power grid
GCN      4.13±0.08    7.07±0.03    28.56±6.08
MagNet   4.11±0.01    6.94±0.02    18.05±2.77
GAT      3.85±0.03    6.00±0.03    13.57±1.73
DirGCN   5.46±0.26    6.72±0.04     6.15±0.84
DirGAT   3.98±0.07    6.55±0.04     3.28±0.17
CoED     3.56±0.03    5.76±0.05     2.91±0.11

Table 3: Comparison of different methods on real graph ensemble datasets. Values are test losses reported with common factors of 10^1, 10^-1, and 10^-3 for the Perturb-seq, web traffic, and power grid columns, respectively.

Results. Table 3 reports the test performances of all baseline models and CoED on the three datasets. The baselines include GAT, MagNet, DirGCN, and DirGAT. We focus on these models because attention-based approaches showed competitive performance on the synthetic datasets, and MagNet, which accounts for edge directions, performed well on the directed GRN dataset. Details of the training setup, hyperparameter search procedure, and selected hyperparameters are provided in the Appendix. We observe that CoED achieves the best performance across all three datasets. On the Perturb-seq dataset with an undirected graph, MagNet performs similarly to GCN, while DirGCN struggles. We attribute DirGCN's poor performance to clashing learnable parameters: it uses two distinct weight matrices, W← and W→, applied to identical in- and out-neighbor aggregated messages in the case of undirected graphs.
For a propagation path of L hops, this results in 2^L feature transformations, composed of different combinations of the two weight matrices, which together reduce the model's ability to efficiently learn the optimal weight matrices. In contrast, DirGAT's attention mechanisms break the symmetry of the undirected edges, leading to improved performance. CoED naturally addresses this issue in undirected graphs by learning the edge directions. On the web traffic and power grid datasets, which have directed graphs, we observe a similar trend. MagNet outperforms GCN due to its ability to process directed messages. However, GAT delivers better performance than MagNet, likely because its attention mechanism effectively captures important features. Since directed graphs create distinct feature propagation paths, DirGCN achieves substantial performance gains. DirGAT further improves upon DirGCN by leveraging additional edge weight learning through an attention mechanism. CoED surpasses all these models, demonstrating the effectiveness of learning continuous edge directions.

6 CONCLUSION

We have introduced the Continuous Edge Direction (CoED) GNN, which assigns fuzzy, continuous directions to the edges of a graph and employs a novel complex-valued Laplacian to transform information propagation on graphs from diffusion to directional flow. Our theoretical analysis shows that CoED GNN is more expressive than existing Laplacian-based methods and matches the expressiveness of an extended Weisfeiler-Leman test for directed graphs with fuzzy edges. Through extensive experiments on both synthetic and real-world graph ensemble datasets, including gene regulatory networks, web traffic, and power grids, we demonstrated that learning continuous edge directions significantly improves performance over existing GNN models. These results highlight CoED GNN's effectiveness in enabling long-range information flow, offering a powerful framework for learning on graphs.
REFERENCES

Cristian Bodnar, Francesco Di Giovanni, Benjamin Chamberlain, Pietro Liò, and Michael Bronstein. Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in GNNs. In Advances in Neural Information Processing Systems, volume 35, pp. 24850–24863, 2022.

Luis Böttcher, Hinrikus Wolf, Bastian Jung, Philipp Lutat, Marc Trageser, Oliver Pohl, Xiaohu Tao, Andreas Ulbig, and Martin Grohe. Solving AC power flow with graph neural networks under realistic constraints. In 2023 IEEE Belgrade PowerTech, pp. 1–7. IEEE, 2023.

Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318, 2020.

Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3438–3445, 2020a.

Jinyin Chen, Xuanheng Xu, Yangyang Wu, and Haibin Zheng. GC-LSTM: Graph convolution embedded LSTM for dynamic link prediction. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI), pp. 219–225, 2018.

Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In International Conference on Machine Learning, pp. 1725–1735. PMLR, 2020b.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems, 29, 2016.

Tyler Derr, Yao Ma, and Jiliang Tang. Signed graph convolutional networks. In 2018 IEEE International Conference on Data Mining (ICDM), pp. 929–934. IEEE, 2018.

Francesco Di Giovanni, James Rowbottom, Benjamin P. Chamberlain, Thomas Markovich, and Michael M. Bronstein. Understanding convolution on graphs via energies. Transactions on Machine Learning Research, 2023.
Atray Dixit, Oren Parnas, Biyu Li, Jenny Chen, Charles P Fulco, Livnat Jerby-Arnon, Nemanja D Marjanovic, Danielle Dionne, Tyler Burks, Raktima Raychowdhury, et al. Perturb-seq: dissecting molecular circuits with scalable single-cell RNA profiling of pooled genetic screens. Cell, 167(7):1853–1866, 2016.

Balthazar Donon, Rémy Clément, Benjamin Donnot, Antoine Marot, Isabelle Guyon, and Marc Schoenauer. Neural networks for power flow: Graph neural solver. Electric Power Systems Research, 189:106547, 2020.

Neelam Duhan, AK Sharma, and Komal Kumar Bhatia. Page ranking algorithms: a survey. In 2009 IEEE International Advance Computing Conference, pp. 1530–1537. IEEE, 2009.

Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021.

Moshe Eliasof, Eldad Haber, and Eran Treister. PDE-GCN: Novel architectures for graph neural networks motivated by partial differential equations. Advances in Neural Information Processing Systems, 34:3836–3849, 2021.

Moshe Eliasof, Eldad Haber, and Eran Treister. Feature transportation improves graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 11874–11882, 2024.

Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.

Simon Geisler, Yujia Li, Daniel J Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, and Cosmin Paduraru. Transformers meet directed graphs. In International Conference on Machine Learning, pp. 11144–11172. PMLR, 2023.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. PMLR, 2017.

Mingyu Guan, Anand Padmanabha Iyer, and Taesoo Kim. DynaGraph: dynamic graph neural networks at scale.
In Proceedings of the 5th ACM SIGMOD Joint International Workshop on Graph Data Management Experiences & Systems (GRADES) and Network Data Analytics (NDA), pp. 1–10, 2022.

Benjamin Gutteridge, Xiaowen Dong, Michael M Bronstein, and Francesco Di Giovanni. DRew: Dynamically rewired message passing with delay. In International Conference on Machine Learning, pp. 12252–12267. PMLR, 2023.

William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), volume 30, 2017.

Arman Hasanzadeh, Ehsan Hajiramezanali, Shahin Boluki, Mingyuan Zhou, Nick Duffield, Krishna Narayanan, and Xiaoning Qian. Bayesian graph neural networks with adaptive connection sampling. In International Conference on Machine Learning, pp. 4094–4104. PMLR, 2020.

Yixuan He, Michael Perlmutter, Gesine Reinert, and Mihai Cucuringu. MSGNN: A spectral graph neural network based on a novel magnetic signed Laplacian. In Learning on Graphs Conference, pp. 40–1. PMLR, 2022.

Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.

Nicolas Keriven. Not too little, not too much: a theoretical analysis of graph (over)smoothing. Advances in Neural Information Processing Systems, 35:2268–2281, 2022.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.

C Koke and D Cremers. HoloNets: Spectral convolutions do extend to directed graphs. In International Conference on Learning Representations (ICLR), 2024.

Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018a.

Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu.
Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations (ICLR), 2018b.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

Sean Lovett, Miha Zgubic, Sofia Liguori, Sephora Madjiheurem, Hamish Tomlinson, Sophie Elster, Chris Apps, Sims Witherspoon, and Luis Piloto. OPFData: Large-scale datasets for AC optimal power flow with topological perturbations. arXiv preprint arXiv:2406.07234, 2024.

Yi Ma, Jianye Hao, Yaodong Yang, Han Li, Junqi Jin, and Guangyong Chen. Spectral-based graph convolutional network for directed graphs. arXiv preprint arXiv:1907.08990, 2019.

Sohir Maskey, Raffaele Paolino, Aras Bacho, and Gitta Kutyniok. A fractional graph Laplacian approach to oversmoothing. In Advances in Neural Information Processing Systems, 2023.

Dylan Molho, Jiayuan Ding, Wenzhuo Tang, Zhaoheng Li, Hongzhi Wen, Yixin Wang, Julian Venegas, Wei Jin, Renming Liu, Runze Su, et al. Deep learning in single-cell analysis. ACM Transactions on Intelligent Systems and Technology, 15(3):1–62, 2024.

Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations (ICLR), 2020.

Hongwei Pei, Bingzhe Wei, Kevin Chang, Yiming Lei, and Bo Yang. Geom-GCN: Geometric graph convolutional networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Stefan Peidli, Tessa D Green, Ciyue Shen, Torsten Gross, Joseph Min, Samuele Garda, Bo Yuan, Linus J Schumacher, Jake P Taylor-King, Debora S Marks, et al. scPerturb: harmonized single-cell perturbation data. Nature Methods, pp. 1–10, 2024.
Luis Piloto, Sofia Liguori, Sephora Madjiheurem, Miha Zgubic, Sean Lovett, Hamish Tomlinson, Sophie Elster, Chris Apps, and Sims Witherspoon. CANOS: A fast and scalable neural AC-OPF solver robust to N-1 perturbations. arXiv preprint arXiv:2403.17660, 2024.

Adolfo Piperno et al. Isomorphism test for digraphs with weighted edges. In Proceedings SEA2018. Schloss Dagstuhl–Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2018.

Ladislav Rampášek, Michael Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer. Advances in Neural Information Processing Systems, 35:14501–14515, 2022.

Joseph M Replogle, Reuben A Saunders, Angela N Pogson, Jeffrey A Hussmann, Alexander Lenail, Alina Guna, Lauren Mascibroda, Eric J Wagner, Karen Adelman, Gila Lithwick-Yanai, et al. Mapping information-rich genotype-phenotype landscapes with genome-scale Perturb-seq. Cell, 185(14):2559–2575, 2022.

Martin Ringsquandl, Houssem Sellami, Marcel Hildebrandt, Dagmar Beyer, Sylwia Henselmeyer, Sebastian Weber, and Mitchell Joblin. Power to the relational inductive bias: Graph neural networks in electrical power grids. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 1538–1547, 2021.

Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. DropEdge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Representations (ICLR), 2020.

Yusuf Roohani, Kexin Huang, and Jure Leskovec. Predicting transcriptional outcomes of novel multigene perturbations with GEARS. Nature Biotechnology, pp. 1–9, 2023.

Emanuele Rossi, Bertrand Charpentier, Francesco Di Giovanni, Fabrizio Frasca, Stephan Günnemann, and Michael M Bronstein. Edge directionality improves learning on heterophilic graphs. In Learning on Graphs Conference, pp. 25–1. PMLR, 2024.
Benedek Rozemberczki, Paul Scherer, Yixuan He, George Panagopoulos, Maria Sinziana Astefanoaei, Oliver Kiss, Ferenc Beres, Nicolas Collignon, and Rik Sarkar. PyTorch Geometric Temporal: Spatiotemporal signal processing with neural machine learning models. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM), pp. 4564–4573, 2021. doi: 10.1145/3459637.3482014.

T Konstantin Rusch, Ben Chamberlain, James Rowbottom, Siddhartha Mishra, and Michael Bronstein. Graph-coupled oscillator networks. In International Conference on Machine Learning, pp. 18888–18909. PMLR, 2022.

T Konstantin Rusch, Michael M Bronstein, and Siddhartha Mishra. A survey on oversmoothing in graph neural networks. arXiv preprint arXiv:2303.10993, 2023a.

T. Konstantin Rusch, Benjamin P. Chamberlain, Michael W. Mahoney, Michael M. Bronstein, and Siddhartha Mishra. Gradient gating for deep multi-rate learning on graphs. In International Conference on Learning Representations (ICLR), 2023b.

Zahraa Al Sahili and Mariette Awad. Spatio-temporal graph neural networks: A survey. arXiv preprint arXiv:2301.10569, 2023.

Zekun Tong, Yuxuan Liang, Changsheng Sun, Xinke Li, David S Rosenblum, and Andrew Lim. Digraph inception convolutional networks. In Advances in Neural Information Processing Systems (NeurIPS), 2020a.

Zekun Tong, Yuxuan Liang, Changsheng Sun, David S. Rosenblum, and Andrew Lim. Directed graph convolutional network. arXiv preprint arXiv:2004.13970, 2020b.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.

Yuelin Wang, Kai Yi, Xinliang Liu, Yu Guang Wang, and Shi Jin. ACMP: Allen-Cahn message passing with attractive and repulsive forces for graph neural networks. In The Eleventh International Conference on Learning Representations, 2022.
F Alexander Wolf, Philipp Angerer, and Fabian J Theis. SCANPY: large-scale single-cell gene expression data analysis. Genome Biology, 19:1–5, 2018.

Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Q Weinberger. Simplifying graph convolutional networks. In International Conference on Machine Learning, pp. 6861–6871. PMLR, 2019.

Xinyi Wu, Zhengdao Chen, William Wang, and Ali Jadbabaie. A non-asymptotic analysis of oversmoothing in graph neural networks. arXiv preprint arXiv:2212.10701, 2022.

Xinyi Wu, Amir Ajorlou, Zihui Wu, and Ali Jadbabaie. Demystifying oversmoothing in attention-based graph neural networks. Advances in Neural Information Processing Systems, 36, 2024.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.

Shen Yan, Zeyuan Allen Xu, An Gu, Yufeng Sun, Charu Aggarwal, and Neil Shah. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM), pp. 1812–1821, 2021.

Xitong Zhang, Yixuan He, Nathan Brugnone, Michael Perlmutter, and Matthew Hirn. MagNet: A neural network for directed graphs. Advances in Neural Information Processing Systems, 34:27003–27015, 2021.

Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, and Wee Peng Tay. Graph neural convection-diffusion with heterophily. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pp. 4656–4664, 2023.

Lingxiao Zhao and Leman Akoglu. PairNorm: Tackling oversmoothing in GNNs. In International Conference on Learning Representations (ICLR), 2020.

Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57–81, 2020.
Kaixiong Zhou, Xiao Huang, Daochen Zha, Rui Chen, Li Li, Soo-Hyun Choi, and Xia Hu. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34:21834–21846, 2021.

A EXPERIMENTAL DETAILS

A.1 NODE CLASSIFICATION

All the results reported in Table 1 are taken from (Bodnar et al., 2022) except for MagNet, FLODE, and DirGNN. MagNet's results on Texas and Wisconsin are taken from the original paper, while those on Squirrel and Chameleon are taken from (Rossi et al., 2024). FLODE's results on Squirrel, Chameleon, and Cora, and DirGNN's results on Squirrel and Chameleon are taken from their respective papers. We trained these models and CoED for the remaining results and describe the training procedures below. All five datasets were downloaded using the PyTorch Geometric library (Fey & Lenssen, 2019) with the split='geom-gcn' argument to use the 10 fixed 48%/32%/20% training/validation/test splits provided by (Pei et al., 2020), which are the splits used in all the referenced results.

Training. We evaluated the validation accuracy at each epoch, incrementing a counter if the value did not improve and resetting it to 0 when a new best validation accuracy was achieved. Training was early-stopped when the counter reached a patience of 200. Unless otherwise mentioned, we used the default hyperparameter settings of the respective models. For DirGNN, we chose GCN as the convolution layer since it showed competitive performance across the various datasets considered in the original paper. We used the ReLU activation function and the ADAM optimizer in all experiments. Across all models, we searched over the following hyperparameters: hidden dimension ∈ {32, 64, 128, 256}, learning rate ∈ {5e-4, 1e-3, 2e-3, 5e-3, 1e-2, 2e-2}, and weight decay ∈ {0, 1e-4, 5e-4, 1e-3}. With the exception of FLODE, we set the dropout rate to 0.5 and tuned the number of layers ∈ {2, 3, 4, 5}.
We additionally searched over model-specific hyperparameters: the weight between in-/out-neighbor aggregated messages α ∈ {0, 0.5, 1}, jumping knowledge (jk) ∈ {None, 'cat', 'max'}, and layer-wise feature normalization (norm) ∈ {True, False} for DirGNN and CoED; additionally, self-feature transform ∈ {True, False} and self-loop value ∈ {0, 1} for CoED; the order of the Chebyshev polynomial K ∈ {1, 2}, the global directionality q ∈ {0, 0.05, 0.1, 0.15, 0.2, 0.25}, and self-loop value ∈ {0, 1} for MagNet; and the number of layers ∈ {1, 2, 3}, self-loop value ∈ {0, 1}, and dropout rate for encoder and decoder ∈ {0, 0.5} for FLODE. Note that the number of layers refers to the number of forward Euler steps in FLODE. We set the number of layers in both encoder and decoder MLPs to 1 to focus on the effect of the fractional Laplacian, and solved the heat equation with a minus sign, which was the setup employed for the Squirrel and Chameleon experiments in the original paper. If the self-loop value is 1, the self-feature is combined with neighbors' features in the AGGREGATE function. We report in Table 1 the mean accuracy and standard deviation over the 10 test splits using the best hyperparameters presented below. All experiments were performed on two NVIDIA RTX 6000 Ada Generation GPUs with 48GB of memory, and it took roughly 5 days of training to produce the results.
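The patience-based early stopping used throughout these experiments can be sketched as a small helper (the class name and interface are ours; the node classification runs used a patience of 200 on validation accuracy, and the regression runs a patience of 20 on validation MSE):

```python
class EarlyStopper:
    """Increment a counter each epoch the validation metric fails to
    improve; reset it to zero on a new best value; stop when the counter
    reaches the patience threshold (hypothetical helper class)."""

    def __init__(self, patience=200, higher_is_better=True):
        self.patience = patience
        self.higher_is_better = higher_is_better
        self.best = None
        self.counter = 0

    def step(self, value):
        """Record one epoch's validation metric; return True to stop training."""
        improved = self.best is None or (
            value > self.best if self.higher_is_better else value < self.best
        )
        if improved:
            self.best = value
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience
```

For validation accuracy one would keep `higher_is_better=True`; for validation MSE, as in the regression experiments, `higher_is_better=False`.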
                   # layers  # hidden  lr    wd    dropout  self-loop  K  q  α    norm   jk    self-feature
MagNet (Cora)      1         32        1e-2  0     0.5      1          2  0  -    -      -     -
FLODE (Texas)      1         128       1e-2  1e-3  0        0          -  -  -    -      -     -
FLODE (Wisconsin)  1         128       5e-3  0     0.5      0          -  -  -    -      -     -
DirGNN (Cora)      2         64        5e-3  5e-4  0.5      -          -  -  0    False  None  -
DirGNN (Texas)     4         64        5e-3  1e-3  0.5      -          -  -  0.5  False  None  -
DirGNN (Wisconsin) 2         128       5e-3  1e-4  0.5      -          -  -  0.5  False  None  -
CoED (Texas)       2         64        2e-2  5e-4  0.5      0          -  -  0.5  False  None  True
CoED (Wisconsin)   2         128       2e-2  1e-3  0.5      0          -  -  0.5  False  None  True
CoED (Squirrel)    3         128       1e-2  0     0        0          -  -  0    True   max   False
CoED (Chameleon)   4         128       5e-3  0     0        0          -  -  0    True   cat   False
CoED (Cora)        2         128       5e-4  1e-4  0.5      1          -  -  0    False  None  False

Table A.1: Hyperparameters selected for the node classification experiments.

A.2 GRAPH ENSEMBLE EXPERIMENT WITH SYNTHETIC DATASETS

A.2.1 DATA GENERATION AND TRAINING SETUP FOR THE DIRECTED FLOW TRIANGULAR LATTICE GRAPH

Data generation. To obtain the triangular lattice graph described in the main text, we first designed a potential function V on the [−2, 2]^2 plane with a peak (source) and a valley (sink). We used quadratic potentials located at µ_1 = (−1, 1) and µ_2 = (1, −1) with stiffness matrices

K_1 = K_2 = (1 0; 0 1) (the 2×2 identity matrix).

With magnitudes a_1 = 1 and a_2 = −1, the potential function V(x) is parameterized as

V(x) = a_1 (x − µ_1)^⊤ K_1 (x − µ_1) + a_2 (x − µ_2)^⊤ K_2 (x − µ_2).

We then generated a triangular lattice on the 2d plane and considered this lattice as a graph G_lattice, where the vertices of the lattice serve as the nodes of the graph and the edges of the lattice form the edges of the graph. In this way, each node v_i ∈ V_lattice has an associated spatial position x_i on the 2d plane. We computed ∆V_ij = V(x_j) − V(x_i) for all (v_i, v_j) ∈ E_lattice. All ∆V_ij values were shifted and scaled to the range [0, π/2] to obtain the θ_ij, which is an approximate version of the gradient direction of the potential. In the resulting lattice graph, an edge points towards the node with the lower potential energy.
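With the parameters given above, the potential and the mapping from potential differences to edge angles can be written directly. The constants follow the appendix (µ_1 = (−1, 1), µ_2 = (1, −1), a_1 = 1, a_2 = −1, identity stiffness matrices); the function names and the exact linear rescaling are our sketch.

```python
import numpy as np

MU1, MU2 = np.array([-1.0, 1.0]), np.array([1.0, -1.0])
A1, A2 = 1.0, -1.0  # peak (source) and valley (sink) magnitudes

def V(x):
    """Quadratic two-well potential; K1 = K2 = I, so the quadratic forms
    reduce to squared distances from the two centers."""
    d1, d2 = x - MU1, x - MU2
    return A1 * (d1 @ d1) + A2 * (d2 @ d2)

def edge_angles(pos, edges):
    """Shift and scale the potential differences dV_ij = V(x_j) - V(x_i)
    linearly onto [0, pi/2] to obtain continuous edge directions theta_ij."""
    dV = np.array([V(pos[j]) - V(pos[i]) for i, j in edges])
    lo, hi = dV.min(), dV.max()
    return (dV - lo) / (hi - lo) * (np.pi / 2)
```

On the line between the two centers the potential passes through zero at the midpoint, so edges descending toward the valley receive larger angles than edges near the peak.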
We then assigned to each node a 10-dimensional random feature vector sampled independently from the standard multivariate normal distribution, normalized the vectors to have unit norm, and repeated the process 500 times to generate an ensemble of node features. To generate the corresponding target values, we propagated features using Equation 4, with the message-passing matrices P→ and P← computed from the Θ of the lattice graph as described in Equation 2, and the entries of the 10×10 weight matrices W→, W←, and W_self sampled independently from the standard normal distribution and shared across all 10 iterations. Instead of applying an activation function, we normalized the features m_self + m→ + m← to have unit norm. We used the features after 10 iterations of message passing as the target values. We divided these 500 instances of feature-target pairs using a 60%/20%/20% random training/validation/test split.

Training. We used a batch size of 16 for training with random shuffling at each epoch and a full batch for both validation and testing. We evaluated the validation MSE at each epoch, incrementing a counter if the value did not improve and resetting it to 0 when a new best validation MSE was achieved. Training was early-stopped when the counter reached a patience of 20. We used neither dropout nor weight decay, as we aim to learn an exact mapping from node features to target values for regression, as opposed to a noise-robust node embedding for classification. We used the ReLU activation function in all models, except for ELU in GAT, and used the ADAM optimizer for all experiments. Across all models, we searched over the following hyperparameters: the number of layers ∈ {2, 3, 4}, hidden dimension ∈ {16, 32, 64}, and learning rate ∈ {1e-3, 2e-3, 5e-3, 1e-2}.
We additionally grid-searched over model-specific hyperparameters: the number of attention heads ∈ {1, 4, 8} and skip connection (sc) ∈ {True, False} for GAT; attention type ∈ {'multihead', 'performer'}, attention heads ∈ {1, 4, 8}, encoding type ∈ {'eigenvector', 'electrostatic'}, the dimension of eigenvector encoding ∈ {2, 5, 10, 20}, and self-loop value ∈ {0, 1} for GraphGPS; the order of the Chebyshev polynomial K ∈ {1, 2} and self-loop value ∈ {0, 1} for MagNet; multi-hop aggregation mechanism ∈ {'sum', 'weight'} for DRew; the number of encoder MLP layers ∈ {1, 2, 3}, the number of decoder MLP layers ∈ {1, 2, 3}, and self-loop value ∈ {0, 1} for FLODE; self-feature transform ∈ {True, False}, learning rate for Θ ∈ {1e-3, 5e-3, 1e-2}, and layer-wise (lw) Θ learning ∈ {True, False} for CoED. For GraphGPS, we used GINE as the convolution layer and provided 1 as an edge attribute. Computing structural encoding via random walks resulted in an all-zero vector, since the node degree is 3 everywhere except at the boundary of our lattice graph. We thus opted to use the electrostatic function encoding provided in the original paper as an alternative to structural encoding. For MagNet, we optimized q along with the model parameters during training. We supplied the undirected version of the lattice graph by setting all θ_ij values to π/4 and used self-feature transform with the self-loop value set to 0. We did not use layer-wise feature normalization. Table 2 reports the mean accuracy and standard deviation on the test data from the top 5 out of 7 training runs with different initializations, using the best hyperparameters shown below. All experiments were performed on two NVIDIA RTX 6000 Ada Generation GPUs with 48GB of memory, and it took about 3 days of training time to generate the results.

A.2.2 DATA GENERATION AND TRAINING SETUP FOR THE GRN DYNAMICS EXPERIMENT

Data generation.
We prepared a directed adjacency matrix A for a graph with 200 nodes by sampling each entry of the matrix independently from a Bernoulli distribution with success probability 0.03.

| Model | # layers | # hidden | lr | self-loop | sc | # attn. heads | attn. type | enc. type | K | aggr. | # enc. layers | # dec. layers | self-feature | lr Θ | lw Θ |
| GCN | 2 | 16 | 1e-3 | - | - | - | - | - | - | - | - | - | - | - | - |
| GAT | 4 | 32 | 1e-3 | - | True | 4 | - | - | - | - | - | - | - | - | - |
| GraphGPS | 4 | 64 | 1e-3 | 0 | - | 4 | multihead | eigenvector (dim=5) | - | - | - | - | - | - | - |
| MagNet | 4 | 64 | 1e-3 | 1 | - | - | - | - | 2 | - | - | - | - | - | - |
| DRew | 3 | 64 | 5e-3 | - | - | - | - | - | - | weight | - | - | - | - | - |
| FLODE | 4 | 64 | 1e-2 | 0 | - | - | - | - | - | - | 1 | 3 | - | - | - |
| CoED | 4 | 64 | 1e-3 | - | - | - | - | - | - | - | - | - | True | 1e-3 | False |

Table A.2: Hyperparameters selected for node regression on the synthetic lattice graph.

We interpreted an edge A_ij as indicating that the gene represented by node v_j regulates the gene represented by node v_i. We then randomly chose half of the edges as activating edges and the other half as suppressing edges. The scalar feature value of each node is the expression level of a gene, measured as concentration c_i. In order to simulate the gene regulatory network dynamics, where genes either up- or down-regulate connected genes, we sampled the magnitudes of activation γ^act_ij and suppression γ^sup_ij from a uniform distribution with support [0.5, 1.5]. We additionally sampled half-saturation constants K_ij, which control how quickly c_i changes in response to c_j, from a uniform distribution with support [0.25, 0.75]. Lastly, we sampled the initial concentrations independently from a uniform distribution with support [0.1, 10] for each gene, and ran the GRN dynamics described by

dc_i/dt = Σ_{j∈N(i)} [ γ^act_ij F_act(c_j, K_ij) + γ^sup_ij F_sup(c_j, K_ij) ] − c_i   (5)

for 250 time steps with dt = 0.05 to reach a steady state for c.
The summation in the above equation is over all genes that either activate or repress gene i. F_act(c_j, K_ij) = c_j² / (K_ij² + c_j²) and F_sup(c_j, K_ij) = K_ij² / (K_ij² + c_j²) are the Hill functions defining up- and down-regulation of gene i by gene j, respectively. From this steady state, we mimicked the gene knockout experiments of biology by setting the concentration values of a chosen set of genes to zero and running the GRN dynamics for an additional 100 time steps, by which time the genes reached new steady-state values. We performed a single-gene knockout for all 200 genes and a double-gene knockout for 1000 randomly selected pairs of genes. We defined node features as the original steady state with the values of knockout genes set to zero, and the corresponding target values as the new steady-state values reached from this state. This procedure generates an ensemble of 1200 feature-target pairs for each node of the synthetic GRN graph. We used all 200 single-gene knockout results as training data, and randomly selected 200 and 800 double-gene knockout results for validation and testing, respectively.

Training. For each knockout result, we used all nodes for regression except those corresponding to the knocked-out genes. We used a batch size of 8 for training with random shuffling at each epoch, and a full batch for both validation and testing. We evaluated the validation loss at every epoch, and implemented the same counting scheme as in the directed flow experiment to early-stop the training with a patience of 50. We searched over the number of layers ∈ {2, 3, 4, 5}, hidden dimension ∈ {16, 32}, and learning rate ∈ {5e-4, 1e-3, 2e-3, 5e-3}, and otherwise conducted the same hyperparameter search as described in the lattice experiment, using the same training setup. Table 2 reports the mean accuracy and standard deviation on the test data from the top 5 out of 7 training runs with different initializations, using the best hyperparameters shown below.
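The GRN dynamics of Equation 5 can be simulated with a simple forward-Euler integrator. The sketch below uses our own naming and draws the activating/suppressing split per edge at random rather than exactly half-and-half; it is an illustration, not the authors' implementation.

```python
import numpy as np

# Minimal Euler-integration sketch of the GRN dynamics (Eq. 5); names are ours.
rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.03).astype(float)  # A[i, j]: gene j regulates gene i
np.fill_diagonal(A, 0.0)
activating = rng.random((n, n)) < 0.5          # activating vs. suppressing edges
gamma = rng.uniform(0.5, 1.5, (n, n))          # regulation magnitudes
K = rng.uniform(0.25, 0.75, (n, n))            # half-saturation constants
c = rng.uniform(0.1, 10.0, n)                  # initial concentrations

def hill_act(c, K):   # up-regulation: c^2 / (K^2 + c^2)
    return c**2 / (K**2 + c**2)

def hill_sup(c, K):   # down-regulation: K^2 / (K^2 + c^2)
    return K**2 / (K**2 + c**2)

def step(c, dt=0.05):
    act = (A * activating) * gamma * hill_act(c[None, :], K)
    sup = (A * ~activating) * gamma * hill_sup(c[None, :], K)
    dc = act.sum(axis=1) + sup.sum(axis=1) - c  # Eq. 5, summed over regulators
    return c + dt * dc

for _ in range(250):
    c = step(c)  # approach steady state
```

Since the regulatory inputs are non-negative and dt < 1, the concentrations remain non-negative throughout the integration.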
All experiments were performed on two NVIDIA RTX 6000 Ada Generation GPUs with 48GB of memory, and it took about 3 days of training time to generate the results.

| Model | # layers | # hidden | lr | self-loop | sc | # attn. heads | attn. type | enc. type | K | aggr. | # enc. layers | # dec. layers | self-feature | lr Θ | lw Θ |
| GCN | 3 | 32 | 5e-4 | - | - | - | - | - | - | - | - | - | - | - | - |
| GAT | 5 | 32 | 2e-3 | - | True | 8 | - | - | - | - | - | - | - | - | - |
| GraphGPS | 5 | 32 | 5e-3 | 0 | - | 4 | multihead | eigenvector (dim=10) | - | - | - | - | - | - | - |
| MagNet | 5 | 32 | 5e-3 | 1 | - | - | - | - | 2 | - | - | - | - | - | - |
| DRew | 3 | 32 | 5e-4 | - | - | - | - | - | - | weight | - | - | - | - | - |
| FLODE | 5 | 32 | 5e-3 | 1 | - | - | - | - | - | - | 1 | 1 | - | - | - |
| CoED | 5 | 32 | 1e-3 | - | - | - | - | - | - | - | - | - | True | 1e-2 | True |

Table A.3: Hyperparameters selected for node regression on the synthetic GRN graph.

A.3 GRAPH ENSEMBLE EXPERIMENT WITH REAL DATASETS

A.3.1 PREPROCESSING, DATA GENERATION, AND TRAINING SETUP FOR SINGLE-CELL PERTURBATION EXPERIMENTS

Preprocessing. We downloaded the Replogle-gwps dataset from the scPerturb database (Peidli et al., 2024) and followed the standard single-cell preprocessing routine using the Scanpy software (Wolf et al., 2018), selecting for the top 2000 most variable genes. This involved running the following four functions: filter_cells with the min_counts=20000 argument, normalize_per_cell, filter_genes with the min_cells=50 argument, and highly_variable_genes with the n_top_genes=2000, flavor='seurat_v3', and layer='counts' arguments. Afterwards, we discarded genes that were not part of the top 2000 most variable genes. These 2000 genes define the nodes. We did not log-transform the expression values and used the normalized expression values obtained from these preprocessing steps for all downstream tasks. Out of 9867 genes that were perturbed in the original dataset, 958 were among the top 2000 most variable genes. Note that perturbed genes should have near-zero expression values, since the perturbation type in the original experiment was gene knockout via a technique called CRISPRi.
Therefore, we disregarded perturbed genes if the perturbations did not result in more than 50% of cells with zero expression values for each respective gene. This preprocessing step identifies 824 effective gene perturbations. Note that there are multiple measurements per perturbation.

Data generation for node regression. In perturbation experiments, the expression values are measured 'post-perturbation' (equivalent to c_i + ∆c_i in Figure 3(d) of the main text). Consequently, we do not have access to their 'pre-interaction' expression levels (equivalent to c_i with a perturbed gene's value set to 0). To pair each post-perturbation expression profile with its putative pre-interaction state, we randomly sampled a control measurement and set the expression value of the perturbed gene to 0. These pairs of pre-interaction and post-perturbation expressions define the ensemble of features and targets. Since the number of measurements varies per perturbation, we standardized the dataset diversity by downsampling the feature-target pairs to 2 per perturbation. We split each dataset based on perturbations, grouping all cells subject to the same perturbation together. We performed 60/10/30 training/validation/test splits and computed a k-nearest-neighbors gene-gene graph with k = 3 using the training split. We checked that this graph roughly corresponds to creating an edge between genes whose Pearson correlation coefficient is higher than 0.5. We also confirmed that this graph forms a single connected component.

Training. As in the GRN example, we used all nodes (i.e., genes) for regression except for those corresponding to a knocked-out gene. We used a batch size of 16 for training with random shuffling at each epoch, and a full batch for both validation and testing. We evaluated the validation loss at every epoch, and implemented the same counting scheme as in the synthetic dataset experiments to early-stop the training with a patience of 30.
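The gene-gene graph construction can be sketched as follows. This is our own minimal NumPy version using Pearson correlation as the similarity; the function name and details are assumptions, not the authors' exact pipeline.

```python
import numpy as np

# Sketch: build a k-NN gene-gene graph from a cells x genes expression matrix,
# linking each gene to its k most-correlated other genes. Names are ours.
def knn_gene_graph(X, k=3):
    C = np.corrcoef(X, rowvar=False)   # genes x genes Pearson correlations
    np.fill_diagonal(C, -np.inf)       # exclude self-correlation
    edges = []
    for g in range(C.shape[0]):
        for nb in np.argsort(C[g])[-k:]:   # indices of the k most-correlated genes
            edges.append((g, int(nb)))
    return edges

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))         # 100 cells, 10 genes (toy data)
edges = knn_gene_graph(X, k=3)
```

Each gene contributes exactly k outgoing edges; symmetrizing the edge list would yield the undirected graph used for message passing.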
Since the models' performances generally improved as their depth and hidden dimension increased, we used 4 layers and a hidden dimension of 32 across all models (to streamline the comparison). We searched over learning rate ∈ {1e-3, 5e-3, 1e-2} for all models; the order of the Chebyshev polynomial K ∈ {1, 2} for MagNet; the number of attention heads ∈ {1, 4, 8} for both GAT and DirGAT; additionally, skip connection (sc) ∈ {True, False} for GAT; and self-feature transform ∈ {True, False}, learning rate for Θ ∈ {5e-4, 1e-3, 5e-3, 1e-2}, and layer-wise Θ learning ∈ {True, False} for CoED. We learned q in MagNet. Table 3 reports the mean accuracy and standard deviation on the test data from the top 5 out of 7 training runs with different initializations, using the best hyperparameters shown below. All experiments were performed on two NVIDIA RTX 6000 Ada Generation GPUs with 48GB memory, and approximately one day of training time was spent generating the results.

A.3.2 TRAINING SETUP FOR WEB TRAFFIC EXPERIMENTS

We downloaded the WikiMath dataset from the PyTorch Geometric Temporal library (Rozemberczki et al., 2021) with the default time-lag value of 8. We followed the same temporal split as in the paper, where 90% of the snapshots were used for training and 10% of the forecasting horizons were used for testing.

| Model | lr | K | sc | # attn. heads | self-feature | lr Θ | lw Θ |
| GCN | 5e-3 | - | - | - | - | - | - |
| MagNet | 1e-3 | 2 | - | - | - | - | - |
| GAT | 1e-3 | - | True | 4 | - | - | - |
| DirGCN | 5e-3 | - | - | - | - | - | - |
| DirGAT | 1e-3 | - | - | 1 | - | - | - |
| CoED | 1e-3 | - | - | - | False | 5e-4 | False |

Table A.4: Hyperparameters selected for node regression on the Perturb-seq data.

Training. The input features and target values are shaped as N×8 and N×1, respectively. Since we consider a pair of two consecutive snapshots as the input and the corresponding target, we disregarded the first 7 snapshots (i.e., feature dimensions) and used only the node values (i.e., visit counts of Wikipedia articles) of the last snapshot for prediction at test time.
For training, we utilized all 8 snapshots, generating 8 predictions. The MSE losses for the predictions from the first 7 snapshots were each evaluated against the next 7 snapshots as target values. This procedure is similar to the 'incremental training mode' used to train time-series-based models (Rozemberczki et al., 2021; Eliasof et al., 2024; Guan et al., 2022). We used a full batch for both training and testing. We evaluated the test loss at each step, and implemented the same counting scheme used above to early-stop training with a patience value of 50. We used 2 layers with a hidden dimension of 16 across all models. We searched over learning rate ∈ {1e-3, 5e-3, 1e-2, 2e-2} for all models and also over the same model-specific hyperparameters discussed in the Perturb-seq experiments, except the number of attention heads, which was limited to 2 due to memory constraints. We learned q for MagNet. Table 3 reports the mean accuracy and standard deviation on the test data from the top 5 out of 7 training runs with different initializations, using the best hyperparameters shown below. All experiments were performed on two NVIDIA RTX 6000 Ada Generation GPUs with 48GB memory, and approximately 16 hours of training time was spent generating the results.

| Model | lr | K | sc | # attn. heads | self-feature | lr Θ | lw Θ |
| GCN | 1e-2 | - | - | - | - | - | - |
| MagNet | 5e-3 | 1 | - | - | - | - | - |
| GAT | 1e-2 | - | True | 2 | - | - | - |
| DirGCN | 2e-2 | - | - | - | - | - | - |
| DirGAT | 5e-2 | - | - | 1 | - | - | - |
| CoED | 5e-3 | - | - | - | True | 1e-2 | False |

Table A.5: Hyperparameters selected for node regression on the WikiMath data.

A.3.3 TRAINING SETUP FOR POWER GRID EXPERIMENTS

We downloaded a power grid graph with 2000 nodes from the PyTorch Geometric library, compiled via OPFData (Lovett et al., 2024). We selected the 'full top' topology option to obtain the information of the entire graph, as opposed to the 'N-1' perturbation option, which masks parts of the graphs. We randomly sampled 300 graphs and split them into a training set of 200, a validation set of 50, and a test set of 50. Power grids are heterogeneous graphs.
Refer to Figure 1 of Piloto et al. (2024) for a detailed overview of the different components. Importantly, generators, loads, and shunts ('subnodes') are all connected to buses ('nodes') via edges, and buses are connected to one another via two types of edges: transformers and AC lines. Thus, the load profile across the graph informs the generator subnodes via the bus-to-bus edges. We describe the architectural design choices that we made to accomplish this task while facilitating effective model comparison.

Model. The primary goal is to ensure information flow into those buses connected to generators. We thus substituted different model layers to process messages over the two types of bus-to-bus edges while keeping the rest of the architecture unchanged. The processing steps are:

1. Transform node/subnode features and edge features using an MLP.
2. For each bus node, integrate the features of its subnodes into its own features via GraphConv (i.e., W_1 x_i + W_2 Σ_{j∈N(i)} e_{j,i} · x_j).
3. Incorporate edge features into node features using GINE.
4. Iterate message passing among bus nodes.
5. Decode the features of bus nodes into generator operating-point values.

We varied the message-passing mechanism in step 4 by applying the different Aggregate and Update functions of each of the models that we analyzed.

Training. Following the example training routine outlined in Lovett et al. (2024), we trained models to predict generator active and reactive power outputs and evaluated the MSE loss against those values in the AC-OPF solutions. We did not incorporate AC-OPF constraints, as the focus of the experiments was to compare the message-passing capabilities of CoED with those of other models. We refer to Böttcher et al. (2023) and Piloto et al. (2024) for predicting AC-OPF solutions that satisfy constraints. We used a batch size of 16 during training with random shuffling applied at each epoch.
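The subnode-to-bus aggregation of step 2 can be sketched as follows. This is a NumPy illustration with our own names, not the PyTorch Geometric GraphConv implementation itself.

```python
import numpy as np

# Sketch of step 2: folding subnode (generator/load/shunt) features into their
# bus node with a GraphConv-style update, W1 x_i + W2 * sum_j e_{j,i} x_j.
rng = np.random.default_rng(0)
d = 8                                  # hidden dimension (toy value)
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))

def bus_update(x_bus, sub_feats, edge_weights):
    """x_bus: (d,) bus feature; sub_feats: (m, d) subnode features;
    edge_weights: (m,) scalar edge attributes e_{j,i}."""
    agg = (edge_weights[:, None] * sub_feats).sum(axis=0)  # weighted neighbor sum
    return W1 @ x_bus + W2 @ agg

x_bus = rng.normal(size=d)
subs = rng.normal(size=(3, d))         # e.g. one generator, one load, one shunt
out = bus_update(x_bus, subs, np.ones(3))
```

With all edge weights set to zero the update reduces to the self-transform W1 x_i, which makes the two roles of the weight matrices easy to verify.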
We evaluated the validation loss at every epoch and early-stopped the training with the same counting scheme with a patience of 50. We iterated step 4 above 3 times (i.e., 3 layers) and used a hidden dimension of 32. We searched over learning rate ∈ {5e-4, 1e-3, 2e-3, 5e-3}, and otherwise over the same set of hyperparameters considered in the Perturb-seq experiments. We learned q for MagNet. Table 3 reports the mean accuracy and standard deviation on the test data from the top 5 out of 7 training runs with different initializations, using the best hyperparameters shown below. All training runs were performed on two NVIDIA RTX 6000 Ada Generation GPUs with 48GB memory, and approximately one day of training time was spent generating the results.

| Model | lr | K | sc | # attn. heads | self-feature | lr Θ | lw Θ |
| GCN | 5e-3 | - | - | - | - | - | - |
| MagNet | 1e-3 | 2 | - | - | - | - | - |
| GAT | 2e-3 | - | False | 4 | - | - | - |
| DirGCN | 1e-3 | - | - | - | - | - | - |
| DirGAT | 1e-3 | - | - | 8 | - | - | - |
| CoED | 5e-4 | - | - | - | False | 5e-3 | True |

Table A.6: Hyperparameters selected for node regression on the AC-OPF data.

B TIME AND SPACE COMPLEXITY

Let V and E denote the numbers of nodes and edges, and let H denote the hidden dimension, which we assume stays constant across two consecutive layers. At minimum, the message-passing mechanism outlined in Equation 2 results in a time complexity that scales as O(EH + VH²), where the first term corresponds to aggregating features of dimension H and the second term stems from the matrix-matrix multiplication of node features and a weight matrix. The space complexity is O(E + H²), due to the edges and the weight matrix. With the sparse form of the phase matrix Θ, CoED incurs an additional O(E) term, both in the time complexity from computing the fuzzy propagation matrices P←/→ and in the space complexity from storing Θ. Unless layer-wise Θ learning is employed, this computation happens once and thus adds only minimal overhead. Layer-wise Θ learning adds an O(EL) term to the total time and space complexities over L layers.
For comparison, however, we note that computing S-head attention incurs an O(EH) term (or even an O(EHS) term if the feature dimension is not divided across the S heads) in the time complexity and an O(ES) term in the space complexity per layer.

C POSITIONAL ENCODING USING THE FUZZY LAPLACIAN

Graph Laplacians can be used to assign a positional encoding to each node of a graph based on the connectivity pattern of the graph's nodes. Using the fuzzy Laplacian, we can extend positional encoding to include variations in the directions of the edges surrounding a node, in addition to the connectivity pattern of the graph. To demonstrate the utility of the eigenvectors of the fuzzy Laplacian for positional encoding, we visualize the eigenvectors of the Laplacian computed from the triangular lattice graph (which has a trivial connectivity pattern, as shown above by the random walk structural encoding being trivially zero) supplemented with two different sets of edge directions. In the first case, we obtained edge directions from the gradient of the source-sink potential function described in A.2.1. These edge directions are visualized in Figure 3(b) of the main text. In the second case, we obtained the edge directions for the same triangular lattice graph from the following solenoidal vector field,

F(x, y) = ( sin(πx) cos(πy), −cos(πx) sin(πy) ),

which has no sources or sinks but instead features a cyclic flow. To assign edge directions using the solenoidal vector field, we computed θ_ij as the angle between the unit vector pointing from v_i to v_j and the vector field evaluated at the midpoint of the edge, i.e., F((x_i + x_j)/2). All θ_ij were then scaled to the range [0, π/2]. As shown in Figure C.1(a-e), the real part of the eigenvector distinguishes the peak and the valley from the region in between, whereas the phase of the eigenvector distinguishes the peak from the valley. Similarly, in Figure C.1(f-j), the real part of the eigenvector highlights the regions adjacent to the cyclic flows.
The magnitude of the eigenvector picks out the centers of the four solenoids in the middle. Taken together, the eigenvectors of the fuzzy Laplacian contain positional information based on the directions of the edges, as demonstrated here for the same graph with two different sets of edge directions.

Figure C.1: Visualization of the eigenvector corresponding to the eigenvalue with the largest magnitude of the fuzzy Laplacian of the triangular lattice, whose edge directions are taken from the source-sink potential function described in the main text (top row, panels a-e) and from the solenoidal vector field described in this section (bottom row, panels f-j). The original vector fields are shown in the left-most figures; the remaining panels show the real part, imaginary part, phase, and magnitude of the eigenvector, which encode positional information at each node about the direction of the edges surrounding that node.

D MATHEMATICAL PROPERTIES OF THE FUZZY LAPLACIAN

We propose a new graph Laplacian matrix for directed graphs which generalizes to the case of directed graphs with fuzzy edges, where an edge connecting two nodes A and B can take on any intermediate value between the two extremes of pointing from A to B and pointing from B to A. We will show that our Laplacian exhibits two useful properties:

1. The eigenvectors of our Laplacian matrix are orthogonal and can therefore be used as positional encodings of the nodes of the graph.

2.
For any node of the graph, our Laplacian aggregates information from neighbors that send information to the node separately from neighbors that receive information from the node, and is therefore as expressive as a weak form of the Weisfeiler-Leman (WL) graph isomorphism test for directed graphs with fuzzy edges, which we define below.

D.1 DIRECTED GRAPHS WITH FUZZY EDGES

We define a directed graph with fuzzy edges as follows. Our definition builds on the standard definition of a graph. A graph G is an ordered pair G := (V, E) comprising a set V of vertices or nodes together with a set E of edges. Each edge is a 2-element subset of V.

• V: a finite, non-empty set of vertices V = {v_1, v_2, ..., v_n}.
• E: a set of edges, each linking two vertices in V: E = {(v_i, v_j) | v_i, v_j ∈ V}.

To incorporate fuzzy directions into the edges, we define a new attribute for each edge.

• µ: a function defining the direction of each edge, µ: (v_i, v_j) → [0, 1], such that for each edge (v_i, v_j) ∈ E, µ(v_i, v_j) = x implies µ(v_j, v_i) = √(1 − x²).

In this model, each edge is associated with a scalar x that represents its direction. The value x is a real number in the interval [0, 1]. If, for an edge (v_i, v_j), µ(v_i, v_j) = x, then it must hold that µ(v_j, v_i) = √(1 − x²), capturing the edge in both directions. For example, if µ(v_i, v_j) = 1, then the edge is an arc (directed edge) from node v_i to node v_j. If µ(v_i, v_j) = 0, then the edge is an arc from node v_j to node v_i. If µ(v_i, v_j) = 1/√2, then the edge is a bidirectional edge connecting node v_i to node v_j and node v_j to node v_i.

For a scalar-directed edge graph G = (V, E, µ), the adjacency matrix A is a square matrix of dimension |V| × |V|. The entry A_ij of the matrix is defined as

A_ij = µ(v_i, v_j) if (v_i, v_j) ∈ E, and A_ij = 0 otherwise.

In this setting, µ(v_i, v_j) captures the direction of the edge from v_i to v_j. It follows that if an edge is present between nodes v_i and v_j, then A_ji = √(1 − A_ij²).
Therefore, the adjacency matrix captures not only the presence of edges but also their directions, according to the function µ.

D.2 FUZZY LAPLACIAN MATRIX

For a scalar-directed edge graph G = (V, E, µ) with adjacency matrix A, we define its fuzzy Laplacian matrix L_F as follows. The diagonal entries of L_F are zero: (L_F)_ii = 0. The off-diagonal elements are

(L_F)_ij = 0 if A_ij = A_ji = 0, and (L_F)_ij = e^{iθ_ij} otherwise,   (6)

where θ_ij is selected such that cos(θ_ij) = A_ij. In other words, the real part of e^{iθ_ij} is equal to the corresponding adjacency matrix entry A_ij. We require that 0 ≤ θ_ij ≤ π/2; it follows that θ_ji = π/2 − θ_ij.

The fuzzy Laplacian L_F satisfies the property

L_F = i L_F*.

To confirm this, note that e^{−iθ_ij} = cos(−θ_ij) + i sin(−θ_ij) = cos(θ_ij) − i sin(θ_ij). Therefore, (L_F)_ji = sin(θ_ij) + i cos(θ_ij) = i e^{−iθ_ij}, and thus L_F = i L_F*. The fuzzy Laplacian takes the schematic form

L_F = ( 0 ⋯ e^{iθ_ij} ⋯ ; ⋯ i e^{−iθ_ij} ⋯ 0 ),   (7)

where e^{iθ_ij} and i e^{−iθ_ij} are sample off-diagonal elements corresponding to the edge (v_i, v_j) in the graph.

D.3 PROPERTIES OF THE FUZZY LAPLACIAN MATRIX L_F

In this section, we show that the fuzzy Laplacian matrix L_F has eigenvalues of the form a + ia, where a ∈ R, and orthogonal eigenvectors.

D.3.1 EIGENVALUES OF THE FORM a + ia

L_F has eigenvalues of the form a + ia with a ∈ R.

Proof: Let λ be an eigenvalue of L_F, and let w be the corresponding eigenvector. Then:

L_F w = λw ⇒ w* L_F* = λ* w* ⇒ w* L_F* w = λ* w* w ⇒ −i w* L_F w = λ* w* w ⇒ −i λ w* w = λ* w* w ⇒ −iλ = λ*,

where we used L_F* = −i L_F. The last identity holds only when λ = a + ia, where a is a real number.

D.3.2 ORTHOGONAL EIGENVECTORS

To prove that L_F has orthogonal eigenvectors, we need to show that if w and v are eigenvectors corresponding to distinct eigenvalues λ1 and λ2, respectively, then w and v are orthogonal.

Proof: Let L_F w = λ1 w and L_F v = λ2 v. Then

v* L_F w = λ1 v* w ⇒ i v* L_F* w = λ1 v* w ⇒ i λ2* v* w = λ1 v* w ⇒ λ2 v* w = λ1 v* w,

where we used λ = iλ*, derived above, in the last step. Therefore, λ2 v* w = λ1 v* w. Since λ1 ≠ λ2, it must be that v* w = 0, i.e., w and v are orthogonal.
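These properties can be checked numerically. The following is a small self-contained sketch with our own random construction of a fuzzy directed graph; here `*` denotes the conjugate transpose, as above.

```python
import numpy as np

# Numerical sanity check of the fuzzy Laplacian properties (Section D.3).
rng = np.random.default_rng(0)
n = 6
L = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.7:                   # pair (i, j) shares an edge
            theta = rng.uniform(0.0, np.pi / 2)  # fuzzy direction angle theta_ij
            L[i, j] = np.exp(1j * theta)         # e^{i theta_ij}
            L[j, i] = 1j * np.exp(-1j * theta)   # i e^{-i theta_ij}

# Property: L_F = i L_F^* (conjugate transpose).
assert np.allclose(L, 1j * L.conj().T)

# Eigenvalues have the form a + ia (real part equals imaginary part),
# and eigenvectors for distinct eigenvalues are orthogonal (L is normal).
lam, V = np.linalg.eig(L)
gram = V.conj().T @ V  # off-diagonal entries should vanish
```

Note that L_F = i L_F* implies L_F is a normal matrix (L_F L_F* = L_F* L_F), which is why its eigenvectors come out orthogonal.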
Because the eigenvectors of the fuzzy Laplacian matrix L_F are orthogonal, they can be used as positional encodings of the nodes of the graph.

E EXPRESSIVITY OF NEURAL NETWORKS USING THE FUZZY LAPLACIAN

E.1 GRAPH ISOMORPHISM FOR DIRECTED GRAPHS WITH FUZZY EDGES

First, we extend the standard definition of graph isomorphism to the case of directed graphs with fuzzy edges, following a similar approach as in Piperno et al. (2018). We only consider graphs with a finite number of nodes; therefore, the set of all edge weights forms a countable set with finite cardinality. Two directed fuzzy graphs G = (V_G, E_G, µ_G) and H = (V_H, E_H, µ_H) are said to be isomorphic if there exists a bijection f: V_G → V_H such that, for every pair of vertices u, v in V_G, the following conditions hold:

1. (u, v) is an edge in E_G if and only if (f(u), f(v)) is an edge in E_H, for all vertices u and v in V_G.

2. For every edge (u, v) in E_G that maps to the edge (f(u), f(v)) in E_H, the corresponding weights satisfy µ_G(u, v) = µ_H(f(u), f(v)).

E.2 WEISFEILER-LEMAN TEST FOR ISOMORPHISM OF DIRECTED GRAPHS WITH FUZZY EDGES

Next, we extend the Weisfeiler-Leman (WL) graph isomorphism test to determine whether two directed graphs with fuzzy edges are isomorphic according to the extended definition of graph isomorphism stated above. The WL test is a vertex refinement algorithm that assigns starting features to each node of the graph. The algorithm then aggregates all the features of each node's neighbors and hashes the aggregated labels, alongside the node's own label, into a unique new label. At each iteration of the algorithm, the lists of labels are compared across the two graphs. If they differ, the two graphs are not isomorphic. If the labels are no longer updated at an iteration, the two graphs can potentially be isomorphic. We extend the WL test to directed graphs with fuzzy edges in two ways. Importantly, for all the proofs that follow, we assume that the graph has a finite number of nodes.
The strong form of the WL test for fuzzy directed graphs: Given a directed graph with fuzzy edges G = (V, E, µ), the strong form of the WL test calculates a node coloring C^(t): V → {1, 2, ..., k}, a surjective function that maps each vertex to a color. At the first iteration, C^(0) = 0. At all subsequent iterations,

C^(t)(i) = Relabel( C^(t−1)(i), {{ (µ_ij, C^(t−1)_j) : j ∈ N(i) }} ),   (8)

where (µ_ij, C^(t−1)_j) is a tuple comprising the weight of the edge from node i to node j and C^(t−1)_j, the color of node j at the previous iteration. To simplify notation, from now on we write µ_ij to denote the weight of the edge connecting nodes i and j instead of µ(v_i, v_j). N(i) denotes the set of all neighbors of node i. The function Relabel is an injective function that assigns a unique new color to each node based on the node's color at the previous iteration and the tuples formed from its neighbors' colors and the weights of the edges connecting the node to those neighbors.

From the definition above, it follows that a graph neural network Γ: G → R^d that aggregates and updates the node features as

h^(k)_i = ϕ( h^(k−1)_i, f({{ (µ_ij, h^(k−1)_j) : j ∈ N(i) }}) )   (9)

will map two graphs G_1 and G_2 to different vectors in R^d if the above strong form of the graph coloring test deems the two graphs non-isomorphic. In the above equation, h^(k)_i is the hidden representation (or feature) of node i at the k-th layer. ϕ and f are injective functions, with f acting on multisets of tuples of the edge weights and the hidden representations of the neighbors of node i. The proof of the above statement is a straightforward extension of the proof of Theorem 3 in Xu et al. (2018). Briefly, the multiset of the features of the neighbors of node i can be converted from a multiset of tuples of the form {{ (µ_ij, h^(k−1)_j) : j ∈ N(i) }} to a multiset of augmented features of the neighbors {{ h̃^(k−1)_j : j ∈ N(i) }}, where h̃^(k−1)_j = (µ_ij, h^(k−1)_j).
Because the µ_ij form a set of finite cardinality, the augmented feature space of h̃^(k−1)_j is still a countable set, and the same proof as in Xu et al. (2018) applies.

The weak form of the WL test for fuzzy directed graphs: We introduce a weaker form of the WL test for directed graphs with fuzzy edges in which a given node cannot distinguish between neighboring nodes of the same color based on the weights of the edges they share. Rather, the node aggregates the weights of all outgoing and incoming edges to its neighbors of a given color. The weak form of the algorithm is as follows. At the first iteration, C^(0) = 0. At all subsequent iterations,

C^(t)(i) = Relabel( C^(t−1)(i), { ( Σ_{j∈N(i)} δ(C^(t−1)_j − c) µ_ij, Σ_{j∈N(i)} δ(C^(t−1)_j − c) µ_ji, c ) : c ∈ C^(t−1)_{N(i)} } ).   (10)

The term Σ_{j∈N(i)} δ(C^(t−1)_j − c) µ_ij sums the edge weights µ_ij over all neighbors of i that have color c. The tuple now also contains a similar term that sums the edge weights pointing from node j to node i, µ_ji, over all neighbors of the same color. As stated above, the edge weights are related, namely µ_ji = √(1 − µ_ij²). The color of node i at the previous iteration, alongside the set of tuples containing the sum of incoming edge weights from neighbors of a given color, the sum of outgoing edge weights to neighbors of that color, and the color of those neighbors, are the inputs to the injective function Relabel, which assigns a unique color to node i for the next iteration.

Theorem. Let Γ: G → R^d be a graph neural network that updates node features as

h^(k)_i = MLP^(k)( h^(k−1)_i, ℜ[ Σ_{j∈N(i)} (L_F)_ij h^(k−1)_j ], ℑ[ Σ_{j∈N(i)} (L_F)_ij h^(k−1)_j ] ),   (11)

where MLP^(k) is a multi-layer perceptron. The last two arguments of the MLP are the real and imaginary parts of the aggregated features of the neighbors of node i, Σ_{j∈N(i)} (L_F)_ij h^(k−1)_j.
With a sufficient number of layers, the parameters of Γ can be learned such that it is as expressive as the weak form of the WL test, in that Γ maps two graphs G_1 and G_2 that the weak form of the WL test decides to be non-isomorphic to different embeddings in R^d.

Proof. First, we claim that there exists a GNN of the form

h^(k)_i = ϕ( f(h^(k−1)_i), Σ_{j∈N(i)} µ_ij f(h^(k−1)_j), Σ_{j∈N(i)} µ_ji f(h^(k−1)_j) )   (12)

that is as expressive as the weak form of the WL test. We prove this by induction. Note that F(i) = Σ_{j∈N(i)} µ_ij f(h^(k−1)_j) and G(i) = Σ_{j∈N(i)} µ_ji f(h^(k−1)_j) are only injective up to the sums of the incoming and outgoing weights for a given type of neighbor (see Figure E.2 for an example). This is because the weak form of the WL test only accounts for the sums of the weights of outgoing and incoming edges to neighboring nodes of a given color. This differs from previous work that dealt with undirected graphs (Xu et al., 2018) or graphs with directed but unweighted edges (Rossi et al., 2024).

Let us denote by X the multiset of the colors of all neighbors of node i after a given number of iterations of the color refinement algorithm. Equivalently, we can consider the multiset of all features of the neighboring nodes of i after a given number of iterations of the GNN algorithm. Because our graphs have a finite number of nodes, X is bounded. For graphs without directed or weighted edges, it can be shown (see Lemma 5 of Xu et al. (2018)) that there always exists an injective function f such that F(X) = Σ_{x∈X} f(x) is injective for all multisets X, i.e., if F(X) = F(Y) for two multisets X and Y, then X = Y. This result can be easily extended to regular directed graphs without weighted edges by considering the neighbors with incoming edges separately from the neighbors with outgoing edges (Rossi et al., 2024). For such graphs, a function f exists such that the tuple (Σ_{x∈X→} f(x), Σ_{x∈X←} f(x)) is the same for two nodes only if they have identical neighborhoods.
Here X→ and X← denote the multisets of features of the neighboring nodes that are connected to i via an outgoing edge and an incoming edge, respectively. Let us proceed with the proof by induction. At the first iteration, k = 0, all the nodes have the same color, corresponding to the same trivial feature (e.g., the scalar 0). The functions F(i) and G(i) then simply sum the weights of the outgoing and incoming edges of node i, respectively, and use their values to assign the updated feature h_i^{(1)}.

Figure E.2: Examples of two neighborhoods of a node i are shown on the left and right. The strong form of the WL test for directed graphs with fuzzy edges can distinguish these two neighborhoods. The weak form of the WL test, however, cannot do so because the sum of the weights of the edges connecting node i to green-colored neighbors is 0.9 in both cases.

This is identical to the procedure that the weak WL algorithm uses to assign new colors to the nodes at its first iteration. Therefore, nodes that would be assigned a given color under the weak WL color refinement algorithm at its first iteration will also be assigned the same updated feature vector h_i^{(1)}. Assume that our claim holds for iteration k−1. This means that all the nodes that the weak WL color refinement assigns to different colors are also assigned different features h_i^{(k−1)} by the GNN. Following the weak WL test, at iteration k we need to sum the incoming and outgoing edge weights across all the neighbors with the same h_j^{(k−1)}. An example of an f that allows this is one-hot encoding of all features. There always exists a number N ∈ N such that the features h_i^{(k−1)} across all nodes i of the graph can be one-hot encoded in an N-dimensional vector. It follows that

Σ_{j∈N(i)} µ_{ij} f(h_j^{(k−1)}) = ( Σ_{j∈N(i)} δ(h_j^{(k−1)} − h) µ_{ij} )_{h ∈ H^{(k−1)}_{N(i)}},   (13)

where H^{(k−1)}_{N(i)} is the set of all the features h_j^{(k−1)} of nodes j that are neighbors of node i. A similar expression can be written for the sum over the incoming weights µ_{ji}.
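The one-hot choice of f in Eq. 13 can be checked numerically: with one-hot features, the weighted sum over neighbors reduces to a vector of per-color weight sums. A sketch, where `onehot_aggregate` is an illustrative name rather than code from the paper:

```python
import numpy as np

def onehot_aggregate(mu, colors, n_colors):
    """Aggregate outgoing weights with one-hot features f(h_j): entry c of
    row i is sum_j mu[i, j] * [colors[j] == c], i.e. the total outgoing
    weight from node i to its neighbors of color c (cf. Eq. 13)."""
    f = np.eye(n_colors)[colors]  # one-hot encoding of node colors
    return mu @ f

# the analogous sum over incoming weights uses mu.T in place of mu
```

For example, a node with outgoing weights 0.6 and 0.5 to two neighbors of the same color aggregates them into a single per-color entry of 1.1.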
Thus, at iteration k the GNN will assign distinct features to all nodes that are also assigned a distinct color at iteration k of the weak WL color refinement algorithm. To complete the proof, we note that because µ_{ji} = √(1 − µ_{ij}²), we can reparameterize the edge weights as µ_{ij} = cos(θ_{ij}) and µ_{ji} = sin(θ_{ij}) with 0 ≤ θ_{ij} ≤ π/2. Eq. 12 can then be rewritten as

h_i^{(k)} = ϕ( f(h_i^{(k−1)}), ℜ[ Σ_{j∈N(i)} (L_F)_{ij} f(h_j^{(k−1)}) ], ℑ[ Σ_{j∈N(i)} (L_F)_{ij} f(h_j^{(k−1)}) ] ).   (14)

Finally, we use the universal approximation theorem for multilayer perceptrons (Hornik et al., 1989) to model both the ϕ computation at layer k and the f computation for the next layer, k+1; namely, MLP^{(k)} denotes f^{(k+1)} ∘ ϕ^{(k)}. Taken together, the GNN in Eq. 11 is as expressive as the weak form of the WL color refinement algorithm for directed graphs with fuzzy edges.

F MAGNETIC LAPLACIAN IS NOT AS EXPRESSIVE AS THE FUZZY LAPLACIAN

F.1 MAGNETIC LAPLACIAN

A commonly used Laplacian for directed graphs is the magnetic Laplacian. To construct the magnetic Laplacian, we start with the asymmetric adjacency matrix of the graph and symmetrize it. Note that we are using the conventional definition of the adjacency matrix in this section and not the fuzzy definition used above. The adjacency matrix A of a directed graph is a binary matrix, where A_{ij} = 1 represents the presence of a directed edge from vertex v_i to vertex v_j and A_{ij} = 0 represents the absence of such an edge. The symmetrized adjacency matrix S is defined as S_{ij} = (A_{ij} + A_{ji})/2. To capture the direction of the edges, a phase matrix is defined as

Θ^{(q)}_{ij} = 2πq (A_{ij} − A_{ji}).   (15)

The Hermitian adjacency matrix is defined as the element-wise product of the symmetrized adjacency matrix and the element-wise complex exponential of the phase matrix, H^{(q)} = S ⊙ exp(iΘ^{(q)}). H^{(q)} has some useful properties in capturing the directionality of the edges of the graph. For example, for q = 1/4, if there is an edge connecting j to k but no edge connecting k to j, then H^{(q)}_{jk} = i/2 and H^{(q)}_{kj} = −i/2.
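Both constructions are a few lines of NumPy. A sketch under the stated conventions: `magnetic_laplacian` builds H^(q) = S ⊙ exp(iΘ^(q)) from a binary adjacency matrix, and `fuzzy_laplacian` follows the convention used here that the real part of (L_F)_{ij} carries the outgoing weight µ_{ij} and the imaginary part the incoming weight µ_{ji}; both function names are illustrative.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Hermitian adjacency H^(q) = S * exp(i * Theta^(q)) for a binary
    directed adjacency A (degree term omitted, as in the text)."""
    S = (A + A.T) / 2.0
    Theta = 2.0 * np.pi * q * (A - A.T)
    return S * np.exp(1j * Theta)

def fuzzy_laplacian(mu):
    """Fuzzy Laplacian for a fuzzy weight matrix mu: the real part of
    entry (i, j) is the outgoing weight mu[i, j], the imaginary part is
    the incoming weight mu[j, i]."""
    return mu + 1j * mu.T
```

For a single directed edge 0 → 1 and q = 1/4, this reproduces the H_{01} = i/2, H_{10} = −i/2 property quoted above, and the resulting matrix is Hermitian.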
Although this encoding of edge direction is useful, the fact that both incoming and outgoing edges are purely imaginary (differing only in sign) means that, in general, it is impossible to distinguish features aggregated from neighbors connected to a node through outgoing edges from those connected through incoming edges. The fuzzy Laplacian L_F, however, trivially distinguishes features of outgoing neighbors from those of incoming neighbors by keeping one set real and the other imaginary. We expand on the implications of this for the expressivity of graph neural networks constructed using these two approaches below.

F.2 LIMITATIONS OF THE MAGNETIC LAPLACIAN

Conventionally, a magnetic Laplacian is defined by subtracting H^{(q)} from the degree matrix, L_M = D − H^{(q)}. If such a Laplacian matrix is used to aggregate the features of the nodes of the graph, the features of a node itself are combined with the features of its neighboring nodes. For directed graphs, there are two categories of neighboring nodes: those connected to a node i by outgoing edges from i (A_{ij} = 1 and A_{ji} = 0) and those connected to node i by incoming edges (A_{ji} = 1 and A_{ij} = 0). Of course, it is possible for a neighboring node to be both an outgoing and an incoming neighbor, in which case A_{ji} = A_{ij} = 1. Features from these two categories of neighbors, alongside the feature of the node itself, form three distinct categories that in general must be kept distinct for maximum expressivity (Rossi et al., 2024). Using a complex Laplacian to aggregate the features across the nodes provides a simple mechanism for distinguishing two categories (corresponding to the real and imaginary parts of the resulting complex features) but not all three.
To get around this limitation, we keep the self-feature of every node distinct from the aggregated features of its neighbors and concatenate it with the aggregated feature prior to applying the multi-layer perceptron that updates the features from one layer to the next. Therefore, to maximize the expressivity of the magnetic Laplacian, we set the diagonal terms of the Laplacian matrix to zero and define the magnetic Laplacian simply as L_M = H^{(q)}.

Lemma. For simple directed graphs without fuzzy edges, using the magnetic Laplacian L_M in the graph neural network of Eq. 11 in place of the fuzzy Laplacian L_F,

h_i^{(k)} = MLP^{(k)}( h_i^{(k−1)}, ℜ[ Σ_{j∈N(i)} (L_M)_{ij} h_j^{(k−1)} ], ℑ[ Σ_{j∈N(i)} (L_M)_{ij} h_j^{(k−1)} ] ),   (16)

does not decrease the expressivity of the graph neural network, in that both networks are as expressive as the weak form of the WL graph isomorphism test.

Proof. To prove this statement, we will show that the MLP can learn a linear combination of its inputs that makes it equivalent to the graph neural network defined in Eq. 11. This is because when q = 1/8, the real and imaginary parts of the aggregated features of the neighboring nodes are linear combinations of the features of the outgoing and incoming neighbors. The fuzzy Laplacian L_F directly separates the features of the outgoing and incoming neighbors into the real and imaginary components of the aggregated features, respectively. Define F→(i) = ℜ[ Σ_{j∈N(i)} (L_F)_{ij} h_j^{(k−1)} ] and F←(i) = ℑ[ Σ_{j∈N(i)} (L_F)_{ij} h_j^{(k−1)} ] as the features aggregated from the outgoing and incoming neighbors, respectively. With q = 1/8, we have

ℜ[ Σ_j (L_M)_{ij} h_j^{(k−1)} ] = (1 / (2√2)) (F→(i) + F←(i)),
ℑ[ Σ_j (L_M)_{ij} h_j^{(k−1)} ] = (1 / (2√2)) (F→(i) − F←(i)).

The MLP layer in the graph neural network of Eq. 16 then simply needs to learn a linear combination of its second and third concatenated inputs to become equivalent to the graph neural network of Eq. 11. Therefore, the two graph neural networks are equally expressive.
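The 1/(2√2) relations in this proof are easy to verify numerically on a toy node with one outgoing and one incoming neighbor. A sketch under the stated q = 1/8 convention; the graph and feature values are made up for illustration:

```python
import numpy as np

# node 0 has an outgoing edge to node 1 and an incoming edge from node 2
A = np.array([[0, 1, 0],
              [0, 0, 0],
              [1, 0, 0]])
h = np.array([0.0, 2.0, 5.0])  # arbitrary scalar node features

S = (A + A.T) / 2.0
Theta = 2.0 * np.pi * (1.0 / 8.0) * (A - A.T)
LM = S * np.exp(1j * Theta)    # magnetic Laplacian with q = 1/8

agg = LM[0] @ h                # aggregation at node 0
F_out, F_in = h[1], h[2]       # outgoing / incoming neighbor features
c = 1.0 / (2.0 * np.sqrt(2.0))
```

Both components of `agg` come out as fixed linear combinations of `F_out` and `F_in`, which is why, for unweighted edges, an MLP can undo the mixing.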
Next, we extend the definition of the magnetic Laplacian to directed graphs with fuzzy edges, defined above, which have weighted edges that indicate intermediate values of directionality between the two extremes of the edge pointing from node i to node j and the edge pointing from node j to node i. We implemented these intermediate values in the fuzzy Laplacian by assigning a weight µ_{ij} to each edge. Although not necessary in general, we further assumed that µ_{ji} = √(1 − µ_{ij}²). This constraint allowed us to think of each edge weight as an angle θ_{ij} = arccos(µ_{ij}), which we then used to define the fuzzy Laplacian matrix (Eq. 7). To extend the magnetic Laplacian to directed graphs with fuzzy edges, we can assign a different value of q (Eq. 15) to each edge. To keep the notation comparable to that of the fuzzy Laplacian, we instead assign a separate angle θ_{ij} to each edge and define the magnetic Laplacian as (L_M)_{ij} = e^{iθ_{ij}} for j > i whenever there is an edge between the nodes i and j, i.e., S_{ij} ≠ 0. To ensure that the magnetic Laplacian remains Hermitian, L_M^* = L_M, we require that (L_M)_{ji} = e^{−iθ_{ij}}, which defines the θ_{ij} values for j < i. It follows that θ_{ji} = −θ_{ij}. In other words, L_M has zeros on its diagonal, entries e^{iθ_{ij}} above the diagonal, and entries e^{−iθ_{ij}} below it. It remains to be shown how we can assign the θ_{ij} values from the µ_{ij} in a self-consistent way.

Lemma. For directed graphs with fuzzy edges, the graph neural network constructed using the magnetic Laplacian L_M, Eq. 16, is not as expressive as the weak form of the WL graph isomorphism test, and therefore not as expressive as the graph neural network defined using the fuzzy Laplacian L_F, Eq. 11.

Proof. The fuzzy Laplacian matrix conveniently captures the outgoing and incoming weights of the edges from one node to another, (L_F)_{ij} = cos(θ_{ij}) + i sin(θ_{ij}). Namely, ℜ(L_F)_{ij} = cos(θ_{ij}) is the outgoing weight µ_{ij} from node i to node j, and ℑ(L_F)_{ij} = sin(θ_{ij}) is the incoming weight from node j to node i, µ_{ji}. Importantly, the outgoing weight from node i to node j, µ_{ij}, is the same as the weight incoming to node j from node i.
Similarly, the weight incoming to node i from node j, µ_{ji}, is the same as the outgoing weight from node j to node i. These relationships are directly captured in the fuzzy Laplacian because (L_F)_{ji} = sin(θ_{ij}) + i cos(θ_{ij}).

Figure F.3: The functions used to map the edge weights µ_{ij} to the θ_{ij} values of the magnetic Laplacian. cos(θ_{ij}) is plotted on the left, sin(θ_{ij}) in the middle, and sin(θ_{ij}) versus cos(θ_{ij}) on the right.

Let us consider how we can construct the magnetic Laplacian L_M from the µ_{ij}. The weight of the edge from i to j is µ_{ij}; the weight from j to i is µ_{ji} = √(1 − µ_{ij}²). From our definition above, (L_M)_{ij} = cos(θ_{ij}) + i sin(θ_{ij}) and (L_M)_{ji} = cos(θ_{ij}) − i sin(θ_{ij}). To relate θ_{ij} to µ_{ij}, we note that the ratio of outgoing to incoming weights for node i is µ_{ij}/µ_{ji} and for node j it is µ_{ji}/µ_{ij}. Because these two quantities are reciprocals of each other, we can define

ln(µ_{ij}/µ_{ji}) = tan(2θ_{ij}).

We chose the tan function on the right-hand side because tan(−2θ_{ij}) = −tan(2θ_{ij}). Moreover, the log-ratio on the left-hand side can take on any value from −∞ to ∞; the right-hand side spans the same range if we allow −π/4 ≤ θ_{ij} ≤ π/4. The specific form of this function does not matter as long as it is an odd function that spans the specified domain and range. Assume that each node of the graph has a trivial scalar feature equal to 1 (the first iteration of the WL test). The magnetic Laplacian applied to this graph to aggregate the features of the neighbors of node i gives

ℜ[ Σ_j (L_M)_{ij} ] = Σ_j cos( (1/2) arctan( ln( µ_{ij} / √(1 − µ_{ij}²) ) ) ),
ℑ[ Σ_j (L_M)_{ij} ] = Σ_j sin( (1/2) arctan( ln( µ_{ij} / √(1 − µ_{ij}²) ) ) ).

These functions, for a single value of µ_{ij}, are plotted in Figure F.3. Importantly, they are not injective functions of all possible neighborhoods of node i. Consider the case of a node i that has 5 neighbors with µ_{ij} values such that cos(θ_{ij}) = 0.8 and sin(θ_{ij}) = 0.6, and another set of 5 neighbors with µ_{ij} values such that cos(θ_{ij}) = 0.8 and sin(θ_{ij}) = −0.6.
It follows that ℜ[ Σ_j (L_M)_{ij} ] = 8 and ℑ[ Σ_j (L_M)_{ij} ] = 0. The same values would have been obtained if node i had a neighborhood comprised of 8 neighbors with µ_{ij} values such that cos(θ_{ij}) = 1 and sin(θ_{ij}) = 0. Therefore, the magnetic Laplacian cannot distinguish distinct neighborhoods and is not as expressive as the weak form of the WL test. In practice, the more important limitation of the magnetic Laplacian is that it aggregates neighborhood information into linear combinations of the outgoing and incoming features of a node. In contrast, the fuzzy Laplacian by construction always keeps these contributions separate in the real and imaginary parts of the aggregated feature vector. In many applications, we would like to access the self-feature of a node and the incoming and outgoing aggregated features of its neighbors separately; the fuzzy Laplacian is thus the better choice. It could be argued that the MLP of a graph neural network can learn to disentangle the linear combinations of the outgoing and incoming features aggregated by the magnetic Laplacian. In general, however, this is not possible for the types of graph neural networks considered here, because the parameters of the MLP are the same for all the nodes of the graph, while the linear combinations depend on the specific weights µ_{ij} connecting node i to its neighboring nodes j. Therefore, in general, the MLP will not be able to learn to disentangle the linear combinations of the incoming and outgoing features aggregated by the magnetic Laplacian. | 4 | 1 | The proposed CoED GNN is a graph neural network architecture that utilizes a complex-valued Laplacian with directed edges. Given the nature of GNNs and insights from the existing literature, the model's complexity suggests that a typical training session could reasonably be completed in under 8 hours.
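The counting argument in this proof can be replayed directly. A sketch, where the two neighborhoods are exactly those described above and every node carries the trivial scalar feature 1:

```python
import numpy as np

# Neighborhood A: five edges with cos(theta) = 0.8, sin(theta) = 0.6 and
# five edges with cos(theta) = 0.8, sin(theta) = -0.6.
row_a = np.array([0.8 + 0.6j] * 5 + [0.8 - 0.6j] * 5)

# Neighborhood B: eight edges with cos(theta) = 1, sin(theta) = 0.
row_b = np.array([1.0 + 0.0j] * 8)

# With trivial feature 1 at every neighbor, aggregation is a plain sum
# over the corresponding row of L_M.
agg_a = row_a.sum()
agg_b = row_b.sum()
```

Both neighborhoods aggregate to 8 + 0i, so a GNN built on this magnetic Laplacian cannot tell them apart, even though their edge-weight multisets differ.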
The model likely has a moderate number of parameters because it needs to learn edge directions and update messages, which scales with node and edge counts but is lightweight compared to larger transformer-based architectures. The dataset size is also conducive to training within 4 hours on a single modern GPU, considering common datasets used in the GNN domain such as Cora and Squirrel, which are not excessively large (typically a few hundred examples with node features and classes). The iterative updates involve gradient descent optimizing both the learned parameters and the edge directions, which is manageable on one GPU without significant memory constraints, provided batch sizes are set appropriately. | yes | Yes | Graph | Improving Graph Neural Networks by Learning Continuous Edge Directions | 2024-10-18 0:00:00 | https://github.com/hormoz-lab/coed-gnn | 1 | specify it in classification.py and it handles itself | 2 min | https://colab.research.google.com/drive/1FiCFbVmQhjIqcCdViYynfEb9mWtJkB09?usp=sharing | Yes | -- I have set the best parameters with advice from "Gemini"; can change accordingly. |
California Housing Prices | Binary Diffusion | [] | Tabular Data Generation using Binary Diffusion | 2024-09-20T00:00:00 | https://arxiv.org/abs/2409.13882v2 | [
"https://github.com/vkinakh/binary-diffusion-tabular"
] | {'Parameters(M)': '1.5', 'RF Mean Squared Error': '0.39', 'LR Mean Squared Error': '0.55', 'DT Mean Squared Error': '0.45'} | [
"Parameters(M)",
"RF Mean Squared Error",
"DT Mean Squared Error",
"LR Mean Squared Error"
] | Given the following paper and codebase:
Paper: Tabular Data Generation using Binary Diffusion
Codebase: https://github.com/vkinakh/binary-diffusion-tabular
Improve the Binary Diffusion model on the California Housing Prices dataset. The result
should improve on the following metrics: {'Parameters(M)': '1.5', 'RF Mean Squared Error': '0.39', 'LR Mean Squared Error': '0.55', 'DT Mean Squared Error': '0.45'}. You must use only the codebase provided.
| Tabular Data Generation using Binary Diffusion

Vitaliy Kinakh, Department of Computer Science, University of Geneva, Geneva, Switzerland, vitaliy.kinakh@unige.ch
Slava Voloshynovskiy, Department of Computer Science, University of Geneva, Geneva, Switzerland

Abstract

Generating synthetic tabular data is critical in machine learning, especially when real data is limited or sensitive. Traditional generative models often face challenges due to the unique characteristics of tabular data, such as mixed data types and varied distributions, and require complex preprocessing or large pretrained models. In this paper, we introduce a novel, lossless binary transformation method that converts any tabular data into fixed-size binary representations, and a corresponding new generative model called Binary Diffusion, specifically designed for binary data. Binary Diffusion leverages the simplicity of XOR operations for noise addition and removal and employs binary cross-entropy loss for training. Our approach eliminates the need for extensive preprocessing, complex noise parameter tuning, and pretraining on large datasets. We evaluate our model on several popular tabular benchmark datasets, demonstrating that Binary Diffusion outperforms existing state-of-the-art models on the Travel, Adult Income, and Diabetes datasets while being significantly smaller in size. Code and models are available at: https://github.com/vkinakh/binary-diffusion-tabular

1 Introduction

The generation of synthetic tabular data is a critical task in machine learning, particularly when dealing with sensitive, private, or scarce real-world data. Traditional generative models often struggle with the inherent complexity and diversity of tabular data, which typically encompasses mixed data types and complex distributions. In this paper, we introduce a method to transform generic tabular data into a binary representation, and a generative model named Binary Diffusion, specifically designed for binary data.
Binary Diffusion leverages the simplicity of XOR operations for noise addition and removal, fundamental components of probabilistic diffusion models. This method eliminates the need for extensive preprocessing and complex noise parameter tuning, streamlining the data preparation process. Our approach offers several key advantages. First, by converting all columns into unified binary representations, the proposed transformation removes the need for the column-specific preprocessing commonly required when handling mixed-type tabular data. Second, the Binary Diffusion model itself is optimized for binary data, utilizing binary cross-entropy (BCE) loss for predictions during the training of the denoising network. We evaluate our model on several popular tabular benchmark datasets, including the Travel [tej21], Sick [SED+88], HELOC [lia18, FIC18], Adult Income [BK96], California Housing [PB97, nug17], and Diabetes [SDG+14, Kag21] tabular datasets. The Binary Diffusion model outperforms existing state-of-the-art models on the Travel, Adult Income, and Diabetes datasets. Additionally, our model is significantly smaller in size compared to contemporary models and does not require pretraining on other data modalities, unlike methods based on large language models (LLMs) such as GReaT [BSL+22].

Table Representation Learning Workshop at NeurIPS 2024. arXiv:2409.13882v2 [cs.LG] 28 Oct 2024

2 Related Work

TVAE (Tabular Variational Autoencoder) adapts the Variational Autoencoder (VAE) framework to handle mixed-type tabular data by separately modeling continuous and categorical variables. CTGAN (Conditional Tabular GAN) employs a conditional generator to address imbalanced categorical columns, ensuring the generation of diverse and realistic samples by conditioning on categorical data distributions.
CopulaGAN integrates copulas with GANs to capture dependencies between variables, ensuring that synthetic data preserves the complex relationships present in the original dataset [XSCIV19]. GReaT (Generation of Realistic Tabular data) [BSL+22] leverages a pretrained auto-regressive language model (LLM) to generate highly realistic synthetic tabular data. The process involves fine-tuning the LLM on textually encoded tabular data, transforming each row into a sequence of words. This approach allows the model to condition on any subset of features and generate the remaining data without additional overhead.

Existing data generation methods show several shortcomings. Models such as CopulaGAN, CTGAN, and TVAE attempt to generate columns with both continuous and categorical data simultaneously, employing activation functions like softmax and tanh in the outputs. These models also require complex preprocessing of continuous values and rely on restrictive approximations using Gaussian mixture models and mode-specific normalization. Additionally, large language model-based generators like GReaT need extensive pretraining on text data, making them computationally intensive, with large parameter counts and potential bias from the pretraining data. The proposed data transformation and generative model address these shortcomings as follows: (i) all columns are converted to unified binary representations; (ii) the proposed generative model for binary data, with fewer than 2M parameters, does not require pretraining on large datasets and offers both fast training and sampling.

3 Data transformation

To apply the Binary Diffusion model to tabular data, we propose an invertible lossless transformation T, shown in Figure 1, that converts tabular data columns into fixed-size binary representations.
The transformation is essential for preparing tabular data for the Binary Diffusion model, enabling it to process and generate tabular data without extensive preprocessing. This approach ensures that the data retains its original characteristics.

Figure 1: Transformation of tabular data t0 into the binary form x0. The transformation is reversible. Continuous column records are represented with length dcont = 32 (min-max normalisation followed by 32-bit floating-point encoding) and categorical ones with dcat = log2 K, where K stands for the number of categorical classes; the encoded columns are then merged.

The transformation method converts each column of the table into a binary format.
For continuous data, this process includes applying min-max normalization to the columns, followed by converting these normalized values into a binary representation via 32-bit floating-point encoding. For categorical data, binary encoding is used. The encoded columns are concatenated into fixed-size rows. The inverse transformation T−1 converts the binary representations back into their original form. For continuous data, the decoded values are rescaled to their original range using metadata generated during the initial transformation. For categorical data, the binary codes are mapped back to their respective categories using a predefined mapping scheme.

4 Binary Diffusion

Binary Diffusion, shown in Figure 2, is a novel approach to generative modeling that leverages the simplicity and robustness of binary data representations. This method adds and removes noise through the XOR operation, which makes it particularly well suited to binary data. Below, we describe the key aspects of the Binary Diffusion methodology in detail.
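A minimal sketch of the continuous-column round trip and of XOR noising, assuming NumPy float32 bit patterns as the 32-bit encoding; the helper names are illustrative and not the repository's API:

```python
import numpy as np

def encode_continuous(col):
    """Min-max normalise a float column, then expose each value's 32-bit
    IEEE-754 pattern as a row of 32 bits (lossless up to float32)."""
    lo, hi = float(col.min()), float(col.max())
    norm = ((col - lo) / (hi - lo)).astype(np.float32)
    bits = np.unpackbits(norm.view(np.uint8).reshape(-1, 4), axis=1)
    return bits, (lo, hi)

def decode_continuous(bits, meta):
    """Inverse transform: rebuild the float32 values and rescale them with
    the (min, max) metadata stored at encoding time."""
    lo, hi = meta
    norm = np.packbits(bits.astype(np.uint8), axis=1).view(np.float32).ravel()
    return norm.astype(np.float64) * (hi - lo) + lo

def xor_noise(x, z):
    """XOR-style noising on binary rows: applying the same mask z a second
    time removes the noise exactly, since XOR is its own inverse."""
    return np.bitwise_xor(x, z)
```

Each continuous value becomes a 32-bit row, the decode step recovers the column up to float32 precision, and a noise mask applied twice via XOR cancels out.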
Figure 2: The Binary Diffusion scheme, with timestep t ∼ U[0, T], denoising model q_θ(x̂0, ẑt | xt, t, ye), and training losses Lx(x0, x̂0t) and Lz(zt, ẑt).
sha1_base64="SgKMEVN4u0M3Xt/kPYtE5ueltuQ=">AAAB9XicbVDLSsNAFL3xWeur6tJNsAiuSiJSXRbcuKxgH9DGMplO2qGTSZi5UUvIf7hxoYhb/8Wdf+OkzUJbDwwczrmXe+b4seAaHefbWlldW9/YLG2Vt3d29/YrB4dtHSWKshaNRKS6PtFMcMlayFGwbqwYCX3BOv7kOvc7D0xpHsk7nMbMC8lI8oBTgka6T/shwbEfpE9ZNsBBperUnBnsZeIWpAoFmoPKV38Y0SRkEqkgWvdcJ0YvJQo5FSwr9xPNYkInZMR6hkoSMu2ls9SZfWqUoR1EyjyJ9kz9vZGSUOtp6JvJPKRe9HLxP6+XYHDlpVzGCTJJ54eCRNgY2XkF9pArRlFMDSFUcZPVpmOiCEVTVNmU4C5+eZm0z2tuvVa/vag2nKKOEhzDCZyBC5fQgBtoQgsoKHiGV3izHq0X6936mI+uWMXOEfyB9fkDW3+TCA==</latexit>xt<latexit sha1_base64="0wY/KKrt/PB+Ts1QtKwfNr7U4r0=">AAACPHicbVDLSgMxFM34tr6qLt0Ei6AgZUakuhTcuKxoq9ApQya90wlmHiZ3xDrMh7nxI9y5cuNCEbeuTeuAz0MCJ+eey809fiqFRtt+sMbGJyanpmdmK3PzC4tL1eWVtk4yxaHFE5moc59pkCKGFgqUcJ4qYJEv4cy/OBzWz65AaZHEpzhIoRuxfiwCwRkayaue0EvPxRCQuRIC3HRDhnnu+gG9LgrP3qZfwo0RkLqR6NHS4OE2NWf0GhQeuEr0Q9zyqjW7bo9A/xKnJDVSoulV791ewrMIYuSSad1x7BS7OVMouISi4mYaUsYvWB86hsYsAt3NR8sXdMMoPRokytwY6Uj93pGzSOtB5BtnxDDUv2tD8b9aJ8Ngv5uLOM0QYv45KMgkxYQOk6Q9oYCjHBjCuBLmr5SHTDGOJu+KCcH5vfJf0t6pO41643i3dmCXccyQNbJONolD9sgBOSJN0iKc3JJH8kxerDvryXq13j6tY1bZs0p+wHr/AHkMrjI=</latexit>q✓(ˆx0,ˆzt|xt,t ,ye) <latexit sha1_base64="/xpgJ4NkbeRG4lSHpoiBkhuwX3k=">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseCF48VTFtoQ9lsJ+3SzSbsboQS+hu8eFDEqz/Im//GbZuDtj4YeLw3w8y8MBVcG9f9dkobm1vbO+Xdyt7+weFR9fikrZNMMfRZIhLVDalGwSX6hhuB3VQhjUOBnXByN/c7T6g0T+SjmaYYxHQkecQZNVby+2FEpoNqza27C5B14hWkBgVag+pXf5iwLEZpmKBa9zw3NUFOleFM4KzSzzSmlE3oCHuWShqjDvLFsTNyYZUhiRJlSxqyUH9P5DTWehqHtjOmZqxXvbn4n9fLTHQb5FymmUHJlouiTBCTkPnnZMgVMiOmllCmuL2VsDFVlBmbT8WG4K2+vE7aV3WvUW88XNeabhFHGc7gHC7Bgxtowj20wAcGHJ7hFd4c6bw4787HsrXkFDOn8AfO5w9oQI5i</latexit>y<latexit 
sha1_base64="bBwl1tzSkp9y590OWH3r9IMVDSE=">AAAB/3icbVDLSsNAFJ3UV62vqODGTbAIrkoiUl0W3LisYB/QhDCZTtqhk0mYuRFLzMJfceNCEbf+hjv/xkmbhbYeGDiccy/3zAkSzhTY9rdRWVldW9+obta2tnd298z9g66KU0loh8Q8lv0AK8qZoB1gwGk/kRRHAae9YHJd+L17KhWLxR1ME+pFeCRYyAgGLfnmkTvGkLkRhnEQZg957me2D7lv1u2GPYO1TJyS1FGJtm9+ucOYpBEVQDhWauDYCXgZlsAIp3nNTRVNMJngER1oKnBElZfN8ufWqVaGVhhL/QRYM/X3RoYjpaZRoCeLoGrRK8T/vEEK4ZWXMZGkQAWZHwpTbkFsFWVYQyYpAT7VBBPJdFaLjLHEBHRlNV2Cs/jlZdI9bzjNRvP2ot6yyzqq6BidoDPkoEvUQjeojTqIoEf0jF7Rm/FkvBjvxsd8tGKUO4foD4zPH/IYlqk=</latexit>ˆx0t <latexit sha1_base64="B62M6N/NWk0N4j1fbsWgTB1ncNo=">AAAB83icbVDLSsNAFL2pr1pfVZdugkVwVRKR6rLgxmUF+4AmlMn0ph06mYSZiRBCf8ONC0Xc+jPu/BsnbRbaemDgcM693DMnSDhT2nG+rcrG5tb2TnW3trd/cHhUPz7pqTiVFLs05rEcBEQhZwK7mmmOg0QiiQKO/WB2V/j9J5SKxeJRZwn6EZkIFjJKtJE8LyJ6GoR5Nh/hqN5wms4C9jpxS9KAEp1R/csbxzSNUGjKiVJD10m0nxOpGeU4r3mpwoTQGZng0FBBIlR+vsg8ty+MMrbDWJontL1Qf2/kJFIqiwIzWWRUq14h/ucNUx3e+jkTSapR0OWhMOW2ju2iAHvMJFLNM0MIlcxktemUSEK1qalmSnBXv7xOeldNt9VsPVw32k5ZRxXO4BwuwYUbaMM9dKALFBJ4hld4s1LrxXq3PpajFavcOYU/sD5/AHjfke4=</latexit>ye<latexit sha1_base64="INHXUyIqz0y3F9duvPvFiBSNZro=">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgxpKIVJcFNy4r9AVtLJPppB06mYSZSaGE/IkbF4q49U/c+TdO2iy09cDA4Zx7uWeOH3OmtON8W6WNza3tnfJuZW//4PDIPj7pqCiRhLZJxCPZ87GinAna1kxz2oslxaHPadef3ud+d0alYpFo6XlMvRCPBQsYwdpIQ9sehFhPCOZpK3tKr9xsaFedmrMAWiduQapQoDm0vwajiCQhFZpwrFTfdWLtpVhqRjjNKoNE0RiTKR7TvqECh1R56SJ5hi6MMkJBJM0TGi3U3xspDpWah76ZzHOqVS8X//P6iQ7uvJSJONFUkOWhIOFIRyivAY2YpETzuSGYSGayIjLBEhNtyqqYEtzVL6+TznXNrdfqjzfVhlPUUYYzOIdLcOEWGvAATWgDgRk8wyu8Wan1Yr1bH8vRklXsnMIfWJ8/ax+Teg==</latexit>T1<latexit 
sha1_base64="QvJl4j2P1N436yQCEwYTQPKSMKM=">AAAB/3icbVDLSsNAFJ34rPUVFdy4GSyCq5KIVJcFNy4r2Ac0JUymk3boZBJmboQSs/BX3LhQxK2/4c6/cdJmoa0HBg7n3Ms9c4JEcA2O822trK6tb2xWtqrbO7t7+/bBYUfHqaKsTWMRq15ANBNcsjZwEKyXKEaiQLBuMLkp/O4DU5rH8h6mCRtEZCR5yCkBI/n2sTcmkHkRgXEQZpDnfub4kPt2zak7M+Bl4pakhkq0fPvLG8Y0jZgEKojWfddJYJARBZwKlle9VLOE0AkZsb6hkkRMD7JZ/hyfGWWIw1iZJwHP1N8bGYm0nkaBmSyC6kWvEP/z+imE14OMyyQFJun8UJgKDDEuysBDrhgFMTWEUMVNVkzHRBEKprKqKcFd/PIy6VzU3Ua9cXdZazplHRV0gk7ROXLRFWqiW9RCbUTRI3pGr+jNerJerHfrYz66YpU7R+gPrM8f6+iWpQ==</latexit>ˆt0t <latexit sha1_base64="qkstB8h8jyLs5a48UyeyKb2lMms=">AAAB9HicbVBNSwMxFHxbv2r9qnr0EiyCp7IrUj0WRPBYwdZCu5Rsmm1Ds8maZAvL0t/hxYMiXv0x3vw3Zts9aOtAYJh5jzeZIOZMG9f9dkpr6xubW+Xtys7u3v5B9fCoo2WiCG0TyaXqBlhTzgRtG2Y47caK4ijg9DGY3OT+45QqzaR4MGlM/QiPBAsZwcZKfj/CZkwwz25ng3RQrbl1dw60SryC1KBAa1D96g8lSSIqDOFY657nxsbPsDKMcDqr9BNNY0wmeER7lgocUe1n89AzdGaVIQqlsk8YNFd/b2Q40jqNAjuZh9TLXi7+5/USE177GRNxYqggi0NhwpGRKG8ADZmixPDUEkwUs1kRGWOFibE9VWwJ3vKXV0nnou416o37y1rTLeoowwmcwjl4cAVNuIMWtIHAEzzDK7w5U+fFeXc+FqMlp9g5hj9wPn8ADz+SQA==</latexit>Ey<latexit sha1_base64="k9dfo1YHdqlb3mBIXkeF8zLuuI0=">AAAB9XicbVBNSwMxEM3Wr1q/qh69BIvgoZRdkepFKHjxWKFf0K4lm822odlkSWaVUvo/vHhQxKv/xZv/xrTdg7Y+GHi8N8PMvCAR3IDrfju5tfWNza38dmFnd2//oHh41DIq1ZQ1qRJKdwJimOCSNYGDYJ1EMxIHgrWD0e3Mbz8ybbiSDRgnzI/JQPKIUwJWeoCbRhn3RKjAlLHbL5bcijsHXiVeRkooQ71f/OqFiqYxk0AFMabruQn4E6KBU8GmhV5qWELoiAxY11JJYmb8yfzqKT6zSogjpW1JwHP198SExMaM48B2xgSGZtmbif953RSia3/CZZICk3SxKEoFBoVnEeCQa0ZBjC0hVHN7K6ZDogkFG1TBhuAtv7xKWhcVr1qp3l+Wam4WRx6doFN0jjx0hWroDtVRE1Gk0TN6RW/Ok/PivDsfi9ack80coz9wPn8AtBGRTg==</latexit>t=T,...,0 Figure 2: Binary Diffusion training (left) and sampling (right) schemes. In Binary Diffusion, noise is added to the data by flipping bits using the XOR operation with a random binary mask. The amount of noise added is quantified by the proportion of bits flipped. Let x0∈ {0,1}dbe the original binary vector of dimension d, and zt∈ {0,1}dbe a random binary noise vector at timestep t. 
The noisy vector x_t is obtained as x_t = x_0 ⊕ z_t, where ⊕ denotes the XOR operation. The noise level is defined as the fraction of bits flipped in z_t by the mapper M_t at step t, with the fraction of flipped bits ranging within [0, 0.5] as a function of the timestep. The denoising network q_θ(x̂_0, ẑ_t | x_t, t, y_e) is trained to predict both the added noise z_t and the clean denoised vector x_0 from the noisy vector x_t. We employ binary cross-entropy (BCE) loss (1) to train the denoising network. The loss function is averaged over both the batch of samples and the dimensions of the vectors:

L(θ) = (1/B) Σ_{b=1}^{B} [ L_x(x̂_0^{(b)}, x_0^{(b)}) + L_z(ẑ_t^{(b)}, z_t^{(b)}) ]
     = −(1/B) Σ_{b=1}^{B} Σ_{i=1}^{d} [ x_{0,i}^{(b)} log x̂_{0,i}^{(b)} + (1 − x_{0,i}^{(b)}) log(1 − x̂_{0,i}^{(b)}) ]
       −(1/B) Σ_{b=1}^{B} Σ_{i=1}^{d} [ z_{t,i}^{(b)} log ẑ_{t,i}^{(b)} + (1 − z_{t,i}^{(b)}) log(1 − ẑ_{t,i}^{(b)}) ],    (1)

where B is the batch size, θ represents the parameters of the denoising network, and x_0^{(b)} and x̂_0^{(b)} are the b-th samples of the true clean vectors and the predicted clean vectors, respectively. Similarly, z_t^{(b)} and ẑ_t^{(b)} are the b-th samples of the true added noise vectors and the predicted noise vectors, respectively. y_e = E_y(y) denotes the encoded label y, one-hot encoded for classification and min-max normalized for regression. L_x and L_z denote binary cross-entropy (BCE) losses. The indices i and b correspond to the i-th dimension of the vectors and the b-th sample in the batch, respectively.

During training (Figure 2, left), we use classifier-free guidance [HS22]. For classification tasks, the conditioning input class label y is a one-hot encoded label y_e. For regression tasks, y consists of min-max normalized target values y_e, allowing the model to generate data conditioned on specific numerical outcomes. In unconditional training, we use an all-zeros conditioning vector for classification tasks and a value of −1 for regression tasks to indicate the absence of conditioning.
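The noising step and the BCE objective above can be sketched in plain Python. This is an illustrative re-implementation, not code from the paper's repository; in particular, the linear flip-probability schedule `p = 0.5 * t / T` is an assumption, since the text only states that the flipped fraction lies in [0, 0.5] as a function of the timestep.

```python
import math
import random

def get_binary_noise(t, T, d, rng):
    """Sample a binary mask z_t; the fraction of set bits grows from 0 at
    t=0 to 0.5 at t=T (hypothetical linear schedule)."""
    p = 0.5 * t / T
    return [1 if rng.random() < p else 0 for _ in range(d)]

def add_noise(x0, zt):
    """Noising step x_t = x_0 XOR z_t: flip the bits selected by z_t."""
    return [a ^ b for a, b in zip(x0, zt)]

def bce(target, prob, eps=1e-7):
    """Binary cross-entropy averaged over dimensions, as in Eq. (1)."""
    total = 0.0
    for y, q in zip(target, prob):
        q = min(max(q, eps), 1 - eps)
        total += y * math.log(q) + (1 - y) * math.log(1 - q)
    return -total / len(target)

rng = random.Random(0)
d, T, t = 32, 1000, 500
x0 = [rng.randint(0, 1) for _ in range(d)]
zt = get_binary_noise(t, T, d, rng)
xt = add_noise(x0, zt)

# XOR is its own inverse: predicting the mask exactly recovers x0.
assert add_noise(xt, zt) == x0
```

The self-inverse property of XOR is what lets a single network target either the clean vector or the flipped mask: given `xt`, knowing one of `x0` and `zt` determines the other.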
When sampling (Figure 2, right), we start from a random binary vector x_t at timestep t = T, along with the conditioning variable y, encoded into y_e. For each selected timestep in the sequence [T, ..., 0], denoising is applied to the vector. The denoised vector x̂_0 and the estimated binary noise ẑ_t are predicted by the denoising network. These predictions are then processed with a sigmoid function and binarized with a threshold. During sampling, we use the denoised vector x̂_0 directly. Then, random noise z_t is generated and added to x̂_0 via the XOR operation: x_t = x̂_0 ⊕ z_t. The sampling algorithm is summarized in Algorithm 1.

5 Results

We evaluate the performance of Binary Diffusion on widely recognized tabular benchmark datasets, including Travel [tej21], Sick [SED+88], HELOC [lia18, FIC18], Adult Income [BK96], California Housing [PB97, nug17], and Diabetes [SDG+14, Kag21]. For the classification tasks (Travel, Sick, HELOC, Adult Income, and Diabetes), classification accuracy was used as the metric, while mean squared error (MSE) was used for the regression task (California Housing). Following the evaluation protocol established in [BSL+22], we employed Linear/Logistic Regression (LR), Decision Tree (DT), and Random Forest (RF) as downstream models to assess the quality of the synthetic data. The datasets were split into training and test sets with an 80/20 split. The generative models were trained on the training set, and the test set was reserved for evaluation. To ensure robustness, 5 sets of synthetic training data were generated, and the results are reported as average performances with corresponding standard deviations.

Table 1 shows the detailed results. Binary Diffusion achieved superior performance compared to existing state-of-the-art models on the Travel, Adult Income, and Diabetes datasets.
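The sampling scheme just described (summarized in Algorithm 1) can be sketched as follows. The `denoiser` here is a stand-in stub for the trained network q_θ, and the linear noise schedule inside the loop is an assumption; this is a sketch, not the repository's implementation.

```python
import math
import random

def sample(denoiser, d, timesteps, rng, threshold=0.5):
    """Sampling loop following Algorithm 1: denoise, binarize with a
    sigmoid and threshold, then re-noise with XOR for the next timestep."""
    T = timesteps[0]
    xt = [rng.randint(0, 1) for _ in range(d)]          # random binary start
    for t in timesteps:                                  # e.g. [T, ..., 0]
        x0_logits, _ = denoiser(xt, t)                   # predict x0 (and noise)
        x0_hat = [1 if 1.0 / (1.0 + math.exp(-v)) > threshold else 0
                  for v in x0_logits]
        zt = [1 if rng.random() < 0.5 * t / T else 0 for _ in range(d)]
        xt = [a ^ b for a, b in zip(x0_hat, zt)]         # x_t = x0_hat XOR z_t
    return xt                                            # at t=0 no bits flip

# Stub denoiser that always points at a fixed target vector (illustration
# only; in the paper this is the trained network q_theta).
target = [1, 0, 1, 1, 0, 0, 1, 0]
stub = lambda xt, t: ([5.0 if b else -5.0 for b in target], None)
print(sample(stub, len(target), [1000, 500, 100, 10, 0], random.Random(0)))
# -> [1, 0, 1, 1, 0, 0, 1, 0]
```

Because the flip probability reaches zero at t = 0, the last iteration returns the binarized denoiser output unchanged, which matches the paper's choice of using x̂_0 directly.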
Notably, Binary Diffusion maintained competitive results on the HELOC and Sick datasets, despite having a significantly smaller parameter footprint (ranging from 1.1M to 2.6M parameters) compared to models like GReaT, which utilize large language models with hundreds of millions of parameters. Binary Diffusion does not require pretraining on external data modalities, enhancing its efficiency and reducing potential biases associated with pretraining data. In the regression task (California Housing), Binary Diffusion demonstrated competitive MSE scores. Additionally, Binary Diffusion offers faster training and sampling times, as detailed in Appendix C. Implementation details are summarized in Appendix D.

Table 1: Quantitative results on table dataset benchmarks. The best results are marked in bold, second-best are underlined. The number of parameters for each model and dataset is provided in the fourth row of every dataset block.

Dataset                Model   Original    TVAE        CopulaGAN   CTGAN       Distill-GReaT  GReaT       Binary Diffusion
Travel (↑)             LR      82.72±0.00  79.58±0.00  73.30±0.00  73.30±0.00  78.53±0.00     80.10±0.00  83.79±0.08
                       DT      89.01±0.00  81.68±1.28  73.61±0.26  73.30±0.00  77.38±0.51     83.56±0.42  88.90±0.57
                       RF      85.03±0.53  81.68±1.19  73.30±0.00  71.41±0.53  79.50±0.53     84.30±0.33  89.95±0.44
                       Params  -           36K         157K        155K        82M            355M        1.1M
Sick (↑)               LR      96.69±0.00  94.70±0.00  94.57±0.00  94.44±0.00  96.56±0.00     97.72±0.00  96.14±0.63
                       DT      98.94±0.12  95.39±0.18  93.77±0.01  92.05±0.41  95.39±0.18     97.72±0.00  97.07±0.24
                       RF      98.28±0.06  94.91±0.06  94.57±0.01  94.57±0.00  97.72±0.00     98.30±0.13  96.59±0.55
                       Params  -           46K         226K        222K        82M            355M        1.4M
HELOC (↑)              LR      71.80±0.00  71.04±0.00  42.03±0.00  57.72±0.00  70.58±0.00     71.90±0.00  71.76±0.30
                       DT      81.90±1.06  76.39±0.50  42.36±0.10  61.34±0.09  81.40±0.15     79.10±0.07  70.25±0.43
                       RF      83.19±0.71  77.24±0.25  42.35±0.34  62.35±0.35  82.14±0.13     80.93±0.28  70.47±0.32
                       Params  -           62K         276K        277K        82M            355M        2.6M
Adult Income (↑)       LR      85.00±0.00  80.53±0.00  80.61±0.00  83.20±0.00  84.65±0.00     84.77±0.00  85.45±0.11
                       DT      85.27±0.01  82.80±0.08  76.29±0.06  81.32±0.02  84.49±0.04     84.81±0.04  85.27±0.11
                       RF      85.93±0.11  83.48±0.11  80.46±0.21  83.53±0.07  85.25±0.07     85.42±0.05  85.74±0.11
                       Params  -           53K         300K        302K        82M            355M        1.4M
Diabetes (↑)           LR      58.76±0.00  56.34±0.00  40.27±0.00  50.93±0.00  57.33±0.00     57.34±0.00  57.75±0.04
                       DT      57.29±0.03  53.30±0.08  38.50±0.02  49.73±0.02  54.10±0.04     55.23±0.04  57.13±0.15
                       RF      59.00±0.08  55.17±0.10  37.59±0.31  52.23±0.17  58.03±0.16     58.34±0.09  57.52±0.12
                       Params  -           369K        9.4M        9.6M        82M            355M        1.8M
California Housing (↓) LR      0.40±0.00   0.65±0.00   0.98±0.00   0.61±0.00   0.57±0.00      0.34±0.00   0.55±0.00
                       DT      0.32±0.01   0.45±0.01   1.19±0.01   0.82±0.01   0.43±0.01      0.39±0.01   0.45±0.00
                       RF      0.21±0.01   0.35±0.01   0.99±0.01   0.62±0.01   0.32±0.01      0.28±0.01   0.39±0.00
                       Params  -           45K         201K        197K        82M            355M        1.5M

6 Conclusions

This paper proposed a novel lossless binary transformation method for tabular data, which converts any data into fixed-size binary representations. Building upon this transformation, we introduced the Binary Diffusion model, a generative model specifically designed for binary data that utilizes XOR operations for noise addition and removal and is trained using binary cross-entropy loss. Our approach addresses several shortcomings of existing methods, such as the need for complex preprocessing, reliance on large pretrained models, and computational inefficiency. We evaluated our model on several tabular benchmark datasets and demonstrated that Binary Diffusion achieves state-of-the-art performance on these datasets while being significantly smaller in size compared to existing models. Our model does not require pretraining on other data modalities, which simplifies the training process and avoids potential biases from pretraining data. Our findings indicate that the proposed model works particularly well with datasets that have a high proportion of categorical columns.

References

[BK96] Barry Becker and Ronny Kohavi. Adult.
UCI Machine Learning Repository, 1996. DOI: https://doi.org/10.24432/C5XW20.

[BSL+22] Vadim Borisov, Kathrin Seßler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. Language models are realistic tabular data generators. arXiv preprint arXiv:2210.06280, 2022.

[FIC18] FICO. Explainable machine learning challenge. https://community.fico.com/s/explainable-machine-learning-challenge, 2018. Accessed: 2024-09-13.

[GDW+22] Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar, Marc Sun, and Benjamin Bossan. Accelerate: Training and inference at scale made simple, efficient and adaptable. https://github.com/huggingface/accelerate, 2022.

[HG16] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.

[HS22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.

[Kag21] Kaggle. Lab diabetes readmission prediction. https://www.kaggle.com/c/1056lab-diabetes-readmission-prediction, 2021. Accessed: 2024-09-13.

[KB14] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[lia18] Home equity line of credit (HELOC) dataset. https://www.kaggle.com/datasets/averkiyoliabev/home-equity-line-of-creditheloc, 2018. Accessed: 2024-09-13.

[nug17] California housing prices. https://www.kaggle.com/datasets/camnugent/california-housing-prices, 2017. Accessed: 2024-09-13.

[PB97] R. Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.

[PGM+19] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019.

[PVG+11] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.

[SDG+14] Beata Strack, Jonathan P. DeShazo, Chris Gennings, Juan L. Olmo, Sebastian Ventura, Krzysztof J. Cios, and John N. Clore. Impact of HbA1c measurement on hospital readmission rates: Analysis of 70,000 clinical database patient records. BioMed Research International, 2014:1–11, 2014.

[SED+88] Jack W. Smith, James E. Everhart, William C. Dickson, William C. Knowler, and Robert S. Johannes. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. Proceedings of the Annual Symposium on Computer Application in Medical Care, pages 261–265, 1988.

[tej21] Tour travels customer churn prediction dataset. https://www.kaggle.com/datasets/tejashvi14/tour-travels-customer-churn-prediction, 2021. Accessed: 2024-09-13.

[XSCIV19] Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Modeling tabular data using conditional GAN. Advances in Neural Information Processing Systems, 32, 2019.

A Sampling algorithm

Algorithm 1: Sampling algorithm.
1: x_t ← random binary tensor
2: y ← condition/label
3: y_e ← apply condition encoding
4: threshold ← threshold value to binarize    ▷ Default 0.5
5: q_θ(x̂_0, ẑ_t | x_t, t, y_e) ← pre-trained denoiser network
6: for t ∈ {T, ..., 0} do    ▷ Selected timesteps
7:   x̂_0, ẑ_t ← q_θ(x̂_0, ẑ_t | x_t, t, y_e)
8:   x̂_0 ← σ(x̂_0) > threshold    ▷ Apply sigmoid and compare to threshold
9:   z_t ← get_binary_noise(t)    ▷ Generate random noise
10:  x_t ← x̂_0 ⊕ z_t    ▷ Update x_t using XOR with z_t
11: end for
12: return x_t

B Evaluation models hyperparameters

During evaluation, we follow the evaluation protocol proposed in [BSL+22]. The hyperparameter configurations of the evaluation models for the ML efficiency experiments are provided in Table 2.

Table 2: Evaluation models hyperparameters.
Dataset             LR max_iter   DT max_depth   RF max_depth   RF n_estimators
Travel              100           6              12             75
Sick                200           10             12             90
HELOC               500           6              12             78
Adult Income        1000          8              12             85
Diabetes            500           10             20             120
California Housing  -             10             12             85

C Runtime comparison

We compare the training and sampling times, the number of training epochs, batch sizes, and peak VRAM utilization of generative models. The results, including the number of training epochs and batch sizes required for each model to converge, are summarized in Table 3. Specifically, for TVAE, CopulaGAN, and CTGAN, we employed the default batch size of 500 and trained for 200 epochs; for Distill-GReaT and GReaT, we used a batch size of 32 and trained for 200 epochs; and for Binary Diffusion, a batch size of 256 and 500 epochs were utilized to ensure model convergence. For this study, we utilized the Adult Income dataset. All experiments were conducted on a PC with a single RTX 2080 Ti GPU, an Intel Core i9-9900K CPU 3.60 GHz with 16 threads, 64 GB of RAM, and Ubuntu 20.04 LTS as the operating system.

Table 3: Comparison of training and sampling times, and peak VRAM utilization.

Model             Epochs  Batch size  Training time  Sampling time (s)  Peak VRAM use
TVAE              200     500         2 min 21 sec   0.036±0.001        240 MiB
CopulaGAN         200     500         4 min 26 sec   0.101±0.003        258 MiB
CTGAN             200     500         4 min 33 sec   0.055±0.005        258 MiB
Distill-GReaT     200     32          5 h 7 min      7.104±0.025        8184 MiB
GReaT             200     32          7 h 33 min     11.441±0.034       8548 MiB
Binary Diffusion  5000    256         53 min 2 sec   0.347±0.006        266 MiB

D Implementation details

Denoiser Architecture. We use a similar denoiser architecture across all datasets, which takes as input a noisy vector x_t of size d, a timestep t, and an input condition y. The input size d corresponds to the size of the binary vector in each dataset. The input vector x_t is projected through a linear layer with 256 output units.
The timestep t is processed using a sinusoidal positional embedding, followed by two linear layers with 256 output units each, interleaved with GELU activation functions [HG16]. The input condition y is processed through a linear projector with 256 output units. The outputs of the timestep embedding and the condition projector are then combined via element-wise addition. This combined representation is subsequently processed by three ResNet blocks that incorporate timestep embeddings. Depending on the size of the binary representation for each dataset, the number of parameters varies between 1.1 million and 1.4 million.

Training and Sampling Details. We trained the denoiser for 50,000 steps using the Adam optimizer [KB14] with a learning rate of 1×10⁻⁴, a weight decay of 0, and a batch size of 256. To maintain a distilled version of the denoiser, we employed an Exponential Moving Average (EMA) with a decay rate of 0.995, updating it every 10 training steps. This distilled model was subsequently used for sampling. During training, we utilized classifier-free guidance with a 10% probability of using a zero token. The diffusion model was configured to perform 1,000 denoising steps during training. Given the relatively small size of our models, we opted for full-precision training. All training parameters are summarized in Table 4.

Table 4: Binary Diffusion training details.

config                               value
optimizer                            Adam
learning rate                        1e-4
weight decay                         0
batch size                           256
training steps                       500000
EMA decay                            0.995
EMA update frequency                 10
classifier-free guidance zero token  0.1
precision                            fp32
diffusion timesteps                  1000

We empirically observed that model performance, measured by accuracy for classification tasks and mean squared error (MSE) for regression tasks, deteriorates as the number of sampling steps increases. We selected 5 sampling steps and a guidance scale of 5 for all datasets to optimize performance.

Table 5: Binary Diffusion sampling details.
config          value
sampling steps  5
guidance scale  5
EMA             True

Environment. All experiments were conducted on a PC with a single RTX 2080 Ti GPU, an Intel Core i9-9900K CPU 3.60 GHz with 16 threads, 64 GB of RAM, and Ubuntu 20.04 LTS as the operating system. We utilized PyTorch [PGM+19] with the Accelerate [GDW+22] library for training generative models, and the scikit-learn [PVG+11] library for evaluating models.

E Effect of sampling steps

We empirically observed that model performance, measured by accuracy for classification tasks and mean squared error (MSE) for regression tasks, deteriorates as the number of sampling steps increases. Notably, for regression tasks, linear regression models show significantly poorer performance with an increasing number of sampling steps. For our analysis, we utilized an Exponential Moving Average (EMA) denoiser with a guidance scale of 5. Across all datasets, the optimal results were consistently achieved when the number of sampling steps was 5. The relationship between the number of sampling steps and model performance is illustrated in Figure 3.

Figure 3: Analysis of model performance for different numbers of sampling steps. Panels: (a) Travel, (b) Sick, (c) HELOC, (d) Adult Income, (e) Diabetes, (f) California Housing. DT stands for the Decision Tree model, RF stands for the Random Forest model, and LR stands for the Linear/Logistic Regression model. | 4 | 1 | The proposed Binary Diffusion model has fewer than 2 million parameters, making it lightweight compared to contemporary models that often exceed 100 million parameters. Given its focus on binary data, the model architecture is likely simpler, which will lead to faster training times. The training is performed on benchmark datasets like Travel, Adult Income, and Diabetes, which are commonly used in machine learning and typically range from a few thousand to tens of thousands of samples.
Estimating a moderate number of epochs (10-30) would provide sufficient training without excessive computational cost. Considering these factors, 4 hours is a reasonable estimate on a single high-performance GPU. Additionally, since the model's small size and low complexity allow for efficient training, training under 8 hours on a single GPU is plausible. | yes | Yes | Tabular | Tabular Data Generation using Binary Diffusion | 2024-09-20 0:00:00 | https://github.com/vkinakh/binary-diffusion-tabular | 1 | inside the project repo | around 2 hours | https://drive.google.com/file/d/154F-06anE1dsOik9zkn3uBqcw9t3Lz53/view?usp=sharing | Yes | -- I put some lines of code in Colab to make sure it runs. Please check the Colab file for more info. |
Kvasir-SEG | Yolo-SAM 2 | [] | Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model | 2024-09-14T00:00:00 | https://arxiv.org/abs/2409.09484v1 | [
"https://github.com/sajjad-sh33/yolo_sam2"
] | {'mean Dice': '0.866', 'mIoU': '0.764'} | [
"mean Dice",
"Average MAE",
"S-Measure",
"max E-Measure",
"mIoU",
"FPS",
"F-measure",
"Precision",
"Recall"
] | Given the following paper and codebase:
Paper: Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model
Codebase: https://github.com/sajjad-sh33/yolo_sam2
Improve the Yolo-SAM 2 model on the Kvasir-SEG dataset. The result
should improve on the following metrics: {'mean Dice': '0.866', 'mIoU': '0.764'}. You must use only the codebase provided.
 | SELF-PROMPTING POLYP SEGMENTATION IN COLONOSCOPY USING HYBRID YOLO-SAM 2 MODEL

Mobina Mansoori†, Sajjad Shahabodini†, Jamshid Abouei††, Konstantinos N. Plataniotis‡, and Arash Mohammadi†
†Intelligent Signal & Information Processing (I-SIP) Lab, Concordia University, Canada
‡Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto
††Department of Electrical Engineering, Yazd University, Iran

ABSTRACT

Early diagnosis and treatment of polyps during colonoscopy are essential for reducing the incidence and mortality of Colorectal Cancer (CRC). However, the variability in polyp characteristics and the presence of artifacts in colonoscopy images and videos pose significant challenges for accurate and efficient polyp detection and segmentation. This paper presents a novel approach to polyp segmentation by integrating the Segment Anything Model (SAM 2) with the YOLOv8 model. Our method leverages YOLOv8's bounding box predictions to autonomously generate input prompts for SAM 2, thereby reducing the need for manual annotations. We conducted exhaustive tests on five benchmark colonoscopy image datasets and two colonoscopy video datasets, demonstrating that our method exceeds state-of-the-art models in both image and video segmentation tasks. Notably, our approach achieves high segmentation accuracy using only bounding box annotations, significantly reducing annotation time and effort. This advancement holds promise for enhancing the efficiency and scalability of polyp detection in clinical settings. https://github.com/sajjad-sh33/YOLO_SAM2

Index Terms — Colorectal Cancer, Polyp Segmentation, Computer-Aided Diagnosis, YOLOv8, Segment Anything.

1. INTRODUCTION

Colorectal Cancer (CRC) is one of the leading causes of cancer-related deaths worldwide, with millions of new cases diagnosed each year [1]. Early detection and removal of polyps during colonoscopy significantly reduces the incidence and mortality of CRC.
However, the accuracy and efficiency of polyp detection and segmentation remain challenging due to the variability in polyp size, shape, and appearance, as well as the presence of various artifacts and noise in colonoscopy images and videos. Computer-Aided Diagnosis (CAD) systems for colonoscopy have shown significant potential in enhancing annotation efficiency and reducing the time required for diagnosis. The advent of deep learning has led to the development of numerous neural networks tailored for medical image segmentation [2]. Most current polyp segmentation algorithms rely on variations of the UNet [3] architecture to segment polyps [4]. Additionally, some approaches utilize Res2Net [5] as the backbone and incorporate various regularization techniques to improve segmentation accuracy [6]. Recently, attention-based methods have also been introduced to further enhance performance [7]. (This work was partially supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada through the NSERC Discovery Grant RGPIN-2023-05654.)

Despite these advancements, the process of annotating medical images remains labor-intensive and costly, as it typically requires medical expertise. This challenge has sparked interest in transfer learning, which applies knowledge from large-scale natural image datasets to specific medical imaging tasks. Recent breakthroughs in foundational models, particularly the Segment Anything Model (SAM) [8], have shown exceptional performance in generating high-quality object masks from various input prompts. SAM's success in multiple computer vision benchmarks has gained significant attention for its application in medical image segmentation [9], including polyp image segmentation [10, 11]. The introduction of SAM 2 [12] has further improved real-time segmentation, enabling the processing of entire video sequences based on single-frame annotations.
This advancement reduces user interaction time and enhances overall performance, making SAM 2 a valuable tool in medical diagnostics and other applications [13, 14]. Despite the strong zero-shot capabilities of SAM 2 for segmenting medical images, it typically requires input prompts provided by human experts. This reliance on manual input limits the efficiency and scalability of the segmentation process. To address this limitation, we propose a self-prompting segmentation model that autonomously generates input prompts by integrating the YOLOv8 [15] model's pre-trained capabilities. By combining YOLO's bounding box predictions with SAM 2's segmentation capabilities, our method aims to enhance the accuracy and efficiency of polyp segmentation in colonoscopy images and videos. In this work, we utilize only bounding box data for training our model to perform the segmentation task. This approach significantly reduces annotation time compared to previous methods that required detailed ground truth segmentation masks for training. By leveraging bounding box annotations, our model can achieve high segmentation accuracy with less manual effort, making it more practical for large-scale applications. This approach not only tackles variability in polyp features but also reduces the computational load for large-scale segmentation tasks.

2. METHODOLOGY

Our proposed model integrates two state-of-the-art algorithms, YOLO and SAM 2, to effectively detect and segment polyps in colonoscopy images and videos. The process begins with the YOLO model, which identifies potential polyps and places bounding boxes around them. These bounding boxes are then employed as input prompts for the SAM 2 model, which performs precise segmentation of the polyps based on the provided coordinates (Fig. 1).

Fig. 1: Architecture of the Self-Prompting YOLO-SAM 2 Model for Polyp Segmentation.
A pre-trained YOLOv8 model is employed for its superior speed and accuracy in real-time object detection tasks. YOLOv8 processes the colonoscopy images through a Convolutional Neural Network (CNN), extracting essential features and predicting bounding boxes around potential polyps. The box coordinates of these bounding boxes are then passed to the SAM 2 model as prompts. The SAM 2 model, known for its lightweight nature and high accuracy, refines the detection results provided by YOLOv8. It uses the bounding box coordinates to perform detailed segmentation, delineating the exact boundaries of the polyps. The SAM 2 model architecture consists of several key components. Image Encoder: utilizes a transformer-based architecture to extract high-level features from both images and video frames; this component is responsible for understanding visual content at each timestep. Prompt Encoder: processes user-provided prompts to guide the segmentation task, allowing SAM 2 to adapt to user input and target specific objects within a scene. Memory Mechanism: includes a memory encoder, memory bank, and memory attention module; these components collectively store and utilize information from past frames, enabling the model to maintain consistent object tracking over time. Mask Decoder: produces the final segmentation masks based on the encoded image features and prompts. Due to the lightweight nature of both models, their combination can still serve as a real-time segmentation model for polyp videos. During training, only the YOLOv8 model is fine-tuned on the bounding box dataset, while the SAM 2 weights are frozen and not fine-tuned. It is important to emphasize that in this study, the YOLOv8 model is dedicated solely to generating bounding boxes rather than segmentation masks.
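In practice, the handoff between the two stages is just a coordinate conversion: YOLO-family detectors typically report boxes in normalized (cx, cy, w, h) form, while a promptable segmenter such as SAM 2 expects absolute (x1, y1, x2, y2) pixel coordinates. A minimal self-contained sketch of that conversion (the function name and box conventions are illustrative assumptions, not taken from the paper's code):

```python
def yolo_box_to_xyxy(box, img_w, img_h):
    """Convert a normalized YOLO-style (cx, cy, w, h) box to absolute
    (x1, y1, x2, y2) pixel coordinates, the form typically passed as a
    box prompt to a promptable segmentation model such as SAM 2."""
    cx, cy, w, h = box
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return [x1, y1, x2, y2]

# A detection covering the central half of a 680x680 frame:
prompt = yolo_box_to_xyxy((0.5, 0.5, 0.5, 0.5), 680, 680)
print(prompt)  # [170.0, 170.0, 510.0, 510.0]
```

Since only these coordinates cross the boundary between the two models, the detector can be retrained or swapped without touching the frozen segmenter.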
Our approach focuses on utilizing only bounding box data to train the overall segmentation model, leveraging the zero-shot capabilities of SAM 2 to minimize the need for extensive data annotation. Additionally, our findings indicate that SAM 2 provides superior segmentation performance compared to YOLOv8.
3. EXPERIMENTS AND RESULTS
3.1. Datasets
In order to assess the performance of the SAM 2 model, we carried out comparative experiments utilizing five well-known benchmark colonoscopy image datasets, along with two additional video colonoscopy datasets. The following provides detailed descriptions of each dataset.
1) Kvasir-SEG [24]: curated by the Vestre Viken Health Trust in Norway, this dataset includes 1,000 polyp images and their corresponding ground truth from colonoscopy video sequences.
2) CVC-ClinicDB [25]: created in collaboration with the Hospital Clinic of Barcelona, Spain, it contains 612 images from colonoscopy examination videos, originating from 29 different sequences.
3) CVC-ColonDB [25]: this dataset comprises 380 polyp images, each with its corresponding ground truth, captured at a resolution of 500 × 570 pixels from 15 distinct videos.
4) ETIS [26]: it includes 196 polyp images, each captured at a resolution of 966 × 1225 pixels, aiding research in polyp detection and analysis.
5) CVC-300 [27]: comprising 60 polyp images, each captured at a resolution of 500 × 574 pixels.
6) PolypGen [28]: a comprehensive dataset for polyp detection and segmentation, including 1,537 polyp images, 4,275 negative frames, and 2,225 positive video sequences, collected from six medical centers across Europe and Africa.
7) SUN-SEG [29, 30]: this dataset includes 158,690 frames from 113 colonoscopy videos, with detailed annotations for each frame.
3.2. Implementation Details
As mentioned earlier, the SAM 2 model remains frozen, and only the YOLOv8 model is trained.
After conducting experiments, we chose the YOLOv8 medium version due to its superior performance compared to the large and small versions. Despite having 25 million parameters, it effectively handles real-time video segmentation when combined with the SAM 2 model. Additionally, we use the SAM 2 large model, which contains 224.4 million parameters. Despite its larger size, it maintains real-time performance, processing approximately 44 frames per second. This makes it suitable for applications requiring high accuracy and speed, such as video analysis and interactive segmentation tasks.

Table 1: A quantitative comparison on five benchmark datasets with state-of-the-art (SOTA) methods. The best performance is highlighted in bold.

| Methods | CVC-ClinicDB mIoU | CVC-ClinicDB mDice | Kvasir-SEG mIoU | Kvasir-SEG mDice | CVC-ColonDB mIoU | CVC-ColonDB mDice | ETIS mIoU | ETIS mDice | CVC-300 mIoU | CVC-300 mDice |
|---|---|---|---|---|---|---|---|---|---|---|
| UNet [3] | 0.755 | 0.823 | 0.746 | 0.818 | 0.436 | 0.504 | 0.335 | 0.398 | 0.627 | 0.710 |
| UNet++ [16] | 0.729 | 0.794 | 0.744 | 0.821 | 0.408 | 0.482 | 0.344 | 0.401 | 0.624 | 0.707 |
| MSEG [17] | 0.864 | 0.909 | 0.839 | 0.897 | 0.666 | 0.735 | 0.630 | 0.700 | 0.804 | 0.874 |
| SANet [7] | 0.859 | 0.916 | 0.847 | 0.904 | 0.669 | 0.752 | 0.654 | 0.750 | 0.815 | 0.888 |
| MSNet [18] | 0.869 | 0.918 | 0.849 | 0.905 | 0.668 | 0.747 | 0.650 | 0.720 | 0.796 | 0.862 |
| SSFormer [19] | 0.855 | 0.906 | 0.864 | 0.917 | 0.721 | 0.802 | 0.720 | 0.796 | 0.827 | 0.895 |
| CFA-Net [6] | 0.883 | 0.933 | 0.861 | 0.915 | 0.665 | 0.743 | 0.655 | 0.732 | 0.827 | 0.893 |
| Polyp-PVT [20] | 0.905 | 0.948 | 0.864 | 0.917 | 0.727 | 0.808 | 0.706 | 0.787 | 0.833 | 0.900 |
| SAM-Path [21] | 0.644 | 0.750 | 0.730 | 0.828 | 0.516 | 0.632 | 0.442 | 0.555 | 0.756 | 0.844 |
| SurgicalSAM [22] | 0.505 | 0.644 | 0.597 | 0.740 | 0.330 | 0.460 | 0.238 | 0.342 | 0.472 | 0.623 |
| IC-PolypSeg [2] | 0.89 | 0.938 | 0.859 | 0.91 | 0.729 | 0.807 | 0.692 | 0.774 | 0.844 | 0.909 |
| FAGF-Net [23] | 0.898 | 0.943 | **0.879** | **0.927** | 0.738 | 0.820 | 0.724 | 0.801 | 0.837 | 0.903 |
| Yolo-SAM | 0.810 | 0.895 | 0.742 | 0.852 | 0.808 | 0.893 | 0.875 | 0.933 | **0.865** | **0.925** |
| Yolo-SAM 2 | **0.909** | **0.951** | 0.764 | 0.866 | **0.848** | **0.918** | **0.904** | **0.949** | 0.817 | 0.889 |

For training, 80% of the dataset is used, with the remaining 20% reserved for evaluation.
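The 80/20 holdout described above amounts to a deterministic shuffle and slice of the sample list; a minimal sketch (the seed and file-list handling are assumptions, not the paper's code):

```python
import random

def train_eval_split(items, train_frac=0.8, seed=0):
    """Shuffle deterministically, then split into train/eval subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# e.g. splitting the 1,000 Kvasir-SEG image indices:
train, evals = train_eval_split(range(1000))
print(len(train), len(evals))  # 800 200
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing fine-tuned YOLOv8 variants on the same held-out 20%.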
We set the batch size to 64 and the input image size to 680. We implemented our model using an A100 (40 GB) GPU.
3.3. Evaluation Metrics
To assess the performance of our model, we use the following metrics. Intersection over Union (IoU): measures the overlap between the predicted segmentation mask and the ground truth mask. Dice Coefficient: evaluates the similarity between the predicted and ground truth masks. Additionally, for video datasets, we employ four other metrics: F-measure (Fβ^mn), sensitivity (Sen), enhanced-alignment measure (Eϕ^mn), and structure-measure (Sα).
3.4. Comparison with State-of-the-art Methods
In this section, we evaluate our model’s performance against several state-of-the-art methods for polyp segmentation in images. We provide both quantitative metrics and qualitative visualizations to highlight the strengths of our approach. Table 1 presents a quantitative comparison of our proposed model with various state-of-the-art methods across five publicly available polyp segmentation datasets, as mentioned in Section 3.1, using the metrics discussed in Section 3.3. Specifically, we compared our model with several CNN and ViT models, as well as recent SAM-based segmentation techniques. In our study, CNN-based models include UNet [3], UNet++ [16], MSEG [17], SANet [7], MSNet [18], and CFA-Net [6]. For transformer-based models, we evaluated SSFormer [19] and Polyp-PVT [20]. Furthermore, we explored the impact and effectiveness of SAM-Path [21], SurgicalSAM [22], IC-PolypSeg [2], and FAGF-Net [23], which are built upon the SAM model. (Fig. 2: Qualitative assessment of five polyp segmentation datasets using YOLO-SAM and YOLO-SAM 2.) Based on the results, we can conclude that YOLO-SAM 2 is capable of effectively locating and segmenting polyps without additional training.
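The IoU and Dice scores used throughout the result tables can be computed directly from binary masks; a small numpy sketch (the empty-mask convention is an assumption, not the paper's exact evaluation code):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # both empty: treat as perfect

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

pred = np.zeros((4, 4)); pred[:2, :] = 1  # top half predicted
gt = np.zeros((4, 4)); gt[:, :2] = 1      # left half is ground truth
print(iou(pred, gt))   # 4 overlapping pixels / 12 in the union = 0.333...
print(dice(pred, gt))  # 2*4 / (8+8) = 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why method rankings in the tables rarely differ between the mIoU and mDice columns.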
More importantly, among all image segmentation methods, YOLO-SAM 2 achieved the highest performance on several scores by a considerable margin (e.g., 9.8% and 11% higher mDice and mIoU on CVC-ColonDB [25], and 14.8% and 18% higher mDice and mIoU on ETIS-LaribPolypDB [26] than the previous-best methods). The qualitative assessment of YOLO-SAM 2 for polyp segmentation is also explored. Fig. 2 showcases the visualization outcomes compared to the YOLO-SAM model, utilizing two chosen benchmark datasets. Remarkably, the YOLO-SAM 2 model exhibits enhanced performance, producing segmentation results that are very close to the actual ground truth.

Table 2: Quantitative comparison on the four subsets of the SUN-SEG dataset, highlighting the top performance in bold.

Seen subsets:

| Methods | Easy Sα | Easy Eϕ^mn | Easy Fβ^mn | Easy Sen | Easy Dice | Hard Sα | Hard Eϕ^mn | Hard Fβ^mn | Hard Sen | Hard Dice |
|---|---|---|---|---|---|---|---|---|---|---|
| 2/3D [31] | 0.895 | 0.909 | 0.853 | 0.808 | 0.856 | 0.849 | 0.868 | 0.805 | 0.726 | 0.809 |
| PNS-Net [32] | 0.906 | 0.910 | 0.860 | 0.827 | 0.861 | 0.870 | 0.892 | 0.822 | 0.774 | 0.823 |
| PNS+ [29] | 0.917 | 0.924 | 0.878 | 0.837 | 0.888 | 0.887 | 0.929 | 0.849 | 0.780 | 0.855 |
| FLA-Net [33] | 0.906 | 0.922 | 0.867 | 0.851 | 0.875 | 0.859 | 0.892 | 0.810 | 0.785 | 0.809 |
| SLT-Net [34] | 0.927 | 0.961 | 0.914 | 0.888 | 0.906 | **0.894** | **0.943** | 0.874 | **0.851** | **0.866** |
| YOLO-SAM 2 | **0.937** | **0.971** | **0.958** | **0.923** | **0.945** | 0.893 | 0.931 | **0.902** | 0.805 | 0.865 |

Unseen subsets:

| Methods | Easy Sα | Easy Eϕ^mn | Easy Fβ^mn | Easy Sen | Easy Dice | Hard Sα | Hard Eϕ^mn | Hard Fβ^mn | Hard Sen | Hard Dice |
|---|---|---|---|---|---|---|---|---|---|---|
| 2/3D [31] | 0.786 | 0.777 | 0.708 | 0.603 | 0.722 | 0.786 | 0.775 | 0.688 | 0.607 | 0.706 |
| PNS-Net [32] | 0.767 | 0.744 | 0.664 | 0.574 | 0.676 | 0.767 | 0.755 | 0.656 | 0.579 | 0.675 |
| PNS+ [29] | 0.806 | 0.798 | 0.730 | 0.630 | 0.756 | 0.797 | 0.793 | 0.709 | 0.623 | 0.737 |
| FLA-Net [33] | 0.722 | 0.697 | 0.597 | 0.506 | 0.636 | 0.721 | 0.701 | 0.592 | 0.522 | 0.628 |
| SLT-Net [34] | 0.848 | 0.893 | 0.817 | 0.747 | 0.792 | 0.844 | 0.904 | 0.795 | 0.760 | 0.781 |
| YOLO-SAM 2 | **0.90** | **0.938** | **0.938** | **0.837** | **0.90** | **0.894** | **0.941** | **0.932** | **0.852** | **0.902** |

Table 3: A quantitative comparison on 23 sequence videos of the PolypGen dataset with state-of-the-art methods. The best performance is highlighted in bold.
| Methods | mDice | mIoU | Precision | Recall | F2 |
|---|---|---|---|---|---|
| UNet [3] | 0.4559 | 0.4049 | 0.5762 | 0.6307 | 0.4668 |
| UNet++ [16] | 0.4772 | 0.4272 | 0.6269 | 0.6198 | 0.4876 |
| ResU-Net++ [35] | 0.2105 | 0.1589 | 0.2447 | 0.5095 | 0.2303 |
| MSEG [17] | 0.4662 | 0.4171 | 0.6120 | 0.6217 | 0.4757 |
| ColonSegNet [36] | 0.3574 | 0.3058 | 0.4804 | 0.5296 | 0.3533 |
| UACANet [37] | 0.4748 | 0.4155 | 0.6108 | 0.6357 | 0.4886 |
| UNeXt [38] | 0.2998 | 0.2457 | 0.3661 | 0.5658 | 0.3201 |
| TransNetR [39] | 0.5168 | 0.4717 | 0.7881 | 0.5777 | 0.5105 |
| YOLO-SAM 2 | **0.808** | **0.678** | **0.858** | **0.764** | **0.781** |

3.5. Quantitative Results on Video Polyp Segmentation
In this section, we assess the performance of our proposed model for polyp video segmentation using the SUN-SEG and PolypGen datasets. Table 2 presents the results for four sub-test sets: SUN-SEG-Seen-Hard, SUN-SEG-Seen-Easy, SUN-SEG-Unseen-Hard, and SUN-SEG-Unseen-Easy. The terms Easy/Hard refer to the difficulty levels of the samples to be segmented; Seen indicates that the clips are from the same videos as the training set but do not overlap, and Unseen indicates that the clips are from videos that do not overlap with the training set. YOLO-SAM 2 has outperformed the previous best method by achieving a 7.5% higher Dice score on SUN-SEG-Unseen-Easy and an 8% higher Dice score on SUN-SEG-Unseen-Hard. Notably, these improvements were achieved without training the model with segmentation masks; instead, only bounding box annotations were used for training. This approach distinguishes our method from previous works. Additionally, the results for the PolypGen dataset, as shown in Table 3, demonstrate that the YOLO-SAM 2 model significantly improves video segmentation performance, achieving a remarkable 20.7% increase in mean intersection over union (mIoU) compared to previous state-of-the-art methods.
4. CONCLUSION
In this paper, we introduced a self-prompting segmentation model that combines the strengths of SAM 2 and YOLOv8 for real-time polyp detection in colonoscopy images and videos.
Our approach addresses the limitations of manual input prompts by leveraging YOLOv8’s pre-trained capabilities to generate bounding box predictions, which are then used by SAM 2 for accurate segmentation. Through comprehensive experiments on multiple benchmark datasets, we demonstrated that our model achieves superior performance compared to existing state-of-the-art methods. The significant improvements in segmentation accuracy, coupled with the reduced need for detailed ground truth masks, highlight the practicality and efficiency of our method for large-scale applications. Future work will focus on further optimizing the model for real-time clinical deployment and exploring its potential for other medical imaging tasks.
5. REFERENCES
[1] H. Sung, J. Ferlay, R. L. Siegel, M. Laversanne, I. Soerjomataram, A. Jemal, and F. Bray, “Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, 2021.
[2] Z. Chen, K. Wang, and Y. Liu, “Efficient polyp segmentation via integrity learning,” in ICASSP 2024 – 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 1826–1830.
[3] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18. Springer, 2015, pp. 234–241.
[4] Z. Yin, R. Wei, K. Liang, Y. Lin, W. Liu, Z. Ma, M. Min, and J. Guo, “Semantic memory guided image representation for polyp segmentation,” in ICASSP 2023 – 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
[5] S.-H. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, and P.
Torr, “Res2net: A new multi-scale backbone architecture,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 2, pp. 652–662, 2019.
[6] T. Zhou, Y. Zhou, K. He, C. Gong, J. Yang, H. Fu, and D. Shen, “Cross-level feature aggregation network for polyp segmentation,” Pattern Recognition, vol. 140, p. 109555, 2023.
[7] J. Wei, Y. Hu, R. Zhang, Z. Li, S. K. Zhou, and S. Cui, “Shallow attention network for polyp segmentation,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part I 24. Springer, 2021, pp. 699–708.
[8] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo et al., “Segment anything,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 4015–4026.
[9] J. Ma, Y. He, F. Li, L. Han, C. You, and B. Wang, “Segment anything in medical images,” Nature Communications, vol. 15, no. 1, p. 654, 2024.
[10] Y. Li, M. Hu, and X. Yang, “Polyp-sam: Transfer sam for polyp segmentation,” in Medical Imaging 2024: Computer-Aided Diagnosis, vol. 12927. SPIE, 2024, pp. 759–765.
[11] M. M. Rahman, M. Munir, D. Jha, U. Bagci, and R. Marculescu, “Pp-sam: Perturbed prompts for robust adaption of segment anything model for polyp segmentation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 4989–4995.
[12] N. Ravi, V. Gabeur, Y.-T. Hu, R. Hu, C. Ryali, T. Ma, H. Khedr, R. Rädle, C. Rolland, L. Gustafson et al., “Sam 2: Segment anything in images and videos,” arXiv preprint arXiv:2408.00714, 2024.
[13] Y. Zhang and Z. Shen, “Unleashing the potential of sam2 for biomedical images and videos: A survey,” arXiv preprint arXiv:2408.12889, 2024.
[14] M. Mansoori, S. Shahabodini, J. Abouei, K. N. Plataniotis, and A.
Mohammadi, “Polyp sam 2: Advancing zero shot polyp segmentation in colorectal cancer detection,” arXiv preprint arXiv:2408.05892, 2024.
[15] G. Jocher, A. Chaurasia, and J. Qiu, “YOLO by Ultralytics,” 2023.
[16] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: A nested u-net architecture for medical image segmentation.” Springer, 2018, pp. 3–11.
[17] C.-H. Huang, H.-Y. Wu, and Y.-L. Lin, “Hardnet-mseg: A simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean dice and 86 fps,” arXiv preprint arXiv:2101.07172, 2021.
[18] X. Zhao, L. Zhang, and H. Lu, “Automatic polyp segmentation via multi-scale subtraction network,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part I 24. Springer, 2021, pp. 120–130.
[19] J. Wang, Q. Huang, F. Tang, J. Meng, J. Su, and S. Song, “Stepwise feature fusion: Local guides global,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2022, pp. 110–120.
[20] B. Dong, W. Wang, D.-P. Fan, J. Li, H. Fu, and L. Shao, “Polyp-pvt: Polyp segmentation with pyramid vision transformers,” arXiv preprint arXiv:2108.06932, 2021.
[21] J. Zhang, K. Ma, S. Kapse, J. Saltz, M. Vakalopoulou, P. Prasanna, and D. Samaras, “Sam-path: A segment anything model for semantic segmentation in digital pathology,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2023, pp. 161–170.
[22] W. Yue, J. Zhang, K. Hu, Y. Xia, J. Luo, and Z. Wang, “Surgicalsam: Efficient class promptable surgical instrument segmentation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 7, 2024, pp. 6890–6898.
[23] Y. Li, Z. Zheng, W. Ren, Y. Nie, J. Zhang, and X.
Jia, “Frequency aware and graph fusion network for polyp segmentation,” in ICASSP 2024 – 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 1586–1590.
[24] D. Jha, P. H. Smedsrud, M. A. Riegler, P. Halvorsen, T. De Lange, D. Johansen, and H. D. Johansen, “Kvasir-seg: A segmented polyp dataset,” in MultiMedia Modeling: 26th International Conference, MMM 2020, Daejeon, South Korea, January 5–8, 2020, Proceedings, Part II 26. Springer, 2020, pp. 451–462.
[25] N. Tajbakhsh, S. R. Gurudu, and J. Liang, “Automated polyp detection in colonoscopy videos using shape and context information,” IEEE Transactions on Medical Imaging, vol. 35, no. 2, pp. 630–644, 2015.
[26] J. Silva, A. Histace, O. Romain, X. Dray, and B. Granado, “Toward embedded detection of polyps in wce images for early diagnosis of colorectal cancer,” International Journal of Computer Assisted Radiology and Surgery, vol. 9, pp. 283–293, 2014.
[27] D. Vázquez, J. Bernal, F. J. Sánchez, G. Fernández-Esparrach, A. M. López, A. Romero, M. Drozdzal, and A. Courville, “A benchmark for endoluminal scene segmentation of colonoscopy images,” Journal of Healthcare Engineering, vol. 2017, no. 1, p. 4037190, 2017.
[28] S. Ali, D. Jha, N. Ghatwary, S. Realdon, R. Cannizzaro, O. E. Salem, D. Lamarque, C. Daul, M. A. Riegler, K. V. Anonsen et al., “A multi-centre polyp detection and segmentation dataset for generalisability assessment,” Scientific Data, vol. 10, no. 1, p. 75, 2023.
[29] G.-P. Ji, G. Xiao, Y.-C. Chou, D.-P. Fan, K. Zhao, G. Chen, and L. Van Gool, “Video polyp segmentation: A deep learning perspective,” Machine Intelligence Research, vol. 19, no. 6, pp. 531–549, 2022.
[30] M. Misawa, S.-e. Kudo, Y. Mori, K. Hotta, K. Ohtsuka, T. Matsuda, S. Saito, T. Kudo, T. Baba, F. Ishida et al.
, “Development of a computer-aided detection system for colonoscopy and a publicly accessible large colonoscopy video database (with video),” Gastrointestinal Endoscopy, vol. 93, no. 4, pp. 960–967, 2021.
[31] J. G.-B. Puyal, K. K. Bhatia, P. Brandao, O. F. Ahmad, D. Toth, R. Kader, L. Lovat, P. Mountney, and D. Stoyanov, “Endoscopic polyp segmentation using a hybrid 2d/3d cnn,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part VI 23. Springer, 2020, pp. 295–305.
[32] G.-P. Ji, Y.-C. Chou, D.-P. Fan, G. Chen, H. Fu, D. Jha, and L. Shao, “Progressively normalized self-attention network for video polyp segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2021, pp. 142–152.
[33] J. Lin, Q. Dai, L. Zhu, H. Fu, Q. Wang, W. Li, W. Rao, X. Huang, and L. Wang, “Shifting more attention to breast lesion segmentation in ultrasound videos,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2023, pp. 497–507.
[34] X. Cheng, H. Xiong, D.-P. Fan, Y. Zhong, M. Harandi, T. Drummond, and Z. Ge, “Implicit motion handling for video camouflaged object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 13864–13873.
[35] D. Jha, P. H. Smedsrud, M. A. Riegler, D. Johansen, T. De Lange, P. Halvorsen, and H. D. Johansen, “Resunet++: An advanced architecture for medical image segmentation,” in 2019 IEEE International Symposium on Multimedia (ISM). IEEE, 2019, pp. 225–2255.
[36] D. Jha, S. Ali, N. K. Tomar, H. D. Johansen, D. Johansen, J. Rittscher, M. A. Riegler, and P. Halvorsen, “Real-time polyp detection, localization and segmentation in colonoscopy using deep learning,” IEEE Access, vol. 9, pp. 40496–40510, 2021.
[37] T. Kim, H. Lee, and D.
Kim, “Uacanet: Uncertainty augmented context attention for polyp segmentation,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 2167–2175.
[38] J. M. J. Valanarasu and V. M. Patel, “Unext: Mlp-based rapid medical image segmentation network,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2022, pp. 23–33.
[39] D. Jha, N. K. Tomar, V. Sharma, and U. Bagci, “Transnetr: Transformer-based residual network for polyp segmentation with multi-center out-of-distribution testing,” in Medical Imaging with Deep Learning. PMLR, 2024, pp. 1372–1384.
| 4 | 1 | The YOLOv8 medium model has 25 million parameters and the SAM 2 large model has 224.4 million parameters. With a batch size of 64 and an input image size of 680, it is fairly demanding but feasible on a single GPU, especially since the paper states they used an A100 GPU (40 GB). Given the datasets involved (around 5,000 images and 158,690 video frames, if including the entire training set), training could reasonably be expected to complete in about 4 hours with a good setup and optimized training routines. For deep learning models of this scale, training on a single A100 GPU for several epochs across substantial data usually fits well within reasonable compute time estimates. | yes | Yes | CV | Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model | 2024-09-14 0:00:00 | https://github.com/sajjad-sh33/yolo_sam2 | 1 | downloaded from kaggle https://www.kaggle.com/datasets/debeshjha1/kvasirseg | 40sec * 50 epoch = 33.33 minutes | https://colab.research.google.com/drive/1_iOHO7njejU5yFtKPoF2477d_H0Cw4tf?usp=sharing | Yes | -- Fine tuning the model. I have patched the code and also put instructions on how to prepare data and fix the python file for the Kvasir dataset. |
Office-31 | EUDA | [] | EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer | 2024-07-31T00:00:00 | https://arxiv.org/abs/2407.21311v1 | [
"https://github.com/a-abedi/euda"
] | {'Accuracy': '92'} | [
"Accuracy",
"Avg accuracy"
] | Given the following paper and codebase:
Paper: EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer
Codebase: https://github.com/a-abedi/euda
Improve the EUDA model on the Office-31 dataset. The result
should improve on the following metrics: {'Accuracy': '92'}. You must use only the codebase provided.
| PREPRINT 1 EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer. Ali Abedi, Graduate Student Member, IEEE, Q. M. Jonathan Wu, Senior Member, IEEE, Ning Zhang, Senior Member, IEEE, Farhad Pourpanah, Senior Member, IEEE.
Abstract — Unsupervised domain adaptation (UDA) aims to mitigate the domain shift issue, where the distribution of training (source) data differs from that of testing (target) data. Many models have been developed to tackle this problem, and recently vision transformers (ViTs) have shown promising results. However, the complexity and large number of trainable parameters of ViTs restrict their deployment in practical applications. This underscores the need for an efficient model that not only reduces trainable parameters but also allows for adjustable complexity based on specific needs while delivering comparable performance. To achieve this, in this paper we introduce an Efficient Unsupervised Domain Adaptation (EUDA) framework. EUDA employs DINOv2, which is a self-supervised ViT, as a feature extractor followed by a simplified bottleneck of fully connected layers to refine features for enhanced domain adaptation. Additionally, EUDA employs the synergistic domain alignment loss (SDAL), which integrates cross-entropy (CE) and maximum mean discrepancy (MMD) losses, to balance adaptation by minimizing classification errors in the source domain while aligning the source and target domain distributions. The experimental results indicate the effectiveness of EUDA in producing comparable results as compared with other state-of-the-art methods in domain adaptation with significantly fewer trainable parameters, between 42% and 99.7% fewer. This showcases the ability to train the model in a resource-limited environment. The code of the model is available at: https://github.com/A-Abedi/EUDA.
Index Terms — Unsupervised domain adaptation, vision transformers, maximum mean discrepancy, self-supervised learning. I.
INTRODUCTION
Unsupervised domain adaptation (UDA) has shown promising results in addressing the domain shift problem, where the distribution of the training data (source domain) differs from that of the test data (target domain), and knowledge transfer [1]. The domain shift problem can significantly degrade the performance of conventional deep neural networks (DNNs) when applied directly to the target domain [2]. UDA overcomes this issue by leveraging unlabeled data from the target domain to align feature distributions and enhance model adaptation [3]. Additionally, traditional DNNs often rely on a large number of annotated samples, which limits their applicability in conditions where labeling is expensive, such as autonomous driving [4]. UDA solves this problem by transferring knowledge from previously labeled data (source domain) to unlabeled data in the target domain, thereby enhancing model performance without the need for extensive new annotations. (Author affiliations: A. Abedi, Q. M. J. Wu, and N. Zhang are with the Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON N9B 3P4, Canada, e-mail: abedi3@uwindsor.ca, jwu@uwindsor.ca, ning.zhang@uwindsor.ca; F. Pourpanah is with the Department of Electrical and Computer Engineering, Queen’s University, Kingston, ON K7L 3N6, Canada, e-mail: f.pourpanahnavan@queensu.ca.) Existing UDA methods can be classified into: (i) adversarial methods, which use generative adversarial networks to generate domain-invariant features [5]–[7]; (ii) reconstruction methods, which employ encoder-decoder structures to learn invariant features by reconstructing domains [8], [9]; (iii) transformation methods, which optimize inputs through transformations to enhance adaptation [3], [10]; and (iv) discrepancy methods, which align domain features through statistical metrics like maximum mean discrepancy (MMD) [11]–[13] and correlational alignment (CORAL) [14], [15].
Regardless of which UDA method is chosen, the backbone of the network for feature extraction plays a crucial role. The backbone is responsible for extracting features from the input data, and the more generalized and domain-invariant these features are, the better the UDA results will be. Recently, vision transformers (ViTs) [16] have revolutionized computer vision by adapting the transformer architecture, originally designed for natural language processing [17], to process images as sequences of patches. This approach enables ViTs to capture global image dependencies, offering a more comprehensive understanding than the local feature extraction typical of convolutional neural networks (CNNs) [18]. This capability makes ViTs well-suited for complex tasks such as object recognition and scene understanding [16] and particularly effective in UDA [19]–[21]. The success of ViTs in large-scale pre-training also shows significant performance improvements on benchmarks, particularly in UDA, making the integration of self-supervised learning (SSL) strategies effective [22]–[24]. SSL allows ViTs to learn detailed representations from vast, unlabeled datasets, enhancing generalization and robustness across various visual domains [22], [25], [26]. Among these methods, DINOv2 [24] stands out for its ability to extract general-purpose features [24], [26]. This positions DINOv2 as a powerful tool for advancing UDA, highlighting its transformative potential in modern visual computing. Despite the promising performance of state-of-the-art UDA models, they often follow complex frameworks, particularly those utilizing ViT architectures as the backbone, which involve a significant number of learnable parameters. These parameters need powerful computational resources for training. This complexity not only increases the cost of deployment but also limits their applicability in environments with limited resources. Therefore, there is a critical need for UDA models that are inherently simpler and have fewer trainable parameters. Such models would ensure more stable training and better generalization, and offer the versatility to adjust their complexity based on specific task demands and the complexity of various applications. By achieving this, we can facilitate more efficient domain adaptation across a wider range of practical settings, making advanced UDA techniques more accessible and sustainable.
Fig. 1: Conceptual diagram representing the impact of the SDAL in aligning the data distributions of the source and target domains while also enhancing classification accuracy. By integrating MMD with CE loss, SDAL effectively minimizes domain discrepancies and optimizes classification outcomes in a simulated environment.
In this paper, we introduce a novel Efficient Unsupervised Domain Adaptation framework, known as EUDA, that utilizes the capabilities of self-supervised ViTs to address the challenges of UDA. Specifically, EUDA integrates DINOv2 as a feature extractor with a simple yet effective bottleneck consisting of fully connected layers. Using the fully connected layers aims to refine the features extracted by DINOv2 for effective domain adaptation. Additionally, inspired by [12], EUDA uses the synergistic domain alignment loss (SDAL), which combines cross-entropy (CE) loss and MMD loss. This hybrid loss function effectively balances the adaptation by minimizing classification errors on the source domain while aligning the distributions between the source and target domains, as shown in Fig. 1. By leveraging the self-supervised pre-training of DINOv2 and the computational efficiency of the MMD, our approach offers a promising solution to UDA. It reduces the dependency on large, complex models and extensive computational resources, making it not only effective but also suitable for real-world applications where adaptability and efficiency are crucial.
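The SDAL described above sums a supervised CE term on source batches with an MMD term that penalizes the distance between the mean kernel embeddings of source and target features. A minimal numpy sketch of a biased RBF-kernel squared-MMD estimator (the kernel and its bandwidth here are illustrative assumptions; DAN-style implementations typically use a multi-kernel MMD):

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=0.05):
    """Biased estimate of the squared MMD between two feature batches."""
    return (rbf_kernel(source, source, gamma).mean()
            + rbf_kernel(target, target, gamma).mean()
            - 2.0 * rbf_kernel(source, target, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8))       # source-domain features
tgt_near = rng.normal(0.0, 1.0, size=(64, 8))  # target drawn from the same distribution
tgt_far = rng.normal(3.0, 1.0, size=(64, 8))   # target with a large domain shift
print(mmd2(src, tgt_near))  # small: distributions already aligned
print(mmd2(src, tgt_far))   # much larger: strong domain discrepancy
```

An SDAL-style objective would then take the form loss = CE(source_logits, source_labels) + λ · MMD²(source_features, target_features), with λ trading classification accuracy on the source against cross-domain alignment.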
To the best of our knowledge, this is the first application of DINOv2 within a UDA framework, marking a significant advancement in leveraging self-supervised learning techniques for addressing domain adaptation challenges. In summary, the main contributions of this paper are as follows:
• To the best of our knowledge, we are the first to apply DINOv2, a self-supervised vision transformer, to UDA, utilizing its strength to generate robust, domain-invariant features.
• We propose EUDA (Efficient Unsupervised Domain Adaptation), a novel framework that leverages self-supervised ViTs and MMD to tackle the challenges of UDA. EUDA features a flexible architecture with a simple bottleneck of fully connected layers, which requires significantly fewer parameters to be tuned and can be adjusted according to the complexity of the problem.
• We perform comprehensive experiments and ablation studies to validate the effectiveness of our proposed method on multiple benchmarks.
The rest of this paper is organized as follows: Section II reviews UDA, ViTs, and self-supervised learning approaches within ViTs. Section III details the proposed EUDA model. The experimental results across various configurations are presented in Section IV. Finally, Section V concludes the paper with insights and implications of our findings.
II. RELATED WORK
A. Unsupervised Domain Adaptation
As stated in the introduction, the domain shift problem is the major concern in UDA. Over the years, numerous methods have been developed to tackle this issue, aiming to enable effective model performance on the target domain without relying on labeled data [27]. Adversarial methods use adversarial training to create domain-invariant features. For example, DANN [5] leverages a gradient reversal layer that inverts the gradient sign during training. ADDA [6] trains a source encoder on labeled images and mixes source and target images to confuse the discriminator.
Adversarial methods are computationally intensive, often exhibit unstable training processes, and do not always guarantee accurate feature mapping. Transformation methods enhance domain adaptation by preprocessing input samples to optimize their condition for model training. DDA [10] preprocesses input data to align signal-to-noise ratios and reduce domain shifts, while TransPar [3] applies the lottery ticket hypothesis to identify and adjust transferable network parameters for better cross-domain generalization. These approaches are simple and effective but may not capture all domain differences. Reconstruction methods utilize an encoder-decoder setup to harmonize features across domains by reconstructing target domain images from source domain data. MTAE [8] utilized a multitask autoencoder to reconstruct images from multiple domains. DSNs [9] enhance model performance by dividing image representations into domain-specific and shared subspaces. This improves generalization and surpasses other adaptation methods. However, these methods usually face challenges with high computational costs, training instability, and potential overfitting to the source domain. Discrepancy methods have emerged as particularly effective for UDA. DAN [12] embeds task-specific layer representations into a reproducing kernel Hilbert space (RKHS) and uses MMD to explicitly match the mean embeddings of different domain distributions. WeightedMMD [13] introduces a weighted MMD that incorporates class-specific auxiliary weights to address class weight bias in domain adaptation. This approach optimizes feature alignment between source and target domains by considering class prior distributions. Joint adaptation networks [28] align the joint distributions of multiple domain-specific layers across domains using a Joint MMD criterion to improve domain adaptation by considering the combined shift in input features and output labels.

B. Vision Transformer

Transformers, initially introduced by Vaswani et al. [17], have demonstrated exceptional performance across various language tasks. The core of their success lies in the attention mechanism, which excels at capturing long-range dependencies. ViT [16] represents a groundbreaking approach to applying transformers in vision tasks. It treats images as sequences of fixed-size, non-overlapping patches. Unlike CNNs that depend on inductive biases such as locality and translation equivariance, ViT leverages the power of large-scale pre-training data and global context modeling. ViT offers a straightforward yet effective balance between accuracy and computational efficiency [18]. In the context of UDA, ViTs have demonstrated remarkable potential. TVT [19] introduces a transferability adaptation module to guide the attention mechanism and a discriminative clustering technique to enhance feature diversity. CDTrans [20] consists of a triple-branch structure with weight-sharing and cross-attention to align features from source and target domains, alongside a two-way center-aware pseudo labeling strategy to improve label quality. WinTR [29] uses two classification tokens within a transformer model to learn distinct domain mappings with domain-specific classifiers. This enhances cross-domain knowledge transfer through source-guided label refinement and single-sided feature alignment. PMTrans [21] combines patches from both domains using game-theoretical principles, mixup losses, and attention maps for effective domain alignment and feature learning. While these methods have shown promising performance in solving UDA problems, they typically rely on complex architectures with extensive trainable parameters and sophisticated training regimes, including multi-branch transformers, cross-attention, adversarial training, game-theoretical principles, and mixup losses.
Furthermore, these models generally require training the entire network, resulting in a substantial computational burden. As a result, achieving promising outcomes necessitates extensive training on large-scale models, limiting their practical applicability in resource-constrained environments [19]–[21].

C. Self-supervised Vision Transformer

SSL has revolutionized the field of computer vision by enabling models to learn effective representations from unlabeled data, eliminating the dependency on large annotated datasets. These models learn representations by performing pre-text tasks, such as rotation prediction [30] and image colorization [31], and then apply the learned representations to downstream tasks. In the context of ViTs, SSL has been pivotal in enhancing their ability to extract robust, domain-invariant features. Jigsaw-ViT [32] integrates the jigsaw puzzle-solving problem into vision transformer architectures for improving image classification. EsViT [33] utilizes a multi-stage Transformer architecture to reduce computational complexity and introduces a novel region-matching pre-training task. Among recent advancements in SSL for ViTs, DINO [26] and DINOv2 [24] have notably enhanced SSL by scaling up ViTs to effectively match representations across different views of the same image. With improvements like automatic data curation and innovative loss functions, DINOv2 excels in stability and efficiency. It can learn domain-invariant features essential for image classification and other vision tasks. Its capacity to generate robust feature maps makes DINOv2 an excellent choice as a feature extractor and representation generator. Because of these properties, we use DINOv2 in this research to significantly boost performance and generalization across various domains.

III. METHODOLOGY

A. Problem Formulation

UDA aims to learn a function $f:\mathcal{X} \to \mathcal{Y}$ that performs well on an unlabeled target domain by leveraging information from a labeled source domain.
Let $\mathcal{X}$ and $\mathcal{Y}$ denote the input and label spaces, respectively. $D_s=\{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ denotes the source domain data, where $x_i^s \in \mathcal{X}$ and $y_i^s \in \mathcal{Y}$ represent the $i$th input-output pair, and $D_t=\{x_j^t\}_{j=1}^{n_t}$ denotes the target domain data, where $x_j^t \in \mathcal{X}$ is the $j$th input sample without a label; $n_s$ and $n_t$ are the numbers of samples in the source and target domains, respectively. The goal of UDA is to train a model on labeled source data $D_s$ and unlabeled target data $D_t$ in such a way that it performs well in predicting the target data labels $y_j^t$.

Algorithm 1 The Training Procedure of EUDA
1: ϕ ← feature extractor with pre-trained DINOv2, freeze weights
2: β ← bottleneck
3: ψ ← classifier
4: θ ← optimizer
5: λ ← lambda value
6: for epoch in range(num_epochs) do
7:   for (x_s, y_s), x_t in DataLoader do
8:     f_s ← ϕ(x_s)
9:     f_t ← ϕ(x_t)
10:    ŷ ← ψ(β(f_s))
11:    L_ce ← cross_entropy(ŷ, y_s)
12:    L_mmd ← compute_mmd(β(f_s), β(f_t))
13:    L ← λ × L_ce + (1 − λ) × L_mmd
14:    θ.zero_grad()
15:    L.backward()
16:    θ.step()
17:  end for
18: end for
19: Evaluate on target data

B. Model Overview

Fig. 2 shows an overview of our framework for addressing UDA problems. The model receives images from labeled source and unlabeled target domains. It then extracts features using a pre-trained self-supervised ViT, while its weights are frozen to ensure stability and efficiency.

Fig. 2. The architecture of the proposed EUDA model. The process begins with extracting features from both labeled source and unlabeled target domains by a pre-trained self-supervised ViT. The extracted features pass through a bottleneck consisting of several fully connected layers. The output from the bottleneck is utilized in two ways: to compute the MMD loss and to feed the classification head. The classification results on the source domain are then applied to calculate the CE component of the SDAL, which combines MMD loss and CE loss to effectively train the model under unsupervised domain adaptation conditions.

Fig. 3. Self-distillation with no labels. Image from [26].
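The loop in Algorithm 1 can be sketched in PyTorch roughly as follows. This is a minimal illustration, not the released implementation: `phi`, `beta`, and `psi` stand for the frozen feature extractor, bottleneck, and classifier, and a simple linear-kernel MMD (squared distance between mean embeddings) stands in for the RKHS version defined later in Eq. (3).

```python
import torch
import torch.nn as nn

def linear_mmd(fs: torch.Tensor, ft: torch.Tensor) -> torch.Tensor:
    # Squared distance between mean feature embeddings (Eq. (3) with a linear kernel).
    return (fs.mean(dim=0) - ft.mean(dim=0)).pow(2).sum()

def train_step(phi, beta, psi, opt, xs, ys, xt, lam=0.7):
    """One EUDA update: SDAL = lam * CE + (1 - lam) * MMD (Algorithm 1, lines 8-16)."""
    with torch.no_grad():           # feature extractor is frozen
        fs, ft = phi(xs), phi(xt)
    bs, bt = beta(fs), beta(ft)     # bottleneck refines both domains
    loss = lam * nn.functional.cross_entropy(psi(bs), ys) \
         + (1 - lam) * linear_mmd(bs, bt)
    opt.zero_grad()
    loss.backward()                 # gradients flow only into beta and psi
    opt.step()
    return loss.item()
```

In the full model, `phi` would be DINOv2 with frozen weights, and only the bottleneck, its normalization layers, and the classifier receive gradient updates.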
The extracted features from both domains are fed into a bottleneck composed of multiple fully connected layers to refine and condense the information. The bottleneck's output serves a dual purpose. Firstly, it is fed into the classification head to compute the CE loss on the source domain. Secondly, it contributes to the computation of the MMD loss, which minimizes the distance between the source and target domain distributions in RKHS to align their feature representations. The pseudocode for our training procedure of EUDA is outlined in Algorithm 1. In the following subsections, we present the different components of our model in detail.

C. Feature Extractor

In our effort to leverage the capabilities of ViTs for domain adaptation, we adopt DINOv2 [24] as a self-supervised pre-trained model for feature extraction. It utilizes self-distillation to derive insights from unlabeled data autonomously. Central to DINOv2's design is its twin-network structure, which includes a student and a teacher network. Both networks employ the same underlying architecture based on ViTs. During training, these networks process different augmentations of the same image. They aim to extract consistent features regardless of the input variations. During the training phase (see Fig. 3), the student network's parameters are continually updated, while the teacher network's parameters are progressively updated through an exponential moving average of the student's parameters. This ensures that the teacher model remains robust and generalizable. Moreover, DINOv2 uses registers [34] to improve the performance and interpretability of ViTs by addressing the problem of artifacts in feature maps, commonly observed in both supervised and self-supervised ViT networks. Registers are additional tokens added to the input sequence of ViTs to absorb high-norm token computations that typically occur in low-information areas of images.
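In code, using DINOv2 purely as a frozen feature extractor amounts to disabling gradients on every backbone parameter. The sketch below illustrates the freezing logic on a tiny stand-in module; the `torch.hub` line is the usual way to obtain DINOv2 and is left commented out because it downloads weights.

```python
import torch.nn as nn

def freeze(module: nn.Module) -> nn.Module:
    """Freeze all parameters so the module serves only as a feature extractor."""
    for p in module.parameters():
        p.requires_grad_(False)
    module.eval()  # also fixes dropout / normalization statistics
    return module

def trainable_params(module: nn.Module) -> int:
    """Count parameters that would receive gradient updates."""
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# In EUDA the backbone would be DINOv2, e.g. (downloads weights, hence commented):
# backbone = freeze(torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14'))
```

With the backbone frozen, `trainable_params` over the whole model counts only the bottleneck, associated normalization layers, and classifier, which is what makes the parameter budgets in Table VI so small.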
In our study, we leverage DINOv2 as the primary feature extractor due to its robust training on a large-scale, diverse dataset through self-supervised learning. To enhance efficiency, we freeze the model's parameters. This approach reduces the computational burden and significantly decreases the number of trainable parameters, making our method notably more efficient compared to other UDA techniques that require extensive training. Consequently, our streamlined model is well-suited for deployment in real-world scenarios and on edge devices, where computational resources are often limited. Fig. 4 shows the attention map of the pre-trained DINOv2 base model without any fine-tuning, highlighting its robustness across four different domains of the Office-Home [35] dataset. This demonstrates the precision and accuracy of the features extracted by DINOv2, underscoring the effectiveness of the EUDA feature extraction approach. Using DINOv2 as the feature extractor significantly enhances our model by employing its robust, self-supervised pre-training to extract general-purpose features from images. The pre-trained DINOv2, with its weights frozen, ensures the extraction of high-quality features and reduces the number of trainable parameters. It can greatly simplify integration and adaptation in resource-limited settings, providing a strong foundation for effective domain adaptation.

Fig. 4. Attention maps of the pre-trained DINOv2 base model on an Alarm Clock from four different domains of the Office-Home dataset: Art, Clipart, Product, and Real World. This illustrates the model's robust feature extraction capability across diverse image contexts without any fine-tuning.

D. Bottleneck

The bottleneck component in our architecture consists of a series of fully connected layers. At the core of our model, the bottleneck output serves two primary functions. Firstly, it feeds into a classification head composed of a simple linear layer.
This setup utilizes minimal computational resources to classify images efficiently. Secondly, the output acts as the feature vector for calculating the MMD loss, which aims to minimize the distance between source and target feature vectors. This facilitates effective domain adaptation. The design of this dual-function bottleneck simplifies our model architecture and enhances its generalization capabilities across various domains. Its straightforward structure helps prevent overfitting, a frequent challenge in domain-specific applications, ensuring robustness for real-world deployments. Additionally, the minimalistic approach in the bottleneck design allows for easier maintenance and adaptability, which is crucial for meeting the dynamic requirements of domain adaptation tasks. The bottleneck follows a simple yet efficient design that allows flexible adjustments to the model's complexity. This flexibility allows us to adjust the model to meet specific performance needs or computational limits, which is particularly useful in settings with restricted resources.

E. Synergistic Domain Alignment Loss

The SDAL is a composite loss function designed for UDA models. It combines the strengths of CE loss and MMD loss, and is formally defined as:

$$\lambda \mathcal{L}_{CE} + (1-\lambda)\,\mathcal{L}_{MMD}, \qquad (1)$$

where $\mathcal{L}_{CE}$ and $\mathcal{L}_{MMD}$ are the CE and MMD losses, respectively, and $\lambda$ is a tunable hyperparameter that balances the influence of each loss component, allowing flexible adjustment according to specific domain adaptation needs. CE loss is a widely used loss function in classification tasks. The CE loss increases as the predicted probability diverges from the actual label, thus providing a robust metric for optimizing classification models.
The mathematical formulation of CE loss for a multi-class classification task is given by:

$$\mathcal{L}_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\log(\hat{y}_{i,c}), \qquad (2)$$

where $N$ is the number of samples, $C$ is the number of classes, $y_{i,c}$ is a one-hot indicator of whether class label $c$ is the correct classification for the $i$th sample, and $\hat{y}_{i,c}$ is the predicted probability of the $i$th sample belonging to class $c$. This formula penalizes the deviation of each predicted class probability from the actual class labels. MMD loss is defined as the squared distance between the mean embeddings of features from the two domains in an RKHS [36]. The MMD loss can be written as:

$$\mathcal{L}_{MMD} = \mathrm{MMD}^2(D_s, D_t) = \left\lVert \frac{1}{n_s}\sum_{i=1}^{n_s}\phi(x_i^s) - \frac{1}{n_t}\sum_{j=1}^{n_t}\phi(x_j^t) \right\rVert_{\mathcal{H}}^2, \qquad (3)$$

where $n_s$ and $n_t$ denote the numbers of samples in the source and target domains, respectively, $x_i^s$ and $x_j^t$ are the data samples from these domains, and $\phi:\mathcal{X} \to \mathcal{H}$ is a feature mapping function that projects the data into the RKHS $\mathcal{H}$. By minimizing the MMD loss, we aim to align the statistical properties of the source and target domains. The loss defined in Eq. (1) leverages the labeled data in the source domain to fine-tune the classification performance via CE loss while aligning the distribution of the source and target domain features through MMD. The synergy between these two loss components enhances the model's ability to perform effectively across different domains. This makes SDAL suitable for real-world applications where domain shift is a significant challenge. Additionally, minimizing the distance between the source and target domain distributions ensures that the feature representations from both domains are closely aligned, facilitating smoother and more reliable domain adaptation. Empirical studies have shown that MMD effectively reduces domain shift and improves model performance across various tasks [11], [12].

IV. EXPERIMENTAL STUDIES

A. Datasets

We examine the effectiveness of our proposed method across four benchmarks.
These datasets include: Office-31 [37], which consists of 4,652 images from three domains (Amazon, Webcam, and DSLR) across 31 categories; Office-Home [35], which consists of 15,500 images spread over four domains (Art, Clipart, Products, and Real World) within 65 categories; VisDA-2017 [38], which includes about 280,000 images across 12 categories, with synthetic images as the source domain and real images as the target; and DomainNet [39], which consists of 48,129 images from six domains (Clipart, Real, Sketch, Infograph, Painting, and Quickdraw) across 345 categories.

TABLE I: Comparison of our best model with other methods on the Office-Home dataset. The best performances and our model are marked in bold. Note that our model achieved these results with about 83% fewer learnable parameters than the best SOTA.

Model | Backbone | A→C | A→P | A→R | C→A | C→P | C→R | P→A | P→C | P→R | R→A | R→C | R→P | Avg.
Source Only | ResNet | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1
RevGrad | ResNet | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6
CDAN | ResNet | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8
TADA | ResNet | 53.1 | 72.3 | 77.2 | 59.1 | 71.2 | 72.1 | 59.7 | 53.1 | 78.4 | 72.4 | 60.0 | 82.9 | 67.6
SHOT | ResNet | 57.1 | 78.1 | 81.5 | 68.0 | 78.2 | 78.1 | 67.4 | 54.9 | 82.2 | 73.3 | 58.8 | 84.3 | 71.8
Source Only | ViT (Deit) | 55.6 | 73.0 | 79.4 | 70.6 | 72.9 | 76.3 | 67.5 | 51.0 | 81.0 | 74.5 | 53.2 | 82.7 | 69.8
Source Only | ViT | 66.16 | 84.28 | 86.64 | 77.92 | 83.28 | 84.32 | 75.98 | 62.73 | 88.66 | 80.10 | 66.19 | 88.65 | 78.74
TVT - Baseline | ViT | 71.94 | 80.67 | 86.67 | 79.93 | 80.38 | 83.52 | 76.89 | 70.93 | 88.27 | 83.02 | 72.91 | 88.44 | 80.30
CDTrans | ViT (Deit) | 68.8 | 85.0 | 86.9 | 81.5 | 87.1 | 87.3 | 79.6 | 63.3 | 88.2 | 82.0 | 66.0 | 90.6 | 80.5
TVT | ViT | 74.89 | 86.82 | 89.47 | 82.78 | 87.95 | 88.27 | 79.81 | 71.94 | 90.13 | 85.46 | 74.62 | 90.56 | 83.56
EUDA - LL (Ours) | ViT (DinoV2) | 80.6 | 84.9 | 88.4 | 85.2 | 88.0 | 88.6 | 76.6 | 77.4 | 86.7 | 87.7 | 82.5 | 92.8 | 84.9
PMTrans | ViT | 81.2 | 91.6 | 92.4 | 88.9 | 91.6 | 93.0 | 88.5 | 80.0 | 93.4 | 89.5 | 82.4 | 94.5 | 88.9

TABLE II: Comparison of our best model with other methods on the Office-31 dataset. The best performances and our model are marked in bold. Note that our model achieved these results with about 83% fewer learnable parameters than the best SOTA.

Model | Backbone | A→W | D→W | W→D | A→D | D→A | W→A | Avg.
Source Only | ResNet | 68.9 | 68.4 | 62.5 | 96.7 | 60.7 | 99.3 | 76.1
RevGrad | ResNet | 82.0 | 96.9 | 99.1 | 79.7 | 68.2 | 67.4 | 82.2
JAN | ResNet | 86.0 | 96.7 | 99.7 | 85.1 | 69.2 | 70.7 | 84.6
CDAN | ResNet | 86.0 | 96.7 | 99.7 | 85.1 | 69.2 | 70.7 | 84.6
SHOT | ResNet | 90.1 | 98.4 | 99.9 | 94.0 | 74.7 | 74.3 | 88.6
Source Only | ViT (Deit) | 90.8 | 90.4 | 76.8 | 98.2 | 76.4 | 100.0 | 88.8
Source Only | ViT | 89.2 | 98.9 | 100.0 | 88.8 | 80.1 | 79.8 | 89.5
TVT - Baseline | ViT | 91.6 | 99.0 | 100.0 | 90.6 | 80.2 | 80.2 | 90.3
TVT | ViT | 91.6 | 99.0 | 100.0 | 90.6 | 80.2 | 80.2 | 90.3
EUDA - LL (Ours) | ViT (DinoV2) | 95.3 | 100.0 | 100.0 | 93.4 | 80.5 | 82.9 | 92.0
CDTrans | ViT (Deit) | 97.0 | 96.7 | 81.1 | 99.0 | 81.9 | 100.0 | 92.6
PMTrans | ViT | 99.1 | 99.6 | 100.0 | 99.4 | 85.7 | 86.3 | 95.0

B. Evaluation

To evaluate the effectiveness of our UDA model, following the procedure in [19]–[21], we train our model by alternately using each domain as the source with labeled data and another as the target with unlabeled data, then evaluate using labeled data from the target domain. This procedure is repeated until all domains have been used as the source domain. This setup ensures that the model's ability to generalize to new, unseen environments is properly tested. The primary metric for evaluation is accuracy, specifically how accurately the model classifies new samples when trained on the source domain and tested on the target domain. This metric provides a clear measure of the model's performance in bridging the gap between the disparate data distributions of the source and target domains.

C. Baselines

We compare our model against a broad spectrum of state-of-the-art models. We categorize these models into ResNet- and ViT-based models. The ResNet-based models include RevGrad [5], CDAN [40], TADA [41], SHOT [42], JAN [28], BNM [43], MCD [44], SWD [45], and DTA [46].
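The evaluation protocol above enumerates every ordered (source, target) pair of domains. A trivial sketch of how the task list is generated (domain names taken from the dataset descriptions; the helper name is illustrative):

```python
def domain_pairs(domains):
    """All ordered (source, target) adaptation tasks, e.g. A->C, A->P, ... in Table I."""
    return [(s, t) for s in domains for t in domains if s != t]

# Office-Home: 4 domains -> 4 * 3 = 12 tasks, matching the 12 columns of Table I.
office_home_tasks = domain_pairs(["Art", "Clipart", "Product", "Real World"])
```

The same helper applied to Office-31's three domains yields the six tasks of Table II.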
ViT-based models include CDTrans [20], PMTrans [21], and TVT [19].

D. Implementation Details

In our domain adaptation model, we designed the bottleneck component with varying complexities to assess how architectural depth influences feature processing. Our configurations ranged from a simple single-layer network with 256 neurons (designated S for Small) to more complex setups like 2048-1024-512-256 (B for Base model), 4096-2048-1024-512-256 (L for Large model), and 8192-4096-2048-1024-512-256 (H for Huge model). We also employed different sizes of the DINOv2 model, base and large, as feature extractors to examine their effects on domain adaptation performance.

TABLE III: Comparison of our best model with other methods on the VisDA-2017 dataset. The best performance and our model are marked in bold. Note that our model achieved these results with about 42% fewer learnable parameters than the best SOTA.

Model | Backbone | plane | bcycl | bus | car | house | knife | mcycl | person | plant | sktbrd | train | truck | Avg.
Source Only | ResNet | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4
RevGrad | ResNet | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4
BNM | ResNet | 89.6 | 61.5 | 76.9 | 55.0 | 89.3 | 69.1 | 81.3 | 65.5 | 90.0 | 47.3 | 89.1 | 30.1 | 70.4
MCD | ResNet | 87.0 | 60.9 | 83.7 | 64.0 | 88.9 | 79.6 | 84.7 | 76.9 | 88.6 | 40.3 | 83.0 | 25.8 | 71.9
SWD | ResNet | 90.8 | 82.5 | 81.7 | 70.5 | 91.7 | 69.5 | 86.3 | 77.5 | 87.4 | 63.6 | 85.6 | 29.2 | 76.4
DTA | ResNet | 93.7 | 82.2 | 85.6 | 83.8 | 93.0 | 81.0 | 90.7 | 82.1 | 95.1 | 78.1 | 86.4 | 32.1 | 81.5
SHOT | ResNet | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 91.5 | 89.1 | 86.3 | 58.2 | 82.9
Source Only | ViT (Deit) | 97.7 | 48.1 | 86.6 | 61.6 | 78.1 | 63.4 | 94.7 | 10.3 | 87.7 | 47.7 | 94.4 | 35.5 | 67.1
Source Only | ViT | 98.2 | 73.0 | 82.6 | 62.0 | 97.4 | 63.6 | 96.5 | 29.8 | 68.8 | 86.8 | 96.8 | 23.7 | 73.3
TVT - Baseline | ViT | 94.6 | 81.6 | 81.9 | 69.9 | 93.6 | 70.0 | 88.6 | 50.5 | 86.8 | 88.5 | 91.5 | 20.1 | 76.5
EUDA - BH (Ours) | ViT (DinoV2) | 99.5 | 78.1 | 90.6 | 58.1 | 98.5 | 98.5 | 97.8 | 63.4 | 79.8 | 97.3 | 98.2 | 37.1 | 83.2
TVT | ViT | 93.0 | 85.6 | 77.6 | 60.5 | 93.6 | 98.2 | 89.4 | 76.4 | 93.6 | 92.1 | 91.7 | 55.8 | 84.0
PMTrans | ViT | 98.2 | 92.2 | 88.1 | 77.0 | 97.4 | 95.8 | 94.0 | 72.1 | 97.1 | 95.2 | 94.6 | 51.0 | 87.7
CDTrans | ViT (Deit) | 97.1 | 90.5 | 82.4 | 77.5 | 96.6 | 96.1 | 93.6 | 88.6 | 97.9 | 86.9 | 90.3 | 62.8 | 88.4

TABLE IV: Comparison of our best model with other methods on the DomainNet dataset. The best performances and our model are marked in bold. Note that our model achieved these results with about 99.7% fewer learnable parameters than the best SOTA. Each sub-table reports one method; rows are source domains, columns are target domains.

MCD | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 15.4 | 25.5 | 3.3 | 44.6 | 31.2 | 24.0
inf | 24.1 | - | 24.0 | 1.6 | 35.2 | 19.7 | 20.9
pnt | 31.1 | 14.8 | - | 1.7 | 48.1 | 22.8 | 23.7
qdr | 8.5 | 2.1 | 4.6 | - | 7.9 | 7.1 | 6.0
rel | 39.4 | 17.8 | 41.2 | 1.5 | - | 25.2 | 25.0
skt | 37.3 | 12.6 | 27.2 | 4.1 | 34.5 | - | 23.1
Avg. | 28.1 | 12.5 | 24.5 | 2.4 | 34.1 | 21.2 | 20.5

SWD | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 14.7 | 31.9 | 10.1 | 45.3 | 36.5 | 27.7
inf | 22.9 | - | 24.2 | 2.5 | 33.2 | 21.3 | 20.0
pnt | 33.6 | 15.3 | - | 4.4 | 46.1 | 30.7 | 26.0
qdr | 15.5 | 2.2 | 6.4 | - | 11.1 | 10.2 | 9.1
rel | 41.2 | 18.1 | 44.2 | 4.6 | - | 31.6 | 27.9
skt | 44.2 | 15.2 | 37.3 | 10.3 | 44.7 | - | 30.3
Avg. | 31.5 | 13.1 | 28.8 | 6.4 | 36.1 | 26.1 | 23.6

BNM | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 12.1 | 33.1 | 6.2 | 50.8 | 40.2 | 28.5
inf | 26.6 | - | 28.5 | 2.4 | 38.5 | 18.1 | 22.8
pnt | 39.9 | 12.2 | - | 3.4 | 54.5 | 36.2 | 29.2
qdr | 17.8 | 1.0 | 3.6 | - | 9.2 | 8.3 | 8.0
rel | 48.6 | 13.2 | 49.7 | 3.6 | - | 33.9 | 29.8
skt | 54.9 | 12.8 | 42.3 | 5.4 | 51.3 | - | 33.3
Avg. | 37.6 | 10.3 | 31.4 | 4.2 | 40.9 | 27.3 | 25.3

CGDM | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 16.9 | 35.3 | 10.8 | 53.5 | 36.9 | 30.7
inf | 27.8 | - | 28.2 | 4.4 | 48.2 | 22.5 | 26.2
pnt | 37.7 | 14.5 | - | 4.6 | 59.4 | 33.5 | 30.0
qdr | 14.9 | 1.5 | 6.2 | - | 10.9 | 10.2 | 8.7
rel | 49.4 | 20.8 | 47.2 | 4.8 | - | 38.2 | 32.0
skt | 50.1 | 16.5 | 43.7 | 11.1 | 55.6 | - | 35.4
Avg. | 36.0 | 14.0 | 32.1 | 7.1 | 45.5 | 28.3 | 27.2

MDD | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 20.5 | 40.7 | 6.2 | 52.5 | 42.1 | 32.4
inf | 33.0 | - | 33.8 | 2.6 | 46.2 | 24.5 | 28.0
pnt | 43.7 | 20.4 | - | 2.8 | 51.2 | 41.7 | 32.0
qdr | 18.4 | 3.0 | 8.1 | - | 12.9 | 11.8 | 10.8
rel | 52.8 | 21.6 | 47.8 | 4.2 | - | 41.2 | 33.5
skt | 54.3 | 17.5 | 43.1 | 5.7 | 54.2 | - | 35.0
Avg. | 40.4 | 16.6 | 34.7 | 4.3 | 43.4 | 32.3 | 28.6

SCDA | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 18.6 | 39.3 | 5.1 | 55.0 | 44.1 | 32.4
inf | 29.6 | - | 34.0 | 1.4 | 46.3 | 25.4 | 27.3
pnt | 44.1 | 19.0 | - | 2.6 | 56.2 | 42.0 | 32.8
qdr | 30.0 | 4.9 | 15.0 | - | 25.4 | 19.8 | 19.0
rel | 54.0 | 22.5 | 51.9 | 2.3 | - | 42.5 | 34.6
skt | 55.6 | 18.5 | 44.7 | 6.4 | 53.2 | - | 35.7
Avg. | 42.6 | 16.7 | 37.0 | 3.6 | 47.2 | 34.8 | 30.3

CDTrans | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 29.4 | 57.2 | 26.0 | 72.6 | 58.1 | 48.7
inf | 57.0 | - | 54.4 | 12.8 | 69.5 | 48.4 | 48.4
pnt | 62.9 | 27.4 | - | 15.8 | 72.1 | 53.9 | 46.4
qdr | 44.6 | 8.9 | 29.0 | - | 42.6 | 28.5 | 30.7
rel | 66.2 | 31.0 | 61.5 | 16.2 | - | 52.9 | 45.6
skt | 69.0 | 29.6 | 59.0 | 27.2 | 72.5 | - | 51.5
Avg. | 59.9 | 25.3 | 52.2 | 19.6 | 65.9 | 48.4 | 45.2

EUDA - BS (Ours) | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 35.3 | 59.9 | 19.9 | 72.8 | 63.5 | 50.3
inf | 64.9 | - | 57.3 | 15.0 | 72.4 | 56.9 | 53.3
pnt | 66.3 | 34.0 | - | 15.5 | 73.0 | 60.8 | 49.9
qdr | 47.8 | 17.1 | 31.4 | - | 41.7 | 38.0 | 35.2
rel | 71.0 | 37.8 | 65.4 | 17.2 | - | 61.8 | 50.6
skt | 72.5 | 35.7 | 62.3 | 19.2 | 72.4 | - | 52.4
Avg. | 64.5 | 32.0 | 55.3 | 17.4 | 66.5 | 56.2 | 48.7

PMTrans | clp | inf | pnt | qdr | rel | skt | Avg.
clp | - | 34.2 | 62.7 | 32.5 | 79.3 | 63.7 | 54.5
inf | 67.4 | - | 61.1 | 22.2 | 78.0 | 57.6 | 57.3
pnt | 69.7 | 33.5 | - | 23.9 | 79.8 | 61.2 | 53.6
qdr | 54.6 | 17.4 | 38.9 | - | 49.5 | 41.0 | 40.3
rel | 74.1 | 35.3 | 70.0 | 25.4 | - | 61.1 | 53.2
skt | 73.8 | 33.0 | 62.6 | 30.9 | 77.5 | - | 55.6
Avg. | 67.9 | 30.7 | 59.1 | 27.0 | 72.8 | 56.9 | 52.4

TABLE V: Performance of our BS model with different λ values (0.3, 0.5, and 0.7) on the Office-Home dataset across various domain combinations. The results confirm that λ = 0.7 consistently yields the highest classification accuracy, underscoring its effectiveness in balancing CE and MMD loss for optimal domain adaptation.

λ | A→C | A→P | A→R | C→A | C→P | C→R | P→A | P→C | P→R | R→A | R→C | R→P | Avg.
0.3 | 73.4 | 83.6 | 86.6 | 79.0 | 84.1 | 84.1 | 74.9 | 67.9 | 85.4 | 81.6 | 75.9 | 90.3 | 80.6
0.5 | 73.5 | 83.6 | 87.2 | 78.9 | 84.6 | 85.0 | 74.4 | 67.8 | 85.4 | 81.4 | 76.4 | 90.4 | 80.7
0.7 | 73.9 | 84.0 | 86.9 | 79.4 | 85.1 | 85.5 | 74.5 | 68.6 | 85.5 | 82.2 | 75.9 | 90.5 | 81.0

TABLE VI: The number of learnable parameters for various domain adaptation models.

Model | # Learnable Parameters
TVT (Base) | 85.8M
PMTrans (ViT) | 87.8M
PMTrans (Swin) | 90.0M
PMTrans (Deit) | 87.3M
CDTrans (Base) | 85.8M
EUDA - BS | 0.3M
EUDA - BB | 4.4M
EUDA - BL | 14.3M
EUDA - BH | 51.1M
EUDA - LS | 0.4M
EUDA - LB | 5.0M
EUDA - LL | 15.6M
EUDA - LH | 53.2M
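The parameter budgets in Table VI follow largely from the bottleneck widths alone. The sketch below counts weights and biases of a fully connected stack; the feature dimensions (768 for the base and 1024 for the large DINOv2) are assumptions based on the standard ViT-B/ViT-L embedding sizes, and the counts exclude normalization layers, so they land slightly below the table's figures.

```python
# Hypothetical layer widths per bottleneck size, following the naming in the text.
BOTTLENECKS = {
    "S": [256],
    "B": [2048, 1024, 512, 256],
    "L": [4096, 2048, 1024, 512, 256],
    "H": [8192, 4096, 2048, 1024, 512, 256],
}
FEATURE_DIM = {"B": 768, "L": 1024}  # assumed DINOv2 base/large embedding sizes

def fc_params(in_dim, widths):
    """Weights + biases of a stack of fully connected layers."""
    total, prev = 0, in_dim
    for w in widths:
        total += prev * w + w
        prev = w
    return total

def eu_config_params(extractor, bottleneck, num_classes=65):
    """Trainable parameters of bottleneck + linear classifier (Office-Home default)."""
    widths = BOTTLENECKS[bottleneck]
    return fc_params(FEATURE_DIM[extractor], widths) + widths[-1] * num_classes + num_classes
```

For example, `eu_config_params("B", "B")` evaluates to 4,345,921 (about 4.3M), in line with the 4.4M reported for EUDA-BB; the remainder is presumably the normalization layers mentioned in the text.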
We used a naming convention in XY format for clarity, where X represents the feature extractor size (B for base and L for large) and Y the bottleneck size. For instance, BB indicates that both the feature extractor and bottleneck are base size. To optimize domain alignment and classification accuracy, we experimented with different values of the hyperparameter λ; based on the experiments conducted in Section IV-G1, the value of 0.7 was selected for further experiments. Adjustments were made to batch sizes and learning rates according to the model size to ensure computational efficiency on a 1080Ti GPU, with batch sizes set to 32, 16, and 8, and learning rates starting at 3e-2 and gradually decreasing.

E. Comparison with the Baseline Models

Performance. Tables I, II, III and IV show the accuracy rates of EUDA and baseline models on Office-Home, Office-31, VisDA-2017 and DomainNet, respectively. As can be seen, our model consistently outperformed all ResNet-based models across all datasets. Specifically, for the Office-Home dataset, our LL model surpassed CDTrans and TVT while delivering comparable results to PMTrans. For the Office-31 dataset, our model exceeded the performance of TVT and matched that of CDTrans and PMTrans. In the case of the VisDA-2017 dataset in Table III, our model performed closely to other SOTA models. For the DomainNet dataset, our findings were particularly striking. Despite the large number of classes the dataset contains, our smallest model configuration, the BS, outperformed most of the SOTA models, including CDTrans, and delivered results comparable to PMTrans.

Learnable parameters. Table VI compares the number of learnable parameters of our proposed EUDA model with baseline models. EUDA requires significantly fewer trainable parameters compared to ViT-based baseline models while still achieving competitive performance.
Specifically, our best model uses approximately 83% fewer parameters for the Office-Home and Office-31 datasets, 43% fewer parameters for the VisDA-2017 dataset, and an impressive 99.7% fewer parameters for the DomainNet dataset. This substantial reduction in model complexity highlights the efficiency of our approach, making it more suitable for deployment in resource-constrained environments without compromising accuracy. This indicates a higher efficiency in our modeling approach. EUDA employs a pre-trained DINOv2 model with frozen parameters as a feature extractor, i.e., there are no learnable parameters in this component, and learnability is confined to the bottleneck layers, the normalization layers associated with the feature extractor, and the classification heads. This streamlines the training process and reduces computational overhead, making EUDA particularly suitable for applications where resource efficiency is critical. This efficiency, combined with the demonstrated effectiveness of our model in adapting to various domains, underscores the practical advantages of our approach in real-world settings.

F. Ablation Study

We test the effectiveness of the SDAL loss on the performance of the EUDA model. To achieve this, we remove the MMD loss from Eq. (1), i.e., the model only relies on the source domain and excludes learning from the target domain data. In Table VII, we assessed the performance of our BS and BL configurations on the Office-Home dataset using a source-only approach. The results clearly show that incorporating SDAL led to 1.2% and 3.5% improvements in the BS and BL configurations, respectively. Similarly, on the Office-31 dataset (see Table VIII), our BB configuration improved by +0.4, and BL improved by +0.6 when using MMD loss. For the VisDA-2017 dataset (see Table IX), implementing SDAL with our BB and BL models performed approximately 8.1% and 7.4% better, respectively, than only using CE loss.
TABLE VII: The performance of our models on the Office-Home dataset, illustrating the superior results achieved with the large feature extractor, which enhances the model's capability to handle complex image data effectively.

Model | A→C | A→P | A→R | C→A | C→P | C→R | P→A | P→C | P→R | R→A | R→C | R→P | Avg.
BS - Source Only | 70.2 | 83.5 | 86.9 | 79.8 | 85.0 | 85.8 | 73.7 | 64.6 | 85.5 | 81.4 | 70.6 | 90.7 | 79.8
BS | 73.9 | 84.0 | 86.9 | 79.4 | 85.1 | 85.5 | 74.5 | 68.6 | 85.5 | 82.2 | 75.9 | 90.5 | 81.0
BB | 75.1 | 84.9 | 87.0 | 81.4 | 85.5 | 86.2 | 75.8 | 70.1 | 86.7 | 84.8 | 77.6 | 90.9 | 82.2
BL - Source Only | 69.6 | 81.6 | 85.9 | 78.7 | 84.9 | 85.0 | 71.0 | 61.0 | 83.6 | 82.3 | 70.2 | 89.9 | 78.6
BL | 75.5 | 84.0 | 87.3 | 80.7 | 85.1 | 86.0 | 78.5 | 69.9 | 86.4 | 84.4 | 77.1 | 90.8 | 82.1
BH | 74.4 | 84.4 | 86.3 | 80.2 | 84.0 | 85.7 | 75.6 | 69.9 | 86.3 | 83.9 | 77.4 | 90.5 | 81.6
LL | 80.6 | 84.9 | 88.4 | 85.2 | 88.0 | 88.6 | 76.6 | 77.4 | 86.7 | 87.7 | 82.5 | 92.8 | 84.9
LH | 79.8 | 87.4 | 87.7 | 85.1 | 87.9 | 88.2 | 79.4 | 75.4 | 86.0 | 87.6 | 82.7 | 92.5 | 84.9

TABLE VIII: Performance results of our different model configurations on the Office-31 dataset.

Model | A→W | D→W | W→D | A→D | D→A | W→A | Avg.
BB - Source Only | 93.0 | 99.5 | 99.8 | 94.4 | 81.8 | 80.5 | 91.5
BB | 94.9 | 99.4 | 100.0 | 95.2 | 80.6 | 81.1 | 91.9
BL - Source Only | 93.6 | 99.2 | 99.8 | 93.0 | 80.3 | 79.4 | 90.9
BL | 94.6 | 99.4 | 100.0 | 93.6 | 80.2 | 81.0 | 91.5
LL | 95.3 | 100.0 | 100.0 | 93.4 | 80.5 | 82.9 | 92.0

G. Sensitivity Analysis

1) Effect of λ: Table V shows the impact of λ in Eq. (1) on the BS configuration for the Office-Home dataset. In this experiment, we used three different values for λ: 0.3, 0.5, and 0.7. As can be seen, λ = 0.7 managed to produce the best results. The effectiveness of λ = 0.7 comes from its balanced approach that minimizes domain discrepancies through MMD loss while maintaining classification accuracy, ensuring robust performance across various domain shifts.

2) Performance of the EUDA Framework Under Various Configurations: In this experiment, we test our model in different configurations across various datasets. Our goal is to find which model best suits each dataset.
Our findings indicate that while more complex models perform better on complex datasets, our simpler models, which have significantly fewer trainable parameters, can also achieve comparable results. This flexibility allows users to adjust the model's complexity to match their datasets' specific requirements. This adaptive capability is a distinctive feature of our approach, providing a unique advantage by offering a scalable solution that adjusts to varying data complexities without sacrificing performance. Table VII demonstrates our model's performance on the Office-Home dataset with B and L feature extractors and S, B, L, and H bottleneck configurations. Our testing on Office-Home helped us identify optimal configurations, reducing the need for extensive trials on other datasets. The L feature extractor was notably effective due to its ability to handle the significant domain variance within the dataset, and it benefits from higher-dimensional features that capture more informative details. The LL configuration provides the best balance of complexity and performance. Table VIII shows the results on the Office-31 dataset using B and L feature extractors and B and L bottleneck configurations. It can be seen that the LL configuration produces the best results. Insights from the Office-Home tests informed the decision not to use the H bottleneck, as higher complexity had not resulted in improved performance in previous tests. For the VisDA-2017 dataset (see Table IX), the BH configuration stood out, particularly suited to managing the transition from simulation to reality. This confirms the benefit of a more complex bottleneck in handling extensive domain shifts between real and simulation data. This emphasizes the importance of matching architectural choices to specific dataset challenges. While we have conducted extensive testing across multiple configurations and datasets, time constraints limited the breadth of our experiments.
We therefore encourage researchers and practitioners to further explore various bottleneck configurations to meet the specific demands and complexities of their datasets, enabling customized solutions that optimally address their unique challenges.
V. CONCLUSION
In this paper, we highlighted the potential of using a self-supervised pre-trained ViT for UDA by introducing a versatile framework that maintains simplicity, efficiency, and adaptability to ensure its applicability in practical scenarios. Specifically, we leveraged DINOv2, a self-supervised learning method in ViTs, to extract general features, and employed a simple yet effective bottleneck of fully connected layers to refine the extracted features. Additionally, we utilized the MMD loss to effectively align the source and target domains. Our model produces comparable results to state-of-the-art UDA methods with significantly fewer trainable parameters. This makes our method particularly suitable for real-world applications, including on-edge devices. Our proposed framework demonstrates promising results, achieving top-tier performance with 43% to 99.7% fewer trainable parameters across benchmark datasets compared to other methods. In our future research, we will explore additional UDA approaches based
TABLE IX: Performance results of our different model configurations on the VisDA-2017 dataset.
Model plane bcycl bus car house knife mcycl person plant sktbrd train truck Avg.
BB - Source Only 97.9 79.6 91.3 56.1 88.5 65.9 96.2 22.0 74.4 92.9 94.3 33.7 74.4
BB 99.4 78.1 90.7 56.9 98.5 97.7 97.6 61.3 77.2 97.2 97.7 37.2 82.5
BL - Source Only 98.9 77.9 90.8 59.2 93.9 63.7 94.8 37.0 74.1 86.8 91.7 33.5 75.2
BL 99.5 77.5 91.0 55.9 98.3 98.0 97.5 62.5 78.4 97.3 98.0 36.9 82.6
BH 99.5 78.1 90.6 58.1 98.5 98.5 97.8 63.4 79.8 97.3 98.2 37.1 83.2
LL 99.9 72.6 91.1 55.6 97.8 98.8 97.7 50.8 61.9 98.6 98.6 39.4 80.2
on self-supervised pre-trained ViT backbones and expand the applications of UDA in fields such as Autonomous Vehicles and other demanding areas where UDA is crucial.
Ali Abedi (Graduate Student Member, IEEE) received his B.Sc. in Computer Software Engineering in 2019 and completed his M.Sc.
in Artificial Intelligence Engineering at the University of Isfahan in 2022. He has accumulated extensive academic and industrial experience in the fields of artificial intelligence and machine learning. Currently, he is pursuing a Ph.D. in Electrical and Computer Engineering at the University of Windsor. His primary research interests include deep learning, machine learning, and computer vision, with a particular focus on transformer architectures and vision transformers.
Q. M. Jonathan Wu (Senior Member, IEEE) received the Ph.D. degree in electrical engineering from the University of Wales, Swansea, U.K., in 1990. He was affiliated with the National Research Council of Canada for ten years beginning in 1995, where he became a Senior Research Officer and a Group Leader. He is currently a Professor with the Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, Canada. He has authored or coauthored more than 300 peer-reviewed papers in computer vision, image processing, intelligent systems, robotics, and integrated microsystems. His research interests include 3-D computer vision, active video object tracking and extraction, interactive multimedia, sensor analysis and fusion, and visual sensor networks. Prof. Wu held the Tier 1 Canada Research Chair in automotive sensors and information systems from 2005 to 2019. He is an Associate Editor for IEEE TRANSACTIONS ON CYBERNETICS, the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, the Journal of Cognitive Computation, and Neurocomputing. He was on technical program committees and international advisory committees for many prestigious conferences. He is a Fellow of the Canadian Academy of Engineering.
Ning Zhang (Senior Member, IEEE) is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Windsor, Canada.
He received the Ph.D. degree in Electrical and Computer Engineering from the University of Waterloo, Canada, in 2015. After that, he was a postdoctoral research fellow at the University of Waterloo and the University of Toronto, respectively. His research interests include connected vehicles, mobile edge computing, wireless networking, and machine learning. He is a Highly Cited Researcher. He serves as an Associate Editor of IEEE Transactions on Mobile Computing, IEEE Internet of Things Journal, IEEE Transactions on Cognitive Communications and Networking, and IEEE Systems Journal, and as a Guest Editor of several international journals, such as IEEE Wireless Communications, IEEE Transactions on Industrial Informatics, and IEEE Transactions on Intelligent Transportation Systems. He also serves/served as a TPC chair for IEEE VTC 2021 and IEEE SAGC 2020, a general chair for IEEE SAGC 2021, and a track chair for several international conferences and workshops. He received 8 Best Paper Awards from conferences and journals, such as IEEE Globecom and IEEE ICC.
Farhad Porpanah (Senior Member, IEEE) received the Ph.D. degree in computational intelligence from the University of Science Malaysia (USM), Malaysia, in 2015. He is currently a Postdoctoral Research Fellow with the Department of Electrical and Computer Engineering, Queen's University, Kingston, ON, Canada. Prior to his current position, he was an Associate Research Fellow with the Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, Canada. From 2019 to 2021, he held the position of Associate Research Fellow with the College of Mathematics and Statistics, Shenzhen University (SZU), Shenzhen, China. Before his tenure at SZU, he was a Postdoctoral Research Fellow with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China.
His research interests include machine learning and deep learning, driven by an emphasis on creating smart environments that improve the quality of life for the elderly population. | 4 | 1 | The EUDA framework utilizes a frozen DINOv2 feature extractor (self-supervised Vision Transformer) and incorporates a bottleneck of fully connected layers. Given the efficiency improvements stated (42% to 99.7% fewer parameters than prior ViT models), it is likely in the range of hundreds of millions of parameters, similar to the ViT models, which typically have around 150M parameters. Estimating a reasonable batch size (considering GPU VRAM limitations), training over 50 epochs with a moderate dataset size could yield around 4-8 hours of training time on a single GPU like an NVIDIA A100 or similar. The paper emphasizes computational efficiency and resource-limited environments, indicating that the model's design is effective enough to require less training time than standard ViT-based approaches. Additionally, there is no mention of distributed training, supporting the estimate of a single GPU setup. Therefore, this model can likely be trained in under 8 hours on a single GPU. | yes | Yes | CV | EUDA: An Efficient Unsupervised Domain Adaptation via Self-Supervised Vision Transformer | 2024-07-31 0:00:00 | https://github.com/a-abedi/euda | 1 | https://drive.usercontent.google.com/download?id=0B4IapRTv9pJ1WGZVd1VDMmhwdlE&export=download&authuser=0&resourcekey=0-gNMHVtZfRAyO_t2_WrOunA | 2000 steps × 2.05 sec/step = 4100 seconds ≈ 68 minutes ≈ 1 hour 8 minutes | https://drive.google.com/file/d/1woeCrW4aU_I6LUR6K2N7bh_uUPn5rkAK/view?usp=sharing | Yes | --Need to fix some line of code which i included in the colab file. |
WiGesture | CSI-BERT | [] | Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing | 2024-03-19T00:00:00 | https://arxiv.org/abs/2403.12400v1 | [
"https://github.com/rs2002/csi-bert"
] | {'Accuracy (% )': '93.94'} | [
"Accuracy (% )"
] | Given the following paper and codebase:
Paper: Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing
Codebase: https://github.com/rs2002/csi-bert
Improve the CSI-BERT model on the WiGesture dataset. The result
should improve on the following metrics: {'Accuracy (% )': '93.94'}. You must use only the codebase provided.
| Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing
Zijian Zhao∗†, Tingwei Chen∗, Fanyi Meng∗‡, Hang Li∗, Xiaoyang Li∗, Guangxu Zhu∗
∗Shenzhen Research Institute of Big Data
†School of Computer Science and Engineering, Sun Yat-sen University
‡School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen
Email: {zhaozj28}@mail2.sysu.edu.cn, {tingweichen,fanyimeng}@link.cuhk.edu.cn, {hangdavidli,lixiaoyang,gxzhu}@sribd.cn
Abstract—Despite the development of various deep learning methods for Wi-Fi sensing, package loss often results in noncontinuous estimation of the Channel State Information (CSI), which negatively impacts the performance of the learning models. To overcome this challenge, we propose a deep learning model based on Bidirectional Encoder Representations from Transformers (BERT) for CSI recovery, named CSI-BERT. CSI-BERT can be trained in a self-supervised manner on the target dataset without the need for additional data. Furthermore, unlike traditional interpolation methods that focus on one subcarrier at a time, CSI-BERT captures the sequential relationships across different subcarriers. Experimental results demonstrate that CSI-BERT achieves lower error rates and faster speed compared to traditional interpolation methods, even when facing high loss rates. Moreover, by harnessing the recovered CSI obtained from CSI-BERT, other deep learning models like Residual Network and Recurrent Neural Network can achieve an average increase in accuracy of approximately 15% in Wi-Fi sensing tasks. The collected dataset WiGesture and the code for our model are publicly available at https://github.com/RS2002/CSI-BERT.
Index Terms—Bidirectional Encoder Representations from Transformers, Adversarial Learning, Data Recovery, Channel State Information, Wi-Fi Sensing
I. INTRODUCTION
In recent years, wireless sensing is expected to be applied in ubiquitous signal-based scenarios [1], including gesture recognition [2], human motion identification [3], [4], and target location [5]. Note that a key issue confronting Wi-Fi sensing applications is the uneven temporal distribution of Wi-Fi Channel State Information (CSI) signals, even when a fixed sampling rate is set.
This issue is generally caused by packet loss. In practice, the receiver fails to successfully decode the packets because of factors such as weak signal strength, frequency interference, and hardware errors. Consequently, the number of data packets collected often falls below expectations, and the packet loss situation varies from second to second. This leads to unevenly distributed time-series data. Since machine learning methods typically require input signals to possess consistent dimensions, this inconsistency presents a significant trouble in model training. Unfortunately, previous researchers have rarely focused on the task of recovering lost packets, where most studies simply employ interpolation methods to maintain consistent data dimensions. Nevertheless, these methods are not specifically tailored for CSI and fail to consider the inherent relationship among transmitters, receivers, and subcarriers, potentially leading to significant discrepancies between the recovered data and the actual signal. To address this issue, we propose the use of a Deep Neural Network (DNN) as it excels at capturing information and relationships within signal data. In this paper, we aim to design a deep learning model that specifically leverages the unique characteristics of CSI to recover lost packets.
(Corresponding author: Guangxu Zhu)
Recently, Language Models (LMs) have demonstrated excellent performance in various fields, including image recognition, music analysis, and biology research. These models are predominantly based on the Transformer architecture [6].
Transformers utilize a bi-directional encoder to capture global information and an auto-regressive decoder to generate coherent data. Among these models, Bidirectional Encoder Representations from Transformers (BERT) [7] has shown exceptional performance in understanding tasks. BERT's bi-directional encoder structure and effective pre-training methods contribute to its success. Since wireless information sequences also exhibit contextual relationships like natural language, BERT can also help extract their inner features. Previous researchers [8]–[10] have explored the application of BERT in wireless sensing tasks. However, most of these studies directly convert continuous wireless signals into discrete tokens, resulting in significant information loss.
In this paper, we propose a CSI-BERT model, which leverages the powerful comprehension capabilities of BERT to recover the lost CSI data. In contrast to previous BERT-based methods, we introduce a novel Embedding Layer that directly processes continuous CSI along with its corresponding time information. This approach eliminates the need to convert wireless information into discrete tokens. For practical implementation, we further present a corresponding pre-training method for CSI data, which can be trained in a self-supervised manner using target incomplete CSI sequences. To enhance the realism of the recovered CSI, we also employ adversarial learning techniques.
As shown in Fig. 1, our experimental system is based on the ESP32S3, a lightweight Wi-Fi sensing device, which can establish connections with home routers and extract the CSI data. The collected dataset includes identity IDs and gestures. We conduct tests on this system under various scenarios where packet loss is deliberately induced. The experimental results demonstrate that CSI-BERT exhibits outstanding performance in both CSI recovery and sensing tasks.
In summary, the main contributions of this work are:
(1) We propose CSI-BERT, the first deep learning model for CSI recovery. Our framework also provides potential directions in several domains, including the application of Natural Language Processing (NLP) models to CSI, exploring pre-training methods for neural networks, and recovering multi-dimensional data.
(2) CSI-BERT can be trained exclusively on incomplete CSI data, effectively handling various packet loss rates. This capability holds significant practical implications.
(3) Experimental results demonstrate that CSI-BERT achieves lower recovery errors and faster recovery time compared to traditional interpolation methods. Moreover, it provides substantial improvements to other deep learning models like Residual Network (ResNet) [11] and Recurrent Neural Network (RNN) [12] in Wi-Fi sensing tasks. Additionally, CSI-BERT itself exhibits excellent performance in CSI sensing tasks.
Fig. 1: Workflow
II. RELATED WORK
BERT [7] was initially proposed in the field of NLP. Due to its bi-directional encoder structure and pre-trained knowledge, BERT has demonstrated excellent performance in various understanding tasks. BERT employs two pre-training tasks, namely Masked Language Model (MLM) and Next Sentence Prediction (NSP). Liu et al. [13] have shown that the model achieves better performance by removing the NSP task and utilizing dynamic masking in the MLM task. Specifically, they randomly replace 15% of the tokens with the [MASK] token during pre-training and train the model to predict the original tokens. In this work, we adopt a similar approach.
Fig. 2: Overview of the Proposed CSI-BERT Framework
In recent years, researchers have begun applying BERT in the field of wireless sensing [8] and communication [14].
However, unlike domains such as NLP and Computer Vision (CV), the field of wireless sensing lacks a sufficient amount of large datasets. Furthermore, the data in wireless sensing is heavily influenced by the environment and the collecting device, making it challenging to combine data from different datasets. Several studies have demonstrated that pre-training can enhance performance in downstream tasks, particularly in scenarios with limited training data [15], [16]. Consequently, pre-trained models such as BERT have been introduced to wireless sensing.
Previous researchers [9], [10] have predominantly focused on localization using BERT. Wang et al. [17] proposed a BERT-based method for constructing radio maps. In these works, they all transformed continuous Received Signal Strength Indicator (RSSI) measurements into discrete tokens to input the data into BERT, potentially resulting in the loss of valuable RSSI information. Additionally, their approach simply adapted BERT from NLP without making significant modifications. In contrast, this paper aims to modify the BERT structure to align with the characteristics of CSI. Our proposed CSI-BERT model can directly process continuous CSI data without any loss of information.
III. METHODOLOGY
A. Overview
The proposed CSI-BERT model is a BERT-based approach designed for CSI recovery and sensing, as illustrated in Fig. 2. In this approach, we leverage the CSI sequence $C = [c_1, c_2, ..., c_n]$ and the corresponding timestamps $T = [t_1, t_2, ..., t_n]$ as input. Each $c_i$ represents a flattened vector containing the amplitude and phase information of the CSI matrix at time $t_i$. Notably, the unit of $T$ is not a concern as it will be normalized by our model. The utilization of CSI-BERT involves three main phases: pre-training, recovering, and fine-tuning. During pre-training, the CSI sequence is randomly destructed, and the model is trained to recover the missing components in a self-supervised manner.
Subsequently, the recovery phase aims to restore the lost CSI information using two distinct strategies, namely "recover" and "replace". Finally, in the fine-tuning phase, specific tasks are employed to train the top layers of CSI-BERT for CSI sensing tasks. In the following sections, we provide a detailed exposition of the CSI-BERT architecture in Section III-B, followed by a comprehensive description of the training and utilization steps in Section III-C.
B. Model Structure
Fig. 2 illustrates the detailed structure of CSI-BERT. CSI-BERT is built upon the BERT [7] model, which is a bi-directional encoder based on the Transformer [6]. To improve the network's adaptation to the structure of CSI, we have designed novel bottom and top layers.
In the traditional BERT model, the embedding layer consists of token embedding, position embedding, and segment embedding. For the token embedding, considering that CSI is continuous, we replace the word embedding layer with a linear layer. Before the linear layer, we add a standardization layer as shown in Eq. (1), which directly affects the effectiveness of the model:

$$\mu^{(j)} = \frac{\sum_{i=1}^{n} c_i^{(j)}}{n}, \quad \sigma^{(j)} = \sqrt{\frac{\sum_{i=1}^{n} \left(c_i^{(j)} - \mu^{(j)}\right)^2}{n}}, \quad \mathrm{Standard}(c_i^{(j)}) = \frac{c_i^{(j)} - \mu^{(j)}}{\sigma^{(j)}}, \tag{1}$$

where $c_i^{(j)}$ represents the $j$-th dimension of $c_i$, and $\mu$ and $\sigma$ represent the mean and standard deviation, respectively. These values are important for the subsequent layers.
Since CSI does not have a similar concept of a phrase or sentence, we use a time embedding to replace the segment embedding, as shown in Eq. (2). To account for the varying time intervals between consecutive CSI samples, we use the time embedding to represent the absolute position, while the traditional position embedding represents the relative position:

$$\mathrm{TE}(t_i)^{(j)} = \begin{cases} \sin\left(\mathrm{Norm}(t_i)\,/\,10^{\frac{4j}{d}}\right), & j = 2k \\ \cos\left(\mathrm{Norm}(t_i)\,/\,10^{\frac{4(j-1)}{d}}\right), & j = 2k+1 \end{cases}, \quad \mathrm{Norm}(t_i) = \frac{t_i - \min(T)}{\max(T) - \min(T)}, \tag{2}$$

where $j$ represents the $j$-th dimension of $\mathrm{TE}(t_i)$, $k$ is a positive integer, and $d$ is the dimension of each CSI vector $c_i$.
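The standardization of Eq. (1) and the time embedding of Eq. (2) can be sketched in NumPy as follows. Variable and function names are ours; the sketch assumes $d$ is even and $\sigma^{(j)} \neq 0$:

```python
import numpy as np

def standardize(C):
    # Eq. (1): per-dimension standardization over the time axis.
    mu = C.mean(axis=0)
    sigma = C.std(axis=0)
    return (C - mu) / sigma, mu, sigma

def time_embedding(t, d):
    # Eq. (2): sinusoidal embedding of min-max-normalized timestamps.
    t = np.asarray(t, dtype=float)
    norm = (t - t.min()) / (t.max() - t.min())
    j = np.arange(d)
    # Exponent uses j for even dimensions and j - 1 for odd ones,
    # so sin/cos pairs share the same frequency (10^(4j/d) = 10000^(j/d)).
    expo = np.where(j % 2 == 0, j, j - 1) / d
    angle = norm[:, None] / (10_000.0 ** expo)[None, :]
    return np.where(j % 2 == 0, np.sin(angle), np.cos(angle))
```

At the earliest timestamp, `Norm(t_i) = 0`, so even dimensions are 0 and odd dimensions are 1, mirroring standard sinusoidal position encodings.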
During pre-training and fine-tuning, we use different top layers for CSI-BERT. In the pre-training phase, the top layer consists of a Recoverer, which is used to recover the destructed CSI and is also employed during the recovering phase, and a Discriminator, which discriminates whether the CSI is real or generated by CSI-BERT. For the Recoverer, we use a de-standardization layer to ensure that the output has a similar distribution to the input. This is achieved through the following equation:

$$\mathrm{De\text{-}Standard}\big(y_i^{(j)}\big) = \big(y_i^{(j)} + \mu^{(j)}\big) \cdot \sigma^{(j)}, \tag{3}$$

where $y$ represents the output of the linear layer in the Recoverer, and $\mu$ and $\sigma$ are calculated using Eq. (1).

As for the Discriminator, we employ a Gradient Reversal Layer (GRL) [18] before the final linear layer. The GRL reverses the gradient during back-propagation. The Discriminator aims to distinguish between real and generated CSI samples, but the bottom layers prevent it from classifying correctly. Through this adversarial approach, the generated CSI samples become more realistic.

Finally, during fine-tuning, we incorporate a self-attention layer after BERT to combine the features along the time dimension. Subsequently, a linear layer is used to make the final decision.

C. Key Phases

1) Pre-training Phase: In the pre-training phase of CSI-BERT, we use an approach similar to the MLM task in BERT [7], which involves filling the empty positions with [PAD] tokens, randomly masking tokens with [MASK] tokens, and training the model to recover them. However, due to the differences between continuous CSI and discrete natural language, we have made some modifications to adapt to the CSI data. In BERT, the [MASK] token is a fixed token, but in CSI-BERT, we replace the word embedding layer with a linear layer, which means the [MASK] token can take different values that affect the output.
To address this, we assign a random value to the [MASK] token, sampled from a Gaussian distribution as shown in Eq. (4), whose mean and standard deviation are calculated from the standardization layer. This random approach makes it more challenging for the model to identify the position of the [MASK] token, thereby improving its understanding of the CSI sequence structure. The [PAD] token, on the other hand, can be assigned a fixed value, since it is disregarded by the model with the assistance of the attention mask matrix:

$$[\mathrm{MASK}]_i^{(j)} \sim \mathcal{N}\big(\mu^{(j)}, \sigma^{(j)}\big). \tag{4}$$

For the pre-training phase, we follow a setting similar to RoBERTa [13], as shown in Fig. 3a, where the [PAD] token is used to fill the positions of missing CSI, and some non-[PAD] tokens are randomly replaced with the [MASK] token. In RoBERTa, the masking proportion is fixed at 15%. However, in practical scenarios, the loss rate of CSI can vary, so we introduce a random masking proportion ranging from 15% to 70%, which is re-drawn in each epoch.

Fig. 3: Process of Pre-training and Recovering. (a) Pre-training Phase. (b) Recovering Phase.

Next, we train the model to recover these [MASK] tokens and make the recovered CSI more realistic using the Discriminator. We design a mixed loss function consisting of five parts based on the features of the CSI, as shown in Eq. (5):

$$\begin{aligned}
L_1 &= \mathrm{MSE}(C, \hat{C}), \\
L_2 &= \mathrm{MSE}(\mu, \hat{\mu}), \\
L_3 &= \mathrm{MSE}(\sigma, \hat{\sigma}), \\
L_4 &= \mathrm{CrossEntropy}\big(\mathrm{Discriminator}(C), 0\big), \\
L_5 &= \mathrm{CrossEntropy}\big(\mathrm{Discriminator}(\hat{C}), 1\big),
\end{aligned} \tag{5}$$

where $\hat{C}$ represents the output of the Recoverer, and $\hat{\mu}$, $\hat{\sigma}$ represent the mean and standard deviation of $\hat{C}$ along the time dimension, respectively. The $L_1$ loss is the traditional loss function used in BERT [7], as it measures the accuracy of the output. However, in CSI recovery, the overall shape of the CSI is also important. Therefore, we utilize the $L_2$ and $L_3$ losses to account for the overall shape of the CSI by assessing its mean and standard deviation.
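The Gaussian [MASK] sampling of Eq. (4), the variable masking proportion, and the shape-aware losses $L_1$–$L_3$ of Eq. (5) can be sketched as follows. Function names are illustrative; the discriminator losses $L_4$/$L_5$ require a trainable classifier and are omitted here.

```python
import numpy as np

def mask_csi(C, rng, low=0.15, high=0.70):
    """Randomly mask a varying proportion of CSI time steps. Each masked
    step is replaced by a draw from N(mu^(j), sigma^(j)) per Eq. (4), so
    [MASK] is not a fixed, easily spotted token."""
    n, d = C.shape
    mu, sigma = C.mean(axis=0), C.std(axis=0)
    ratio = rng.uniform(low, high)        # re-drawn every epoch (15%-70%)
    mask = rng.random(n) < ratio          # which time steps to mask
    C_masked = C.copy()
    C_masked[mask] = rng.normal(mu, sigma, size=(int(mask.sum()), d))
    return C_masked, mask

def recovery_losses(C, C_hat):
    """L1-L3 of the mixed objective in Eq. (5): value MSE plus MSE on the
    per-dimension mean and std, which constrain the overall CSI shape."""
    l1 = np.mean((C - C_hat) ** 2)
    l2 = np.mean((C.mean(axis=0) - C_hat.mean(axis=0)) ** 2)
    l3 = np.mean((C.std(axis=0) - C_hat.std(axis=0)) ** 2)
    return l1, l2, l3
```

Unmasked positions keep their original values, mirroring how only [MASK]/[PAD] positions are altered before the model sees the sequence.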
Furthermore, we train the Discriminator to classify the input and generated CSI using cross entropy as $L_4$ and $L_5$. Additionally, we calculate all the loss functions a second time, focusing only on the [MASK] tokens, to ensure that the model prioritizes the recovery of the missing CSI.

2) Recovering Phase: The recovering phase is illustrated in Fig. 3b. First, all the [PAD] tokens are replaced by the [MASK] token to make the data more similar to the form used in pre-training. Then, the modified CSI is input into CSI-BERT. Here, we propose two recovery methods named "recover" and "replace". The "recover" method is similar to traditional interpolation methods that only fill the lost tokens, while the other tokens retain their original values. However, the Recoverer outputs a complete CSI sequence regardless of whether a token is [MASK], [PAD], or otherwise. With the assistance of the Discriminator, the output sequence also maintains coherence. Therefore, we can directly use the output sequence of the Recoverer as the recovered CSI. Since it is not identical to the actual CSI at each position, we refer to this method as "replace". The expressions for the "recover" and "replace" CSI are given in Eq. (6):

$$\begin{aligned}
\mathrm{CSI}_{\mathrm{replace}} &= \hat{C}, \\
\mathrm{CSI}_{\mathrm{recover}} &= (1 - \mathrm{IsPad}) \cdot C + \mathrm{IsPad} \cdot \hat{C}, \\
\mathrm{IsPad} &= \big(C == [\mathrm{PAD}]\big),
\end{aligned} \tag{6}$$

where $\mathrm{IsPad}$ indicates whether a position of the original CSI $C$ corresponds to the [PAD] token.

3) Fine-tuning Phase: During the fine-tuning phase, we adopt the approach commonly used for pre-trained models, where we freeze the bottom layers of CSI-BERT and only train the top layers for specific classification tasks.

IV. DATASET AND NUMERICAL ANALYSIS

A. Dataset

In our study, we utilized the ESP32S3 microcontroller and a home Wi-Fi network to collect data at a sampling rate of 100 Hz. The home Wi-Fi router served as the transmitter, while the ESP32S3, equipped with an external antenna, received CSI embedded in Ping Replies. This setup, shown in Fig. 1, was cost-effective and operated at a carrier frequency of 2.4 GHz.
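Looking back at the recovering phase, the "recover" and "replace" compositions of Eq. (6) amount to a simple masked blend of the original sequence and the Recoverer output. A minimal sketch (function and variable names are ours, not from the released code):

```python
import numpy as np

def recover_and_replace(C, C_hat, is_pad):
    """Eq. (6): 'replace' returns the model output everywhere, while
    'recover' keeps observed values and fills only the lost ([PAD])
    positions. is_pad: boolean (n,) mask marking lost packets."""
    csi_replace = C_hat
    csi_recover = np.where(is_pad[:, None], C_hat, C)
    return csi_recover, csi_replace
```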
Data collection took place in a cluttered conference room, with the transmitter and receiver positioned 1.5 meters apart. Eight volunteers, with an average age of 23.38 years, an average height of 1.75 m, and an average BMI of 23.07, participated in the study. They performed various gestures, such as moving left-right, forward-backward, and up-down, circling clockwise, clapping, and waving. Each action was recorded for one minute, resulting in 60 samples per action. To construct our training dataset, we divided the CSI data within each second into individual samples. Our analysis revealed an average loss rate of 14.51% in the collected CSI data. Across the 52 subcarriers, the maximum and minimum loss rates within each second were 70% and 1%, respectively.

B. Experiment Setup

We list our model configurations in Table I. We trained our model on an NVIDIA RTX 3090. During training, we observed that CSI-BERT occupied approximately 1300 MB of GPU memory.

TABLE I: Model Configurations: This table provides a detailed overview of our CSI-BERT in the following experiments. In this context, 'M' represents 'million' and 'K' represents 'kilo', which will remain consistent in the subsequent tables.

Configuration              | Our Setting
-------------------------- | -----------
Input Length               | 100
Input Dimension            | 52
Network Layers             | 4
Hidden Size                | 64
Inner Linear Size          | 128
Attn. Heads                | 4
Dropout Rate               | 0.1
Optimizer                  | Adam
Learning Rate              | 0.0005
Batch Size                 | 64
Total Number of Parameters | 2.11 M

C. Experiment Result

In this section, we evaluate the recovery performance of CSI-BERT using two kinds of metrics. First, we compare the Mean Squared Error (MSE), Mean Absolute Error (MAE), Symmetric Mean Absolute Percentage Error (SMAPE), and Mean Absolute Percentage Error (MAPE) between CSI-BERT and other interpolation methods, including Linear Interpolation, Ordinary Kriging, and Inverse Distance Weighting (IDW). To conduct this evaluation, we randomly delete 15% of the non-[PAD] data from the testing set of CSI-BERT, and then use different methods to fill in these gaps.
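The refill evaluation just described — deleting a fraction of non-[PAD] entries and scoring the refilled values against the ground truth — can be sketched as follows. SMAPE and MAPE follow their common textbook definitions, which is an assumption about the paper's exact formulas.

```python
import numpy as np

def masked_refill_metrics(C_true, C_filled, deleted):
    """Score a refill method only on the deliberately deleted entries.
    deleted: boolean mask of the removed entries (same shape as C_true)."""
    a, b = C_true[deleted], C_filled[deleted]
    mse = np.mean((a - b) ** 2)
    mae = np.mean(np.abs(a - b))
    smape = np.mean(2 * np.abs(a - b) / (np.abs(a) + np.abs(b)))
    mape = np.mean(np.abs(a - b) / np.abs(a))
    return {"MSE": mse, "MAE": mae, "SMAPE": smape, "MAPE": mape}
```

Restricting the metrics to the deleted entries is what makes the comparison fair: positions that were never removed are trivially correct for interpolation baselines.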
To ensure fairness, we calculate the aforementioned metrics only on the 15% of deleted CSI. In addition, we calculate the Fréchet Shape Similarity (FSS) [19], defined in Eq. (7), between the whole original CSI and the recovered result:

$$S(C, \hat{C}) = \frac{1}{d}\sum_{j=1}^{d} S\big(C^{(j)}, \hat{C}^{(j)}\big), \quad
S\big(C^{(j)}, \hat{C}^{(j)}\big) = \max\left\{ 1 - \frac{\inf_{\alpha,\beta}\,\max_{t\in[0,1]} D\big(C^{(j)}(\alpha(t)),\, \hat{C}^{(j)}(\beta(t))\big)}{\sqrt{\tfrac{1}{2}\,\|C^{(j)}\|\cdot\|\hat{C}^{(j)}\|} + \epsilon},\; 0 \right\}, \tag{7}$$

where $d$ is the dimension of the flattened CSI $c_i$, as described in Eq. (2). The functions $\alpha$ and $\beta$ are arbitrary continuous non-decreasing functions mapping from the interval $[0,1]$ to $[a,b]$, where $[a,b]$ represents the time range of the CSI. $D(\cdot,\cdot)$ denotes the Euclidean distance, and $\|C^{(j)}\|$ and $\|\hat{C}^{(j)}\|$ represent the lengths of the curves $C^{(j)}$ and $\hat{C}^{(j)}$, respectively. Moreover, we also compare the recovery time of the different methods. For fairness, we use only the CPU during this comparison.

The results are presented in Table II. It is evident that CSI-BERT demonstrates the best performance across all metrics. Notably, we observe that the "replace" method of CSI-BERT achieves a higher FSS compared to the "recover" method. This discrepancy may be attributed to the fact that the output of CSI-BERT exhibits inherent coherence, aided by the Discriminator. Additionally, to simulate scenarios with higher loss rates, we manually deleted some CSI packets, as shown in Fig. 4. It is worth mentioning that CSI-BERT maintains its superior performance even with increased data loss rates.

Then, we evaluate the quality of our recovered data and the interpolated data by inputting them into other models, including a shallow Feed-Forward Network (FFN), a shallow Convolutional Neural Network (CNN), RNN [12], Long Short-Term Memory (LSTM) [20], and ResNet [11], for two specific tasks: action classification and people identification. The results are presented in Table III. It is evident that the data recovered by CSI-BERT yields the highest improvement across most models.
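The curve-matching core of the FSS in Eq. (7) is the Fréchet distance [19]; on sampled CSI the continuous inf/max coupling is typically approximated by the discrete Fréchet distance, computable by dynamic programming. A sketch for 1-D curves (not the paper's implementation):

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Frechet distance between two 1-D curves sampled over time.
    ca[i, j] holds the best achievable 'leash length' when the walkers
    stand at p[i] and q[j], filled by dynamic programming."""
    n, m = len(p), len(q)
    ca = np.empty((n, m))
    ca[0, 0] = abs(p[0] - q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], abs(p[i] - q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], abs(p[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           abs(p[i] - q[j]))
    return ca[-1, -1]
```

The FSS of Eq. (7) would then normalize this distance per subcarrier curve and average over the $d$ dimensions.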
Furthermore, CSI-BERT itself demonstrates the best performance when the original data is used as input.

D. Ablation Study

In this section, we conduct an ablation experiment to demonstrate the effectiveness of the modifications we made to BERT [7]. We compare the recovery performance of the original BERT with our modified version, CSI-BERT, as shown in Fig. 5. We observe that the amplitude spectrum produced by the original BERT appears smooth across all subcarriers, indicating that it tends to map all tokens within each subcarrier to similar values. Although BERT can also achieve a relatively low MSE of 5.26, it fails to capture any valuable information.

V. CONCLUSION

In this paper, we present CSI-BERT, a novel model specifically designed for CSI recovery and sensing. CSI-BERT is the first model to be developed exclusively for CSI recovery. Our approach not only demonstrates the applicability of NLP models in the field of CSI but also leverages time information in CSI sensing, recovers multi-dimensional continuous data, and incorporates a novel pre-training method. Through extensive experimentation, we have observed that CSI-BERT achieves outstanding performance in CSI recovery. The recovered data obtained with CSI-BERT not only enhances the performance of other CSI sensing models but also showcases superior performance in CSI sensing tasks. Moreover, its fast recovery speed makes it suitable for real-time systems.

ACKNOWLEDGMENT

This work is supported by the Guangdong Major Project of Basic and Applied Basic Research (No. 2023B0303000001), the Major Key Project of PengCheng Laboratory under grant PCL2023AS1-2, the National Natural Science Foundation of China under Grant 62371313, the Guangdong Basic and Applied Basic Research Foundation under Grant 2022A1515010109, and the Internal Project Fund from Shenzhen Research Institute of Big Data under Grant J00120230001.

REFERENCES

[1] G. Zhu, Z. Lyu, X. Jiao, P. Liu, M. Chen, J. Xu, S. Cui, and P.
Zhang, "Pushing AI to wireless network edge: An overview on integrated sensing, communication, and computation towards 6G," Science China Information Sciences, vol. 66, no. 3, p. 130301, 2023.
[2] H. Abdelnasser, M. Youssef, and K. A. Harras, "WiGest: A ubiquitous WiFi-based gesture recognition system," in 2015 IEEE Conference on Computer Communications (INFOCOM), pp. 1472–1480, 2015.
[3] D. Wen, P. Liu, G. Zhu, Y. Shi, J. Xu, Y. C. Eldar, and S. Cui, "Task-oriented sensing, computation, and communication integration for multi-device edge AI," IEEE Transactions on Wireless Communications, 2023.

TABLE II: Recovery Error: The bold value indicates the best result, which remains consistent across subsequent tables.

Method               | MSE↓   | MAE↓   | SMAPE↓ | MAPE↓  | FSS↑                               | Time Cost (min)↓
CSI-BERT             | 1.7326 | 0.9413 | 0.0902 | 0.0945 | 0.9999 (replace), 0.9979 (recover) | 0.03
Linear Interpolation | 2.8294 | 1.2668 | 0.1248 | 0.1344 | 0.9841                             | 0.64
Ordinary Kriging     | 3.6067 | 1.4371 | 0.1627 | 0.1395 | 0.9936                             | 45.15
IDW                  | 2.4306 | 1.1854 | 0.1278 | 0.1167 | 0.9970                             | 3.30

Fig. 4: CSI Recovery Performance at Different Loss Rates (panels: (a) MSE, (b) MAE, (c) SMAPE, (d) MAPE, (e) FSS): The x-axis represents the CSI loss rate (excluding the real lost CSI), and the y-axis represents the different metrics between the real CSI and the CSI recovered by various methods. The line representing Ordinary Kriging is shorter than the others, indicating its failure at higher loss rates.
TABLE III: CSI Sensing Classification Performance: The number beside each model name represents the number of parameters. The bold value indicates the best result in each column, and the underlined value indicates the best result in each row within each task.

Task: Action Classification
Data                 | FFN (337K) | CNN (23K) | RNN [12] (33K) | LSTM [20] (133K) | ResNet [11] (11M) | CSI-BERT (2M)
Original Data        | 66.93%     | 55.72%    | 39.56%         | 11.97%           | 70.31%            | 76.91%
CSI-BERT recover     | 74.23%     | 59.39%    | 48.96%         | 22.92%           | 92.57%            | 71.87%
CSI-BERT replace     | 86.90%     | 61.51%    | 58.80%         | 52.36%           | 84.52%            | 79.54%
Linear Interpolation | 72.91%     | 58.35%    | 45.32%         | 49.09%           | 80.75%            | 74.55%
Ordinary Kriging     | 65.62%     | 57.55%    | 53.64%         | 50.00%           | 88.71%            | 74.27%
IDW                  | 40.17%     | 56.77%    | 48.70%         | 46.88%           | 80.32%            | 67.22%

Task: People Identification
Data                 | FFN (337K) | CNN (23K) | RNN [12] (33K) | LSTM [20] (133K) | ResNet [11] (11M) | CSI-BERT (2M)
Original Data        | 71.34%     | 71.14%    | 66.39%         | 21.09%           | 83.76%            | 93.94%
CSI-BERT recover     | 97.13%     | 80.60%    | 80.51%         | 35.18%           | 94.30%            | 95.05%
CSI-BERT replace     | 97.65%     | 79.18%    | 89.24%         | 24.22%           | 97.39%            | 95.83%
Linear Interpolation | 81.84%     | 70.88%    | 84.45%         | 26.83%           | 86.75%            | 97.92%
Ordinary Kriging     | 94.76%     | 85.38%    | 86.42%         | 21.61%           | 97.32%            | 95.83%
IDW                  | 83.22%     | 74.56%    | 88.54%         | 33.91%           | 94.27%            | 95.20%

Fig. 5: Amplitude Spectrum of Original CSI and Output of BERT and CSI-BERT: The blank areas in the original CSI represent the lost CSI.

[4] P. Liu, G. Zhu, S. Wang, W. Jiang, W. Luo, H. V. Poor, and S. Cui, "Toward ambient intelligence: Federated edge learning with task-oriented sensing, computation, and communication integration," IEEE Journal of Selected Topics in Signal Processing, vol. 17, no. 1, pp. 158–172, 2022.
[5] X. Li, F. Liu, Z. Zhou, G. Zhu, S. Wang, K. Huang, and Y. Gong, "Integrated sensing, communication, and computation over-the-air: MIMO beamforming design," IEEE Transactions on Wireless Communications, 2023.
[6] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[7] J. Devlin, M.-W. Chang, K. Lee, and K.
Toutanova, "BERT: Pre-training of deep bidirectional Transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[8] Q. Zhang, H. Zhu, P. Wang, E. Chen, and H. Xiong, "Hierarchical Wi-Fi trajectory embedding for indoor user mobility pattern analysis," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 7, no. 2, pp. 1–21, 2023.
[9] B. Guo, W. Zuo, S. Wang, W. Lyu, Z. Hong, Y. Ding, T. He, and D. Zhang, "WEPOS: Weak-supervised indoor positioning with unlabeled Wi-Fi for on-demand delivery," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 6, no. 2, pp. 1–25, 2022.
[10] X. Sun, H. Ai, J. Tao, T. Hu, and Y. Cheng, "BERT-ADLOC: A secure crowdsourced indoor localization system based on BLE fingerprints," Applied Soft Computing, vol. 104, p. 107237, 2021.
[11] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[12] L. R. Medsker and L. Jain, "Recurrent neural networks," Design and Applications, vol. 5, no. 64-67, p. 2, 2001.
[13] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "RoBERTa: A robustly optimized BERT pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[14] H. Wong and T. Luo, "Man-in-the-middle attacks on MQTT-based IoT using BERT based adversarial message generation," in KDD 2020 AIoT Workshop, 2020.
[15] S. Wang, M. Khabsa, and H. Ma, "To pretrain or not to pretrain: Examining the benefits of pretraining on resource rich tasks," arXiv preprint arXiv:2006.08671, 2020.
[16] P. Ramachandran, P. J. Liu, and Q. V. Le, "Unsupervised pretraining for sequence to sequence learning," arXiv preprint arXiv:1611.02683, 2016.
[17] Z. Wang, Q. Kong, B. Wei, L. Zhang, and A.
Tian, "Radio map construction based on BERT for fingerprint-based indoor positioning system," EURASIP Journal on Wireless Communications and Networking, vol. 2023, no. 1, pp. 1–18, 2023.
[18] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. March, and V. Lempitsky, "Domain-adversarial training of neural networks," Journal of Machine Learning Research, vol. 17, no. 59, pp. 1–35, 2016.
[19] T. Eiter and H. Mannila, "Computing discrete Fréchet distance," 1994.
[20] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. | 4 | 1 | The CSI-BERT model has approximately 2.11 million parameters, far smaller in scale than models like BERT-base, which has around 110 million parameters. Given that the dataset comprises wireless Channel State Information (CSI) samples collected at 100 Hz, with an average loss rate of 14.51%, we estimate the dataset is manageable for a single GPU during the pre-training and fine-tuning phases. The batch size used is 64, which is within the capacity of an NVIDIA RTX 3090. Considering these aspects, we estimate around 4 hours for training on a single GPU. Training could yield results in less than 8 hours based on the model size and the 1300 MB memory consumption during training. | yes | Yes | Signal Processing | Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing | 2024-03-19 0:00:00 | https://github.com/rs2002/csi-bert | 1 | http://www.sdp8.net/Dataset?id=5d4ee7ca-d0b0-45e3-9510-abb6e9cdebf9 | Around 2 hours estimated. | https://colab.research.google.com/drive/1ijfudC_ZodlZSMvHtHgcLEvLwfWVF6-i?usp=sharing | Yes | -- Log in and download the dataset, or use the copy present inside the repo. |
Astock | SRL&Factors | [] | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05T00:00:00 | https://arxiv.org/abs/2403.02647v1 | [
"https://github.com/frinkleko/finreport"
] | {'Accuray': '69.48', 'F1-score': '69.28', 'Recall': '69.41', 'Precision': '69.54'} | [
"Accuray",
"F1-score",
"Recall",
"Precision"
] | Given the following paper and codebase:
Paper: FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model
Codebase: https://github.com/frinkleko/finreport
Improve the SRL&Factors model on the Astock dataset. The result
should improve on the following metrics: {'Accuray': '69.48', 'F1-score': '69.28', 'Recall': '69.41', 'Precision': '69.54'}. You must use only the codebase provided.
| FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model

Xiangyu Li* (65603605lxy@gmail.com, South China University of Technology), Xinjie Shen* (frinkleko@gmail.com, South China University of Technology), Yawen Zeng (yawenzeng11@gmail.com, ByteDance AI Lab), Xiaofen Xing† (xfxing@scut.edu.cn, South China University of Technology), Jin Xu† (jinxu@scut.edu.cn, South China University of Technology; Pazhou Lab)

ABSTRACT

The task of stock earnings forecasting has received considerable attention due to the demand from investors in real-world scenarios. However, compared with financial institutions, it is not easy for ordinary investors to mine factors and analyze news. On the other hand, although large language models in the financial field can serve users in the form of dialogue robots, they still require users to have financial knowledge to ask reasonable questions. To improve the user experience, we aim to build an automatic system, FinReport, for ordinary investors to collect information, analyze it, and generate reports after summarizing. Specifically, our FinReport is based on financial news announcements and a multi-factor model to ensure the professionalism of the report. FinReport consists of three modules: a news factorization module, a return forecasting module, and a risk assessment module. The news factorization module involves understanding news information and combining it with stock factors, the return forecasting module aims to analyze the impact of news on market sentiment, and the risk assessment module is adopted to control investment risk. Extensive experiments on real-world datasets verify the effectiveness and explainability of our proposed FinReport. Our codes and datasets are available at https://github.com/frinkleko/FinReport.

CCS CONCEPTS
• Computing methodologies → Artificial intelligence.
KEYWORDS

Quantitative Finance, Stock Earnings Forecasting, Semantic Understanding, Large Language Model

*Both authors contributed equally to the paper. †Corresponding authors.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. WWW '24 Companion, May 13–17, 2024, Singapore, Singapore. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0172-6/24/05. https://doi.org/10.1145/3589335.3648330

Figure 1: Examples of the manual search, Chat with LLMs, and our FinReport solution.

ACM Reference Format: Xiangyu Li, Xinjie Shen, Yawen Zeng, Xiaofen Xing, and Jin Xu. 2024. FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model. In Companion Proceedings of the ACM Web Conference 2024 (WWW '24 Companion), May 13–17, 2024, Singapore, Singapore. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3589335.3648330

1 INTRODUCTION

The endeavor to forecast the ever-changing stock market has always been a captivating task for investors. In particular, its randomness, volatility, and the behavioral diversity of participants pose significant challenges. Among these, stock data and news data are crucial factors that influence investors' decision-making. 1) Stock data refers to numerical data that characterizes stocks as time series. For instance, [4] proposed a novel data quantization and fusion approach for stock time series prediction.
2) News data is unstructured text that contains complex, time-sensitive information. Pioneers are utilizing natural language processing (NLP) tools [29–31] to comprehend news and facilitate stock forecasting [22, 32]. For example, [3] extracted keywords via TF-IDF and concatenated them with the index. Sawhney et al. [23] focused on social media text and correlations among stocks, proposing a hierarchical temporal fusion approach to process index and news data. While the field of stock forecasting is thriving, unfortunately not everyone can benefit from it.

Figure 2: The overall architecture of our FinReport for explainable stock returns forecasting.

In fact, well-known investment institutions can build complex and comprehensive systems for quantitative and public-opinion analysis to sensitively detect market changes. However, this is not an easy task for the majority of ordinary investors. As shown in Fig. 1(a), manually collecting news, reading financial reports, and building small factor analysis models are common strategies that are tedious and unstable. On the other hand, with the emergence of ChatGPT [20], large language models (LLMs) in the financial field, like BloombergGPT, can serve users in the form of dialogue robots, as shown in Fig. 1(b). However, this requires users to have financial knowledge to ask reasonable questions. Therefore, in this paper, we aim to build an automatic system for ordinary investors to collect information, analyze it, and generate reports after summarizing. As shown in Fig. 1(c), the user only needs to input the ticker symbol or news, and our system is able to generate a customized report (e.g., stock returns forecasts) for the user, which is more convenient and reliable.

To obtain a comprehensive report, we focus on two main components: stock data and news data.
Previous studies have attempted to directly combine news and stock factors, disregarding the fundamental differences between the continuity and density of stocks and the discontinuity and sparseness of news. Moreover, the impact of news on the stock market is subject to "chronological deviation," meaning that news events may produce varying reactions at different timestamps. Therefore, capturing the dynamic relationship between news events and the market requires: 1) News factorization, which involves understanding news information and combining it with stock factors. 2) Return forecasting, which includes assessing the impact of news on market sentiment and the impact of news announcements on a company's stock. 3) Risk assessment, which is crucial for users to effectively participate in financial markets. Ultimately, our system provides users with effective investment recommendations based on these elements.

To address the aforementioned challenges, we propose FinReport, an explainable model for reporting stock returns. Specifically, our FinReport is based on financial news announcements and a multi-factor model to ensure the professionalism of the report. FinReport consists of three modules: 1) a News Factorization Module, which combines semantic role labeling and a semantic dependency parsing graph to comprehend news information and then combines it with stock multi-factors to predict the news announcement classification; 2) a Return Forecasting Module, which utilizes the Fama-French 5-factor model to analyze the sentiment impact of the news on the market; 3) a Risk Assessment Module, which employs the EGARCH model to build a VaR risk assessment of the stock's risk level based on historical fluctuation information. Finally, the above information is fed into our LLM to generate a readable report. Extensive experiments demonstrate that our model achieves a higher ROI and Sharpe Ratio.
The main contributions are summarized as follows:
• To the best of our knowledge, this is the first work to introduce a financial report model that automatically collects information, analyzes it, and summarizes.
• We propose three sub-modules to respectively address news factorization, return forecasting, and risk assessment, which make the reports more reliable.
• Extensive experiments conducted on real-world datasets demonstrate the effectiveness and explainability of our solution.

2 RELATED WORK

2.1 Quantitative Finance

Quantitative finance is an applied mathematics field that focuses on financial markets; it involves the use of mathematical and statistical methods to analyze financial markets [14, 15]. The field of quantitative finance can be divided into two main branches: derivatives pricing [12, 24] and risk analysis [17]. Derivatives pricing is concerned with the pricing of financial instruments, such as options, while risk analysis focuses on models and techniques for measuring and managing the risk of complex financial instruments, such as credit derivatives. In order to quantify the risk of financial investments more accurately, the VaR (Value at Risk) [26] method is commonly employed. It estimates the maximum loss that an asset or investment portfolio may suffer over a given period of time.

2.2 News Semantic Understanding

In the realm of news semantic understanding, pivotal roles are played by Semantic Role Labeling (SRL) [16] and Semantic Dependency Parsing Graph (SDPG) [1] techniques. SRL aims to identify predicates within sentences and label the associated semantic roles, such as agents and patients. In parallel, SDPG transforms sentences into a structured form represented as directed graphs, where arcs link pairs of words.
Since the introduction of standard datasets [19], numerous SDPG methods have been explored on the basis of syntactic analysis [10, 18, 21, 27]. Decoding methods for SDPG include integer linear programming [2] and transition-based approaches introducing novel shift-reduce automata [33]. By employing semantic understanding techniques, we can extract semantic roles and semantic dependency relationships within sentence components, enabling more detailed and profound semantic analysis.

3 PROPOSED METHOD

In this paper, we aim to automatically generate a comprehensive report including news analysis and quantitative understanding, so as to lower the threshold for ordinary users. Towards this goal, our FinReport consists of three components, as shown in Fig. 2: a news factorization module, a return forecasting module, and a risk assessment module. Finally, based on the return forecasting and risk assessment results obtained from these components, we automatically generate readable and explainable reports with the help of LLMs.

3.1 Preliminaries

The stock return forecasting task is defined as predicting future returns based on historical data. Stock data includes the opening price, closing price, highest price, lowest price, and trading volume, together with related news. The news data mainly consists of news titles containing the stock name, and can be "None", because news does not occur every day.

3.2 News Factorization Module

Financial news and stock factors are vital parts of market information and are crucial for report quality. In this section, we factorize news by extracting information and merging it with quantitative factors, which are then used to predict return classifications. For financial news, we consider both the overall semantic information and the role information within the news sentence. Specifically, we obtain the semantic information via a pre-trained textual encoder (i.e., RoBERTa), while the role information is extracted through SRL and SDPG.
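The role-feature extraction described above can be sketched as pooling encoder token embeddings over each SRL span. This is an illustrative sketch only: the span indices would come from an SRL tool, and the function name is ours, not from the FinReport codebase.

```python
import numpy as np

def pool_roles(token_emb, role_spans):
    """Mean-pool per-token encoder embeddings over each SRL role span.
    token_emb:  (num_tokens, dim) array from a textual encoder.
    role_spans: dict mapping role name ('V', 'A0', 'A1') to a
                half-open (start, end) token index span (illustrative)."""
    return {role: token_emb[s:e].mean(axis=0)
            for role, (s, e) in role_spans.items()}
```

Each role thus yields one fixed-size vector regardless of span length, which is what allows the role features of many news items to be stacked into a single matrix later.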
SRL annotates the semantic roles in the financial news, the verb (V), proto-agent (A0), and proto-patient (A1), as word embeddings, where $e_V^i$ represents the token index of $V$ in sentence $i$. Further, we utilize SDPG to construct a semantic dependency parsing graph. As shown in Fig. 3, every relationship is labeled with its grammatical category name. For simplicity, it is denoted as $\mathbf{X}_{AB}$, which represents the edge feature connecting two roles in the semantic dependency graph. Subsequently, we employ a pooling operation to aggregate the semantic roles and the dependency graph. For instance, the pooling can be formulated as follows:

$$\mathbf{e}_{SRL} = \mathrm{pooling}\{\mathbf{e}_{SRL}\}, \quad \mathbf{e}_{SDPG} = \mathrm{pooling}\{\mathbf{e}_{SDPG}\}, \tag{1}$$

Figure 3: The processing results of SRL and SDPG. (a) SRL processing results. (b) SDPG processing results. (Example news: "BGI Genomics won the bid for the non-invasive prenatal genetic testing service project for pregnant women in Hebei Province, with a total budget of 140 million yuan." SRL: A0 = BGI Genomics; V = won the bid; A1 = the non-invasive prenatal genetic testing service project for pregnant women in Hebei Province.)

where $\mathbf{e}_{SRL}$ and $\mathbf{e}_{SDPG}$ are the final representations of the financial news. Denoting the edge attributes in the semantic graph between the semantic roles as $\mathbf{G}_{VA_0}$, $\mathbf{G}_{VA_1}$, $\mathbf{G}_{A_0A_1}$, an input feature matrix $\mathbf{X}_n$ containing $N$ pieces of news can be formulated as follows:

$$\mathbf{X}_n =
\begin{bmatrix}
\mathbf{e}_V^1 & \dots & \mathbf{e}_V^N \\
\mathbf{e}_{A_0}^1 & \dots & \mathbf{e}_{A_0}^N \\
\mathbf{e}_{A_1}^1 & \dots & \mathbf{e}_{A_1}^N \\
\mathbf{G}_{VA_0}^1 & \dots & \mathbf{G}_{VA_0}^N \\
\mathbf{G}_{VA_1}^1 & \dots & \mathbf{G}_{VA_1}^N \\
\mathbf{G}_{A_0A_1}^1 & \dots & \mathbf{G}_{A_0A_1}^N
\end{bmatrix}. \tag{2}$$

For stock factors, which represent the intrinsic characteristics of the stock, such as size, value, and momentum, the representation is a matrix $\mathbf{X}_f$ as follows, where $f_m$ is the $m$th factor of the stock:

$$\mathbf{X}_f =
\begin{bmatrix}
f_1^1 & \dots & f_1^N \\
f_2^1 & \dots & f_2^N \\
\vdots & \ddots & \vdots \\
f_m^1 & \dots & f_m^N
\end{bmatrix}. \tag{3}$$

To this end, we concatenate the news and stock factors to obtain the complete factor matrix $\mathbf{X}$ as follows:

$$\mathbf{X} = \mathbf{W}_\alpha \odot \begin{bmatrix} \mathbf{X}_n \\ \mathbf{X}_f \end{bmatrix}, \tag{4}$$

where $\mathbf{W}_\alpha$ is an adaptive weight that balances the importance of the news and stock factors. With the aforementioned features, we use a 2-layer MLP and a softmax layer to predict the classification of returns:

$$\mathbf{y}_n = \mathrm{Softmax}\big(\mathrm{MLP}(\mathbf{X}[:, n])\big). \tag{5}$$

We follow the definition of the classification $\mathbf{y}_n$ in Zou et al. [34], which is mainly divided into three categories, positive, neutral, and negative:

$$E =
\begin{cases}
\text{positive} & \text{if } r \text{ is ranked in the top } 20\% \\
\text{neutral} & \text{if } r \text{ is ranked in the } 20\%\!\sim\!60\% \text{ band} \\
\text{negative} & \text{if } r \text{ is ranked in the bottom } 20\%,
\end{cases} \tag{6}$$

where $r$ is the ranking of one stock's return ratio in the market. This module is trained with a cross-entropy loss, which can be formulated as:

$$\mathcal{L}_{nf} = \mathrm{CrossEntropy}(\mathbf{y}, E). \tag{7}$$

3.3 Return Forecasting Module

In order to comprehensively analyze and predict the impact of news on stock prices, we introduce news factors on top of the Fama-French 5-factor model [11], leading to the creation of the FF5-News model. Within this model, we utilize the aforementioned News Factorization Module to obtain news classifications represented as $\mathbf{y}$, and incorporate specific stock trading data as inputs to the model. This process establishes the FF5-News factors, enabling a more comprehensive, multidimensional forecast of stock returns. The Fama-French 5-factor model offers multiple dimensions for predicting and analyzing the intricacies of the stock market, and is widely utilized in academia and the finance industry for research and investment decision-making. The model is expressed as follows:

$$R_{it} - R_{ft} = \alpha_i + \beta_i\,(R_{mt} - R_{ft}) + b_i\,SMB_t + h_i\,HML_t + r_i\,RMW_t + c_i\,CMA_t + \varepsilon_{it}, \tag{8}$$

where $R_{it}$ is the excess return of stock $i$ at time $t$, $R_{ft}$ is the risk-free rate at time $t$, $R_{mt}$ is the market return at time $t$, and $\alpha_i$ is the intercept of the regression model. $\beta_i$ is the market factor loading that measures the sensitivity of stock $i$ to the market factor.
$SMB_t$ is the size factor, $HML_t$ is the valuation factor, $RMW_t$ is the profitability factor, and $CMA_t$ is the investment factor. $b_i$, $h_i$, $r_i$, $c_i$ are the exposure coefficients of the corresponding factors, and $\varepsilon_{it}$ is the zero-mean random term representing the residual randomness of the stock. The goal of this model is to minimize $\alpha_i$. However, the Fama-French 5-factor model cannot sensitively capture the changes in stock returns caused by news announcements. Therefore, we introduce FF5-News, an integrated model that enhances precision and explanatory power by incorporating news factors. The model is expressed as follows:

$$R_{it} - R_{ft} = \alpha_i + \beta_i (R_{mt} - R_{ft}) + b_i SMB_t + h_i HML_t + r_i RMW_t + c_i CMA_t + m_i M_t + \varepsilon_{it}, \tag{9}$$

where $M_t$ is the news factor and $m_i$ is its exposure coefficient. According to the factor definitions in the Fama-French 5-factor model, we believe that news articles categorized $y$ as 'positive' within the News Factorization Module will yield higher returns than those categorized as negative. Therefore, we adopted the approach recommended by Fama and French in their paper [11], using the news classification $y$ obtained from the News Factorization Module as a profitability indicator in the news dimension. This successfully quantifies the impact of news on stock returns, resulting in a metric named $M_t$. By integrating $M_t$ with the Fama-French 5-factor model, the FF5-News model was established. For specific construction methods, please refer to Appendix B. Finally, through our FF5-News model, we can obtain a comprehensive return prediction in terms of market risk, size, valuation, profitability, investment style, and news effect, which has stronger explainability.

3.4 Risk Assessment Module

To provide a comprehensive FinReport, it is essential to evaluate and manage risks in addition to the returns dimension described above.
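The exposure coefficients in the FF5-News regression (Eqn. (9) above) can be estimated by ordinary least squares. The sketch below is an illustration only, not the paper's implementation: the factor series, coefficient values, and noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 250  # trading days

# Hypothetical factor series: market excess return, SMB, HML, RMW, CMA, and the news factor M_t.
factors = rng.normal(0.0, 0.01, size=(T, 6))
true_coef = np.array([1.1, 0.4, -0.2, 0.3, 0.1, 0.5])  # beta_i, b_i, h_i, r_i, c_i, m_i
alpha_true = 0.0002
excess_ret = alpha_true + factors @ true_coef + rng.normal(0.0, 0.002, T)

# Eqn. (9): regress the stock's excess return on an intercept plus the six factors.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
alpha_hat, exposures = coef[0], coef[1:]
print("estimated alpha:", round(float(alpha_hat), 5))
print("estimated exposures:", np.round(exposures, 2))
```

A small estimated alpha relative to its residual noise indicates the factors explain most of the excess return, which is the criterion the GRS test formalizes.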
To achieve this, we develop a VaR risk assessment system that supplements our model with a risk analysis dimension. To calculate VaR, we follow [26] and construct an EGARCH model to fit the time-varying volatility of the stock, which captures the volatility of financial time series, especially the heteroscedasticity of volatility, i.e., volatility clustering in the financial market. The conditional equation of the EGARCH(p, q) model is as follows:

$$\ln \sigma^2_t = \omega + \sum_{l=1}^{p} \alpha_l \left( |e_{t-l}| - \mathbb{E}|e_{t-l}| \right) + \sum_{j=1}^{q} \beta_j \ln \sigma^2_{t-j} + \sum_{k=1}^{q} \gamma_k e^2_{t-k}, \tag{10}$$

where $\sigma^2_t$ is the volatility (variance) at time $t$, $e_t$ is the residual term at time $t$, $\omega$ is the constant term of the model, and $\alpha_l$ is the weight of the residual. $\beta_j$ accounts for the autoregressive characteristics of the volatility, and $\gamma_k$ captures the ARCH effect of the volatility (the impact of volatility on itself). Subsequently, the volatility is derived via the fitted EGARCH model, and the VaR estimate is computed by combining it with the Z quantile of the confidence level (provided by the standard normal distribution). The VaR estimate represents the highest potential loss within a future horizon and is formulated as follows:

$$\mathrm{VaR} = \mu_t - \sigma_t \times Z_\alpha, \tag{11}$$

where $\mu_t$ is the expected return at time $t$, $\sigma_t$ is the volatility of the stock at time $t$, and $Z_\alpha$ is the Z quantile of the confidence level $\alpha$. The loss of the VaR module is calculated as:

$$\mathcal{L}_{var} = \frac{1}{N} \sum_{i=1}^{N} \left| \mathrm{VaR}_i - \mathrm{ActualVaR}_i \right|, \tag{12}$$

where $N$ is the number of stocks, $\mathrm{VaR}_i$ is the predicted VaR value of the $i$-th stock, and $\mathrm{ActualVaR}_i$ is the actual VaR value of the $i$-th stock. With the aforementioned VaR assessment, an estimated interval of the maximum loss of the stock is obtained to help our report system control risk.

3.5 Report Generation Based on LLM

In the aforementioned modules, we have obtained multidimensional stock return prediction analysis data and risk assessment data. To this end, we propose a report generation model based on a large language model (LLM).
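Once the conditional mean and volatility are available, the VaR estimate of Eqn. (11) is a single expression. The sketch below substitutes simple sample moments for the EGARCH-fitted values (a simplifying assumption; the paper fits an EGARCH model) and uses a synthetic daily return series.

```python
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0005, 0.02, 250)  # hypothetical daily return series

# Stand-in for the EGARCH-fitted conditional moments of Eqn. (10):
mu_t = daily_returns.mean()
sigma_t = daily_returns.std(ddof=1)

# Eqn. (11): VaR = mu_t - sigma_t * Z_alpha at confidence level alpha.
alpha = 0.95
z_alpha = NormalDist().inv_cdf(alpha)  # ~1.645 for the standard normal
var_95 = mu_t - sigma_t * z_alpha
print(f"95% one-day VaR: {var_95:.4f}")
```

The resulting value is the return threshold that should only be breached on the downside about 5% of the time, which is what the VaR Loss Coverage Rate metric later checks.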
Consequently, the final report can be generated in the following format:

$$\mathrm{report} = LLM(R, M, \mathrm{VaR}), \tag{13}$$

where $R$ is the aforementioned multidimensional stock earnings information. We use prompt $M$ for the LLM, which reads: "Based on multi-dimensional predictive information and risk assessment values, a financial analysis report will be generated, comprising four main sections: return forecasting,

FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model. WWW '24 Companion, May 13–17, 2024, Singapore, Singapore.

Table 1: Performance comparison of different models

| Model | Reference | Resource | Accuracy | F1 Score | Recall | Precision |
|---|---|---|---|---|---|---|
| StockNet [28] | ACL 2018 | News | 46.72 | 44.44 | 46.68 | 47.65 |
| HAN Stock [13] | WSDM 2018 | News | 57.35 | 56.61 | 57.20 | 58.41 |
| Bert Chinese [9] | ACL 2019 | News | 59.11 | 58.99 | 59.20 | 59.07 |
| ERNIE-SKEP [25] | ACL 2020 | News | 60.66 | 60.66 | 60.59 | 61.85 |
| XLNET Chinese [6] | EMNLP 2020 | News | 61.14 | 61.19 | 61.09 | 61.60 |
| RoBERTa WWM Ext [6] | EMNLP 2020 | News | 61.34 | 61.48 | 61.32 | 61.97 |
| RoBERTa WWM Ext [6] | EMNLP 2020 | News+Factors | 62.49 | 62.54 | 62.51 | 62.59 |
| Chinese Lert Large [7] | Arxiv 2022 | News | 64.37 | 64.30 | 64.31 | 64.34 |
| Chinese Lert Large [7] | Arxiv 2022 | News+Factors | 66.36 | 66.16 | 66.69 | 66.40 |
| Chinese Pert Large [8] | Arxiv 2022 | News | 65.09 | 65.03 | 65.07 | 65.02 |
| Chinese Pert Large [8] | Arxiv 2022 | News+Factors | 67.37 | 67.27 | 67.73 | 67.28 |
| Self-supervised SRLP [34] | FinNLP 2022 | Factors | 59.76 | 59.71 | 59.75 | 59.72 |
| Self-supervised SRLP [34] | FinNLP 2022 | News | 62.97 | 63.05 | 62.93 | 63.47 |
| Self-supervised SRLP [34] | FinNLP 2022 | News+Factors | 66.89 | 66.92 | 66.95 | 66.92 |
| Factors only | Ours | Factors | 63.74 | 63.66 | 63.71 | 63.67 |
| SRL & SDPG | Ours | News | 66.10 | 66.01 | 66.09 | 66.04 |
| SRL & Factors | Ours | News+Factors | 69.48 | 69.28 | 69.41 | 69.54 |
| SDPG & Factors | Ours | News+Factors | 73.12 | 72.97 | 72.96 | 73.04 |
| SRL & SDPG & Factors | Ours | News+Factors | 75.40 | 75.12 | 75.23 | 75.42 |

risk assessment, overall trend prediction, and summary. Among them, the return forecasting section is required to include predictive analyses in six dimensions: Market Factor, Size Factor, Valuation (BP) Factor, Profitability Factor, Investment Factor, and News Effect Factor.
The risk assessment section provides an estimation of the maximum potential loss, while the overall trend prediction outputs either 'Positive' or 'Negative' based on the overall profitability. The summary section includes a comprehensive analysis of the predictive information and risk assessment, offering an integrated evaluation of the investment potential of the stock."

4 EXPERIMENTS

4.1 Datasets

Astock¹ is a dataset proposed by [34], which contains stock and news data from 2018-07-01 to 2021-11-30. Spanning 1,248 days, it contains many items, such as prices and news, and is the only dataset that contains both factors and news. We follow the splitting strategy of [34]: 2018-07-01 to 2020-07-01 is used for training, 2020-07-01 to 2020-10-01 for validation, and the rest (2020-10-01 to 2020-12-31) for testing.

4.2 Evaluation Metrics

First, we utilize accuracy, F1 score, recall, and precision as evaluation metrics for our News Factorization Module, same as [34]. Second, we employ the GRS (Goodness-of-Fit R-squared) [5] metric, its p-value, and the mean absolute value of alpha to measure the model's explanatory power on return predictions for explainable news effects. For VaR (Value at Risk) risk assessment, we use RMSE (Root Mean Squared Error), MAE (Mean Absolute Error), and VaR Loss Coverage Rate as evaluation metrics. Evaluation metrics for the report generation system are currently omitted. To simulate trading based on the generated analysis reports, we utilize annualized rate of return, maximum drawdown, and Sharpe ratio as evaluation metrics.

¹ https://github.com/JinanZou/Astock

4.3 Implementation Details

For the specific hyper-parameters in our method, all baselines are compared with the same hyperparameter search grid: learning rates in [1e-3, 1e-4, 1e-5, 1e-6] and batch sizes in [16, 32, 64, 128]. The dropout rate is set to 0.1, while the hidden size of our two-layer MLP in Eqn. (5) is 1,024.
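As a concrete illustration of the News Factorization head (Eqns. (5)-(7)) with the stated hidden size of 1,024, the sketch below runs a forward pass of a 2-layer MLP with a softmax output and computes the cross-entropy loss. All shapes and data are hypothetical, and the quantile labeling simplifies Eqn. (6) to top 20% positive, bottom 20% negative, and neutral otherwise.

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def quantile_labels(returns):
    # Simplified Eqn. (6): top 20% -> positive (2), bottom 20% -> negative (0), else neutral (1).
    lo, hi = np.quantile(returns, [0.2, 0.8])
    return np.where(returns >= hi, 2, np.where(returns <= lo, 0, 1))

# Eqn. (5): a 2-layer MLP (hidden size 1,024, as in Sec. 4.3) with a softmax head.
n, d, hidden, classes = 64, 32, 1024, 3
W1 = rng.normal(0.0, 0.05, (d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.05, (hidden, classes)); b2 = np.zeros(classes)

X = rng.normal(size=(n, d))                    # concatenated news/factor features (hypothetical)
h = np.maximum(X @ W1 + b1, 0.0)               # ReLU hidden layer
probs = softmax(h @ W2 + b2)                   # class probabilities per stock

y = quantile_labels(rng.normal(size=n))        # labels from the return ranking
loss = -np.log(probs[np.arange(n), y]).mean()  # Eqn. (7): cross-entropy
print("cross-entropy loss:", round(float(loss), 3))
```

Training would backpropagate this loss through both layers; the forward pass alone suffices to show how the feature matrix, the softmax head, and the quantile labels fit together.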
4.4 News Factorization Evaluation

To evaluate the effectiveness of news factorization, we compared our model with two kinds of state-of-the-art baselines: 1) large pre-trained language models, including StockNet [28], HAN Stock [13], Bert Chinese [9], ERNIE-SKEP [25], and XLNET Chinese [6]; and 2) methods that integrate both news and factor information, including RoBERTa WWM Ext [6], Chinese Lert Large [7], Chinese Pert Large [8], and Self-supervised SRLP [34].

The experimental results are presented in Table 1. The following observations can be made: 1) Approaches combining factors and news generally outperform factor-only (e.g., Self-supervised SRLP) and news-only models (e.g., Chinese Pert Large). 2) SRLP with a more powerful embedding model achieves excellent performance, probably because it obtains a better representation of the news. 3) Our model outperforms all the other methods, demonstrating that it better extracts news information with semantic-role and relationship awareness and combines it with the factor data. For the Pert-style pre-trained language models above, we extracted embeddings from the [CLS] token and attached a three-way classifier to predict stock trends.

Table 2: GRS test results of the Fama-French 5-factor model with the News Effect Factor, and evaluation results of the VaR risk assessment system

| Model | GRS | GRS p-value | Mean Absolute Value of Alpha |
|---|---|---|---|
| Fama-French 5 Factors | 3.8345 | 1.791×10⁻³ | 0.0754 |
| Fama-French 5 Factors with News Effect Factor | 5.3526 | 0.000×10⁻³ | 0.0602 |

| Model | RMSE | MAE | VaR Loss Coverage Rate |
|---|---|---|---|
| VaR Risk Assessment System | 0.0947 | 0.8176 | 0.8123 |

To better understand the contributions of different components in our framework, we conduct component studies on multiple variants: 1) News; 2) Factors; 3) SRL; 4) SDPG.
Specifically, we explored four scenarios within our model framework: extracting factor information using only the MLP method, extracting news information using only the SRL method, using only the SDPG method, and using a combination of both SDPG and SRL. The detailed evaluation results are shown in Table 1. From the results, we draw the following two observations: 1) Our method combining factors and news significantly outperforms the factor-only and news-only variants. This may arise because factor information alone cannot sensitively capture the fluctuations in stock returns caused by changes in news, while relying solely on news information without considering the inherent characteristics of the stocks also cannot fully explain the variations in stock returns. 2) After ablation of the SRL and SDPG components, model performance degrades, suggesting that every piece of our model is essential. Ablating SRL means our model cannot extract a concise representation of key information from the news, while ablating SDPG means our model cannot capture the semantic dependencies among essential information within the news.

4.5 Return Forecasting Research

In order to precisely assess the effectiveness of the newly proposed News Effect Factor and to validate the model's explanatory capabilities after its incorporation in the real world, we opted to use out-of-distribution (OOD) data for in-depth analysis and validation of the model. Specifically, we selected all the data with stock news announcements in the Astock financial dataset from 2021-01-01 to 2021-11-01. The GRS indicators of the FF5 and FF5-News models are compared in Table 2. From Table 2, it is evident that the FF5-News model performs better on the GRS test, with a lower p-value, a lower mean absolute value of alpha, and a higher GRS value.
These results demonstrate that our model has strong explanatory power for predicting returns and outperforms the Fama-French 5-factor model. Therefore, the News Effect Factor we constructed can effectively explain a portion of the excess return rate and supplement the part unexplained by the FF5 model, thereby enhancing explanatory power and predictive analysis ability.

Table 3: Backtest in real-world scenarios

| Model | Maximum Drawdown | Annualized Rate of Return | Sharpe Ratio |
|---|---|---|---|
| CSI300 | -18.19% | -9.65% | -0.3653 |
| XIN9 | -26.70% | -18.36% | -0.6253 |
| Chinese-PERT-large + Factors | -4.17% | 11.26% | 1.9621 |
| Self-supervised SRLP + Factors | -3.66% | 26.38% | 3.76 |
| SRL & SDPG + Factors | -3.06% | 57.76% | 7.4043 |

4.6 VaR Risk Assessment

In this section, we assess the effectiveness of our VaR risk assessment module. The results are presented in Table 2: the RMSE of our risk assessment system is 0.0947 and the MAE is 0.8176. These two indicators demonstrate that our risk system has a high level of predictability for risk assessment and can provide a valuable source of information for risk analysis. Furthermore, our VaR loss coverage rate reached an impressive 0.8123, which confirms that our system covers a significant portion of the losses and has strong value as a risk reference.

4.7 Report Generation

In this section, we utilize LLMs (e.g., ChatGPT) to generate comprehensive reports for users, taking the multi-dimensional return forecasting results and VaR risk assessment results as input. Figure 4 illustrates two specific examples of our FinReport. As shown in Figure 4(a), for a good news announcement (i.e., Light Textile City announced its intent to acquire state-owned land in Keqiao), the investment advice given by our FinReport is "positive". The return forecasting is composed of multiple dimensions such as stock factors and size factors, while the risk assessment estimates a maximum decline of only 2%.
Conversely, for the bad news shown in Figure 4(b), the return forecasting is negative and the risk assessment reaches 10%. Further, to evaluate users' satisfaction with our generated reports, we invited 10 experts to rate the 1,000 generated reports on three levels, namely -1, 0, and 1. Owing to the factuality and high fluency of FinReport, users gave a 96% positive score, which shows that the automatically generated reports are more convenient than finding the information oneself. Of course, the premise is that the reports are sufficiently convincing, which is what this paper is dedicated to.

4.8 Profitability Evaluation in Real-World Scenarios

In this section, we evaluate our method in the real world by backtest. From an investment perspective, we hope that our model can achieve out-of-distribution (OOD) generalization. To this end, we conducted simulated trading from 2021-01-01 to 2021-11-01. Notably, considering the trading restrictions in China's A-share market, we bought the constructed portfolio at the opening price on each trading day and sold it at the opening price on the next trading day, assuming a transaction cost of 0.1% for each trade.
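The three backtest metrics reported in Table 3 (annualized rate of return, maximum drawdown, and Sharpe ratio) can all be derived from a daily return series, as sketched below. The return series is synthetic, and a 252-trading-day year and near-zero risk-free rate are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
daily = rng.normal(0.001, 0.01, 200)  # hypothetical daily strategy returns, net of the 0.1% cost

equity = np.cumprod(1.0 + daily)                    # cumulative return curve
ann_return = equity[-1] ** (252 / len(daily)) - 1   # annualized rate of return

running_peak = np.maximum.accumulate(equity)
max_drawdown = ((equity - running_peak) / running_peak).min()  # most negative peak-to-trough dip

sharpe = daily.mean() / daily.std(ddof=1) * np.sqrt(252)  # annualized Sharpe ratio (rf ~ 0)
print(f"annualized return {ann_return:.2%}, max drawdown {max_drawdown:.2%}, Sharpe {sharpe:.2f}")
```

Computing drawdown against the running peak of the equity curve, rather than the starting value, is what makes the metric capture the worst intra-period loss an investor could have experienced.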
[Figure 4 content. Panel (a), stock 600790.SH: on August 13, 2021, Light Textile City announced its intent to acquire state-owned land in Keqiao, Shaoxing, for the Light Textile Digital Logistics Port project with an investment of approximately RMB 3.172 billion. The generated FinReport forecasts small positive effects across the Market, Size, Valuation (BP), Profitability, Investment, and News Effect factors, estimates a maximum decline of 2%, and gives an overall "Positive" trend prognosis with an expected return growth of 1% to 2%. Panel (b), stock 002069.SZ: on April 14, 2021, Zoneco Group reported an estimated Q1 net loss of 30-50 million yuan, compared to a 3.7139 million yuan profit the year before. The generated FinReport forecasts declines across most factors, estimates a maximum decline of 10%, and gives an overall "Negative" trend prognosis with an expected return decline above 8%. Both reports close with a disclaimer that they serve as reference only and are not investment advice.]
Figure 4: Examples of both positive and negative reports.

[Figure 5: Cumulative returns and maximum drawdown from 2021-01-01 to 2021-11-01 for SRL & SDPG with Factors, Self-supervised SRLP with Factors, Chinese-PERT-large with Factors, the CSI 300 Index, and the XIN9 Index.]

Figure 5: Comparing the performance of different models in real-world scenarios.

In Table 3, we can observe that our approach achieved a remarkable annualized rate of return of 57.76%, surpassing previous baselines and the market indices XIN9 and CSI300. Additionally, our method obtained the lowest maximum drawdown of -3.06% and the highest Sharpe ratio of 7.4043, significantly outperforming previous methods and indicating that our approach achieves higher expected returns while maintaining relatively lower risk, as shown in Figure 5.

5 CONCLUSIONS

In this paper, we present FinReport, a novel model for generating financial reports. FinReport is designed to ensure accuracy, professionalism, and comprehensiveness, and consists of three sub-modules: News Factorization, Return Forecasting, and Risk Assessment. Leveraging innovative techniques, we utilize PropBank-style semantic role labeling (SRL) and semantic dependency parsing graphs (SDPG) to extract key information from news articles and combine it with stock features. The incorporation of news factors into the Fama-French 5-factor model improves stock return prediction. Our experimental results demonstrate superior performance compared to benchmarks, providing investors with more insightful and comprehensive financial news analysis reports.
6 ACKNOWLEDGMENTS This work is supported in part by the National Natural Science Foundation of China (62372187), in part by the National Key Re- search and Development Program of China (2021YFC2202603) and in part by the Guangdong Provincial Key Laboratory of Human Digital Twin (2022B1212010004). WWW ’24 Companion, May 13–17, 2024, Singapore, Singapore Li, et al. REFERENCES [1]Željko Agić, Alexander Koller, and Stephan Oepen. 2015. Semantic dependency graph parsing using tree approximations. In Proceedings of the 11th International Conference on Computational Semantics . 217–227. [2]Mariana S. C. Almeida and André F. T. Martins. 2015. Lisbon: Evaluating TurboSe- manticParser on Multiple Languages and Out-of-Domain Data. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015) . Association for Computational Linguistics. https://doi.org/10.18653/v1/s15-2162 [3]Shubin Cai, Xiaogang Feng, Ziwei Deng, Zhong Ming, and Zhiguang Shan. 2018. Financial News Quantization and Stock Market Forecast Research Based on CNN and LSTM. In Lecture Notes in Computer Science . Springer International Publishing, 366–375. https://doi.org/10.1007/978-3-030-05755-8_36 [4]Kinjal Chaudhari and Ankit Thakkar. 2023. Data fusion with factored quantization for stock trend prediction using neural networks. Information Processing & Management 60, 3 (2023), 103293. https://doi.org/10.1016/j.ipm.2023.103293 [5]A. Colin Cameron and Frank A.G. Windmeijer. 1997. An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics 77, 2 (1997), 329–342. https://doi.org/10.1016/S0304-4076(96)01818- 0 [6]Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting Pre-Trained Models for Chinese Natural Language Processing. In Findings of the Association for Computational Linguistics: EMNLP 2020 . Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp. 
58 [7]Yiming Cui, Wanxiang Che, Shijin Wang, and Ting Liu. 2022. LERT: A Linguistically-motivated Pre-trained Language Model. (2022). arXiv:2211.05344 [cs.CL] [8]Yiming Cui, Ziqing Yang, and Ting Liu. 2022. PERT: Pre-training BERT with Permuted Language Model. (2022). arXiv:2203.06906 [cs.CL] [9]Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. In Proceedings of the 2019 Conference of the North . Association for Computational Linguistics. https://doi.org/10.18653/v1/n19-1423 [10] Timothy Dozat and Christopher D. Manning. 2018. Simpler but More Accurate Semantic Dependency Parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) . Association for Computational Linguistics. https://doi.org/10.18653/v1/p18-2077 [11] Eugene F. Fama and Kenneth R. French. 2015. A five-factor asset pricing model. Journal of Financial Economics 116, 1 (2015), 1–22. https://doi.org/10.1016/j. jfineco.2014.10.010 [12] Blanka Horvath, Aitor Muguruza, and Mehdi Tomas. 2021. Deep learning volatil- ity: a deep neural network perspective on pricing and calibration in (rough) volatility models. Quantitative Finance 21, 1 (2021), 11–27. [13] Ziniu Hu, Weiqing Liu, Jiang Bian, Xuanzhe Liu, and Tie-Yan Liu. 2018. Listening to Chaotic Whispers. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining . ACM. https://doi.org/10.1145/3159652.3159690 [14] Takashi Kanamura, Lasse Homann, and Marcel Prokopczuk. 2021. Pricing analysis of wind power derivatives for renewable energy risk management. Applied Energy 304 (2021), 117827. [15] Gang Kou, Xiangrui Chao, Yi Peng, Fawaz E Alsaadi, Enrique Herrera Viedma, et al.2019. Machine learning methods for systemic risk analysis in financial sectors. (2019). [16] Lluís Màrquez, Xavier Carreras, Kenneth C Litkowski, and Suzanne Stevenson. 2008. Semantic role labeling: an introduction to the special issue. , 145–159 pages. [17] Harald A Mieg. 
2022. Volatility as a transmitter of systemic risk: Is there a structural risk in finance? Risk Analysis 42, 9 (2022), 1952–1964. [18] Stephan Oepen, Omri Abend, Lasha Abzianidze, Johan Bos, Jan Hajic, Daniel Hershcovich, Bin Li, Tim O’Gorman, Nianwen Xue, and Daniel Zeman. 2020. MRP 2020: The Second Shared Task on Cross-Framework and Cross-Lingual Meaning Representation Parsing. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing . Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.conll-shared.1 [19] Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing. In Proceedings of the 9th In- ternational Workshop on Semantic Evaluation (SemEval 2015) . Association for Computational Linguistics. https://doi.org/10.18653/v1/s15-2153 [20] Keyu Pan and Yawen Zeng. 2023. Do LLMs Possess a Personality? Mak- ing the MBTI Test an Amazing Evaluation for Large Language Models. arXiv:2307.16180 [cs.CL] [21] Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learn- ing Joint Semantic Parsers from Disjoint Data. In Proceedings of the 2018 Con- ference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers) . Association for Computational Linguistics. https://doi.org/10.18653/v1/n18-1135 [22] Ru Peng, Heming Zou, Haobo Wang, Yawen Zeng, Zenan Huang, and Junbo Zhao. 2024. Energy-based Automated Model Evaluation. arXiv:2401.12689 [cs.LG][23] Ramit Sawhney, Shivam Agarwal, Arnav Wadhwa, and Rajiv Ratn Shah. 2020. Deep Attentive Learning for Stock Movement Prediction From Social Media Text and Company Correlations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) . Association for Computational Linguistics. 
https://doi.org/10.18653/v1/2020.emnlp-main.676 [24] Domingo Tavella. 2003. Quantitative methods in derivatives pricing: an introduction to computational finance . John Wiley & Sons. [25] Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, and Feng Wu. 2020. SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis. https://doi.org/10.48550/ARXIV.2005.05635 [26] Elena Villar-Rubio, María-Dolores Huete-Morales, and Federico Galán-Valdivieso. 2023. Using EGARCH models to predict volatility in unconsolidated financial markets: the case of European carbon allowances. Journal of Environmental Studies and Sciences 13, 3 (May 2023), 500–509. https://doi.org/10.1007/s13412- 023-00838-5 [27] Yaqing Wang, Weifeng Yang, Fenglong Ma, Jin Xu, Bin Zhong, Qiang Deng, and Jing Gao. 2020. Weak Supervision for Fake News Detection via Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence 34, 01 (April 2020), 516–523. https://doi.org/10.1609/aaai.v34i01.5389 [28] Yumo Xu and Shay B. Cohen. 2018. Stock Movement Prediction from Tweets and Historical Prices. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) . Association for Computational Linguistics. https://doi.org/10.18653/v1/p18-1183 [29] Yawen Zeng. 2022. Point Prompt Tuning for Temporally Language Grounding. InSIGIR . 2003–2007. [30] Yawen Zeng, Da Cao, Xiaochi Wei, Meng Liu, Zhou Zhao, and Zheng Qin. 2021. Multi-Modal Relational Graph for Cross-Modal Video Moment Retrieval. In Proceedings of the CVPR . IEEE, 2215–2224. [31] Yawen Zeng, Yiru Wang, Dongliang Liao, Gongfu Li, Weijie Huang, Jin Xu, Da Cao, and Hong Man. 2022. Keyword-Based Diverse Image Retrieval with Variational Multiple Instance Graph. IEEE Trans. Neural Networks Learn. Syst. (2022). [32] Yawen Zeng, Yiru Wang, Dongliang Liao, Gongfu Li, Jin Xu, Hong Man, Bo Liu, and Xiangmin Xu. 2024. 
Contrastive topic-enhanced network for video captioning. Expert Systems with Applications 237 (2024), 121601. [33] Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-Based Parsing for Deep Dependency Structures. Computational Linguistics 42, 3 (Sept. 2016), 353–389. https://doi.org/10.1162/coli_a_00252 [34] Jinan Zou, Haiyao Cao, Lingqiao Liu, Yuhao Lin, Ehsan Abbasnejad, and Javen Qinfeng Shi. 2022. Astock: A New Dataset and Automated Stock Trading based on Stock-specific News Analyzing Model. In Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP). Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (Hybrid), 178–186. https://aclanthology.org/2022.finnlp-1.24

7 APPENDIX

In this appendix, to further analyze the effectiveness of our proposed method, more details about the experiment settings and results are presented.

7.1 Return Calculation

Groups in the FF5 Model are as follows:
•Size: Big/Small
•BP: High/Neutral/Low
•Profitability: Robust/Neutral/Weak
•Investment: Conservative/Neutral/Aggressive

Groups in the FF5-News Model are as follows:
•Size: Big/Small
•BP: High/Neutral/Low
•Profitability: Robust/Neutral/Weak
•Investment: Conservative/Neutral/Aggressive
•News Effect: Positive/Medium/Negative

7.2 Factor Construction

Factors in the FF5 Model are formulated as follows:

$$SMB_{BP} = (SH + SN + SL)/3 - (BH + BN + BL)/3,$$
$$SMB_{op} = (SR + SN + SW)/3 - (BR + BN + BW)/3,$$
$$SMB_{inv} = (SC + SN + SA)/3 - (BC + BN + BA)/3,$$
$$SMB = (SMB_{BP} + SMB_{op} + SMB_{inv})/3,$$
$$HML = (BH + SH)/2 - (BL + SL)/2,$$
$$RMW = (BR + SR)/2 - (BW + SW)/2,$$
$$CMA = (BC + SC)/2 - (BA + SA)/2.$$
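As a numeric check of the factor construction above, the following sketch computes $SMB_{BP}$ and $HML$ from hypothetical average returns of the six size/BP-sorted portfolios; the portfolio return values are made up for illustration.

```python
# Hypothetical average returns of the six size/BP-sorted portfolios
# (S = small, B = big; H/N/L = high/neutral/low book-to-price).
ports = {"SH": 0.012, "SN": 0.010, "SL": 0.008,
         "BH": 0.009, "BN": 0.007, "BL": 0.005}

# SMB_BP: average small-cap portfolio return minus average big-cap portfolio return.
smb_bp = (ports["SH"] + ports["SN"] + ports["SL"]) / 3 \
       - (ports["BH"] + ports["BN"] + ports["BL"]) / 3

# HML: average high-BP portfolio return minus average low-BP portfolio return.
hml = (ports["BH"] + ports["SH"]) / 2 - (ports["BL"] + ports["SL"]) / 2

print(round(smb_bp, 4), round(hml, 4))
```

The same long-minus-short pattern applies to the remaining factors, including the News factor added in the FF5-News variant.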
Factors in the FF5-News Model are formulated as follows:

$$SMB_{BP} = (SH + SN + SL)/3 - (BH + BN + BL)/3,$$
$$SMB_{op} = (SR + SN + SW)/3 - (BR + BN + BW)/3,$$
$$SMB_{inv} = (SC + SN + SA)/3 - (BC + BN + BA)/3,$$
$$SMB_{news} = (SP + SM + SN)/3 - (BP + BM + BN)/3,$$
$$SMB = (SMB_{BP} + SMB_{op} + SMB_{inv} + SMB_{news})/4,$$
$$HML = (BH + SH)/2 - (BL + SL)/2,$$
$$RMW = (BR + SR)/2 - (BW + SW)/2,$$
$$CMA = (BC + SC)/2 - (BA + SA)/2,$$
$$News = (BP + SP)/2 - (BN + SN)/2.$$

7.3 Detailed Information on the Backtest

In this section, we provide the more widely used metrics and the details of the backtest in Figures 6-8. Each figure reports cumulative return, annualized rate of return, excess return, benchmark return, Sharpe ratio, win rate, Sortino ratio, average excess return, daily win rate, information ratio, profit/loss times, maximum drawdown, and maximum drawdown period.

[Figure 6: Details of the backtest on Self-supervised SRLP with factors (maximum drawdown period 2021-10-12 to 2021-10-29).]
[Figure 7: Details of the backtest on Chinese Pert Large with factors (maximum drawdown period 2021-09-24 to 2021-11-01).]
[Figure 8: Details of the backtest on SRL & SDPG with factors (maximum drawdown period 2021-07-23 to 2021-07-30).] | 4 | 1 | The model includes multiple modules (news factorization, return forecasting, risk assessment) but seems to utilize established architectures like RoBERTa for the news factorization, which could have a manageable parameter count.
The dataset, Astock, has a significant amount of historical data over more than three years (1,248 days), but the model is trained on a smaller subset (approximately 2 years), which can be processed efficiently. Assuming a batch size of 32 and a reasonable number of epochs for convergence, I estimate around 4 hours to train the model on a single GPU. A single high-memory GPU (such as an 11 GB NVIDIA RTX 2080 Ti) should suffice for managing the computational load due to the model's modularity. Therefore, it is feasible this model could train in under 8 hours on a single GPU. | yes | Yes | NLP | FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model | 2024-03-05 0:00:00 | https://github.com/frinkleko/finreport | 1 | inside the repo. | under 5 minutes | https://colab.research.google.com/drive/1G6z0MNnOdpYGIu6F2wPc69cd-fbUWjsr?usp=sharing | Yes | Just run this Colab file. I have included the data-extraction process from the repo and passed the path in correctly. This ipynb file is downloaded from the repo itself. |
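The FF5/FF5-News factor construction from the appendix above can be sketched in plain Python. This is a sketch only: the dictionary keys follow the paper's portfolio notation (S/B for small/big size, H/N/L, R/W, C/A, P/M), and the News leg reuses `SN`/`BN` exactly as the extracted formulas do, even though `N` denotes neutral BP elsewhere.

```python
# Sketch of the FF5-News factor construction; inputs are mean returns of
# the sorted portfolios named as in the paper's appendix.

def smb_leg(small, big):
    """One SMB leg: mean small-cap return minus mean big-cap return."""
    return sum(small) / len(small) - sum(big) / len(big)

def ff5_news_factors(r):
    smb_bp   = smb_leg((r["SH"], r["SN"], r["SL"]), (r["BH"], r["BN"], r["BL"]))
    smb_op   = smb_leg((r["SR"], r["SN"], r["SW"]), (r["BR"], r["BN"], r["BW"]))
    smb_inv  = smb_leg((r["SC"], r["SN"], r["SA"]), (r["BC"], r["BN"], r["BA"]))
    smb_news = smb_leg((r["SP"], r["SM"], r["SN"]), (r["BP"], r["BM"], r["BN"]))
    return {
        "SMB":  (smb_bp + smb_op + smb_inv + smb_news) / 4,
        "HML":  (r["BH"] + r["SH"]) / 2 - (r["BL"] + r["SL"]) / 2,
        "RMW":  (r["BR"] + r["SR"]) / 2 - (r["BW"] + r["SW"]) / 2,
        "CMA":  (r["BC"] + r["SC"]) / 2 - (r["BA"] + r["SA"]) / 2,
        "News": (r["BP"] + r["SP"]) / 2 - (r["BN"] + r["SN"]) / 2,
    }
```

With all small-cap portfolio returns at 0.02 and all big-cap returns at 0.01, every SMB leg equals 0.01 and the spread factors (HML, RMW, CMA, News) vanish, which is a quick sanity check of the formulas.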
Fashion-MNIST | ENERGIZE | [] | Towards Physical Plausibility in Neuroevolution Systems | 2024-01-31T00:00:00 | https://arxiv.org/abs/2401.17733v1 | [
"https://github.com/rodriguesGabriel/energize"
] | {'Percentage error': '9.8', 'Accuracy': '0.902', 'Power consumption': '71.92'} | [
"Percentage error",
"Accuracy",
"Trainable Parameters",
"NMI",
"Power consumption"
] | Given the following paper and codebase:
Paper: Towards Physical Plausibility in Neuroevolution Systems
Codebase: https://github.com/rodriguesGabriel/energize
Improve the ENERGIZE model on the Fashion-MNIST dataset. The result
should improve on the following metrics: {'Percentage error': '9.8', 'Accuracy': '0.902', 'Power consumption': '71.92'}. You must use only the codebase provided.
arXiv:2401.17733v1 [cs.NE] 31 Jan 2024 Towards Physical Plausibility in Neuroevolution Systems Gabriel Cortês[0000−0001−6318−8520], Nuno Lourenço[0000−0002−2154−0642], and Penousal Machado[0000−0002−6308−6484] University of Coimbra, CISUC/LASI – Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering {cortes,naml,machado}@dei.uc.pt Abstract. The increasing usage of Artificial Intelligence (AI) models, especially Deep Neural Networks (DNNs), is increasing the power consumption during training and inference, posing environmental concerns and driving the need for more energy-efficient algorithms and hardware solutions. This work addresses the growing energy consumption problem in Machine Learning (ML), particularly during the inference phase. Even a slight reduction in power usage can lead to significant energy savings, benefiting users, companies, and the environment. Our approach focuses on maximizing the accuracy of Artificial Neural Network (ANN) models using a neuroevolutionary framework whilst minimizing their power consumption. To do so, power consumption is considered in the fitness function. We introduce a new mutation strategy that stochastically reintroduces modules of layers, with power-efficient modules having a higher chance of being chosen. We introduce a novel technique that allows training two separate models in a single training step whilst promoting one of them to be more power efficient than the other while maintaining similar accuracy. The results demonstrate a reduction in power consumption of ANN models by up to 29.2% without a significant decrease in predictive performance. Keywords: Evolutionary Computation · Neuroevolution · Energy Efficiency 1 Introduction As the demand for Machine Learning (ML) continues to grow, so does the electrical power required for training and assessment.
According to Patterson et al., GPT-3, the model behind ChatGPT, consumes 1287 MWh, corresponding to approximately 552 tons of CO2-equivalent emissions just for training during 15 days [16]. In addition to the environmental impacts of this power usage, it can also burden individual users and organizations, who may face high energy costs. Therefore, finding ways to reduce the power consumption of ML processes is becoming increasingly important. Artificial Neural Networks (ANNs) are a type of ML model inspired by biological neural networks [19]. They consist of multiple layers of artificial neurons, which are functions that take input data and produce an output based on it. The connections between neurons have an associated weight value modified in the training process to allow the network to "learn" how to solve a specific task. Deep Neural Networks (DNNs) are ANNs with a considerable number of hidden layers [9,10]. This allows them to avoid the feature engineering step, thus automatically discovering the representations needed for classification and achieving higher accuracy values. Training and executing ANNs is power-intensive due to the required computational resources. Evolutionary Algorithms (EAs) are algorithms inspired by natural selection [6,17]. To evolve solutions over multiple generations, they utilize mechanisms such as selection, crossover, and mutation. The process begins with a randomly initialized population whose evolution is steered by a fitness function that measures the quality of an individual. In conjunction with the mentioned evolutionary mechanisms, the process is predicted to culminate in near-optimal individuals. Neuroevolution (NE) uses EAs to generate and optimize ANNs for a given task [7]. It can optimize the ANN's architecture and hyperparameters.
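The fitness-guided selection-and-mutation loop just described, in the (1+λ)-ES form that Fast-DENSER uses as its evolutionary engine, can be sketched minimally as follows. The `init`, `mutate`, and `fitness` callables are placeholders for illustration, not the ENERGIZE codebase's API.

```python
import random

def one_plus_lambda_es(init, mutate, fitness, lam=4, generations=150, seed=0):
    """Minimal (1+lambda)-ES: each generation the parent produces `lam`
    mutated offspring, and the best individual seen so far becomes the
    new parent (ties favour the newer individual)."""
    rng = random.Random(seed)
    parent = init(rng)
    best_fit = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            child = mutate(parent, rng)
            f = fitness(child)
            if f >= best_fit:
                parent, best_fit = child, f
    return parent, best_fit

# Toy usage: maximize -x^2 starting from x = 5 with Gaussian mutations.
best, fit = one_plus_lambda_es(
    init=lambda rng: 5.0,
    mutate=lambda p, rng: p + rng.gauss(0.0, 0.5),
    fitness=lambda x: -x * x,
    lam=4, generations=50, seed=1,
)
```

In the real framework each "individual" is an encoded DNN and `fitness` involves training and evaluating it, which is why a (1+λ) scheme with a tiny population is used instead of a large generational EA.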
We hypothesise that we can address the energy inefficiency issue by using NE to search for well-suited models for a particular problem while being power-efficient. Fast Deep Evolutionary Network Structured Representation (Fast-DENSER) is a method that utilizes an Evolution Strategy (ES) to find optimal ANN models by using their accuracy as the fitness function, thus guiding the search towards accurate models [2]. In this work, we propose novel approaches integrated into Fast-DENSER to find power-efficient models. We have incorporated a new approach to measure the power consumption of a DNN model during the inference phase. This metric has been embedded into multi-objective fitness functions to steer the evolution towards more power-efficient DNN models. We also introduce a new mutation strategy that allows the reutilization of modules of layers with probability inverse to the power usage of a module, thus (re)introducing efficient sets of layers in a model. We propose the introduction of an additional output layer connected to an intermediate layer of a DNN model and posterior partitioning into two separate models to obtain smaller but similarly accurate models that utilize less power. To the best of our knowledge, no prior works employ a similar approach. The experiments are analyzed through two metrics: accuracy and mean power usage during the validation step. The motive for using the power usage of the validation step instead of the training step is that the training is usually performed only once. Contrarily, the inference is executed multiple times. Moreover, inference does not necessarily occur on the machine where the training was conducted, which is vital since many devices are not optimized for these tasks. The results of this work show that it is possible to have DNN models with substantially inferior power usage.
The best model found regarding power consumes 29.18 W (29.2%) less whilst having a tiny decrease in performance (less than 1%). This work is structured as follows: Section 2 provides background information on ANNs and NE. Section 3 introduces our methodologies to enhance the power efficiency of ANN models. Section 4 outlines the experimental setup. Section 5 presents the experimental results. Finally, in Section 6, we provide our conclusions and prospects for future research. 2 Background 2.1 Artificial Neural Networks Artificial Neural Networks are a type of supervised ML inspired by biological neural networks [19]. An ANN consists of connected processing units known as neurons. The connections follow a specific topology to achieve the desired application. A neuron's input may be the output of other neurons, external sources, or itself. Every connection has an associated weight, allowing the system to simulate biological synapses. A weighted sum of the inputs is computed at a given instant, considering the connection weights. It is also possible to sum a bias value to this. An activation function is applied, and thus, the neuron's output is obtained. DNNs are ANNs composed of many hidden layers. Due to this, DNNs can avoid the feature engineering step – which usually requires human expertise – by automatically discovering the representations needed for classification [9,10]. Thus, they can model more complex relationships and achieve higher accuracy on tasks requiring pattern recognition. The development and usage of DNNs have substantially increased due to the widespread deployment of more capable hardware, such as Graphics Processing Units (GPUs) [3]. 2.2 Neuroevolution NE is the application of evolutionary techniques to search for DNN models.
It is used to optimize the structure and weights of DNNs to improve their performance on specific tasks, such as image classification and natural language processing. NE is a gradient-free method based on the concept of population [7]. It allows for the simultaneous exploration of multiple zones of the search space through parallelization techniques, at the cost of taking a usually long time to execute, since each individual of the population is a DNN that requires training and testing. Deep Evolutionary Network Structured Representation (DENSER) is a neuroevolutionary framework that allows the search of DNNs through a grammar-based neuroevolutionary approach that searches both network topology and hyperparameters [1]. The developed DNNs are structured according to a provided context-free grammar. DENSER uses Dynamic Structured Grammatical Evolution (DSGE) as the strategy that allows the modification of the network topology. DSGE is built upon Structured Grammatical Evolution (SGE), with the main differences of allowing the growth of the genotype and only storing encoded genes [11]. Allied with dynamic production rules, DSGE allows the creation of multiple-layer DNNs. SGE proves to perform better than Grammatical Evolution (GE), and DSGE proves to be superior to SGE [12]. The individuals of the evolutionary process are represented at two levels: the outer level encodes the topology of the ANN, and the inner one encodes its hyperparameters. Fast-DENSER was developed to overcome some limitations verified in DENSER: evaluating the population consumes a considerable amount of time, and the developed DNNs are not fully trained [2]. Fast-DENSER is an extension of DENSER in which the evolutionary engine is replaced by a (1+λ)-ES. This modification dramatically reduces the required number of evaluations per generation, enabling executions 20 times faster than the original version of DENSER.
Moreover, individuals are initialized with shallow topologies, and the stopping criterion is variable to allow an individual to be trained for a more extended time. On the CIFAR-10 dataset [8], DENSER obtained models with an accuracy higher than most of the state-of-the-art results, and on CIFAR-100 [8] it obtained the best accuracy reported by NE approaches. Fast-DENSER proves to be highly competitive relative to DENSER, achieving execution times far inferior to its predecessor. Additionally, Fast-DENSER can develop DNNs that do not require additional training after the evolutionary approach and are, therefore, ready to be deployed. 3 Approach This section outlines the approaches developed to address the challenge of reducing power consumption in ANN models. 3.1 Power Measurement Measuring the power a GPU consumes is fundamental when developing approaches that minimize a model's energetic footprint. The ecosystem of developing a DNN model mainly consists of three phases: design, training, and deployment. The design phase uses some energy, be it with manual design techniques or automatic methods. DENSER is a NE framework and, as such, consumes energy in the search for optimal models, and such consumption might be on par with the energy used on manual, trial-and-error methods. Reducing the energy used in this phase is out of the scope of this work. The training of a DNN model is an expensive process in which a model is trained on a large dataset to learn to predict unseen instances, taking a significant toll on technological companies' and individuals' power bills. While diminishing energy consumption during the training process remains a significant objective, it is worth noting that the inference phase in DNNs holds vital importance during software deployment, as the software obtains results through inference.
This becomes particularly relevant when considering the potential utilization of these models by millions of users. As such, tackling the minimization of energy consumption in this step is vital. For example, it is estimated that 80% to 90% of NVIDIA's ML computations are inference processing [13], and about 60% of Google's ML energy usage is for inference, with the remaining portion being for training [16]. Considering this, our work focuses on the power consumption in the inference step to allow a large deployment, thus saving more computational resources and energy and, on another layer, reducing financial expenses and environmental impact. 3.2 Model Partitioning Training a DNN model requires a substantial amount of time and considerable energy. Creating a process in which a single model is trained but can be split posteriorly into two models would reduce the time spent on training two models by, at most, two times. Pushing one of those two models into being smaller than the other may produce a simpler, similarly accurate, yet more power-efficient model. Following this line of reasoning, we propose a modification to Fast-DENSER in which an extra output layer is connected to an intermediate layer of the model. The two-output model (Figure 1a) is trained to optimize for two outputs. At the validation step, it is split into left (Figure 1b) and right (Figure 1c) partitions. These partitions are disjoint and can be evaluated similarly to how the complete model is evaluated, and metrics such as accuracy and power consumption can be obtained. The intermediate point is a marker for where the additional output is added at the model partitioning step. We can, for example, consider a model as an array of layers, and the mentioned marker is the index of the layer to which the additional output is connected. This point can be assigned to any intermediate layer of the model.
The input and output layers are excluded to prevent useless and redundant partitions. Since the maximum allowed value of the point is equal to the number of layers of the model minus one, the grammar initializer – which generates individuals according to the grammar – and the mutation mechanism for the macrostructure level of DENSER – which performs mutations on the hyperparameters of the individuals – were modified to consider the maximum number of layers of the model dynamically. To introduce the intermediate point in the evolutionary process, it was considered part of the macrostructure and, as such, as a rule of the grammar. The introduced rule is <middle_point> ::= [middle_point,int,1,0,x], meaning that one integer value is obtained, with the lower limit being zero. The upper limit is an arbitrary variable x that is replaced at any instance by the maximum number of layers of the model minus one. (a) Full model (b) Left partition (c) Right partition Fig. 1: Example of a two-output model and its left and right partitions, with the layer marked by the intermediate point in red. 3.3 Fitness Functions To consider accuracy and power consumption in the fitness function, some functions were developed to take these parameters into account. Since our objective is to maximize accuracy but minimize power consumption, we consider the inverse of the latter, i.e., power^(−1). Considering our approach of dividing a DNN model into two comparably accurate partitions, with one smaller than the other, all of the presented fitness functions consider the accuracy of both partitions, intending to enhance both. These fitness functions only focus on minimizing power consumption within the larger partition, which is anticipated to experience higher power usage.
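As a concrete sketch of this section's fitness design, the threshold-switched function of Equation 3 can be written in a few lines of Python; the function and argument names are ours for illustration, not the codebase's API.

```python
def f3(acc_left, acc_right, power_left,
       thr_left=0.80, thr_right=0.85, power_weight=10.0):
    """Equation 3 sketch: accuracy-only fitness while both partitions are
    below their thresholds; once either partition clears its threshold,
    the inverse power of the left (larger) partition is added, weighted
    by 10 because typical wattages fall in the [30, 100] W range."""
    fitness = acc_left + acc_right
    if acc_left > thr_left or acc_right > thr_right:
        fitness += power_weight / power_left
    return fitness
```

For example, a pair of partitions at 0.70/0.80 accuracy scores 1.5 regardless of power, while a pair at 0.85/0.90 drawing 100 W on the left partition scores 1.75 + 10/100 = 1.85, so power only starts shaping the search once satisfiable models exist.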
Firstly, as presented in Equation 1, we developed a fitness function that sums the accuracy of both partitions with the inverse of the power usage of the left partition. The accuracy values have an upper limit, consisting of minimum satisfiable values for the models, i.e., values below the state-of-the-art [15], to allow some tradeoff between accuracy and power consumption. The upper limit is higher on the right partition (0.85) than on the left partition (0.80) since it is desired that the right partition obtains a higher accuracy value, if possible. The goal of this function design was to obtain satisfiable models and, after that, guide the evolutionary process only by their power usage to minimize the power usage of the models. After testing, we observed that the power usage typically falls within the range [30, 100] W, which, when inverted, resulted in values too small to be able to properly steer the evolutionary process.

f1 = min(0.80, acc_left) + min(0.85, acc_right) + power_left^(−1)    (1)

Considering this, another fitness function was designed (Equation 2), where the power usage is multiplied by 10, thus giving it a more considerable weight, since power usage values for the used GPU typically fall within the [30, 100] W range. This weight is closely related to the used GPU and should be modified accordingly. Preliminary experiments showed that although the evolution managed to somewhat minimize the power usage of the models, their accuracy remained around the chosen upper limits. Since this is not an optimal behaviour, a function that does not limit accuracy was developed.

f2 = min(0.80, acc_left) + min(0.85, acc_right) + 10 ∗ power_left^(−1)    (2)

As shown in Equation 3, this fitness function considers only the accuracy of the partitions when both are below a threshold. After any of them surpass their respective threshold, power consumption is also considered, with a weight of 10.
This means that, at first, evolution is only steered by the accuracy of the models. When satisfiable models are obtained, power consumption starts being considered, to evolve both accurate and energy-efficient models.

f3 = acc_left + acc_right                              if acc_left ≤ 0.80 ∧ acc_right ≤ 0.85
f3 = acc_left + acc_right + 10 ∗ power_left^(−1)       otherwise    (3)

3.4 Module Reutilization Internally, Fast-DENSER considers modules of layers on each individual, from which a DNN is then unravelled. One way to encourage the evolution of energy-efficient models is to provide an individual with a set of layers that are known to be efficient. As such, a scheme of module reutilization is proposed through the design of new mutation operators and the addition of an archive of modules and their respective power consumption. Since this strategy only considers power consumption, it is expected that inaccurate models may sometimes be generated. Due to the nature of the evolutionary process and the used fitness function (Equation 3), inaccurate models are intensely penalized and, as such, discarded in favour of better ones. Whenever a module of layers is randomly generated or modified, its power consumption is measured. To do this, a temporary model is created, which consists of an input layer, the module's layers, and an output layer. Since the module's accuracy is irrelevant, this temporary network is neither trained nor fed with a proper dataset, i.e., it is given random values instead of a dataset. An operator of mutation, reuse module, was introduced to take advantage of this information. It selects a module with a probability inversely proportional to its power consumption, i.e., modules with inferior power consumption have a superior probability of being chosen. As shown in Equation 4, to obtain the probability of a module i being chosen, we divide the inverse of its power, power_i, by the sum of the inverse power of all modules, with n the number of saved modules.
The selected module is introduced in a randomly chosen position. An operator that randomly removes a module from an individual is also introduced to counteract the described operator.

P(i) = (1/power_i) / Σ_{j=0}^{n} (1/power_j)    (4)

4 Experimental Setup We performed two experiments: the baseline, which uses the plain version of Fast-DENSER with accuracy as the fitness function, and an experiment where our proposed approaches were applied, using the fitness function presented in Equation 3. Table 1 presents the experimental parameters used across the experiments. Note that the DSGE-level rate refers to the probability of a grammar mutation on the model's layers, the Macro layer rate pertains to the probability of a grammar mutation affecting the macrostructure, encompassing elements such as hyperparameters or intermediate point mutation, and the Train longer rate is the probability of allocating more time for an individual to be trained. The rates of reusing and removing modules do not apply to the baseline experiment. The experimental analyses consider the Mean Best Fitness (MBF) over 5 runs. The experiments were performed on a server running Ubuntu 20.04.3 LTS with an Intel Core i7-5930K CPU with a clock frequency of 3.50 GHz, 32 GB of RAM, and an NVIDIA TITAN Xp with CUDA 11.2, CuDNN 8.1.0, Python 3.10.9, Tensorflow 2.9.1 and Keras 2.9.0 installed, as well as the pyJoules 0.5.1 Python module with the NVIDIA specialization.
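The power-inverse module selection of Equation 4 can be sketched as follows; the module archive is represented simply as a list of measured wattages, and the names are illustrative rather than the codebase's.

```python
import random

def module_probabilities(powers):
    """Equation 4 sketch: P(i) = (1/power_i) / sum_j (1/power_j),
    so cheaper modules are proportionally likelier to be reused."""
    inv = [1.0 / p for p in powers]
    total = sum(inv)
    return [w / total for w in inv]

def pick_module(powers, rng=random):
    """Draw one archive index according to the probabilities above."""
    weights = module_probabilities(powers)
    return rng.choices(range(len(powers)), weights=weights)[0]
```

With an archive of a 30 W and a 60 W module, the cheaper one is chosen with probability 2/3, matching the inverse-power weighting the operator is meant to implement.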
Table 1: Experimental parameters.
Evolutionary parameters — Number of runs: 5; Number of generations: 150; Maximum number of epochs: 10,000,000; Population size: 5; Add layer rate: 25%; Reuse layer rate: 15%; Remove layer rate: 25%; Reuse module rate: 15%; Remove module rate: 25%; Add connection: 0%; Remove connection: 0%; DSGE-level rate: 15%; Macro layer rate: 30%; Train longer rate: 20%.
Train parameters — Default train time: 10 min; Loss: Categorical Cross-entropy.
All experiments used the Fashion-MNIST dataset [18], which was developed as a more challenging replacement for the well-known MNIST dataset [5] by swapping handwritten digits with images of clothes, such as shirts and coats, aiming at a more realistic and relevant benchmark. It is a balanced dataset consisting of a collection of 60 thousand examples for training and 10 thousand for testing, where each example is a 28x28 grey-scale image representing clothing items belonging to one of ten classes. Since power usage is essential in making NE physically plausible, a function to measure power was developed using the pyJoules library. Its pseudocode can be analyzed in Algorithm 1, with meter being the library tool that facilitates the measurement of energy consumed, and start and stop the functions that allow controlling it. It wraps a function call (func, with corresponding arguments args) while measuring the GPU energetic consumption during its execution and the call's duration. This measurement is converted from milliJoule to Watt and appended to the array of measures. These steps are performed n_measures times, and then the mean value is calculated. In our work, we considered n_measures = 30. The described function was integrated with Fast-DENSER on the model's validation step to measure the power used in the inference phase.
Algorithm 1 Power Measure Algorithm
Require: func, args, n_measures
  measures ← ∅
  i ← 1
  while i ≤ n_measures do
      start(meter)
      output ← func(args)
      stop(meter)
      (energy, duration) ← measure(meter)
      measure ← energy / 1000 / duration    ⊲ Convert mJ to W
      measures ← measures ∪ measure
      i ← i + 1
  end while
  mean_power ← mean(measures)
  return (output, mean_power)
It should be noted that ambient conditions of the server's location, such as temperature and humidity, were not considered, as well as other external variables, and no other processes used the GPU during the execution of these experiments. 5 Results This section compares the results from the baseline experiment and the experiment where our approaches were applied. The results show the mean accuracy and the mean power consumption, which are derived from the best individuals by fitness over 5 separate runs. Since the results did not follow a normal distribution and the samples were independent, the Kruskal-Wallis non-parametric test was employed to determine if significant differences existed among the various approach groups. When significant differences were observed, the Mann-Whitney post-hoc test with Bonferroni correction was applied. We considered a significance level of α = 0.05 in all statistical tests. Figure 2 compares the accuracy obtained in the two experiments. The experiments present a similar accuracy until generation 70, where it becomes possible to observe a clear difference between them. The baseline experiment achieves a higher accuracy than the other experiment, and, relative to that experiment, it is visible that the smaller model obtains a marginally smaller accuracy than the larger one. Table 2 provides statistical analysis, and Table 3 showcases statistical values of the experiments. It is possible to see that, relative to the median values, the proposed method achieves inferior accuracy and that the smaller model obtains the worst accuracy.
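Algorithm 1 can be sketched in Python as below. The `meter` object stands in for pyJoules' device reader, whose real API is not reproduced here; the stub meter is purely illustrative, returning a fixed energy/duration pair so the conversion from millijoules to watts can be checked.

```python
def measure_power(func, args, meter, n_measures=30):
    """Run func(*args) n_measures times; per run, read energy (mJ) and
    duration (s) from the meter and return the last output together with
    the mean power in watts, mirroring Algorithm 1."""
    measures = []
    for _ in range(n_measures):
        meter.start()
        output = func(*args)
        meter.stop()
        energy_mj, duration_s = meter.read()
        measures.append(energy_mj / 1000.0 / duration_s)  # mJ -> J, /s -> W
    return output, sum(measures) / len(measures)

class StubMeter:
    """Illustrative stand-in for a pyJoules-style GPU energy meter."""
    def start(self): pass
    def stop(self): pass
    def read(self): return (50_000.0, 1.0)  # 50 J over 1 s -> 50 W
```

Averaging over repeated calls (the paper uses n_measures = 30) smooths out per-call variance in both the energy reading and the inference latency.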
Figure 3 presents a comparison of the power consumption measured in the two experiments. The baseline predominantly has an increasing behaviour, which can be explained by the fact that the evolution is only being guided by accuracy, i.e., there are no incentives to favour models that consume less power. Contrarily, the proposed method obtained relatively stable results over the evolutionary process, with the smaller model presenting marginally lower results than its counterpart. Table 4 provides statistical analysis, and Table 5 showcases statistical values of the experiments. We can conclude that, relative to the median values, the proposed method achieves inferior power consumption and that the smaller model is the most power-efficient.
Fig. 2: Evolution of the accuracy over 150 generations.
Fig. 3: Evolution of the power consumption over 150 generations.
Table 2: Pair-wise comparison of used groups on the accuracy metric, using the Mann-Whitney U post-hoc test with Bonferroni correction, with bold values denoting statistically significant differences. P-values: Accuracy_left vs. Baseline: 1.09×10^−4; Accuracy_right vs. Baseline: 1.15×10^−7; Accuracy_right vs. Accuracy_left: 1.16×10^−4.
Table 3: Mean value, standard deviation, median and difference to baseline median of the accuracy of the experiments.
Baseline, Accuracy: mean 0.904, SD 0.037, median 0.916.
Proposed Method, Accuracy_left: mean 0.902, SD 0.024, median 0.911, diff. to baseline −0.005.
Proposed Method, Accuracy_right: mean 0.895, SD 0.034, median 0.907, diff. to baseline −0.009.
Table 4: Pair-wise comparison of used groups on the power metric, using the Mann-Whitney U post-hoc test with Bonferroni correction, with bold values denoting statistically significant differences. P-values: Power_left vs. Baseline: 2.72×10^−29; Power_right vs. Baseline: 8.84×10^−32; Power_right vs. Power_left: 3.67×10^−19.
Table 5: Mean value, standard deviation, median and difference to baseline median of the experiments' power consumption.
Baseline, Power: mean 97.80 W, SD 18.84 W, median 99.89 W.
Proposed Method, Power_left: mean 71.92 W, SD 1.60 W, median 72.20 W, diff. to baseline −27.69 W.
Proposed Method, Power_right: mean 70.40 W, SD 1.30 W, median 70.71 W, diff. to baseline −29.18 W.
6 Conclusion In this work, we developed approaches integrated into Fast-DENSER which empower it to generate DNN models with better power efficiency. The most fundamental approach consists of measuring the power consumed by the GPU in the inference phase of the DNN. We use the measure provided by the GPU to do this. Using this metric, we developed multi-objective fitness functions that steer the evolutionary process in a path that minimizes power consumption. We created a process by which an additional output is added to a DNN model and, after being trained, the model is split into two models – a larger one which consists of all the layers and a smaller one composed of the layers up to the one where the additional output is connected. This allows us to create models tuned for environments with fewer resources, such as smartphones, while creating more power-intensive models tuned for environments with more resources, such as servers. This is performed in one training, thus taking less time to develop the two models and saving energy in the process. No prior work has been identified that employs a similar approach. We introduced a new mutation strategy to Fast-DENSER that allows the reutilization of sets of layers – modules – according to the power consumption of the modules. We stochastically favour the reintroduction of modules in a model according to the inverse of the power they consume, thus incorporating power-efficient modules into a model. The results obtained by our proposals show that we can reduce the power consumption of the ANNs without compromising their predictive performance, showing that it is possible to minimize power consumption while, at the same time, maximizing accuracy through the usage of NE frameworks such as Fast-DENSER. The best model found regarding power consumes 29.18 W (29.2%) less whilst having a tiny decrease in performance (less than 1%), proving that a small trade-off in accuracy can yield a considerable reduction in the power consumed by the model. 6.1 Future Work We introduced novel approaches and performed a baseline experiment and an experiment where the mentioned strategies were applied. It could be valuable to explore other approaches and perform more experiments in the future. To better understand the individual impact of each strategy on the efficiency of the models, it would be valuable to perform experiments with the employment of only one strategy at a time. It would also be interesting to vary the fitness functions (e.g., the weights used in them) and to vary evolutionary parameters such as the probabilities of the mutations. One of the most important constraints of our work is GPU time, due to the amount of operations required to train every model of each generation. To minimize the required time, it would be noteworthy to research how to employ training-less strategies in Fast-DENSER, i.e., use strategies that estimate the accuracy of a model without training it [4,14]. Such strategies would allow us to perform more experiments in less time, saving energy in the design process.
Acknowledgments This work was supported by the Portuguese Recovery and Resil ience Plan (PRR) through project C645008882-00000055, Center for Responsi ble AI, by the FCT, I.P./MCTES through national funds (PIDDAC), by Project No. 7059 - Neuras- pace - AI fights Space Debris, reference C644877546-0000002 0, supported by the RRP - Recovery and Resilience Plan and the European Next G eneration EU Funds, following Notice No. 02/C05-i01/2022, Component 5 - Capitalization and Business Innovation - Mobilizing Agendas for Business Inno vation, and within the scope of CISUC R&D Unit - UIDB/00326/2020. References 1. Assunção, F., Lourenço, N., Machado, P., Ribeiro, B.: DEN SER: Deep evolutionary network structured representation. Genet. Program. Evolv able Mach. 20(1), 5–35 (2019). https://doi.org/10.1007/S10710-018-9339-Y 2. Assunção, F., Lourenço, N., Ribeiro, B., Machado, P.: Fas t-DENSER: Fast deep evolutionary network structured representation. Softwar eX14, 100694 (2021). https://doi.org/10.1016/j.softx.2021.100694 3. Balas, V., Roy, S., Sharma, D., Samui, P.: Handbook of Deep Learning Appli- cations, Smart Innovation, Systems and Technologies, vol. 136. Springer, Nether- lands, 1 edn. (2019). https://doi.org/10.1007/978-3-030 -11479-4 4. Chen, W., Gong, X., Wang, Z.: Neural architecture search o n ImageNet in four GPU hours: A theoretically inspired perspective. In: 9th In ternational Conference on Learning Representations, ICLR 2021, Virtual Event, Aus tria, May 3-7, 2021. OpenReview.net (2021). https://doi.org/10.48550/arXiv .2102.11535 5. Deng, L.: The MNIST database of handwritten digit images f or ma- chine learning research. IEEE Signal Process. Mag. 29(6), 141–142 (2012). https://doi.org/10.1109/MSP.2012.2211477 6. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary C omputing. Springer, 2nd edn. (2015). https://doi.org/10.1007/978-3-662-44874- 8 7. Galván, E., Mooney, P.: Neuroevolution in deep neural net works: Current trends and future challenges. 
IEEE Trans. Artif. Intell. 2(6), 476–493 (2021). https://doi.org/10.1109/TAI.2021.3067574 8. Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Tech. rep., University of Toronto (2009), https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf 9. LeCun, Y., Bengio, Y., Hinton, G.E.: Deep learning. Nat. 521(7553), 436–444 (2015). https://doi.org/10.1038/nature14539 10. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: A survey of deep neural network architectures and their applications. Neurocomputing 234, 11–26 (2017). https://doi.org/10.1016/J.NEUCOM.2016.12.038 11. Lourenço, N., Assunção, F., Pereira, F.B., Costa, E., Machado, P.: Structured grammatical evolution: A dynamic approach. In: Ryan, C., O'Neill, M., Collins, J.J. (eds.) Handbook of Grammatical Evolution, pp. 137–161. Springer (2018). https://doi.org/10.1007/978-3-319-78717-6_6 12. Lourenço, N., Pereira, F.B., Costa, E.: SGE: A structured representation for grammatical evolution. In: Artificial Evolution. Lecture Notes in Computer Science, vol. 9554, pp. 136–148. Springer (2015). https://doi.org/10.1007/978-3-319-31471-6_11 13. Luccioni, A.S., Viguier, S., Ligozat, A.L.: Estimating the carbon footprint of BLOOM, a 176B parameter language model (2022). https://doi.org/10.48550/ARXIV.2211.02001 14. Mellor, J., Turner, J., Storkey, A.J., Crowley, E.J.: Neural architecture search without training. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event. Proceedings of Machine Learning Research, vol. 139, pp. 7588–7598. PMLR (2021). https://doi.org/10.48550/arXiv.2006.04647 15. Meshkini, K., Platos, J., Ghassemain, H.: An analysis of convolutional neural network for fashion images classification (Fashion-MNIST). 
In: Proceedings of the Fourth International Scientific Conference “Intelligent Information Technologies for Industry” (IITI'19). pp. 85–95. Springer (2020). https://doi.org/10.1007/978-3-030-50097-9_10 16. Patterson, D.A., Gonzalez, J., Hölzle, U., Le, Q.V., Liang, C., Munguia, L., Rothchild, D., So, D.R., Texier, M., Dean, J.: The carbon footprint of machine learning training will plateau, then shrink. Computer 55(7), 18–28 (2022). https://doi.org/10.1109/MC.2022.3148714 17. Vikhar, P.A.: Evolutionary algorithms: A critical review and its future prospects. In: 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC). pp. 261–265 (2016). https://doi.org/10.1109/ICGTSPICC.2016.7955308 18. Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a Novel image dataset for benchmarking machine learning algorithms (2017). https://doi.org/10.48550/arXiv.1708.07747 19. Yegnanarayana, B.: Artificial Neural Networks. PHI Learning (2009) | 4 | 1 | The study utilizes Fast-DENSER on the Fashion-MNIST dataset, which has 60,000 training images. Given the detailed architecture modifications for training two separate models simultaneously, I estimate the training time based on the complexity of multiple evolutionary computations and a default training time of 10 minutes with 150 generations. Considering any overhead for model complexity and a manageable batch size, 4 hours total training time on a single GPU seems reasonable, especially given that Fashion-MNIST is less complex than more extensive datasets like CIFAR-10. Moreover, the hardware mentioned (NVIDIA TITAN Xp) is capable enough for this task, confirming that a single GPU can suffice for training these models effectively without exceeding 8 hours of training time. 
| yes | Yes | CV | Towards Physical Plausibility in Neuroevolution Systems | 2024-01-31 0:00:00 | https://github.com/rodriguesGabriel/energize | 1 | downloaded by training script | Max runtime = generations × population_size × train_time_per_individual
= 150 × 4 × 300 seconds
= 180,000 seconds
= 50 hours (plus overhead for evaluation, logging, mutation, etc.)
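The worst-case runtime estimate above can be checked with a line of arithmetic; the three constants (150 generations, population of 4, roughly 300 s per individual) come straight from the verification note.

```python
# Sanity check of the runtime bound stated above.
generations, population, secs_per_individual = 150, 4, 300
total_seconds = generations * population * secs_per_individual  # 180,000 s
hours = total_seconds / 3600                                    # 50.0 h, before overhead
```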
| https://drive.google.com/file/d/1ToU-VDe6i5AXDihxb_T3v7gNC6iEP9ng/view?usp=sharing | Yes | -- Straightforward: just change -d when calling the train script. I have included the arguments for the train file in the Colab notebook |
Fashion-MNIST | GECCO | [] | A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification | 2024-02-01T00:00:00 | https://arxiv.org/abs/2402.00564v6 | [
"https://github.com/geccoproject/gecco"
] | {'Percentage error': '11.91', 'Accuracy': '88.09'} | [
"Percentage error",
"Accuracy",
"Trainable Parameters",
"NMI",
"Power consumption"
] | Given the following paper and codebase:
Paper: A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification
Codebase: https://github.com/geccoproject/gecco
Improve the GECCO model on the Fashion-MNIST dataset. The result
should improve on the following metrics: {'Percentage error': '11.91', 'Accuracy': '88.09'}. You must use only the codebase provided.
| A SINGLE GRAPH CONVOLUTION IS ALL YOU NEED: EFFICIENT GRAYSCALE IMAGE CLASSIFICATION Jacob Fein-Ashley†, Sachini Wickramasinghe†, Bingyi Zhang†, Rajgopal Kannan∗, Viktor Prasanna† †University of Southern California, ∗DEVCOM Army Research Office ABSTRACT Image classifiers for domain-specific tasks like Synthetic Aperture Radar Automatic Target Recognition (SAR ATR) and chest X-ray classification often rely on convolutional neural networks (CNNs). These networks, while powerful, experience high latency due to the number of operations they perform, which can be problematic in real-time applications. Many image classification models are designed to work with both RGB and grayscale datasets, but classifiers that operate solely on grayscale images are less common. Grayscale image classification has critical applications in fields such as medical imaging and SAR ATR. In response, we present a novel grayscale image classification approach using a vectorized view of images. By leveraging the lightweight nature of Multi-Layer Perceptrons (MLPs), we treat images as vectors, simplifying the problem to grayscale image classification. Our approach incorporates a single graph convolutional layer in a batch-wise manner, enhancing accuracy and reducing performance variance. Additionally, we develop a customized accelerator on FPGA for our model, incorporating several optimizations to improve performance. Experimental results on benchmark grayscale image datasets demonstrate the effectiveness of our approach, achieving significantly lower latency (up to 16× less on MSTAR) and competitive or superior performance compared to state-of-the-art models for SAR ATR and medical image classification. Index Terms — GCN, grayscale, MLP, low-latency 1. INTRODUCTION As the demand and popularity of real-time systems increase, low-latency machine learning has become increasingly important. 
With more and more consumers interacting with machine learning models through the cloud, the speed at which those models can deliver results is critical. Consumers expect fast and accurate results; any latency can lead to a poor user experience. Moreover, low-latency machine learning is essential in real-time applications, such as autonomous vehicles or stock market trading, where decisions must be made quickly and accurately. In these scenarios, delays caused by high latency can result in severe consequences and even cause inaccurate downstream calculations [1]. A particular instance where low-latency machine learning is needed is grayscale image classification for SAR ATR. For example, a targeting system on a satellite is costly, and decisions must be made using SAR efficiently and accurately. Examples like this are where low-latency grayscale image classification comes into play. It is often the case that image classifiers work on RGB datasets and grayscale image datasets, but seldom do modern image classifiers focus solely on the grayscale setting. RGB models are overkill for the grayscale setting, as the grayscale problem allows us to focus on a single channel. Models focusing on grayscale image classification are naturally more efficient, as they can concentrate on a single channel rather than three. Thus, many image classifiers that generalize to grayscale image classification are not truly optimized for the grayscale case. For these reasons, we present a lightweight grayscale image classifier capable of achieving up to 16× lower latency than other state-of-the-art machine learning models on the MSTAR dataset. From a trustworthy visual data processing perspective, the demand for grayscale image classification requires data to be collected from various domains with high resolution and correctness so that we can train a robust machine learning model. 
Additionally, recent advancements in machine learning rely on convolutional neural networks, which often suffer from high computation costs, large memory requirements, and many computations needed, resulting in poor inference latency, poor scalability, and weak trustworthiness. The inherent novelties of our model are as follows: Our proposed method is the first to vectorize an image in a fully connected manner and input the resultant into a single-layer graph convolutional network (GCN). We also find that a single GCN layer is enough to stabilize the performance of our shallow model. Additionally, our proposed method benefits from a batch-wise attention term, allowing our shallow model to capture interdependencies between images and form connections for classification. Finally, by focusing on grayscale imagery, we can focus on a streamlined method for grayscale image classification rather than concentrating on the RGB setting. A result of these novelties is extremely low latency and high throughput for SAR ATR and medical image classification. • We present a lightweight, graph-based neural network for grayscale image classification. Specifically, we (1) apply image vectorization, (2) construct a graph for each batch of images and apply a single graph convolution, and (3) propose a weighted-sum mechanism to capture batch-wise dependencies. • We implement our proposed method on FPGA, including the following design methodology: (1) a portable and parameterized hardware template using high-level synthesis, (2) layer-by-layer design to maximize runtime hardware resource utilization, and (3) a one-time data load strategy to reduce external memory accesses. • Experiments show that our model achieves competitive or leading accuracy with respect to other popular state-of-the-art models while vastly reducing latency and model complexity for SAR ATR and medical image classification. 
• We implement our model on a state-of-the-art FPGA board, Xilinx Alveo U200. Compared with the state-of-the-art GPU implementation, our FPGA implementation achieves comparable latency and throughput with only 1/41 of the peak performance and 1/10 of the memory bandwidth. 2. PROBLEM DEFINITION The problem is to design a lightweight system capable of handling high volumes of data with low latency. The solution should be optimized for performance and scalability while minimizing resource utilization, a necessary component of many real-time machine learning applications. The system should be able to process and respond to requests quickly, with minimal delays. High throughput and low latency are critical requirements for this system, which must handle many concurrent requests without compromising performance. We define latency and throughput in the following ways: Throughput = Total number of images processed / Total inference time; Latency = Total time for a single inference. Latency refers to the total time (from start to finish) it takes to gather predictions for a model in one batch. A lightweight machine learning model aims to maximize throughput and accuracy while minimizing latency. 3. RELATED WORK 3.1. MLP Approaches Our model combines various components of simple models and is inherently different from current works in low-latency image classification. Some recent architectures involve simple MLP-based models. Touvron et al. introduced ResMLP [2], an image classifier based solely on MLPs. ResMLP is trained in a self-supervised manner with patches interacting together. Touvron et al. highlight their model's high throughput properties and accuracy. ResMLP uses patches from the image and alternates linear layers where patches interact and a two-layer feed-forward network where channels interact independently per patch. 
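The two metrics defined above can be measured with simple wall-clock timing. This is a minimal sketch, not code from the GECCO repository; `model` and `batches` are placeholder names for any callable and any iterable of input batches.

```python
import time

def measure(model, batches):
    """Return (throughput in imgs/ms, mean per-batch latency in ms).

    Throughput = total images processed / total inference time;
    latency = time for a single (batched) inference, as in the paper.
    """
    n_images, latencies = 0, []
    for x in batches:
        t0 = time.perf_counter()
        model(x)                                      # one batched inference
        latencies.append((time.perf_counter() - t0) * 1e3)  # ms
        n_images += len(x)
    throughput = n_images / sum(latencies)            # images per ms
    mean_latency = sum(latencies) / len(latencies)    # ms per batch
    return throughput, mean_latency
```

In practice one would add warm-up iterations and (for GPU models) device synchronization before reading the clock; those details are omitted here.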
Additionally, MLP-Mixer [3] uses a similar patching method, which also attains competitive accuracy on RGB image datasets compared to other CNNs and transformer models. Our proposed method uses the results from a single-layer MLP to feed into a graph neural network, during which we skip the information from the three-channel RGB setting and only consider the single-channel grayscale problem. This is inherently different from the methods mentioned earlier, as they use patching approaches while we focus on the vectorization of pixels. 3.2. Graph Image Construction Methods The dense graph mapping that utilizes each pixel as a node in a graph is used and mentioned by [4, 5]. For this paper, we employ the same terminology. Additionally, Zhang et al. presented a novel graph neural network architecture and examined its low-latency properties on the MSTAR dataset using the dense graph [6]. Our proposed method differs from dense graph methods, as we vectorize an image rather than using the entire grid as a graph. Han et al. [7] form a graph from the image by splitting the image into patches, much like a transformer. A deep graph neural network learns on the patches similarly to a transformer but in a graph structure. Our structure does not form a graph where each patch is a node in a graph; instead, we create a graph from the resultant of a vectorized image passed through a fully connected layer. Mondal et al. proposed applying graph neural networks on a minibatch of images [8]. Mondal et al. claim that this method improves performance for adversarial attacks. We use the proposed method to stabilize the performance of a highly shallow model. The graph neural network, in this case, allows learning to be conducted in a graph form, connecting images containing similar qualities. Besides the model proposed by Zhang et al., all the methods mentioned focus on the RGB setting. This is overkill for grayscale image classification. 
Focusing on a single channel allows us to develop a more streamlined solution rather than forcing a model to operate on RGB datasets and having the grayscale setting come as an afterthought. Doing so allows us to reduce computational costs. 4. OVERVIEW AND ARCHITECTURE This section describes our model architecture (GECCO: Grayscale Efficient Channelwise Classification Operative). The overall process is summarized in Figure 1. Fig. 1. GECCO Architecture Overall Architecture. Many existing methods do not focus on the latency of their design and its implications. Additionally, the vast majority of image classification models focus on the performance of their work in the RGB setting, rarely citing the performance of datasets in various domains. We address these problems by presenting a novel architecture focused on low latency and the grayscale image setting. Our model vectorizes a batch of images, allowing us to use a fully connected layer (FC) pixel-wise for low computation time rather than relying on convolutional neural networks. We vectorize the input images and input them into a fully connected layer. Then, we use a graph convolutional layer to learn similarities between images batch-wise. We then apply a batch-wise attention term, which is inputted into an FC for classification. Image Vectorization. For each image in a batch, we view the image as a vector. For a tensor X ∈ R^{B×H×W}, where B is the batch size and H and W are the height and width of an image, we flatten the tensor to X1 ∈ R^{B×(H·W)}. Viewing an image as a vector allows our model to skip the traditional convolutional neural network, which views the image as a grid, and cuts computation time. Fully Connected Layer. We input X1 into an FC layer with output dimensionality Dout. Formally, X2 = σ(X1 W1 + b1), where σ is the ReLU function, W1 is a learned weight matrix, b1 is a bias term, and X2 ∈ R^{B×Dout}. 
After the fully connected layer, we apply a dropout layer and the ReLU function to X2, yielding X3, such that the resultant dimensionality of X3 is R^{B×Dout}. (We make our code publicly available at https://github.com/GECCOProject/GECCO.) Graph Construction. We construct a graph batch-wise from X3. This means that for each batch, each vectorized image is a node in the graph with feature size R^{Dout}, and each image is connected to every other image in a batch. Formally, we calculate the adjacency matrix A as A_ij = 1, which connects all nodes. Graph Convolution. Our single graph convolutional layer learns from similar features of images within its mini-batch. Generally, a graph convolutional layer updates the representations of nodes by aggregating each node's neighbors' representations. We can write a graph convolutional layer as h′_i = f_θ(h_i, AGGREGATE({h_j | j ∈ N_i})). In our case, the input for each node h_i is the output from each vector in X3. Applying graph convolution to X3 yields X4. Formally, X4 = σ(A X3 W2), where W2 is a learned weight matrix and σ represents the sigmoid function. After the graph convolution, we apply batch normalization and max-pooling operations to X4, resulting in a dimensionality of R^{B×⌊Dout/2⌋}. Batch-wise Attention, Residual Connections, & Output. We propose a batch-wise attention term defined as X5 = (σ(X4 X4ᵀ) / Σ_{i=1}^{B} σ(X4 X4ᵀ)_i) X4, where σ is the sigmoid function. This term allows the model to capture similar features from each image to another batch-wise. The residual connection is defined as X6 = X5 + X4. The residual term makes the learning process easier and more stable. By multiplying a softmax-like term with the output of the previous graph convolution, we weigh the correspondence of each image compared to other similar images batch-wise. We then feed the residual term into an FC inputted into the softmax function for classification results. 
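The forward pass described above can be sketched in NumPy. This is a minimal inference-time sketch under stated assumptions: weight initialisation, the omission of dropout and batch normalization, the max-pooling layout, and the normalisation axis of the attention term are all guesses, not taken from the GECCO codebase; the shape names follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(z, 0.0)

B, H, W, D = 8, 16, 16, 86              # batch, image dims, feature length Dout
X  = rng.normal(size=(B, H, W))          # a batch of grayscale images
W1 = rng.normal(scale=0.1, size=(H * W, D)); b1 = np.zeros(D)
W2 = rng.normal(scale=0.1, size=(D, D))

X1 = X.reshape(B, H * W)                 # image vectorization
X3 = relu(X1 @ W1 + b1)                  # FC layer (dropout omitted at inference)
A  = np.ones((B, B))                     # fully connected batch graph, A_ij = 1
X4 = sigmoid(A @ X3 @ W2)                # single graph convolution
X4 = X4.reshape(B, D // 2, 2).max(axis=2)  # max-pooling halves the feature dim

S  = sigmoid(X4 @ X4.T)                  # batch-wise attention weights
X5 = (S / S.sum(axis=0)) @ X4            # normalised over the batch (axis assumed)
X6 = X5 + X4                             # residual connection
assert X6.shape == (B, D // 2)           # ready for the final FC + softmax
```

With D = 86 the pooled feature size is ⌊Dout/2⌋ = 43, matching the dimensionality stated in the text.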
Model Structure Discussion We justify our model's design choices by considering the following theoretical aspects. 1. The batch-wise attention term allows the model to further capture similar features from each image to another batch-wise. Relating similar properties from images to each other boosts accuracy in our case of a shallow model. Additionally, our batch-wise attention term is similar in spirit to the mechanism proposed by [9], which allows the model to capture long-range dependencies across the entire image. 2. The batch-size hyperparameter is crucial in our model. A larger batch size allows the model to capture more dependencies across images, which is crucial for understanding complex image patterns. We refer to the work of [10] for a detailed analysis of the impact of batch size on the performance of GNNs. 3. If the batch size for a given dataset is 1, the model eliminates the graph construction phase, making the term X3 fed directly into the FC and softmax for classification. 4. The residual connection term makes the learning process easier and more stable. We refer to [11] for a more detailed analysis of the impact of residual connections on shallow models. 5. EXPERIMENTS 5.1. Datasets Datasets from several domains are examined to gauge the effectiveness of GECCO in diverse settings. We use the SAR ATR dataset, MSTAR, and a medical imaging dataset, CXR [12]. • MSTAR is a SAR ATR dataset with a training size of 2747 and testing size of 2425 SAR images of 10 different vehicle categories. We resize each image in the dataset to (128, 128) pixels. • CXR is a chest X-ray dataset containing 5863 X-ray images and 2 categories (Pneumonia/Normal). The images are (224, 224) pixels. The training size is 5216, and the testing size is 624. Our goal is to create a real-time system. 
That is, we wish to minimize the inference latency and maximize the throughput of our model while maintaining leading or competitive accuracy on its respective dataset. In the following sections, we measure the inference latency and throughput, as described in section 2. 5.2. Results 5.2.1. Backbone For Table 1, we choose ResConv as the backbone of our model because it has the most desirable characteristics for applying a graph convolutional layer. Table 1. Performance of GCN Layers on MSTAR Convolutional Layer Top-1 Accuracy Throughput (imgs/ms) Latency (ms) GCN [13] 98.89% 50.04±6.85 5.86±0.98 TAGConv [14] 99.05% 47.87±7.11 6.24±1.32 SAGEConv [15] 99.08% 51.77±8.19 5.95±0.87 ChebConv [16] 98.56% 45.37±5.99 6.83±1.27 ResConv [17] 99.29% 52.98±9.04 5.22±1.03 We use the following hyperparameters listed in Table 2 for our experiments. Table 2. Feature Lengths, Optimizer, and Batch Size for Each Dataset Dataset Feature Length Optimizer Batch Size MSTAR 86 Adam 64 CXR 112 Adam 64 5.2.2. Experimental Performance Experimental performance includes the top-1 accuracy, inference throughput, and inference latency. We perform our inference batch-wise as a means to reduce latency. These metrics vary across each dataset. We summarize our findings in Tables 3 and 4. We report the best-performing accuracy, average throughputs, and latencies with their standard deviations. Our model outperforms every other model in terms of throughput and latency across all datasets, leads accuracy on the MSTAR dataset, and performs competitively in terms of accuracy on all datasets. We perform the remaining experiments on a state-of-the-art NVIDIA RTX A5000 GPU. Additionally, we compare our model to the top-performing variants of VGG [18], the variant of the popular ViT [19], the ViT for small-sized datasets (SS-ViT) [20], FastViT [21], Swin Transformer [22], and ResNet [23] models. 
We use the open-source packages PyTorch and HuggingFace for model building and the PyTorch Op-Counter for operation counting. Performing the remaining experiments on the same hardware system is vital in fostering a fair comparison for each model. Table 3. MSTAR Performance Model Top-1 Accuracy Throughput (imgs/ms) Latency (ms) Swin-T 86.04% 1.36±0.10 46.98±3.20 SS-ViT 95.61% 2.29±0.43 27.97±5.26 VGG16 93.13% 1.69±0.33 37.89±7.52 FastViT 91.78% 1.04±0.13 61.44±7.69 ResNet34 98.64% 3.13±0.22 20.48±1.39 GECCO 99.29% 12.26±2.42 5.22±1.03 Table 4. CXR Performance Model Top-1 Accuracy Throughput (imgs/ms) Latency (ms) Swin-T 73.66% 0.27±0.05 236.71±46.09 SS-ViT 71.09% 1.03±0.21 62.35±12.85 VGG16 82.01% 0.76±0.25 84.10±28.43 FastViT 75.46% 1.06±0.14 60.30±14.24 ResNet34 78.31% 0.60±0.11 105.84±19.39 GECCO 77.57% 2.63±0.55 24.32±5.08 5.2.3. Model Complexity Metrics Model complexity metrics for this paper include the number of multiply-accumulate operations (MACs), the number of model parameters, the model size, and the number of layers. In other words, suppose accumulator a counts an operation of arbitrary b, c ∈ R. We count the number of multiply-accumulate operations as a ← a + (b × c). Additionally, the layer count metric is an essential factor of latency. Decreasing the number of layers will also improve the latency of a model's inference time. The goal of an effective machine learning model is to maximize throughput while minimizing the number of MACs and the number of layers, in our case. We measure the model complexity of our model against other popular machine learning models that we have chosen in Table 5. Our model outperforms in all categories regarding our chosen model complexity metrics, highlighting its lightweightness. Table 5. 
Model Complexity Metrics Model # MACs # Parameters Model Size (Mb) # Layers Swin-T 2.12×10^10 2.75×10^7 109.9 167 SS-ViT 1.55×10^10 4.85×10^6 19.62 79 VGG16 9.51×10^9 4.69×10^6 18.75 20 FastViT 7.16×10^8 4.02×10^6 16.1 226 ResNet34 4.47×10^9 2.13×10^7 85.1 92 GECCO 5.10×10^4 5.08×10^4 0.19 16 5.2.4. Ablation Study We perform an ablation study to verify that the components of our proposed model contribute positively to the overall accuracy on the MSTAR dataset. Table 6. Ablation Study Mini-batch GNN Weighted Sum Residual Term Accuracy on MSTAR ✓ ✓ 99.29% ✗ ✓ 97.94% ✓ ✗ 88.04% ✗ ✗ 78.64% Additionally, we find that only a single graph convolutional layer is enough to reduce the variance and increase the accuracy of our model. Refer to Figure 2. Fig. 2. Accuracy on MSTAR vs. Number of Graph Convolutional Layers 5.3. Discussion Across multiple datasets, GECCO achieves leading or competitive accuracy compared to other state-of-the-art image classifiers. GECCO outperforms other machine learning models regarding model complexity, highlighting our model's low latency and lightweight properties. It is difficult for our model to generalize to the RGB setting. We attribute this challenge to the vectorization process that our model uses. Learning on three channels poses a complexity challenge, as GECCO is very shallow and simple, thus making it challenging to learn on three separate channels. Additionally, our model is optimized for a low-complexity dataset regime, as datasets like CIFAR and ImageNet are much too complex for our shallow model. Our proposed method does not make use of positional embeddings or class tokens. GECCO can learn essential features using the weighted residual term. Additionally, we tested the addition of positional embeddings and class tokens and found no improvement in accuracy across various datasets. We note that the X5 attention-like term adds positional awareness to the model. 5.4. 
FPGA Implementation We develop an accelerator for the proposed model on a state-of-the-art FPGA, Xilinx Alveo U200 [24], to further highlight the model's efficiency and compatibility with hardware. It has 3 Super Logic Regions (SLRs), 4 DDR memory banks, 1182k Look-up tables, 6840 DSPs, 75.9 Mb of BRAM, and 270 Mb of URAM. The FPGA kernels are developed using the Xilinx High-level Synthesis (HLS) tool to expedite the design process. Our FPGA design incorporates several novel features: (1) Portability of the design: We design a parameterized hardware template using HLS. It is portable to different FPGA platforms, including embedded and data-center FPGAs. We present our hardware mapping algorithm in Algorithm 1. (2) Resource sharing: The model is executed layer-by-layer. Each layer in the model is decomposed into basic kernel functions. The basic kernel functions, including matrix multiplication, elementwise activation, column-wise and row-wise summations, max pooling, and various other elementwise operations, are implemented separately and subsequently invoked within their corresponding layers. Due to the reuse of these fundamental kernel functions across multiple layers, FPGA resources are shared among the different layers, maximizing runtime hardware resource utilization. (3) Single-load strategy: We employ a one-time data load strategy to load the required data from DDR only once. All other data required for the computations are stored in on-chip memory, reducing inference latency. Figure 3 illustrates the overall hardware architecture of our design. We utilize the Vitis tool [25] for hardware synthesis and place-and-route to determine the achieved frequency. The Vitis Analyzer tool is then used to report resource utilization and the number of clock cycles. The latency is calculated by multiplying the achieved clock period by the number of cycles. Table 8 reports the results obtained for the MSTAR dataset. 
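The latency calculation described above (clock period × cycle count) can be cross-checked against the reported numbers. The cycle count below is inferred from the reported latency and frequency, not stated in the paper, and the batch size of 64 is taken from the hyperparameter table.

```python
# Consistency check of the reported FPGA figures.
freq_hz = 200e6                        # achieved frequency, 200 MHz
latency_ms = 5.65                      # single-inference latency (Table 8)
period_ns = 1e9 / freq_hz              # 5 ns clock period
cycles = latency_ms * 1e-3 * freq_hz   # inferred: ~1.13e6 clock cycles

per_slr = 64 / latency_ms              # imgs/ms for one SLR at batch size 64
three_slr = 3 * per_slr                # ~33.98 imgs/ms across 3 SLRs concurrently
```

The three-SLR figure reproduces the 33.98 imgs/ms throughput reported in Table 8, supporting the reading that the throughput number comes from running one batch per SLR in parallel.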
Given the compact design and resource efficiency of the model, it can be accommodated within a single SLR. Hence, we deploy multiple accelerator instances across multiple SLRs, each with one instance. This increases the inference throughput. Table 8 shows the latency obtained for a single inference and the throughput achieved by running the design on 3 SLRs concurrently. Table 7. Comparison with state-of-the-art GPU platform GPU Our Design Platform NVIDIA A5000 Alveo U200 Technology Samsung 8 nm TSMC 16 nm Frequency 1.17 GHz 200 MHz Peak Performance (TFLOPS) 27.7 0.66 On-chip Memory 6 MB 35 MB Memory Bandwidth 768 GB/s 77 GB/s Latency on MSTAR (ms) 5.22 5.65 Throughput on MSTAR (imgs/ms) 12.26 33.98 Algorithm 1 Hardware Mapping Algorithm (See the definition of layer in Section 4) Input: Model f() and the input images; Output: Execution result; 1: for each layer i in f() do 2: if layer i is a fully connected layer then 3: Map to the matrix multiplication unit 4: if layer i is a graph convolution layer then 5: Map to the matrix multiplication unit 6: Map to the activation unit 7: Map to the elementwise operation unit 8: Map to the matrix addition unit 9: if layer i is a batch-wise attention then 10: Map to the matrix multiplication unit 11: Map to the activation unit 12: Map to the elementwise operation unit 13: if layer i is a max pooling layer then 14: Map to the max pooling unit 15: if layer i is an activation layer then 16: Map to the activation unit 17: if layer i is a batch normalization layer then 18: Map to the batch normalization unit We compare our FPGA implementation with the baseline GPU implementation. The GPU baseline is executed on an NVIDIA RTX A5000 GPU, which operates at 1170 MHz and has a memory bandwidth of 768 GB/s. However, the FPGA operates at 200 MHz and has an external memory bandwidth of 77 GB/s. We compare the hardware features of the two platforms in Table 7. 
Although the GPU has higher peak performance (41×) and memory bandwidth (10×), our FPGA implementation achieves a comparable latency of 5.65 ms and an improved throughput of 33.98 imgs/ms. Fig. 3. Overview of Hardware Architecture Table 8. Resource Utilization (per SLR), Latency, and Throughput for MSTAR dataset Latency 5.65 ms Throughput 33.98 imgs/ms BRAMs 956 (22%) URAMs 228 (24%) DSPs 1226 (17%) LUTs 459K (38%) FFs 597K (25%) 6. CONCLUSION AND FUTURE WORK This work introduced a novel architecture combining fully connected and graph convolutional layers, benchmarked on popular grayscale image datasets. The model demonstrated strong performance and low complexity, highlighting the importance of lightweight, low-latency image classifiers for various applications. Its efficacy was shown across SAR ATR and medical image classification, with an FPGA implementation underscoring its hardware friendliness. Key innovations include using a single-layer GCN, which, along with batch-wise attention, enhances accuracy and reduces variance. Future work should explore extending this approach to color image datasets and other domains, optimizing the architecture for even greater efficiency, and further investigating the potential of graph neural networks in shallow models. 7. ACKNOWLEDGEMENT This work is supported by the DEVCOM Army Research Lab (ARL) under grant W911NF2220159. Distribution Statement A: Approved for public release. Distribution is unlimited. 8. REFERENCES [1] Kaoru Ota, Minh Son Dao, Vasileios Mezaris, and Francesco G. B. De Natale, “Deep learning for mobile multimedia: A survey,” ACM Trans. Multimedia Comput. Commun. Appl., vol. 13, no. 3s, jun 2017. [2] Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Gautier Izacard, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, and Hervé Jégou, “ResMLP: Feed-forward networks for image classification with data-efficient training,” 2021. 
[3] Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, and Alexey Dosovitskiy, “MLP-Mixer: An all-MLP architecture for vision,” 2021. [4] Benjamin Sanchez-Lengeling, Emily Reif, Adam Pearce, and Alexander B. Wiltschko, “A gentle introduction to graph neural networks,” Distill, 2021, https://distill.pub/2021/gnn-intro. [5] Naman Goyal and David Steiner, “Graph neural networks for image classification and reinforcement learning using graph representations,” 2022. [6] Bingyi Zhang, Rajgopal Kannan, Viktor Prasanna, and Carl Busart, “Accurate, low-latency, efficient SAR automatic target recognition on FPGA,” in 2022 32nd International Conference on Field-Programmable Logic and Applications (FPL). Aug. 2022, IEEE. [7] Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, and Enhua Wu, “Vision GNN: An image is worth graph of nodes,” in NeurIPS, 2022. [8] Arnab Kumar Mondal, Vineet Jain, and Kaleem Siddiqi, “Mini-batch graphs for robust image classification,” 2021. [9] Qishang Cheng, Hongliang Li, Qingbo Wu, and King Ngi Ngan, “BA2M: A batch aware attention module for image classification,” 2021. [10] Yaochen Hu, Amit Levi, Ishaan Kumar, Yingxue Zhang, and Mark Coates, “On batch-size selection for stochastic training for graph neural networks,” 2021. [11] Shuzhi Yu and Carlo Tomasi, “Identity connections in residual nets improve noise stability,” 2019. [12] Daniel Kermany, “Labeled optical coherence tomography (OCT) and chest X-ray images for classification,” 2018. [13] Thomas N. Kipf and Max Welling, “Semi-supervised classification with graph convolutional networks,” 2017. [14] Jian Du, Shanghang Zhang, Guanhang Wu, Jose M. F. Moura, and Soummya Kar, “Topology adaptive graph convolutional networks,” 2018. [15] William L. Hamilton, Rex Ying, and Jure Leskovec, “Inductive representation learning on large graphs,” 2018. 
[16] Micha ¨el Defferrard, Xavier Bresson, and Pierre Van- dergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” 2017. [17] Xavier Bresson and Thomas Laurent, “Residual gated graph convnets,” 2018. [18] Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recogni- tion,” 2015. [19] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” 2021. [20] Seung Hoon Lee, Seunghyun Lee, and Byung Cheol Song, “Vision transformer for small-size datasets,” CoRR , vol. abs/2112.13492, 2021. [21] Pavan Kumar Anasosalu Vasu, James Gabriel, Jeff Zhu, Oncel Tuzel, and Anurag Ranjan, “Fastvit: A fast hybrid vision transformer using structural reparameterization,” 2023. [22] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo, “Swin transformer: Hierarchical vision transformer us- ing shifted windows,” 2021. [23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2016, pp. 770–778. [24] Xilinx, “Xilinx alveo u200 board,” https://docs. xilinx.com/r/en-US/ds962-u200-u250/ FPGA-Resource-Information . [25] “Vitis HLS,” https://www.xilinx. com/products/design-tools/vitis/ vitis-hls.html . | 4 | 1 | The GECCO model is lightweight with a relatively low number of parameters (approx. 5.08M) and uses simple architecture elements (single GCN layer and MLP). The MSTAR dataset consists of 2747 training samples and 2425 testing samples of 128x128 pixels, and the CXR dataset consists of 5216 training samples and requires less computation due to the grayscale focus. 
Assuming about 50 epochs of training and a batch size of 64, the total number of training iterations required would be manageable compared to larger deep models. The model was implemented on an NVIDIA RTX A5000 GPU, suggesting good performance; thus, I estimate it could be trained effectively on a single GPU in a reasonable timeframe of about 4 hours, well within the 8-hour limit. | yes | Yes | CV | A Single Graph Convolution Is All You Need: Efficient Grayscale Image Classification | 2024-02-01 0:00:00 | https://github.com/geccoproject/gecco | 1 | downloaded by training script | 20 s × 1000 epochs ≈ 5.5 hr | https://drive.google.com/file/d/1b72abDo06zMcoMYcEnbhx-eryDxMP2G0/view?usp=sharing | Yes | Need to make some fixes for Fashion-MNIST; I have included the changes in the Colab file, please follow that |
MNIST | rKAN | [] | rKAN: Rational Kolmogorov-Arnold Networks | 2024-06-20T00:00:00 | https://arxiv.org/abs/2406.14495v1 | [
"https://github.com/alirezaafzalaghaei/rkan"
] | {'Accuracy': '99.293'} | [
"Percentage error",
"Accuracy",
"Trainable Parameters",
"Cross Entropy Loss",
"Epochs",
"Top 1 Accuracy"
] | Given the following paper and codebase:
Paper: rKAN: Rational Kolmogorov-Arnold Networks
Codebase: https://github.com/alirezaafzalaghaei/rkan
Improve the rKAN model on the MNIST dataset. The result
should improve on the following metrics: {'Accuracy': '99.293'}. You must use only the codebase provided.
| rKAN: Rational Kolmogorov-Arnold Networks Alireza Afzal Aghaei Independent Researcher Email: alirezaafzalaghaei@gmail.com June 21, 2024 Abstract The development of Kolmogorov-Arnold networks (KANs) marks a significant shift from traditional multi-layer perceptrons in deep learning. Initially, KANs employed B-spline curves as their primary basis function, but their inherent complexity posed implementation challenges. Consequently, researchers have explored alternative basis functions such as Wavelets, Polynomials, and Fractional functions. In this research, we explore the use of rational functions as a novel basis function for KANs. We propose two different approaches based on Padé approximation and rational Jacobi functions as trainable basis functions, establishing the rational KAN (rKAN). We then evaluate rKAN's performance in various deep learning and physics-informed tasks to demonstrate its practicality and effectiveness in function approximation. Keywords— Rational Functions, Jacobi Polynomials, Kolmogorov-Arnold Networks, Physics-informed Deep Learning 1 Introduction Function approximation is a crucial area of study within numerical analysis and computational mathematics. It involves using simpler functions, known as basis functions, to represent complex ones, thereby simplifying analysis and computation. This process is essential for various applications such as solving differential equations, data fitting, and machine learning [19, 3]. By approximating functions, we can predict outcomes, optimize processes, and identify patterns in data. Various basis functions are employed for function approximation, each with its unique advantages suited to specific problem requirements. These functions serve as the building blocks for approximating more complex functions and can significantly influence the accuracy and efficiency of the approximation. 
In numerical analysis, one common method is polynomial curve fitting, where the basis functions are polynomials. These functions are simple and easy to compute but can suffer from instability issues, especially with higher-degree polynomials. This phenomenon, known as Runge's phenomenon [11], highlights the limitations of polynomial basis functions for certain types of data. Spline interpolation is another widely used method, particularly effective for functions with intricate shapes. Splines are piecewise polynomials that ensure smoothness at the points where the polynomial pieces connect, called knots. This method offers great flexibility and smoothness, making it ideal for applications requiring a high degree of accuracy in the approximation of curves and surfaces. Fourier series, which use trigonometric functions as basis functions, are particularly effective for approximating periodic functions. The Fourier basis functions, consisting of sines and cosines, can represent periodic behavior accurately and are widely used in signal processing, image analysis, and other fields requiring periodic function analysis [10]. Wavelets are another class of basis functions that have gained popularity, especially in signal and image processing. Wavelets enable multi-resolution analysis, providing a method to analyze data at various levels of detail. This is particularly beneficial for applications involving hierarchical or time-frequency analysis [12]. Fractional basis functions are another type of functions that can capture intricate behaviors and subtle variations in data that integer-order methods might miss. This makes them particularly useful in modeling natural phenomena [5]. They are able to provide smooth approximations with fewer terms compared to integer-order polynomials, resulting in more efficient computations and a reduced risk of overfitting. 
Fractional B-splines, for example, offer greater flexibility in controlling the smoothness and continuity of the approximating function, making them ideal for applications requiring high precision and adaptability [33]. Rational approximations, where basis functions are ratios of polynomials, provide a robust method for approximating functions with asymptotic behavior and singularities. These basis functions are particularly useful for functions with sharp peaks or rapid changes. Padé approximants, a specific type of rational approximation, are known for their ability to achieve accurate approximations over a broad range of values. This method is particularly effective in areas such as control theory and complex analysis, where precise function approximation is critical [5, 10, 6]. Most of these basis functions have been developed for machine learning and deep learning tasks. In these fields, algorithms aim to approximate a function through a potentially nested combination of basis functions that best fit the given data. Examples include support vector machines with polynomial [3] or fractional kernels [39], least-squares support vector machines with an orthogonal rational kernel [5], and neural networks with various activation functions such as orthogonal Legendre [36, 2], Fourier [46, 38], fractional [21], and rational functions [9, 47, 26, 50]. Additionally, B-spline neural networks have been explored for their modeling capabilities [23, 41, 14, 8]. In some scenarios, B-spline neural networks can be regarded as Kolmogorov-Arnold neural networks. Kolmogorov-Arnold Networks, based on the Kolmogorov-Arnold representation theorem [31, 25, 40, 30], offer a novel approach to accurately fitting real-world data. These networks have been applied to various domains, including time-series analysis [20, 52, 55], human activity recognition [29], seizure detection [22], electrohydrodynamic pumps [37], and cognitive diagnosis [56]. 
Initially, these networks were developed using B-spline curves [31]. However, due to implementation challenges and issues with smoothness, alternative basis functions have been explored. These alternatives include Wavelet KANs [12, 44], Fourier KANs [54], radial basis function KANs [28, 49], polynomial KANs [43, 48], and fractional KANs [1]. While some attempts have applied rational functions to traditional neural networks [9, 47, 26, 50], there has been no research on the applicability of rational functions in KANs. In this paper, we examine the accuracy of KANs using rational functions through two different approaches: 1) Padé approximation and 2) a mapped version of Jacobi polynomials. The first approach uses the original Jacobi polynomials to construct a rational approximation in the Padé scheme for describing the data. The second approach maps the original Jacobi polynomial, defined on a finite interval, into a possibly infinite domain with a rational mapping. We demonstrate in both cases how the fractional KAN [1] approach can be utilized as a generalized case in our methodology. Finally, we compare the results of these two approaches with KANs and other alternatives in deep learning tasks such as regression and classification. We also assess the accuracy of this approach in physics-informed deep learning tasks, particularly by approximating the solution of certain differential equations. The rest of the paper is organized as follows. In Section 2, we review some preliminaries on Jacobi polynomials and their properties. Section 3 explains the KAN formulations and the proposed methodology. In Section 4, we validate the proposed method on several real-world problems. Finally, in Section 5, we present some concluding remarks. 2 Jacobi polynomials Jacobi polynomials (denoted by $J_n^{(\alpha,\beta)}(\xi)$) are an infinite sequence of orthogonal functions that are mutually orthogonal to each other [39]. 
Mathematically, the following inner product will be zero for $n \neq m$: $$\langle J_m^{(\alpha,\beta)}, J_n^{(\alpha,\beta)} \rangle_{\omega(\xi)} = \int_{-1}^{1} J_m^{(\alpha,\beta)}(\xi)\, J_n^{(\alpha,\beta)}(\xi)\, \omega(\xi)\, d\xi = \langle J_n^{(\alpha,\beta)}, J_n^{(\alpha,\beta)} \rangle_{\omega(\xi)}\, \delta_{m,n}.$$ For $n = 0, 1, \ldots$, these polynomials are defined by the Gamma function: $$J_n^{(\alpha,\beta)}(\xi) = \frac{\Gamma(\alpha+n+1)}{n!\, \Gamma(\alpha+\beta+n+1)} \sum_{m=0}^{n} \binom{n}{m} \frac{\Gamma(\alpha+\beta+n+m+1)}{\Gamma(\alpha+m+1)} \left( \frac{\xi-1}{2} \right)^{m}.$$ In this definition, $\alpha, \beta > -1$ play the role of hyperparameters that affect the shape of the resulting function. We can treat these parameters as unknown weights in the computational graph and optimize them during the network's optimization process. However, we must ensure the validity of their values. For this purpose, we utilize the well-known ELU activation function [16], which possesses the property $\mathrm{ELU}: \mathbb{R} \to (-\kappa, \infty)$: $$\mathrm{ELU}(\xi; \kappa) = \begin{cases} \xi & \text{if } \xi > 0, \\ \kappa \,(e^{\xi} - 1) & \text{if } \xi \le 0, \end{cases}$$ where $\kappa$ is a parameter that controls the lower bound of the range of the ELU function. Consequently, to ensure meaningful Jacobi functions for parameters $\alpha$ and $\beta$, $\kappa$ can be set to 1. The Jacobi function is traditionally defined on the interval $[-1, 1]$, which limits its use in approximating functions across desired intervals. Consequently, researchers have developed techniques to extend their definition to a potentially infinite domain. These extended functions can be generated by the following definition. Definition 1 (Mapped Jacobi function). By applying an invertible mapping function $\varphi: \Omega \to [-1, 1]$ to the input of Jacobi polynomials, the mapped Jacobi functions can be generated as: $$R_n^{(\alpha,\beta)}(\xi) = J_n^{(\alpha,\beta)}(\varphi(\xi)).$$ The choice of $\varphi(\cdot)$ can vary depending on the original problem domain. For instance, for a finite domain $\Omega = [d_0, d_1]$, the linear mapping $\varphi(\xi; d_0, d_1) = \frac{2\xi - d_0 - d_1}{d_1 - d_0}$ can be employed (Figure 1a). For a semi-infinite domain $\Omega = (0, \infty)$, three major options with the hyperparameter $\iota > 0$ are available: •Logarithmic mapping (Figure 1b): $\phi(\xi; \iota) = 2\tanh\!\left(\frac{\xi}{\iota}\right) - 1$, (1) •Algebraic mapping (Figure 1c): $\phi(\xi; \iota) = \frac{\xi - \iota}{\xi + \iota}$, (2) •Exponential mapping (Figure 1d): $\phi(\xi; \iota) = 1 - 2\exp\!\left(-\frac{\xi}{\iota}\right)$. 
(3) Finally, for the infinite interval $\Omega = (-\infty, \infty)$, one can use a nonlinear mapping with $\iota > 0$ to generate mapped Jacobi functions: •Logarithmic mapping (Figure 1e): $\phi(\xi; \iota) = \tanh\!\left(\frac{\xi}{\iota}\right)$, (4) •Algebraic mapping (Figure 1f): $\phi(\xi; \iota) = \frac{\xi}{\sqrt{\xi^2 + \iota^2}}$. (5) When the domain $\Omega$ is semi-infinite or infinite, it is common to call $R_n^{(\alpha,\beta)}(\xi)$ rational Jacobi functions [10]. As illustrated in Figure 1, these functions are non-zero and possess real-valued distinct roots within their domain. They are differentiable, and their derivatives can be expressed in terms of the functions themselves. For a more detailed discussion of these functions, we refer the reader to [2, 39, 10]. 3 Rational KAN In this section, we introduce two approaches for developing rKANs. To begin, we briefly review the original KANs by stating the Kolmogorov-Arnold theorem. Theorem 3.1. For any continuous function $F: [0,1]^{\nu} \to \mathbb{R}$, there exist continuous functions $\phi_{q,k}: [0,1] \to \mathbb{R}$ and continuous functions $\psi_k: \mathbb{R} \to \mathbb{R}$ such that $$F(\xi_1, \xi_2, \ldots, \xi_{\nu}) = \sum_{k=1}^{2\nu+1} \psi_k\!\left( \sum_{q=1}^{\nu} \phi_{q,k}(\xi_q) \right).$$ Proof. The proof of the Kolmogorov-Arnold representation theorem is highly non-trivial and relies on advanced concepts in functional analysis. For more detailed definitions and proofs, refer to [31, 13, 17, 42]. Employing this theorem, KANs suggest using a nested combination of this approximation for more accurate predictions. In matrix form, KANs are defined as: $$\hat{F}(\xi) = \Phi_{L-1} \circ \cdots \circ \Phi_1 \circ \Phi_0 \circ \xi,$$ where $[\Phi]_{q,k} = \phi_{q,k}(\cdot)$ and $\xi$ is the input sample of the network. To employ a rational basis function in this approach, one can define the functions $\phi_{q,k}(\cdot)$ using a rational function. There are two approaches to generate such a rational function. The first approach is to divide two polynomials, known as the Padé approximation. The second approach is to use a rationalized form of Jacobi functions. In the following, we explain these two approaches. 
Figure 1: Plots of mapped Jacobi functions $J_n^{(0,0)}(\xi)$ for $n = 2, 3, \ldots, 6$ over finite (a: linear), semi-infinite (b: logarithmic, c: algebraic, d: exponential), and infinite (e: logarithmic, f: algebraic) domains. 3.1 Padé approximation The Padé approximation is a method for approximating a function by a rational function of the given order. Specifically, a Padé approximant of order $[q/k]$ for a function $F(\xi)$ is: $$F^{[q/k]}(\xi) \approx \frac{A_q(\xi)}{B_k(\xi)} = \frac{\sum_{i=0}^{q} a_i \xi^i}{\sum_{j=0}^{k} b_j \xi^j},$$ where $a_i, b_j$ are real-valued numbers. The polynomials $A_q(\xi)$ and $B_k(\xi)$ can be chosen as the original or finite-shifted Jacobi polynomials. For rKAN, we consider the functions $\phi_{q,k}$ as: $$\phi_{q,k}(\xi) = \frac{\sum_{i=0}^{k} \theta^e_i R_i^{(\alpha,\beta)}(\xi_q)}{\sum_{i=0}^{p} \theta^d_i R_i^{(\alpha,\beta)}(\xi_q)}.$$ Here $\theta^e_i$ and $\theta^d_i$ are trainable weights and $p$ is a positive integer. Note that the input of the Jacobi polynomial should lie within a specific domain $[d_0, d_1]$; therefore, a bounded-range activation function such as Sigmoid or hyperbolic tangent (namely $\sigma(\cdot)$) should be applied to the input of these functions. Finally, the Padé-rKAN is defined as: $$F(\xi) = \sum_{k=1}^{K} \psi_k\!\left( \sum_{q=1}^{\nu} \phi_{q,k}(\sigma(\xi_q)) \right),$$ in which the functions $\psi_k(\cdot)$ are considered as linear functions. In this formulation, the fractional rational KAN (frKAN) is applicable if we use a linear mapping function on Jacobi polynomials that shifts data to the positive part of the real line. Suppose we use $\varphi(\xi) = 2\xi^{\gamma} - 1$ with the Sigmoid function $\sigma(\cdot)$ and a trainable positive fractional order parameter $\gamma$. The fractional rational basis functions then take the form: $$\phi_{q,k}(\xi) = \frac{\sum_{i=0}^{q} \theta^e_i R_i^{(\alpha,\beta)}(\varphi(\sigma(\xi)))}{\sum_{i=0}^{p} \theta^d_i R_i^{(\alpha,\beta)}(\varphi(\sigma(\xi)))}.$$ 3.2 Rational Jacobi functions Another approach to using a rational function in KANs is to map the Jacobi polynomials using a nonlinear rational mapping. These mappings can be defined on a semi-infinite domain or on the entire real line. 
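Stepping back to the Padé construction of Section 3.1, the basis $\phi_{q,k}$ can be sketched numerically. This illustrative sketch uses Legendre polynomials (the Jacobi case $\alpha = \beta = 0$) for the $R_i$, and adds a small denominator guard against poles; the guard, the names, and the fixed parameters are our assumptions, not the paper's:

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x): the Jacobi case alpha = beta = 0."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def sigmoid(z):
    """Bounded-range activation applied before the polynomials."""
    return 1.0 / (1.0 + math.exp(-z))

def pade_basis(z, num_w, den_w, eps=1e-8):
    """phi(z) = sum_i e_i P_i(s) / sum_j d_j P_j(s), where s = 2*sigmoid(z) - 1
    keeps the polynomial input inside [-1, 1]; eps guards against poles."""
    s = 2.0 * sigmoid(z) - 1.0
    num = sum(w * legendre(i, s) for i, w in enumerate(num_w))
    den = sum(w * legendre(j, s) for j, w in enumerate(den_w))
    return num / (den + eps)
```

In a trained Padé-rKAN the weight lists `num_w` and `den_w` would be learned parameters of each layer; here they are passed in explicitly to keep the sketch stateless.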
Since the output of a network layer is unbounded, using an infinite mapping such as (5) or (4) can be beneficial. As a result, the basis functions $\phi_{q,k}(\cdot)$ are defined as: $$\phi_{q,k}(\xi_q) = J_k^{(\alpha,\beta)}(\phi(\xi_q; \mathrm{SoftPlus}(\iota))),$$ where the SoftPlus function is defined as: $$\mathrm{SoftPlus}: \mathbb{R} \to (0, \infty), \quad \mathrm{SoftPlus}(\iota) = \log(1 + \exp(\iota)),$$ and is applied to the trainable parameter $\iota$ to ensure its positiveness. Similar to the Padé-rKAN, the approximation of Jacobi-rKAN takes the form: $$F(\xi) = \sum_{k=1}^{K} \psi_k\!\left( \sum_{q=1}^{\nu} \phi_{q,k}(\sigma(\xi_q)) \right).$$ In this case, to apply fractional basis functions, it is necessary to use a positive-range function $\sigma$ along with a rational mapping that is defined for positive values. Suitable rational mappings include those given by formulas such as (2) (algebraic mapping), (3) (exponential mapping), and (1) (logarithmic mapping). The other definitions within the framework remain unchanged. 4 Experiments In this section, we evaluate the proposed rational KAN on various deep learning tasks. All experiments are implemented in Python using the PyTorch and TensorFlow libraries. The experiments are conducted on a PC equipped with an Intel Core i3-10100 CPU, an Nvidia GeForce GTX 1650 GPU, and 16GB of RAM. The implementation of this approach is publicly available on GitHub (https://github.com/alirezaafzalaghaei/rKAN). 4.1 Deep learning This section presents classification and regression tasks simulated using rKAN. 4.1.1 Regression Tasks We begin the assessment of rKAN with a regression task using synthetic data generated from three different functions with asymptotic behavior. These functions are defined as follows and are illustrated in Figure 2: $$F_1(\xi) = \frac{\xi}{1 + \xi^2}, \quad F_2(\xi) = \frac{1}{1 + \xi^2}, \quad F_3(\xi) = \exp(-\xi^2).$$ For training, we sample 200 random points and for testing, 100 random points within the interval $[-10, 10]$. We use a neural network with an architecture of [1, 10, 1], where the hidden layer contains 10 neurons. The network is optimized using the L-BFGS optimizer with full batch processing for 50 epochs. 
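The synthetic regression setup above can be reproduced in a few lines. The paper only says "random points", so the uniform sampling and the seed below are our assumptions:

```python
import math
import random

def make_regression_data(fn, n_train=200, n_test=100, lo=-10.0, hi=10.0, seed=0):
    """Sample the synthetic regression sets of Section 4.1.1: random points
    in [lo, hi] with noiseless targets fn(x)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_train + n_test)]
    data = [(x, fn(x)) for x in xs]
    return data[:n_train], data[n_train:]

# The three target functions with asymptotic behavior from the paper.
F1 = lambda x: x / (1.0 + x * x)
F2 = lambda x: 1.0 / (1.0 + x * x)
F3 = lambda x: math.exp(-x * x)
```

A [1, 10, 1] rKAN would then be fit to the `train` pairs with L-BFGS in full-batch mode, as described in the text.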
We use the mean squared error (MSE) to evaluate both the training and testing accuracy. The MSE results for the test data are presented in Tables 1, 2, and 3. In all tables, we have employed (5) as the rational mapping of Jacobi functions. Figure 2: The plots of the functions $F_1(\xi)$, $F_2(\xi)$, and $F_3(\xi)$ over $[-10, 10]$. The prediction results of rKAN, fKAN, and KAN are presented in Tables 1, 2, and 3.

Table 1: The MSE between the predicted values and the exact values for $F_1(\xi)$.
Model            K=2         K=3         K=4         K=5         K=6
fKAN             3.330×10^−7 7.454×10^−7 6.100×10^−7 3.339×10^−6 6.967×10^−6
Jacobi-rKAN      4.223×10^−7 4.289×10^−7 3.273×10^−6 4.861×10^−5 1.132×10^−4
Padé[K/3]-rKAN   2.616×10^−7 2.778×10^−2 1.634×10^−7 7.760×10^−7 2.142×10^−6
Padé[K/4]-rKAN   2.951×10^−5 1.075×10^−6 1.109×10^−6 9.297×10^−7 1.722×10^−5
Padé[K/5]-rKAN   4.334×10^−3 3.930×10^−3 1.042×10^−4 1.140×10^−3 9.587×10^−4
Padé[K/6]-rKAN   -           -           7.672×10^−4 -           -
Tanh 2.711×10^−7; ReLU 5.143×10^−4; KAN 2.240×10^−2

4.1.2 MNIST classification Recent studies have explored the application of KANs in various image processing tasks, such as image classification [4, 44, 1, 15], image denoising [1], and image segmentation [27]. In this section, we focus on the classification task using the MNIST dataset, which includes 60,000 training images of handwritten digits and 10,000 test images, each with a size of 28×28×1. We designed a 2-dimensional convolutional neural network for this task, as illustrated in Figure 3. The network was trained using the Adam optimizer with the default learning rate in Keras, a batch size of 512, and 30 epochs. The validation loss and accuracy during training are depicted in Figure 4. Additionally, the performance metrics on the test set, including accuracy and loss, are presented in Table 4, which also compares our results with those obtained using fractional KAN [1] and common activation functions like hyperbolic tangent and ReLU. 
Figure 3: The architecture of the proposed method for MNIST classification.

Table 2: The MSE between the predicted values and the exact values for $F_2(\xi)$.
Model            K=2         K=3         K=4         K=5         K=6
fKAN             5.406×10^−7 4.629×10^−7 2.170×10^−6 3.699×10^−6 4.684×10^−6
Jacobi-rKAN(K)   1.889×10^−7 8.220×10^−7 1.446×10^−6 9.917×10^−6 7.252×10^−5
Padé[K/3]-rKAN   2.560×10^−7 3.752×10^−7 7.343×10^−7 7.138×10^−7 2.728×10^−6
Padé[K/4]-rKAN   9.726×10^−4 6.778×10^−7 1.521×10^−6 1.727×10^−6 6.301×10^−4
Padé[K/5]-rKAN   4.420×10^−3 3.629×10^−3 6.551×10^−4 1.813×10^−3 2.964×10^−4
Padé[K/6]-rKAN   -           -           3.364×10^−4 1.279×10^−6 4.787×10^−2
Tanh 2.963×10^−7; ReLU 3.402×10^−5; KAN 1.520×10^−2

Table 3: The MSE between the predicted values and the exact values for $F_3(\xi)$.
Model            K=2         K=3         K=4         K=5         K=6
fKAN             4.320×10^−7 4.940×10^−7 7.540×10^−7 4.100×10^−6 1.420×10^−5
Jacobi-rKAN(K)   5.330×10^−7 5.350×10^−7 3.540×10^−6 1.730×10^−5 2.760×10^−4
Padé[K/3]-rKAN   2.590×10^−7 1.100×10^−2 5.670×10^−7 7.750×10^−7 2.680×10^−6
Padé[K/4]-rKAN   1.460×10^−6 5.250×10^−7 6.990×10^−7 9.890×10^−2 4.550×10^−5
Padé[K/5]-rKAN   4.070×10^−3 4.420×10^−3 4.340×10^−6 1.290×10^−5 1.240×10^−3
Padé[K/6]-rKAN   4.420×10^−2 9.670×10^−4 3.790×10^−6 -           1.750×10^−5
Tanh 1.490×10^−6; ReLU 4.750×10^−5; KAN 1.890×10^−2

Table 4: Performance of different activation functions in a CNN for classifying the MNIST dataset. It is observed that rKAN outperforms fKAN in certain cases.
Act. Func.    Loss (Mean)  Loss (Std.)  Accuracy (Mean)  Accuracy (Std.)
Sigmoid       0.0611       0.0028       98.092           0.0937
Tanh          0.0322       0.0015       98.904           0.0695
ReLU          0.0256       0.0010       99.140           0.0434
fKAN(2) [1]   0.0252       0.0017       99.134           0.0484
fKAN(3) [1]   0.0224       0.0019       99.200           0.0787
fKAN(4) [1]   0.0217       0.0008       99.228           0.0515
fKAN(5) [1]   0.0249       0.0009       99.204           0.0467
fKAN(6) [1]   0.0290       0.0028       99.024           0.1198
rKAN(2)       0.0215       0.0012       99.268           0.0683
rKAN(3)       0.0222       0.0012       99.210           0.0464
rKAN(4)       0.0292       0.0006       99.060           0.0332
rKAN(5)       0.0213       0.0004       99.293           0.0597
rKAN(6)       0.0214       0.0027       99.218           0.0944

Figure 4: Loss and accuracy of MNIST classification using Jacobi-rKAN with different values of K.

4.2 Physics-informed Deep Learning Physics-informed deep learning tasks often involve mathematical problems augmented with real-world data, providing researchers with more precise insights into both the data and the governing equations. KANs have been developed to address these challenges [1, 31, 35, 53, 45]. In these networks, the loss function is defined to enable the network to approximate the dynamics of physical problems. For example, for a differential equation in the operator form $\mathcal{L}(F) = 0$ with initial condition $F(0) = F_0$, the network loss is defined as the mean squared residual [18]: $$\mathrm{Loss}(\xi) = \frac{1}{|\xi|} \sum_{i=1}^{|\xi|} \mathcal{L}(F)(\xi_i)^2 + |\hat{F}(0) - F_0|^2,$$ where $\xi$ represents the training data in the domain of the problem. In this section, we evaluate two examples of data-driven solutions to differential equations using rKAN. 4.2.1 Ordinary Differential Equations For this task, we will focus on the Lane-Emden equation, a well-known ordinary differential equation. This equation represents a dimensionless form of Poisson's equation, which describes the gravitational potential of a Newtonian, self-gravitating, spherically symmetric, polytropic fluid. This equation, for a positive integer $w$, is defined as follows: $$\frac{d^2}{d\xi^2} F(\xi) + \frac{2}{\xi} \frac{d}{d\xi} F(\xi) + F^{w}(\xi) = 0, \quad F(0) = 1, \quad F'(0) = 0. $$
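As a concrete illustration of the mean-squared-residual loss for the Lane-Emden problem, the sketch below evaluates the operator with central finite differences. In practice automatic differentiation would supply the derivatives; the finite-difference step, the helper names, and the sampling of collocation points are our assumptions:

```python
import math

def lane_emden_residual(F, xi, w, h=1e-4):
    """Finite-difference residual of F'' + (2/xi) F' + F**w at a point xi > 0."""
    d1 = (F(xi + h) - F(xi - h)) / (2.0 * h)          # central first derivative
    d2 = (F(xi + h) - 2.0 * F(xi) + F(xi - h)) / h**2  # central second derivative
    return d2 + (2.0 / xi) * d1 + F(xi) ** w

def pinn_loss(F, points, w, F0=1.0):
    """Mean squared residual over the collocation points plus the
    initial-condition penalty |F(0) - F0|^2."""
    mse = sum(lane_emden_residual(F, x, w) ** 2 for x in points) / len(points)
    return mse + (F(0.0) - F0) ** 2
```

As a sanity check, the exact $w = 1$ solution $F(\xi) = \sin(\xi)/\xi$ drives this loss essentially to zero, which is what an rKAN trained on this objective should approach.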
To simulate this problem, we use the rKAN architecture with Padé and Jacobi rational mapping (5). For a fair comparison, we adopt a network architecture similar to that of fKAN [1], but replace the fKAN layers with rKAN layers. This network incorporates six different Jacobi basis functions (i.e., $K = 6$ in our rKAN architecture) and is optimized using the L-BFGS algorithm with 1500 equidistant points in the domain $[0, 15]$. The first roots of the predicted solution to the differential equation hold significant physical meaning. Therefore, in Table 5, we compare our results with those obtained from fractional KAN [1] and the Grammatical Evolution Physics-Informed Neural Network (GEPINN) [34].

Table 5: Comparison of the first roots of the predicted solution with the exact roots from [24] and the approximated results from similar neural network approaches [34, 1].
w   Jacobi-rKAN  Padé[q/6]-rKAN  fKAN [1]    GEPINN [34]
0   5.15×10^−6   4.86×10^−6      3.52×10^−5  1.40×10^−7
1   7.12×10^−5   8.67×10^−6      8.67×10^−6  4.83×10^−3
2   2.88×10^−5   5.09×10^−5      9.34×10^−6  8.93×10^−3
3   2.40×10^−4   1.06×10^−5      5.55×10^−7  1.88×10^−2
4   1.57×10^−3   2.82×10^−2      2.97×10^−4  5.08×10^−2

4.2.2 Partial Differential Equations For a more challenging task, we select an elliptic partial differential equation (PDE) defined as follows: $$\frac{\partial^2}{\partial \xi_1^2} F(\xi_1, \xi_2) + \frac{\partial^2}{\partial \xi_2^2} F(\xi_1, \xi_2) = \sin(\pi \xi_1) \sin(\pi \xi_2),$$ $$F(\xi_1, 0) = 0, \quad F(\xi_1, 1) = 0, \quad F(0, \xi_2) = 0, \quad F(1, \xi_2) = 0. \tag{6}$$ The exact solution to this PDE is given by [32]: $$F(\xi_1, \xi_2) = -\frac{1}{2\pi^2} \sin(\pi \xi_1) \sin(\pi \xi_2).$$ We simulate the solution of this PDE using a simple rKAN with the architecture [1, 10, 10, 1] and Jacobi-rKAN basis functions of order 4 using 50×50 datapoints in $[0, 1]^2$. The simulation results for this problem are shown in Figure 5. 
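The closed-form solution above is easy to verify numerically, which is also a useful template for computing the residual plotted for the trained network. The finite-difference check below is our illustration (names and step size are assumptions):

```python
import math

def exact(x1, x2):
    """Exact solution of the elliptic PDE (6): -sin(pi x1) sin(pi x2) / (2 pi^2)."""
    return -math.sin(math.pi * x1) * math.sin(math.pi * x2) / (2.0 * math.pi ** 2)

def pde_residual(F, x1, x2, h=1e-4):
    """Finite-difference residual F_x1x1 + F_x2x2 - sin(pi x1) sin(pi x2);
    near zero when F satisfies the PDE."""
    lap = (F(x1 + h, x2) - 2.0 * F(x1, x2) + F(x1 - h, x2)) / h ** 2 \
        + (F(x1, x2 + h) - 2.0 * F(x1, x2) + F(x1, x2 - h)) / h ** 2
    return lap - math.sin(math.pi * x1) * math.sin(math.pi * x2)
```

Replacing `exact` with the trained rKAN's forward pass would reproduce the residual surface shown in Figure 5b.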
Figure 5: The predicted solution (a) and the residual function (b) with respect to the exact solution for the elliptic PDE given in Equation (6). 5 Conclusion In this paper, we have introduced a new perspective on Kolmogorov-Arnold networks utilizing rational basis functions. Rational functions, a type of basis function in numerical approximation, enhance prediction accuracy, particularly in scenarios involving asymptotic behavior and singularities. We proposed two types of rational KANs based on the Padé approximation and rational Jacobi functions. The first architecture employs the division of two polynomials, specifically shifted Jacobi functions, while the second approach maps Jacobi functions directly into a rational space. In both models, the basis function hyperparameters $\alpha$, $\beta$, and $\iota$ (for rational Jacobi functions) are optimized as network weights. We also demonstrated that our method can be integrated with fractional KANs in certain contexts. We validated the effectiveness of the proposed method through simulations on real-world examples, including a regression task, a classification task, and numerical approximations for solving the Lane-Emden ordinary differential equation and an elliptic partial differential equation. The results indicate that our method can sometimes achieve greater accuracy compared to existing alternatives. However, our experiments showed that the Padé-rKAN increases the time complexity of training, as it involves the computation of two weighted polynomials. For future work, we suggest exploring the use of rational versions of B-spline curves [51, 7], which are renowned for their flexible shape representation capabilities. 
Furthermore, a focused evaluation of fractional rational KANs is warranted, particularly for solving physics-informed problems defined on semi-infinite domains [5, 2]. References [1] Alireza Afzal Aghaei. "fKAN: Fractional Kolmogorov-Arnold Networks with trainable Jacobi basis functions". In: arXiv preprint arXiv:2406.07456 (2024). [2] Alireza Afzal Aghaei et al. "Solving Falkner-Skan type equations via Legendre and Chebyshev neural blocks". In: arXiv preprint arXiv:2308.03337 (2023). [3] Ethem Alpaydin. Introduction to Machine Learning. MIT Press, 2020. [4] Basim Azam and Naveed Akhtar. Suitability of KANs for Computer Vision: A preliminary investigation. 2024. arXiv: 2406.09087. [5] Maryam Babaei et al. "Solving a class of Thomas–Fermi equations: A new solution concept based on physics-informed machine learning". In: Mathematics and Computers in Simulation (2024). [6] George A Baker Jr and John L Gammel. "The Padé approximant". In: Journal of Mathematical Analysis and Applications 2.1 (1961), pp. 21–30. [7] L Bardis and NM Patrikalakis. "Surface approximation with rational B-splines". In: Engineering with Computers 6 (1990), pp. 223–235. [8] Pakshal Bohra et al. "Learning activation functions in deep (spline) neural networks". In: IEEE Open Journal of Signal Processing 1 (2020), pp. 295–309. [9] Nicolas Boullé, Yuji Nakatsukasa, and Alex Townsend. "Rational neural networks". In: Advances in Neural Information Processing Systems 33 (2020), pp. 14243–14253. [10] John P Boyd. Chebyshev and Fourier Spectral Methods. Courier Corporation, 2001. [11] John P Boyd. "Defeating the Runge phenomenon for equispaced polynomial interpolation via Tikhonov regularization". In: Applied Mathematics Letters 5.6 (1992), pp. 57–59. [12] Zavareh Bozorgasl and Hao Chen. "Wav-KAN: Wavelet Kolmogorov-Arnold networks". In: arXiv preprint arXiv:2405.12832 (2024). [13] Jürgen Braun and Michael Griebel. "On a constructive proof of Kolmogorov's superposition theorem". In: Constructive Approximation 30 (2009), pp. 653–675. [14] Sheng Chen et al. "Complex-valued B-spline neural networks for modeling and inverting Hammerstein systems". In: IEEE Transactions on Neural Networks and Learning Systems 25.9 (2014), pp. 1673–1685. [15] Minjong Cheon. "Kolmogorov-Arnold Network for Satellite Image Classification in Remote Sensing". In: arXiv preprint arXiv:2406.00600 (2024). [16] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. "Fast and accurate deep network learning by exponential linear units (ELUs)". In: arXiv preprint arXiv:1511.07289 (2015). [17] S Dzhenzher and A Skopenkov. "A structured proof of Kolmogorov's Superposition Theorem". In: arXiv preprint arXiv:2105.00408 (2021). [18] Ali Nosrati Firoozsalari et al. "deepFDEnet: A novel neural network architecture for solving fractional differential equations". In: arXiv preprint arXiv:2309.07684 (2023). [19] Walter Gautschi. Numerical Analysis. Springer Science & Business Media, 2011. [20] Remi Genet and Hugo Inzirillo. "TKAN: Temporal Kolmogorov-Arnold networks". In: arXiv preprint arXiv:2405.07344 (2024). [21] Amir Hosein Hadian-Rasanan et al. "A single layer fractional orthogonal neural network for solving various types of Lane–Emden equation". In: New Astronomy 75 (2020), p. 101307. [22] Luis Fernando Herbozo Contreras et al. "KAN-EEG: Towards Replacing Backbone-MLP for an Effective Seizure Detection System". In: medRxiv (2024), pp. 2024–06. [23] Xia Hong and Sheng Chen. "Modeling of complex-valued Wiener systems using B-spline neural network". In: IEEE Transactions on Neural Networks 22.5 (2011), pp. 818–825. [24] George Paul Horedt. Polytropes: Applications in Astrophysics and Related Fields. Vol. 306. Springer Science & Business Media, 2004. [25] Mehrdad Kiamari, Mohammad Kiamari, and Bhaskar Krishnamachari. "GKAN: Graph Kolmogorov-Arnold Networks". In: arXiv preprint arXiv:2406.06470 (2024). [26] Henry Leung and Simon Haykin. "Rational function neural network". In: Neural Computation 5.6 (1993), pp. 928–938. [27] Chenxin Li et al. U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation. 2024. arXiv: 2406.02918. [28] Ziyao Li. "Kolmogorov-Arnold Networks are Radial Basis Function Networks". In: arXiv preprint arXiv:2405.06721 (2024). [29] Mengxi Liu et al. "iKAN: Global Incremental Learning with KAN for Human Activity Recognition Across Heterogeneous Datasets". In: arXiv preprint arXiv:2406.01646 (2024). [30] Mengxi Liu et al. Initial Investigation of Kolmogorov-Arnold Networks (KANs) as Feature Extractors for IMU Based Human Activity Recognition. 2024. arXiv: 2406.11914. [31] Ziming Liu et al. "KAN: Kolmogorov-Arnold networks". In: arXiv preprint arXiv:2404.19756 (2024). [32] Susmita Mall and Snehashish Chakraverty. "Single layer Chebyshev neural network model for solving elliptic partial differential equations". In: Neural Processing Letters 45 (2017), pp. 825–840. [33] I Masti and K Sayevand. "On collocation-Galerkin method and fractional B-spline functions for a class of stochastic fractional integro-differential equations". In: Mathematics and Computers in Simulation 216 (2024), pp. 263–287. [34] Hassan Dana Mazraeh and Kourosh Parand. "GEPINN: An innovative hybrid method for a symbolic solution to the Lane-Emden type equation based on grammatical evolution and physics-informed neural networks". In: Astronomy and Computing (2024), p. 100846. [35] George Nehma and Madhur Tiwari. "Leveraging KANs For Enhanced Deep Koopman Operator Discovery". In: arXiv preprint arXiv:2406.02875 (2024). [36] K Parand et al. "A neural network approach for solving nonlinear differential equations of Lane–Emden type". In: Engineering with Computers (2023), pp. 1–17. [37] Yanhong Peng et al. "Predictive Modeling of Flexible EHD Pumps using Kolmogorov-Arnold Networks". In: arXiv preprint arXiv:2405.07488 (2024). [38] Harry Pratt et al. "FCNN: Fourier convolutional neural networks". In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2017, Skopje, Macedonia, September 18–22, 2017, Proceedings, Part I 17. Springer. 2017, pp. 786–798. [39] Jamal Amani Rad, Kourosh Parand, and Snehashish Chakraverty. Learning with Fractional Orthogonal Kernel Classifiers in Support Vector Machines: Theory, Algorithms and Applications. Springer, 2023. [40] Moein E Samadi, Younes Müller, and Andreas Schuppert. "Smooth Kolmogorov Arnold networks enabling structural knowledge representation". In: arXiv preprint arXiv:2405.11318 (2024). [41] Leandro dos Santos Coelho and Marcelo Wicthoff Pessôa. "Nonlinear identification using a B-spline neural network and chaotic immune approaches". In: Mechanical Systems and Signal Processing 23.8 (2009), pp. 2418–2434. [42] Johannes Schmidt-Hieber. "The Kolmogorov–Arnold representation theorem revisited". In: Neural Networks 137 (2021), pp. 119–126. [43] Seyd Teymoor Seydi. "Exploring the Potential of Polynomial Basis Functions in Kolmogorov-Arnold Networks: A Comparative Study of Different Groups of Polynomials". In: arXiv preprint arXiv:2406.02583 (2024). [44] Seyd Teymoor Seydi. "Unveiling the Power of Wavelets: A Wavelet-based Kolmogorov-Arnold Network for Hyperspectral Image Classification". In: arXiv preprint arXiv:2406.07869 (2024). [45] Khemraj Shukla et al. "A comprehensive and FAIR comparison between MLP and KAN representations for differential equations and operator networks". In: arXiv preprint arXiv:2406.02917 (2024). [46] Adrian Silvescu. "Fourier neural networks". In: IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339). Vol. 1. IEEE. 1999, pp. 488–491. [47] Kai-Yeung Siu, Vwani P Roychowdhury, and Thomas Kailath. "Rational approximation techniques for analysis of neural networks". In: IEEE Transactions on Information Theory 40.2 (1994), pp. 455–466. [48] Sidharth SS. "Chebyshev Polynomial-Based Kolmogorov-Arnold Networks: An Efficient Architecture for Nonlinear Function Approximation". In: arXiv preprint arXiv:2405.07200 (2024). [49] Hoang-Thang Ta. BSRBF-KAN: A combination of B-splines and Radial Basic Functions in Kolmogorov-Arnold Networks. 2024. arXiv: 2406.11173. [50] Matus Telgarsky. "Neural networks and rational functions". In: International Conference on Machine Learning. PMLR. 2017, pp. 3387–3393. [51] Wayne Tiller. "Rational B-splines for curve and surface representation". In: IEEE Computer Graphics and Applications 3.06 (1983), pp. 61–69. [52] Cristian J Vaca-Rubio et al. "Kolmogorov-Arnold networks (KANs) for time series analysis". In: arXiv preprint arXiv:2405.08790 (2024). [53] Yizheng Wang et al. Kolmogorov Arnold Informed neural network: A physics-informed deep learning framework for solving PDEs based on Kolmogorov Arnold Networks. 2024. arXiv: 2406.11045. [54] Jinfeng Xu et al. "FourierKAN-GCF: Fourier Kolmogorov-Arnold Network–An Effective and Efficient Feature Transformation for Graph Collaborative Filtering". In: arXiv preprint arXiv:2406.01034 (2024). [55] Kunpeng Xu, Lifei Chen, and Shengrui Wang. "Kolmogorov-Arnold Networks for Time Series: Bridging Predictive Power and Interpretability". In: arXiv preprint arXiv:2406.02496 (2024). [56] Shangshang Yang, Linrui Qin, and Xiaoshan Yu. "Endowing Interpretability for Neural Cognitive Diagnosis by Efficient Kolmogorov-Arnold Networks". In: arXiv preprint arXiv:2405.14399 (2024). | 4 | 1 | The model described (rKAN) is similar to existing neural network architectures in complexity. It has a manageable architecture (1-10-1 for regression tasks) and a batch size of 512 for the MNIST classification task. Training on the MNIST dataset (60,000 training images) for 30 epochs with a relatively simple architecture should allow it to complete training within a few hours on a single mid-range GPU, such as the Nvidia GeForce GTX 1650 mentioned in the paper. 
Additionally, given the use of standard optimizers like Adam and L-BFGS, the training process is likely to be efficient, aligning with typical training times for CNNs on this dataset. Therefore, I estimate that it could be trained in around 4 hours using a single GPU and wouldn't require more than one GPU for this scale of work. | yes | Yes | CV | rKAN: Rational Kolmogorov-Arnold Networks | 2024-06-20 0:00:00 | https://github.com/alirezaafzalaghaei/rkan | 1 | In Code | 1 | cnn.ipynb | Yes | null |
Tiny ImageNet Classification | MANO-tiny | [] | Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics | 2025-07-03T00:00:00 | https://arxiv.org/abs/2507.02748 | [
"https://github.com/AlexColagrande/MANO"
] | {'Validation Acc': '87.52'} | [
"Validation Acc"
] | Given the following paper and codebase:
Paper: Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
Codebase: https://github.com/AlexColagrande/MANO
Improve the MANO-tiny model on the Tiny ImageNet Classification dataset. The result
should improve on the following metrics: {'Validation Acc': '87.52'}. You must use only the codebase provided.
| arXiv:2507.02748v1 [cs.CV] 3 Jul 2025. Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics. Alex Colagrande1, Paul Caillon1, Eva Feillet1, Alexandre Allauzen1,2. 1Miles Team, LAMSADE, Université Paris Dauphine-PSL, Paris, France; 2ESPCI PSL, Paris, France. {name}.{surname}@dauphine.psl.eu Abstract. Transformers have become the de facto standard for a wide range of tasks, from image classification to physics simulations. Despite their impressive performance, the quadratic complexity of standard Transformers in both memory and time with respect to the input length makes them impractical for processing high-resolution inputs. Therefore, several variants have been proposed, the most successful relying on patchification, downsampling, or coarsening techniques, often at the cost of losing the finest-scale details. In this work, we take a different approach. Inspired by state-of-the-art techniques in n-body numerical simulations, we cast attention as an interaction problem between grid points. We introduce the Multipole Attention Neural Operator (MANO), which computes attention in a distance-based multiscale fashion. MANO maintains, in each attention head, a global receptive field and has linear time and memory complexity with respect to the number of grid points. Empirical results on image classification and Darcy flows demonstrate that MANO rivals state-of-the-art models, such as ViT and the Swin Transformer, while reducing runtime and peak memory usage by orders of magnitude. We open-source our code for reproducibility at: https://github.com/AlexColagrande/MANO. 1. Introduction. Convolutional Neural Networks (CNNs) have formed the cornerstone of modern computer vision [22, 29, 31].
Their architectural design leverages the spatial locality and translational invariance properties of images by applying shared convolutional filters over local receptive fields, enabling efficient parameter usage and a strong inductive bias for grid-structured data. In recent years, Vision Transformers (ViTs) [16] have emerged as an alternative to CNNs. They are based on the Transformer architecture [54] introduced in the field of Natural Language Processing (NLP) for sequence-to-sequence learning. This neural architecture is characterized by the use of the self-attention mechanism [2], which allows modeling global contextual information across the tokens of a text or the patches of an image. Despite lacking the strong locality priors of CNNs, attention-based architectures have demonstrated competitive performance in image classification, particularly when trained on large-scale datasets [44]. Beyond computer vision and NLP, Transformer-based models have found application in scientific machine learning, particularly in the resolution of Partial Differential Equations (PDEs). PDEs constitute the fundamental mathematical framework for modeling a vast array of phenomena across the physical and life sciences, from molecular dynamics to fluid flows and climate evolution. Substantial efforts have been devoted to approximating the solution operators of such equations at scale. Classical numerical solvers, including finite difference [14], finite element [13], and spectral methods [6], discretize the underlying continuous operators, thereby recasting the problem as a finite-dimensional approximation. More recently, the increasing availability of observational data on structured grids has fostered a paradigm shift towards data-driven approaches such as Physics-Informed Neural Networks (PINNs) [24, 39, 46].
PINNs harness these observations to learn PDE solutions directly, enforcing physical consistency through soft constraints without relying on explicit mesh-based formulations. However, like classical numerical solvers, PINNs are typically designed to approximate the solution of a specific PDE instance, for example, computing the solution corresponding to a fixed coefficient, boundary or initial condition. This means even minor variations in input parameters require re-solving the system or, in the case of neural models, costly re-training. In contrast, operator learning [25] targets a fundamentally more ambitious goal: to approximate a mapping between infinite-dimensional function spaces. Although considerably more challenging, operator learning offers the advantage of generalizing across input conditions without further optimization, offering a scalable and computationally efficient alternative to traditional pointwise solvers. Note that operator learning is not restricted to PDEs, as images can naturally be viewed as real-valued functions on 2-dimensional domains.
In this work, we propose an efficient variant of the attention mechanism specifically suited for image classification as well as dense-prediction tasks such as physical simulations. Our method achieves computational gains by relaxing the classical attention formulation while preserving performance through global context. We propose the Multipole Attention Neural Operator (MANO), a novel transformer neural operator in which each head computes attention between a point and a multiscale expansion of the input centered at that point. The attention is performed against a hierarchical decomposition of the input, dynamically downsampled based on the query location. Importantly, we compute the query, key and value matrices Q, K and V at every scale using the same point-wise operator to allow the model to accept inputs at any resolution. Our contributions are as follows: • We propose the Multipole Attention Neural Operator (MANO), which formulates attention as an interaction problem and solves it using the Fast Multipole Method. • By combining MANO with the SwinV2 architecture, we improve transfer learning results on several image classification tasks. • MANO achieves state-of-the-art results on Darcy flow simulation benchmarks, matching, and sometimes surpassing, state-of-the-art baselines. 2. Related work. The Vision Transformer (ViT) [16] was the first to successfully adapt the Transformer architecture to image classification, achieving remarkable performance. It divides the input image into fixed-size patches, flattens them into token embeddings, adds positional encodings, and processes the resulting sequence with a Transformer encoder. When pretrained on large datasets such as ImageNet-21k [15] or LVD-142M [44], ViTs rival or exceed CNNs on image classification tasks.
However, despite their efficiency, they suffer from limited local information interaction and single-feature representation, and therefore low-resolution outputs, making them sub-optimal for dense prediction tasks. These limitations have motivated a number of efficient vision transformer variants. 2.1. Efficient Vision Transformer Variants. Swin Transformers. The Swin Transformer [35] restricts self-attention to non-overlapping windows that are shifted between layers, yielding hierarchical, multi-scale representations without global attention. Swin Transformer V2 [36] augments this design with learnable-temperature scaled cosine attention, log-spaced relative position bias, and continuous pre-norm, improving high-resolution stability and enabling deeper networks, all while preserving the original's efficient window-based computation. Distilled and Compact ViTs: TinyViT [58] uses pretraining-stage distillation from a large teacher (e.g., Swin-B/L trained on ImageNet-21k). By caching teacher logits and applying neural architecture search under FLOPs/parameter constraints, TinyViT produces smaller models at only a small performance loss. Data-Efficient Image Transformers (DeiT) [51] add a learnable distillation token that learns from a CNN teacher's soft logits. Later work [52] adds self-supervised distillation and token pruning for further efficiency. Collectively, these efforts have greatly extended ViT applicability across resource-constrained tasks. However, the inherent multi-scale structure of images remains only partially integrated into existing alternatives to the attention mechanism, potentially hindering overall performance. 2.2. Operator Learning via Multipole Attention. In this work, we illustrate the interest of our proposed multipole attention mechanism for learning solution operators of PDEs directly from input-output pairs, as encountered in tasks like fluid flow estimation and other dense prediction problems [25].
Operator learning was first explored by Lu et al. [37], who established a universal approximation theorem for nonlinear operators using DeepONets, laying theoretical foundations for neural operator approximation. Building on this foundation, the Fourier Neural Operator (FNO) [33] parameterizes an integral kernel in the Fourier domain, using efficient FFT-based convolutions to capture global interactions across the entire domain. These pioneering methods have since inspired a wealth of extensions, but their reliance on global or Fourier-based interactions limits their scalability to very high-resolution grids. (Figure 1. (Left) The multi-scale grid structure. (Center) The V-cycle structure for computing multipole attention with the fast multipole method. (Right) Attention matrices. Illustration with three levels. The attention matrix A is computed in a multiscale manner with respect to each level. The higher the level, the shorter the range of the interaction. At a given layer, down-sampling (resp. up-sampling) is performed using a convolution kernel (resp. deconvolution) shared across all different levels.) Transformer neural operators. In [10] the classical transformer was adapted for the first time to operator learning problems related to PDEs. The paper explores two variants, respectively based on the Fourier transform and on the Galerkin method. The latter uses a simplified attention-based operator, without softmax normalization. This solution shares its linear complexity with our work but not the same expressivity. In this line of work, LOCA [34] uses kernel theory to map the input functions to a finite set of features and attends to them by output query location. Recently, [9] proposed to handle attention in a continuous setting and, as well as [56], proposed an operator-learning version of the more classical ViT.
Notably, the Universal Physics Transformer (UPT) [1] scales efficiently based on a coarsening of the input mesh. Multiscale numerical solvers. Our method is inspired by multi-scale numerical solvers [7, 8, 21], in particular the Fast Multipole Method (FMM). A new version of the Fast Multipole Method was introduced by [19] for the evaluation of potential fields in three dimensions, along with its specialization in the V-cycle algorithm, introduced by [43]. Theoretical studies on transformers. The particle-interaction interpretation of attention was first introduced in Sinkformer [48]. [17] views Transformers as interacting particle systems and describes the geometry of learned representations when the weights are not time dependent, and [18] developed a mathematical framework for analyzing Transformers based on their interpretation as interacting particle systems, inspiring us to compute the attention using the most efficient techniques available for solving particle-interaction problems. Multiscale neural architectures. Several transformer architectures related to the multiscale principle used in our method were proposed in the one-dimensional setting of Natural Language Processing (NLP) [23, 41, 60, 62], and in graph learning methods [32, 40, 61]. Relation to Fast Multipole Attention. Among existing approaches, the closest to ours is Fast Multipole Attention (FMA) [23], which reduces the $O(N^2)$ cost of 1D self-attention via hierarchical grouping: nearby queries attend at full resolution, while distant keys are merged into low-rank summaries, achieving $O(N \log N)$ or $O(N)$. Our method differs in two key aspects: • Input domain. FMA targets one-dimensional token sequences; we operate on two-dimensional image grids with multiscale spatial windows. • Downsampling. FMA hierarchically downsamples queries, keys, and values.
In contrast, we downsample the input feature map prior to attention, yielding a self-contained block that integrates seamlessly with standard transformer backbones (e.g., SwinV2) and preserves pretrained attention weights. 3. Introducing MANO. 3.1. Attention as an interaction problem. In this section, we cast the computation of self-attention as a dense n-body interaction problem. An n-body system consists of n entities (often referred to as bodies) whose state is described by a configuration $(x_1, \ldots, x_n)$. The evolution of such a system is governed by a set of interaction laws, which in our setting are determined by pairwise interactions specified through a kernel function: $\kappa: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}, \ (x_i, x_j) \mapsto \kappa(x_i, x_j)$. An n-body simulation refers to a numerical method for computing these interactions, typically requiring $O(n^2)$ operations due to the dense pairwise structure. (Figure 2. Darcy flow reconstruction: from left to right, input coefficient field, ground truth solution, MANO prediction, and ViT predictions using patch sizes 2, 4, and 8. MANO applies multipole attention using overlapping windows of size 2, and performs downsampling and upsampling across 5 levels using convolutions with kernel size 2x2, stride 2, and zero padding.) This computational cost motivates the development of faster approximations, such as the Fast Multipole Method, which reduces the complexity to $O(n)$. In the following, we consider a regular grid, corresponding either to the pixels of an image in the classification setting or to the discretized samples of an input function in the operator learning framework. We denote by $X$ a sequence of $N$ observations $(x_1, \ldots, x_N)^\top \in \mathbb{R}^{N \times d}$ with elements embedded in dimension $d$. The self-attention mechanism first applies three learnable linear projections to obtain queries, keys and values [54]: $Q = XW_q, \ K = XW_k, \ V = XW_v$ (1), with $W_q, W_k, W_v \in \mathbb{R}^{d \times d}$ and $b_q, b_k, b_v \in \mathbb{R}^d$.
Next, it computes an $N \times N$ attention matrix $A$ whose i-th row forms a probability distribution over all keys: $A_{ij} = \exp(Q_i^\top K_j / \sqrt{d}) / \sum_{l=1}^{N} \exp(Q_i^\top K_l / \sqrt{d})$ (2). Finally, each token is updated as a convex combination of the value vectors: $x_i \leftarrow \sum_{j=1}^{N} A_{i,j} V_j$. In this form, one can view the set $\{x_i\}_{i=1}^{N}$ as a cloud of $N$ particles in $\mathbb{R}^d$ interacting in a pairwise manner via a kernel $\kappa$ defined as: $\kappa(Q_i, K_j) = \exp(Q_i^\top K_j / \sqrt{d})$ (3). Therefore, we interpret a self-attention layer as a single time step of a discretized N-body dynamical system. Under this analogy, computing attention is equivalent to predicting the next state of an interacting particle system, and it becomes natural to accelerate this computation using the FMM [19], reducing the usual $O(N^2)$ cost of the pairwise sums to $O(N)$. 3.2. MANO. In this section we detail the Multipole Attention layer as well as the complexity of the model. Method Overview: Let $X_0 = X \in \mathbb{R}^{H \times W \times d}$ be the original high-resolution image (height $H$, width $W$, embedding dimension $d$). We define $L$ levels of downsampling by a convolutional kernel $D$ with weights shared across levels, producing $X_\ell = D(X_{\ell-1})$ for $\ell = 1, \ldots, L$, where $X_\ell \in \mathbb{R}^{H/2^\ell \times W/2^\ell \times d}$. At each level $\ell$, we partition the feature map into potentially overlapping sliding windows and, within each window, compute the attention map $A_\ell = \mathrm{Softmax}(Q_\ell K_\ell^\top / \sqrt{d})$, where $Q_\ell, K_\ell, V_\ell \in \mathbb{R}^{(H/2^\ell \cdot W/2^\ell) \times d}$ are the query, key, and value embeddings extracted from $X_\ell$. This restricts self-attention to localized neighborhoods while still enabling cross-window interactions via the sliding overlap and the hierarchical mixing. We then produce the attended features $\tilde{X}_\ell = A_\ell V_\ell$, and upsample back to the next-finer resolution via a transposed convolution $U$: $\hat{X}_\ell = U(\tilde{X}_\ell) \in \mathbb{R}^{H/2^{\ell-1} \times W/2^{\ell-1} \times d}$ (4).
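The dense self-attention of Eqs. (1)-(3) can be sketched in a few lines of NumPy; random matrices stand in for the learned projections and biases are omitted:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # Eq. (1), biases omitted
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))         # Eq. (2), built from the kernel of Eq. (3)
    return A @ V, A                           # token update x_i <- sum_j A_ij V_j

rng = np.random.default_rng(0)
N, d = 6, 4
X = rng.standard_normal((N, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Each row of A sums to one, so the update is indeed a convex combination of the value vectors; this is the O(N^2) pairwise computation that the multipole scheme below replaces.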
Sharing the same convolutional kernel for both down- sampling and up-sampling—and reusing the same attention weights—keeps the total parameter count constant, regard- less of the number of layers L. The convolutions have the role to provide a representation of the input at the next scale, independently of the scale. This ensures that an attention map learned at the finest scale produce effective representa- tions at different scale, even in the case of a pretrained at- tention from a windowed-attention based model, such as the SwinV2, with finetuning on the convolutional paramters but not on the attention weights. The shared convolutions act as scale-agnostic projectors, producing the next-scale feature representation in the same way at every level. As a result, an attention map learned at the finest resolution remains ef- fective across all scales. In practice, this means one can take a windowed-attention backbone, such as SwinV2, freeze its attention weights, and fine-tune only the convolutional pa- rameters to adapt it to new resolutions without increasing model size. 4 Computational Complexity. We analyze the cost under non-overlapping windows on a square image of side H(so N=H2tokens), embedding dimension d, window size w, and down-sampling factor k. The maximum number of levels is L= logk(H). We denote by Nℓ=N k2ℓthe number of grid points at level ℓ. Each windowed self-attention on M=w2tokens costs O(M2d)and is applied across O(Nℓ/M)windows, for a total complexity of O(NℓMd) (5) Windowed attention computes a standard self-attention for a complexity of O(M2d)whithin each of the N/M windows. The total complexity is thus O(NMd ). 
Our Multipole at- tention iterates the same windowed attention and aggregates the contribution at each level ℓtherefore the total complex- ity reads LX ℓ=0NℓMd=LX l=0ON k2ℓMd =O(NMd ) (6) So interestingly, even though we apply windowed attention across multiple scales, the total cost remains dominated by the finest-scale pass, with coarser levels adding a negligible additional overhead. As a result, our approach preserves the linear complexity of single-scale windowed attention while delivering significantly greater expressive power. 4. Experimental settings This section outlines the experimental setup for both image classification and physics simulations. For image classification, we evaluate on several fine- grained datasets a Swin Transformer V2 modified with our proposed attention mechanism. Models are initialized with weights pretrained on ImageNet-1k. The encoder is frozen (except for the additional convolutions of MANO) and a linear classifier is learnt on the target classification tasks. For physics simulations, we train all models on instances of the Darcy flow problem, from scratch and across different resolutions. 4.1. Image classification Datasets. The ImageNet-1k [15] dataset is used to pre- train all models and we perform linear probing on several downstream classification benchmarks, namely CIFAR- 100[28], Oxford Flowers-102 [42], Stanford Cars [26], Food101 [4], Tiny-ImageNet-202 [30] and Oxford-IIIT Pet Dataset [45]. Architecture. As the backbone for our Multipole Atten- tion model, we adopt the “Tiny” version of the Swin Trans- former V2 [36]. Since the attention block is shared across all levels of the multipole hierarchy, MANO can inherit thepretrained weights from the original Swin Transformer, re- quiring only a small number of additional trainable param- eters: one convolution and one transposed convolution per attention head, along with the classification head. Convolu- tions have a kernel size of 2 and a stride of 2. 
This design ensures that our variant introduces a minimal increase in parameter count relative to the base model. Specifically, for the Tiny version, the total number of parameters increases from 27.73M to 28.47M, corresponding to an additional 740,356 parameters, or 2.67% more than the original model. Training. We freeze the pretrained encoder weights and train a single fully connected layer for 50 epochs on top of the frozen encoder, using AdamW as the optimizer with a cosine annealing learning rate schedule. Training the models with the original shifted attention of SwinV2 and the models with our proposed multipole attention differs only by a warm-up phase to learn the upsampling and downsampling convolutional filters introduced by MANO. We report the resulting top-1 accuracy of these experiments in Table 1. 4.2. Darcy flow simulation. Task. We evaluate our method on the task of steady-state 2D Darcy Flow simulation, a widely used task in the neural operator literature [25]. The problem is based on the following second-order, linear elliptic PDE: $-\nabla \cdot (a(x) \nabla u(x)) = f(x), \ x \in (0,1)^2$ (7), to solve with homogeneous Dirichlet boundary conditions: $u(x) = 0, \ x \in \partial(0,1)^2$. In this PDE, the function $a(x)$ represents the spatially varying permeability of a porous medium. The forcing term $f(x)$ is fixed to $f(x) \equiv 1$ across all inputs. The output $u(x)$ is the scalar field representing the pressure within the domain. Although the PDE is linear in $u$, the map from the input $a(x)$ to the solution $u(x)$ is nonlinear due to the interaction of $a(x)$ with the gradient operator inside the divergence. The task is to learn this solution operator: given a new input field $a(x)$, the model must predict the corresponding output $u(x)$. In our experiments, $a(x)$ is sampled as a binary field (i.e., values are either 0 or 1), representing a medium composed of two different materials. Architecture. We use a classical transformer architecture of depth 8 with 4 attention heads per layer.
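The cosine annealing schedule used in these training runs can be written in closed form; a minimal sketch, assuming annealing from the initial learning rate down to zero over the 50 epochs (the exact floor and any warm-up are not specified in the paper):

```python
import math

def cosine_lr(epoch, total_epochs=50, lr_max=1e-4, lr_min=0.0):
    # cosine annealing from lr_max at epoch 0 down to lr_min at the final epoch
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```

In a PyTorch training loop this role is typically filled by `torch.optim.lr_scheduler.CosineAnnealingLR` wrapped around the AdamW optimizer.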
In place of the conventional self-attention, we employ our multipole attention module. Additionally, we apply Layer Normalization at every level to improve training stability and mitigate issues of vanishing or exploding gradients, which can arise due to shared attention across hierarchical levels.
Table 1. Linear probing accuracies for several image classification datasets. MANO matches or even outperforms the performance of state-of-the-art models. The results for TinyViT are taken from [58], while all other baselines are fine-tuned via linear probing using pretrained backbones (footnote 1). For each model, we report the number of parameters and the asymptotic complexity of its attention block with respect to the number of patches N.
Model (Params, Complexity): Tiny-IN-202 / CIFAR-100 / Flowers-102 / Food-101 / StanfordCars-196 / OxfordIIITPet
TinyViT [58] (21M, O(N^2)): - / 75.2% / 82.4% / - / 61.7% / 86.5%
ViT-base [16] (86M, O(N^2)): 73.07% / 80.63% / 92.75% / 80.31% / 41.95% / 87.68%
DeiT-small [52] (22M, O(N^2)): 81.34% / 75.15% / 66.60% / 71.39% / 36.38% / 87.56%
SwinV2-T [36] (28M, O(N)): 80.53% / 75.47% / 56.46% / 76.96% / 38.36% / 87.14%
MANO-tiny (28M, O(N)): 87.52% / 85.08% / 89.00% / 82.48% / 65.68% / 88.31%
As a plug-and-play replacement for the attention mechanism, our attention can be applied after patching, similar to Swin, allowing our vision-specific MANO variant to scale linearly with the number of patches. Training. We train all the considered models for 50 epochs with the AdamW optimizer and a cosine learning rate scheduler. The initial learning rate is on the order of $10^{-4}$. We use the dataset open-sourced in [25], comprised of input-output pairs $(a, u)$ at resolutions $n \times n$ for $n \in \{16, 32, 64\}$. The model is trained to minimize the mean squared error (MSE) on the training set and evaluated on a held-out test set using the relative MSE error $\|\hat{u} - u\|_2 / \|u\|_2$, where $\hat{u}$ is the model prediction and $u$ the ground truth solution. 5. Results. 5.1.
Image Classification Results. Table 1 presents the top-1 accuracy of five models (TinyViT (21M parameters), SwinV2-T (28M), ViT-base (86M), DeiT-small (22M) and MANO (28M)) across six downstream image classification datasets. The reported results for TinyViT are taken from [11, 58], while the results for SwinV2-T are taken from [36]. First, MANO consistently outperforms TinyViT on all benchmarks where results are available. It also surpasses DeiT-small and SwinV2-T when these models are fine-tuned via linear probing using ImageNet-1k pretrained weights, across an expanded set of datasets. Compared with the bigger ViT-base, our model performs better on all the benchmarks except Flowers-102, the dataset with the smallest training set among those considered. This suggests that in low-data regimes, models with a higher parameter count may have an advantage due to their increased capacity to memorize or adapt to limited supervision. The improvement ranges from about 1-2 points on easier tasks like Oxford-IIIT Pet to nearly 5-7 points on more challenging datasets such as CIFAR-100 and Tiny-ImageNet. Even compared to ViT-base, which has more than twice the number of parameters, MANO achieves gains of roughly 3-10 points across all the benchmarks except for Flowers-102, demonstrating that multiscale hierarchical attention produces significantly more transferable features without increasing model size too much. Second, the advantage of MANO becomes especially pronounced on fine-grained classification tasks. On Flowers-102 and Stanford Cars, SwinV2-T achieves only 56.5% and 38.4% accuracy, respectively, while TinyViT recovers to 82.4% and 61.7%.
In both cases, MANO further improves performance to 89.0% on Flowers-102 and 65.7% on Cars, indicating that combining local details (e.g., petal shapes or headlight contours) with global context (overall flower appearance or car silhouette) is critical for distinguishing highly similar classes. Third, on medium-difficulty datasets such as Tiny-ImageNet-202 and CIFAR-100, MANO again holds a clear lead. It outperforms SwinV2-T by approximately 7 points on Tiny-ImageNet-202 and by around 10 points on CIFAR-100. These results suggest that attending to multiple resolutions, capturing both fine textures and broader scene structures, yields better representations than the single-scale windowed attention used in SwinV2-T. Finally, although MANO and SwinV2-T share the same parameter count (28M), MANO delivers a consistent 5-10 point advantage on mid-level benchmarks and maintains a smaller lead on easier tasks like Oxford-IIIT Pet. TinyViT's 21M parameters are insufficient to match either 28M model, underscoring that hierarchical multiscale attention makes more effective use of model capacity than either pure global self-attention (ViT) or fixed-window local attention (SwinV2-T). 5.2. Darcy Flow Simulation Results.
Table 2. Benchmark on Darcy flow simulations (relative MSE). A given model is trained and tested on the same resolution, either 16x16, 32x32 or 64x64.
Model: 16x16 / 32x32 / 64x64
FNO: 0.0195 / 0.0050 / 0.0035
ViT patch size=8: 0.0160 / 0.0038 / 0.0021
ViT patch size=4: 0.0179 / 0.0039 / 0.0019
ViT patch size=2: 0.0169 / 0.0049 / 0.0026
Local Attention: 0.0133 / 0.0188 / 0.0431
MANO: 0.0080 / 0.0020 / 0.0013
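The relative error reported in Table 2 can be computed directly; a minimal sketch, reading the norm ratio from Section 4.2 with the L2 norm taken over the whole grid:

```python
import numpy as np

def relative_error(u_hat, u):
    # relative L2 error ||u_hat - u||_2 / ||u||_2 between a predicted
    # pressure field and the ground-truth Darcy flow solution
    return np.linalg.norm(u_hat - u) / np.linalg.norm(u)

u = np.ones((64, 64))          # toy ground-truth field
pred = 1.01 * u                # toy prediction, uniformly 1% off
err = relative_error(pred, u)  # -> 0.01
```

A uniform 1% perturbation of the field yields a relative error of 0.01, on the same scale as the Table 2 entries.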
By operating in the Fourier domain, the FNO captures long-range dependencies across the entire domain with near-linear complexity O(N^2 log N) for an N×N grid, making it effective for a wide range of PDE-based tasks and leading to state-of-the-art results on problems such as Darcy flow simulation. It achieves MSEs of 0.0195, 0.0050, and 0.0035 as the grid is refined, showcasing its strength in capturing global spectral components but also its limited ability to resolve fine-scale details at coarser resolutions. For ViT, we evaluate patch sizes of 8, 4, and 2: the patch-4 variant attains the best errors (0.0179, 0.0039, 0.0019), whereas smaller patches (size 2) slightly worsen performance at low resolution and fail to match patch-4 at 64². This sensitivity indicates that vanilla ViT's pure global attention is capable of approximating the solution operator but depends heavily on patch granularity. In contrast, a pure local-attention model (fixed window) degrades sharply at 32² and 64², since local windows cannot propagate long-range dependencies across the domain. By combining fine-grid attention (to capture local conductivity channels) with progressively coarser resolutions (to model global pressure fields), MANO consistently achieves the lowest errors, roughly halving the MSE of both FNO and standard ViT at every scale and overcoming the locality limitations inherent in fixed-window attention. Lastly, we examine in Figure 2 the quality of the reconstructed images, demonstrating that MANO's multiscale modeling recovers both sharp transitions and smooth boundaries with high fidelity, even on a coarse grid. By contrast, images reconstructed by ViT exhibit noticeable patching artifacts. In summary, MANO's multiscale hierarchical attention achieves state-of-the-art performance on both Darcy flow simulations and image classification tasks.
Its design makes it well suited to the corresponding data, as it captures fine-scale detail and broad-scale context simultaneously.

Hyperparameters. A detailed table of hyperparameters is provided in the Appendix; below, we outline our main design choices. The "number of levels" specifies how many hierarchical scales are included in the multipole attention; with a window size of 8 for the windowed attention, we achieve the best performance using the maximum of 3 levels. Due to Swin's built-in downsampling between stages, this corresponds to 3 levels across the first two layers, 2 levels for the next two layers, and a single level for the remaining eight layers. To coarsen the input grid, we compared average pooling against learned convolutions; convolutions consistently outperformed pooling. For upsampling, transpose convolutions outperformed nearest-neighbor interpolation. When used, kernel size and stride refer to these convolutional operations. As shown in Figure 3, using learned convolutions for both down- and up-sampling significantly improves expressivity: even with pretrained attention weights, convolution-based sampling enables a windowed attention trained at one resolution to transfer effectively to another.

Figure 3. Ablation study comparing average pooling (green) and learnable convolutions (blue) for the sampling step in MANO. We report the cross-entropy validation loss (left) and accuracy (right) on CIFAR-100. Mean and standard deviation over fifteen runs are reported, varying the learning rate between 10^-3 and 10^-4.

¹Checkpoints are available at https://huggingface.co/google/vit-base-patch16-224 (ViT-base), https://huggingface.co/facebook/deit-small-patch16-224 (DeiT-Small), and https://huggingface.co/timm/swinv2_tiny_window8_256.ms_in1k (SwinV2-T).
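To make the pooling-versus-convolution comparison concrete, here is a minimal NumPy sketch of the two coarsening options. The single 2×2 kernel shared across channels is our own simplification for illustration; the model's actual learned convolutions also mix channels:

```python
import numpy as np

def avg_pool_2x2(x):
    """Coarsen an (H, W, C) feature map by 2x average pooling."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

def conv_down_2x2(x, kernel):
    """'Learned' coarsening: a stride-2 correlation with one 2x2 kernel
    shared across channels (a simplification of a learned convolution)."""
    H, W, C = x.shape
    patches = x.reshape(H // 2, 2, W // 2, 2, C)
    return np.einsum('hiwjc,ij->hwc', patches, kernel)

# With a uniform kernel, the learned variant reduces to average pooling;
# training is free to move away from this initialisation, which is what
# gives the convolutional sampler its extra expressivity.
x = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
assert np.allclose(conv_down_2x2(x, np.full((2, 2), 0.25)), avg_pool_2x2(x))
```

Average pooling is the special case of a fixed uniform kernel, so a learned kernel can only match or improve on it in principle, consistent with the ablation in Figure 3.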
Note that a single convolutional kernel is reused for all downsampling operations, and a separate kernel is reused for all upsampling operations.

6. Discussion

Hierarchy depth in vision vs. physics. In our image classification experiments, we follow SwinV2-T's architecture and compute attention at three hierarchical levels in the early stages, two in the middle, and one at the end of the encoder. For Darcy flow grids, we set the downsampling steps so that the coarsest scale is 2×2, the smallest possible when using a 2×2 window for the attention. We found that increasing the number of levels consistently improves performance; we note, however, that in physics simulation the number of levels can be treated as a hyperparameter and tuned based on input resolution and the desired balance between local and global interactions.

Limitations. MANO's current design uses a fixed, static hierarchy and attention parametrization. While effective, it could benefit from learnable scale selection and explicit cross-level interactions to better capture multi-scale couplings. Additionally, we assume a uniform grid to discretize the input. This simplifies implementation but fails to capture regions with steep gradients or intricate boundaries in physics simulations. Introducing adaptive meshing would thus improve accuracy and efficiency for localized phenomena that otherwise demand a higher-resolution simulation, but would require redefining attention over nonuniform spatial supports. Finally, our current implementation could be accelerated by a hardware-optimized GPU kernel to further reduce runtime.

Future Directions. Beyond steady-state Darcy flow, MANO can naturally extend to time-dependent PDEs by integrating with recurrent timestepping or operator-splitting schemes, preserving its ability to capture both spatial multiscale structure and temporal evolution.
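The hierarchy-depth rule for Darcy grids (halve the resolution until the coarsest 2×2 scale is reached) can be written as a small helper. This function is our own illustration of the stated rule, not code from the released implementation:

```python
def max_levels(resolution, coarsest=2):
    """Number of hierarchy levels when each level halves the grid until
    the coarsest usable scale (2x2 for a 2x2 attention window)."""
    assert resolution >= coarsest and resolution % coarsest == 0
    levels = 1
    while resolution > coarsest:
        resolution //= 2
        levels += 1
    return levels
```

Under this rule a 16×16 grid admits 4 levels (16, 8, 4, 2) and a 64×64 grid admits 6, which is why the level count grows with input resolution and is worth tuning per task.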
We also plan to extend our method to unstructured meshes and irregular domains common in real-world physics simulations. On the computer-vision side, applying MANO to dense prediction tasks, such as semantic segmentation or image inpainting, is promising, since its multiscale attention could dynamically balance local details and global context more effectively than standard U-Net architectures.

7. Conclusion

We propose MANO, an efficient attention-based architecture inspired by n-body methods, which interprets attention as interactions among mesh points. By introducing a distance-based, multiscale attention mechanism, MANO achieves linear time and memory complexity per head while preserving a global receptive field. Across several image classification benchmarks and Darcy flow simulations, MANO matches the accuracy of full-attention models yet substantially reduces runtime and peak memory. Unlike patch-based approximations, it avoids discontinuities and retains long-range dependencies inherent to physical systems. Our results demonstrate that MANO is a scalable alternative for both vision tasks and mesh-based simulations. Future work includes applying MANO to dense vision tasks such as semantic segmentation, and extending it to irregular meshes and a broader class of physical simulations.

Acknowledgments

This work was supported by the SHARP ANR project ANR-23-PEIA-0008 in the context of the France 2030 program and by HPC resources from GENCI-IDRIS (grants AD011015154 and A0151014627).

References

[1] Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, and Johannes Brandstetter. Universal physics transformers: A framework for efficiently scaling neural operators. Advances in Neural Information Processing Systems, 37:25152–25194, 2024. 2, 3, 12
[2] Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate.
In 3rd International Conference on Learning Rep- resentations, ICLR 2015 , 2015. 1 [3] Cristian Bodnar, Wessel P. Bruinsma, Ana Lucic, Megan Stanley, Anna Allen, Johannes Brandstetter, Patrick Garvan, Maik Riechert, Jonathan A. Weyn, Haiyu Dong, Jayesh K. Gupta, Kit Thambiratnam, Alexander T. Archibald, Chun- Chieh Wu, Elizabeth Heider, Max Welling, Richard E. Turner, and Paris Perdikaris. A foundation model for the earth system. Nature , 641(8065):1180–1187, 2025. 2 [4] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In Computer vision–ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, pro- ceedings, part VI 13 , pages 446–461. Springer, 2014. 5 [5] Lise Le Boudec, Emmanuel de Bezenac, Louis Serrano, Ra- mon Daniel Regueiro-Espino, Yuan Yin, and Patrick Galli- nari. Learning a neural solver for parametric PDEs to en- hance physics-informed methods. In The Thirteenth Inter- national Conference on Learning Representations , 2025. 12 [6] R.N. Bracewell. The Fourier Transform and Its Applications . McGraw Hill, 2000. 1 [7] Achi Brandt. Multi-level adaptive solutions to boundary- value problems. Mathematics of computation , 31(138):333– 390, 1977. 3 [8] William L Briggs, Van Emden Henson, and Steve F Mc- Cormick. A multigrid tutorial . SIAM, 2000. 3 [9] Edoardo Calvello, Nikola B Kovachki, Matthew E Levine, and Andrew M Stuart. Continuum attention for neural oper- ators. arXiv preprint arXiv:2406.06486 , 2024. 3, 12 [10] Shuhao Cao. Choose a transformer: Fourier or galerkin. Ad- vances in neural information processing systems , 34:24924– 24940, 2021. 3, 12 [11] Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pre- training or strong data augmentations. arXiv preprint arXiv:2106.01548 , 2021. 
6 [12] Krzysztof Choromanski, Valerii Likhosherstov, David Do- han, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794 , 2020. 2, 11 [13] P.G. Ciarlet. The Finite Element Method for Elliptic Prob- lems. Society for Industrial and Applied Mathematics, 2002. 1 [14] R. Courant, K. Friedrichs, and H. Lewy. On the partial dif- ference equations of mathematical physics. IBM J. Res. Dev. , 11(2):215–234, 1967. 1 [15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition , pages 248–255. Ieee, 2009. 2, 5, 11, 14 [16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, 8 Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl- vain Gelly, et al. An image is worth 16x16 words: Trans- formers for image recognition at scale. In International Con- ference on Learning Representations , 2021. 1, 2, 6, 11 [17] Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, and Philippe Rigollet. The emergence of clusters in self-attention dynamics, 2024. 3 [18] Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, and Philippe Rigollet. A mathematical perspective on transform- ers, 2024. 3 [19] Leslie Greengard and Vladimir Rokhlin. A new version of the fast multipole method for the laplace equation in three dimensions. Acta Numerica , 6:229–269, 1997. 3, 4 [20] Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. Advances in neural information processing systems , 34:24048–24062, 2021. 12 [21] Wolfgang Hackbusch. Multi-grid methods and applications . Springer Science & Business Media, 2013. 3 [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 
In Proceed- ings of the IEEE conference on computer vision and pattern recognition , pages 770–778, 2016. 1 [23] Yanming Kang, Giang Tran, and Hans De Sterck. Fast mul- tipole attention: A divide-and-conquer attention mechanism for long sequences, 2024. 3, 12 [24] George Karniadakis, Yannis Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed ma- chine learning. Nature Reviews Physics , pages 1–19, 2021. 1, 12 [25] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Aziz- zadenesheli, Kaushik Bhattacharya, Andrew Stuart, and An- ima Anandkumar. Neural operator: Learning maps between function spaces with applications to pdes. Journal of Ma- chine Learning Research , 24(89):1–97, 2023. 1, 2, 5, 6, 12 [26] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on com- puter vision workshops , pages 554–561, 2013. 5 [27] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. In Ad- vances in Neural Information Processing Systems , pages 26548–26560. Curran Associates, Inc., 2021. 12 [28] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009. 5 [29] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural net- works. In Advances in Neural Information Processing Sys- tems. Curran Associates, Inc., 2012. 1 [30] Yann Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N , 7(7):3, 2015. 5 [31] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwrit- ten zip code recognition. Neural computation , 1(4):541–551, 1989. 
1[32] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. Advances in Neural Information Processing Systems , 33:6755–6766, 2020. 3, 12 [33] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for paramet- ric partial differential equations, 2021. 2, 7, 12 [34] Zongyi Li, Nikola Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Otta, Mohammad Amin Nabian, Maxi- milian Stadler, Christian Hundt, Kamyar Azizzadenesheli, et al. Geometry-informed neural operator for large-scale 3d pdes. Advances in Neural Information Processing Systems , 36:35836–35854, 2023. 3, 12 [35] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision , pages 10012–10022, 2021. 2, 11 [36] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF conference on computer vi- sion and pattern recognition , pages 12009–12019, 2022. 2, 5, 6, 11 [37] Lu Lu, Pengzhan Jin, and George Em Karniadakis. Deep- onet: Learning nonlinear operators for identifying differen- tial equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193 , 2019. 2, 12 [38] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nature machine intelligence , 3(3):218–229, 2021. 12 [39] Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karni- adakis. 
Deepxde: A deep learning library for solving differ- ential equations. SIAM review , 63(1):208–228, 2021. 1 [40] Leon Migus, Yuan Yin, Jocelyn Ahmed Mazari, and Patrick Gallinari. Multi-scale physical representations for approxi- mating pde solutions with graph neural operators. In Topo- logical, Algebraic and Geometric Learning Workshops 2022 , pages 332–340. PMLR, 2022. 3, 12 [41] Tan M. Nguyen, Vai Suliafu, Stanley J. Osher, Long Chen, and Bao Wang. Fmmformer: Efficient and flexible trans- former via decomposed near-field and far-field attention, 2021. 3, 12 [42] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & im- age processing , pages 722–729. IEEE, 2008. 5 [43] G. Of. An efficient algebraic multigrid preconditioner for a fast multipole boundary element method. Computing , 82(2): 139–155, 2008. 3 [44] Maxime Oquab, Timoth ´ee Darcet, Th ´eo Moutakanni, Huy V o, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. 9 Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193 , 2023. 1, 2, 11 [45] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition , pages 3498–3505. IEEE, 2012. 5 [46] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solu- tions of nonlinear partial differential equations, 2017. 1, 12 [47] Tim De Ryck, Florent Bonnet, Siddhartha Mishra, and Em- manuel de Bezenac. An operator preconditioning perspective on training in physics-informed machine learning. In The Twelfth International Conference on Learning Representa- tions , 2024. 12 [48] Michael E. Sander, Pierre Ablin, Mathieu Blondel, and Gabriel Peyr ´e. Sinkformers: Transformers with doubly stochastic attention, 2022. 
3 [49] Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. Synthesizer: Rethinking self-attention for transformer models. In International conference on ma- chine learning , pages 10183–10192. PMLR, 2021. 2 [50] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lu- cas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. Advances in neural information processing systems , 34:24261–24272, 2021. 11 [51] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Herv ´e J´egou. Going deeper with im- age transformers. In Proceedings of the IEEE/CVF interna- tional conference on computer vision , pages 32–42, 2021. 2, 11 [52] Hugo Touvron, Matthieu Cord, and Herv ´e J´egou. Deit iii: Revenge of the vit. In European conference on computer vision , pages 516–533. Springer, 2022. 2, 6, 11 [53] Tapas Tripura and Souvik Chakraborty. Wavelet neural op- erator: a neural operator for parametric partial differential equations. arXiv preprint arXiv:2205.02191 , 2022. 12 [54] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neu- ral Information Processing Systems . Curran Associates, Inc., 2017. 1, 4 [55] Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768 , 2020. 2, 11 [56] Sifan Wang, Jacob H Seidman, Shyam Sankaran, Hanwen Wang, George J Pappas, and Paris Perdikaris. Cvit: Contin- uous vision transformer for operator learning. arXiv preprint arXiv:2405.13998 , 2024. 3, 12 [57] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. 
In Proceedings of the IEEE/CVF international conference on computer vision, pages 568–578, 2021. 11
[58] Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. Tinyvit: Fast pretraining distillation for small vision transformers. In European conference on computer vision, pages 68–85. Springer, 2022. 2, 6, 11
[59] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. Big bird: Transformers for longer sequences. Advances in neural information processing systems, 33:17283–17297, 2020. 2
[60] Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh. Multi resolution analysis (MRA) for approximate self-attention, 2022. 3, 12
[61] Maksim Zhdanov, Max Welling, and Jan-Willem van de Meent. Erwin: A tree-based hierarchical transformer for large-scale physical systems. CoRR, 2025. 3, 12
[62] Zhenhai Zhu and Radu Soricut. H-transformer-1d: Fast one-dimensional hierarchical attention for sequences, 2021. 3, 11

A. Extended related work section

In this section, we provide a more comprehensive overview of the related literature, expanding upon the works briefly mentioned in the main text.

Vision Transformers (ViTs) [16] divide each input image into fixed-size patches (e.g., 16×16), flatten them into tokens, add positional embeddings, and process the resulting sequence with a Transformer encoder. When pretrained on large-scale datasets such as ImageNet-21k [15] or LVD-142M [44], ViTs achieve performance on par with or surpassing that of convolutional neural networks (CNNs) on standard image classification benchmarks. Despite these advances, ViTs face several limitations:
1. Quadratic computational complexity O(N^2) with respect to the number of input patches (N, where typically N ≈ 196 for a 224×224 image).
2.
Absence of built-in locality and translation equivariance, in contrast to CNNs, which makes ViTs more dependent on large training datasets.
3. High computational and memory demands: for instance, ViT-Large/16 contains roughly 300 million parameters and requires thousands of GPU-hours to train [16].
These drawbacks have spurred the development of more efficient ViT variants.

A.1. Efficient Vision Transformer Variants

Several efficient alternatives to standard attention have been proposed in the literature to address the limitations of Vision Transformers. While they differ in methodology, they have collectively inspired this work.

Linear-Attention Transformers: Linformer [55] projects keys and values into a low-dimensional subspace (k ≪ N), reducing per-head complexity from O(N^2) to O(Nk) while retaining competitive accuracy. Performer [12] uses a randomized feature map to approximate softmax(QK^T) ≈ Φ(Q)Φ(K)^T, achieving true O(N) time and memory with bounded error. When applied to ViT backbones, these methods handle larger images with much lower memory cost.

All-MLP Architectures: MLP-Mixer [50] differs from both CNNs and ViTs by alternating token-mixing MLPs (mixing across N spatial tokens) and channel-mixing MLPs (mixing across C channels). This yields per-layer complexity O(NC) instead of O(N^2), and achieves 84% top-1 on ImageNet-1K (with ImageNet-21k pretraining), demonstrating that dense MLPs can approximate spatial interactions effectively.

Pyramid/Hierarchical ViTs: Pyramid Vision Transformer (PVT) [57] builds a multi-scale pyramid by progressively downsampling tokens: early stages operate on high-resolution grids (H/4 × W/4), and deeper stages use "patch merging" to halve spatial dimensions at each level. Within each stage, Spatial-Reduction Attention (SRA) pools keys/values by a factor r, reducing the sequence length from N to N/r^2 and the complexity to O(N · N/r^2). PVT matches CNN backbones in detection and segmentation.
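The Linformer idea described above fits in a few lines of NumPy. Here E and F are random stand-ins for the learned projection matrices, so this is only a shape-level sketch of the mechanism:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def linformer_head(Q, K, V, E, F):
    """Single Linformer-style head: E and F (shape (k, N)) project the
    N keys/values down to k tokens, so the score matrix is (N, k)
    instead of (N, N), giving O(Nk) cost per head."""
    K_low, V_low = E @ K, F @ V                     # (k, d)
    scores = (Q @ K_low.T) / np.sqrt(Q.shape[-1])   # (N, k)
    return softmax(scores) @ V_low                  # (N, d)
```

With k fixed, cost grows linearly in the sequence length N, which is the entire point of the low-rank projection.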
Swin Transformer [35, 36] introduces window-based MSA over non-overlapping M×M patches (e.g., 7×7), reducing complexity to O((N/M^2) × M^4) = O(N × M^2). Each stage ends with a patch merging layer that concatenates 2×2 tokens and projects them, halving resolution and doubling channels. Crucially, Swin alternates "standard" and "shifted" window partitions: shifted windows (offset by ⌊M/2⌋) overlap adjacent regions, enabling cross-window context without global attention. Swin-B attains 87.3% top-1 on ImageNet-1K, with near-linear inference latency.

Distilled and Compact ViTs: TinyViT [58] uses pretraining-stage distillation from a large teacher (e.g., Swin-B/L trained on ImageNet-21k). By caching teacher logits and applying neural architecture search under FLOPs/parameter constraints, TinyViT produces 11M–21M parameter models that achieve 84.8–86.5% top-1 on ImageNet-1K, close to much larger ViTs. Data-Efficient Image Transformers (DeiT) [51] add a learnable distillation token that learns from a CNN teacher's soft logits (e.g., ResNet-50) while training on ImageNet-1K alone. Combined with aggressive augmentation (RandAugment, Mixup, CutMix) and regularization (label smoothing, stochastic depth), DeiT-Small (22M) reaches 83.1% top-1 (vs. 77.9% for vanilla ViT), and DeiT-Base (86M) hits 85.2% in under three GPU-days, matching ResNet-152. Later work [52] adds self-supervised distillation and token pruning for further efficiency.

Collectively, these efforts (linear attention, MLP-only designs, hierarchical token pyramids, window-based local attention, and distillation) have greatly extended ViT applicability across resource-constrained tasks. However, the inherent hierarchical structure of images remains only partially integrated into existing attention mechanisms, potentially hindering overall performance.

Multiscale neural architectures.
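For a single feature map, the window partitioning underlying Swin-style windowed MSA reduces to a reshape; a minimal sketch (omitting the shifted variant and the attention itself):

```python
import numpy as np

def window_partition(x, M):
    """Split an (H, W, C) map into non-overlapping MxM windows, returned
    as (num_windows, M*M, C). Attention restricted to one window costs
    O(M^4), i.e. O((N/M^2) * M^4) = O(N * M^2) overall for N = H*W."""
    H, W, C = x.shape
    x = x.reshape(H // M, M, W // M, M, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, M * M, C)
```

Each window is then processed by an ordinary attention block independently; the shifted partition simply rolls the map by ⌊M/2⌋ before partitioning so that information crosses window boundaries every other layer.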
Several transformer architectures have been proposed in the one-dimensional setting of natural language processing (NLP) that are closely related to the multiscale principles underlying our method. H-Transformer-1D [62] introduces a hierarchical attention scheme that restricts full attention to local windows while allowing global information to flow through a tree-like structure. MRA-Attention [60] leverages a multiresolution decomposition of the attention weights using wavelet transforms to capture both coarse and fine-scale dependencies. FMMformer [41] builds on the Fast Multipole Method (FMM) to hierarchically group tokens and reduce attention complexity by summarizing distant interactions. Fast Multipole Attention (FMA) [23] similarly applies FMM-inspired grouping, but in a more generalizable attention framework. ERWIN [61] proposes a multilevel window-based transformer with recursive interpolation between coarse and fine spatial scales in the setting of graph attention.

A.2. Neural Operators

The challenge in solving PDEs is the computational burden of conventional numerical methods. To improve tractability, a recent line of research investigates how machine learning, and especially artificial neural networks, can provide efficient surrogate models. A first kind of approach assumes knowledge of the underlying PDE, as in PINNs [24, 38, 46]. With this knowledge, the neural network is optimized by solving the PDE, which can be considered a form of unsupervised learning. However, the difficult optimization process requires tailored training schemes with many iterations [27, 47]. In a "semi-supervised" way, the recent approach of Boudec et al. [5] recasts the problem as a learning-to-learn task, leveraging either the PDE and simulations, or observation data. While this method obtained promising results, its memory footprint may limit its large-scale usage.
In this work, we focus on neural operators, which learn the solution operator directly from data [33, 37]. In this line of work, the challenge lies in the model architecture rather than in the optimization process, and different kinds of models have recently been proposed.

Transformer neural operators. In [10], the classical transformer was adapted for the first time to operator learning problems related to PDEs. The paper explores two variants, based on the Fourier transform and the Galerkin method. The latter uses a simplified attention-based operator, without softmax normalization. This solution shares linear complexity with our work, but not the same expressivity. Still in this simplifying trend, LOCA (Learning Operators with Coupled Attention) [34] maps the input functions to a finite set of features and attends to them by output query location. Based on kernel theory, Li et al. [34] introduce an efficient transformer for the operator learning setting. Recently, [9] proposed an interesting way to view attention in the continuous setting, and in particular continuum patched attention. The Universal Physics Transformer [1] framework was proposed for efficient scaling, based on a coarsening of the input mesh. In [56], the Continuous Vision Transformer was proposed as an operator-learning version of the more classical ViT. In the context of operator learning and graph-structured data, the Multipole Graph Neural Operator (MGNO) [32] extends multipole ideas to irregular domains via message passing on graph hierarchies. Finally, V-MGNO, F-MGNO, and W-MGNO [40] propose variations of MGNO to improve stability. These works highlight the growing interest in multiscale and hierarchical schemes for improving efficiency and generalization, both in sequence modeling and operator learning.
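To illustrate the Fourier-domain operator idea recurring in this section (the Fourier transformer variant of [10], and the FNO family), here is a one-dimensional toy spectral layer in NumPy; the identity weights used in the demo stand in for the learned complex weights of an actual spectral layer:

```python
import numpy as np

def spectral_layer_1d(u, weights, modes):
    """Toy FNO-style spectral mixing on a 1-D signal: FFT, keep the
    lowest `modes` frequencies, scale them by (normally learned)
    complex weights, then inverse FFT back to the grid."""
    u_hat = np.fft.rfft(u)                   # spectrum, length N//2 + 1
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights[:modes]  # truncate + mix
    return np.fft.irfft(out_hat, n=len(u))   # back to physical space

# Identity weights on the retained band leave a low-frequency signal intact.
n = 64
u = np.sin(2 * np.pi * np.arange(n) / n)
out = spectral_layer_1d(u, np.ones(n // 2 + 1, dtype=complex), modes=4)
assert np.allclose(out, u, atol=1e-8)
```

The mode truncation is what gives such layers their global receptive field at low cost, and also what limits their ability to resolve fine-scale details, as discussed for the FNO baseline in the main text.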
Our work builds on this line by proposing a spatially structured multipole attention mechanism adapted to vision and physical simulation tasks. Our model is explicitly designed to function as a neural operator [25]. To qualify as a neural operator, a model must satisfy the following key properties. First, it should be capable of handling inputs and outputs across arbitrary spatial resolutions. Second, it should exhibit discretization convergence: that is, as the discretization of the input becomes finer, the model's predictions should converge to the true underlying operator governing the physical system. This poses a new challenge to the computer vision community, namely to learn not just an image-to-image function but the underlying operator, independently of the resolution. This field saw its first proof of concept with Lu et al. [37], who leveraged a universal approximation theorem for nonlinear operators and paved the way for numerous extensions. Fourier Neural Operators [33] rely on a translation-equivariant kernel and discretize the problem via a global convolution computed by a discrete Fourier transform. Building on this foundation, the Wavelet Neural Operator (WNO) [53] introduces wavelet-based multiscale localization, enabling kernels that simultaneously capture global structures and fine-grained details. The Multiwavelet Neural Operator (MWNO) [20] further extends this approach by incorporating multiple resolution components, leading to improved convergence with respect to discretization.

B. Detailed hyperparameters

B.1. Architecture Hyperparameters for Image Classification

Table 3 summarizes the architectural and training hyperparameters used in our model. Below, we provide brief comments on each of them. The first block in Table 3 corresponds to the standard configuration of the pretrained SwinV2-Tiny model, which we adopt as our backbone.
•Patch size: Size of non-overlapping image patches.
A value of 4 corresponds to 4×4 patches.
•Input channels: Number of input channels, set to 3 for RGB images.
•Embedding dimension (embed dim): Dimensionality of the token embeddings, controlling model capacity.
•Global pooling: Global average pooling is used instead of a [CLS] token at the output.
•Depths (layers per stage): Number of transformer blocks in each of the four hierarchical stages, e.g., [2, 2, 6, 2].
•Number of heads (per stage): Number of attention heads per stage; increases with depth to maintain representation power.
•Window size: Local attention is applied in windows of size 8×8.
•MLP ratio: Ratio between the hidden dimension in the feed-forward MLP and the embedding dimension (e.g., 4.0×96 = 384).
•QKV bias: Whether learnable biases are used in the query/key/value projections (set to True).
•Dropout rates (drop rate, proj drop rate, attn drop rate): All standard dropout components are disabled (set to 0).
•Drop-path rate (drop path rate): Stochastic depth with rate 0.2, applied to residual connections for regularization.
•Activation layer: GELU is used as the non-linearity in MLP layers.
•Normalization layer: Layer normalization is applied throughout the network.
•Pretrained window sizes: Set to [0, 0, 0, 0], as no pretrained relative position biases are used.
•Attention sampling rate: The input to the attention mechanism is downsampled by a factor of 2, allowing for increased expressivity without a significant additional computational cost.
•Attention down-sampling: A convolutional layer with kernel size 2 and stride 2 is used to downsample features between the levels of the multipole attention.
•Attention up-sampling: Transposed convolution (kernel size 2, stride 2) is used to upsample the features after the windowed attention at each hierarchical level.
•Number of levels: Specifies the number of multipole attention levels used at each stage. We found it beneficial to use the maximum number of levels permitted by the spatial resolution.

B.2.
Architecture Hyperparameters for Darcy Flow

Table 4 reports the main architectural hyperparameters used in our MANO model for solving the Darcy flow problem. Below, we provide a brief description of each.
•channels: Number of input channels; set to 3 because we concatenate the two spatial coordinates with the permeability coefficient.
•patch size: Patch size used to partition the input grid; set to 1 to retain full spatial resolution, ideal for dense prediction tasks.

Hyperparameter                          Value
Patch size                              4
Input channels                          3
Embedding dimension (embed_dim)         96
Global pooling                          avg
Depths (layers per stage)               [2, 2, 6, 2]
Number of heads (per stage)             [3, 6, 12, 24]
Window size                             8
MLP ratio                               4.0
qkv bias (boolean)                      True
Dropout rate (drop_rate)                0.0
Projection-drop rate (proj_drop_rate)   0.0
Attention-drop rate (attn_drop_rate)    0.0
Drop-path rate (drop_path_rate)         0.2
Activation layer                        gelu
Normalization layer (flag)              True
Pretrained window sizes                 [0, 0, 0, 0]
Attention sampling rate                 2
Attention down-sampling conv            kernel size 2, stride 2
Attention up-sampling conv transpose    kernel size 2, stride 2
Number of levels                        [3, 2, 1, 1]
Table 3. MANO hyperparameters for image classification.

•domain dim: Dimensionality of the input domain; set to 2 for 2D PDEs like Darcy flow.
•stack regular grid: Indicates whether the input discretization is regular and should be stacked; set to true.
•dim: Embedding dimension of the token representations.
•dim head: Dimensionality of each individual attention head.
•mlp dim: Hidden dimension of the MLP layers following attention.
•depth: Total number of transformer blocks.
•heads: Number of self-attention heads in each attention block.
•emb dropout: Dropout rate applied to the input embeddings.
•Attention sampling rate: The input to the attention mechanism is downsampled by a factor of 2, allowing for increased expressivity without a relevant additional computational cost.
•Attention down-sampling: A convolutional layer with kernel size 2 and stride 1 is used to downsample features between the levels of the multipole attention.
•Attention up-sampling: Transposed convolution (kernel size 2, stride 1) is used to upsample the features after the windowed attention at each hierarchical level.
•att dropout: Dropout rate applied within the attention block.
•Window size: Local attention is applied in windows of size 2×2.
•local attention stride: Stride with which local windows are applied; controls overlap in attention.
•positional encoding: Whether explicit positional encodings are added; set to false in our setting.
•learnable pe: Whether the positional encoding is learnable; also disabled here.
•pos enc coeff: Scaling coefficient for positional encodings, if used; null since not applicable.

Hyperparameter                          Value
channels                                3
patch size                              1
domain dim                              2
stack regular grid                      true
dim                                     128
dim head                                32
mlp dim                                 128
depth                                   8
heads                                   4
emb dropout                             0.1
Attention down-sampling conv            kernel size 2, stride 1
Attention up-sampling conv transpose    kernel size 2, stride 1
Attention sampling rate                 2
att dropout                             0.1
window size                             2
local attention stride                  1
positional encoding                     false
learnable pe                            false
pos enc coeff                           null
Table 4. MANO hyperparameters for Darcy flow.

C. Implementation details

All our experiments are implemented in PyTorch.

C.1. Model checkpoints

Our experiments in image classification use the following pre-trained models from HuggingFace on ImageNet [15]:
• ViT-base, available at https://huggingface.co/google/vit-base-patch16-224
• DeiT-small, available at https://huggingface.co/facebook/deit-small-patch16-224
• SwinV2, available at https://huggingface.co/timm/swinv2_tiny_window8_256.ms_in1k
We initialize our MANO model by loading the full weights of the pretrained SwinV2-Tiny.

D. Data Augmentation

During training, in the case of image classification, we apply standard data augmentations to improve generalization.
Specifically, the training pipeline includes:
•Resize to a fixed resolution, matching the input size expected by the pretrained models;
•RandomCrop with a crop size equal to the resized resolution, using a padding of 4 pixels;
•RandomHorizontalFlip;
•ToTensor conversion;
•Normalize using dataset-specific mean and standard deviation statistics.
At test time, images are resized (if necessary), converted to tensors, and normalized using the same statistics as in training. For numerical simulations, we do not apply any data augmentation. | 5 | 1 | The model described has approximately 28 million parameters, which is comparable to other lightweight vision transformer models known to train in a reasonable timeframe. Considering the architecture's efficiency with linear complexity for attention, and the fact it utilizes a modified Swin Transformer backbone, a typical training setup on image datasets can expect around 5 hours for training across multiple epochs. The model requires less computational overhead due to its optimized attention mechanism and smaller parameter count. The training procedure includes a warm-up and cosine annealing schedule, generally requiring more time but manageable under a single GPU setup. Given these factors, it is feasible to train this model within 8 hours using a single powerful GPU such as an NVIDIA RTX 3080 or Tesla V100, thus allowing it to meet this criterion. | yes | Yes | CV | Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics | 2025-07-03T00:00:00.000Z | [https://github.com/AlexColagrande/MANO] | 1 | Code Downloads Dynamically upon naming | same | Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics.ipynb | Yes | It starts and runs successfully |
Food-101 | MANO-tiny | [] | Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics | 2025-07-03T00:00:00 | https://arxiv.org/abs/2507.02748 | [
"https://github.com/AlexColagrande/MANO"
] | {'Accuracy (%)': '82.48'} | [
"Accuracy (%)",
"Accuracy"
] | Given the following paper and codebase:
Paper: Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
Codebase: https://github.com/AlexColagrande/MANO
Improve the MANO-tiny model on the Food-101 dataset. The result
should improve on the following metrics: {'Accuracy (%)': '82.48'}. You must use only the codebase provided.
| arXiv:2507.02748v1 [cs.CV] 3 Jul 2025
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
Alex Colagrande1, Paul Caillon1, Eva Feillet1, Alexandre Allauzen1,2
1Miles Team, LAMSADE, Université Paris Dauphine-PSL, Paris, France
2ESPCI PSL, Paris, France
{name}.{surname}@dauphine.psl.eu
Abstract
Transformers have become the de facto standard for a wide range of tasks, from image classification to physics simulations. Despite their impressive performance, the quadratic complexity of standard Transformers in both memory and time with respect to the input length makes them impractical for processing high-resolution inputs. Therefore, several variants have been proposed, the most successful relying on patchification, downsampling, or coarsening techniques, often at the cost of losing the finest-scale details. In this work, we take a different approach. Inspired by state-of-the-art techniques in n-body numerical simulations, we cast attention as an interaction problem between grid points. We introduce the Multipole Attention Neural Operator (MANO) that computes attention in a distance-based multiscale fashion. MANO maintains, in each attention head, a global receptive field and has a linear time and memory complexity with respect to the number of grid points. Empirical results on image classification and Darcy flows demonstrate that MANO rivals state-of-the-art models, such as ViT and Swin Transformer, while reducing runtime and peak memory usage by orders of magnitude. We open-source our code for reproducibility at: https://github.com/AlexColagrande/MANO

1. Introduction

Convolutional Neural Networks (CNNs) have formed the cornerstone of modern computer vision [22, 29, 31].
Their architectural design leverages the spatial locality and translational invariance properties of images by applying shared convolutional filters over local receptive fields, enabling an efficient parameter usage and a strong inductive bias for grid-structured data. In recent years, Vision Transformers (ViTs) [16] have emerged as an alternative to CNNs. They are based on the Transformer architecture [54] introduced in the field of Natural Language Processing (NLP) for sequence-to-sequence learning. This neural architecture is characterized by the use of the self-attention mechanism [2] that allows modeling global contextual information across the tokens of a text or the patches of an image. Despite lacking the strong locality priors of CNNs, attention-based architectures have demonstrated competitive performance in image classification, particularly when trained on large-scale datasets [44]. Beyond computer vision and NLP, Transformer-based models have found application in scientific machine learning, particularly in the resolution of Partial Differential Equations (PDEs). PDEs constitute the fundamental mathematical framework for modeling a vast array of phenomena across the physical and life sciences, from molecular dynamics to fluid flows and climate evolution. Substantial efforts have been devoted to approximating the solution operators of such equations at scale. Classical numerical solvers, including finite difference [14], finite element [13], and spectral methods [6], discretize the underlying continuous operators, thereby recasting the problem as a finite-dimensional approximation. More recently, the increasing availability of observational data on structured grids has fostered a paradigm shift towards data-driven approaches such as Physics-Informed Neural Networks (PINNs) [24, 39, 46].
PINNs harness these observations to learn PDE solutions directly, enforcing physical consistency through soft constraints without relying on explicit mesh-based formulations. However, like classical numerical solvers, PINNs are typically designed to approximate the solution of a specific PDE instance, for example, computing the solution corresponding to a fixed coefficient, boundary or initial condition. This means even minor variations in input parameters require re-solving the system or, in the case of neural models, costly re-training. In contrast, operator learning [25] targets a fundamentally more ambitious goal: to approximate a mapping between infinite-dimensional function spaces. Although considerably more challenging, operator learning offers the advantage of generalizing across input conditions without further optimization, offering a scalable and computationally efficient alternative to traditional point-wise solvers. Note that operator learning is not restricted to PDEs, as images can naturally be viewed as real-valued functions on 2-dimensional domains. As in computer vision, recent neural operator models benefit from the development of attention-based architectures [1, 3]. However, attention suffers from quadratic time and memory complexity with respect to the input size, making it impractical for high-resolution data. To tackle this, some variants of the attention mechanism process data in local patches or down-sample the input, drastically cutting computational cost but often sacrificing the fine-grained details crucial for dense-prediction tasks. Other methods replace full attention with low-rank approximations [55], sparsity-inducing schemes [59], or kernel-inspired formulations [12]. Alternatively, Synthesizer proposes to learn attention weights without relying on explicit query-key products [49]. However, these approaches often trade off runtime for expressiveness or ease of implementation.
In this work, we propose an efficient variant of the attention mechanism specifically suited for image classification as well as dense-prediction tasks such as physical simulations. Our method achieves computational gains by relaxing the classical attention formulation while preserving performance through a retained global context. We propose the Multipole Attention Neural Operator (MANO), a novel transformer neural operator in which each head computes attention between a point and a multiscale expansion of the input centered at that point. The attention is performed against a hierarchical decomposition of the input, dynamically downsampled based on the query location. Importantly, we compute the query, key and value matrices Q, K and V at every scale using the same point-wise operator to allow the model to accept inputs at any resolution. Our contributions are as follows:
• We propose the Multipole Attention Neural Operator (MANO) that formulates attention as an interaction problem and solves it using the Fast Multipole Method.
• By combining MANO with the SwinV2 architecture, we improve transfer learning results on several image classification tasks.
• MANO achieves state-of-the-art results on Darcy flow simulation benchmarks, matching, and sometimes surpassing, state-of-the-art baselines.

2. Related work

The Vision Transformer (ViT) [16] was the first to successfully adapt the Transformer architecture to image classification, achieving remarkable performance. It divides the input image into fixed-size patches, flattens them into token embeddings, adds positional encodings, and processes the resulting sequence with a Transformer encoder. When pretrained on large datasets such as ImageNet-21k [15] or LVD-142M [44], ViTs rival or exceed CNNs on image classification tasks.
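The ViT patchification step described above can be sketched in PyTorch: a strided convolution simultaneously splits the image into non-overlapping patches and linearly projects each one. The sizes below (224px input, 16px patches, 192-dim embeddings) are illustrative, not the paper's settings.

```python
# Minimal sketch of ViT-style patch embedding (illustrative sizes).
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=192):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A conv with kernel = stride = patch size is the standard way to
        # patchify and linearly project in one operation.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))  # learned positions

    def forward(self, x):                 # x: (B, C, H, W)
        x = self.proj(x)                  # (B, dim, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)  # (B, N, dim) token sequence
        return x + self.pos

tokens = PatchEmbed()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 196, 192])
```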
However, despite their efficiency, they suffer from limited local information interaction and a single-feature representation, and therefore low-resolution outputs, making them sub-optimal for dense prediction tasks. These limitations have motivated a number of efficient vision transformer variants.

2.1. Efficient Vision Transformer Variants

Swin Transformers. The Swin Transformer [35] restricts self-attention to non-overlapping windows that are shifted between layers, yielding hierarchical, multi-scale representations without global attention. Swin Transformer V2 [36] augments this design with learnable-temperature scaled cosine attention, log-spaced relative position bias, and continuous pre-norm, improving high-resolution stability and enabling deeper networks, all while preserving the original's efficient window-based computation.

Distilled and Compact ViTs. TinyViT [58] uses pretraining-stage distillation from a large teacher (e.g., Swin-B/L trained on ImageNet-21k). By caching teacher logits and applying neural architecture search under FLOPs/parameter constraints, TinyViT produces smaller models at only a small performance loss. Data-Efficient Image Transformers (DeiT) [51] add a learnable distillation token that learns from a CNN teacher's soft logits. Later work [52] adds self-supervised distillation and token pruning for further efficiency. Collectively, these efforts have greatly extended ViT applicability across resource-constrained tasks. However, the inherent multi-scale structure of images remains only partially integrated into existing alternatives to the attention mechanism, potentially hindering the overall performance.

2.2. Operator Learning via Multipole Attention

In this work, we illustrate the interest of our proposed multipole attention mechanism for learning solution operators of PDEs directly from input-output pairs, as encountered in tasks like fluid flow estimation and other dense prediction problems [25].
Operator learning was first explored by Lu et al. [37], who established a universal approximation theorem for nonlinear operators using DeepONets, laying theoretical foundations for neural operator approximation. Building on this foundation, the Fourier Neural Operator (FNO) [33] parameterizes an integral kernel in the Fourier domain, using efficient FFT-based convolutions to capture global interactions across the entire domain. These pioneering methods have since inspired a wealth of extensions, but their reliance on global or Fourier-based interactions limits their scalability to very high-resolution grids.

Figure 1. (Left) The multi-scale grid structure. (Center) The V-cycle structure for computing multipole attention with the fast multipole method. (Right) Attention matrices. Illustration with three levels. The attention matrix A is computed in a multiscale manner with respect to each level. The higher the level, the shorter the range of the interaction. At a given layer, down-sampling (resp. up-sampling) is performed using a convolution kernel (resp. deconvolution) shared across all different levels.

Transformer neural operators. In [10] the classical transformer was adapted for the first time to operator learning problems related to PDEs. The paper explores two variants, respectively based on the Fourier transform and on the Galerkin method. The latter uses a simplified attention-based operator, without softmax normalization. This solution shares the linear complexity with our work but not the same expressivity. In this line of work, LOCA [34] uses kernel theory to map the input functions to a finite set of features and attends to them by output query location. Recently, [9] proposed to handle attention in a continuous setting and, as well as [56], proposed an operator-learning version of the more classical ViT.
Notably, the Universal Physics Transformer (UPT) [1] scales efficiently based on a coarsening of the input mesh.

Multiscale numerical solvers. Our method is inspired by multi-scale numerical solvers [7, 8, 21], in particular the Fast Multipole Method (FMM). A new version of the Fast Multipole Method was introduced by [19] for the evaluation of potential fields in three dimensions, along with its specialization to the V-cycle algorithm, introduced by [43].

Theoretical studies on transformers. The particle-interaction interpretation of attention was first introduced in Sinkformer [48]. [17] views Transformers as interacting particle systems and describes the geometry of learned representations when the weights are not time-dependent, and [18] developed a mathematical framework for analyzing Transformers based on their interpretation as interacting particle systems, inspiring us to compute the attention using the most efficient techniques available for solving particle-interaction problems.

Multiscale neural architectures. Several transformer architectures related to the multiscale principle used in our method were proposed in the one-dimensional setting of Natural Language Processing (NLP) [23, 41, 60, 62], and in graph learning methods [32, 40, 61].

Relation to Fast Multipole Attention. Among existing approaches, the closest to ours is Fast Multipole Attention (FMA) [23], which reduces the O(N^2) cost of 1D self-attention via hierarchical grouping: nearby queries attend at full resolution, while distant keys are merged into low-rank summaries, achieving O(N log N) or O(N). Our method differs in two key aspects:
•Input domain. FMA targets one-dimensional token sequences; we operate on two-dimensional image grids with multiscale spatial windows.
•Downsampling. FMA hierarchically downsamples queries, keys, and values.
In contrast, we downsample the input feature map prior to attention, yielding a self-contained block that integrates seamlessly with standard transformer backbones (e.g., SwinV2) and preserves pretrained attention weights.

3. Introducing MANO

3.1. Attention as an interaction problem

In this section, we cast the computation of self-attention as a dense n-body interaction problem. An n-body system consists of n entities (often referred to as bodies) whose state is described by a configuration (x_1, ..., x_n). The evolution of such a system is governed by a set of interaction laws, which in our setting are determined by pairwise interactions specified through a kernel function:

κ: R^d × R^d → R, (x_i, x_j) ↦ κ(x_i, x_j).

An n-body simulation refers to a numerical method for computing these interactions, typically requiring O(n^2) operations due to the dense pairwise structure.

Figure 2. Darcy flow reconstruction: from left to right, input coefficient field, ground truth solution, MANO prediction, and ViT predictions using patch sizes 2, 4, and 8. MANO applies multipole attention using overlapping windows of size 2, and performs downsampling and upsampling across 5 levels using convolutions with kernel size 2×2, stride 2, and zero padding.

This computational cost motivates the development of faster approximations, such as the Fast Multipole Method, which reduces the complexity to O(n). In the following, we consider a regular grid, corresponding either to the pixels of an image in the classification setting or to the discretized samples of an input function in the operator learning framework. We denote by X a sequence of N observations (x_1, ..., x_N)^T ∈ R^{N×d} with elements embedded in dimension d. The self-attention mechanism first applies three learnable linear projections to obtain queries, keys and values [54]:

Q = XW_q, K = XW_k, V = XW_v, (1)

with W_q, W_k, W_v ∈ R^{d×d} and b_q, b_k, b_v ∈ R^d.
Next, it computes an N×N attention matrix A whose i-th row forms a probability distribution over all keys:

A_ij = exp(Q_i^T K_j / √d) / Σ_{l=1}^{N} exp(Q_i^T K_l / √d). (2)

Finally, each token is updated as a convex combination of the value vectors: x_i ← Σ_j A_ij V_j. In this form, one can view the set {x_i}_{i=1}^{N} as a cloud of N particles in R^d interacting in a pairwise manner via a kernel κ defined as:

κ(Q_i, K_j) = exp(Q_i^T K_j / √d). (3)

Therefore, we interpret a self-attention layer as a single time step of a discretized N-body dynamical system. Under this analogy, computing attention is equivalent to predicting the next state of an interacting particle system, and it becomes natural to accelerate this computation using the FMM [19], reducing the usual O(N^2) cost of the pairwise sums to O(N).

3.2. MANO

In this section we detail the Multipole Attention layer as well as the complexity of the model.

Method Overview: Let X_0 = X ∈ R^{H×W×d} be the original high-resolution image (height H, width W, embedding dimension d). We define L levels of downsampling by a convolutional kernel D with weights shared across levels, producing X_ℓ = D(X_{ℓ-1}) (ℓ = 1, ..., L), where X_ℓ ∈ R^{H/2^ℓ × W/2^ℓ × d}. At each level ℓ, we partition the feature map into potentially overlapping sliding windows and, within each window, compute the attention map A_ℓ = Softmax(Q_ℓ K_ℓ^T / √d), where Q_ℓ, K_ℓ, V_ℓ ∈ R^{(H/2^ℓ × W/2^ℓ) × d} are the query, key, and value embeddings extracted from X_ℓ. This restricts self-attention to localized neighborhoods while still enabling cross-window interactions via the sliding overlap and the hierarchical mixing. We then produce the attended features X̃_ℓ = A_ℓ V_ℓ, and upsample back to the next-finer resolution via a transposed convolution U:

X̂_ℓ = U(X̃_ℓ) ∈ R^{H/2^{ℓ-1} × W/2^{ℓ-1} × d}. (4)

Finally, we combine all levels by summation at the original resolution: X_out = Σ_{ℓ=0}^{L} U^ℓ(Attn(X_ℓ)), where Attn(X_ℓ) denotes X̃_ℓ at level ℓ, and U^0 is the identity.
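The method overview above can be sketched as a small PyTorch module: a shared strided convolution D builds the level pyramid, a shared windowed attention is applied at every level, and a shared transposed convolution U lifts each level's output back to the finest grid, where all levels are summed. Dimensions, the non-overlapping windows, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the exact MANO implementation.

```python
# Sketch of the multiscale attention of Eq. (4); sizes are illustrative.
import torch
import torch.nn as nn

class MultipoleAttentionSketch(nn.Module):
    def __init__(self, dim=32, heads=4, window=4, levels=3):
        super().__init__()
        self.window, self.levels = window, levels
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # shared across levels
        self.down = nn.Conv2d(dim, dim, kernel_size=2, stride=2)         # D, shared
        self.up = nn.ConvTranspose2d(dim, dim, kernel_size=2, stride=2)  # U, shared

    def _windowed_attn(self, x):              # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = self.window
        # Partition into non-overlapping w x w token windows.
        t = x.reshape(B, C, H // w, w, W // w, w)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        t, _ = self.attn(t, t, t)             # attention within each window
        t = t.reshape(B, H // w, W // w, w, w, C)
        return t.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

    def forward(self, x):                     # x: (B, C, H, W)
        pyramid = [x]
        for _ in range(self.levels - 1):      # X_l = D(X_{l-1})
            pyramid.append(self.down(pyramid[-1]))
        out = 0
        for lvl, xl in enumerate(pyramid):
            y = self._windowed_attn(xl)       # Attn(X_l)
            for _ in range(lvl):              # U^l: back to the finest grid
                y = self.up(y)
            out = out + y                     # X_out = sum over levels
        return out

y = MultipoleAttentionSketch()(torch.randn(2, 32, 16, 16))
print(y.shape)  # torch.Size([2, 32, 16, 16])
```

Note the design choice from the paper: because D, U, and the attention weights are shared across levels, the parameter count is independent of the number of levels.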
Sharing the same convolutional kernel for both down-sampling and up-sampling, and reusing the same attention weights, keeps the total parameter count constant, regardless of the number of levels L. The convolutions have the role of providing a representation of the input at the next scale, independently of the scale. This ensures that an attention map learned at the finest scale produces effective representations at different scales, even in the case of pretrained attention from a windowed-attention-based model such as SwinV2, with finetuning on the convolutional parameters but not on the attention weights. The shared convolutions act as scale-agnostic projectors, producing the next-scale feature representation in the same way at every level. As a result, an attention map learned at the finest resolution remains effective across all scales. In practice, this means one can take a windowed-attention backbone, such as SwinV2, freeze its attention weights, and fine-tune only the convolutional parameters to adapt it to new resolutions without increasing model size.

Computational Complexity. We analyze the cost under non-overlapping windows on a square image of side H (so N = H^2 tokens), embedding dimension d, window size w, and down-sampling factor k. The maximum number of levels is L = log_k(H). We denote by N_ℓ = N / k^{2ℓ} the number of grid points at level ℓ. Each windowed self-attention on M = w^2 tokens costs O(M^2 d) and is applied across O(N_ℓ / M) windows, for a total complexity of

O(N_ℓ M d). (5)

Windowed attention computes a standard self-attention, with a complexity of O(M^2 d), within each of the N/M windows. The total complexity is thus O(NMd).
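The per-level cost just derived forms a geometric series over levels, so summing it stays within a constant factor of the finest-level cost alone. A quick numeric check (sizes below are illustrative):

```python
# Numeric check: summing the windowed-attention cost N_l * M * d over
# levels l = 0..L with N_l = N / k^(2l) stays below the geometric-series
# bound k^2 / (k^2 - 1) times the finest-level cost N * M * d.
H, d, w, k = 256, 96, 8, 2          # illustrative sizes
N, M = H * H, w * w                 # token count and tokens per window
L = 5                               # number of coarsening levels
total = sum((N // k ** (2 * l)) * M * d for l in range(L + 1))
finest = N * M * d
print(total / finest)               # a bit above 1.33, i.e. < 4/3 bound for k=2
```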
Our Multipole attention iterates the same windowed attention and aggregates the contribution at each level ℓ; therefore the total complexity reads

Σ_{ℓ=0}^{L} N_ℓ M d = Σ_{ℓ=0}^{L} O((N / k^{2ℓ}) M d) = O(NMd). (6)

So, interestingly, even though we apply windowed attention across multiple scales, the total cost remains dominated by the finest-scale pass, with coarser levels adding a negligible additional overhead. As a result, our approach preserves the linear complexity of single-scale windowed attention while delivering significantly greater expressive power.

4. Experimental settings

This section outlines the experimental setup for both image classification and physics simulations. For image classification, we evaluate a Swin Transformer V2 modified with our proposed attention mechanism on several fine-grained datasets. Models are initialized with weights pretrained on ImageNet-1k. The encoder is frozen (except for the additional convolutions of MANO) and a linear classifier is learnt on the target classification tasks. For physics simulations, we train all models on instances of the Darcy flow problem, from scratch and across different resolutions.

4.1. Image classification

Datasets. The ImageNet-1k [15] dataset is used to pretrain all models and we perform linear probing on several downstream classification benchmarks, namely CIFAR-100 [28], Oxford Flowers-102 [42], Stanford Cars [26], Food-101 [4], Tiny-ImageNet-202 [30] and the Oxford-IIIT Pet Dataset [45].

Architecture. As the backbone for our Multipole Attention model, we adopt the "Tiny" version of the Swin Transformer V2 [36]. Since the attention block is shared across all levels of the multipole hierarchy, MANO can inherit the pretrained weights from the original Swin Transformer, requiring only a small number of additional trainable parameters: one convolution and one transposed convolution per attention head, along with the classification head. Convolutions have a kernel size of 2 and a stride of 2.
This design ensures that our variant introduces a minimal increase in parameter count relative to the base model. Specifically, for the Tiny version, the total number of parameters increases from 27.73M to 28.47M, corresponding to an additional 740,356 parameters, or 2.67% more than the original model.

Training. We freeze the pretrained encoder weights and train a single fully connected layer for 50 epochs on top of the frozen encoder, using AdamW as optimizer with a cosine annealing learning rate schedule. Training the models with the original shifted attention of SwinV2 and the models with our proposed multipole attention differs only by a warm-up phase to learn the upsampling and downsampling convolutional filters introduced by MANO. We report the resulting top-1 accuracy of these experiments in Table 1.

4.2. Darcy flow simulation

Task. We evaluate our method on the task of steady-state 2D Darcy flow simulation, a widely used task in the neural operator literature [25]. The problem is based on the following second-order, linear elliptic PDE:

-∇ · (a(x) ∇u(x)) = f(x), x ∈ (0,1)^2, (7)

to solve with homogeneous Dirichlet boundary conditions: u(x) = 0, x ∈ ∂(0,1)^2. In this PDE, the function a(x) represents the spatially varying permeability of a porous medium. The forcing term f(x) is fixed to f(x) ≡ 1 across all inputs. The output u(x) is the scalar field representing the pressure within the domain. Although the PDE is linear in u, the map from the input a(x) to the solution u(x) is nonlinear due to the interaction of a(x) with the gradient operator inside the divergence. The task is to learn this solution operator: given a new input field a(x), the model must predict the corresponding output u(x). In our experiments, a(x) is sampled as a binary field (i.e., values are either 0 or 1), representing a medium composed of two different materials.

Architecture. We use a classical transformer architecture of depth 8 with 4 attention heads per layer.
In place of the conventional self-attention, we employ our multipole attention module. Additionally, we apply Layer Normalization at every level to improve training stability and mitigate issues of vanishing or exploding gradients, which can arise due to shared attention across hierarchical levels.

Model             Params.  Complexity  Tiny-IN-202  Cifar-100  Flowers-102  Food-101  StanfordCars-196  OxfordIIITPet
TinyViT [58]      21M      O(N^2)      -            75.2%      82.4%        -         61.7%             86.5%
ViT-base [16]     86M      O(N^2)      73.07%       80.63%     92.75%       80.31%    41.95%            87.68%
DeiT-small [52]   22M      O(N^2)      81.34%       75.15%     66.60%       71.39%    36.38%            87.56%
SwinV2-T [36]     28M      O(N)        80.53%       75.47%     56.46%       76.96%    38.36%            87.14%
MANO-tiny         28M      O(N)        87.52%       85.08%     89.00%       82.48%    65.68%            88.31%
Table 1. Linear probing accuracies for several image classification datasets. MANO matches or even outperforms state-of-the-art models. The results for TinyViT are taken from [58], while all other baselines are fine-tuned via linear probing using pretrained backbones. For each model, we report the number of parameters and the asymptotic complexity of its attention block with respect to the number of patches N. As a plug-and-play replacement for the attention mechanism, our attention can be applied after patching, similar to Swin, allowing our vision-specific MANO variant to scale linearly with the number of patches.

Training. We train all the considered models for 50 epochs with the AdamW optimizer and a cosine learning rate scheduler. The initial learning rate is on the order of 10^-4. We use the dataset open-sourced in [25], comprised of input-output pairs (a, u) at resolutions n×n for n ∈ {16, 32, 64}. The model is trained to minimize the mean squared error (MSE) on the training set and evaluated on a held-out test set using the relative error ‖û − u‖_2 / ‖u‖_2, where û is the model prediction and u the ground truth solution.

5. Results

5.1.
Image Classification Results

Table 1 presents the top-1 accuracy of five models (TinyViT (21M parameters), SwinV2-T (28M), ViT-base (86M), DeiT-small (22M) and MANO (28M)) across six downstream image classification datasets. The reported results for TinyViT are taken from [11, 58], while the results for SwinV2-T are taken from [36]. First, MANO consistently outperforms TinyViT on all benchmarks where results are available. It also surpasses DeiT-small and SwinV2-T when these models are fine-tuned via linear probing using ImageNet-1k pretrained weights, across an expanded set of datasets. Compared with the bigger ViT-base, our model performs better on all the benchmarks except Flowers-102, the dataset with the smallest training set among those considered. This suggests that in low-data regimes, models with a higher parameter count may have an advantage due to their increased capacity to memorize or adapt to limited supervision. The improvement ranges from about 1–2% on easier tasks like Oxford-IIIT Pet to nearly 5–7 points on more challenging datasets such as CIFAR-100 and Tiny-ImageNet. Even compared to ViT-base, which has more than twice the number of parameters, MANO achieves gains of roughly 3–10 points across all the benchmarks except for Flowers-102, demonstrating that multiscale hierarchical attention produces significantly more transferable features without increasing model size too much. Second, the advantage of MANO becomes especially pronounced on fine-grained classification tasks. On Flowers-102 and Stanford Cars, SwinV2-T achieves only 56.5% and 38.4% accuracy, respectively, while TinyViT recovers to 82.4% and 61.7%.
In both cases, MANO further improves performance to 89.0% on Flowers-102 and 65.7% on Cars, indicating that combining local details (e.g., petal shapes or headlight contours) with global context (overall flower appearance or car silhouette) is critical for distinguishing highly similar classes. Third, on medium-difficulty datasets such as Tiny-ImageNet-200 and CIFAR-100, MANO again holds a clear lead. It outperforms SwinV2-T by approximately 7 points on Tiny-ImageNet-200 and by around 10 points on CIFAR-100. These results suggest that attending to multiple resolutions, capturing both fine textures and broader scene structures, yields better representations than the single-scale windowed attention used in SwinV2-T. Finally, although MANO and SwinV2-T share the same parameter count (28M), MANO delivers a consistent 5–10 point advantage on mid-level benchmarks and maintains a smaller lead on easier tasks like Oxford-IIIT Pet. TinyViT's 21M parameters are insufficient to match either 28M model, underscoring that hierarchical multiscale attention makes more effective use of model capacity than either pure global self-attention (ViT) or fixed-window local attention (SwinV2-T).

5.2. Darcy Flow Simulation Results

| Model | 16×16 | 32×32 | 64×64 |
|---|---|---|---|
| FNO | 0.0195 | 0.0050 | 0.0035 |
| ViT patch size=8 | 0.0160 | 0.0038 | 0.0021 |
| ViT patch size=4 | 0.0179 | 0.0039 | 0.0019 |
| ViT patch size=2 | 0.0169 | 0.0049 | 0.0026 |
| Local Attention | 0.0133 | 0.0188 | 0.0431 |
| MANO | 0.0080 | 0.0020 | 0.0013 |

Table 2. Benchmark on Darcy flow simulations (relative MSE). Each model is trained and tested at the same resolution, either 16×16, 32×32, or 64×64.

Table 2 presents the relative MSE of various models trained from scratch on Darcy flow at different resolutions. The Fourier Neural Operator (FNO) [33] is a neural operator designed to learn mappings between function spaces, such as the coefficient-to-solution map for PDEs.
By operating in the Fourier domain, the FNO captures long-range dependencies across the entire domain with near-linear complexity O(N² log N) (for an N×N grid), making it effective for a wide range of PDE-based tasks and leading to state-of-the-art results on problems such as Darcy flow simulation. It achieves MSEs of 0.0195, 0.0050, and 0.0035 as the grid is refined, showcasing its strength in capturing global spectral components but its limited ability to resolve fine-scale details at coarser resolutions.

For ViT, we evaluate patch sizes of 8, 4, and 2: the patch-4 variant attains errors of (0.0179, 0.0039, 0.0019), whereas smaller patches (size 2) slightly worsen performance at low resolution and fail to match patch-4 at 64². This sensitivity indicates that vanilla ViT's pure global attention is capable of approximating the solution operator but depends heavily on patch granularity. In contrast, a pure local-attention model (fixed window) degrades sharply at 32² and 64², since local windows cannot propagate long-range dependencies across the domain.

By combining fine-grid attention (to capture local conductivity channels) with progressively coarser resolutions (to model global pressure fields), MANO consistently achieves the lowest errors, roughly halving the MSE of both FNO and standard ViT at every scale and overcoming the locality limitations inherent in fixed-window attention.

Lastly, we examine in Figure 2 the quality of the reconstructed images, demonstrating that MANO's multiscale modeling recovers both sharp transitions and smooth boundaries with high fidelity, even on a coarse grid. By contrast, images reconstructed by ViT exhibit noticeable patching artifacts. In summary, MANO's multiscale hierarchical attention achieves state-of-the-art performance on both Darcy flow simulations and image classification tasks.
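The relative-MSE metric used in Table 2 can be computed in a few lines. The sketch below is illustrative (plain Python, not the authors' evaluation code), and assumes the squared-norm form ‖û − u‖₂² / ‖u‖₂² implied by "relative MSE":

```python
def relative_mse(u_hat, u):
    """Relative MSE: squared L2 error of the prediction,
    normalized by the squared L2 norm of the ground truth."""
    num = sum((a - b) ** 2 for a, b in zip(u_hat, u))
    den = sum(b ** 2 for b in u)
    return num / den

# A perfect prediction scores 0; predicting all zeros scores 1,
# since the error norm then equals the signal norm.
print(relative_mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(relative_mse([0.0, 0.0, 0.0], [1.0, 2.0, 3.0]))  # 1.0
```

Because the error is normalized by the solution's own magnitude, scores are comparable across the 16×16, 32×32, and 64×64 resolutions reported above.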
Its design makes it well suited to the corresponding data, as it captures fine-scale detail and broad-scale context simultaneously.

Hyperparameters. A detailed table of hyperparameters is provided in the Appendix; below, we outline our main design choices. The "number of levels" specifies how many hierarchical scales are included in the multipole attention; with a window size of 8 for the windowed attention, we achieve the best performance using the maximum of 3 levels. Due to Swin's built-in downsampling between stages, this corresponds to 3 levels across the first two layers, 2 levels for the next two layers, and a single level for the remaining eight layers. To coarsen the input grid, we compared average pooling versus learned convolutions; convolutions consistently outperformed pooling. For upsampling, transpose convolutions outperformed nearest-neighbor interpolation. When used, kernel size and stride refer to these convolutional operations.

¹Checkpoints are available at https://huggingface.co/google/vit-base-patch16-224 (ViT-base), https://huggingface.co/facebook/deit-small-patch16-224 (DeiT-Small), and https://huggingface.co/timm/swinv2_tiny_window8_256.ms_in1k (SwinV2-T).

Figure 3. Ablation study comparing average pooling (green) and learnable convolutions (blue) for the sampling step in MANO. We report the Cross-Entropy validation loss (left) and accuracy (right) on CIFAR-100. Mean and standard deviation over fifteen runs are reported, varying the learning rate between 10⁻³ and 10⁻⁴.

As shown in Figure 3, using learned convolutions for both down- and up-sampling significantly improves expressivity: even with pretrained attention weights, convolution-based sampling enables a windowed attention trained at one resolution to transfer effectively to another.
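The pooling-versus-convolution choice above can be illustrated numerically. The NumPy sketch below is illustrative only (the model itself uses 2D convolution layers): a learned 2×2 strided convolution strictly generalizes 2×2 average pooling, since fixing all four weights to 1/4 reproduces pooling exactly, while training is free to move the weights elsewhere; a 2×2 transposed convolution plays the matching upsampling role.

```python
import numpy as np

def downsample_conv(x, kernel):
    """2x2 convolution with stride 2 over a single-channel grid."""
    h, w = x.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            out[i, j] = np.sum(x[2*i:2*i+2, 2*j:2*j+2] * kernel)
    return out

def upsample_transpose(y, kernel):
    """2x2 transposed convolution with stride 2: each coarse value
    is spread over a 2x2 block, weighted by the kernel."""
    h, w = y.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            out[2*i:2*i+2, 2*j:2*j+2] += y[i, j] * kernel
    return out

x = np.arange(16.0).reshape(4, 4)
avg_kernel = np.full((2, 2), 0.25)             # fixed: average pooling
learned = np.array([[0.5, 0.1], [0.1, 0.3]])   # an arbitrary "learned" kernel

pooled = downsample_conv(x, avg_kernel)        # identical to 2x2 avg-pooling
coarse = downsample_conv(x, learned)           # learned coarsening
fine = upsample_transpose(coarse, learned)     # back to the fine grid
```

Note that the same kernel can be reused at every level of the hierarchy, matching the weight sharing described below.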
Note that a single convolutional kernel is reused for all downsampling operations, and a separate kernel is reused for all upsampling operations.

6. Discussion

Hierarchy depth in vision vs. physics. In our image classification experiments, we follow SwinV2-T's architecture and compute attention at three hierarchical levels in the early stages, two in the middle, and one at the end of the encoder. For Darcy flow grids, we set the downsampling steps so that the coarsest scale is 2×2, the smallest possible when using a 2×2 window for the attention. We found that increasing the number of levels consistently improves performance; we note, however, that in physics simulation the number of levels can be treated as a hyperparameter and tuned based on input resolution and the desired balance between local and global interactions.

Limitations. MANO's current design uses a fixed, static hierarchy and attention parametrization. While effective, it could benefit from learnable scale selection and explicit cross-level interactions to better capture multi-scale couplings. Additionally, we assume a uniform grid to discretize the input. This simplifies implementation but fails to capture regions with steep gradients or intricate boundaries in physics simulations. Introducing adaptive meshing would thus improve accuracy and efficiency for localized phenomena that otherwise demand a higher-resolution simulation, but would require redefining attention over nonuniform spatial supports. Finally, our current implementation could be accelerated by a hardware-optimized GPU kernel to further reduce runtime.

Future Directions. Beyond steady-state Darcy flow, MANO can naturally extend to time-dependent PDEs by integrating with recurrent timestepping or operator-splitting schemes, preserving its ability to capture both spatial multiscale structure and temporal evolution.
We also plan to extend our method to unstructured meshes and irregular domains common in real-world physics simulations. On the computer-vision side, applying MANO to dense prediction tasks, such as semantic segmentation or image inpainting, is promising, since its multiscale attention could dynamically balance local details and global context more effectively than standard U-Net architectures.

7. Conclusion

We propose MANO, an efficient attention-based architecture inspired by n-body methods, which interprets attention as interactions among mesh points. By introducing a distance-based, multiscale attention mechanism, MANO achieves linear time and memory complexity per head while preserving a global receptive field. Across several image classification benchmarks and Darcy flow simulations, MANO matches the accuracy of full-attention models yet substantially reduces runtime and peak memory. Unlike patch-based approximations, it avoids discontinuities and retains long-range dependencies inherent to physical systems. Our results demonstrate that MANO is a scalable alternative for both vision tasks and mesh-based simulations. Future work includes applying MANO to dense vision tasks such as semantic segmentation, and extending it to irregular meshes and a broader class of physical simulations.

Acknowledgments

This work was supported by the SHARP ANR project ANR-23-PEIA-0008 in the context of the France 2030 program and by HPC resources from GENCI-IDRIS (grants AD011015154 and A0151014627).

References

[1] Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, and Johannes Brandstetter. Universal physics transformers: A framework for efficiently scaling neural operators. Advances in Neural Information Processing Systems, 37:25152–25194, 2024. 2, 3, 12

[2] Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate.
In 3rd International Conference on Learning Representations, ICLR 2015, 2015. 1

[3] Cristian Bodnar, Wessel P. Bruinsma, Ana Lucic, Megan Stanley, Anna Allen, Johannes Brandstetter, Patrick Garvan, Maik Riechert, Jonathan A. Weyn, Haiyu Dong, Jayesh K. Gupta, Kit Thambiratnam, Alexander T. Archibald, Chun-Chieh Wu, Elizabeth Heider, Max Welling, Richard E. Turner, and Paris Perdikaris. A foundation model for the earth system. Nature, 641(8065):1180–1187, 2025. 2

[4] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101: Mining discriminative components with random forests. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI, pages 446–461. Springer, 2014. 5

[5] Lise Le Boudec, Emmanuel de Bezenac, Louis Serrano, Ramon Daniel Regueiro-Espino, Yuan Yin, and Patrick Gallinari. Learning a neural solver for parametric PDEs to enhance physics-informed methods. In The Thirteenth International Conference on Learning Representations, 2025. 12

[6] R.N. Bracewell. The Fourier Transform and Its Applications. McGraw Hill, 2000. 1

[7] Achi Brandt. Multi-level adaptive solutions to boundary-value problems. Mathematics of Computation, 31(138):333–390, 1977. 3

[8] William L Briggs, Van Emden Henson, and Steve F McCormick. A Multigrid Tutorial. SIAM, 2000. 3

[9] Edoardo Calvello, Nikola B Kovachki, Matthew E Levine, and Andrew M Stuart. Continuum attention for neural operators. arXiv preprint arXiv:2406.06486, 2024. 3, 12

[10] Shuhao Cao. Choose a transformer: Fourier or Galerkin. Advances in Neural Information Processing Systems, 34:24924–24940, 2021. 3, 12

[11] Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pretraining or strong data augmentations. arXiv preprint arXiv:2106.01548, 2021.
6

[12] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020. 2, 11

[13] P.G. Ciarlet. The Finite Element Method for Elliptic Problems. Society for Industrial and Applied Mathematics, 2002. 1

[14] R. Courant, K. Friedrichs, and H. Lewy. On the partial difference equations of mathematical physics. IBM J. Res. Dev., 11(2):215–234, 1967. 1

[15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009. 2, 5, 11, 14

[16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. 1, 2, 6, 11

[17] Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, and Philippe Rigollet. The emergence of clusters in self-attention dynamics, 2024. 3

[18] Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, and Philippe Rigollet. A mathematical perspective on transformers, 2024. 3

[19] Leslie Greengard and Vladimir Rokhlin. A new version of the fast multipole method for the Laplace equation in three dimensions. Acta Numerica, 6:229–269, 1997. 3, 4

[20] Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. Advances in Neural Information Processing Systems, 34:24048–24062, 2021. 12

[21] Wolfgang Hackbusch. Multi-grid Methods and Applications. Springer Science & Business Media, 2013. 3

[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. 1

[23] Yanming Kang, Giang Tran, and Hans De Sterck. Fast multipole attention: A divide-and-conquer attention mechanism for long sequences, 2024. 3, 12

[24] George Karniadakis, Yannis Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, pages 1–19, 2021. 1, 12

[25] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 24(89):1–97, 2023. 1, 2, 5, 6, 12

[26] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 554–561, 2013. 5

[27] Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. In Advances in Neural Information Processing Systems, pages 26548–26560. Curran Associates, Inc., 2021. 12

[28] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images, 2009. 5

[29] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2012. 1

[30] Yann Le and Xuan Yang. Tiny ImageNet visual recognition challenge. CS 231N, 7(7):3, 2015. 5

[31] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
1

[32] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. Advances in Neural Information Processing Systems, 33:6755–6766, 2020. 3, 12

[33] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations, 2021. 2, 7, 12

[34] Zongyi Li, Nikola Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, et al. Geometry-informed neural operator for large-scale 3D PDEs. Advances in Neural Information Processing Systems, 36:35836–35854, 2023. 3, 12

[35] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021. 2, 11

[36] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12009–12019, 2022. 2, 5, 6, 11

[37] Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193, 2019. 2, 12

[38] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, 2021. 12

[39] Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis.
DeepXDE: A deep learning library for solving differential equations. SIAM Review, 63(1):208–228, 2021. 1

[40] Leon Migus, Yuan Yin, Jocelyn Ahmed Mazari, and Patrick Gallinari. Multi-scale physical representations for approximating PDE solutions with graph neural operators. In Topological, Algebraic and Geometric Learning Workshops 2022, pages 332–340. PMLR, 2022. 3, 12

[41] Tan M. Nguyen, Vai Suliafu, Stanley J. Osher, Long Chen, and Bao Wang. FMMformer: Efficient and flexible transformer via decomposed near-field and far-field attention, 2021. 3, 12

[42] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722–729. IEEE, 2008. 5

[43] G. Of. An efficient algebraic multigrid preconditioner for a fast multipole boundary element method. Computing, 82(2):139–155, 2008. 3

[44] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 1, 2, 11

[45] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3498–3505. IEEE, 2012. 5

[46] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part I): Data-driven solutions of nonlinear partial differential equations, 2017. 1, 12

[47] Tim De Ryck, Florent Bonnet, Siddhartha Mishra, and Emmanuel de Bezenac. An operator preconditioning perspective on training in physics-informed machine learning. In The Twelfth International Conference on Learning Representations, 2024. 12

[48] Michael E. Sander, Pierre Ablin, Mathieu Blondel, and Gabriel Peyré. Sinkformers: Transformers with doubly stochastic attention, 2022.
3

[49] Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. Synthesizer: Rethinking self-attention for transformer models. In International Conference on Machine Learning, pages 10183–10192. PMLR, 2021. 2

[50] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, 34:24261–24272, 2021. 11

[51] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 32–42, 2021. 2, 11

[52] Hugo Touvron, Matthieu Cord, and Hervé Jégou. DeiT III: Revenge of the ViT. In European Conference on Computer Vision, pages 516–533. Springer, 2022. 2, 6, 11

[53] Tapas Tripura and Souvik Chakraborty. Wavelet neural operator: A neural operator for parametric partial differential equations. arXiv preprint arXiv:2205.02191, 2022. 12

[54] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2017. 1, 4

[55] Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020. 2, 11

[56] Sifan Wang, Jacob H Seidman, Shyam Sankaran, Hanwen Wang, George J Pappas, and Paris Perdikaris. CViT: Continuous vision transformer for operator learning. arXiv preprint arXiv:2405.13998, 2024. 3, 12

[57] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 568–578, 2021. 11

[58] Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong Fu, and Lu Yuan. TinyViT: Fast pretraining distillation for small vision transformers. In European Conference on Computer Vision, pages 68–85. Springer, 2022. 2, 6, 11

[59] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. Big Bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283–17297, 2020. 2

[60] Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh. Multi resolution analysis (MRA) for approximate self-attention, 2022. 3, 12

[61] Maksim Zhdanov, Max Welling, and Jan-Willem van de Meent. Erwin: A tree-based hierarchical transformer for large-scale physical systems. CoRR, 2025. 3, 12

[62] Zhenhai Zhu and Radu Soricut. H-Transformer-1D: Fast one-dimensional hierarchical attention for sequences, 2021. 3, 11

A. Extended related work section

In this section, we provide a more comprehensive overview of the related literature, expanding upon the works briefly mentioned in the main text.

Vision Transformers (ViTs) [16] divide each input image into fixed-size patches (e.g., 16×16), flatten them into tokens, add positional embeddings, and process the resulting sequence with a Transformer encoder. When pretrained on large-scale datasets such as ImageNet-21k [15] or LVD-142M [44], ViTs achieve performance on par with or surpassing that of convolutional neural networks (CNNs) on standard image classification benchmarks. Despite these advances, ViTs face several limitations:

1. quadratic computational complexity O(N²) with respect to the number of input patches (N, where typically N ≈ 196 for a 224×224 image);

2.
Absence of built-in locality and translation equivariance, in contrast to CNNs, which makes ViTs more dependent on large training datasets.

3. High computational and memory demands; for instance, ViT-Large/16 contains roughly 300 million parameters and requires thousands of GPU-hours to train [16].

These drawbacks have spurred the development of more efficient ViT variants.

A.1. Efficient Vision Transformer Variants

Several efficient alternatives to the standard attention have been proposed in the literature to address the limitations of Vision Transformers. While they differ in methodology, they have collectively inspired this work.

Linear-Attention Transformers: Linformer [55] projects keys and values into a low-dimensional subspace (k ≪ N), reducing per-head complexity from O(N²) to O(Nk) while retaining competitive accuracy. Performer [12] uses a randomized feature map to approximate softmax(QK⊤) ≈ Φ(Q)Φ(K)⊤, achieving true O(N) time and memory with bounded error. When applied to ViT backbones, these methods handle larger images with much lower memory cost.

All-MLP Architectures: MLP-Mixer [50] differs from both CNNs and ViTs by alternating token-mixing MLPs (mixing across N spatial tokens) and channel-mixing MLPs (mixing across C channels). This yields per-layer complexity O(NC) instead of O(N²), and achieves 84% top-1 on ImageNet-1K (with ImageNet-21k pretraining), demonstrating that dense MLPs can approximate spatial interactions effectively.

Pyramid/Hierarchical ViTs: Pyramid Vision Transformer (PVT) [57] builds a multi-scale pyramid by progressively downsampling tokens: early stages operate on high-resolution grids (H/4 × W/4), and deeper stages use "patch merging" to halve spatial dimensions at each level. Within each stage, Spatial-Reduction Attention (SRA) pools keys/values by a factor r, reducing sequence length from N to N/r² and complexity to O(N · N/r²). PVT matches CNN backbones in detection and segmentation.
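The complexity figures quoted for these variants can be sanity-checked by counting query-key interaction pairs. A rough sketch (illustrative token counts only, not tied to any implementation):

```python
def global_cost(n):
    """Full self-attention: every token attends to every token."""
    return n * n

def linformer_cost(n, k):
    """Linformer: keys/values projected to a k-dim subspace, k << n."""
    return n * k

def sra_cost(n, r):
    """PVT's spatial-reduction attention: keys/values pooled by r^2."""
    return n * (n // r**2)

def windowed_cost(n, m):
    """Windowed (Swin-style) attention: n/m^2 windows of m^2 tokens,
    each costing (m^2)^2 pairs, so n * m^2 in total."""
    return (n // m**2) * m**4

n = 196  # patches of a 224x224 image with 16x16 patches
print(global_cost(n))          # 38416
print(linformer_cost(n, 64))   # 12544
```

The windowed count makes the linear scaling concrete: for a fixed window side m, `windowed_cost(n, m)` equals `n * m**2`, i.e., it grows linearly in the number of patches.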
Swin Transformer [35, 36] introduces window-based MSA over non-overlapping M×M patches (e.g., 7×7), reducing complexity to O((N/M²) × M⁴) = O(N·M²). Each stage ends with a patch merging layer that concatenates 2×2 tokens and projects them, halving resolution and doubling channels. Crucially, Swin alternates "standard" and "shifted" window partitions: shifted windows (offset by ⌊M/2⌋) overlap adjacent regions, enabling cross-window context without global attention. Swin-B attains 87.3% top-1 on ImageNet-1K, with near-linear inference latency.

Distilled and Compact ViTs: TinyViT [58] uses pretraining-stage distillation from a large teacher (e.g., Swin-B/L trained on ImageNet-21k). By caching teacher logits and applying neural architecture search under FLOPs/parameter constraints, TinyViT produces 11M–21M parameter models that achieve 84.8–86.5% top-1 on ImageNet-1K, close to much larger ViTs. Data-Efficient Image Transformers (DeiT) [51] add a learnable distillation token that learns from a CNN teacher's soft logits (e.g., ResNet-50) while training on ImageNet-1K alone. Combined with aggressive augmentation (RandAugment, Mixup, CutMix) and regularization (Label Smoothing, Stochastic Depth), DeiT-Small (22M) reaches 83.1% top-1 (vs. 77.9% for vanilla ViT), and DeiT-Base (86M) hits 85.2% in under three GPU-days, matching ResNet-152. Later work [52] adds self-supervised distillation and token pruning for further efficiency.

Collectively, these efforts (linear attention, MLP-only designs, hierarchical token pyramids, window-based local attention, and distillation) have greatly extended ViT applicability across resource-constrained tasks. However, the inherent hierarchical structure of images remains only partially integrated into existing attention mechanisms, potentially hindering overall performance.

Multiscale neural architectures.
Several transformer architectures have been proposed in the one-dimensional setting of Natural Language Processing (NLP) that are closely related to the multiscale principles underlying our method.

H-Transformer-1D [62] introduces a hierarchical attention scheme that restricts full attention to local windows while allowing global information to flow through a tree-like structure.

MRA-Attention [60] leverages a multiresolution decomposition of attention weights using wavelet transforms to capture both coarse and fine-scale dependencies.

FMMformer [41] builds on the Fast Multipole Method (FMM) to hierarchically group tokens and reduce attention complexity by summarizing distant interactions.

Fast Multipole Attention (FMA) [23] similarly applies FMM-inspired grouping but in a more generalizable attention framework.

ERWIN [61] proposes a multilevel window-based transformer with recursive interpolation between coarse and fine spatial scales in the setting of graph attention.

A.2. Neural Operators

The challenge in solving PDEs is the computational burden of conventional numerical methods. To improve tractability, a recent line of research investigates how machine learning, and especially artificial neural networks, can provide efficient surrogate models. A first kind of approach assumes knowledge of the underlying PDE, as in PINNs [24, 38, 46]. With this knowledge, the neural network is optimized by solving the PDE, which can be considered a kind of unsupervised learning. However, the difficult optimization process requires tailored training schemes with many iterations [27, 47]. In a "semi-supervised" way, the recent approach of Boudec et al. [5] recasts the problem as a learning-to-learn task, leveraging either the PDE and simulations, or observation data. While this method obtained promising results, its memory footprint may limit its large-scale usage.
In this work, we focus on neural operators, which learn the solution operator directly from data [33, 37]. In this line of work, the challenge lies in the model architecture rather than in the optimization process, and different kinds of models were recently proposed.

Transformer neural operators. In [10] the classical transformer was adapted for the first time to operator learning problems related to PDEs. The paper explores two variants, based on the Fourier transform and the Galerkin method. The latter uses a simplified attention-based operator, without softmax normalization. This solution shares linear complexity with our work but not the same expressivity. Still in this simplifying trend, LOCA (Learning Operators with Coupled Attention) [34] maps the input functions to a finite set of features and attends to them by output query location. Li et al. [34] introduce an efficient transformer for the operator learning setting based on kernel theory. Recently, [9] proposed an interesting way to view attention in the continuous setting, and in particular continuum patched attention. The Universal Physics Transformer [1] framework was proposed for efficient scaling, based on a coarsening of the input mesh. In [56] the Continuous Vision Transformer was proposed as an operator-learning version of the more classical ViT.

In the context of operator learning and graph-structured data, the Multipole Graph Neural Operator (MGNO) [32] extends multipole ideas to irregular domains via message passing on graph hierarchies. Finally, V-MGNO, F-MGNO, and W-MGNO [40] propose variations of MGNO to improve stability.

These works highlight the growing interest in multiscale and hierarchical schemes to improve efficiency and generalization, both in sequence modeling and operator learning.
Our work builds on this line by proposing a spatially structured multipole attention mechanism adapted to vision and physical simulation tasks.

Our model is explicitly designed to function as a neural operator [25]. To qualify as a neural operator, a model must satisfy the following key properties. First, it should be capable of handling inputs and outputs across arbitrary spatial resolutions. Second, it should exhibit discretization convergence; that is, as the discretization of the input becomes finer, the model's predictions should converge to the true underlying operator governing the physical system. This poses a new challenge to the computer vision community, namely to learn not just an image-to-image function but the underlying operator, independently of the resolution. This field saw its first proof of concept with Lu et al. [37], who leveraged a universal approximation theorem for nonlinear operators and paved the way for numerous extensions. Fourier Neural Operators [33] rely on a translation-equivariant kernel and discretize the problem via a global convolution computed by a discrete Fourier transform. Building on this foundation, the Wavelet Neural Operator (WNO) [53] introduces wavelet-based multiscale localization, enabling kernels that simultaneously capture global structures and fine-grained details. The Multiwavelet Neural Operator (MWNO) [20] further extends this approach by incorporating multiple resolution components, leading to improved convergence with respect to discretization.

B. Detailed hyperparameters

B.1. Architecture Hyperparameters for Image classification

Table 3 summarizes the architectural and training hyperparameters used in our model. Below, we provide brief comments on each of them. The first block in Table 3 corresponds to the standard configuration of the pretrained SwinV2-Tiny model, which we adopt as our backbone.

•Patch size: Size of non-overlapping image patches.
A value of 4 corresponds to 4×4 patches.
•Input channels: Number of input channels, set to 3 for RGB images.
•Embedding dimension (embed_dim): Dimensionality of the token embeddings, controlling model capacity.
•Global pooling: Global average pooling is used instead of a [CLS] token at the output.
•Depths (layers per stage): Number of transformer blocks in each of the four hierarchical stages, e.g., [2, 2, 6, 2].
•Number of heads (per stage): Number of attention heads per stage; increases with depth to maintain representation power.
•Window size: Local attention is applied in windows of size 8×8.
•MLP ratio: Ratio between the hidden dimension in the feed-forward MLP and the embedding dimension (e.g., 4.0×96 = 384).
•QKV bias: Whether learnable biases are used in the query/key/value projections (set to True).
•Dropout rates (drop_rate, proj_drop_rate, attn_drop_rate): All standard dropout components are disabled (set to 0).
•Drop-path rate (drop_path_rate): Stochastic depth with rate 0.2 applied to residual connections for regularization.
•Activation layer: GELU is used as the non-linearity in MLP layers.
•Normalization layer: Layer normalization is applied throughout the network.
•Pretrained window sizes: Set to [0, 0, 0, 0] as no pretrained relative position biases are used.
•Attention sampling rate: The input to the attention mechanism is downsampled by a factor of 2, allowing for increased expressivity without a relevant additional computational cost.
•Attention down-sampling: A convolutional layer with kernel size 2 and stride 2 is used to downsample features between the levels of the multipole attention.
•Attention up-sampling: Transposed convolution (kernel size 2, stride 2) is used to upsample the features after the windowed attention at each hierarchical level.
•Number of levels: Specifies the number of multipole attention levels used at each stage. We found it beneficial to use the maximum number of levels permitted by the spatial resolution.
B.2.
Architecture Hyperparameters for Darcy Flow

Table 3. MANO Hyperparameters for image classification
Patch size: 4
Input channels: 3
Embedding dimension (embed_dim): 96
Global pooling: avg
Depths (layers per stage): [2, 2, 6, 2]
Number of heads (per stage): [3, 6, 12, 24]
Window size: 8
MLP ratio: 4.0
qkv bias: True
Dropout rate (drop_rate): 0.0
Projection-drop rate (proj_drop_rate): 0.0
Attention-drop rate (attn_drop_rate): 0.0
Drop-path rate (drop_path_rate): 0.2
Activation layer: gelu
Normalization layer (flag): True
Pretrained window sizes: [0, 0, 0, 0]
Attention sampling rate: 2
Attention down-sampling: conv, kernel size 2, stride 2
Attention up-sampling: conv transpose, kernel size 2, stride 2
Number of levels: [3, 2, 1, 1]

Table 4 reports the main architectural hyperparameters used in our MANO model for solving the Darcy flow problem. Below, we provide a brief description of each.
•channels: Number of input channels; set to 3 because we concatenate the two spatial coordinates with the permeability coefficient.
•patch size: Patch size used to partition the input grid; set to 1 to retain full spatial resolution, ideal for dense prediction tasks.
•domain dim: Dimensionality of the input domain; set to 2 for 2D PDEs like Darcy flow.
•stack regular grid: Indicates whether the input discretization is regular and should be stacked; set to true.
•dim: Embedding dimension of the token representations.
•dim head: Dimensionality of each individual attention head.
•mlp dim: Hidden dimension of the MLP layers following attention.
•depth: Total number of transformer blocks.
•heads: Number of self-attention heads in each attention block.
•emb dropout: Dropout rate applied to the input embeddings.
•Attention sampling rate: The input to the attention mechanism is downsampled by a factor of 2, allowing for increased expressivity without a relevant additional computational cost.
•Attention down-sampling: A convolutional layer with kernel size 2 and stride 1 is used to downsample features between the levels of the multipole attention.
•Attention up-sampling: Transposed convolution (kernel size 2, stride 1) is used to upsample the features after the windowed attention at each hierarchical level.
•att dropout: Dropout rate applied within the attention block.
•Window size: Local attention is applied in windows of size 2×2.
•local attention stride: Stride with which local windows are applied; controls overlap in attention.
•positional encoding: Whether explicit positional encodings are added; set to false in our setting.
•learnable pe: Whether the positional encoding is learnable; also disabled here.
•pos enc coeff: Scaling coefficient for positional encodings, if used; null since not applicable.

Table 4. MANO Hyperparameters for Darcy flow
channels: 3
patch size: 1
domain dim: 2
stack regular grid: true
dim: 128
dim head: 32
mlp dim: 128
depth: 8
heads: 4
emb dropout: 0.1
Attention sampling rate: 2
Attention down-sampling: conv, kernel size 2, stride 1
Attention up-sampling: conv transpose, kernel size 2, stride 1
att dropout: 0.1
window size: 2
local attention stride: 1
positional encoding: false
learnable pe: false
pos enc coeff: null

C. Implementation details
All our experiments are implemented in PyTorch.
C.1. Model checkpoints
Our experiments in image classification use the following pre-trained models from HuggingFace on ImageNet [15]:
• ViT-base, available at https://huggingface.co/google/vit-base-patch16-224
• DeiT-small, available at https://huggingface.co/facebook/deit-small-patch16-224
• SwinV2, available at https://huggingface.co/timm/swinv2_tiny_window8_256.ms_in1k
We initialize our MANO model by loading the full weights of the pretrained SwinV2-Tiny.
D. Data Augmentation
During training, in the case of image classification, we apply standard data augmentations to improve generalization.
Specifically, the training pipeline includes:
•Resize to a fixed resolution, matching the input size expected by the pretrained models;
•RandomCrop with a crop size equal to the resized resolution, using a padding of 4 pixels;
•RandomHorizontalFlip;
•ToTensor conversion;
•Normalize using dataset-specific mean and standard deviation statistics.
At test time, images are resized (if necessary), converted to tensors, and normalized using the same statistics as in training. For numerical simulations, we do not apply any data augmentation. | 5 | 1 | The MANO model is based on the 'Tiny' version of the Swin Transformer V2, which has approximately 28.47M parameters, leading to a manageable memory footprint on modern GPUs. Given that the training is conducted on the ImageNet-1k dataset and several other benchmarks for a total of 50 epochs, the expected total training time for a model of this size with reasonable batch sizes (likely between 16 and 64) should be around 5 hours on a single high-end GPU like an NVIDIA A100 or similar. If batch sizes are optimized, this can easily fit on a single GPU. The efficiency of the Multipole Attention mechanism also enhances training speed. | yes | Yes | CV | Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics | 2025-07-03T00:00:00.000Z | [https://github.com/AlexColagrande/MANO] | 1 | Code Downloads Dynamically upon naming | same | Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics.ipynb | Yes | It starts and runs successfully |
Gowalla | RLAE-DAN | [] | Why is Normalization Necessary for Linear Recommenders? | 2025-04-08T00:00:00 | https://arxiv.org/abs/2504.05805v2 | [
"https://github.com/psm1206/dan"
] | {'Recall@20': '0.1922', 'nDCG@20': '0.1605'} | [
"nDCG@20",
"Recall@20",
"HR@10",
"HR@100",
"PSP@10",
"nDCG@10",
"nDCG@100"
] | Given the following paper and codebase:
Paper: Why is Normalization Necessary for Linear Recommenders?
Codebase: https://github.com/psm1206/dan
Improve the RLAE-DAN model on the Gowalla dataset. The result
should improve on the following metrics: {'Recall@20': '0.1922', 'nDCG@20': '0.1605'}. You must use only the codebase provided.
| Why is Normalization Necessary for Linear Recommenders?
Seongmin Park (Sungkyunkwan University, Suwon, Republic of Korea, psm1206@skku.edu), Mincheol Yoon (Sungkyunkwan University, Suwon, Republic of Korea, yoon56@skku.edu), Hye-young Kim (Sungkyunkwan University, Suwon, Republic of Korea, khyaa3966@skku.edu), Jongwuk Lee* (Sungkyunkwan University, Suwon, Republic of Korea, jongwuklee@skku.edu)
Abstract
Despite their simplicity, linear autoencoder (LAE)-based models have shown comparable or even better performance with faster inference speed than neural recommender models. However, LAEs face two critical challenges: (i) popularity bias, which tends to recommend popular items, and (ii) neighborhood bias, which overly focuses on capturing local item correlations. To address these issues, this paper first analyzes the effect of two existing normalization methods for LAEs, i.e., random-walk and symmetric normalization. Our theoretical analysis reveals that normalization highly affects the degree of popularity and neighborhood biases among items. Inspired by this analysis, we propose a versatile normalization solution, called Data-Adaptive Normalization (DAN), which flexibly controls the popularity and neighborhood biases by adjusting item- and user-side normalization to align with unique dataset characteristics. Owing to its model-agnostic property, DAN can be easily applied to various LAE-based models. Experimental results show that DAN-equipped LAEs consistently improve existing LAE-based models across six benchmark datasets, with significant gains of up to 128.57% and 12.36% for long-tail items and unbiased evaluations, respectively. Refer to our code in https://github.com/psm1206/DAN.
CCS Concepts: •Information systems → Recommender systems.
Keywords: Collaborative filtering; linear autoencoders; normalization; popularity bias; neighborhood bias
ACM Reference Format: Seongmin Park, Mincheol Yoon, Hye-young Kim, and Jongwuk Lee. 2025.
Why is Normalization Necessary for Linear Recommenders?. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’25), July 13–18, 2025, Padua, Italy. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3726302.3730116 1 Introduction Collaborative filtering (CF) [ 15,18] is the dominant solution for developing recommender systems because it uncovers hidden col- laborative signals from user-item interactions. Existing CF models ∗Corresponding author This work is licensed under a Creative Commons Attribution-NonCommercial- NoDerivatives 4.0 International License. SIGIR ’25, July 13–18, 2025, Padua, Italy ©2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-1592-1/2025/07 https://doi.org/10.1145/3726302.3730116can be categorized into linear andnon-linear approaches depend- ing on how user/item correlations are learned. With the success of deep learning, a lot of non-linear CF models have employed various neural architectures, including autoencoders [ 27,28,42,55,59] (AEs), recurrent neural networks (RNNs) [ 19,26], graph neural networks [ 5,6,17,30,41,53] (GNNs), and transformers [ 23,43,49]. Although these non-linear models initially showed promising results through complex user/item relationships, recent studies [ 5, 11–13,38,39,50] have revealed that the linear models can achieve competitive or even significant gains over the non-linear models. This is because linear models are less prone to overfitting in learning sparse user-item interactions. Also, their computational efficiency enables rapid adoption in real-world applications. In this sense, this paper focuses on the linear models using item neighborhoods, also known as linear autoencoders (LAEs) [21, 33, 45–48, 51]. Formally, LAEs operate on a user-item interaction matrix X∈ {0,1}𝑚×𝑛for𝑚users and𝑛items, learning an item-to-item weight matrix B∈R𝑛×𝑛to reconstruct the original matrix Xfrom matrix multiplication X·B. 
Conceptually, Bdenotes a single hidden layer to act as both an encoder and a decoder for the input matrix X. Existing studies [ 21,33,45–48,51] formulate LAEs as a convex optimiza- tion with additional constraints, yielding a closed-form solution while avoiding a trivial solution, i.e.,ˆB=I. Notably, the represen- tative LAE-based models, such as EASER[45], DLAE/EDLAE [ 46], and RLAE/RDLAE [ 31], have shown state-of-the-art performance results on large-scale datasets, e.g., ML-20M, Netflix, and MSD. However, existing LAEs face two challenges. (C1) Popularity bias: Inevitably, the learned weight matrix ˆBis heavily influenced by popular items, leading to popular items being excessively rec- ommended to users [ 2,16,40,54,62].(C2) Neighborhood bias : It refers to the tendency to overly focus on local item relationships that capture individual user preferences, predominantly from a few highly engaged users [ 32]. Since LAEs directly operate on the input matrix X, they primarily capture these local relationships while failing to discover global item relationships shared by most users, which are crucial for capturing principal CF patterns. To address these challenges, we delve into normalizing the user- item matrix Xfor LAEs. As the conventional technique, normaliza- tion has been widely adopted in numerous linear and non-linear recommendation models [ 5,17,20,35,41,53,60]. Despite its wide- spread usage, there has been a lack of in-depth analysis exploring the underlying effects of normalization, particularly in the context of linear recommenders. This motivates us to ask the following key questions about the normalization for LAEs: (i) How do we apply normalization to LAEs? (ii)How does normalization affect popularity bias? (iii)How does normalization influence neighborhood bias? 
Firstly, we examine how to apply normalization to LAEs by exploring item- and user-side normalizations on the reconstruction term $\|X - XB\|_F^2$ and the regularization term $\|B\|_F^2$.

arXiv:2504.05805v2 [cs.IR] 28 Apr 2025

Figure 1: Performance of popular and unpopular items on (a) ML-20M and (b) Yelp2018. The x-axis categorizes items into 'Head' (top 20% popular items) and 'Tail' (the remaining items), while the y-axis represents NDCG@20. 'W/O Norm.' and 'W/ Norm.' denote LAE without and with normalization. Normalization improves Tail performance by +12700% on ML-20M and +349% on Yelp2018.

For item normalization, we employ a diagonal matrix $D_I \in \mathbb{R}^{n \times n}$ to control the row- and column-wise importance of items in forming the weight matrix $B$. For user normalization, we adopt a diagonal matrix $D_U \in \mathbb{R}^{m \times m}$ to adjust user-wise importance in reconstructing $X$. These normalizations directly affect the gram matrix $P = X^\top X \in \mathbb{R}^{n \times n}$, which determines item co-occurrences in LAE's closed-form solution $\hat{B} = (P + \lambda I)^{-1} P$. We analyze these effects through existing methods such as random-walk and symmetric normalization. While these approaches handle item/user importance, they both suffer from a fundamental limitation of using fixed weights. Secondly, we conduct an empirical study to analyze the impact of normalization on popularity bias. Figure 1 illustrates the performance for both popular (head) and unpopular (tail) items. LAEs without normalization (i.e., W/O Norm.) predominantly recommend popular items, leading to high performance for head items but low performance for tail items. In contrast, LAEs with normalization (i.e., W/ Norm.)
substantially improve Tail performance across datasets while maintaining competitive Head performance. It reveals that normalization effectively adjusts the degree of popularity bias. We lastly investigate the normalization effect by categorizing six datasets into high- and low-homophilic groups. High-homophilic datasets have densely connected similar items, indicating shared global patterns. (The detailed homophily metric is discussed in Section 4.) As depicted in Figure 2, normalization consistently improves LAEs across datasets by capturing global item correlations while mitigating local relationships. This is especially evident in high-homophilic datasets, which show substantial performance gains. They also show high absolute performance due to their inherent item relationships that align with CF principles. These findings demonstrate that normalization is a key mechanism for balancing global and local item relationships, modulating neighborhood bias. Based on these findings, we propose a simple yet effective normalization, called Data-Adaptive Normalization (DAN), effectively balancing the popularity of items/users depending on dataset-specific characteristics. It comprises two key components: (i) Item-adaptive normalization modulates the strength of item popularity by the skewness of item distributions, where we theoretically demonstrate its effectiveness in adjusting popularity bias. (ii) User-adaptive normalization adjusts the influence of users by the newly proposed homophily metric tailored for recommendations, which helps [Figure 2 bar values, NDCG@20, W/O Norm. vs. W/ Norm. (ours): ML-20M 0.323 vs. 0.341, Netflix 0.324 vs. 0.341, MSD 0.274 vs. 0.321; Gowalla 0.171 vs. 0.191, Yelp2018 0.095 vs. 0.100, Book 0.175 vs. 0.181]
Figure 2: Performance for six datasets categorized into two groups: (a) High-homophilic group (ML-20M, Netflix, and MSD) with low neighborhood bias and (b) Low-homophilic group (Gowalla, Yelp2018, and Amazon-book) with high neighborhood bias. The x-axis lists the individual datasets.

control neighborhood bias by discovering meaningful global relationships among items. Thanks to its model-agnostic property, DAN can be easily adapted to various LAE-based models. Experimental results show that DAN-equipped LAEs outperform existing normalization methods and state-of-the-art models on six benchmark datasets. Notably, it achieves up to 128.57% and 12.36% performance gains for long-tail items and unbiased evaluation, respectively. The key contributions of this paper are summarized as follows.
•Mathematical analysis: We are the first to investigate existing normalization methods on LAEs. It is observed that existing normalization methods can alleviate the popularity bias and neighborhood bias, but they are limited in handling the unique characteristics of the datasets. (Section 3)
•Data-customized normalization: We propose a simple yet effective normalization, DAN, which adjusts the degree of popularity and neighborhood biases by dataset characteristics, such as the skewness of item distributions (e.g., item-side Gini index) and the weighted homophily ratio. (Section 4)
•Extensive validation: We demonstrate that DAN-equipped LAEs achieve superior performance over fourteen existing CF models on six datasets. We also conduct experiments to evaluate DAN's efficiency, showing that it adds negligible computational cost compared to existing LAEs. (Sections 5–6)
2 Preliminaries
2.1 Linear Autoencoders (LAEs)
Assuming implicit user feedback, we represent the user-item interaction matrix $X$ as a binary matrix, i.e., $X \in \{0,1\}^{m \times n}$. If a user $u$ has interacted with an item $i$, then $X_{ui} = 1$; otherwise, $X_{ui} = 0$.
In this paper, we mainly focus on addressing LAE-based recommender models [21, 33, 45, 46, 48, 51], to learn an item-to-item weight matrix $B \in \mathbb{R}^{n \times n}$. Specifically, the objective function of LAEs is formulated by the reconstruction error and L2 regularization:

$\min_{B} \|X - XB\|_F^2 + \lambda \|B\|_F^2$, (1)

where $\lambda$ is the hyperparameter to control the regularization in $B$. When $\lambda = 0$, it becomes a trivial solution, i.e., $\hat{B}_{LAE} = I$. The closed-form solution of LAE is easily derived as follows:

$\hat{B}_{LAE} = (X^\top X + \lambda I)^{-1} X^\top X = (P + \lambda I)^{-1} P$. (2)

Here, $P = X^\top X \in \mathbb{R}^{n \times n}$ represents a gram matrix for items. It is symmetric, and the entry $P_{ij}$ indicates the co-occurrence between two items $i$ and $j$ by all users:

$P_{ij} = (X^\top X)_{ij} = \mathrm{freq}(i, j)$. (3)

The gram matrix $P$ tends to be biased in two aspects: (i) it reflects only raw co-occurrence between items, leading to the popularity bias in $\hat{B}_{LAE}$, and (ii) it considers only direct item relationships, resulting in the neighborhood bias, indicating that the global item correlations are non-trivial to reflect. For inference, the prediction score $s_{ui}$ for the item $i$ to the user $u$ is computed as follows:

$s_{ui} = X_{u*} \cdot \hat{B}_{*i}$, (4)

where $X_{u*}$ and $\hat{B}_{*i}$ are the row vector for the user $u$ in $X$ and the column vector for the item $i$ in $\hat{B}$, respectively. That is, each column in $\hat{B}$ means a target item to be recommended to the user. Since item $i$ in the user history is multiplied by the $i$-th row in $\hat{B}$, each row in $\hat{B}$ represents a source item the user interacts with. We can also interpret the rows and columns of the gram matrix $P$ as the source and target items because $\hat{B}$ is composed of $P$ in Eq. (2).
2.2 Existing Normalization Methods
This section introduces two representative normalization methods, random-walk (RW) and symmetric (Sym) normalization, commonly used in recommendation models [5, 9, 10, 17, 41, 53].
RW normalization. Let $\mathcal{U}$ and $\mathcal{I}$ be a set of $m$ users and $n$ items, respectively.
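The closed-form LAE solution and the scoring rule can be sketched in a few lines of NumPy. The toy interaction matrix below is illustrative, not from the paper:

```python
# Minimal NumPy sketch of the LAE closed form B_hat = (P + lam*I)^{-1} P
# with gram matrix P = X^T X, and prediction scores s = X @ B_hat.
import numpy as np

rng = np.random.default_rng(0)
m, n, lam = 50, 20, 10.0
X = (rng.random((m, n)) < 0.2).astype(float)  # binary user-item matrix

P = X.T @ X                                    # item-item gram matrix
B_hat = np.linalg.solve(P + lam * np.eye(n), P)
scores = X @ B_hat                             # s_ui for every user/item pair
```

Because `P` and `(P + lam*I)^{-1}` share the same eigenbasis, `B_hat` comes out symmetric; at `lam = 0` it degenerates to the identity, which is the trivial solution mentioned above.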
User-item interactions are interpreted by a user-item bipartite graphG= (U∪I ,E), where nodes represent users and items. Here, an edge 𝑒∈Eindicates the interaction between the user and the item. Let Adenote an adjacency matrix representing the user-item interactions on the graph G. By considering the ran- dom walk movement, the transition probability is computed by normalizing the number of either users or items. ˜A=D−1A=D−1 𝑈0 0 D−1 𝐼 0 X X⊤0 =0 D−1 𝑈X D−1 𝐼X⊤0 ,(5) where D∈R(𝑚+𝑛)×(𝑚+𝑛)is a diagonal degree matrix. D𝑈∈R𝑚×𝑚 represents a diagonal degree matrix for users, where each entry D𝑢𝑢 is the number of items interacted by the user 𝑢,i.e.,D𝑢𝑢=Í𝑛 𝑖=1X𝑢𝑖, and vice versa for items with D𝐼∈R𝑛×𝑛where D𝑖𝑖=Í𝑚 𝑢=1X𝑢𝑖. Two distinct matrices D−1 𝑈XandD−1 𝐼X⊤are the user-to-item and item-to-user transition probabilities, respectively. They are interpreted as user/item normalization. ˜X(𝑢𝑠𝑒𝑟) 𝑟𝑤 =D−1 𝑈X, and ˜X(𝑖𝑡𝑒𝑚) 𝑟𝑤⊤ =D−1 𝐼X⊤. (6) Using RW normalization, the original gram matrix P=X⊤Xis converted to ˜P𝑟𝑤=D−1 𝐼X⊤D−1 𝑈X. To analyze the effect of item- side normalization, if we assume that D𝑈=I, the normalized gram matrix is represented by ˜P(𝑖𝑡𝑒𝑚) 𝑟𝑤 =D−1 𝐼X⊤X. Since D−1 𝐼performs row-wise item normalization in X⊤X, each entry(˜P(𝑖𝑡𝑒𝑚) 𝑟𝑤)𝑖𝑗in- dicates the co-occurrence of two items 𝑖and𝑗normalized by the popularity for the source item 𝑖. ˜P(𝑖𝑡𝑒𝑚) 𝑟𝑤 𝑖𝑗= D−1 𝐼X⊤X 𝑖𝑗= D−1 𝐼P 𝑖𝑗=𝑓𝑟𝑒𝑞(𝑖,𝑗) 𝑓𝑟𝑒𝑞(𝑖).(7)Although RW normalization mitigates the popularity of source items, it does not consider the popularity of target items. Sym normalization . Unlike RW normalization, it normalizes the input matrix Xin both user and item sides. ˜X𝑠𝑦𝑚=D−1/2 𝑈XD−1/2 𝐼. (8) Using Sym normalization, the gram matrix is updated into ˜P𝑠𝑦𝑚= D−1/2 𝐼X⊤D−1 𝑈XD−1/2 𝐼. Assuming that D𝑈=I, the gram matrix equals ˜P(𝑖𝑡𝑒𝑚) 𝑠𝑦𝑚 =D−1/2 𝐼X⊤XD−1/2 𝐼. Because it deals with both row- and column-wise item normalization, (˜P(𝑖𝑡𝑒𝑚) 𝑠𝑦𝑚)𝑖𝑗is normalized by the popularity of both the source item 𝑖and the target item 𝑗. 
˜P(𝑖𝑡𝑒𝑚) 𝑠𝑦𝑚 𝑖𝑗= D−1/2 𝐼PD−1/2 𝐼 𝑖𝑗=𝑓𝑟𝑒𝑞(𝑖,𝑗) 𝑓𝑟𝑒𝑞(𝑖)1/2𝑓𝑟𝑒𝑞(𝑗)1/2.(9) In contrast to RW normalization, Sym normalization can penal- ize the popularity of target items, making it effective in directly alleviating the recommendation of popular items. However, it em- ploys the same normalization weight for source and target items, despite their different influences in the gram matrix ˜P𝑠𝑦𝑚. 3 Normalized Linear Autoencoders This section presents the incorporation of user/item normaliza- tion in the objective function of LAEs, focusing on RW and Sym normalization methods and their limitations. 3.1 Generalized Normalization to LAEs To deal with user/item normalization, we formulate a generalized objective function of LAEs using a diagonal user weight matrix1 W𝑈∈R𝑚×𝑚and two diagonal item weight matrices W(1) 𝐼,W(2) 𝐼∈ R𝑛×𝑛. Note that we can extend the objective function to other LAEs with additional constraints, such as EASER[45] and RLAE [31]. min B∥W𝑈(XW(1) 𝐼−XW(1) 𝐼B)∥2 𝐹+𝜆∥W(2) 𝐼B∥2 𝐹. (10) For user normalization, the user weight matrix W𝑈modulates user importance in the reconstruction error, where setting W𝑈= D𝑈adjusts the influence of active users. For item normalization, W(1) 𝐼transforms the gram matrix into W(1) 𝐼X⊤XW(1) 𝐼, enabling row- and column-wise weighting that controls both source and tar- get item weights in computing B. The weight matrix W(2) 𝐼provides row-wise weighting in the L2 regularization term, controlling the weight of source items in computing B. By adopting item normal- ization D𝐼forW(1) 𝐼andW(2) 𝐼, we can mitigate item popularity bias by weakening the importance of popular items. Through simple convex optimization of Eq. (10), we readily de- rive the following closed-form solution. ˆB𝑔𝑒𝑛= ˜P𝑔𝑒𝑛+𝜆I−1˜P𝑔𝑒𝑛,where ˜P𝑔𝑒𝑛=(W(2) 𝐼)−2W(1) 𝐼X⊤W2 𝑈XW(1) 𝐼. (11) User normalization is performed in between the input matrix X (i.e.,X⊤W2 𝑈X). 
For item normalization, row-wise ( i.e.,(W(2) 𝐼)−2W(1) 𝐼) and column-wise ( i.e.,W(1) 𝐼) normalization is used before and after 1We assume all the weight matrices as the diagonal matrices, not the full matrices. We leave them as a future design choice. SIGIR ’25, July 13–18, 2025, Padua, Italy Seongmin Park, Mincheol Yoon, Hye-young Kim, and Jongwuk Lee the gram matrix, respectively. This reveals that W(1) 𝐼solely cannot differentiate source/target item normalization, necessitating W(2) 𝐼. 3.2 Existing Normalization for LAEs RW normalization . It performs user normalization and row-wise item normalization. We thus replace W𝑈andW(2) 𝐼in Eq. (10)with D−1/2 𝑈andD1/2 𝐼, respectively. min B∥D−1/2 𝑈(X−XB)∥2 𝐹+𝜆∥D1/2 𝐼B∥2 𝐹. (12) The same solution form is derived by substituting the original gram matrix Pwith the RW normalized gram matrix ˜P𝑟𝑤. ˆB𝑟𝑤= ˜P𝑟𝑤+𝜆I−1 ˜P𝑟𝑤 ,where ˜P𝑟𝑤=D−1 𝐼X⊤D−1 𝑈X.(13) Sym normalization . Unlike RW normalization, it handles both row- and column-wise item normalization. For W𝑈andW(1) 𝐼, we adopt two diagonal degree matrices D−1/2 𝑈andD−1/2 𝐼, respectively. min B∥D−1/2 𝑈(XD−1/2 𝐼−XD−1/2 𝐼B)∥2 𝐹+𝜆∥B∥2 𝐹. (14) We also derive the solution by only replacing the gram matrix P with the Sym normalized gram matrix ˜P𝑠𝑦𝑚. ˆB𝑠𝑦𝑚= ˜P𝑠𝑦𝑚+𝜆I−1 ˜P𝑠𝑦𝑚 ,where ˜P𝑠𝑦𝑚=D−1/2 𝐼X⊤D−1 𝑈XD−1/2 𝐼. (15) Using the generalized objective function in Eq. (10), we discuss the limitations of existing normalization methods. Firstly, RW nor- malization only considers source item popularity, while Sym nor- malization considers the source and target items with equal weight. Besides, both of them assign the equal weight to the user and item sides ( e.g., For Sym normalization, D−1/2 𝑈andD−1/2 𝐼). To address these limitations, we (i) fully utilize three weight matrices ( i.e.,W𝑈, W(1) 𝐼, and W(2) 𝐼) and (ii) adaptively modulate the popularity for items and users by considering the dataset characteristics. 
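The RW- and Sym-normalized gram matrices from Eqs. (13) and (15) can be computed without materializing the diagonal matrices, by scaling rows and columns with the degree vectors. The toy interaction matrix is illustrative; degree clipping to 1 is an added safeguard against empty rows/columns, not part of the paper:

```python
# Sketch of P_rw = D_I^{-1} X^T D_U^{-1} X and
#           P_sym = D_I^{-1/2} X^T D_U^{-1} X D_I^{-1/2}.
import numpy as np

rng = np.random.default_rng(1)
X = (rng.random((40, 15)) < 0.25).astype(float)
d_u = np.maximum(X.sum(axis=1), 1.0)   # user degrees (clipped to avoid /0)
d_i = np.maximum(X.sum(axis=0), 1.0)   # item degrees

C = X.T @ (X / d_u[:, None])           # X^T D_U^{-1} X (symmetric)
P_rw = C / d_i[:, None]                # row-wise source-item normalization
P_sym = C / np.sqrt(d_i)[:, None] / np.sqrt(d_i)[None, :]
```

Note the asymmetry: `P_rw` penalizes only source-item popularity, while `P_sym` splits the same penalty equally between source and target items, matching the discussion of Eqs. (7) and (9).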
4 Data-Adaptive Normalization (DAN)
This section proposes a simple yet effective normalization method called DAN. According to the dataset characteristics, it adjusts the importance of items and users through its key components, i.e., item- and user-adaptive normalization. The objective function of LAEs using DAN employs all three normalization weight matrices in Eq. (10):

$\min_{B} \|D_U^{-\beta/2}(X D_I^{-\alpha} - X D_I^{-\alpha} B)\|_F^2 + \lambda \|D_I^{1/2-\alpha} B\|_F^2$. (16)

Depending on $\alpha$ and $\beta$, the objective function is equivalent to that of RW-normalized LAEs in Eq. (12) for $\alpha=0$ and $\beta=1$, or to that of Sym-normalized LAEs in Eq. (14) for $\alpha=1/2$ and $\beta=1$. The closed-form solution of LAEs with DAN is as follows:

$\hat{B}_{dan} = (\tilde{P}_{dan} + \lambda I)^{-1} \tilde{P}_{dan}$, where $\tilde{P}_{dan} = D_I^{-(1-\alpha)} X^\top D_U^{-\beta} X D_I^{-\alpha}$. (17)

To thoroughly analyze the effect of DAN, we dissect the DAN gram matrix $\tilde{P}_{dan}$ in Eq. (17) into item and user aspects.

Figure 3: Distribution of the weights learned [45] with different $\alpha$ values on ML-20M: (a) W/O normalization, (b) $\alpha=0$, (c) $\alpha=0.5$, (d) $\alpha=0.2$ (ours). The red and blue lines are the estimated probability density functions (PDFs) for head and tail items. The weights are averaged over $\hat{B}$ in a column-wise direction. The x-axis is the average weight of items, and the area under the curve corresponds to the probability of having the weights within that range.

4.1 Item-Adaptive Normalization
The first component of DAN uses the parameter $\alpha$ to adjust popularity bias, whose properties we prove in Theorem 4.1. (The detailed proof can be found in Appendix A.1.)
Theorem 4.1. Item-adaptive normalization (i) provides a denoising effect [46] and (ii) controls popularity bias: A larger $\alpha$ alleviates target items' popularity bias, while a smaller $\alpha$ focuses on source items' popularity bias.

$\hat{B}_{LAE}(P = D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}) = D_I^{\alpha} \hat{B}_{DLAE} D_I^{-\alpha}$. (18)

Here, $\hat{B}_{DLAE}$ means the closed-form solution of DLAE [46], which applies a dropout to the input matrix and reconstructs the original matrix (i.e., the denoising process)². Then, $D_I^{-\alpha}$ mitigates the item popularity bias by penalizing the target items' popularity on the right-hand side of the weight matrix $\hat{B}_{DLAE}$. A higher $\alpha$ penalizes target items' popularity more strongly, particularly effective for datasets with low Gini index [4] where users interact with various items evenly.
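The DAN gram matrix of Eq. (17) is a two-parameter generalization of the RW and Sym forms. A minimal NumPy sketch (toy data; the particular alpha/beta values are illustrative, and the degree clipping is an added safeguard):

```python
# Sketch of P_dan = D_I^{-(1-a)} X^T D_U^{-b} X D_I^{-a} and the DAN solution
# B = (P_dan + lam*I)^{-1} P_dan from Eq. (17).
import numpy as np

def dan_solution(X, alpha, beta, lam):
    d_u = np.maximum(X.sum(axis=1), 1.0)          # user degrees
    d_i = np.maximum(X.sum(axis=0), 1.0)          # item degrees
    C = X.T @ (X / d_u[:, None] ** beta)          # X^T D_U^{-beta} X
    P = C / d_i[:, None] ** (1 - alpha) / d_i[None, :] ** alpha
    n = X.shape[1]
    return np.linalg.solve(P + lam * np.eye(n), P)

rng = np.random.default_rng(2)
X = (rng.random((60, 25)) < 0.2).astype(float)
B = dan_solution(X, alpha=0.2, beta=0.5, lam=5.0)
# alpha=0, beta=1 recovers the RW-normalized LAE; alpha=0.5, beta=1 the Sym one.
```

Since only degree vectors and one linear solve are involved, the overhead relative to the plain LAE closed form is negligible, consistent with the efficiency claim in the contributions.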
Then, D−𝛼 𝐼mitigates the item popularity bias by penalizing the target items’ popularity on the right-hand side of the weight matrix ˆB𝐷𝐿𝐴𝐸 . A higher𝛼 penalizes target items’ popularity more strongly, particularly effec- tive for datasets with low Gini index [ 4] where users interact with various items evenly. To empirically validate Theorem 4.1, we analyze the weight distributions of ˆBfor head and tail items under different 𝛼. Fig- ure 3 illustrates the probability density functions, where weight magnitude determines an item’s recommendation likelihood. If nor- malization is not applied (Figure 3(a)), head items dominate with large weights due to their frequent occurrences. When 𝛼=0(i.e., D−1 𝐼X⊤X), the normalization partially mitigates the head item dom- inance by considering only source item popularity. When 𝛼=0.5 (i.e.,D−1/2 𝐼X⊤XD−1/2 𝐼), normalization of source and target items further reduces the head-tail distribution gap, but large differences remain. In contrast, item-adaptive normalization ( 𝛼=0.2) achieves the most balanced distribution through dynamic adjustment. 2Appendix B.1 shows DAN’s superior robustness against noisy inputs. Why is Normalization Necessary for Linear Recommenders? SIGIR ’25, July 13–18, 2025, Padua, Italy Beyond benefits, item-adaptive normalization preserves eigenval- ues. In Eq. (18), since D𝛼 𝐼ˆB𝐷𝐿𝐴𝐸 D−𝛼 𝐼is a similarity transformation ofˆB𝐷𝐿𝐴𝐸 , they share identical eigenvalues for any 𝛼∈R, ensuring neighborhood bias remains invariant. 4.2 User-Adaptive Normalization User-adaptive normalization controls neighborhood bias using the parameter𝛽. Excluding item-adaptive normalization ( i.e.,D−(1−𝛼) 𝐼 andD−𝛼 𝐼) from Eq. (17), the LAE solution with user-adaptive nor- malization is as follows. ˆB𝐿𝐴𝐸(P=X⊤D−𝛽 𝑈X)= X⊤D−𝛽 𝑈X+𝜆I−1 X⊤D−𝛽 𝑈X. (19) Through eigen-decomposition, the matrix ˜P𝑢𝑠𝑒𝑟=X⊤D−𝛽 𝑈Xis decomposed into three matrices. The eigenvalues of ˜P𝑢𝑠𝑒𝑟 follow the descending order, i.e.,1≥𝜇1≥···≥𝜇𝑛≥0. 
˜P𝑢𝑠𝑒𝑟=Vdiag(𝜇1,𝜇2,...,𝜇𝑛)V⊤=𝑛∑︁ 𝑖=1𝜇𝑖v𝑖v⊤ 𝑖, (20) where V∈R𝑛×𝑛represents the eigenvectors, and v𝑖∈R𝑛is an 𝑖-th eigenvector of the matrix ˜P𝑢𝑠𝑒𝑟. From the perspective of signal processing [ 34], high eigenvalues ( e.g.,𝜇1and𝜇2) correspond to low- frequency signals capturing global item relationships, while low eigenvalues ( e.g.,𝜇𝑛−1and𝜇𝑛) represent high-frequency compo- nents encoding local neighborhood correlations, which are predom- inantly formed by active users. Neighborhood bias can be mitigated by emphasizing low-frequency components while suppressing high- frequency ones ( i.e., higher𝛽). Notably, normalization efficiently modulates eigenvalues without costly eigen-decomposition [58]. Theorem 4.2 characterizes user-adaptive normalization building on Lemmas 4.1 and 4.2. (Full proof is presented in Appendix A.2.) Lemma 4.1 (Eigenvalue Relationship between Weight Ma- trix and Gram Matrix). Following previous work [ 31], for weight matrix ˆB=(P+𝜆I)−1P, its eigenvalues 𝛾𝑖can be expressed in terms of the eigenvalues 𝜇𝑖of gram matrix Pas: 𝛾𝑖=𝜇𝑖 𝜇𝑖+𝜆,for all𝑖=1,2,...,𝑛. (21) Lemma 4.2 (Monotonicity of Eigenvalues via Rayleigh Quo- tient). For a symmetric matrix A∈R𝑛×𝑛, if the Rayleigh quotient 𝑅(A,v)=v⊤Av v⊤vdecreases for all non-zero vectors v∈R𝑛, then all eigenvalues of Adecrease. Theorem 4.2. User-adaptive normalization adjusts the eigenval- ues (𝛾1,...,𝛾𝑛) of the weight matrix B: A larger𝛽pushes eigenvalues toward zero, while a smaller 𝛽keeps them closer to one. For any 𝛽1>𝛽2≥0, all eigenvalues strictly decrease, i.e., 𝛾𝑖(𝛽1)<𝛾𝑖(𝛽2) for all𝑖. To empirically verify Theorem 4.2, we observe the eigenvalue distribution depending on 𝛽. Figure 4 shows the eigenvalue distribu- tions of two datasets, with high-frequency components increasing from left to right. When 𝛽increases from 0to1, most eigenvalues gradually decrease and approach zero, while a few low-frequency eigenvalues remain significant. 
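Lemma 4.1's eigenvalue relation, gamma_i = mu_i / (mu_i + lambda), can be checked numerically on a toy gram matrix (illustrative data, not from the paper):

```python
# Numeric check of Lemma 4.1: for B = (P + lam*I)^{-1} P, the eigenvalues of B
# are mu_i / (mu_i + lam), where mu_i are the eigenvalues of the symmetric P.
import numpy as np

rng = np.random.default_rng(3)
X = (rng.random((30, 12)) < 0.3).astype(float)
P = X.T @ X
lam = 2.0

mu = np.linalg.eigvalsh(P)                  # eigenvalues of symmetric P (ascending)
B = np.linalg.solve(P + lam * np.eye(P.shape[0]), P)
gamma = np.sort(np.linalg.eigvals(B).real)  # eigenvalues of B, sorted ascending
```

Because $x \mapsto x/(x+\lambda)$ is monotone, shrinking the eigenvalues of the gram matrix (larger $\beta$) shrinks those of $B$, which is exactly the mechanism Theorem 4.2 describes.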
This shows that user-adaptive normalization can balance low- and high-frequency signals while preserving essential global patterns, thus adjusting the neighborhood bias.

Figure 4: Eigenvalue distribution of the weight matrix $\hat{\mathbf{B}}$ according to $\beta\in\{0.0, 0.2, 0.4, 0.6, 0.8, 1.0\}$ on (a) ML-20M and (b) Yelp2018.

Table 1: Statistics of the six benchmark datasets. $Gini_i$ denotes the Gini index [4], which measures the skewness of the item distribution. $\mathcal{H}_w$ is the weighted homophily ratio in Eq. (22).

Dataset       #Users    #Items   #Inter.   Density   Gini_i   H_w
ML-20M        136,677   20,108   10.0M     0.36%     0.90     0.109
Netflix       463,435   17,769   56.9M     0.69%     0.86     0.127
MSD           571,355   41,140   33.6M     0.36%     0.56     0.123
Gowalla       29,858    40,981   1.03M     0.08%     0.44     0.085
Yelp2018      31,668    38,048   1.56M     0.13%     0.51     0.044
Amazon-book   52,643    91,599   2.98M     0.06%     0.46     0.059

The prior study [56] demonstrates that high-homophilic datasets primarily leverage low-frequency information rather than high-frequency components. While a recent study [22] proposed using Jaccard similarity to measure homophily in recommender systems, we introduce a more nuanced weighted homophily ratio:

$$\mathcal{H}_w=\frac{\sum_{(i,j)\in\mathcal{E}} w_{ij}\cdot s_{ij}}{\sum_{(i,j)\in\mathcal{E}} w_{ij}},\quad\text{where } w_{ij}=|\mathcal{V}_i\cap\mathcal{V}_j|^{\delta}\cdot\frac{|\mathcal{V}_i\cap\mathcal{V}_j|}{\min(|\mathcal{V}_i|,|\mathcal{V}_j|)}. \quad (22)$$

Here, $s_{ij}=\frac{|\mathcal{V}_i\cap\mathcal{V}_j|}{|\mathcal{V}_i\cup\mathcal{V}_j|}\in[0,1]$ is the Jaccard similarity between the user sets of items $i$ and $j$, and $\mathcal{V}_i$ is the set of users who interacted with item $i$. A high homophily ratio means that items are largely connected with homogeneous items, implying a low neighborhood bias. The weighted homophily ratio introduces two improvements:

• Absolute intersection size: The first term $|\mathcal{V}_i\cap\mathcal{V}_j|^{\delta}$ increases as the number of shared users grows. For example, given two item pairs with $s_{ij}=\frac{100}{200}$ and $\frac{10}{20}$, the absolute count of 100 suggests a more relevant relationship than 10. We set $\delta$ to 1.5.
• Subset relationship: The second term $\frac{|\mathcal{V}_i\cap\mathcal{V}_j|}{\min(|\mathcal{V}_i|,|\mathcal{V}_j|)}$ increases when one item's user set is nearly a subset of the other's. It reflects relatedness when a smaller set is entirely contained within a larger one, and is also known as the Szymkiewicz-Simpson coefficient [52].

Based on the weighted homophily ratio, user-adaptive normalization adjusts $\beta$ to control the neighborhood bias. Highly engaged users tend to form local item relationships through numerous interactions, which can dominate and obscure global patterns. For high-homophilic datasets, where similar items naturally form dense connections, a higher $\beta$ helps emphasize these global patterns by reducing the influence of individual user interactions. Conversely, for low-homophilic datasets, where items exhibit more diverse relationships, a lower $\beta$ preserves the local neighborhood information captured in individual user preferences.

SIGIR '25, July 13–18, 2025, Padua, Italy Seongmin Park, Mincheol Yoon, Hye-young Kim, and Jongwuk Lee

5 Experimental Setup

Datasets. We conduct experiments on six benchmark datasets, i.e., ML-20M, Netflix, MSD, Gowalla, Yelp2018, and Amazon-book, as summarized in Table 1. Following [5, 27, 31, 41], we used a 5-core setting for ML-20M and Netflix, and for MSD we kept only users with at least 20 songs and songs listened to by at least 200 users. For Gowalla, Yelp2018, and Amazon-book, we used a 10-core setting.

Evaluation protocols. We evaluate DAN using two protocols. (i) Strong generalization evaluates users who were not seen in the training phase, so we split users into training, validation, and test sets at an 8:1:1 ratio [31, 45]. In the inference phase, we assume that users in the validation and test sets reveal 80% of their ratings, and we evaluate on the remaining 20%. (ii) Weak generalization only considers users covered in the training phase.
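Under the definitions above, the weighted homophily ratio of Eq. (22) can be sketched as follows. This is a minimal reference implementation, not the paper's code; it assumes the edge set $\mathcal{E}$ consists of all item pairs sharing at least one user, which the text does not spell out:

```python
import numpy as np
from itertools import combinations

def weighted_homophily(X, delta=1.5):
    """Sketch of H_w (Eq. 22): a weighted mean of Jaccard similarities s_ij
    over item pairs sharing at least one user (assumed edge set E)."""
    users = [set(np.flatnonzero(X[:, i])) for i in range(X.shape[1])]
    num = den = 0.0
    for i, j in combinations(range(X.shape[1]), 2):
        inter = len(users[i] & users[j])
        if inter == 0:                                  # no co-occurrence: not an edge
            continue
        s_ij = inter / len(users[i] | users[j])         # Jaccard similarity
        # w_ij = |intersection|^delta * Szymkiewicz-Simpson (overlap) coefficient
        w_ij = inter ** delta * inter / min(len(users[i]), len(users[j]))
        num += w_ij * s_ij
        den += w_ij
    return num / den if den else 0.0
```

As a sanity check, identical item columns give $\mathcal{H}_w=1$, and items with no shared users contribute no edges at all.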
Following the convention [5, 17, 31, 41], we split the ratings of the entire interaction matrix 8:2 into training and test sets.

Evaluation metrics. We adopt two ranking metrics, i.e., Recall and NDCG. To validate the overall performance and the effect of mitigating popularity bias, we report three measures: Average-over-all (AOA), Tail, and Unbiased evaluation [25, 31, 57]. We split items into head (top 20% popular) and tail (bottom 80% unpopular) groups based on popularity. AOA is measured over both item groups, while Tail is measured only on tail items. For unbiased evaluation, we set $\gamma=2$ following [25, 57] to ensure that the effect of popularity bias is not reflected in the evaluation.

Baseline models. We equip DAN on three linear autoencoder models, i.e., LAE, EASE$^R$ [45], and RLAE [31]. We also used DLAE, EDLAE [46], and RDLAE [31], which add a denoising effect to the previous three models. Since item-adaptive normalization integrates the denoising effect, we skip experiments on the denoising LAEs with DAN. We further employed eight CF models as baselines. MultVAE [27], LightGCN [17], and XSimGCL [60] are representative non-linear models. GF-CF [41], BSPM [5], Turbo-CF [35], SVD-AE [20], and SGFCF [38] are representative linear models. Among these, only five models (MultVAE, GF-CF, BSPM, Turbo-CF, and SVD-AE) can be evaluated under strong generalization. For brevity, we denote EASE$^R$ as EASE.

Reproducibility. All models are implemented in the same framework code as [5, 31, 41]. For LAEs without DAN, $\lambda$ was searched in [10, 20, ..., 500, 1000], and with DAN in [1e-3, 2e-3, ..., 20, 50]. For the denoising LAEs (i.e., DLAE, EDLAE, and RDLAE), the dropout ratio $p$ was searched in [0.1, 0.9] with step size 0.1. $\alpha$ and $\beta$ were searched in [0, 0.5] and [0, 1] with step size 0.1, respectively. For the weak generalization protocol, we use the reported results of GF-CF, BSPM, MultVAE, and LightGCN from [5].
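The evaluation pieces just described (per-user Recall@K and NDCG@K with binary relevance, plus the popularity-based head/tail split) can be sketched as below. This is a minimal illustration, not the paper's evaluation framework; function names and the toy inputs are our own:

```python
import numpy as np

def recall_at_k(scores, targets, k=20):
    """Fraction of a user's held-out items that appear in the top-K ranking."""
    topk = np.argsort(-scores)[:k]
    return len(set(topk) & set(targets)) / len(targets)

def ndcg_at_k(scores, targets, k=20):
    """NDCG@K with binary relevance for one user."""
    topk = np.argsort(-scores)[:k]
    rel = set(targets)
    dcg = sum(1.0 / np.log2(r + 2) for r, i in enumerate(topk) if i in rel)
    idcg = sum(1.0 / np.log2(r + 2) for r in range(min(len(targets), k)))
    return dcg / idcg

def head_tail_split(X, head_frac=0.2):
    """Split items into head (top 20% by popularity) and tail (the rest)."""
    order = np.argsort(-X.sum(axis=0))      # items by descending popularity
    n_head = int(len(order) * head_frac)
    return order[:n_head], order[n_head:]
```

Tail evaluation then simply restricts the target sets to items in the tail group before computing the two metrics.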
For all other models, we reproduced the experiments by referring to the best hyperparameters reported in their original papers. Since linear models are deterministic, we conducted a single run, while the performance of non-linear models was averaged over five seeds. We ran all experiments on an NVIDIA RTX-3090 24GB GPU and an Intel Xeon Gold 6226R CPU.

6 Experimental Results

6.1 Performance Comparison

Strong generalization. Tables 2-3 present the performance of state-of-the-art CF models in the non-LAE-based and LAE-based groups.

Table 2: Performance comparison on ML-20M, Netflix, and MSD under strong generalization. 'LAE_DAN' indicates LAE equipped with our proposed DAN. The best results are marked in bold, and the second best are underlined.

ML-20M       AOA             Tail            Unbiased
Model        R@20    N@20    R@20    N@20    R@20    N@20
MultVAE      0.3895  0.3278  0.0133  0.0077  0.2996  0.0509
GF-CF        0.3250  0.2736  0.0029  0.0022  0.2188  0.0371
BSPM         0.3725  0.3183  0.0104  0.0060  0.2761  0.0463
Turbo-CF     0.3276  0.2728  0.0166  0.0111  0.2336  0.0402
SVD-AE       0.3720  0.3205  0.0075  0.0041  0.2672  0.0454
LAE          0.3757  0.3228  0.0005  0.0001  0.2827  0.0473
EASE         0.3905  0.3390  0.0052  0.0022  0.2857  0.0479
RLAE         0.3913  0.3402  0.0137  0.0069  0.2951  0.0487
DLAE         0.3923  0.3408  0.0084  0.0047  0.2898  0.0477
EDLAE        0.3925  0.3421  0.0066  0.0035  0.2859  0.0480
RDLAE        0.3932  0.3422  0.0123  0.0062  0.2987  0.0489
LAE_DAN      0.3930  0.3414  0.0234  0.0128  0.2925  0.0493
EASE_DAN     0.3950  0.3430  0.0217  0.0120  0.2955  0.0497
RLAE_DAN     0.3956  0.3432  0.0227  0.0127  0.2973  0.0501

Netflix      AOA             Tail            Unbiased
Model        R@20    N@20    R@20    N@20    R@20    N@20
MultVAE      0.3434  0.3129  0.0291  0.0215  0.2432  0.0345
GF-CF        0.2972  0.2724  0.0185  0.0123  0.1868  0.0264
BSPM         0.3163  0.2909  0.0339  0.0227  0.2145  0.0302
Turbo-CF     0.2826  0.2586  0.0265  0.0188  0.1777  0.0279
SVD-AE       0.3206  0.2977  0.0289  0.0186  0.2121  0.0308
LAE          0.3465  0.3237  0.0066  0.0036  0.2357  0.0326
EASE         0.3618  0.3388  0.0404  0.0222  0.2554  0.0351
RLAE         0.3623  0.3392  0.0585  0.0377  0.2606  0.0355
DLAE         0.3621  0.3400  0.0597  0.0381  0.2549  0.0355
EDLAE        0.3659  0.3428  0.0470  0.0279  0.2569  0.0358
RDLAE        0.3661  0.3431  0.0545  0.0344  0.2598  0.0360
LAE_DAN      0.3631  0.3405  0.0623  0.0411  0.2598  0.0363
EASE_DAN     0.3666  0.3433  0.0658  0.0424  0.2646  0.0371
RLAE_DAN     0.3662  0.3434  0.0628  0.0400  0.2628  0.0370

MSD          AOA             Tail            Unbiased
Model        R@20    N@20    R@20    N@20    R@20    N@20
MultVAE      0.2443  0.2270  0.1372  0.0988  0.2013  0.0258
GF-CF        0.2513  0.2457  0.1727  0.1331  0.2137  0.0282
BSPM         0.2682  0.2616  0.2121  0.1583  0.2494  0.0321
Turbo-CF     0.2666  0.2593  0.2153  0.1652  0.2497  0.0324
SVD-AE       0.2859  0.2743  0.1984  0.1379  0.2502  0.0305
LAE          0.2848  0.2740  0.1862  0.1234  0.2568  0.0320
EASE         0.3338  0.3261  0.2504  0.1758  0.3019  0.0377
RLAE         0.3338  0.3261  0.2507  0.1767  0.3021  0.0378
DLAE         0.3288  0.3208  0.2526  0.1863  0.2993  0.0378
EDLAE        0.3336  0.3258  0.2503  0.1782  0.3014  0.0378
RDLAE        0.3341  0.3265  0.2511  0.1784  0.3022  0.0379
LAE_DAN      0.3290  0.3209  0.2530  0.1873  0.2999  0.0380
EASE_DAN     0.3336  0.3259  0.2621  0.1926  0.3071  0.0389
RLAE_DAN     0.3342  0.3265  0.2573  0.1864  0.3049  0.0384

We found three observations: (i) LAE equipped with DAN performs better on high-homophilic datasets. On the high-homophilic datasets (ML-20M, Netflix, MSD), LAE_DAN shows an average AOA performance gain of 9.36% at NDCG@20 compared to LAE, while on the low-homophilic datasets (Gowalla, Yelp2018, Amazon-book), it shows a 6.86% gain. This demonstrates that normalization effectively captures global correlations in datasets with more global patterns. (ii) DAN consistently achieves the best performance in tail and unbiased evaluation, mitigating popularity bias across backbone models and datasets. On ML-20M and Yelp2018, RLAE_DAN yields Tail performance gains of 84.06% and 181.67% at NDCG@20, respectively. (iii) Most LAEs with DAN outperform the existing denoising LAE models (DLAE, EDLAE, RDLAE), thanks to DAN's inherent denoising effect.

Table 3: Performance comparison on Gowalla, Yelp2018, and Amazon-book under strong generalization. 'LAE_DAN' indicates LAE equipped with our proposed DAN. The best results are marked in bold, and the second best are underlined.

Gowalla      AOA             Tail            Unbiased
Model        R@20    N@20    R@20    N@20    R@20    N@20
MultVAE      0.1788  0.1269  0.0698  0.0381  0.1289  0.0256
GF-CF        0.2252  0.1660  0.1151  0.0591  0.1734  0.0343
BSPM         0.2373  0.1757  0.1270  0.0638  0.1849  0.0360
Turbo-CF     0.2281  0.1686  0.1166  0.0627  0.1770  0.0357
SVD-AE       0.2292  0.1717  0.0898  0.0414  0.1639  0.0317
LAE          0.2271  0.1706  0.0799  0.0371  0.1672  0.0326
EASE         0.2414  0.1831  0.0941  0.0428  0.1753  0.0335
RLAE         0.2448  0.1873  0.1243  0.0625  0.1912  0.0370
DLAE         0.2495  0.1891  0.1109  0.0532  0.1881  0.0366
EDLAE        0.2469  0.1859  0.0951  0.0432  0.1790  0.0344
RDLAE        0.2499  0.1900  0.1210  0.0587  0.1923  0.0373
LAE_DAN      0.2491  0.1911  0.1392  0.0741  0.2003  0.0398
EASE_DAN     0.2527  0.1918  0.1306  0.0670  0.1983  0.0392
RLAE_DAN     0.2520  0.1919  0.1332  0.0694  0.1988  0.0393

Yelp2018     AOA             Tail            Unbiased
Model        R@20    N@20    R@20    N@20    R@20    N@20
MultVAE      0.0963  0.0747  0.0193  0.0121  0.0643  0.0074
GF-CF        0.1134  0.0900  0.0155  0.0078  0.0685  0.0081
BSPM         0.1198  0.0951  0.0251  0.0129  0.0776  0.0090
Turbo-CF     0.1144  0.0930  0.0218  0.0133  0.0735  0.0092
SVD-AE       0.1145  0.0923  0.0147  0.0076  0.0693  0.0082
LAE          0.1160  0.0954  0.0086  0.0039  0.0705  0.0086
EASE         0.1144  0.0933  0.0091  0.0042  0.0679  0.0081
RLAE         0.1173  0.0968  0.0127  0.0060  0.0735  0.0089
DLAE         0.1190  0.0971  0.0121  0.0057  0.0724  0.0087
EDLAE        0.1171  0.0957  0.0103  0.0049  0.0698  0.0084
RDLAE        0.1190  0.0976  0.0161  0.0077  0.0741  0.0089
LAE_DAN      0.1230  0.1002  0.0313  0.0175  0.0834  0.0100
EASE_DAN     0.1238  0.1011  0.0294  0.0163  0.0828  0.0100
RLAE_DAN     0.1237  0.1010  0.0302  0.0169  0.0832  0.0101

Amazon-book  AOA             Tail            Unbiased
Model        R@20    N@20    R@20    N@20    R@20    N@20
MultVAE      0.1005  0.0816  0.0374  0.0250  0.0771  0.0096
GF-CF        0.1668  0.1492  0.0988  0.0702  0.1401  0.0195
BSPM         0.1742  0.1569  0.1156  0.0819  0.1546  0.0212
Turbo-CF     0.1720  0.1557  0.0957  0.0662  0.1454  0.0199
SVD-AE       0.1433  0.1239  0.0579  0.0372  0.1065  0.0139
LAE          0.1920  0.1749  0.1012  0.0635  0.1644  0.0220
EASE         0.1912  0.1734  0.0761  0.0444  0.1481  0.0195
RLAE         0.1968  0.1804  0.1057  0.0672  0.1649  0.0221
DLAE         0.1994  0.1820  0.0993  0.0631  0.1637  0.0220
EDLAE        0.1940  0.1756  0.0829  0.0512  0.1523  0.0205
RDLAE        0.2011  0.1834  0.1043  0.0670  0.1663  0.0225
LAE_DAN      0.1979  0.1811  0.1314  0.0886  0.1766  0.0239
EASE_DAN     0.2017  0.1835  0.1136  0.0746  0.1705  0.0231
RLAE_DAN     0.2019  0.1836  0.1226  0.0820  0.1747  0.0236

Table 4: Performance comparison on Gowalla, Yelp2018, and Amazon-book with weak generalization. 'LAE_DAN' indicates LAE equipped with our proposed DAN. The best results are marked in bold, and the second best are underlined.

Dataset        Gowalla         Yelp2018        Amazon-book
Model          R@20    N@20    R@20    N@20    R@20    N@20
MultVAE [27]   0.1641  0.1335  0.0584  0.0450  0.0407  0.0315
LightGCN [17]  0.1830  0.1554  0.0649  0.0530  0.0411  0.0315
XSimGCL [60]   0.1861  0.1580  0.0711  0.0584  0.0541  0.0420
GF-CF [41]     0.1849  0.1536  0.0697  0.0571  0.0710  0.0584
BSPM [5]       0.1920  0.1597  0.0720  0.0593  0.0733  0.0609
Turbo-CF [35]  0.1835  0.1531  0.0693  0.0574  0.0730  0.0611
SVD-AE [20]    0.1860  0.1550  0.0683  0.0571  0.0569  0.0451
SGFCF [38]     0.1899  0.1566  0.0713  0.0588  0.0694  0.0565
LAE            0.1630  0.1295  0.0658  0.0555  0.0746  0.0611
EASE [45]      0.1765  0.1467  0.0657  0.0552  0.0710  0.0566
RLAE [31]      0.1772  0.1467  0.0667  0.0562  0.0754  0.0615
DLAE [46]      0.1839  0.1533  0.0678  0.0570  0.0751  0.0610
EDLAE [46]     0.1844  0.1539  0.0673  0.0565  0.0711  0.0566
RDLAE [31]     0.1845  0.1539  0.0679  0.0569  0.0754  0.0613
LAE_DAN        0.1901  0.1591  0.0703  0.0586  0.0759  0.0627
EASE_DAN       0.1905  0.1594  0.0706  0.0587  0.0762  0.0630
RLAE_DAN       0.1922  0.1605  0.0706  0.0587  0.0762  0.0630

Weak generalization. Table 4 presents the performance of state-of-the-art CF models in three groups: non-linear models, linear models (excluding LAEs), and LAE-based models. (i) Despite weak generalization, DAN consistently enhances LAE model performance.
LAEs equipped with DAN surpass both the baseline LAEs (i.e., LAE, EASE, and RLAE) and their denoising versions, with RLAE_DAN demonstrating superior performance among the DAN-equipped models. (ii) RLAE_DAN achieves better or comparable performance to the state-of-the-art linear model BSPM [5], outperforming BSPM on Amazon-book by up to 3.45% at NDCG@20. In particular, RLAE_DAN performs well on sparse datasets such as Gowalla and Amazon-book. (iii) Furthermore, all LAEs with DAN outperform the non-linear models, with RLAE_DAN notably achieving up to 50% performance gains over XSimGCL [60] at NDCG@20 on Amazon-book.

To validate the generalizability of DAN, we applied it to SLIST [7], a linear model proposed for session-based recommendation. We found that DAN significantly improves the performance of this linear session-based recommender as well. (Refer to Appendix B.2.)

6.2 Hyperparameter Sensitivity

Effect of item normalization. Figure 5 illustrates the head and tail item performance as $\alpha$ is adjusted. For brevity, Figures 5 and 6 and Table 5 report only NDCG@20; Recall@20 showed similar trends. Appendix B.3 shows the results on the remaining datasets, i.e., MSD and Gowalla.

• On the four datasets, Tail performance increases while Head performance decreases as $\alpha$ increases. This is because a higher $\alpha$ strengthens the normalization of the target items; it directly mitigates the popularity bias in the recommendation process and makes unpopular items more likely to be recommended.

Figure 5: NDCG@20 of LAE_DAN over $\alpha$ for item normalization on four datasets ((a) ML-20M, (b) Netflix, (c) Yelp2018, (d) Amazon-book). When adjusting $\alpha$, we keep $\beta$ fixed at its optimal value.

Figure 6: NDCG@20 of LAE_DAN over $\beta$ for user normalization on four datasets ((a) ML-20M, (b) Netflix, (c) Yelp2018, (d) Amazon-book).
Active and Inactive denote the performance of the top 20% of users by activity and of the remaining users, respectively. When adjusting $\beta$, we keep $\alpha$ fixed at its optimal value.

• To understand the importance of adjusting $\alpha$, we compare the difference between the best and worst AOA performance for each dataset: ML-20M and Netflix have differences of 2.62% and 2.51%, respectively, while Yelp2018 and Amazon-book have differences of 9.00% and 6.76%. The relatively smaller performance difference of DAN on ML-20M and Netflix compared to Yelp2018 and Amazon-book is attributed to the larger $Gini_i$ of ML-20M and Netflix. (For statistics on $Gini_i$, refer to Table 1.) It means that recommending popular items can retain an AOA performance advantage on ML-20M and Netflix. Thus, we suggest adjusting $\alpha$ lower for datasets with high $Gini_i$ and higher for datasets with low $Gini_i$.

Table 5: Performance comparison of various normalization methods on ML-20M and Yelp2018. The backbone model is LAE, and the metric is NDCG@20. Appendix B.4 provides the results on the remaining datasets.

Dataset    Method       AOA      Head     Tail
ML-20M     W/O norm     0.3228   0.3264   0.0001
           RW norm      0.3358   0.3386   0.0088
           Sym norm     0.3321   0.3338   0.0184
           User norm    0.3390   0.3428   0.0001
           Item norm    0.3341   0.3379   0.0140
           DAN (ours)   0.3414   0.3433   0.0128
Yelp2018   W/O norm     0.0954   0.1285   0.0039
           RW norm      0.0840   0.1128   0.0044
           Sym norm     0.0922   0.1052   0.0217
           User norm    0.0966   0.1274   0.0115
           Item norm    0.0965   0.1225   0.0249
           DAN (ours)   0.1002   0.1212   0.0175

Effect of user normalization. Figure 6 depicts the performance of two user groups (i.e., Active and Inactive) depending on $\beta$.

• For the high-homophilic datasets, i.e., ML-20M and Netflix, both Active and Inactive performance trend up and then down as $\beta$ increases. This is because these datasets have global interaction patterns favoring popular items. Specifically, ML-20M and Netflix have large $\mathcal{H}_w$ values of 0.109 and 0.127, respectively.
By moderately penalizing active users, DAN can improve the performance of both user groups, because it learns global item correlations sufficiently from inactive users while discarding unnecessary patterns (e.g., interaction noise) from active users. However, if active users are over-penalized, as in RW and Sym normalization, both user groups suffer a performance drop.

• For the low-homophilic datasets, i.e., Yelp2018 and Amazon-book, both Active and Inactive performance consistently decrease as $\beta$ increases. This is due to the diverse local patterns in these datasets; $\mathcal{H}_w$ for Yelp2018 and Amazon-book is 0.044 and 0.059, respectively. Thus, penalizing active users degrades both performances, because it is difficult to learn the diverse high-frequency patterns from inactive users alone. To summarize, we recommend tuning $\beta$ higher for datasets with high $\mathcal{H}_w$ and lower for datasets with low $\mathcal{H}_w$.

6.3 Comparing Various Normalizations

Table 5 reports the performance of applying different normalization methods to LAE under strong generalization. We also conducted a case study of these normalization methods in Appendix B.5.

• On both datasets, DAN demonstrates the highest AOA performance while significantly improving Tail performance, meaning that it can adaptively normalize items and users. For ML-20M, RW and Sym normalization improve AOA, Head, and Tail performance compared to LAE without normalization (i.e., W/O). In contrast, for Yelp2018, both normalizations decrease AOA performance but improve Tail performance compared to W/O. This reveals the limitation of RW and Sym normalization: they cannot adapt the normalization to the dataset.

Table 6: Efficiency comparison of DAN and SOTA models on three datasets. Train and Infer indicate runtime (in seconds) for training and inference. The inference time is measured with a batch size of 4,096 over all test users.
Dataset        Gowalla         Yelp2018        Amazon-book
Model          Train   Infer   Train   Infer   Train   Infer
XSimGCL [60]   967     6       1,420   6       9,587   18
BSPM [5]       74      257     80      180     43      3,211
LAE            132     28      94      26      1,279   215
LAE_DAN        139     28      98      26      1,346   215

• Item-adaptive normalization (i.e., Item norm) consistently improves Tail performance. For ML-20M and Yelp2018, the Tail performance gains are 13,900% and 538%, respectively. On the other hand, user-adaptive normalization (i.e., User norm) shows different tendencies across datasets. For ML-20M, it yields a gain of 5.02% in Head performance only, while for Yelp2018, it yields a gain of 195% in Tail performance only. This is because users in ML-20M strongly prefer popular items, while users in Yelp2018 prefer popular and unpopular items similarly.

6.4 Efficiency

Table 6 compares the computational costs of DAN and the state-of-the-art models (i.e., XSimGCL [60] and BSPM [5]).³ (i) LAE demonstrates superior speed in both training and inference compared to the other models. On Gowalla, LAE achieves 608% and 198% faster total training-plus-inference time than XSimGCL and BSPM, respectively. (ii) The integration of DAN introduces minimal overhead, with LAE_DAN showing only 5.30%, 4.26%, and 5.24% increases in training time across the three datasets. This additional cost is attributed to the multiplication of $\mathbf{D}_U$ and $\mathbf{D}_I$ with the gram matrix $\mathbf{X}^\top\mathbf{X}$ during training. The efficiency of DAN enables rapid hyperparameter optimization, particularly as LAE_DAN requires only three hyperparameters (i.e., $\lambda$, $\alpha$, and $\beta$), whereas XSimGCL and BSPM require eight and seven, respectively.

7 Related Work

Linear recommender models. These are classified into latent factor-based models and neighborhood-based models. While latent factor-based models decompose the user-item interaction matrix into lower-dimensional latent factors, neighborhood-based models typically use a co-occurrence matrix as an item-item similarity matrix.
SLIM [33] is an item neighborhood-based model trained with a ridge-regression objective. EASE$^R$ [45] derives a closed-form solution from SLIM [33] using L2 regularization and a zero-diagonal constraint. DLAE/EDLAE [46] learn with stronger regularization for popular items, while RLAE/RDLAE [31] relax the diagonal constraints to emphasize unpopular items. Unlike RLAE, which indirectly adjusts the effect of users/items via the weight matrix $\mathbf{B}$, our work directly modulates the user-item matrix $\mathbf{X}$. Furthermore, HIGHER [48] leverages high-order connectivity, where three or more items appear in one user simultaneously. Recently, SVD-AE [20] utilizes truncated SVD on the reconstructed matrix to improve robustness against noise in the user-item matrix, and SGFCF [38] applies graph filtering generalizable to noise removal, tailored to dataset density variations.

Normalization. In recommender systems, RW [10, 37] and Sym normalization methods [5, 14, 17, 53] are widely used to relieve popularity bias [2, 62] by suppressing the effects of active users and popular items. Some works [10, 37] using RW normalization prevent the transition probability to popular item nodes in a user-item bipartite graph from becoming too large. Also, Sym normalization [5, 17, 53, 60] is used in graph convolutional networks (GCNs) to prevent some nodes from propagating dominantly throughout the graph. Steck [44] presents the only work on LAEs that makes use of normalization, by heuristically applying column-wise weighting to the closed-form solution of EASE$^R$ [45].⁴ Also, Zhao et al. [61] perform normalization on the adjacency matrix during the neighborhood aggregation process of GCN-based models.

³ All models were evaluated on CPU for fair comparison, except XSimGCL [60], which required a GPU for reasonable runtime.
Recently, SGFCF [38] applies normalization to reduce high-frequency signals for denoising, but it lacks a distinction between user and item normalization in its popularity-bias analysis. Meanwhile, a few studies [3, 16, 24] have shown the effectiveness of normalization in latent space by resizing the length of user/item embeddings. In contrast, we focus on normalization suitable for efficient linear models without embeddings.

8 Conclusion

This paper proposed a simple yet effective normalization, named Data-Adaptive Normalization (DAN). Existing LAEs equipped with DAN demonstrated up to 128.57% and 12.36% performance gains in long-tail item and unbiased evaluation scenarios across six benchmark datasets. Additionally, DAN can be easily tailored to the characteristics of a dataset, such as the skewness of its item distribution (i.e., $Gini_i$) and its homophily ratio (i.e., $\mathcal{H}_w$). We observed that the RW and Sym normalization methods alleviate popularity bias but use fixed weights between source/target items and users. Unlike these normalizations, DAN dynamically adjusts source/target item popularity and neighborhood biases. As future work, DAN can be extended to neural models by normalizing the input user-item matrix or intermediate layers.

Acknowledgments

This work was partly supported by the Institute of Information & communications Technology Planning & evaluation (IITP) grant and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2019-II190421, IITP-2025-RS-2020-II201821, RS-2022-II221045, IITP-2025-RS-2024-00437633, and RS-2025-00564083, each contributing 20% to this research).

References

[1] Huiyuan Chen, Yusan Lin, Menghai Pan, Lan Wang, Chin-Chia Michael Yeh, Xiaoting Li, Yan Zheng, Fei Wang, and Hao Yang. 2022. Denoising Self-Attentive Sequential Recommendation. In RecSys. 92–101.
[2] Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023.
Bias and Debias in Recommender System: A Survey and Future Directions. ACM Trans. Inf. Syst. 41, 3 (2023), 67:1–67:39.
[3] Jiawei Chen, Junkang Wu, Jiancan Wu, Xuezhi Cao, Sheng Zhou, and Xiangnan He. 2023. Adap-$\tau$: Adaptively Modulating Embedding Magnitude for Recommendation. In WWW. 1085–1096.
[4] Jin Yao Chin, Yile Chen, and Gao Cong. 2022. The Datasets Dilemma: How Much Do We Really Know About Recommendation Datasets?. In WSDM, K. Selcuk Candan, Huan Liu, Leman Akoglu, Xin Luna Dong, and Jiliang Tang (Eds.). 141–149.
[5] Jeongwhan Choi, Seoyoung Hong, Noseong Park, and Sung-Bae Cho. 2023. Blurring-Sharpening Process Models for Collaborative Filtering. In SIGIR. 1096–1106.
[6] Jeongwhan Choi, Jinsung Jeon, and Noseong Park. 2021. LT-OCF: Learnable-Time ODE-based Collaborative Filtering. In CIKM. 251–260.
[7] Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, and Jongwuk Lee. 2021. Session-aware Linear Item-Item Models for Session-based Recommendation. In WWW. 2186–2197.
[8] Minjin Choi, Jinhong Kim, Joonseok Lee, Hyunjung Shim, and Jongwuk Lee. 2022. S-Walk: Accurate and Scalable Session-based Recommendation with Random Walks. In WSDM. 150–160.
[9] Fabian Christoffel, Bibek Paudel, Chris Newell, and Abraham Bernstein. 2015. Blockbusters and Wallflowers: Accurate, Diverse, and Scalable Recommendations with Random Walks. In RecSys. 163–170.
[10] Colin Cooper, Sang-Hyuk Lee, Tomasz Radzik, and Yiannis Siantos. 2014. Random walks in recommender systems: exact computation and simulations. In WWW. 811–816.
[11] Maurizio Ferrari Dacrema, Simone Boglio, Paolo Cremonesi, and Dietmar Jannach. 2021. A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research. ACM Trans. Inf. Syst. 39, 2 (2021), 20:1–20:49.

⁴ Appendix A.3 shows the relation between DAN and the normalization of [44].
[12] Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. 2019. Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In RecSys. 101–109.
[13] Yushun Dong, Jundong Li, and Tobias Schnabel. 2023. When Newer is Not Better: Does Deep Learning Really Benefit Recommendation From Implicit Feedback?. In SIGIR.
[14] Hao-Ming Fu, Patrick Poirson, Kwot Sin Lee, and Chen Wang. 2022. Revisiting Neighborhood-based Link Prediction for Collaborative Filtering. In WWW. 1009–1018.
[15] David Goldberg, David A. Nichols, Brian M. Oki, and Douglas B. Terry. 1992. Using Collaborative Filtering to Weave an Information Tapestry. Commun. ACM 35, 12 (1992), 61–70.
[16] Priyanka Gupta, Diksha Garg, Pankaj Malhotra, Lovekesh Vig, and Gautam Shroff. 2019. NISER: Normalized Item and Session Representations with Graph Neural Networks. CoRR abs/1909.04276.
[17] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-Dong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR. 639–648.
[18] Jonathan L. Herlocker, Joseph A. Konstan, Al Borchers, and John Riedl. 1999. An Algorithmic Framework for Performing Collaborative Filtering. In SIGIR. 230–237.
[19] Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based Recommendations with Recurrent Neural Networks. In ICLR.
[20] Seoyoung Hong, Jeongwhan Choi, Yeon-Chang Lee, Srijan Kumar, and Noseong Park. 2024. SVD-AE: Simple Autoencoders for Collaborative Filtering. In IJCAI.
[21] Olivier Jeunen, Jan Van Balen, and Bart Goethals. 2020. Closed-Form Models for Collaborative Filtering with Side-Information. In RecSys. 651–656.
[22] Wei Jiang, Xinyi Gao, Guandong Xu, Tong Chen, and Hongzhi Yin. 2024. Challenging Low Homophily in Social Recommendation. In WWW. 3476–3484.
[23] Wang-Cheng Kang and Julian J. McAuley. 2018. Self-Attentive Sequential Recommendation. In ICDM. 197–206.
[24] Dain Kim, Jinhyeok Park, and Dongwoo Kim. 2023. Test-Time Embedding Normalization for Popularity Bias Mitigation. In CIKM. 4023–4027.
[25] Jae-woong Lee, Seongmin Park, Joonseok Lee, and Jongwuk Lee. 2022. Bilateral Self-unbiased Learning from Biased Implicit Feedback. In SIGIR. 29–39.
[26] Jing Li, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Tao Lian, and Jun Ma. 2017. Neural Attentive Session-based Recommendation. In CIKM. 1419–1428.
[27] Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, and Tony Jebara. 2018. Variational Autoencoders for Collaborative Filtering. In WWW. 689–698.
[28] Sam Lobel, Chunyuan Li, Jianfeng Gao, and Lawrence Carin. 2020. RaCT: Toward Amortized Ranking-Critical Training For Collaborative Filtering. In ICLR.
[29] Jianxin Ma, Chang Zhou, Hongxia Yang, Peng Cui, Xin Wang, and Wenwu Zhu. 2020. Disentangled Self-Supervision in Sequential Recommenders. In KDD. 483–491.
[30] Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, and Xiuqiang He. 2021. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM. 1253–1262.
[31] Jaewan Moon, Hye-young Kim, and Jongwuk Lee. 2023. It's Enough: Relaxing Diagonal Constraints in Linear Autoencoders for Recommendation. In SIGIR.
[32] Zhenyu Mu, Jianghao Lin, Xiaoyu Zhu, Weinan Zhang, and Yong Yu. 2024. Invariant Graph Contrastive Learning for Mitigating Neighborhood Bias in Graph Neural Network Based Recommender Systems. In ICANN, Vol. 15020. 143–158.
[33] Xia Ning and George Karypis. 2011. SLIM: Sparse Linear Methods for Top-N Recommender Systems. In ICDM. 497–506.
[34] Antonio Ortega, Pascal Frossard, Jelena Kovacevic, José M. F. Moura, and Pierre Vandergheynst. 2018. Graph Signal Processing: Overview, Challenges, and Applications. Proc. IEEE 106, 5 (2018), 808–828.
[35] Jin-Duk Park, Yong-Min Shin, and Won-Yong Shin. 2024. Turbo-CF: Matrix Decomposition-Free Graph Filtering for Fast Recommendation. In SIGIR.
[36] Seongmin Park, Mincheol Yoon, Minjin Choi, and Jongwuk Lee. 2025. Temporal Linear Item-Item Model for Sequential Recommendation. In WSDM.
[37] Bibek Paudel, Fabian Christoffel, Chris Newell, and Abraham Bernstein. 2017. Updatable, Accurate, Diverse, and Scalable Recommendations for Interactive Applications. ACM Trans. Interact. Intell. Syst. 7, 1 (2017), 1:1–1:34.
[38] Shaowen Peng, Xin Liu, Kazunari Sugiyama, and Tsunenori Mine. 2024. How Powerful is Graph Filtering for Recommendation. In KDD.
[39] Steffen Rendle, Li Zhang, and Yehuda Koren. 2019. On the Difficulty of Evaluating Baselines: A Study on Recommender Systems. CoRR abs/1905.01395 (2019).
[40] Yuta Saito, Suguru Yaginuma, Yuta Nishino, Hayato Sakata, and Kazuhide Nakata. 2020. Unbiased Recommender Learning from Missing-Not-At-Random Implicit Feedback. In WSDM. 501–509.
[41] Yifei Shen, Yongji Wu, Yao Zhang, Caihua Shan, Jun Zhang, Khaled B. Letaief, and Dongsheng Li. 2021. How Powerful is Graph Convolution for Recommendation?. In CIKM. 1619–1629.
[42] Ilya Shenbin, Anton Alekseev, Elena Tutubalina, Valentin Malykh, and Sergey I. Nikolenko. 2020. RecVAE: A New Variational Autoencoder for Top-N Recommendations with Implicit Feedback. In WSDM. 528–536.
[43] Yehjin Shin, Jeongwhan Choi, Hyowon Wi, and Noseong Park. 2024. An Attentive Inductive Bias for Sequential Recommendation beyond the Self-Attention. In AAAI. 8984–8992.
[44] Harald Steck. 2019. Collaborative Filtering via High-Dimensional Regression. CoRR abs/1904.13033 (2019).
[45] Harald Steck. 2019. Embarrassingly Shallow Autoencoders for Sparse Data. In WWW. 3251–3257.
[46] Harald Steck. 2020. Autoencoders that don't overfit towards the Identity. In NeurIPS. 19598–19608.
[47] Harald Steck, Maria Dimakopoulou, Nickolai Riabov, and Tony Jebara. 2020. ADMM SLIM: Sparse Recommendations for Many Users. In WSDM. 555–563.
[48] Harald Steck and Dawen Liang. 2021.
Negative Interactions for Improved Collaborative Filtering: Don't go Deeper, go Higher. In RecSys. 34–43.
[49] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. In CIKM. 1441–1450.
[50] Zhu Sun, Di Yu, Hui Fang, Jie Yang, Xinghua Qu, Jie Zhang, and Cong Geng. 2020. Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible Evaluation and Fair Comparison. In RecSys. 23–32.
[51] Vojtech Vancura, Rodrigo Alves, Petr Kasalický, and Pavel Kordík. 2022. Scalable Linear Shallow Autoencoder for Collaborative Filtering. In RecSys. 604–609.
[52] Vijay Verma and Rajesh Kumar Aggarwal. 2020. A comparative analysis of similarity measures akin to the Jaccard index in collaborative recommendations: empirical and theoretical perspective. Soc. Netw. Anal. Min. 10, 1 (2020), 43.
[53] Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural Graph Collaborative Filtering. In SIGIR. 165–174.
[54] Tianxin Wei, Fuli Feng, Jiawei Chen, Ziwei Wu, Jinfeng Yi, and Xiangnan He. 2021. Model-agnostic counterfactual reasoning for eliminating popularity bias in recommender system. In KDD. 1791–1800.
[55] Yao Wu, Christopher DuBois, Alice X. Zheng, and Martin Ester. 2016. Collaborative Denoising Auto-Encoders for Top-N Recommender Systems. In WSDM. 153–162.
[56] Yuchen Yan, Yuzhong Chen, Huiyuan Chen, Minghua Xu, Mahashweta Das, Hao Yang, and Hanghang Tong. 2023. From Trainable Negative Depth to Edge Heterophily in Graphs. In NeurIPS.
[57] Longqi Yang, Yin Cui, Yuan Xuan, Chenyang Wang, Serge J. Belongie, and Deborah Estrin. 2018. Unbiased offline recommender evaluation for missing-not-at-random implicit feedback. In RecSys. 279–287.
[58] Mingqi Yang, Yanming Shen, Rui Li, Heng Qi, Qiang Zhang, and Baocai Yin. 2022. A New Perspective on the Effects of Spectrum in Graph Neural Networks. In ICML. 25261–25279.
[59] Yaowen Ye, Lianghao Xia, and Chao Huang. 2023. Graph Masked Autoencoder for Sequential Recommendation. In SIGIR. 321–330.
[60] Junliang Yu, Xin Xia, Tong Chen, Lizhen Cui, Nguyen Quoc Viet Hung, and Hongzhi Yin. 2024. XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation. IEEE Trans. Knowl. Data Eng. 36, 2 (2024), 913–926.
[61] Minghao Zhao, Le Wu, Yile Liang, Lei Chen, Jian Zhang, Qilin Deng, Kai Wang, Xudong Shen, Tangjie Lv, and Runze Wu. 2022. Investigating Accuracy-Novelty Performance for Graph-based Collaborative Filtering. In SIGIR. 50–59.
[62] Ziwei Zhu, Yun He, Xing Zhao, Yin Zhang, Jianling Wang, and James Caverlee. 2021. Popularity-Opportunity Bias in Collaborative Filtering. In WSDM. 85–93.

Why is Normalization Necessary for Linear Recommenders? SIGIR '25, July 13–18, 2025, Padua, Italy

A Theoretical Proofs

A.1 Item-Adaptive Normalization

The LAE solution with the item-adaptive normalized gram matrix $D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}$ can be expanded as follows.

$$\hat{B}_{LAE}\big(P = D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}\big) = \big(D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha} + \lambda I\big)^{-1} D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}$$
$$= \big(D_I^{-(1-\alpha)} (X^\top X + \lambda D_I) D_I^{-\alpha}\big)^{-1} D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}$$
$$= D_I^{\alpha} (X^\top X + \lambda D_I)^{-1} D_I^{1-\alpha} D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}$$
$$= D_I^{\alpha} (X^\top X + \lambda D_I)^{-1} X^\top X D_I^{-\alpha} = D_I^{\alpha} \hat{B}_{DLAE} D_I^{-\alpha} \tag{23}$$

In Eq. (23), $(X^\top X + \lambda D_I)^{-1} X^\top X$ is identical to the closed-form solution $\hat{B}_{DLAE}$ of DLAE [46] with dropout probability $p = \frac{\lambda}{1+\lambda}$.⁵

A.2 User-Adaptive Normalization

For any non-zero vector $v \in \mathbb{R}^n$, the Rayleigh quotient of $\tilde{P}_{user}(\beta) = X^\top D_U^{-\beta} X$ is:

$$R(\tilde{P}_{user}(\beta), v) = \frac{v^\top \tilde{P}_{user}(\beta) v}{v^\top v} = \frac{(Xv)^\top D_U^{-\beta} (Xv)}{v^\top v} = \frac{\sum_{k=1}^{m} (Xv)_k^2 \, d_k^{-\beta}}{v^\top v}$$

where $(Xv)_k$ is the weighted sum of item interactions for user $k$. Since $d_k > 1$ for all users $k$, for $\beta_1 > \beta_2$ we have $d_k^{-\beta_1} < d_k^{-\beta_2}$. Therefore,

$$R(\tilde{P}_{user}(\beta_1), v) = \frac{\sum_{k=1}^{m} (Xv)_k^2 \, d_k^{-\beta_1}}{v^\top v} < \frac{\sum_{k=1}^{m} (Xv)_k^2 \, d_k^{-\beta_2}}{v^\top v} = R(\tilde{P}_{user}(\beta_2), v)$$

By Lemma 4.2, since this inequality holds for all non-zero vectors $v$ and $\tilde{P}_{user}$ is symmetric, we can conclude that:

$$\mu_i(\beta_1) < \mu_i(\beta_2) \quad \text{for all } i \tag{24}$$

By Lemma 4.1, since $\gamma_i(\beta) = \frac{\mu_i(\beta)}{\mu_i(\beta) + \lambda}$ is strictly increasing in $\mu_i(\beta)$, we can conclude that $\gamma_i(\beta_1) < \gamma_i(\beta_2)$ for all $i$.
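The identity in Eq. (23) can also be checked numerically. Below is a small NumPy sketch (toy random data; all variable names are ours, not from the paper's code) that verifies $\hat{B}_{LAE}(D_I^{-(1-\alpha)} X^\top X D_I^{-\alpha}) = D_I^{\alpha} \hat{B}_{DLAE} D_I^{-\alpha}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 20                          # toy sizes: m users, n items
X = (rng.random((m, n)) < 0.2).astype(float)
X[:, X.sum(axis=0) == 0] = 1.0         # guard: no empty items, so D_I is invertible

lam, alpha = 5.0, 0.5
G = X.T @ X                            # gram matrix X^T X; G_ii = popularity of item i
D = np.diag(np.diag(G))                # item-popularity diagonal matrix D_I
Dm = lambda p: np.diag(np.diag(G) ** p)  # D_I^p

# Left-hand side: LAE closed form with the item-adaptive normalized gram matrix
P = Dm(-(1 - alpha)) @ G @ Dm(-alpha)
B_lhs = np.linalg.inv(P + lam * np.eye(n)) @ P

# Right-hand side: D_I^alpha * B_DLAE * D_I^{-alpha}, with B_DLAE = (G + lam*D_I)^{-1} G
B_dlae = np.linalg.inv(G + lam * D) @ G
B_rhs = Dm(alpha) @ B_dlae @ Dm(-alpha)

assert np.allclose(B_lhs, B_rhs)       # the two closed forms coincide
```

The check mirrors the derivation step by step: the same $\lambda D_I$ term appears on both sides because $D_I^{1-\alpha} D_I^{\alpha} = D_I$.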
A.3 Relation between DAN and Column-wise Item Normalization [44]

We theoretically show that DAN includes column-wise item normalization [44]. The LAE solution for column-wise item normalization unfolds as follows:

$$\frac{\partial}{\partial B}\Big(\min_B \|XD_I^{-\gamma} - XB\|_F^2 + \lambda\|B\|_F^2\Big) = 0 \tag{25}$$
$$\Leftrightarrow -2X^\top(XD_I^{-\gamma} - XB) + 2\lambda B = 0 \tag{26}$$
$$\therefore \hat{B} = (X^\top X + \lambda I)^{-1} X^\top X D_I^{-\gamma} = \hat{B}_{LAE} D_I^{-\gamma}, \tag{27}$$

where $\gamma$ adjusts the degree of item normalization.

Column-wise item normalization [44] solely mitigates the popularity of target items, whereas DAN employs $D_I^{-\alpha}$ in the first term of $XD_I^{-\alpha} - XD_I^{-\alpha}B$ in Eq. (16). We also report the performance of column-wise item normalization in Appendix B.4. The results show a performance improvement for tail items, but it does not reach DAN's performance.

⁵ A study [46] found that applying dropout to LAEs is equivalent to performing a weighted regularization by item popularity: $\hat{B}_{DLAE} = (P + \Lambda)^{-1}P$, where $\Lambda = \frac{p}{1-p} \cdot \mathrm{diagMat}(\mathrm{diag}(P))$. Since $P_{i,i}$ means the popularity of item $i$, it is straightforward to see that $\mathrm{diagMat}(\mathrm{diag}(P))$ is equal to $D_I$.

[Figure 7: Relative performance drop at NDCG@20 on ML-20M and Yelp2018 (panels: (a) ML-20M AOA, (b) ML-20M Tail, (c) Yelp2018 AOA, (d) Yelp2018 Tail). The y-axis is the ratio of the performance decrease with the noisy training dataset relative to the performance with the original training dataset.]

B Additional Experiments

B.1 Robustness to Synthetic Noises

Following the strategy of [1, 29], we evaluate the robustness of the item-adaptive normalization of DAN. We randomly replace r% of the observed interactions with unobserved interactions, increasing the noise ratio r over 0, 2, 5, 10, and 20%. Figure 7 depicts the performance of the four models (i.e., LAE, DLAE, LAE_ItemNorm, and LAE_DAN) with different noise ratios. (i) The performance drop of LAE models on ML-20M is less than that on Yelp2018. (ii) We observe that DLAE, LAE_ItemNorm, and LAE_DAN have the lowest relative performance drop in AOA performance.
This is due to the denoising ability of item-adaptive normalization, as proven in Appendix A.1. (iii) In Figure 7-(b,d), LAE has the largest relative degradation, while LAE_ItemNorm and LAE_DAN have a lower relative performance drop. This also shows that the denoising effect of DAN is effective for tail items.

B.2 Applying DAN to Linear Models in Session-based Recommendation

To demonstrate the generalizability of DAN, we perform additional experiments in a session-based recommendation environment.

Backbone models. SLIST [7] is the only session-based recommendation model that utilizes a linear item-item matrix.⁶ Thus, we utilize SLIST and its two key components, SLIS and SLIT, as the backbone models. SLIS adjusts the diagonal constraints of EASER [45]

⁶ TALE [36] is a linear item-item model for sequential recommendation, but we did not adopt it as a backbone model because it requires additional temporal information (i.e., timestamps).

SIGIR '25, July 13–18, 2025, Padua, Italy · Seongmin Park, Mincheol Yoon, Hye-young Kim, and Jongwuk Lee

Table 7: Performance comparison for existing linear session-based recommender models. 'SLIST DAN' indicates SLIST equipped with our proposed DAN. The best results are marked in bold, and the second best results are underlined.
| Dataset | Model | AOA R@20 | AOA M@20 | Tail R@20 | Tail M@20 | Unbiased R@20 | Unbiased M@20 |
|---|---|---|---|---|---|---|---|
| Diginetica | SLIS | 0.4988 | 0.1808 | 0.3583 | 0.1206 | 0.0175 | 0.0052 |
| Diginetica | SLIT | 0.4311 | 0.1475 | 0.2897 | 0.0830 | 0.0109 | 0.0042 |
| Diginetica | SLIST | 0.4910 | 0.1818 | 0.3919 | 0.1366 | 0.0205 | 0.0062 |
| Diginetica | SLIS DAN | 0.5067 | 0.1858 | 0.3943 | 0.1497 | 0.0224 | 0.0066 |
| Diginetica | SLIT DAN | 0.4430 | 0.1472 | 0.2943 | 0.0833 | 0.0110 | 0.0045 |
| Diginetica | SLIST DAN | 0.5087 | 0.1861 | 0.3948 | 0.1496 | 0.0223 | 0.0066 |
| RetailRocket | SLIS | 0.5867 | 0.3625 | 0.5153 | 0.3261 | 0.0488 | 0.0091 |
| RetailRocket | SLIT | 0.5340 | 0.3106 | 0.4249 | 0.2319 | 0.0301 | 0.0081 |
| RetailRocket | SLIST | 0.5907 | 0.3769 | 0.5177 | 0.3244 | 0.0493 | 0.0103 |
| RetailRocket | SLIS DAN | 0.5939 | 0.3767 | 0.5448 | 0.3667 | 0.0556 | 0.0172 |
| RetailRocket | SLIT DAN | 0.5413 | 0.3239 | 0.4451 | 0.2693 | 0.0322 | 0.0117 |
| RetailRocket | SLIST DAN | 0.6020 | 0.3799 | 0.5507 | 0.3719 | 0.0558 | 0.0163 |
| Yoochoose | SLIS | 0.5907 | 0.2580 | 0.2944 | 0.1319 | 0.0047 | 0.0015 |
| Yoochoose | SLIT | 0.6136 | 0.2809 | 0.3488 | 0.1360 | 0.0045 | 0.0028 |
| Yoochoose | SLIST | 0.6332 | 0.2830 | 0.4058 | 0.1812 | 0.0043 | 0.0024 |
| Yoochoose | SLIS DAN | 0.6317 | 0.2773 | 0.4057 | 0.1975 | 0.0082 | 0.0056 |
| Yoochoose | SLIT DAN | 0.6204 | 0.2829 | 0.3728 | 0.1517 | 0.0049 | 0.0043 |
| Yoochoose | SLIST DAN | 0.6375 | 0.2909 | 0.4174 | 0.1827 | 0.0080 | 0.0049 |

to recommend repeated items, and SLIT organizes the source/target matrix of EASER into items that have interacted with the user's past/future to predict the next item. Further, SLIST is represented as a closed-form solution that optimizes both SLIS and SLIT at once.

Implementation details. To adapt DAN to SLIS, SLIT, and SLIST, we modified their objective functions. Since SLIS has the same objective function as LAE, DAN is applied in the same way as for LAE, i.e., Eq. (16). For the objective function of SLIT, the user-item interaction matrix X is divided into two matrices: (i) the past matrix S as input and (ii) the future matrix T as the target. Therefore, the diagonal degree matrix D_I is first obtained from X. Then, the two separated matrices are multiplied by D_I^{-α}, i.e., S D_I^{-α} and T D_I^{-α}. Since SLIT divides the user-item interaction matrix X into the past matrix S and the future matrix T, the length of the user history must be calculated differently from traditional recommendation (Eq. (16)).
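As a sketch of the adaptation described above (our illustrative NumPy code, not the authors' implementation; the function name and dense matrices are assumptions), the key point is that the item degrees come from the full interaction matrix X rather than from S or T separately:

```python
import numpy as np

def slit_dan_inputs(S, T, alpha=0.5):
    """Item-normalize SLIT's past/future matrices with degrees from X = S + T.

    S, T: dense user-item matrices for past/future interactions (toy, dense).
    Returns S @ D_I^{-alpha} and T @ D_I^{-alpha}.
    """
    X = S + T                                   # full interaction matrix
    d_item = np.maximum(X.sum(axis=0), 1.0)     # item degrees computed from X
    D_neg_alpha = np.diag(d_item ** -alpha)     # D_I^{-alpha}
    return S @ D_neg_alpha, T @ D_neg_alpha
```

Using degrees from X keeps the normalization consistent across the two halves of each user's history, matching the text's note that the user-history length must be computed on X rather than per matrix.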
Datasets and Evaluation protocols. We conducted experiments on three benchmark datasets collected from e-commerce services: Yoochoose⁷, Diginetica⁸, and RetailRocket⁹. Following [7, 8], we adopted an iterative revealing scheme to consider the situation where users repeatedly purchase the same item. We randomly divided the training, validation, and test sets into 8:1:1 on the session side. For evaluation metrics, we adopted two ranking metrics, i.e., Recall and MRR, and also introduced two other metrics to verify the ability to eliminate popularity bias, i.e., Tail and Unbiased evaluation.

⁷ https://www.kaggle.com/datasets/chadgostopp/recsys-challenge-2015
⁸ https://competitions.codalab.org/competitions/11161
⁹ https://darel13712.github.io/rs_datasets/Datasets/retail_rocket/

[Figure 8: NDCG@20 of LAE_DAN over α and β for user normalization on MSD and Gowalla (panels (a)–(d)).]

Experimental results. Table 7 shows the performance of three linear models (i.e., SLIS, SLIT, and SLIST) with DAN. We observe the following two findings: (i) Applying DAN to linear models for session-based recommendation also improves performance in Tail and Unbiased evaluation. In particular, the average gain of MRR@20 in Tail and Unbiased evaluation is 10.33% and 43.13%, respectively. This demonstrates that DAN can mitigate popularity bias in session-based recommendation as well as in traditional recommendation. (ii) Incorporating DAN into linear models results in an AOA performance improvement. The average gain of MRR@20 on the Yoochoose, Diginetica, and RetailRocket datasets is 2.56%, 1.64%, and 3.00%, respectively. Specifically, for SLIST, the performance gain is 2.36% and 2.79% on the Diginetica and Yoochoose datasets, respectively. These results demonstrate that DAN appropriately mitigates popularity bias while maintaining or improving overall performance.
B.3 Hyperparameter Sensitivity

Figure 8 shows the performance of LAE_DAN when adjusting α and β on the two remaining datasets, i.e., MSD and Gowalla. Due to their similar Gini_i values, both datasets exhibit analogous tendencies in AOA, Head, and Tail performance with respect to α. Meanwhile, Gowalla's higher H_w value leads to superior AOA performance at lower β values than MSD. These findings highlight how dataset characteristics directly influence the optimal hyperparameters.

B.4 Comparing Various Normalization

Table 8 shows the experimental results for the remaining datasets with respect to Table 5. It shows the performance of different normalization methods on each dataset. (i) On the ML-20M, Netflix, and MSD datasets, RW normalization outperforms W/O normalization, but on the remaining datasets, both RW and Sym normalization degrade AOA performance. However, DAN consistently achieves the highest AOA performance on all datasets, and the performance on tail items is also significantly improved. (ii) Column-wise item normalization (i.e., Col norm) [44] is effective for tail items compared to W/O normalization. It simply mitigates item popularity bias. In particular, it is more effective on datasets with low Gini_i, such as Yelp2018.

Table 8: Performance comparison for normalization methods on six datasets with the strong generalization protocol. 'Most pop' refers to recommending only the most popular items. 'Col norm' refers to the column-wise item normalization [44]. The backbone model is LAE. Each metric is NDCG@20. The best results are marked in bold, and the second best models are underlined.
| Dataset | Method | AOA | Head | Tail |
|---|---|---|---|---|
| ML-20M | Most pop | 0.1355 | 0.1366 | 0.0000 |
| ML-20M | W/O norm | 0.3228 | 0.3264 | 0.0001 |
| ML-20M | Col norm | 0.3087 | 0.3122 | 0.0023 |
| ML-20M | RW norm | 0.3358 | 0.3386 | 0.0088 |
| ML-20M | Sym norm | 0.3321 | 0.3338 | 0.0184 |
| ML-20M | User norm | 0.3390 | 0.3428 | 0.0001 |
| ML-20M | Item norm | 0.3341 | 0.3379 | 0.0140 |
| ML-20M | DAN (ours) | 0.3414 | 0.3433 | 0.0128 |
| Netflix | Most pop | 0.1097 | 0.1118 | 0.0000 |
| Netflix | W/O norm | 0.3237 | 0.3316 | 0.0036 |
| Netflix | Col norm | 0.3230 | 0.3281 | 0.0182 |
| Netflix | RW norm | 0.3346 | 0.3358 | 0.0295 |
| Netflix | Sym norm | 0.3307 | 0.3287 | 0.0405 |
| Netflix | User norm | 0.3315 | 0.3401 | 0.0010 |
| Netflix | Item norm | 0.3207 | 0.3200 | 0.0485 |
| Netflix | DAN (ours) | 0.3405 | 0.3388 | 0.0411 |
| MSD | Most pop | 0.0382 | 0.0447 | 0.0000 |
| MSD | W/O norm | 0.2740 | 0.2619 | 0.1234 |
| MSD | Col norm | 0.3030 | 0.2301 | 0.2063 |
| MSD | RW norm | 0.3204 | 0.2745 | 0.1845 |
| MSD | Sym norm | 0.2991 | 0.2294 | 0.2061 |
| MSD | User norm | 0.2549 | 0.2804 | 0.0717 |
| MSD | Item norm | 0.3169 | 0.2630 | 0.1896 |
| MSD | DAN (ours) | 0.3209 | 0.2731 | 0.1873 |
| Gowalla | Most pop | 0.0219 | 0.0334 | 0.0000 |
| Gowalla | W/O norm | 0.1706 | 0.2387 | 0.0371 |
| Gowalla | Col norm | 0.1822 | 0.2197 | 0.0533 |
| Gowalla | RW norm | 0.1693 | 0.2350 | 0.0389 |
| Gowalla | Sym norm | 0.1637 | 0.1691 | 0.0892 |
| Gowalla | User norm | 0.1797 | 0.2317 | 0.0468 |
| Gowalla | Item norm | 0.1853 | 0.2148 | 0.0798 |
| Gowalla | DAN (ours) | 0.1911 | 0.2294 | 0.0741 |
| Yelp2018 | Most pop | 0.0132 | 0.0157 | 0.0000 |
| Yelp2018 | W/O norm | 0.0954 | 0.1285 | 0.0039 |
| Yelp2018 | Col norm | 0.0963 | 0.1256 | 0.0084 |
| Yelp2018 | RW norm | 0.0840 | 0.1128 | 0.0044 |
| Yelp2018 | Sym norm | 0.0922 | 0.1052 | 0.0217 |
| Yelp2018 | User norm | 0.0966 | 0.1208 | 0.0101 |
| Yelp2018 | Item norm | 0.0965 | 0.1225 | 0.0249 |
| Yelp2018 | DAN (ours) | 0.1002 | 0.1212 | 0.0175 |
| Amazon-book | Most pop | 0.0085 | 0.0106 | 0.0000 |
| Amazon-book | W/O norm | 0.1749 | 0.1865 | 0.0635 |
| Amazon-book | Col norm | 0.1771 | 0.1842 | 0.0829 |
| Amazon-book | RW norm | 0.1585 | 0.1668 | 0.0593 |
| Amazon-book | Sym norm | 0.1627 | 0.1392 | 0.0928 |
| Amazon-book | User norm | 0.1721 | 0.1975 | 0.0472 |
| Amazon-book | Item norm | 0.1782 | 0.1711 | 0.0859 |
| Amazon-book | DAN (ours) | 0.1811 | 0.1700 | 0.0886 |

[Figure 9: Case study for various normalization methods on user #91935 in the ML-20M dataset. Head items are bordered in red, and tail items are bordered in blue.]

B.5 Case Study

Figure 9 depicts the interaction history of user #91935 in ML-20M and the top-5 recommendation lists from four normalization methods (i.e., W/O, RW, DAN, and Sym). User #91935 interacted with movies from both action and romantic genres, including one head item and four tail items.
From this case study, we made the following two observations:
• While W/O exclusively recommends five head items, the other three methods recommend tail items appropriately. Even though the user watched three romantic movies out of five, W/O focuses on the most popular action movie "The Dark Knight" (popularity: 14004), recommending five action movies (e.g., "Iron Man" and "Batman Begins"). The W/O recommendations exhibit high popularity and low diversity, whereas the recommendations from the three normalization methods include tail items from various genres. Notably, all three methods effectively capture user preferences by providing the tail item "Step Up 2" (related to "Step Up 1" in the user history) as the top-1 item.
• DAN provides more balanced recommendations by appropriately mitigating popularity bias while maintaining user preferences. RW normalization recommends four head items out of five, indicating that it still does not sufficiently mitigate the popularity bias. In contrast, Sym normalization recommends four tail items out of five, meaning that it excessively alleviates the popularity bias. Unlike these two normalization methods, DAN successfully captures the user preferences and recommends highly relevant items while balancing both head and tail items. | 5 | 1 | The model described in the paper is a linear autoencoder (LAE) based on existing LAE architectures, which typically have a smaller parameter count compared to non-linear models. The datasets used for training (like ML-20M, Netflix, and others) are substantial but manageable for a single GPU. Given the simpler architecture, I estimate that training could take around 5 hours on a single GPU, assuming moderate batch sizes (32-256) and typical evaluation metrics that require computation per epoch. |
The proposed Data-Adaptive Normalization (DAN) does introduce additional computation, but given that it reportedly adds negligible computational cost compared to existing LAEs, I maintain that the overall training time will remain reasonable. Thus, the model is trainable in under 8 hours on a single GPU. | yes | Yes | Graph | Why is Normalization Necessary for Linear Recommenders? | 2025-04-08T00:00:00.000Z | [https://github.com/psm1206/dan] | 1 | https://github.com/psm1206/DAN/tree/main/data | 15 min | https://colab.research.google.com/drive/1euiNcqVAl4SgDK75YJJEP_DGBXQzxd08?usp=sharing | Yes | Everything is working fine. |
Weather (192) | xPatch | [] | xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition | 2024-12-23T00:00:00 | https://arxiv.org/abs/2412.17323v2 | [
"https://github.com/stitsyuk/xpatch"
] | {'MSE': '0.189', 'MAE': '0.227'} | [
"MSE",
"MAE",
"Accuracy"
] | Given the following paper and codebase:
Paper: xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition
Codebase: https://github.com/stitsyuk/xpatch
Improve the xPatch model on the Weather (192) dataset. The result
should improve on the following metrics: {'MSE': '0.189', 'MAE': '0.227'}. You must use only the codebase provided.
xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition

Artyom Stitsyuk1, Jaesik Choi1,2
1 Department of Chemical Engineering — Korea Advanced Institute of Science and Technology (KAIST), South Korea
2 INEEJI, South Korea
{stitsyuk, jaesik.choi}@kaist.ac.kr

Abstract

In recent years, the application of transformer-based models in time-series forecasting has received significant attention. While often demonstrating promising results, the transformer architecture encounters challenges in fully exploiting the temporal relations within time series data due to its attention mechanism. In this work, we design eXponential Patch (xPatch for short), a novel dual-stream architecture that utilizes exponential decomposition. Inspired by the classical exponential smoothing approaches, xPatch introduces the innovative seasonal-trend exponential decomposition module. Additionally, we propose a dual-flow architecture that consists of an MLP-based linear stream and a CNN-based non-linear stream. This model investigates the benefits of employing patching and channel-independence techniques within a non-transformer model. Finally, we develop a robust arctangent loss function and a sigmoid learning rate adjustment scheme, which prevent overfitting and boost forecasting performance. The code is available at the following repository: https://github.com/stitsyuk/xPatch.

1 Introduction

Long-term time series forecasting (LTSF) is one of the fundamental tasks in time series analysis. The task is focused on predicting future values over an extended period, based on historical data. With the advent of deep learning models, they have recently demonstrated superior performance in LTSF compared to traditional approaches such as ARIMA (Box et al. 2015) and LSTM (Bahdanau, Cho, and Bengio 2015). Transformer-based models (Vaswani et al. 2017) have revolutionized the LTSF task, enabling powerful AI systems to achieve state-of-the-art performance.
The transformer architecture is considered highly successful in capturing semantic correlations among elements in long sequences. Recent research efforts have been primarily focused on adapting transformers to the LTSF task and addressing such limitations of the vanilla transformer as quadratic time and memory complexity (Li et al. 2019; Zhou et al. 2021; Wen et al. 2022). The self-attention mechanism employed in transformers is permutation-invariant. Although techniques like positional encoding can partially retain ordering information, preserving temporal information remains a challenge for transformer-based models. This limitation can adversely affect the performance of the LTSF task dealing with a continuous set of points. As a result, the effectiveness of transformers in the LTSF task has been challenged by a simple linear approach utilizing a Multi-Layer Perceptron (MLP) network (Zeng et al. 2023). Surprisingly, a simple linear model named DLinear has surpassed the state-of-the-art forecasting performance of all previous transformer-based models, raising a fundamental question: "Are Transformers effective for long-term time series forecasting?"

Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Due to the non-stationary nature of real-world systems, time series data usually contain complex temporal patterns. To handle this complexity and non-stationarity (Liu et al. 2022), many recent LTSF models have adopted a paradigm of decomposing inputs. They use a seasonal-trend decomposition to capture linear trend features and non-linear seasonal variations. For handling time series trend features, certain transformer-based models, including Autoformer (Wu et al. 2021) and FEDformer (Zhou et al. 2022), incorporate seasonal-trend data decomposition.
By partitioning the signal into two components, each with distinct function behavior, it becomes more feasible to capture semantic features from each component and make separate predictions. Both Autoformer and FEDformer focus on refining the transformer architecture by introducing an auto-correlation mechanism and a frequency-enhanced method while decomposing the signal using a simple average pooling method. This technique requires padding at both ends, essentially repeating the last and first values. Consequently, we argue that this approach introduces a bias towards the initial and final values, potentially altering the behavior of trend values.

We propose a simple yet effective decomposition technique based on a generally applicable time series smoothing method named Exponential Moving Average (EMA) (Gardner Jr 1985). The proposed strategy assigns exponentially decreasing weights over time, facilitating more efficient feature learning from the decomposed data. The resulting exponentially smoothed sequence represents the trend, while the residual difference encapsulates the seasonality.

Currently, the state-of-the-art models for the LTSF task are the transformer-based architectures CARD (Wang et al. 2024b) and PatchTST (Nie et al. 2023). These models rely on channel-independence and segmentation of time series into patches, which are used as input tokens for the transformer. However, we assume that the permutation-invariance of the attention mechanism in transformers may impede the model from attaining the optimal forecasting performance. Therefore, we aim to explore channel-independence and patching approaches within a non-transformer architecture, proposing the xPatch model.

In this study, we introduce the utilization of the exponential seasonal-trend decomposition technique. Furthermore, we propose a robust arctangent loss with weight decay and a novel learning rate adjustment strategy that improves training adaptability. Additionally, we present xPatch, a novel dual-flow network architecture that integrates Convolutional Neural Networks (CNNs), Multi-Layer Perceptrons (MLPs), patching, channel-independence, exponential seasonal-trend decomposition, and dual-stream prediction. We summarize our main contributions as follows:

• We propose a novel method for seasonal-trend decomposition that utilizes an Exponential Moving Average (EMA).
• We introduce the dual-flow network and investigate the patching and channel-independence approaches within the CNN-based backbone.
• We develop a robust arctangent loss and a novel sigmoid learning rate adjustment scheme with a warm-up that results in smoother training.

2 Related Work

Informer (Zhou et al. 2021) is the first well-known transformer-based model designed for the LTSF task. It employs ProbSparse self-attention and a generative-style decoder to address quadratic time and memory complexity. Notably, this work also contributes to the field by curating data and introducing the Electricity Transformer Temperature (ETT) benchmark dataset that is now commonly used for LTSF experiments by most models.

TimesNet (Wu et al. 2023) utilizes the Fourier Transform to decompose time series into multiple components with varying period lengths, enhancing its focus on temporal variation modeling. The official repository provides a forecasting protocol with standardized hyperparameter settings and fairly implemented baselines.

To address the issue of non-stationarity in time series data, several models employ series decomposition to better capture complex temporal patterns. Autoformer (Wu et al. 2021) and FEDformer (Zhou et al.
2022) are two recent transformer-based solutions for the LTSF task, leveraging an auto-correlation mechanism and a frequency-enhanced structure, respectively. Both models incorporate seasonal-trend decomposition within each neural block to enhance the predictability of time-series data. Specifically, they apply a moving average kernel to the input sequence with padding at both ends, extracting the trend component. The difference between the original time series and the extracted trend component is identified as the seasonal component.

DLinear (Zeng et al. 2023) is a recent one-layer linear model that uses seasonal-trend decomposition as a pre-processing step. Initially, the model decomposes the raw data into trend and seasonal components using a moving average technique. Two linear layers are then applied independently to each of these components. The resulting features are subsequently aggregated to generate the final prediction.

MICN (Wang et al. 2023) is a recent CNN-based solution that employs multi-scale hybrid seasonal-trend decomposition. After decomposing the input series into seasonal and trend components, the model integrates both global and local contexts to enhance forecasting accuracy.

TimeMixer (Wang et al. 2024a) is an MLP-based approach that employs a decomposable multiscale-mixing method. The model uses the same series decomposition block from Autoformer (Wu et al. 2021) to break down multiscale time series into multiple seasonal and trend components. By leveraging the multiscale past information obtained after seasonal and trend mixing, the model predicts future values.

ETSformer (Woo et al. 2022) and CARD (Wang et al. 2024b) are two transformer-based architectures that incorporate the exponential smoothing approach.
ETSformer introduces Exponential Smoothing Attention (ESA), while CARD applies exponential smoothing to the query and key tokens before the token blending module within one prediction head of the attention mechanism. In contrast to these models, the proposed xPatch architecture employs Exponential Moving Average (EMA) decomposition to separate the time series into trend and seasonal components, which are then processed separately.

Crossformer (Zhang and Yan 2022) and PatchTST (Nie et al. 2023) are transformer-based models that introduce a segmentation technique to LTSF. PatchTST divides time series data into subseries-level patches that serve as input tokens for the transformer. This approach is motivated by the vision transformer (Dosovitskiy et al. 2021) and designed for LTSF with channel-independence. Currently, PatchTST is recognized as the state-of-the-art solution for multivariate long-term forecasting. In our proposed xPatch model, we also incorporate the patching and channel-independence approaches. Given that xPatch is a CNN-based approach, we investigate whether the superior performance of PatchTST can be attributed to its patching and channel-independence modules rather than its transformer architecture. To explore this, we examine whether a CNN-based model can achieve improved results by leveraging these techniques.

MobileNet (Howard et al. 2017) and ConvMixer (Trockman and Kolter 2022) are notable models designed for Computer Vision (CV) tasks that demonstrate the advantages of depthwise separable convolutions. In the proposed xPatch approach, we incorporate depthwise separable convolution as the non-linear stream of the dual-flow network.

3 Proposed Method

In multivariate time series forecasting, given the observation of the historical $L$ values $x = (x_1, x_2, \dots, x_L)$, the task is to predict the future $T$ timesteps $\hat{x} = (x_{L+1}, x_{L+2}, \dots, x_{L+T})$. Each $x_t$ value at timestep $t$ is multivariate, representing a vector of $M$ variables.
Therefore, the multivariate lookback series is denoted as $x \in \mathbb{R}^{M \times L}$ and the multivariate prediction is represented by $\hat{x} \in \mathbb{R}^{M \times T}$.

3.1 Seasonal-Trend Decomposition

Seasonal-trend decomposition facilitates the learning of complex temporal patterns by separating the time series signal into trend and seasonal components. Trend features generally represent the long-term direction of the data, which can be linear or smoothly varying. In contrast, seasonal components capture repeating patterns or cycles that occur at regular intervals and are often non-linear due to the complexities and variations in periodic behavior. The model first learns the features of these components individually and then combines them to generate the final forecast.

Simple Moving Average (SMA) is the decomposition approach utilized in the Autoformer (Wu et al. 2021), FEDformer (Zhou et al. 2022), DLinear (Zeng et al. 2023), MICN (Wang et al. 2023), and TimeMixer (Wang et al. 2024a) models. SMA is defined as the unweighted mean of the previous $k$ data points. The moving average point $s_t$ over $k$ entries, with $t$ being the moving step, $n$ the dataset length, and $X = x_1, x_2, \dots, x_n$ the data points, is calculated as:

$$s_t = \frac{x_t + x_{t+1} + \dots + x_{t+k-1}}{k} = \frac{1}{k} \sum_{i=t}^{t+k-1} x_i$$
$$X_T = \mathrm{AvgPool}(\mathrm{Padding}(X)), \qquad X_S = X - X_T \tag{1}$$

where $\mathrm{AvgPool}(\cdot)$ denotes the moving average with the padding operation, while $X_T$ and $X_S$ correspond to the trend and seasonality components. Padding is employed to keep the length of the time series unchanged after performing average pooling.

[Figure 1: Example of SMA decomposition with kernel k = 25 on a 96-length sample from the ETTh1 dataset.]

Firstly, we argue that the average pooling operation results in the loss of significant trend features (see Appendix B).
Additionally, alignment requires padding on both ends of the series, which can distort the sequence at the head and tail. Secondly, the primary goal of decomposition is to enhance the interpretability of both decomposed signals. This entails improving the clarity of the trend and seasonality components while enriching them with more distinct features for learning. However, SMA produces an overly simplistic trend signal with limited diverse features and a complex seasonality pattern. As a result, we investigate an alternative decomposition method to address this issue.

Exponential Moving Average (EMA) (Gardner Jr 1985) is an exponential smoothing method that assigns greater weight to more recent data points while smoothing out older data. This exponential weighting scheme allows EMA to respond more promptly to changes in the underlying trends of the time series, without the need for padding with repeated values. The EMA point $s_t$ of data $x_t$ beginning at time $t = 0$ is represented by:

$$s_0 = x_0, \qquad s_t = \alpha x_t + (1 - \alpha) s_{t-1}, \quad t > 0$$
$$X_T = \mathrm{EMA}(X), \qquad X_S = X - X_T \tag{2}$$

where $\alpha$ is the smoothing factor, $0 < \alpha < 1$, $\mathrm{EMA}(\cdot)$ denotes the exponential moving average, while $X_T$ and $X_S$ correspond to the trend and seasonality components.

[Figure 2: Example of EMA decomposition with α ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1} on a 96-length sample from the ETTh1 dataset.]

The exponential method offers greater control over the behavior of both the trend and seasonality components. Given that data can exhibit diverse patterns, including stationary and non-stationary characteristics with varying periods and behaviors, the adaptability of exponential decomposition provides advantages in feature extraction (see Appendix B).
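The two decompositions in Eqs. (1) and (2) can be sketched in a few lines of NumPy (a standalone illustration of the math; the repository implements these as PyTorch modules, and the padding split here is one plausible convention):

```python
import numpy as np

def sma_decompose(x, k=25):
    """Seasonal-trend split via simple moving average (Eq. 1).
    The series is padded by repeating the first/last values so that the
    trend keeps the original length L."""
    pad_front = (k - 1) // 2
    pad_back = k - 1 - pad_front
    padded = np.concatenate([np.repeat(x[0], pad_front), x, np.repeat(x[-1], pad_back)])
    trend = np.convolve(padded, np.ones(k) / k, mode="valid")   # length L again
    return trend, x - trend

def ema_decompose(x, alpha=0.3):
    """Seasonal-trend split via exponential moving average (Eq. 2);
    no padding is needed, and recent points get exponentially larger weight."""
    trend = np.empty_like(x, dtype=float)
    trend[0] = x[0]                                   # s_0 = x_0
    for t in range(1, len(x)):
        trend[t] = alpha * x[t] + (1 - alpha) * trend[t - 1]
    return trend, x - trend
```

With `alpha=1` the EMA trend reproduces the input exactly (zero seasonality), matching the α = 1 panel in Figure 2; smaller α yields a smoother trend and a richer seasonal residual.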
Compared to SMA, EMA presents a more flexible approach to decomposition, as it adjusts its weighting scheme based on the exponential decay of data points. This adaptability allows EMA to capture changing trends more effectively, making it particularly suitable for time series with dynamic and evolving patterns (see Appendix C).

3.2 Model Architecture

Channel-Independence. The multivariate time series $x = (x_1, x_2, \ldots, x_L)$ is divided into $M$ univariate sequences $x^{(i)} = (x^{(i)}_1, x^{(i)}_2, \ldots, x^{(i)}_L)$, where $x^{(i)} \in \mathbb{R}^L$ and $L$ is the lookback window of recent historical data points. Each of these univariate series is then individually fed into the backbone model, which consequently generates a prediction sequence $\hat{x}^{(i)} = (\hat{x}^{(i)}_{L+1}, \hat{x}^{(i)}_{L+2}, \ldots, \hat{x}^{(i)}_{L+T})$, where $\hat{x}^{(i)} \in \mathbb{R}^T$ and $T$ is the number of future observations. This partitioning approach has proven to work well in both linear models and transformers (Zeng et al. 2023; Nie et al. 2023; Han, Ye, and Zhan 2023).

[Figure 3: xPatch Model Overview. Every univariate input series $x^{(i)} \in \mathbb{R}^{1 \times L}$ is passed through EMA decomposition; the seasonal component is processed by the non-linear block (patching, depthwise CNN, pointwise CNN, GELU, batch norm, residual stream, MLP flatten head), the trend component by the linear block (fully-connected layers, average pooling, layer norm); the concatenated features produce the univariate output series $\hat{x}^{(i)} \in \mathbb{R}^{1 \times T}$.]

Exponential Decomposition. Using the EMA method, we decompose each univariate series into trend and seasonality components, which are then processed separately by the dual-flow architecture. After processing, the learned trend and seasonal features are aggregated and passed to the final output layer to form the final prediction, as illustrated in Figure 3.
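The channel-independence step above can be sketched as a pair of reshapes; the (batch, length, variates) layout is an assumption matching common LTSF implementations, and the `Linear` backbone is only a stand-in for the dual-flow network.

```python
import torch

# Channel-independence: fold the M variates of a batch into the batch axis so
# the backbone only ever sees univariate series, then unfold the predictions.
B, L, M, T = 4, 96, 7, 96
x = torch.randn(B, L, M)

x_univ = x.permute(0, 2, 1).reshape(B * M, L)   # (B*M, L): one series per row

backbone = torch.nn.Linear(L, T)                 # stand-in for the dual-flow net
y_univ = backbone(x_univ)                        # (B*M, T)

y = y_univ.reshape(B, M, T).permute(0, 2, 1)     # back to (B, T, M)
```

Because the fold and unfold are exact inverses, no cross-variate information is mixed; every variate is forecast by the same shared backbone.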
Details on optimization and ablation studies of EMA are available in Appendices D and E.

Dual Flow Net. As the main backbone, we employ two distinct flows to analyze trend and seasonality: a linear and a non-linear stream. The trend component is processed through the linear MLP-based stream, while the seasonal component is handled by the non-linear CNN-based block.

Seasonality represents periodic fluctuations around a constant level, meaning that the statistical properties of these fluctuations, such as mean and variance, remain stable over time; the seasonal component is therefore stationary. In contrast, the trend reflects long-term progression with either increasing or decreasing behavior and a changing mean, which makes the trend component non-stationary.

To summarize, in most cases the seasonal component is non-linear and stationary, while the trend component is linear and non-stationary. However, some datasets might exhibit unusual behavior, such as a stationary trend. Therefore, the dual-stream architecture is designed to enhance the model's adaptability to both stationary and non-stationary data. For an exploration of the dual-flow architecture, see Appendix F.

Linear Stream. The linear stream is an MLP-based network that includes average pooling and layer normalization, intentionally omitting activation functions to emphasize linear features. The decomposed data $x^{(i)}$ is processed through two linear blocks, each consisting of a fully connected layer followed by average pooling with a kernel $k = 2$ for feature smoothing and layer normalization for training stability. Each linear layer and average pooling operation contribute to dimensionality reduction, encouraging the network to compress feature representations to fit the available space effectively.
This reduction in the number of features, combined with the absence of activation functions and a bottleneck architecture, aims to retain only the most significant linear features of the smoothed trend.

$$x^{(i)} = \mathrm{LayerNorm}(\mathrm{AvgPool}(\mathrm{Linear}(x^{(i)}), k = 2)) \qquad (3)$$

The final expansion layer takes the bottleneck representation and upscales it to the prediction length.

$$\hat{x}^{(i)}_{lin} = \mathrm{Linear}(x^{(i)}) \qquad (4)$$

Patching. Patching is a technique inspired by the vision transformer (Dosovitskiy et al. 2021) and was first introduced in the context of LTSF by PatchTST (Nie et al. 2023). This method unfolds each univariate time series using a sliding window. We incorporate patching into the non-linear block to emphasize repetitive seasonal features, allowing the model to focus on these repetitive patterns and capture their inter-pattern dependencies more effectively.

The patch length is denoted as $P$, and the non-overlapping region between two consecutive patches is referred to as the stride $S$. We apply patching in the non-linear stream to each normalized univariate decomposed sequence $x^{(i)} \in \mathbb{R}^L$, which generates a sequence of $N$ 2D patches $x^{(i)}_p \in \mathbb{R}^{N \times P}$. The number of patches is calculated as $N = \lfloor \frac{L - P}{S} \rfloor + 2$. In our implementation, for a fair comparison with PatchTST and CARD, we adopt their setup for patch embedding, setting $P = 16$ and $S = 8$.

Non-linear Stream. The non-linear stream is a CNN-based network that introduces non-linearity through activation functions. By applying convolutions on top of patching, the CNN-based stream captures spatio-temporal patterns and inter-patch correlations, focusing on the non-linear features of the seasonal signal.

First, the patched data $x^{(i)}_p \in \mathbb{R}^{N \times P}$ is embedded to increase the number of features, with an activation function $\sigma$ and batch normalization (Ioffe and Szegedy 2015).
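The patching step can be sketched with `Tensor.unfold`; padding the series end by repeating the last value `stride` times (an assumption matching PatchTST's replication padding) is what yields the extra patch in $N = \lfloor (L - P)/S \rfloor + 2$.

```python
import torch

def make_patches(x: torch.Tensor, patch_len: int = 16, stride: int = 8):
    """Unfold a univariate series (batch, L) into patches (batch, N, P).

    The last value is repeated `stride` times before unfolding, giving
    N = floor((L - P) / S) + 2 patches.
    """
    pad = x[:, -1:].repeat(1, stride)
    x = torch.cat([x, pad], dim=1)                  # (batch, L + S)
    return x.unfold(dimension=1, size=patch_len, step=stride)
```

For the paper's setting $L = 96$, $P = 16$, $S = 8$, this produces $\lfloor 80/8 \rfloor + 2 = 12$ patches of length 16.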
Since the seasonal variations have many zero values, we employ GELU (Hendrycks and Gimpel 2016) as the activation function for its smooth transition around zero and its non-linearity. The resulting embedded tensor is denoted as $x_p^{N \times P_2}$.

$$x_p^{N \times P_2} = \mathrm{BatchNorm}(\sigma(\mathrm{Embed}(x^{(i)}_p))) \qquad (5)$$

Following embedding, the data is processed through depthwise separable convolution. This method splits the computation into two steps: depthwise convolution applies a single convolutional filter per input channel, and pointwise convolution creates a linear combination of the outputs of the depthwise convolution, with an additional residual stream between them.

Given that the xPatch architecture leverages channel-independence, patching is employed to increase the number of dimensions, enabling patches to function as channels in the data $x_p^{N \times P_2}$. Consequently, rather than relying on inter-channel feature representations, we utilize channel-independent inter-patch representations. This approach aims to capture comprehensive semantic information that may not be available at the point level and allows the model to focus on non-linear features.

For depthwise convolution, we employ grouped convolution with the number of groups $g$ equal to the number of patches $N$, a large kernel size $k$ equal to the patch length $P$, and a convolution stride $s$ equal to the patch length $P$.

$$x_p^{N \times P} = \mathrm{Conv}_{N \to N}(x_p^{N \times P_2}, k = P, s = P, g = N) \qquad (6)$$

$$x_p^{N \times P} = \mathrm{BatchNorm}(\sigma(x_p^{N \times P})) \qquad (7)$$

Depthwise convolution applies a single convolutional filter per input channel, generating $N$ feature maps, each corresponding to a specific patch. This approach enables the model to capture temporal features with a group convolution that is consistent for periodic patches. Subsequently, the data is updated with a linear residual connection spanning the depthwise convolution. Although depthwise convolution captures temporal relations between periodic patterns, it may not effectively capture inter-patch feature correlations.
Therefore, the sequence is further processed through the pointwise convolution layer with the number of groups $g = 1$, a small kernel size $k = 1$, and a convolution stride $s = 1$.

$$x_p^{N \times P} = \mathrm{DepthwiseConv}(x_p^{N \times P_2}) + x_p^{N \times P_2} \qquad (8)$$

$$x_p^{N \times P} = \mathrm{Conv}_{N \to N}(x_p^{N \times P}, k = 1, s = 1, g = 1) \qquad (9)$$

$$x_p^{N \times P} = \mathrm{BatchNorm}(\sigma(x_p^{N \times P})) \qquad (10)$$

Pointwise convolution creates a linear combination of the outputs and aggregates features across different patches without skipping elements.

These features are then processed through the MLP flatten layer. This layer is designed in a similar style to PatchTST: the first linear layer doubles the hidden dimension, while the second linear layer projects it back, with a GELU activation function between them.

$$\hat{x}^{(i)}_{nonlin} = \mathrm{Linear}(\sigma(\mathrm{Linear}(\mathrm{Flatten}(x_p^{N \times P})))) \qquad (11)$$

Finally, the linear features (4) and non-linear features (11) are concatenated and fed into the final linear layer, which merges linear and non-linear features into the output prediction.

$$\hat{x}^{(i)} = \mathrm{Linear}(\mathrm{concat}(\hat{x}^{(i)}_{lin}, \hat{x}^{(i)}_{nonlin})) \qquad (12)$$

We concatenate the linear and non-linear features from the two flows, representing the learned representations of the MLP and CNN streams. This mechanism enables the model to dynamically weigh the significance of both linear and non-linear features in the final prediction, providing adaptability to diverse patterns in time series data.

3.3 Loss Function

Mean Squared Error (MSE) loss is the training loss scheme commonly used by LTSF models. The MSE loss $\mathcal{L}_{MSE}$ between the predicted univariate sequence $\hat{x}^{(i)}_{1:T}$ and the ground-truth observations $x^{(i)}_{1:T}$, where $T$ is the future prediction length, is denoted as:

$$\mathcal{L}_{MSE} = \frac{1}{T}\sum_{i=1}^{T} \|\hat{x}^{(i)}_{1:T} - x^{(i)}_{1:T}\|_2^2 \qquad (13)$$

The recent transformer-based model CARD (Wang et al. 2024b) introduced a novel signal decay-based loss function that scales down the far-future Mean Absolute Error (MAE) loss to address its high variance. MAE was chosen since it is more resilient to outliers than MSE.
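The dual-flow backbone of Section 3.2 can be sketched as below, assuming PyTorch. The hidden sizes, the embedding width $P_2 = P \cdot P$, and the `Linear` layer used to shape-match the residual of Equation (8) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from torch import nn

class LinearStream(nn.Module):
    """Trend stream, Equations (3)-(4): two blocks of fully connected layer ->
    average pooling (k = 2) -> layer norm (no activations), then expansion."""
    def __init__(self, L=96, T=96):
        super().__init__()
        self.fc1, self.norm1 = nn.Linear(L, L), nn.LayerNorm(L // 2)
        self.fc2, self.norm2 = nn.Linear(L // 2, L // 2), nn.LayerNorm(L // 4)
        self.out, self.pool = nn.Linear(L // 4, T), nn.AvgPool1d(2)

    def forward(self, x):                          # x: (batch, L) trend
        x = self.norm1(self.pool(self.fc1(x).unsqueeze(1)).squeeze(1))
        x = self.norm2(self.pool(self.fc2(x).unsqueeze(1)).squeeze(1))
        return self.out(x)                         # (batch, T)

class NonLinearStream(nn.Module):
    """Seasonal stream, Equations (5)-(11): patch embedding, depthwise conv
    (groups = N, kernel = stride = P), residual, pointwise conv, MLP head."""
    def __init__(self, N=12, P=16, T=96):
        super().__init__()
        P2 = P * P
        self.embed, self.bn0 = nn.Linear(P, P2), nn.BatchNorm1d(N)
        self.dw = nn.Conv1d(N, N, kernel_size=P, stride=P, groups=N)
        self.res = nn.Linear(P2, P)                # shape-matching residual (assumption)
        self.bn1 = nn.BatchNorm1d(N)
        self.pw, self.bn2 = nn.Conv1d(N, N, kernel_size=1), nn.BatchNorm1d(N)
        self.fc1, self.fc2 = nn.Linear(N * P, 2 * N * P), nn.Linear(2 * N * P, T)
        self.act = nn.GELU()

    def forward(self, p):                          # p: (batch, N, P) patches
        z = self.bn0(self.act(self.embed(p)))      # Eq. (5): (batch, N, P2)
        z = self.bn1(self.act(self.dw(z) + self.res(z)))   # Eqs. (6)-(8)
        z = self.bn2(self.act(self.pw(z)))         # Eqs. (9)-(10)
        return self.fc2(self.act(self.fc1(z.flatten(1))))  # Eq. (11)

class DualFlow(nn.Module):
    """Merges the two streams with a final linear head, Equation (12)."""
    def __init__(self, L=96, T=96, N=12, P=16):
        super().__init__()
        self.lin, self.nonlin = LinearStream(L, T), NonLinearStream(N, P, T)
        self.head = nn.Linear(2 * T, T)

    def forward(self, trend, patches):
        feats = torch.cat([self.lin(trend), self.nonlin(patches)], dim=1)
        return self.head(feats)                    # (batch, T)
```

The final head over the concatenated features is what lets the model weigh linear against non-linear evidence per prediction, as described above.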
$$\mathcal{L}_{CARD} = \frac{1}{T}\sum_{i=1}^{T} i^{-\frac{1}{2}} \|\hat{x}^{(i)}_{1:T} - x^{(i)}_{1:T}\| \qquad (14)$$

where $i$ corresponds to the prediction point in the future. This training scheme was shown by CARD to be efficient and to increase the performance of existing models.

To identify a more effective scaling loss coefficient, we extend Equation (14) to a universally applicable scalable MAE loss function:

$$\mathcal{L} = \frac{1}{T}\sum_{i=1}^{T} \rho(i) \|\hat{x}^{(i)}_{1:T} - x^{(i)}_{1:T}\| \qquad (15)$$

where $\rho(i)$ represents the scaling coefficient. Thus, the $\mathcal{L}_{CARD}$ loss defined in Equation (14) emerges as a specific instance of the scalable loss function delineated in Equation (15), with $\rho(i) = i^{-\frac{1}{2}}$.

We find that the scaling coefficient $\rho_{CARD}(i) = i^{-\frac{1}{2}}$ decays too rapidly for our task. Therefore, we propose a novel arctangent loss $\mathcal{L}_{arctan}$, which features a slower decay rate compared to the exponential functions analyzed in CARD (Wang et al. 2024b):

$$\mathcal{L}_{arctan} = \frac{1}{T}\sum_{i=1}^{T} \rho_{arctan}(i) \|\hat{x}^{(i)}_{1:T} - x^{(i)}_{1:T}\| \qquad (16)$$

$$\rho_{arctan}(i) = -\arctan(i) + \frac{\pi}{4} + 1 \qquad (17)$$

Mathematical proofs, ablation studies on state-of-the-art models employing the arctangent loss, and the arctangent function's scaling analysis can be found in Appendix G.

Table 1: Averaged long-term forecasting results with unified lookback window L = 36 for the ILI dataset and L = 96 for all other datasets. All results are averaged over 4 prediction lengths: T = {24, 36, 48, 60} for ILI and T = {96, 192, 336, 720} for all others. The best model is boldface and the second best is underlined. See Table 13 in Appendix K for the full results. Each cell reports MSE, MAE.

Models: xPatch (ours) | CARD (2024) | TimeMixer (2024) | iTransformer (2024) | RLinear (2023) | PatchTST (2023) | MICN (2023) | DLinear (2023) | TimesNet (2023) | ETSformer (2022)
ETTh1: 0.428, 0.419 | 0.442, 0.429 | 0.447, 0.440 | 0.454, 0.448 | 0.438, 0.427 | 0.450, 0.441 | 0.559, 0.535 | 0.456, 0.452 | 0.458, 0.450 | 0.542, 0.510
ETTh2: 0.319, 0.361 | 0.368, 0.390 | 0.365, 0.395 | 0.383, 0.407 | 0.362, 0.394 | 0.365, 0.394 | 0.588, 0.525 | 0.559, 0.515 | 0.414, 0.427 | 0.439, 0.452
ETTm1: 0.377, 0.384 | 0.382, 0.383 | 0.381, 0.396 | 0.407, 0.410 | 0.409, 0.401 | 0.383, 0.394 | 0.392, 0.414 | 0.403, 0.407 | 0.400, 0.406 | 0.429, 0.425
ETTm2: 0.267, 0.313 | 0.272, 0.317 | 0.275, 0.323 | 0.288, 0.332 | 0.286, 0.328 | 0.284, 0.327 | 0.328, 0.382 | 0.350, 0.401 | 0.291, 0.333 | 0.293, 0.342
Weather: 0.232, 0.261 | 0.239, 0.265 | 0.240, 0.272 | 0.258, 0.278 | 0.269, 0.289 | 0.257, 0.280 | 0.243, 0.299 | 0.265, 0.317 | 0.259, 0.287 | 0.271, 0.334
Traffic: 0.499, 0.279 | 0.453, 0.282 | 0.485, 0.298 | 0.428, 0.282 | 0.623, 0.372 | 0.467, 0.292 | 0.542, 0.316 | 0.625, 0.383 | 0.620, 0.336 | 0.621, 0.396
Electricity: 0.179, 0.264 | 0.168, 0.258 | 0.182, 0.273 | 0.178, 0.270 | 0.214, 0.291 | 0.190, 0.275 | 0.187, 0.295 | 0.212, 0.300 | 0.193, 0.295 | 0.208, 0.323
Exchange: 0.375, 0.408 | 0.360, 0.402 | 0.408, 0.422 | 0.360, 0.403 | 0.380, 0.410 | 0.364, 0.400 | 0.315, 0.404 | 0.354, 0.414 | 0.416, 0.443 | 0.410, 0.427
Solar: 0.239, 0.236 | 0.237, 0.239 | 0.216, 0.280 | 0.233, 0.262 | 0.369, 0.357 | 0.254, 0.289 | 0.283, 0.358 | 0.327, 0.398 | 0.301, 0.319 | 0.603, 0.615
ILI: 1.442, 0.725 | 1.916, 0.842 | 1.708, 0.820 | 2.918, 1.154 | 2.452, 0.978 | 1.626, 0.804 | 2.664, 1.086 | 2.616, 1.090 | 2.139, 0.931 | 2.497, 1.004

3.4 Learning Rate Adjustment Scheme

Most recent LTSF models (Zhou et al. 2021; Wu et al. 2021; Zhou et al. 2022; Woo et al. 2022; Wu et al. 2023; Zeng et al. 2023; Li et al. 2023; Liu et al. 2024) adopt the standard learning rate adjustment technique. The learning rate $\alpha_t$ at epoch $t$ with initial learning rate $\alpha_0$ is calculated as:

$$\alpha_t = \alpha_{t-1} \cdot 0.5^{t-1}, \quad t \ge 1 \qquad (18)$$

This strategy results in a decreasing learning rate with each successive epoch. Such a rapidly decreasing scheme was effective since the models were trained for a small number of epochs, usually limited to 10.

PatchTST (Nie et al. 2023) introduced a long training approach with an upper limit of 100 epochs and a new learning rate adjustment schedule:

$$\alpha_t = \alpha_0, \quad t < 3 \qquad (19)$$

$$\alpha_t = \alpha_{t-1} \cdot 0.9^{t-3}, \quad t \ge 3 \qquad (20)$$

Consequently, CARD (Wang et al. 2024b) developed a new linear warm-up of the model with a subsequent cosine learning rate decay.
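The two training refinements, the arctangent weighting of Equations (16)-(17) and the sigmoid learning-rate schedule of Equation (23), can be sketched as follows; the schedule constants `k`, `s`, `w` are illustrative, not the tuned values of Appendix H.

```python
import math
import torch

def arctan_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Scaled MAE loss of Equations (16)-(17): step i is weighted by
    rho(i) = -arctan(i) + pi/4 + 1, so rho(1) = 1 and the weight decays
    slowly toward 1 + pi/4 - pi/2 (about 0.215) for far-future steps.
    pred and target have shape (batch, T)."""
    steps = torch.arange(1, pred.shape[1] + 1, dtype=pred.dtype)
    rho = -torch.atan(steps) + math.pi / 4 + 1
    return (rho * (pred - target).abs()).mean()

def sigmoid_lr(t: float, lr0: float = 1e-3, k: float = 1.0,
               s: float = 2.0, w: float = 10.0) -> float:
    """Sigmoid schedule of Equation (23): a logistic warm-up toward lr0 minus
    a slower logistic decay, rising to a peak near epoch w and then falling
    smoothly."""
    warmup = lr0 / (1.0 + math.exp(-k * (t - w)))
    decay = lr0 / (1.0 + math.exp(-(k / s) * (t - s * w)))
    return warmup - decay
```

Note that, unlike CARD's $i^{-1/2}$ weighting, the arctangent coefficient stays bounded away from zero, so far-future errors are down-weighted but never effectively ignored.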
The learning rate $\alpha_t$ at epoch $t$ with initial learning rate $\alpha_0$, number of warm-up epochs $w$, and an upper limit of 100 epochs is calculated as:

$$\alpha_t = \alpha_{t-1} \cdot \frac{t}{w}, \quad t < w \qquad (21)$$

$$\alpha_t = 0.5\,\alpha_0\left(1 + \cos\left(\pi \cdot \frac{t - w}{100 - w}\right)\right), \quad t \ge w \qquad (22)$$

We introduce a novel sigmoid learning rate adjustment scheme. The learning rate $\alpha_t$ at epoch $t$, with an initial learning rate $\alpha_0$, logistic growth rate $k$, decreasing-curve smoothing rate $s$, and warm-up coefficient $w$, is calculated as follows:

$$\alpha_t = \frac{\alpha_0}{1 + e^{-k(t - w)}} - \frac{\alpha_0}{1 + e^{-\frac{k}{s}(t - sw)}} \qquad (23)$$

Mathematical proofs, ablation studies on state-of-the-art models using the sigmoid learning rate adjustment approach, and hyperparameter selection are available in Appendix H.

4 Experiments

Datasets. We conduct extensive experiments on nine real-world multivariate time series datasets, including Electricity Transformer Temperature (ETTh1, ETTh2, ETTm1, ETTm2) (Zhou et al. 2021), Weather, Traffic, Electricity, Exchange-rate, ILI (Wu et al. 2021), and Solar-energy (Lai et al. 2018).

Evaluation Metrics. Following previous works, we use Mean Squared Error (MSE) and Mean Absolute Error (MAE) to assess performance.

Implementation Details. All experiments are implemented in PyTorch (Paszke et al. 2019) and conducted on a single Quadro RTX 6000 GPU.

Baselines. We choose the latest state-of-the-art LTSF models, including Autoformer (Wu et al. 2021), FEDformer (Zhou et al. 2022), ETSformer (Woo et al. 2022), TimesNet (Wu et al. 2023), DLinear (Zeng et al. 2023), RLinear (Li et al. 2023), MICN (Wang et al. 2023), PatchTST (Nie et al. 2023), iTransformer (Liu et al. 2024), TimeMixer (Wang et al. 2024a), and CARD (Wang et al. 2024b), as baselines for our experiments.

Unified Experimental Settings. To ensure a fair comparison, we conduct two types of experiments. The first experiment uses unified settings based on the forecasting protocol proposed by TimesNet (Wu et al.
2023): a lookback length L = 36 with prediction lengths T = {24, 36, 48, 60} for the ILI dataset, and L = 96 with T = {96, 192, 336, 720} for all other datasets. The averaged results are reported in Table 1.

Table 2: Averaged long-term forecasting results under hyperparameter search. All results are averaged over 4 prediction lengths: T = {24, 36, 48, 60} for the ILI dataset, and T = {96, 192, 336, 720} for all other datasets, respectively. Each cell reports MSE, MAE.

Models: xPatch (ours) | CARD (2024) | TimeMixer (2024) | iTransformer (2024) | RLinear (2023) | PatchTST (2023) | MICN (2023) | DLinear (2023) | TimesNet (2023) | ETSformer (2022)
ETTh1: 0.391, 0.412 | 0.401, 0.422 | 0.411, 0.423 | 0.501, 0.492 | 0.413, 0.427 | 0.413, 0.434 | 0.440, 0.462 | 0.423, 0.437 | 0.458, 0.450 | 0.542, 0.510
ETTh2: 0.299, 0.351 | 0.321, 0.373 | 0.316, 0.384 | 0.385, 0.417 | 0.328, 0.382 | 0.331, 0.381 | 0.403, 0.437 | 0.431, 0.447 | 0.414, 0.427 | 0.439, 0.452
ETTm1: 0.341, 0.368 | 0.350, 0.368 | 0.348, 0.376 | 0.373, 0.404 | 0.359, 0.378 | 0.353, 0.382 | 0.387, 0.411 | 0.357, 0.379 | 0.400, 0.406 | 0.429, 0.425
ETTm2: 0.242, 0.300 | 0.255, 0.310 | 0.256, 0.316 | 0.274, 0.335 | 0.253, 0.313 | 0.256, 0.317 | 0.284, 0.340 | 0.267, 0.332 | 0.291, 0.333 | 0.293, 0.342
Weather: 0.211, 0.247 | 0.220, 0.248 | 0.222, 0.262 | 0.271, 0.297 | 0.242, 0.278 | 0.226, 0.264 | 0.243, 0.299 | 0.246, 0.300 | 0.259, 0.287 | 0.271, 0.334
Traffic: 0.392, 0.248 | 0.381, 0.251 | 0.388, 0.263 | 0.378, 0.270 | 0.417, 0.283 | 0.391, 0.264 | 0.542, 0.316 | 0.434, 0.295 | 0.620, 0.336 | 0.621, 0.396
Electricity: 0.153, 0.245 | 0.157, 0.251 | 0.156, 0.247 | 0.161, 0.257 | 0.164, 0.257 | 0.159, 0.253 | 0.187, 0.295 | 0.166, 0.264 | 0.193, 0.295 | 0.208, 0.323
Exchange: 0.366, 0.404 | 0.360, 0.402 | 0.471, 0.452 | 0.458, 0.469 | 0.423, 0.427 | 0.405, 0.426 | 0.315, 0.404 | 0.297, 0.378 | 0.416, 0.443 | 0.410, 0.427
Solar: 0.194, 0.214 | 0.198, 0.225 | 0.192, 0.244 | 0.197, 0.262 | 0.235, 0.266 | 0.256, 0.298 | 0.213, 0.266 | 0.329, 0.400 | 0.244, 0.334 | 0.603, 0.615
ILI: 1.281, 0.688 | 1.916, 0.842 | 1.971, 0.924 | 2.947, 1.193 | 1.803, 0.874 | 1.480, 0.807 | 2.567, 1.056 | 2.169, 1.041 | 2.139, 0.931 | 2.497, 1.004
The best model is boldface and the second best is underlined. See Table 14 in Appendix K for the full results.

To handle data heterogeneity and distribution shift, we apply reversible instance normalization (Kim et al. 2021). In Appendix J, we examine the impact of instance normalization on the forecasting results of xPatch and other state-of-the-art models, comparing their performance with and without the RevIN module.

Hyperparameter Search. In the second experiment, we aim to determine the upper bounds of the compared models and conduct a hyperparameter search. We evaluate all models to see if they benefit from longer historical data, in order to identify the optimal lookback length for each, as detailed in Appendix I. For the models that benefit from a longer input length, namely xPatch, CARD, TimeMixer, iTransformer, RLinear, PatchTST, and DLinear, we perform a hyperparameter search similar to TimeMixer (Wang et al. 2024a). The averaged results are reported in Table 2.

All implementations are derived from the models' official repository code, maintaining the same configurations. We strictly adhere to the settings specified in the official implementations, including the number of epochs (100 for CARD and PatchTST, 15 for RLinear) and the learning rate adjustment strategy.

Results. In the unified experimental settings, xPatch achieves the best averaged performance on 60% of the datasets in MSE and on 70% of the datasets in MAE. Compared to CARD, xPatch surpasses it by 2.46% in MSE and 2.34% in MAE; compared to TimeMixer, by 3.34% in MSE and 6.34% in MAE; compared to PatchTST, by 4.76% in MSE and 6.20% in MAE.

In the hyperparameter search settings, xPatch achieves the best averaged performance on 70% of the datasets in MSE and on 90% of the datasets in MAE. Compared to CARD, xPatch surpasses it by 5.29% in MSE and 3.81% in MAE.
Compared to TimeMixer, xPatch surpasses it by 7.45% in MSE and 7.85% in MAE; compared to PatchTST, by 7.87% in MSE and 8.59% in MAE.

Computational Cost. While the proposed dual-flow architecture incurs higher computational costs than single-stream CNN and MLP models, convolution and linear operations are not as computationally expensive as transformer-based solutions to begin with. The overall increase in computational cost remains relatively small, as shown in Table 3. Moreover, the performance gains of the dual-stream architecture outweigh these additional costs.

Table 3: The average per-step training and inference time, maintaining the same settings for all benchmarks.

Method | Training time | Inference time
MLP-stream | 0.948 msec | 0.540 msec
CNN-stream | 1.811 msec | 0.963 msec
xPatch | 3.099 msec | 1.303 msec
CARD | 14.877 msec | 7.162 msec
TimeMixer | 13.174 msec | 8.848 msec
iTransformer | 6.290 msec | 2.743 msec
PatchTST | 6.618 msec | 2.917 msec
DLinear | 0.420 msec | 0.310 msec

5 Conclusion

This study introduces xPatch, a novel dual-flow architecture for long-term time series forecasting (LTSF). xPatch combines the strengths of Convolutional Neural Networks (CNNs) and Multi-Layer Perceptrons (MLPs) to achieve superior performance. Our findings demonstrate that the integration of an Exponential Moving Average (EMA) seasonal-trend decomposition module effectively captures underlying trends and enhances forecasting accuracy. The dual-stream network further enhances xPatch's adaptability by dynamically weighing the importance of linear and non-linear features for diverse time series patterns. Additionally, this study introduces a robust arctangent loss function and a novel sigmoid learning rate adjustment approach, both of which consistently improve the performance of existing models.
By investigating patching and channel-independence within a CNN-based backbone, xPatch offers a compelling alternative to transformer-based architectures, achieving superior performance while maintaining computational efficiency.

6 Acknowledgements

We would like to thank Enver Menadjiev, Kyowoon Lee, Jihyeon Seong, Jiyeon Han, and the anonymous reviewers for their valuable comments. This work was supported by NAVER, the Institute of Information & Communications Technology Planning & Evaluation (IITP), and the Korean Ministry of Science and ICT (MSIT) under grant agreement No. RS-2019-II190075, Artificial Intelligence Graduate School Program (KAIST); No. RS-2022-II220984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation; and No. RS-2022-II220184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics.

References

Bahdanau, D.; Cho, K.; and Bengio, Y. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.
Box, G. E.; Jenkins, G. M.; Reinsel, G. C.; and Ljung, G. M. 2015. Time series analysis: forecasting and control. John Wiley & Sons.
Dickey, D. A.; and Fuller, W. A. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366a): 427–431.
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.
Gardner Jr, E. S. 1985. Exponential smoothing: The state of the art. Journal of Forecasting, 4(1): 1–28.
Han, L.; Ye, H.-J.; and Zhan, D.-C. 2023. The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting.
arXiv preprint arXiv:2304.05206.
Hendrycks, D.; and Gimpel, K. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415.
Howard, A. G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; and Adam, H. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448–456. PMLR.
Kim, T.; Kim, J.; Tae, Y.; Park, C.; Choi, J.-H.; and Choo, J. 2021. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations.
Lai, G.; Chang, W.-C.; Yang, Y.; and Liu, H. 2018. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 95–104.
Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.-X.; and Yan, X. 2019. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Advances in Neural Information Processing Systems, 32.
Li, Z.; Qi, S.; Li, Y.; and Xu, Z. 2023. Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping. arXiv preprint arXiv:2305.10721.
Liu, Y.; Hu, T.; Zhang, H.; Wu, H.; Wang, S.; Ma, L.; and Long, M. 2024. iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. In The Twelfth International Conference on Learning Representations.
Liu, Y.; Wu, H.; Wang, J.; and Long, M. 2022. Non-stationary transformers: Exploring the stationarity in time series forecasting. Advances in Neural Information Processing Systems, 35: 9881–9893.
Nie, Y.; H. Nguyen, N.; Sinthong, P.; and Kalagnanam, J. 2023. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers.
In International Conference on Learning Representations.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.
Trockman, A.; and Kolter, J. Z. 2022. Patches Are All You Need? Transactions on Machine Learning Research.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Wang, H.; Peng, J.; Huang, F.; Wang, J.; Chen, J.; and Xiao, Y. 2023. MICN: Multi-scale local and global context modeling for long-term series forecasting. In The Eleventh International Conference on Learning Representations.
Wang, S.; Wu, H.; Shi, X.; Hu, T.; Luo, H.; Ma, L.; Zhang, J. Y.; and Zhou, J. 2024a. TimeMixer: Decomposable multiscale mixing for time series forecasting. arXiv preprint arXiv:2405.14616.
Wang, X.; Zhou, T.; Wen, Q.; Gao, J.; Ding, B.; and Jin, R. 2024b. CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting. In The Twelfth International Conference on Learning Representations.
Wen, Q.; Zhou, T.; Zhang, C.; Chen, W.; Ma, Z.; Yan, J.; and Sun, L. 2022. Transformers in time series: A survey. arXiv preprint arXiv:2202.07125.
Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; and Hoi, S. 2022. ETSformer: Exponential smoothing transformers for time-series forecasting. arXiv preprint arXiv:2202.01381.
Wu, H.; Hu, T.; Liu, Y.; Zhou, H.; Wang, J.; and Long, M. 2023. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In International Conference on Learning Representations.
Wu, H.; Xu, J.; Wang, J.; and Long, M. 2021. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34: 22419–22430.
Zeng, A.; Chen, M.; Zhang, L.; and Xu, Q. 2023. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 11121–11128.
Zhang, Y.; and Yan, J. 2022. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The Eleventh International Conference on Learning Representations.
Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; and Zhang, W. 2021. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 11106–11115.
Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; and Jin, R. 2022. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, 27268–27286. PMLR.

Appendix

A Datasets

We conduct experiments on nine real-world multivariate time series datasets to evaluate the performance of the proposed xPatch model:

- Electricity Transformer Temperature (ETT)^1: has four subsets, where ETTh1 and ETTh2 are recorded every hour, while ETTm1 and ETTm2 are recorded every 15 minutes. Data is collected from two different electric transformers.
- Weather^2: collects 21 meteorological indicators in Germany, such as humidity and air temperature, recorded every 10 minutes at the Weather Station of the Max Planck Institute for Biogeochemistry in 2020.
- Traffic^3: records hourly road occupancy rates measured by 862 sensors on San Francisco Bay Area freeways from January 2015 to December 2016.
- Electricity^4: describes the hourly electricity consumption of 321 clients from 2012 to 2014.
- Exchange-rate^5: collects the panel data of daily exchange rates from 8 countries from 1990 to 2016.
- Solar-energy^6: contains the solar power production records of 137 PV plants in 2006.
- ILI^7: records the weekly number of patients and the influenza-like illness ratio in the USA between 2002 and 2021.

Due to its size, the prediction length for the ILI dataset is {24, 36, 48, 60}, while for all other datasets the prediction length is set to {96, 192, 336, 720}. Table 4 summarizes the statistics of the datasets.

Table 4: Detailed dataset descriptions. Dim denotes the variate number of each dataset. Dataset size denotes the total number of time points in the (Train, Validation, Test) split. Frequency denotes the sampling interval of time points.

Dataset | Dim | Dataset Size | Frequency
ETTh1, ETTh2 | 7 | (8545, 2881, 2881) | Hourly
ETTm1, ETTm2 | 7 | (34465, 11521, 11521) | 15 min
Weather | 21 | (36792, 5271, 10540) | 10 min
Traffic | 862 | (12185, 1757, 3509) | Hourly
Electricity | 321 | (18317, 2633, 5261) | Hourly
Exchange-rate | 8 | (5120, 665, 1422) | Daily
Solar-energy | 137 | (36792, 5271, 10540) | 10 min
ILI | 7 | (617, 74, 170) | Weekly

1 https://github.com/zhouhaoyi/ETDataset
2 https://www.bgc-jena.mpg.de/wetter
3 https://pems.dot.ca.gov
4 https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014
5 https://github.com/laiguokun/multivariate-time-series-data
6 https://www.nrel.gov/grid/solar-power-data.html
7 https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html

B Seasonal-Trend Decomposition

For a qualitative comparison between the Simple Moving Average (SMA) introduced in Equation (1) and the Exponential Moving Average (EMA) introduced in Equation (2), we provide a sample from the Traffic dataset in Figure 4. It is evident that when data exhibits spikes and waving patterns, SMA struggles to extract significant trend features. Due to its use of average pooling, SMA is unable to capture spikes, since the average of a sliding window fails to account for sudden peaks. The increasing curves are observed after the actual spike, leaving the extremely changing features to represent seasonality. Conversely, EMA effectively smooths the data, highlighting appropriate trend features.
Figure 4a presents an example of SMA decomposition that fails to produce interpretable trend and seasonality patterns. The seasonal pattern shows minimal change from the initial data, primarily exhibiting a vertical shift. In contrast, Figure 4b shows EMA decomposition on the same sample. EMA effectively decomposes the sample into smoothed trend features and discernible seasonal variations.

[Figure 4: SMA (a) and EMA (b) smoothing and decomposition on a 96-length sample from the Traffic dataset.]

C Stationarity of EMA

The goal of seasonal-trend decomposition is to split data into simpler components with distinct features: a stationary seasonality and a non-stationary trend. To analyze the impact of decomposition on the stationarity of specific segments, we conduct experiments using the Augmented Dickey-Fuller (ADF) stationarity test (Dickey and Fuller 1979). We select a window size L = 720, as it is the longest lookback window analyzed in the experimental settings. We divide the dataset into chunks of length L, consistent with how forecasting is performed, and apply the proposed EMA decomposition to each chunk with α = 0.3. Table 5 illustrates the effect of decomposition on data stationarity.

Table 5: Stationarity of the initial data and of the components decomposed by EMA. ADF denotes the mean ADF p-value, NS non-stationary, and S stationary.

Dataset | ADF | Stat | Trend ADF | Stat | Seasonal ADF | Stat
ETTh1 | 0.131 | NS | 0.157 | NS | 0.369e-6 | S
ETTh2 | 0.370 | NS | 0.198 | NS | 0.447e-6 | S
Electricity | 0.165 | NS | 0.134 | NS | 0.567e-9 | S

The primary objective of decomposition is to isolate a non-stationary trend component, making the remaining seasonality component a stationary sub-series.
To compare the EMA and SMA decompositions, we use the Traffic dataset, which is the longest and most complex among the benchmark datasets. The complexity of the Traffic dataset stems from its highly fluctuating data, where the seasonality pattern is much stronger than the trend. For this reason, the entire dataset is initially classified as stationary by the ADF test. Table 6 compares the stationarity of the whole dataset with the trend-only components decomposed by SMA and by EMA with varying α values:

Chunks           Dataset   SMA     EMA (0.1)   EMA (0.3)   EMA (0.5)
ADF (L=720) ↑    0.029     0.034   0.064       0.219       0.162
S (max 25)  ↓    20        22      10          3           5

Table 6: Mean ADF p-values for the trend components filtered by the SMA and EMA (α = {0.1, 0.3, 0.5}) decompositions on the Traffic dataset. S is the number of stationary chunks of length L = 720. An ADF p-value < 0.05 indicates stationarity.

The mean ADF p-value for the entire dataset is 0.029, with 20 out of 25 regions classified as stationary. This suggests that the Traffic dataset is stationary according to the ADF test. However, the dataset also contains non-stationary trend features, which are weaker than the seasonality component. Therefore, the objective of decomposition is to extract even weak non-stationary characteristics into the trend component to enhance forecastability.

The results reveal that SMA decomposition fails to capture meaningful trend patterns, as most trend segments remain stationary. In contrast, EMA effectively isolates the weak non-stationary trend from the stationary seasonality. Additionally, this experiment demonstrates that α = 0.3 captures trend features optimally.

D EMA Optimization

The straightforward implementation of EMA introduced in Equation (2) requires a for loop, which has O(n) time complexity. Therefore, we aim to optimize the EMA decomposition module to constant time complexity.
The equation can be derived as the following sequence:

$$s_t = \alpha x_t + (1-\alpha)s_{t-1}$$
$$= \alpha x_t + (1-\alpha)\bigl(\alpha x_{t-1} + (1-\alpha)s_{t-2}\bigr)$$
$$= \alpha x_t + (1-\alpha)\alpha x_{t-1} + (1-\alpha)^2 s_{t-2}$$
$$= \alpha x_t + (1-\alpha)\alpha x_{t-1} + (1-\alpha)^2\bigl(\alpha x_{t-2} + (1-\alpha)s_{t-3}\bigr)$$
$$= \alpha x_t + (1-\alpha)\alpha x_{t-1} + (1-\alpha)^2\alpha x_{t-2} + (1-\alpha)^3 s_{t-3}$$
$$= \alpha x_t + \dots + (1-\alpha)^{t-1}\alpha x_1 + (1-\alpha)^t x_0 \quad (24)$$

To match the order of the data sequence $x = [x_0, x_1, \dots, x_{t-1}, x_t]$, we rewrite Equation (24) backwards:

$$s_t = (1-\alpha)^t x_0 + (1-\alpha)^{t-1}\alpha x_1 + \dots + (1-\alpha)^2\alpha x_{t-2} + (1-\alpha)\alpha x_{t-1} + \alpha x_t \quad (25)$$

Consequently, we store the weights $w$ as a geometric sequence of length $t$ with first term $w_0 = 1$ and common ratio $(1-\alpha)$, written in reverse order:

$$w = [(1-\alpha)^t, (1-\alpha)^{t-1}, \dots, (1-\alpha), 1] \quad (26)$$

All entries of Equation (26), except the first one, are multiplied by $\alpha$. The first entry does not carry the $\alpha$ weight by the definition of EMA smoothing, since the first value $s_0$ is equal to the first data item $x_0$:

$$\hat{w} = [(1-\alpha)^t, (1-\alpha)^{t-1}\alpha, \dots, (1-\alpha)\alpha, \alpha] \quad (27)$$

Finally, we apply the dot product of the data slice $x = [x_0, x_1, \dots, x_{t-1}, x_t]$ and $\hat{w}$ from Equation (27):

$$\hat{w} \cdot x = (1-\alpha)^t x_0 + (1-\alpha)^{t-1}\alpha x_1 + \dots + (1-\alpha)^2\alpha x_{t-2} + (1-\alpha)\alpha x_{t-1} + \alpha x_t \quad (28)$$

The resulting Equation (28) is equal to Equation (25), which means we have optimized the EMA decomposition module to O(1) time complexity: the sequential loop is replaced by a single vectorized dot product.

E Ablation Study on EMA

Initially, we conducted experiments to determine a suitable α parameter for each model. We tested five fixed variations α = {0.1, 0.3, 0.5, 0.7, 0.9}, where α = 0.1 represents the heaviest smoothing and α = 0.9 only slight smoothing. Since our main goal is to make both the trend and seasonality streams more interpretable, we assume that smaller α values are more appropriate for this task; larger values leave the trend insufficiently smooth, resulting in a complicated trend and an easy seasonality. For this experiment, we utilized the large datasets Weather, Traffic, and Electricity, since they have longer time series.

It is important to note that decomposition is used differently in xPatch, DLinear, and PatchTST than in Autoformer and FEDformer.
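The weighted-dot-product formulation of Appendix D maps directly onto a few lines of numpy. A minimal sketch (the function names are ours; the paper's module applies the same idea inside its decomposition block):

```python
import numpy as np

def ema_recursive(x, alpha):
    # Reference implementation of Equation (2): O(n) sequential loop.
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

def ema_weights(n, alpha):
    # Equation (27): [(1-a)^t, (1-a)^{t-1} a, ..., (1-a) a, a], t = n-1.
    powers = (1 - alpha) ** np.arange(n - 1, -1, -1)
    w = powers * alpha
    w[0] = powers[0]  # s_0 = x_0, so the oldest point has no alpha factor
    return w

def ema_dot(x, alpha):
    # Equation (28): the final smoothed value as one vectorized dot product.
    x = np.asarray(x, dtype=float)
    return ema_weights(len(x), alpha) @ x
```

Smoothing every prefix of the window can likewise be batched as a single lower-triangular matrix-vector product, which is what makes the module loop-free in practice.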
In xPatch, DLinear, and PatchTST, data is decomposed into trend and seasonality, and both components are separately predicted by the model. Since the decomposed data is the final shape that should be predicted, both trend and seasonality should be close in terms of complexity and interpretability. On the other hand, Autoformer and FEDformer employ inner series decomposition blocks: the transformer encoder first eliminates the long-term trend part via series decomposition blocks and focuses on seasonal pattern modeling, while the decoder progressively accumulates the trend part extracted from hidden variables. The past seasonal information from the encoder is utilized by the encoder-decoder Auto-Correlation. In both Autoformer and FEDformer, the encoder contains two series decomposition blocks and the decoder contains three, which gradually smooth the data to focus on the trend rather than decompose the data into two streams.

Method xPatch xPatch* PatchTST PatchTST* DLinear DLinear* FEDformer FEDformer* Autoformer Autoformer* Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE ETTh1 96 0.385 0.390 0.376 0.386 0.393 0.407 0.386 0.400 0.386 0.400 0.382 0.395 0.376 0.419 0.379 0.415 0.449 0.459 0.470 0.465 192 0.423 0.409 0.417 0.407 0.445 0.434 0.441 0.430 0.437 0.432 0.439 0.433 0.420 0.448 0.422 0.445 0.500 0.482 0.454 0.456 336 0.450 0.426 0.449 0.425 0.483 0.451 0.478 0.445 0.481 0.459 0.491 0.467 0.459 0.465 0.454 0.466 0.521 0.496 0.480 0.475 720 0.483 0.460 0.470 0.456 0.479 0.470 0.474 0.466 0.519 0.516 0.524 0.515 0.506 0.507 0.485 0.498 0.514 0.512 0.500 0.504 ETTh2 96 0.232 0.300 0.233 0.300 0.293 0.342 0.291 0.340 0.333 0.387 0.329 0.384 0.346 0.388 0.335 0.382 0.358 0.397 0.371 0.407 192 0.289 0.338 0.291 0.338 0.377 0.393 0.371 0.390 0.477 0.476 0.431 0.443 0.429 0.439 0.426 0.443 0.456 0.452 0.457 0.457 336 0.339 0.376 0.344 0.377 0.380 0.408 0.375 0.407 0.594 0.541 0.445
0.454 0.496 0.487 0.470 0.472 0.482 0.486 0.467 0.474 720 0.406 0.430 0.407 0.427 0.411 0.433 0.408 0.432 0.831 0.657 0.776 0.632 0.463 0.474 0.460 0.475 0.515 0.511 0.454 0.477ETTm196 0.312 0.349 0.311 0.346 0.320 0.359 0.320 0.357 0.345 0.372 0.346 0.372 0.379 0.419 0.336 0.388 0.505 0.475 0.402 0.431 192 0.355 0.372 0.348 0.368 0.365 0.381 0.363 0.381 0.380 0.389 0.387 0.287 0.426 0.441 0.376 0.413 0.553 0.496 0.569 0.505 336 0.392 0.395 0.388 0.391 0.391 0.401 0.391 0.404 0.413 0.413 0.412 0.414 0.445 0.459 0.434 0.448 0.621 0.537 0.529 0.495 720 0.466 0.431 0.461 0.430 0.455 0.436 0.451 0.439 0.474 0.453 0.475 0.453 0.543 0.490 0.478 0.470 0.671 0.561 0.590 0.524ETTm296 0.165 0.250 0.164 0.248 0.177 0.259 0.176 0.258 0.193 0.292 0.186 0.277 0.203 0.287 0.181 0.275 0.255 0.339 0.210 0.297 192 0.230 0.293 0.230 0.291 0.248 0.306 0.240 0.303 0.284 0.362 0.256 0.328 0.269 0.328 0.248 0.319 0.281 0.340 0.283 0.340 336 0.292 0.333 0.292 0.331 0.313 0.346 0.300 0.341 0.369 0.427 0.324 0.374 0.325 0.366 0.314 0.362 0.339 0.372 0.330 0.368 720 0.381 0.384 0.381 0.383 0.399 0.397 0.403 0.400 0.554 0.522 0.511 0.498 0.421 0.415 0.427 0.421 0.433 0.432 0.431 0.427Weather96 0.170 0.205 0.168 0.203 0.177 0.218 0.175 0.217 0.196 0.255 0.199 0.261 0.217 0.296 0.241 0.320 0.266 0.336 0.249 0.332 192 0.218 0.248 0.214 0.245 0.225 0.259 0.223 0.257 0.237 0.296 0.237 0.294 0.276 0.336 0.273 0.342 0.307 0.367 0.326 0.380 336 0.240 0.277 0.236 0.273 0.277 0.297 0.276 0.296 0.283 0.335 0.270 0.329 0.339 0.380 0.348 0.382 0.359 0.395 0.339 0.379 720 0.310 0.322 0.309 0.321 0.350 0.345 0.349 0.346 0.345 0.381 0.342 0.381 0.403 0.428 0.395 0.406 0.419 0.428 0.482 0.470Traffic96 0.489 0.281 0.481 0.280 0.446 0.283 0.438 0.279 0.650 0.396 0.656 0.397 0.587 0.366 0.571 0.361 0.613 0.388 0.558 0.368 192 0.485 0.276 0.484 0.275 0.453 0.285 0.449 0.282 0.598 0.370 0.596 0.367 0.604 0.373 0.613 0.386 0.616 0.382 0.635 0.397 336 0.500 0.281 0.504 0.279 0.467 0.291 0.464 0.289 0.605 0.373 0.599 
0.370 0.621 0.383 0.609 0.375 0.622 0.337 0.626 0.382 720 0.534 0.295 0.540 0.293 0.500 0.309 0.496 0.307 0.645 0.394 0.641 0.393 0.626 0.382 0.643 0.397 0.660 0.408 0.648 0.392Electricity96 0.166 0.248 0.159 0.244 0.166 0.252 0.164 0.251 0.197 0.282 0.194 0.277 0.193 0.308 0.181 0.297 0.201 0.317 0.199 0.311 192 0.165 0.252 0.160 0.248 0.174 0.260 0.172 0.259 0.196 0.285 0.189 0.278 0.201 0.315 0.195 0.309 0.222 0.334 0.217 0.326 336 0.186 0.269 0.182 0.267 0.190 0.277 0.188 0.276 0.209 0.301 0.205 0.295 0.214 0.329 0.211 0.324 0.231 0.338 0.241 0.342 720 0.222 0.300 0.216 0.298 0.230 0.311 0.230 0.312 0.245 0.333 0.241 0.329 0.246 0.355 0.249 0.356 0.254 0.361 0.249 0.357Exchange96 0.084 0.201 0.082 0.199 0.080 0.196 0.079 0.196 0.088 0.218 0.078 0.198 0.148 0.278 0.109 0.238 0.197 0.323 0.131 0.264 192 0.182 0.301 0.177 0.298 0.171 0.293 0.168 0.291 0.176 0.315 0.156 0.292 0.271 0.380 0.213 0.333 0.300 0.369 0.252 0.368 336 0.349 0.424 0.349 0.425 0.317 0.406 0.319 0.407 0.313 0.427 0.300 0.414 0.460 0.500 0.408 0.462 0.509 0.524 0.470 0.512 720 0.897 0.716 0.891 0.711 0.887 0.703 0.817 0.679 0.839 0.695 0.785 0.671 1.195 0.841 1.144 0.820 1.447 0.941 1.098 0.813ILI24 1.541 0.755 1.378 0.685 1.691 0.816 1.468 0.729 2.398 1.040 2.592 1.092 3.228 1.260 2.646 1.062 3.483 1.287 3.403 1.254 36 1.468 0.734 1.315 0.681 1.415 0.762 1.343 0.695 2.646 1.088 2.738 1.125 2.679 1.080 2.492 0.971 3.103 1.148 2.720 1.051 48 1.439 0.743 1.459 0.747 1.754 0.819 1.617 0.809 2.614 1.086 2.665 1.098 2.622 1.078 2.521 1.017 2.669 1.085 2.737 1.098 60 1.574 0.778 1.616 0.787 1.645 0.820 1.569 0.764 2.804 1.146 2.787 1.136 2.857 1.157 2.716 1.097 2.770 1.125 2.889 1.139 Gain 1.35% 1.01% 1.91% 1.25% 2.93% 3.00% 4.67% 2.90% 4.96% 2.30% Table 7: Comparison of forecasting errors between the baselines and the models with the EMA decomposition module with unified lookback window L= 36 for the ILI dataset, and L= 96 for all other datasets. 
The model name with * denotes the model with EMA decomposition.

Figure 5: Forecasting performance (MAE) for α = {0.1, 0.3, 0.5, 0.7, 0.9} with lookback window L = 96 and prediction horizon T = 96, on Weather, Traffic, and Electricity, for xPatch, DLinear, FEDformer, and Autoformer.

From the results in Figure 5, for Autoformer and FEDformer we found the optimal α = 0.1, while for xPatch, DLinear, and PatchTST we select α = 0.3. To conclude, for transformer-based models that employ Autoformer's series decomposition blocks, the best α = 0.1, while for models that initially decompose data into trend and seasonality patterns and predict them separately, the best α = 0.3. The α hyperparameter is consistent and robust across all datasets within these two categories. Additionally, setting α as a learnable parameter showed only a marginal performance improvement at the cost of a significant increase in training time. Therefore, we decided to use a fixed α parameter.

Table 7 presents the full comparative analysis between the original state-of-the-art models and versions that incorporate the exponential decomposition module. The effectiveness of EMA was assessed on the Autoformer, FEDformer, and DLinear models, which were initially designed with a decomposition unit, and additionally on PatchTST. We reproduce the models according to their official code and configurations, only replacing the decomposition module with EMA. All models are tested in unified experimental settings: lookback L = 36 for the ILI dataset and L = 96 for all other datasets.

F Dual Flow Net

We explore the impact of the dual flow network in the xPatch architecture and assess the contribution of each stream. Figure 6 compares the dual flow network with separate non-linear and linear streams to analyze the contribution and behavior of each flow.
The four possible configurations:
• Original: Seasonality → non-linear stream, Trend → linear stream
• Reversed: Seasonality → linear stream, Trend → non-linear stream
• Non-linear only: Seasonality → non-linear stream, Trend → non-linear stream
• Linear only: Seasonality → linear stream, Trend → linear stream

The results reveal that the model benefits from the original configuration of the dual flow architecture, effectively selecting between linear and non-linear features, leading to improved forecasting performance.

Figure 6: Separate forecasting performance (MAE) of the xPatch linear and non-linear streams on Weather, Traffic, and Electricity; lookback window L = 96, prediction horizons T = {96, 192, 336, 720}. Curves: linear stream, non-linear stream, reversed, and full xPatch.

G Ablation Study on Arctangent Loss

We find that the scaling coefficient ρ_CARD(i) = i^{−1/2} from Equation (15) decreases too rapidly for the long-term time series forecasting task. Therefore, we propose a novel approach based on the arctangent function, which features a slower increase rate compared to the exponential functions analyzed in the CARD (Wang et al. 2024b) paper. Initially, we employ the negative arctangent function −arctan(i) as the decreasing function required for our task. Subsequently, we perform a vertical translation to ensure that the function equals 1 when i = 1. In other words, we shift the entire graph of the function along the y-axis by y, solving the equation −arctan(1) + y = 1, which yields y = π/4 + 1.
Therefore, the arctangent loss $\mathcal{L}_{\arctan}$ between the predicted univariate sequence $\hat{x}^{(i)}_{1:T}$ and the ground truth observations $x^{(i)}_{1:T}$, where $T$ is the future prediction length, $\rho_{\arctan}(i)$ is the loss scaling coefficient, and $m$ is the arctangent scaling parameter, is denoted as:

$$\mathcal{L}_{\arctan} = \frac{1}{T}\sum_{i=1}^{T}\rho_{\arctan}(i)\,\|\hat{x}^{(i)}_{1:T} - x^{(i)}_{1:T}\| \quad (29)$$

$$\rho_{\arctan}(i) = -m\bigl(\arctan(i) - \tfrac{\pi}{4}\bigr) + 1 \quad (30)$$

Subsequently, we investigate the scaling parameter of the arctangent loss function. In Table 8, we compare different arctangent loss scaling parameters m = {1, 1/2, 1/3, 1/4}, where m = 1 is the original arctangent loss function and m = 1/4 is the function closest to the original MAE loss without scaling.

Function       m=1            m=0.5          m=0.33         m=0.25
Metric         MSE    MAE     MSE    MAE     MSE    MAE     MSE    MAE
ETTh2   96     0.226  0.297   0.226  0.297   0.226  0.297   0.226  0.297
        192    0.275  0.330   0.276  0.330   0.276  0.330   0.276  0.330
        336    0.312  0.360   0.313  0.360   0.313  0.360   0.313  0.360
        720    0.384  0.418   0.384  0.418   0.384  0.418   0.384  0.418
ETTm2   96     0.153  0.240   0.154  0.240   0.154  0.240   0.154  0.240
        192    0.213  0.280   0.213  0.280   0.213  0.280   0.213  0.280
        336    0.264  0.315   0.264  0.315   0.264  0.315   0.264  0.315
        720    0.338  0.363   0.338  0.363   0.338  0.364   0.338  0.364
Weather 96     0.146  0.185   0.146  0.185   0.146  0.185   0.146  0.185
        192    0.189  0.227   0.189  0.227   0.189  0.227   0.189  0.227
        336    0.218  0.260   0.218  0.260   0.218  0.260   0.218  0.260
        720    0.291  0.315   0.291  0.315   0.291  0.315   0.291  0.315

Table 8: Comparison of forecasting errors between xPatch versions with different arctangent loss scaling coefficients m = {1, 1/2, 1/3, 1/4}.

The experiment results indicate that there is no need to scale the arctangent loss, therefore we maintain m = 1 without scaling. The arctangent scaling coefficient is denoted as:

$$\rho_{\arctan}(i) = -\arctan(i) + \frac{\pi}{4} + 1 \quad (31)$$

Figure 7 illustrates the comparison between the arctangent function and the exponential functions $i^{-1/2}$, $i^{-1/3}$, and $i^{-1/4}$ analyzed in CARD (Wang et al. 2024b).
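Equations (29)–(30) can be sketched in a few lines of numpy. The reduction over the horizon by a mean and the use of the absolute error are our assumptions about the norm in Equation (29):

```python
import numpy as np

def arctan_coeff(T, m=1.0):
    # Equation (30): rho(i) = -m*(arctan(i) - pi/4) + 1 for i = 1..T,
    # so rho(1) = 1 and the weights decay slowly along the horizon.
    i = np.arange(1, T + 1, dtype=float)
    return -m * (np.arctan(i) - np.pi / 4) + 1.0

def arctan_loss(pred, true, m=1.0):
    # Equation (29), sketched as a horizon-position-weighted absolute
    # error reduced by mean (an MAE-style objective is assumed).
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    T = pred.shape[-1]
    return float(np.mean(arctan_coeff(T, m) * np.abs(pred - true)))
```

With m = 1 the weight at the furthest common horizon, ρ(720), stays near 0.216, matching the value discussed around Figure 7.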
Figure 7: Comparison between the arctangent function and the exponential functions.

Considering that the furthest prediction horizon under the common LTSF setting is i = 720, the scaling coefficient of the signal decay-based loss is ρ_CARD(720) ≈ 0.037, whereas the scaling coefficient of the arctangent loss is ρ_arctan(720) ≈ 0.216.

Method xPatch xPatch* CARD CARD* PatchTST PatchTST* DLinear DLinear* FEDformer FEDformer* Autoformer Autoformer* Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE ETTh1 96 0.375 0.394 0.376 0.386 0.383 0.391 0.382 0.393 0.393 0.407 0.385 0.398 0.386 0.400 0.381 0.387 0.376 0.419 0.370 0.400 0.449 0.459 0.451 0.440 192 0.413 0.412 0.417 0.407 0.435 0.420 0.435 0.421 0.445 0.434 0.443 0.427 0.437 0.432 0.430 0.417 0.420 0.448 0.420 0.432 0.500 0.482 0.458 0.444 336 0.438 0.430 0.449 0.425 0.479 0.442 0.474 0.439 0.483 0.451 0.483 0.444 0.481 0.459 0.478 0.449 0.459 0.465 0.462 0.455 0.521 0.496 0.537 0.487 720 0.466 0.464 0.470 0.456 0.471 0.461 0.462 0.455 0.479 0.470 0.474 0.460 0.519 0.516 0.508 0.499 0.506 0.507 0.498 0.495 0.514 0.512 0.571 0.532 ETTh2 96 0.236 0.311 0.233 0.300 0.281 0.330 0.283 0.332 0.293 0.342 0.292 0.336 0.333 0.387 0.294 0.344 0.346 0.388 0.328 0.370 0.358 0.397 0.337 0.375 192 0.292 0.346 0.291 0.338 0.363 0.381 0.363 0.383 0.377 0.393 0.371 0.385 0.477 0.476 0.379 0.396 0.429 0.439 0.414 0.423 0.456 0.452 0.422 0.427 336 0.344 0.386 0.344 0.377 0.411 0.418 0.378 0.401 0.380 0.408 0.378 0.402 0.594 0.541 0.436 0.442 0.496 0.487 0.462 0.463 0.482 0.486 0.439 0.449 720 0.414 0.440 0.407 0.427 0.416 0.431 0.399 0.421 0.411 0.433 0.406 0.427 0.831 0.657 0.584 0.534 0.463 0.474 0.433 0.455 0.515 0.511 0.427 0.448 ETTm1 96 0.329 0.373 0.311 0.346 0.316 0.347 0.316 0.347 0.320 0.359 0.310 0.338 0.345 0.372 0.331 0.350 0.379 0.419 0.345 0.382 0.505 0.475 0.407 0.422 192 0.351 0.386 0.348 0.368 0.363 0.370 0.363 0.368 0.365 0.381
0.367 0.367 0.380 0.389 0.376 0.373 0.426 0.441 0.390 0.407 0.553 0.496 0.555 0.483 336 0.390 0.409 0.388 0.391 0.392 0.390 0.394 0.391 0.391 0.401 0.390 0.388 0.413 0.413 0.407 0.395 0.445 0.459 0.431 0.430 0.621 0.537 0.487 0.458 720 0.458 0.445 0.461 0.430 0.458 0.425 0.462 0.428 0.455 0.436 0.459 0.428 0.474 0.453 0.469 0.433 0.543 0.490 0.482 0.465 0.671 0.561 0.488 0.462ETTm2960.167 0.258 0.164 0.248 0.169 0.248 0.169 0.248 0.177 0.259 0.175 0.252 0.193 0.292 0.182 0.257 0.203 0.287 0.185 0.272 0.255 0.339 0.218 0.302 192 0.232 0.301 0.230 0.291 0.234 0.292 0.236 0.293 0.248 0.306 0.244 0.297 0.284 0.362 0.244 0.302 0.269 0.328 0.252 0.314 0.281 0.340 0.270 0.328 336 0.291 0.338 0.292 0.331 0.294 0.339 0.293 0.329 0.313 0.346 0.305 0.337 0.369 0.427 0.306 0.346 0.325 0.366 0.317 0.356 0.339 0.372 0.322 0.361 720 0.378 0.391 0.381 0.383 0.390 0.388 0.392 0.388 0.399 0.397 0.399 0.391 0.554 0.522 0.415 0.413 0.421 0.415 0.415 0.408 0.433 0.432 0.414 0.411Weather960.173 0.218 0.170 0.212 0.150 0.188 0.154 0.193 0.177 0.218 0.176 0.208 0.196 0.255 0.207 0.233 0.217 0.296 0.228 0.298 0.266 0.336 0.230 0.288 192 0.217 0.256 0.210 0.248 0.202 0.238 0.205 0.240 0.225 0.259 0.222 0.249 0.237 0.296 0.243 0.268 0.276 0.336 0.266 0.322 0.307 0.367 0.296 0.343 336 0.238 0.283 0.226 0.273 0.260 0.282 0.264 0.286 0.277 0.297 0.276 0.289 0.283 0.335 0.286 0.305 0.339 0.380 0.349 0.387 0.359 0.395 0.347 0.372 720 0.310 0.329 0.309 0.321 0.343 0.353 0.342 0.337 0.350 0.345 0.350 0.339 0.345 0.381 0.345 0.355 0.403 0.428 0.383 0.412 0.419 0.428 0.421 0.423Traffic960.490 0.299 0.471 0.275 0.419 0.269 0.395 0.250 0.446 0.283 0.472 0.270 0.650 0.396 0.687 0.364 0.587 0.366 0.596 0.347 0.613 0.388 0.647 0.374 192 0.488 0.293 0.478 0.272 0.443 0.276 0.418 0.258 0.453 0.285 0.476 0.274 0.598 0.370 0.645 0.340 0.604 0.373 0.617 0.360 0.616 0.382 0.656 0.381 336 0.495 0.298 0.501 0.276 0.460 0.283 0.437 0.265 0.467 0.291 0.493 0.280 0.605 0.373 0.647 0.342 0.621 0.383 0.627 0.358 0.622 
0.337 0.638 0.366 720 0.548 0.335 0.547 0.291 0.490 0.299 0.485 0.291 0.500 0.309 0.526 0.299 0.645 0.394 0.676 0.362 0.626 0.382 0.653 0.368 0.660 0.408 0.666 0.380 Electricity 96 0.159 0.251 0.159 0.244 0.141 0.233 0.144 0.235 0.166 0.252 0.172 0.248 0.197 0.282 0.198 0.269 0.193 0.308 0.187 0.293 0.201 0.317 0.196 0.304 192 0.160 0.255 0.160 0.248 0.160 0.250 0.158 0.247 0.174 0.260 0.179 0.256 0.196 0.285 0.196 0.272 0.201 0.315 0.197 0.304 0.222 0.334 0.217 0.322 336 0.183 0.275 0.182 0.267 0.173 0.263 0.173 0.265 0.190 0.277 0.194 0.272 0.209 0.301 0.209 0.287 0.214 0.329 0.209 0.315 0.231 0.338 0.243 0.341 720 0.226 0.313 0.216 0.298 0.197 0.284 0.201 0.286 0.230 0.311 0.234 0.306 0.245 0.333 0.244 0.319 0.246 0.355 0.244 0.346 0.254 0.361 0.258 0.351 Gain 0.95% 3.89% 0.76% 1.06% 0.55% 2.62% 4.61% 8.89% 2.69% 3.97% 4.69% 5.31%

Table 9: Comparison of forecasting errors between the baselines and the models trained with arctangent loss, with unified lookback window L = 96. The model name with * denotes the model trained with arctangent loss.

Figure 7 highlights that the arctangent function exhibits a steeper initial curve compared to the exponential functions, indicating an overall slower decay rate.

The effectiveness of the arctangent loss function was evaluated on the PatchTST, DLinear, FEDformer, and Autoformer models, which were trained with an MSE loss objective, and additionally on the CARD model, comparing the arctangent loss with the signal decay-based loss. Table 9 presents the full comparative analysis between the original state-of-the-art models and versions trained using the proposed arctangent training loss. All models are tested with lookback L = 96 for all datasets.

H Ablation Study on Sigmoid Learning Rate Adjustment Scheme

In Equation 22, CARD employs a linear warm-up initially and then adjusts the learning rate according to one cycle of the cosine function for the remaining epochs.
While we recognize the effectiveness of the initial warm-up approach, we propose that the entire adjustment scheme should be encapsulated within a single function featuring a non-linear warm-up. Therefore, we chose to investigate the sigmoid function. A logistic function is a common sigmoid curve with the following equation:

$$f(x) = \frac{L}{1 + e^{-k(x - x_0)}} \quad (32)$$

where $x_0$ is the $x$ value of the function's midpoint, $L$ is the supremum of the values of the function, and $k$ is the logistic growth rate, or steepness, of the curve.

To calculate the learning rate $\alpha$ for the current epoch $t$, the supremum $L$ corresponds to the initialized learning rate $\alpha_0$, while the midpoint $t_0$ serves as a warm-up coefficient denoted by $w$, as it extends the rising curve of the sigmoid function:

$$\alpha_t = \frac{\alpha_0}{1 + e^{-k(t - w)}} \quad (33)$$

To extend the function to decrease slowly after some point, Equation (33) requires updating with the subtraction of another sigmoid function featuring a smaller growth rate. Before delving into the analysis of the second sigmoid function, we examine the exponent:

$$e^{-k(t - w)} = e^{-kt + kw} \quad (34)$$

Since the $\alpha_t$ function is intended to equal 0 at $t = 0$, we can revise the $-kt$ term to $-\frac{kt}{s}$, where $s$ represents the smoothing coefficient:

$$e^{-\frac{kt}{s} + kw} = e^{-\frac{k}{s}(t - sw)} \quad (35)$$

Hence, we introduce the novel sigmoid learning rate adjustment scheme. The learning rate $\alpha_t$ at epoch $t$, with an initial learning rate $\alpha_0$, logistic growth rate $k$, decreasing-curve smoothing rate $s$, and warm-up coefficient $w$, is calculated as follows:

$$\alpha_t = \frac{\alpha_0}{1 + e^{-k(t - w)}} - \frac{\alpha_0}{1 + e^{-\frac{k}{s}(t - sw)}} \quad (36)$$

Note that the rationale behind incorporating this specific smoothing coefficient $s$ is to ensure that $\alpha_t(0) = 0$: at $t = 0$ both sigmoid terms take the same value and cancel.

Equation (36) contains three hyperparameters: the logistic growth rate $k$, the decreasing-curve smoothing rate $s$, and the warm-up coefficient $w$. In Table 10, we compare different sigmoid learning rate adjustment hyperparameters k = {0.3, 0.5, 0.8}, s = {5, 8, 10}, w = {5, 8, 10}.
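Equation (36) translates directly into code. A minimal sketch (the function name is ours; zero-based epoch indexing is an assumption):

```python
import numpy as np

def sigmoid_lr(t, lr0=1e-4, k=0.5, s=10.0, w=10.0):
    # Equation (36): a fast-rising sigmoid warm-up minus a flatter
    # sigmoid decay; both terms coincide at t = 0, so the schedule
    # starts exactly at zero.
    rise = lr0 / (1.0 + np.exp(-k * (t - w)))
    fall = lr0 / (1.0 + np.exp(-(k / s) * (t - s * w)))
    return rise - fall
```

With the default hyperparameters the rate warms up over roughly the first 2w epochs, peaks near lr0, and then decays smoothly toward zero over the remaining training.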
Function 0.3,5,5 0.3,10,10 0.5,5,5 0.5,10,10 0.8,8,8 0.8,5,10 Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEETTh196 0.359 0.389 0.355 0.379 0.362 0.392 0.354 0.379 0.354 0.379 0.355 0.381 192 0.378 0.401 0.376 0.396 0.379 0.403 0.376 0.395 0.377 0.396 0.377 0.396 336 0.394 0.419 0.394 0.417 0.392 0.416 0.391 0.415 0.392 0.416 0.392 0.416 720 0.443 0.463 0.442 0.459 0.444 0.463 0.442 0.459 0.444 0.460 0.443 0.459ETTh296 0.228 0.299 0.226 0.296 0.229 0.300 0.226 0.297 0.226 0.297 0.226 0.297 192 0.278 0.333 0.275 0.330 0.278 0.333 0.275 0.330 0.275 0.331 0.275 0.330 336 0.315 0.363 0.315 0.361 0.315 0.363 0.312 0.360 0.315 0.361 0.312 0.360 720 0.387 0.420 0.384 0.418 0.388 0.421 0.384 0.418 0.384 0.418 0.385 0.419ETTm196 0.276 0.330 0.275 0.330 0.276 0.330 0.275 0.330 0.275 0.330 0.275 0.330 192 0.317 0.356 0.315 0.356 0.317 0.357 0.315 0.355 0.316 0.356 0.316 0.356 336 0.356 0.377 0.355 0.376 0.356 0.377 0.355 0.376 0.355 0.376 0.355 0.376 720 0.420 0.412 0.419 0.412 0.421 0.412 0.419 0.411 0.419 0.411 0.422 0.414ETTm296 0.154 0.240 0.153 0.240 0.154 0.240 0.153 0.240 0.153 0.240 0.153 0.240 192 0.214 0.281 0.215 0.281 0.215 0.281 0.213 0.280 0.214 0.281 0.214 0.282 336 0.265 0.315 0.265 0.315 0.265 0.315 0.264 0.315 0.265 0.315 0.265 0.316 720 0.340 0.365 0.338 0.363 0.341 0.365 0.338 0.363 0.338 0.363 0.339 0.363Weather96 0.147 0.190 0.148 0.185 0.147 0.190 0.146 0.185 0.146 0.185 0.146 0.186 192 0.191 0.228 0.189 0.227 0.191 0.228 0.189 0.227 0.189 0.227 0.189 0.227 336 0.220 0.261 0.219 0.260 0.221 0.261 0.218 0.260 0.219 0.260 0.219 0.260 720 0.293 0.317 0.292 0.315 0.294 0.318 0.291 0.315 0.292 0.315 0.292 0.316 Table 10: Comparison of forecasting errors between xPatch versions trained with different sigmoid learning rate hyper- parameters (k,s,w). Figure 8 demonstrates the Sigmoid function with different hyperparameters. 
Figure 8: Sigmoid function with different (k, s, w) parameters.

Following the experiment results, we adopt the k = 0.5, s = 10, and w = 10 hyperparameter combination. Figure 9 demonstrates the Standard (18), PatchTST (20), Cosine (22), and Sigmoid (23) learning rate adjustment schemes. The initial learning rate is set to α = 0.0001, the number of warm-up epochs w = 10, the logistic growth rate k = 0.5, and the decreasing-curve smoothing rate s = 10.

Figure 9: LTSF learning rate adjustment strategies.

The effectiveness of the sigmoid adjustment approach was evaluated on the PatchTST and CARD models, since these are the only models that utilize long training with 100 epochs and that introduced their own learning rate adjustment techniques. Table 11 presents the full comparative analysis between the original state-of-the-art models and versions trained using the proposed sigmoid learning rate adjustment scheme. All models are tested with lookback L = 96 for all datasets.

I Lookback Window

In theory, a longer lookback window should increase the receptive field, potentially leading to improved forecasting performance. However, most transformer-based solutions do not follow this assumption: they do not necessarily benefit from longer historical data, which indicates the transformer architecture's ineffectiveness in capturing temporal dependencies. The only transformer-based model that strongly preserves temporal relations is PatchTST. Therefore, we hypothesize that this capability is largely attributed to the patching mechanism and can be better leveraged with other architectures, such as CNNs. Figure 10 illustrates the ability of different models to learn from a longer lookback window.
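The patching mechanism referenced above can be sketched as a simple unfolding of the lookback window (PatchTST-style; the patch length and stride defaults are illustrative assumptions):

```python
import numpy as np

def make_patches(x, patch_len=16, stride=8):
    # Split a lookback window into overlapping patches, turning a series
    # of L points into n = (L - patch_len) // stride + 1 local tokens.
    x = np.asarray(x, dtype=float)
    n = (len(x) - patch_len) // stride + 1
    return np.stack([x[i * stride : i * stride + patch_len] for i in range(n)])
```

For a lookback window of L = 96 with these defaults this yields 11 patches of length 16, each of which is then embedded and processed by the downstream backbone (a transformer in PatchTST, the CNN and MLP streams in xPatch).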
Figure 10: Forecasting performance (MAE) for lookback windows L = {48, 96, 192, 336, 512, 720} and prediction horizons T = {96, 720}, on Weather, Traffic, and Electricity, for Autoformer, FEDformer, DLinear, RLinear, iTransformer, PatchTST, CARD, and xPatch.

Method xPatch xPatch* CARD CARD* PatchTST PatchTST* Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE ETTh1 96 0.385 0.395 0.376 0.386 0.383 0.391 0.379 0.390 0.393 0.407 0.377 0.396 192 0.422 0.408 0.417 0.407 0.435 0.420 0.433 0.419 0.445 0.434 0.429 0.424 336 0.445 0.423 0.449 0.425 0.479 0.442 0.475 0.438 0.483 0.451 0.472 0.443 720 0.469 0.455 0.470 0.456 0.471 0.461 0.470 0.459 0.479 0.470 0.475 0.466 ETTh2 96 0.234 0.301 0.233 0.300 0.281 0.330 0.286 0.333 0.293 0.342 0.290 0.339 192 0.289 0.337 0.291 0.338 0.363 0.381 0.360 0.379 0.377 0.393 0.364 0.386 336 0.341 0.376 0.344 0.377 0.411 0.418 0.369 0.396 0.380 0.408 0.373 0.405 720 0.407 0.428 0.407 0.427 0.416 0.431 0.404 0.425 0.411 0.433 0.409 0.433 ETTm1 96 0.318 0.353 0.311 0.346 0.316 0.347 0.314 0.344 0.320 0.359 0.320 0.359 192 0.350 0.368 0.348 0.368 0.363 0.370 0.363 0.369 0.365 0.381 0.366 0.384 336 0.386 0.391 0.388 0.391 0.392 0.390 0.392 0.389 0.391 0.401 0.392 0.403 720 0.460 0.428 0.461 0.430 0.458 0.425 0.462 0.427 0.455 0.436 0.454 0.438 ETTm2 96 0.168 0.251 0.164 0.248 0.169 0.248 0.168 0.248 0.177 0.259 0.175 0.258 192 0.231 0.292 0.230 0.291 0.234 0.292 0.233 0.291 0.248 0.306 0.243 0.302 336 0.293 0.331 0.292 0.331 0.294 0.339 0.292 0.329 0.313 0.346 0.302 0.341 720 0.380 0.384 0.381 0.383 0.390 0.388 0.392 0.388 0.399 0.397 0.400 0.396 Weather 96 0.178 0.212 0.170 0.212 0.150 0.188 0.154 0.192 0.177 0.218 0.173 0.215 192 0.223 0.251 0.210 0.248 0.202 0.238 0.205 0.240 0.225 0.259
0.219 0.256 336 0.241 0.276 0.226 0.273 0.260 0.282 0.264 0.285 0.277 0.297 0.275 0.297 720 0.313 0.323 0.309 0.321 0.343 0.353 0.342 0.337 0.350 0.345 0.350 0.346 Traffic 96 0.482 0.286 0.471 0.275 0.419 0.269 0.412 0.257 0.446 0.283 0.432 0.276 192 0.479 0.280 0.478 0.272 0.443 0.276 0.430 0.263 0.453 0.285 0.443 0.279 336 0.498 0.284 0.501 0.276 0.460 0.283 0.445 0.270 0.467 0.291 0.455 0.286 720 0.530 0.284 0.501 0.291 0.490 0.299 0.473 0.286 0.500 0.309 0.490 0.304 Electricity 96 0.166 0.249 0.159 0.244 0.141 0.233 0.136 0.229 0.166 0.252 0.165 0.251 192 0.166 0.253 0.160 0.248 0.160 0.250 0.155 0.245 0.174 0.260 0.172 0.259 336 0.189 0.272 0.182 0.267 0.173 0.263 0.168 0.259 0.190 0.277 0.189 0.276 720 0.229 0.306 0.216 0.298 0.197 0.284 0.193 0.280 0.230 0.311 0.231 0.313 Gain 1.42% 1.04% 1.14% 1.34% 1.33% 0.74%

Table 11: Comparison of forecasting errors between the baselines and the models trained with the sigmoid learning rate adjustment strategy, with unified lookback window L = 96. The model name with * denotes the model trained using the sigmoid learning rate adjustment technique.

J Instance Normalization

Table 12 presents a comprehensive comparative analysis between the original state-of-the-art models and versions trained without RevIN instance normalization. All models are tested with a lookback L = 96 for all datasets. The forecasting performance of xPatch was enhanced by the RevIN module, with improvements of 8.67% in MSE and 6.53% in MAE, respectively. For CARD, instance normalization improved accuracy by 28.69% in MSE and 23.22% in MAE, while for PatchTST the gains are 9.96% in MSE and 10.07% in MAE, respectively.

The greater benefit of instance normalization observed in CARD (Wang et al. 2024b) and PatchTST (Nie et al. 2023) can be attributed to xPatch's use of EMA decomposition. According to the non-stationary transformer (Liu et al. 2022), statistical methods like ARIMA (Box et al.
2015) employ moving average decomposition for stationarization, while most recent state-of-the-art solutions rely on RevIN (Kim et al. 2021). Consequently, xPatch incorporates two mechanisms to address data non-stationarity and distribution shift, which explains its superior performance compared to CARD and PatchTST even without the use of RevIN.

Method xPatch xPatch* CARD CARD* PatchTST PatchTST* Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE ETTh2 96 0.263 0.324 0.233 0.300 0.377 0.421 0.281 0.330 0.317 0.371 0.293 0.342 192 0.324 0.356 0.291 0.338 0.796 0.647 0.363 0.381 0.427 0.441 0.377 0.393 336 0.412 0.414 0.344 0.377 0.668 0.587 0.411 0.418 0.495 0.488 0.380 0.408 720 0.643 0.556 0.407 0.427 1.002 0.744 0.416 0.431 0.756 0.622 0.411 0.433 ETTm2 96 0.176 0.261 0.164 0.248 0.203 0.297 0.169 0.248 0.187 0.274 0.177 0.259 192 0.250 0.310 0.230 0.291 0.372 0.412 0.234 0.292 0.264 0.323 0.248 0.306 336 0.331 0.358 0.292 0.331 0.484 0.484 0.294 0.339 0.354 0.377 0.313 0.346 720 0.443 0.421 0.381 0.383 0.729 0.607 0.390 0.388 0.462 0.446 0.399 0.397 Weather 96 0.170 0.212 0.168 0.203 0.168 0.215 0.150 0.188 0.186 0.243 0.177 0.218 192 0.210 0.248 0.214 0.245 0.224 0.269 0.202 0.238 0.222 0.275 0.225 0.259 336 0.226 0.273 0.236 0.273 0.277 0.309 0.260 0.282 0.269 0.313 0.277 0.297 720 0.291 0.323 0.309 0.321 0.347 0.355 0.343 0.353 0.331 0.360 0.350 0.345

Table 12: Comparison of forecasting errors between the baselines and the models with RevIN instance normalization, with unified lookback window L = 96. The model name with * denotes the model trained with RevIN instance normalization.

K Full Results

Table 13 demonstrates the multivariate LTSF results averaged over three random seeds, following the unified long-term forecasting protocol proposed by TimesNet (Wu et al. 2023), with unified lookback length L = 36 for the ILI dataset and L = 96 for all remaining datasets.
Baseline results for DLinear, TimesNet, ETSformer, FEDformer, and Autoformer are collected from the TimesNet official repository (Wu et al. 2023), which is fairly structured according to each model's official code settings. Results for CARD, TimeMixer, iTransformer, and MICN, which were implemented under the same experimental settings, are collected from their respective official papers. The official manuscripts report the results of RLinear and PatchTST with a lookback length L = 336. Therefore, we reproduce these experiments using unified settings based on their publicly available official implementations.
Table 14 demonstrates the multivariate LTSF results averaged over three random seeds under hyperparameter searching, with the best lookback length in L = {36, 104, 148} for the ILI dataset, and L = {96, 192, 336, 512, 720} for all remaining datasets.
Baseline results for CARD, TimeMixer, PatchTST, MICN, and DLinear are collected from their official papers, as they either include a hyperparameter search for their models or provide the best results according to our hyperparameter search. For iTransformer and RLinear, we reproduce the experiments with hyperparameter search based on their publicly available official implementations. Additionally, we reproduced experiments for models whose papers omit experiments on specific datasets (Exchange-rate, Solar-energy, and ILI).

L Visualizations

Figures 11, 12, 13, 14, 15, 16 provide qualitative visualizations comparing the proposed xPatch model with recent state-of-the-art models, including CARD, iTransformer, and PatchTST. Figures 17, 18, 19, 20, 21, 22 illustrate qualitative visualizations of the separate predictions from the CNN-only and MLP-only streams, compared to the original dual-stream forecast.
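The EMA decomposition credited earlier for xPatch's robustness without RevIN can be illustrated with a short sketch. The smoothing factor and the initialization below are illustrative choices, not necessarily those of the official xPatch implementation.

```python
def ema_decompose(series, alpha=0.3):
    """Split a series into an EMA trend and a seasonal remainder.

    alpha controls smoothing: larger alpha tracks the series more closely.
    """
    trend, s = [], series[0]  # initialize the EMA at the first observation
    for v in series:
        s = alpha * v + (1 - alpha) * s
        trend.append(s)
    # The seasonal component is simply what the trend does not explain.
    seasonal = [v - t for v, t in zip(series, trend)]
    return trend, seasonal

trend, seasonal = ema_decompose([1.0, 2.0, 3.0, 2.0, 1.0])
```

By construction, trend + seasonal reconstructs the original series exactly, so the two streams (e.g. the CNN and MLP streams in a dual-stream design) can model the components separately and be recombined.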
ModelsxPatch CARD TimeMixer iTransformer RLinear PatchTST MICN DLinear TimesNet ETSformer (ours) (2024) (2024) (2024) (2023) (2023) (2023) (2023) (2023) (2022) Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEETTh196 0.376 0.386 0.383 0.391 0.375 0.400 0.386 0.405 0.380 0.392 0.393 0.407 0.421 0.431 0.386 0.400 0.384 0.402 0.494 0.479 192 0.417 0.407 0.435 0.420 0.429 0.421 0.441 0.436 0.433 0.420 0.445 0.434 0.474 0.487 0.437 0.432 0.436 0.429 0.538 0.504 336 0.449 0.425 0.479 0.442 0.484 0.458 0.487 0.458 0.470 0.437 0.483 0.451 0.569 0.551 0.481 0.459 0.491 0.469 0.574 0.521 720 0.470 0.456 0.471 0.461 0.498 0.482 0.503 0.491 0.467 0.460 0.479 0.470 0.770 0.672 0.519 0.516 0.521 0.500 0.562 0.535 Avg 0.428 0.419 0.442 0.429 0.447 0.440 0.454 0.448 0.438 0.427 0.450 0.441 0.559 0.535 0.456 0.452 0.458 0.450 0.542 0.510ETTh296 0.233 0.300 0.281 0.330 0.289 0.341 0.297 0.349 0.278 0.333 0.293 0.342 0.299 0.364 0.333 0.387 0.340 0.374 0.340 0.391 192 0.291 0.338 0.363 0.381 0.372 0.392 0.380 0.400 0.360 0.387 0.377 0.393 0.441 0.454 0.477 0.476 0.402 0.414 0.430 0.439 336 0.344 0.377 0.411 0.418 0.386 0.414 0.428 0.432 0.379 0.410 0.380 0.408 0.654 0.567 0.594 0.541 0.452 0.452 0.485 0.479 720 0.407 0.427 0.416 0.431 0.412 0.434 0.427 0.445 0.430 0.445 0.411 0.433 0.956 0.716 0.831 0.657 0.462 0.468 0.500 0.497 Avg 0.319 0.361 0.368 0.390 0.365 0.395 0.383 0.407 0.362 0.394 0.365 0.394 0.588 0.525 0.559 0.515 0.414 0.427 0.439 0.452ETTm196 0.311 0.346 0.316 0.347 0.320 0.357 0.334 0.368 0.351 0.369 0.320 0.359 0.316 0.362 0.345 0.372 0.338 0.375 0.375 0.398 192 0.348 0.368 0.363 0.370 0.361 0.381 0.377 0.391 0.388 0.386 0.365 0.381 0.363 0.390 0.380 0.389 0.374 0.387 0.408 0.410 336 0.388 0.391 0.392 0.390 0.390 0.404 0.426 0.420 0.420 0.407 0.391 0.401 0.408 0.426 0.413 0.413 0.410 0.411 0.435 0.428 720 0.461 0.430 0.458 0.425 0.454 0.441 0.491 0.459 0.478 0.440 0.455 0.436 0.481 0.476 0.474 0.453 0.478 0.450 0.499 0.462 Avg 
0.377 0.384 0.382 0.383 0.381 0.396 0.407 0.410 0.409 0.401 0.383 0.394 0.392 0.414 0.403 0.407 0.400 0.406 0.429 0.425ETTm296 0.164 0.248 0.169 0.248 0.175 0.258 0.180 0.264 0.182 0.265 0.177 0.259 0.179 0.275 0.193 0.292 0.187 0.267 0.189 0.280 192 0.230 0.291 0.234 0.292 0.237 0.299 0.250 0.309 0.247 0.305 0.248 0.306 0.307 0.376 0.284 0.362 0.249 0.309 0.253 0.319 336 0.292 0.331 0.294 0.339 0.298 0.340 0.311 0.348 0.309 0.343 0.313 0.346 0.325 0.388 0.369 0.427 0.321 0.351 0.314 0.357 720 0.381 0.383 0.390 0.388 0.391 0.396 0.412 0.407 0.405 0.397 0.399 0.397 0.502 0.490 0.554 0.522 0.408 0.403 0.414 0.413 Avg 0.267 0.313 0.272 0.317 0.275 0.323 0.288 0.332 0.286 0.328 0.284 0.327 0.328 0.382 0.350 0.401 0.291 0.333 0.293 0.342Weather96 0.168 0.203 0.150 0.188 0.163 0.209 0.174 0.214 0.194 0.234 0.177 0.218 0.161 0.229 0.196 0.255 0.172 0.220 0.197 0.281 192 0.214 0.245 0.202 0.238 0.208 0.250 0.221 0.254 0.238 0.269 0.225 0.259 0.220 0.281 0.237 0.296 0.219 0.261 0.237 0.312 336 0.236 0.273 0.260 0.282 0.251 0.287 0.278 0.296 0.287 0.304 0.277 0.297 0.278 0.331 0.283 0.335 0.280 0.306 0.298 0.353 720 0.309 0.321 0.343 0.353 0.339 0.341 0.358 0.349 0.355 0.349 0.350 0.345 0.311 0.356 0.345 0.381 0.365 0.359 0.352 0.388 Avg 0.232 0.261 0.239 0.265 0.240 0.272 0.258 0.278 0.269 0.289 0.257 0.280 0.243 0.299 0.265 0.317 0.259 0.287 0.271 0.334Traffic96 0.471 0.275 0.419 0.269 0.462 0.285 0.395 0.268 0.646 0.384 0.446 0.283 0.519 0.309 0.650 0.396 0.593 0.321 0.607 0.392 192 0.478 0.272 0.443 0.276 0.473 0.296 0.417 0.276 0.598 0.360 0.453 0.285 0.537 0.315 0.598 0.370 0.617 0.336 0.621 0.399 336 0.501 0.276 0.460 0.283 0.498 0.296 0.433 0.283 0.605 0.363 0.467 0.291 0.534 0.313 0.605 0.373 0.629 0.336 0.622 0.396 720 0.547 0.291 0.490 0.299 0.506 0.313 0.467 0.302 0.643 0.382 0.500 0.309 0.577 0.325 0.645 0.394 0.640 0.350 0.632 0.396 Avg 0.499 0.279 0.453 0.282 0.485 0.298 0.428 0.282 0.623 0.372 0.467 0.292 0.542 0.316 0.625 0.383 0.620 0.336 0.621 
0.396Electricity96 0.159 0.244 0.141 0.233 0.153 0.247 0.148 0.240 0.197 0.273 0.166 0.252 0.164 0.269 0.197 0.282 0.168 0.272 0.187 0.304 192 0.160 0.248 0.160 0.250 0.166 0.256 0.162 0.253 0.196 0.276 0.174 0.260 0.177 0.285 0.196 0.285 0.184 0.289 0.199 0.315 336 0.182 0.267 0.173 0.263 0.185 0.277 0.178 0.269 0.211 0.291 0.190 0.277 0.193 0.304 0.209 0.301 0.198 0.300 0.212 0.329 720 0.216 0.298 0.197 0.284 0.225 0.310 0.225 0.317 0.253 0.324 0.230 0.311 0.212 0.321 0.245 0.333 0.220 0.320 0.233 0.345 Avg 0.179 0.264 0.168 0.258 0.182 0.273 0.178 0.270 0.214 0.291 0.190 0.275 0.187 0.295 0.212 0.300 0.193 0.295 0.208 0.323Exchange96 0.082 0.199 0.084 0.202 0.087 0.206 0.086 0.206 0.082 0.200 0.080 0.196 0.102 0.235 0.088 0.218 0.107 0.234 0.085 0.204 192 0.177 0.298 0.174 0.295 0.193 0.310 0.177 0.299 0.179 0.300 0.171 0.293 0.172 0.316 0.176 0.315 0.226 0.344 0.182 0.303 336 0.349 0.425 0.342 0.421 0.345 0.425 0.331 0.417 0.346 0.423 0.317 0.406 0.272 0.407 0.313 0.427 0.367 0.448 0.348 0.428 720 0.891 0.711 0.841 0.689 1.008 0.747 0.847 0.691 0.913 0.717 0.887 0.703 0.714 0.658 0.839 0.695 0.964 0.746 1.025 0.774 Avg 0.375 0.408 0.360 0.402 0.408 0.422 0.360 0.403 0.380 0.410 0.364 0.400 0.315 0.404 0.354 0.414 0.416 0.443 0.410 0.427Solar96 0.201 0.215 0.194 0.209 0.189 0.259 0.203 0.237 0.322 0.339 0.225 0.270 0.257 0.325 0.287 0.374 0.250 0.292 0.258 0.371 192 0.235 0.234 0.234 0.235 0.222 0.283 0.233 0.261 0.360 0.358 0.253 0.288 0.278 0.354 0.317 0.395 0.296 0.318 0.608 0.606 336 0.258 0.249 0.256 0.253 0.231 0.292 0.248 0.273 0.397 0.369 0.270 0.298 0.298 0.375 0.350 0.413 0.319 0.330 0.758 0.705 720 0.260 0.247 0.262 0.257 0.223 0.285 0.249 0.275 0.396 0.361 0.269 0.299 0.299 0.379 0.355 0.411 0.338 0.337 0.789 0.779 Avg 0.239 0.236 0.237 0.239 0.216 0.280 0.233 0.262 0.369 0.357 0.254 0.289 0.283 0.358 0.327 0.398 0.301 0.319 0.603 0.615ILI24 1.378 0.685 1.665 0.803 1.897 0.840 2.915 1.146 2.517 1.002 1.691 0.816 2.684 1.112 2.398 1.040 2.317 0.934 
2.527 1.020 36 1.315 0.681 2.200 0.890 1.405 0.747 2.924 1.157 2.443 0.960 1.415 0.762 2.667 1.068 2.646 1.088 1.972 0.920 2.615 1.007 48 1.459 0.747 1.875 0.821 1.640 0.809 2.835 1.134 2.344 0.950 1.754 0.819 2.558 1.052 2.614 1.086 2.238 0.940 2.359 0.972 60 1.616 0.787 1.923 0.853 1.890 0.884 2.996 1.178 2.503 0.999 1.645 0.820 2.747 1.110 2.804 1.146 2.027 0.928 2.487 1.016 Avg 1.442 0.725 1.916 0.842 1.708 0.820 2.918 1.154 2.452 0.978 1.626 0.804 2.664 1.086 2.616 1.090 2.139 0.931 2.497 1.004 1stCount 26 34 7 11 7 0 5 1 1 0 2 4 3 1 0 0 0 0 0 0 Table 13: Full long-term forecasting results with unified lookback window L= 36 for the ILI dataset, and L= 96 for all other datasets. The best model is boldface and the second best is underlined . ModelsxPatch CARD TimeMixer iTransformer RLinear PatchTST MICN DLinear TimesNet ETSformer (ours) (2024) (2024) (2024) (2023) (2023) (2023) (2023) (2023) (2022) Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEETTh196 0.354 0.379 0.368 0.396 0.361 0.390 0.396 0.425 0.364 0.391 0.370 0.400 0.398 0.427 0.375 0.399 0.384 0.402 0.494 0.479 192 0.376 0.395 0.406 0.418 0.409 0.414 0.430 0.450 0.418 0.429 0.413 0.429 0.430 0.453 0.405 0.416 0.436 0.429 0.538 0.504 336 0.391 0.415 0.415 0.424 0.430 0.429 0.479 0.485 0.418 0.425 0.422 0.440 0.440 0.460 0.439 0.443 0.491 0.469 0.574 0.521 720 0.442 0.459 0.416 0.448 0.445 0.460 0.700 0.608 0.450 0.462 0.447 0.468 0.491 0.509 0.472 0.490 0.521 0.500 0.562 0.535 Avg 0.391 0.412 0.401 0.422 0.411 0.423 0.501 0.492 0.413 0.427 0.413 0.434 0.440 0.462 0.423 0.437 0.458 0.450 0.542 0.510ETTh296 0.226 0.297 0.262 0.327 0.271 0.330 0.311 0.363 0.255 0.327 0.274 0.337 0.299 0.364 0.289 0.353 0.340 0.374 0.340 0.391 192 0.275 0.330 0.322 0.369 0.317 0.402 0.391 0.413 0.317 0.371 0.341 0.382 0.422 0.441 0.383 0.418 0.402 0.414 0.430 0.439 336 0.312 0.360 0.326 0.378 0.332 0.396 0.415 0.437 0.324 0.385 0.329 0.384 0.447 0.474 0.448 0.465 0.452 0.452 0.485 0.479 
720 0.384 0.418 0.373 0.419 0.342 0.408 0.424 0.455 0.414 0.445 0.379 0.422 0.442 0.467 0.605 0.551 0.462 0.468 0.500 0.497 Avg 0.299 0.351 0.321 0.373 0.316 0.384 0.385 0.417 0.328 0.382 0.331 0.381 0.403 0.437 0.431 0.447 0.414 0.427 0.439 0.452ETTm196 0.275 0.330 0.288 0.332 0.291 0.340 0.313 0.366 0.310 0.350 0.293 0.346 0.316 0.364 0.299 0.343 0.338 0.375 0.375 0.398 192 0.315 0.355 0.332 0.357 0.327 0.365 0.349 0.388 0.337 0.366 0.333 0.370 0.363 0.390 0.335 0.365 0.374 0.387 0.408 0.410 336 0.355 0.376 0.364 0.376 0.360 0.381 0.381 0.411 0.369 0.384 0.369 0.392 0.408 0.426 0.369 0.386 0.410 0.411 0.435 0.428 720 0.419 0.411 0.414 0.407 0.415 0.417 0.448 0.449 0.419 0.411 0.416 0.420 0.459 0.464 0.425 0.421 0.478 0.450 0.499 0.462 Avg 0.341 0.368 0.350 0.368 0.348 0.376 0.373 0.404 0.359 0.378 0.353 0.382 0.387 0.411 0.357 0.379 0.400 0.406 0.429 0.425ETTm296 0.153 0.240 0.159 0.246 0.164 0.254 0.180 0.274 0.163 0.251 0.166 0.256 0.179 0.275 0.167 0.260 0.187 0.267 0.189 0.280 192 0.213 0.280 0.214 0.285 0.223 0.295 0.238 0.311 0.218 0.289 0.223 0.296 0.262 0.326 0.224 0.303 0.249 0.309 0.253 0.319 336 0.264 0.315 0.266 0.319 0.279 0.330 0.294 0.349 0.271 0.325 0.274 0.329 0.305 0.353 0.281 0.342 0.321 0.351 0.314 0.357 720 0.338 0.363 0.379 0.390 0.359 0.383 0.382 0.406 0.360 0.385 0.362 0.385 0.389 0.407 0.397 0.421 0.408 0.403 0.414 0.413 Avg 0.242 0.300 0.255 0.310 0.256 0.316 0.274 0.335 0.253 0.313 0.256 0.317 0.284 0.340 0.267 0.332 0.291 0.333 0.293 0.342Weather96 0.146 0.185 0.145 0.186 0.147 0.197 0.191 0.239 0.171 0.223 0.149 0.198 0.161 0.229 0.176 0.237 0.172 0.220 0.197 0.281 192 0.189 0.227 0.187 0.227 0.189 0.239 0.219 0.263 0.215 0.259 0.194 0.241 0.220 0.281 0.220 0.282 0.219 0.261 0.237 0.312 336 0.218 0.260 0.238 0.258 0.241 0.280 0.284 0.311 0.261 0.293 0.245 0.282 0.278 0.331 0.265 0.319 0.280 0.306 0.298 0.353 720 0.291 0.315 0.308 0.321 0.310 0.330 0.389 0.375 0.322 0.338 0.314 0.334 0.311 0.356 0.323 0.362 0.365 0.359 0.352 0.388 Avg 
0.211 0.247 0.220 0.248 0.222 0.262 0.271 0.297 0.242 0.278 0.226 0.264 0.243 0.299 0.246 0.300 0.259 0.287 0.271 0.334Traffic96 0.364 0.233 0.341 0.229 0.360 0.249 0.348 0.255 0.395 0.272 0.360 0.249 0.519 0.309 0.410 0.282 0.593 0.321 0.607 0.392 192 0.377 0.241 0.367 0.243 0.375 0.250 0.366 0.266 0.406 0.276 0.379 0.256 0.537 0.315 0.423 0.287 0.617 0.336 0.621 0.399 336 0.388 0.243 0.388 0.254 0.385 0.270 0.383 0.273 0.415 0.281 0.392 0.264 0.534 0.313 0.436 0.296 0.629 0.336 0.622 0.396 720 0.437 0.273 0.427 0.276 0.430 0.281 0.413 0.287 0.453 0.302 0.432 0.286 0.577 0.325 0.466 0.315 0.640 0.350 0.632 0.396 Avg 0.392 0.248 0.381 0.251 0.388 0.263 0.378 0.270 0.417 0.283 0.391 0.264 0.542 0.316 0.434 0.295 0.620 0.336 0.621 0.396Electricity96 0.126 0.217 0.129 0.223 0.129 0.224 0.131 0.227 0.136 0.231 0.129 0.222 0.164 0.269 0.140 0.237 0.168 0.272 0.187 0.304 192 0.140 0.232 0.154 0.245 0.140 0.220 0.152 0.248 0.149 0.243 0.147 0.240 0.177 0.285 0.153 0.249 0.184 0.289 0.199 0.315 336 0.156 0.249 0.161 0.257 0.161 0.255 0.170 0.267 0.165 0.259 0.163 0.259 0.193 0.304 0.169 0.267 0.198 0.300 0.212 0.329 720 0.190 0.281 0.185 0.278 0.194 0.287 0.190 0.284 0.205 0.293 0.197 0.290 0.212 0.321 0.203 0.301 0.220 0.320 0.233 0.345 Avg 0.153 0.245 0.157 0.251 0.156 0.247 0.161 0.257 0.164 0.257 0.159 0.253 0.187 0.295 0.166 0.264 0.193 0.295 0.208 0.323Exchange96 0.081 0.197 0.084 0.202 0.096 0.219 0.108 0.239 0.090 0.211 0.086 0.208 0.102 0.235 0.081 0.203 0.107 0.234 0.085 0.204 192 0.178 0.298 0.174 0.295 0.205 0.324 0.253 0.376 0.190 0.309 0.195 0.316 0.172 0.316 0.157 0.293 0.226 0.344 0.182 0.303 336 0.339 0.418 0.342 0.421 0.378 0.456 0.390 0.471 0.368 0.434 0.342 0.425 0.272 0.407 0.305 0.414 0.367 0.448 0.348 0.428 720 0.867 0.701 0.841 0.689 1.205 0.810 1.080 0.789 1.045 0.755 0.998 0.756 0.714 0.658 0.643 0.601 0.964 0.746 1.025 0.774 Avg 0.366 0.404 0.360 0.402 0.471 0.452 0.458 0.469 0.423 0.427 0.405 0.426 0.315 0.404 0.297 0.378 0.416 0.443 0.410 
0.427
Solar 96 0.173 0.197 0.179 0.212 0.167 0.220 0.179 0.248 0.211 0.254 0.224 0.278 0.188 0.252 0.289 0.377 0.219 0.314 0.258 0.371
192 0.193 0.216 0.200 0.227 0.187 0.249 0.197 0.266 0.230 0.264 0.253 0.298 0.215 0.280 0.319 0.397 0.231 0.322 0.608 0.606
336 0.196 0.224 0.203 0.226 0.200 0.258 0.202 0.263 0.246 0.272 0.273 0.306 0.222 0.267 0.352 0.415 0.246 0.337 0.758 0.705
720 0.212 0.219 0.209 0.236 0.215 0.250 0.209 0.271 0.252 0.274 0.272 0.308 0.226 0.264 0.356 0.412 0.280 0.363 0.789 0.779
Avg 0.194 0.214 0.198 0.225 0.192 0.244 0.197 0.262 0.235 0.266 0.256 0.298 0.213 0.266 0.329 0.400 0.244 0.334 0.603 0.615
ILI 24 1.188 0.638 1.665 0.803 1.693 0.872 2.743 1.150 1.734 0.864 1.319 0.754 2.684 1.112 2.215 1.081 2.317 0.934 2.527 1.020
36 1.226 0.653 2.200 0.890 2.002 0.942 2.887 1.183 1.771 0.849 1.579 0.870 2.507 1.013 1.963 0.963 1.972 0.920 2.615 1.007
48 1.254 0.686 1.875 0.821 2.086 0.937 2.998 1.206 1.777 0.864 1.553 0.815 2.423 1.012 2.130 1.024 2.238 0.940 2.359 0.972
60 1.455 0.773 1.923 0.853 2.102 0.946 3.160 1.234 1.929 0.919 1.470 0.788 2.653 1.085 2.368 1.096 2.027 0.928 2.487 1.016
Avg 1.281 0.688 1.916 0.842 1.971 0.924 2.947 1.193 1.803 0.874 1.480 0.807 2.567 1.056 2.169 1.041 2.139 0.931 2.497 1.004
1stCount 31 39 7 8 5 2 5 0 0 0 0 0 1 1 4 3 0 0 0 0

Table 14: Full long-term forecasting results under hyperparameter searching. The best model is boldface and the second best is underlined.

Figure 11: Sample prediction graph of the next T = 96 points with lookback window L = 336 from the ETTh1 dataset. Panels: (a) xPatch, (b) CARD, (c) iTransformer, (d) PatchTST. [plot data omitted]
Figure 12: Sample prediction graph of the next T = 192 points with lookback window L = 96 from the ETTh2 dataset. Panels: (a) xPatch, (b) CARD, (c) iTransformer, (d) PatchTST. [plot data omitted]
Figure 13: Sample prediction graph of the next T = 336 points with lookback window L = 96 from the ETTh2 dataset. Panels: (a) xPatch, (b) CARD, (c) iTransformer, (d) PatchTST. [plot data omitted]
Figure 14: Sample prediction graph of the next T = 720 points with lookback window L = 96 from the ETTm2 dataset. Panels: (a) xPatch, (b) CARD, (c) iTransformer, (d) PatchTST. [plot data omitted]
Figure 15: Sample prediction graph of the next T = 96 points with lookback window L = 96 from the Electricity dataset. Panels: (a) xPatch, (b) CARD, (c) iTransformer, (d) PatchTST. [plot data omitted]
Figure 16: Sample prediction graph of the next T = 60 points with lookback window L = 36 from the Illness dataset. Panels: (a) xPatch, (b) CARD, (c) iTransformer, (d) PatchTST. [plot data omitted]
Figure 17: Sample prediction graph of the next T = 192 points with lookback window L = 96 from the ETTh2 dataset. Panels: (a) CNN-stream, (b) MLP-stream, (c) Dual-stream. [plot data omitted]
Figure 18: Sample prediction graph of the next T = 96 points with lookback window L = 96 from the Electricity dataset. Panels: (a) CNN-stream, (b) MLP-stream, (c) Dual-stream. [plot data omitted]
Figure 19: Sample prediction graph of the next T = 48 points with lookback window L = 36 from the Illness dataset. Panels: (a) CNN-stream, (b) MLP-stream, (c) Dual-stream. [plot data omitted]
Figure 20: Sample prediction graph of the next T = 96 points with lookback window L = 96 from the Weather dataset. Panels: (a) CNN-stream, (b) MLP-stream, (c) Dual-stream. [plot data omitted]
Figure 21: Sample prediction graph of the next T = 192 points with lookback window L = 96 from the Weather dataset. Panels: (a) CNN-stream, (b) MLP-stream, (c) Dual-stream. [plot data omitted]
Figure 22: Sample prediction graph of the next T = 336 points with lookback window L = 96 from the Weather dataset. Panels: (a) CNN-stream, (b) MLP-stream, (c) Dual-stream. [plot data omitted] | 5 | 1 | The xPatch model employs a dual-stream architecture leveraging MLP and CNN components for time series forecasting. Given the nature of time series data, an estimated dataset size of around 100,000 to 1,000,000 samples (typical for LTSF tasks) is reasonable. The model likely has a moderate parameter count, roughly comparable to CNN architectures used in other tasks. Given recent advancements in GPU capabilities, training on a single modern GPU (such as an NVIDIA RTX 3080 or A100) would likely complete within 5 hours across several epochs (often 50-100) with batch sizes between 32 and 128 samples. With efficient use of optimizers and learning rate schedules, the model can be trained in under 8 hours on a single GPU. | yes | Yes | Time Series | xPatch: Dual-Stream Time Series Forecasting with Exponential Seasonal-Trend Decomposition | 2024-12-23T00:00:00.000Z | [https://github.com/stitsyuk/xpatch] | 1 | https://drive.usercontent.google.com/download?id=1NF7VEefXCmXuWNbnNe858WvQAkJ_7wuP&export=download&authuser=0 | 30 min | https://colab.research.google.com/drive/1JaT0PQUcJJLSUpemXlylsIULjRvMCW1G?usp=sharing | Yes, successfully run | It runs successfully and works fine |
CNN/Daily Mail | Claude Instant + SigExt | [] | Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | 2024-10-03T00:00:00 | https://arxiv.org/abs/2410.02741v2 | [
"https://github.com/amazon-science/SigExt"
] | {'ROUGE-1': '42', 'ROUGE-L': '26.6'} | [
"ROUGE-1",
"ROUGE-2",
"ROUGE-L"
] | Given the following paper and codebase:
Paper: Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization
Codebase: https://github.com/amazon-science/SigExt
Improve the Claude Instant + SigExt model on the CNN/Daily Mail dataset. The result
should improve on the following metrics: {'ROUGE-1': '42', 'ROUGE-L': '26.6'}. You must use only the codebase provided.
| Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization
Lei Xu (Amazon AWS AI Labs), Mohammed Asad Karim (Carnegie Mellon University)†, Saket Dingliwal (Amazon AWS AI Labs), Aparna Elangovan (Amazon AWS AI Labs)
{leixx, skdin, aeg}@amazon.com, mkarim2@cs.cmu.edu

Abstract
Large language models (LLMs) can generate fluent summaries across domains using prompting techniques, reducing the need to train models for summarization applications. However, crafting effective prompts that guide LLMs to generate summaries with the appropriate level of detail and writing style remains a challenge. In this paper, we explore the use of salient information extracted from the source document to enhance summarization prompts. We show that adding keyphrases in prompts can improve ROUGE F1 and recall, making the generated summaries more similar to the reference and more complete. The number of keyphrases can control the precision-recall trade-off. Furthermore, our analysis reveals that incorporating phrase-level salient information is superior to word- or sentence-level. However, the impact on hallucination is not universally positive across LLMs. To conduct this analysis, we introduce Keyphrase Signal Extractor (SigExt), a lightweight model that can be finetuned to extract salient keyphrases. By using SigExt, we achieve consistent ROUGE improvements across datasets and open-weight and proprietary LLMs without any LLM customization. Our findings provide insights into leveraging salient information in building prompt-based summarization systems. We release our code at https://github.com/amazon-science/SigExt

1 Introduction
Abstractive summarization aims to generate concise summaries that capture the most salient information from lengthy source documents. Prior work has shown that emphasizing keywords from source documents can enhance summarization performance on supervised finetuned (SFT) models (Gu et al., 2016).
However, existing approaches (Nallapati et al., 2016; See et al., 2017; Liu et al., 2021) require extensive modifications to the architecture and loss functions, hindering widespread adoption, especially for large language models (LLMs) with billions of parameters. Recent work (Li et al., 2023a) trains a separate network using reinforcement learning (RL) to generate keyphrases for LLM prompts, but training an RL model is non-trivial due to convergence and stability issues (Wang et al., 2024). Emphasizing salient information in the prompt can help zero-shot LLMs generate more complete summaries, and steer LLMs to generate summaries that align with the desired use case. However, there is also a lack of analysis on how emphasizing salient information in prompts affects LLM behavior.
†Work done during an internship at AWS AI Labs.
We first address the challenge of applying salient information to LLMs. We obtain keyphrases using a stand-alone keyphrase signal extractor called SigExt, and prompt the LLMs to consider these keyphrases when generating summaries. Unlike prior work relying on complex keyphrase generators optimized for specific LLMs, SigExt is LLM-agnostic, allowing salient information to be leveraged with large API-based models that cannot be finetuned. We demonstrate consistent improvement in ROUGE scores on 4 representative summarization datasets and 3 recent LLMs – Claude, Mistral (Jiang et al., 2023), and Falcon (Almazrouei et al., 2023) – highlighting the wide adaptability of our approach. Secondly, we conduct comprehensive experiments using SigExt to gain insights into how keyphrases in prompts affect different aspects of summary quality. We show that adding keyphrases improves ROUGE F1 and recall, making the generated summaries more similar to the reference and more complete. Adjusting the number of keyphrases influences the trade-off between precision and recall.
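As a concrete illustration of this LLM-agnostic design, injecting extracted keyphrases into a summarization prompt is a plain string operation that any LLM can consume. The template below is a simplified stand-in for illustration, not the exact prompt used in the paper.

```python
def build_prompt(article: str, keyphrases: list) -> str:
    # Wrap the source article with a summarization instruction and
    # surface the extracted keyphrases as a comma-separated hint.
    return (
        f"Here is a news article: {article}\n"
        f"Here are a few keyphrases from the article: {', '.join(keyphrases)}\n"
        "Please write a summary for the article.\nSummary:"
    )

prompt = build_prompt(
    "The city council approved the new budget after a lengthy debate...",
    ["city council", "new budget"],
)
```

Because the keyphrase extractor only edits the prompt string, the same extractor can serve open-weight and API-only models without any model customization.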
Including additional keyphrases in the prompt tends to produce more detailed summaries, enhancing recall. Our findings indicate that using phrase-level salient information is more effective than word- or sentence-level approaches. However, for certain large language models like Mistral, adding keyphrases may lead to more hallucinations.

arXiv:2410.02741v2 [cs.CL] 2 Dec 2024

Figure 1: SigExt – a finetuned Longformer to extract keyphrases from an article. We construct labels by thresholding the character-level fuzzy matching score between phrases in the article and the summary. (The figure shows an example article about an NBA All-Star Game, its phrases scored by the best character-level fuzzy match against the summary, and the resulting 0/1 labels used to train the Longformer.)

Our analysis offers guidance for applying similar strategies in real-world summarization applications. While incorporating salient information is an effective method for enhancing and controlling the completeness of summaries, and using phrase-level granularity proves more effective, the risk of introducing hallucinations must be carefully considered. This risk depends on the specific LLM being used, the method for gathering salient information, and the criticality of the application.
Our key contributions are as follows:
1) We present SigExt, a simple yet effective keyphrase extraction model using a finetuned Longformer (Beltagy et al., 2020). Once trained, SigExt is LLM-agnostic, enabling a performance boost for different LLMs by adding extracted keyphrases in prompts without requiring LLM finetuning.
2) We provide a comprehensive analysis on the impact of adding salient information in prompts for summarization, including insights on summary length, reference alignment, completeness, and hallucination.
3) We demonstrate that SigExt has cross-domain generalization capability through a general-purpose version (GP-SigExt) pretrained on 7 datasets.

2 Method
In this section, we introduce SigExt – a keyphrase extractor designed to boost the summarization quality of prompt-based LLMs. Figure 1 gives an overview. SigExt tokenizes the source document into phrases (phrase tokenization is detailed in Section 2.1) and simultaneously predicts whether each phrase is important. To train the model, we create target labels by identifying phrases that appear in both the source document and the summary, then optimizing the cross-entropy loss. Compared to a previous keyphrase generator that uses RL (Li et al., 2023a), SigExt allows easier control of the number of keyphrases, faster training and inference, and better consistency across domains. We directly incorporate keyphrases in the prompt, making the approach generalizable across LLMs. To handle longer input lengths while maintaining efficiency, we build SigExt on Longformer, so that training and inference can be done on a single GPU.

2.1 Phrase tokenization
Let $x = x_1, \dots, x_n$ be a source document of $n$ tokens, and $y = y_1, \dots, y_m$ be the target summary of $m$ tokens. The document is segmented into non-overlapping phrases by removing stopwords and punctuation. After this, we get a sequence of $T$ non-overlapping phrases, denoted as $\mathrm{Phrase}(x) = [p_i = x_{l_i} \dots x_{r_i}]_{i=1 \dots T}$. Similarly, we get $T'$ phrases from the summary, denoted as $\mathrm{Phrase}(y) = [q_j = y_{l'_j} \dots y_{r'_j}]_{j=1 \dots T'}$.

2.2 Labels and learning objective
We label each input phrase by computing the fuzzy matching score
$$\mathrm{fuzz}(a, b) = \frac{|\mathrm{longest\_common\_sequence}(a, b)|}{\max(|a|, |b|)}$$
against all phrases in the summary. If the maximum score exceeds a certain threshold $\epsilon$, the phrase is considered a keyphrase; formally,
$$\mathrm{label}(p_i) = \begin{cases} 1 & \text{if } \max_{j \in 1 \dots T'} \mathrm{fuzz}(p_i, q_j) \geq \epsilon, \\ 0 & \text{otherwise.} \end{cases}$$
We train a classification model to predict the label. Specifically, we use a Longformer and add a classification head on top of each token. We compute the
We compute the 2 cross entropy loss on tokens that belong to phrases, while ignoring predictions on punctuation and stop- word tokens. We apply class balancing weight λ when the label of the token is 0. 2.3 Application of SigExt on summarization We first finetune SigExt on the summarization dataset to get a task-specific keyphrase extrac- tor. During inference, we use SigExt to extract keyphrases, then wrap the source article with a summarization prompt, and include keyphrases in the prompt. Here is an example prompt: Here is an news article : <text > \ nHere are a few keyphrases from the article : < key_phrases > \ nPlease write an summary for the article . \ nSummary : To select keyphrases, we first score each phrase by calculating the average logits of its tokens. We then select the top- Kdeduplicated phrases accord- ing to their logits scores, removing any duplicates that exceed a fuzzy matching threshold ϵand keep- ing the longer phrase in those cases. We replace <key_phrases> with comma separated keyphrases. This prompt then serves as the input to the LLM which produces the final summary. 2.4 Cross domain generalization In order to generalize the keyphrase extractor model to new domains without fine-tuning for the target domain, we train a general purpose keyphrase extractor using a combination of 7 datasets. The datasets are XSUM (Narayan et al., 2018), Multi-News (Fabbri et al., 2019), Giga- word (Nallapati et al., 2017), Big-Patent (Sharma et al., 2019), AESLC (Zhang and Tetreault, 2019), BillSum (Kornilova and Eidelman, 2019), and Wik- iHow (Koupaee and Wang, 2018). We call this general-purpose keyphrase signal extractor model GP-SigExt. 3 Experiments Datasets: We select 4 representative datasets – SAMSum (Gliwa et al., 2019), CNN/Daily- Mail (Nallapati et al., 2016), ArXiv (Cohan et al., 2018), and MeetingBank (Hu et al., 2023) – to eval- uate our method. 
These datasets cover short and long text, as well as regular document and conver- sation summarization. Dataset details are shown in Table 11 in Appendix. We truncate input text to 4,000 tokens to fit the context window of the Long- former model. We follow the convention to eval-uate on 500 randomly sampled examples (Zhang et al., 2020). We report results averaged on 3 runs. LLMs and Prompts: We evaluate SigExt on Claude-Instant, Mistral-7B-Instruct, and Falcon- 40B-Instruct LLMs. We do not use Falcon on ArXiv and MeetingBank datasets due to its lim- ited context window. We manually optimized the prompts for each model and task to achieve com- petitive zero-shot performance. All prompts are listed in Appendix A. SigExt & GP-SigExt Parameters: We use Longformer-large (433M) for the keyphrase extrac- tor. We set the fuzzy matching threshold ϵ= 70% , and the class balancing weight λ= 0.1. For SigExt, we sample 1000 examples from training set, we train SigExt starting with original Longformer- large checkpoint. For GP-SigExt, we sample 10000 examples from each of the 7 dataset mentioned in Sec. 2.4. We train SigExt and GP-SigExt for 10 epochs, and use validation set to pick the best checkpoint based on recall@20 (Metric defined in Sec. 3.7). During prompting, we try K= 10 ,15,20 keyphrases for the CNN, SAMSum, and Meeting- Bank datasets, and K= 30,35,40keyphrases for the ArXiv dataset. We pick the best number of keyphrases based on ROUGE scores on the valida- tion set. We also conduct an ablation study on the effect of different numbers of keyphrases. Baseline: We compare our methods with naive zero-shot prompting. We adapt a 2-pass extract- then-abstract method (Zhang et al., 2023) to the three LLMs and use it as a baseline. This method uses the LLM to extract sentences from the source document in the first pass, then uses the second pass to revise the extracted sentences into an abstrac- tive summary. 
We also compare with Directional Stimulus Prompting (Li et al., 2023b), which utilizes reinforcement learning to select good keywords.

Evaluation Metrics: We compute ROUGE-1/-L F1 scores (abbreviated as R1-f, RL-f) to evaluate summary quality. We also report ROUGE-1 recall (R1-r) to assess completeness. We use AlignScore (Zha et al., 2023) to evaluate the faithfulness of the summary.

3.1 Main Results

Table 1 shows the ROUGE scores on all 4 datasets. The F1 scores are improved by using GP-SigExt without any fine-tuning on new datasets. By fine-tuning only the phrase extractor, SigExt further improves the scores, showing that a supervised keyphrase extractor can make the LLM generate summaries more similar to the reference. On average, compared to the already strong zero-shot Claude Instant baseline, R1-f improves by 1.6% with GP-SigExt and 4.1% with SigExt. Similar improvements are also observed on the Mistral and Falcon models. Beyond F1 scores, adding keyphrases extracted by both SigExt and GP-SigExt to the prompts significantly increases the R1-r score, showing that adding salient information can improve the completeness of the summary.

                   SAMSum             CNN/DailyMail      ArXiv              MeetingBank        Avg.
Method             R1-f  RL-f  R1-r   R1-f  RL-f  R1-r   R1-f  RL-f  R1-r   R1-f  RL-f  R1-r   ΔR1-f
Claude-Ins.        40.0  30.3  52.8   38.1  23.9  41.9   44.4  23.1  53.2   32.2  21.8  43.4
 +2-stage          40.3  31.0  46.9   39.2  24.6  48.3   44.0  22.9  50.4   30.8  20.7  43.8   -0.1
 +GP-SigExt        40.0  30.0  57.3   40.2  24.9  47.5   44.7  23.2  53.5   36.3  25.7  53.1    1.6
 +SigExt           41.6  30.9  59.5   42.0  26.6  48.6   45.2  23.5  53.7   42.3  31.9  60.5    4.1
Mistral-7B         40.5  31.7  48.2   38.9  24.8  42.6   43.1  24.6  41.6   34.4  25.2  50.3
 +2-stage          38.7  30.6  45.4   38.0  24.4  48.6   39.5  22.0  41.9   32.0  23.5  52.0   -2.2
 +GP-SigExt        41.9  32.2  50.7   39.5  25.2  45.3   42.8  23.8  44.7   34.1  24.7  54.8    0.4
 +SigExt           44.1  33.9  54.5   40.9  26.0  47.9   43.6  24.2  45.2   37.0  27.2  58.7    2.2
Falcon-40B         37.1  28.7  46.3   25.7  16.4  33.8   -     -     -      -     -     -
 +2-stage          36.1  28.1  54.1   34.2  22.1  53.2   -     -     -      -     -     -       3.8
 +GP-SigExt        38.5  29.4  54.1   31.9  20.4  42.3   -     -     -      -     -     -       3.8
 +SigExt           39.9  30.4  56.1   33.5  21.3  43.2   -     -     -      -     -     -       5.3
0-shot SOTA        38.8  30.6  -      36.0  22.3  -      34.6  18.3  -      36.4  26.8  -

Table 1: Performance of SigExt & GP-SigExt on summarization using Claude Instant, Mistral-7B-Instruct, and Falcon-40B-Instruct. SigExt is trained with 1000 examples, while GP-SigExt is not fine-tuned on the dataset. We compare our methods with zero-shot prompting and 2-stage extract-then-abstract baselines. We show ROUGE-1 F-measure (R1-f), ROUGE-L F-measure (RL-f), and ROUGE-1 recall (R1-r). The LLMs are not fine-tuned. We directly copy zero-shot SOTA from Laskar et al. (2023) for SAMSum and CNN, Xiao et al. (2022) for ArXiv, and Hu et al. (2023) for MeetingBank.

Our method achieves a smaller gain on the ArXiv dataset compared to other datasets. We hypothesize that this is because paper abstracts have a standard format, and the keyphrases they should contain are thus more predictable. As a result, the zero-shot LLM can already identify and include these keyphrases in the output. For other datasets, where the summary is more subjective, our method helps the LLM incorporate proper information in the summary to better align with the reference.

Although the length of the summary increases slightly with the introduction of keyphrases, we do not achieve these improvements by excessively increasing the length of the summary. On average, the length of Claude Instant summaries increases by 4.7 words after adding keyphrases, whereas it increases by 13.6 words for Mistral and 12.3 words for Falcon.

We also compare the performance of SigExt with the recent Directional Stimulus Prompting baseline on ChatGPT (gpt-3.5-turbo) in Table 2. We show that SigExt can also boost ChatGPT zero-shot performance and outperform the baseline.

Method                 #examples   R1-f   RL-f
Vanilla                0           38.5   25.5
Directional Stimulus   4000        40.2   26.8
SigExt                 4000        42.2   27.0

Table 2: Comparing SigExt with baselines using ChatGPT and the CNN dataset.
3.2 Human Qualitative Check

To verify the quality of the summaries, we follow Liu et al. (2023) and conduct a human evaluation based on their annotated Atomic Content Units (ACUs) for several public datasets. Each ACU represents a fact that should appear in the summary. We select 50 documents each from the CNN and SAMSum datasets and ask human annotators to verify whether a given ACU appears in the summary. We report both the raw ACU coverage and the length-normalized ACU coverage, as proposed by Liu et al. (2023). Table 3 shows that SigExt consistently outperforms the vanilla LLM in terms of ACU coverage.

[Figure 2 omitted: line plots of R1-f (vanilla baseline), R1-f, R1-r, and R1-p against the number of keyphrases, for Claude Instant and Mistral-7B-Instruct on CNN/DailyMail, SAMSum, ArXiv, and MeetingBank.]
Figure 2: Effect of using different numbers of keyphrases on the precision-recall trade-off.

           Raw ACU             Normalized ACU
         Claude  +SigExt      Claude  +SigExt
CNN      43.8%   52.4%        40.7%   47.3%
SAMSum   53.6%   63.3%        38.4%   40.7%

Table 3: ACU coverage human evaluation on CNN and SAMSum using Claude Instant generated summaries.

3.3 Number of Keyphrases

We try different numbers of keyphrases in the prompt for each dataset and show the ROUGE-1 precision/recall/F1 curves in Figure 2. The F1 scores of our model are stable when changing the number of keyphrases within a fairly wide range, showing that introducing keyphrases consistently improves summary quality. As we increase the number of keyphrases, there is a clear trend of increasing recall and decreasing precision for the Mistral model. This is less evident for the Claude model. Since we add an explicit length constraint in the prompt (e.g., "write a summary in 3 sentences"), the Claude model appears to follow these instructions better than the Mistral models.
Mistral models tend to try to cover all the keywords provided in the prompt. Consequently, recall increases significantly when increasing the number of keywords for the Mistral models.

3.4 Granularity of Salient Information

We also explore how different granularities of salient information affect summarization performance. We compare word-, phrase-, and sentence-level SigExt. The results are shown in Table 4. The phrase-level salient information always achieves top or near-top performance, while the word-level and sentence-level approaches show much larger variance. The word-level information performs poorly on the ArXiv dataset because academic papers contain many multi-word phrases that are important in the summary; if these are split into words, they are no longer helpful for summarization. In contrast, the sentence-level information is not as effective, especially on the MeetingBank dataset. When the dataset is highly abstractive, the important words are dispersed across the document, making it difficult to extract a few sentences that cover the content of the summary (see examples in Appendix Table 10).

Claude-Instant      SAMSum        CNN
                    R1-f  RL-f    R1-f  RL-f
+SigExt (word)      41.4  30.9    42.0  26.2
+SigExt (phrase)    41.6  30.9    42.0  26.6
+SigExt (sent)      39.1  29.7    40.3  25.7

                    ArXiv         M.Bank
+SigExt (word)      42.2  21.0    41.9  31.7
+SigExt (phrase)    45.2  23.5    42.3  31.9
+SigExt (sent)      44.8  23.8    36.2  25.8

Table 4: Different granularity of salient information.

3.5 Summary Factuality

As shown in Table 5, the effect of adding keyphrases on the AlignScore is LLM- and task-specific. For the Claude Instant and Falcon models, the AlignScore is typically improved by incorporating keyphrases. In contrast, the AlignScore always decreases for the Mistral model. These results suggest that keyphrases are not universally helpful for improving the faithfulness of the generated summaries.
Table 8 shows a few examples where hallucination is introduced in the summary due to the keyphrases. The failure pattern is that if a keyphrase is negated in the document, the Mistral model ignores the negation.

             SAMSum  CNN   ArXiv  M.Bank
Claude Ins.  85.8    83.8  53.7   73.1
 +SigExt     88.0    82.3  60.0   74.7
Mistral-7B   88.9    88.8  56.9   79.1
 +SigExt     84.7    87.0  49.5   77.1
Falcon-40B   81.6    67.7  -      -
 +SigExt     81.6    75.0  -      -

Table 5: Summary factuality measured by AlignScore.

3.6 Introducing External Oracle Keyphrases

We also analyze how external keyphrases that do not appear in the source document affect performance. We use oracle keyphrases that appear in the reference summary but not in the source document as additional information in the prompt. The ROUGE-1 score and AlignScore are shown in Table 6. The ROUGE score increases significantly while the AlignScore falls, indicating that introducing external keyphrases might hurt the factuality of the summary.

Claude-Ins.  SAMSum          CNN
             R1-f  Align.    R1-f  Align.
+SigExt      41.6  88.0      42.0  82.3
+Oracle      50.0  86.3      50.0  78.8

             ArXiv           M.Bank
+SigExt      45.2  60.0      42.3  74.7
+Oracle      51.6  45.9      48.2  56.7

Table 6: Summary quality with oracle keyphrases.

3.7 Effectiveness of Keyphrase Extraction

In this part, we analyze the effectiveness of the Longformer keyphrase extractor. We define the recall@K metric to evaluate keyphrase extraction performance: the recall of oracle keyphrases among the top-K deduplicated keywords, where oracle keyphrases are constructed by finding the phrase in the source document with the highest fuzzy match score to each phrase in the target summary. We compare our method with two statistical methods, Rake (Rose et al., 2010) and TextRank (Mihalcea and Tarau, 2004). Recent work has proposed transformer-based keyphrase extraction models (Sun et al., 2020; Ding and Luo, 2021) that focus on generating noun phrases to better align with human annotation.
However, in our setting, the oracle keyphrases are constructed heuristically and are not limited to noun phrases, making these models a poor fit for comparison; therefore, we do not include them. The evaluation results are shown in Table 7. GP-SigExt already outperforms the statistical methods, and the fine-tuned SigExt achieves an additional 5.9% and 3.7% improvement on two of the datasets, respectively.

                 SAMSum  CNN   M.Bank  ArXiv
Method           R@15    R@15  R@15    R@35
Rake             68.3    11.9  17.1    14.2
TextRank         80.5    20.8  19.3    22.4
GP-SigExt        75.5    27.7  40.3    31.7
 (+32 ex.)       81.5    29.7  47.3    32.0
 (+128 ex.)      85.5    32.9  62.2    32.7
SigExt (1k ex.)  83.3    33.6  65.7    35.4

Table 7: Keyphrase extraction performance.

3.8 Case Study

We show some examples in Appendix Table 9. We found that the extracted keyphrases help the LLM incorporate precise details in the summary, so the summaries better align with the gold summary. In the first two examples, the keyphrases contain exact numbers and times, and the LLM was able to include them in the summary. In the third example, with SigExt, the summary covers more topics than the vanilla model. Since we instruct the LLM to "consider" these keyphrases, the LLM is able to skip or rephrase some of them to produce more fluent results.

4 Related Work

Leveraging keywords in abstractive summarization has been explored in many works. Switching Generator-Pointer (Nallapati et al., 2016) and CopyNet (Gu et al., 2016) modify a recurrent neural network model (Chopra et al., 2016) to directly copy keywords from the source text. More recent work has adopted transformer architectures (Vaswani et al., 2017), which have become dominant in natural language processing. Liu et al. (2022) introduce a bias in the attention matrix to help transformer models focus on keywords. All these models need to be trained or finetuned on large-scale training data.
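The recall@K metric and the heuristic oracle-keyphrase construction defined above can be sketched as follows. This is illustrative only: `difflib.SequenceMatcher` stands in for the fuzzy-match scorer, and the function names are our own.

```python
from difflib import SequenceMatcher

def fuzzy(a: str, b: str) -> float:
    """Fuzzy-match score in [0, 1] between two phrases."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def oracle_keyphrases(source_phrases, summary_phrases):
    """For each phrase in the target summary, take the source-document
    phrase with the highest fuzzy-match score as its oracle keyphrase."""
    return [max(source_phrases, key=lambda s: fuzzy(s, t))
            for t in summary_phrases]

def recall_at_k(predicted, oracle, k):
    """recall@K: fraction of oracle keyphrases found among the
    top-K (deduplicated) predicted keyphrases."""
    top = set(predicted[:k])
    return sum(o in top for o in oracle) / len(oracle)
```

Under this definition, recall@20 on the validation set is what drives checkpoint selection in Sec. 3.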
While finetuned models typically achieve higher ROUGE scores than prompting a pretrained model, prompt-based summarizers are preferred in some industrial use cases due to their flexibility and reduced need for data collection. Incorporating keyphrases in the prompt can effectively control the length and content coverage of the summary, which fine-tuning methods cannot easily achieve. Therefore, we cannot fairly compare with these methods using metrics like ROUGE.

Instruction-finetuned LLMs (Chung et al., 2022; Touvron et al., 2023; Zhang et al., 2022) have shown strong performance on summarization purely via prompts, without finetuning data. Such models are often offered via APIs, enabling easier development and deployment of summarization applications. Keyphrases are still helpful for these large models, as Li et al. (2023a) show that a keyphrase generator trained with reinforcement learning can improve summarization performance.

There has also been interest in 2-stage extractive-then-abstractive approaches (Su et al., 2020; Liu et al., 2021; Li et al., 2021; Su et al., 2022; Yang et al., 2023), which first extract keyphrases or sentences before abstractively summarizing them. These methods are trained end-to-end for domain-specific use cases, while our method can be pretrained for general-purpose zero-shot use cases. Practically, any keyword extractor, for example KeyBERT or LLMBERT (Grootendorst, 2020), can be used for the first stage to enhance the summarization in the second stage. The 2-stage methods can also be implemented as Chain-of-Thought (CoT) prompting by generating intermediate hints and the final result in the same prompt, as in Adams et al. (2023). In our experiments, we compare our method with a 2-stage prompting approach: first generating keywords with one prompt, then using those keywords for summarization in a second prompt.
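The 2-stage extract-then-abstract baseline can be sketched as two chained prompts. The prompt wording here is illustrative, not the exact baseline prompts from the experiments; `llm` is assumed to be any callable mapping a prompt string to a completion string.

```python
def two_stage_summarize(llm, article: str) -> str:
    """Sketch of a 2-pass extract-then-abstract baseline:
    pass 1 asks the LLM for salient sentences,
    pass 2 rewrites them into an abstractive summary."""
    # Pass 1: extractive step.
    extracted = llm(
        f"Here is a news article: {article}\n"
        "Please list the most important sentences from the article."
    )
    # Pass 2: abstractive rewrite of the extracted sentences.
    return llm(
        f"Here are important sentences from an article: {extracted}\n"
        "Please rewrite them into a short abstractive summary."
    )
```

In contrast, SigExt performs the extraction step with a small finetuned model and needs only a single LLM call.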
While slightly different from previous work, this 2-stage baseline effectively captures the use of intermediate reasoning steps in LLMs.

5 Conclusion

In this paper, we propose a lightweight approach to incorporating keyphrases into the prompt for LLM-based abstractive summarization. SigExt trains a phrase extractor with supervised learning to identify salient keyphrases from the input text. These keyphrases are then injected into the prompt provided to the LLM for summary generation. We demonstrate that this approach effectively improves the ROUGE scores of the generated summaries, indicating a higher similarity to reference summaries. Introducing keyphrases in the prompt enhances the faithfulness of the summary by ensuring that important information is captured. Additionally, our approach offers control over the length and the precision/recall trade-off of the summary. Notably, our pretrained keyphrase extractor, GP-SigExt, can improve summarization performance out-of-the-box without any finetuning, even in cases where training data is not available.

Limitations

Model Design: We use Longformer as the backbone model for SigExt because it is lightweight and supports long context lengths. However, we do not evaluate the impact of using other similar-sized pre-trained language models. Additionally, we extract training labels using a fuzzy matching approach to make the model more generalizable, but more domain-specific approaches to keyphrase extraction may yield better performance.

Evaluation: As is common in summarization research, we rely primarily on automatic metrics and qualitative example checks to evaluate performance. These techniques have known limitations in assessing summary quality. Meanwhile, human evaluation has its own challenges. Therefore, how best to evaluate the quality of abstractive summarization models remains an open question.
References

Griffin Adams, Alex Fabbri, Faisal Ladhak, Eric Lehman, and Noémie Elhadad. 2023. From sparse to dense: GPT-4 summarization with chain of density prompting. In Proceedings of the 4th New Frontiers in Summarization Workshop, pages 68–74, Singapore. Association for Computational Linguistics.

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, Daniele Mazzotta, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. The falcon series of open language models. Preprint, arXiv:2311.16867.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. Preprint, arXiv:2004.05150.

Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98, San Diego, California. Association for Computational Linguistics.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.

Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615–621, New Orleans, Louisiana. Association for Computational Linguistics.

Haoran Ding and Xiao Luo. 2021. AttentionRank: Unsupervised keyphrase extraction using self and cross attentions.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1919–1928.

Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics.

Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics.

Maarten Grootendorst. 2020. KeyBERT: Minimal keyword extraction with BERT.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics.

Yebowen Hu, Timothy Ganter, Hanieh Deilamsalehy, Franck Dernoncourt, Hassan Foroosh, and Fei Liu. 2023. MeetingBank: A benchmark dataset for meeting summarization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16409–16423, Toronto, Canada. Association for Computational Linguistics.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Preprint, arXiv:2310.06825.

Anastassia Kornilova and Vladimir Eidelman. 2019.
BillSum: A corpus for automatic summarization of US legislation. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 48–56, Hong Kong, China. Association for Computational Linguistics.

Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A large scale text summarization dataset. Preprint, arXiv:1810.09305.

Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Huang. 2023. A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets. In Findings of the Association for Computational Linguistics: ACL 2023, pages 431–469, Toronto, Canada. Association for Computational Linguistics.

Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, and Marjan Ghazvininejad. 2021. EASE: Extractive-abstractive summarization with explanations. arXiv preprint arXiv:2105.06982.

Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, and Xifeng Yan. 2023a. Guiding large language models via directional stimulus prompting. arXiv preprint arXiv:2302.11520.

Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, and Xifeng Yan. 2023b. Guiding large language models via directional stimulus prompting. arXiv preprint arXiv:2302.11520.

Shuaiqi Liu, Jiannong Cao, Ruosong Yang, and Zhiyuan Wen. 2022. Key phrase aware transformer for abstractive summarization. Information Processing & Management, 59(3):102913.

Yixin Liu, Alex Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, et al. 2023. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4140–4170.

Yizhu Liu, Qi Jia, and Kenny Zhu. 2021. Keyword-aware abstractive summarization by extracting set-level intermediate summaries.
In Proceedings of the Web Conference 2021, pages 3042–3054.

Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411, Barcelona, Spain. Association for Computational Linguistics.

Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the AAAI Conference on Artificial Intelligence.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.

Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory, pages 1–20.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.

Eva Sharma, Chen Li, and Lu Wang. 2019. BIGPATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2204–2213, Florence, Italy.
Association for Computational Linguistics.

Jing Su, Longxiang Zhang, Hamid Reza Hassanzadeh, and Thomas Schaaf. 2022. Extract and abstract with BART for clinical notes from doctor-patient conversations. Proc. Interspeech 2022, pages 2488–2492.

Ming-Hsiang Su, Chung-Hsien Wu, and Hao-Tse Cheng. 2020. A two-stage transformer-based approach for variable-length abstractive summarization. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2061–2072.

Yi Sun, Hangping Qiu, Yu Zheng, Zhongwei Wang, and Chaoran Zhang. 2020. SIFRank: A new baseline for unsupervised keyphrase extraction based on pre-trained language model. IEEE Access, 8:10896–10906.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Xu Wang, Sen Wang, Xingxing Liang, Dawei Zhao, Jincai Huang, Xin Xu, Bin Dai, and Qiguang Miao. 2024. Deep reinforcement learning: A survey. IEEE Transactions on Neural Networks and Learning Systems, 35(4):5064–5078.

Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5245–5263, Dublin, Ireland. Association for Computational Linguistics.

Chengran Yang, Jiakun Liu, Bowen Xu, Christoph Treude, Yunbo Lyu, Ming Li, and David Lo. 2023. APIDocBooster: An extract-then-abstract framework leveraging large language models for augmenting API documentation.
arXiv preprint arXiv:2312.10934.

Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023. AlignScore: Evaluating factual consistency with a unified alignment function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11328–11348, Toronto, Canada. Association for Computational Linguistics.

Haopeng Zhang, Xiao Liu, and Jiawei Zhang. 2023. Extractive summarization via ChatGPT for faithful summary generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3270–3278, Singapore. Association for Computational Linguistics.

Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, pages 11328–11339.

Rui Zhang and Joel Tetreault. 2019. This email could save your life: Introducing the task of email subject line generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 446–456, Florence, Italy. Association for Computational Linguistics.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.

Appendix A: All Prompts

Here we show all the prompts used in the experiments. In each prompt, <text> is replaced with the source document, and <keywords> is replaced with comma-separated keyphrases extracted by SigExt. We conducted light prompt engineering to get a reasonably good zero-shot prompt.

A.1 Zero-shot Claude Instant Prompts

SAMSum
    Here is a conversation: <text>
    Please write a very short 1 sentence summary.

SAMSum with SigExt
    Here is a conversation: <text>
    Please write a very short 1 sentence summary.
    Consider include the following information: <keywords>

CNN/DailyMail
    Here is a news article: <text>
    Please write a summary for the article in 2-3 sentences.

CNN/DailyMail with SigExt
    Here is a news article: <text>
    Please write a summary for the article in 2-3 sentences.
    Consider include the following information: <keywords>.

ArXiv
    Here is a research paper: <text>
    Please write a comprehensive paper abstract section.

ArXiv with SigExt
    Here is a research paper: <text>
    Please write a comprehensive paper abstract section.
    Consider include the following information: <keywords>

MeetingBank
    Here is a conversation: <text>
    Please write a summary in about 5 sentences.

MeetingBank with SigExt
    Here is a conversation: <text>
    Please write a summary in about 5 sentences.
    Consider include the following information: <keywords>

A.2 Zero-shot Mistral Prompts

SAMSum
    <s>[INST] Here is a conversation: <text>
    Please write a short 1 sentence summary. [/INST]

SAMSum with SigExt
    <s>[INST] Here is a conversation: <text>
    Please write a short 1 sentence summary.
    Consider include the following information: <keywords> [/INST]

CNN/DailyMail
    <s>[INST] Here is a news article: <text>
    Please write a short summary for the article in 1-2 sentences. [/INST]

CNN/DailyMail with SigExt
    <s>[INST] Here is a news article: <text>
    Please write a short summary for the article in 1-2 sentences.
    Consider include the following information: <keywords> [/INST]

ArXiv
    <s>[INST] Here is a research paper: <text>
    Please write a short abstract in about 3 sentences. [/INST]

ArXiv with SigExt
    <s>[INST] Here is a research paper: <text>
    Please write a short abstract in about 3 sentences.
    Consider include the following information: <keywords> [/INST]

MeetingBank
    <s>[INST] Here is a conversation: <text>
    Please write a 2-3 sentence summary. [/INST]

MeetingBank with SigExt
    <s>[INST] Here is a conversation: <text>
    Please write a 2-3 sentence summary.
    Consider include the following information: <keywords> [/INST]

A.3 Zero-shot Falcon and Flan-T5 Prompts

SAMSum
    Here is a conversation: <text>
    Please write a short 1 sentence summary.
    Summary:

SAMSum with SigExt
    Here is a conversation: <text>
    Please write a short 1 sentence summary.
    Consider include the following information: <keywords>
    Summary:

CNN/DailyMail
    Here is a news article: <text>
    Please write a short summary for the article in 1-2 sentences. Make sure the summary is no more than 2 sentences.
    Summary:

CNN/DailyMail with SigExt
    Here is a news article: <text>
    Please write a short summary for the article in 1-2 sentences.
    Consider include the following information: <keywords>. Make sure the summary is no more than 2 sentences.
    Summary:

ArXiv
    Here is a research paper: <text>
    Please write a short abstract in about 3 sentences.
    Abstract:

ArXiv with SigExt
    Here is a research paper: <text>
    Please write a short abstract in about 3 sentences.
    Consider include the following information: <keywords>.
    Abstract:

MeetingBank
    Here is a conversation: <text>
    Please write a 2-3 sentence summary.
    Summary:

MeetingBank with SigExt
    Here is a conversation: <text>
    Please write a 2-3 sentence summary.
    Consider include the following information: <keywords>
    Summary:

Appendix B: Loss Function

The training objective for SigExt is

L = - \sum_{i=1}^{T} \sum_{k=l_i}^{r_i} [ \mathrm{label}(p_i) \log f(x_k) + \lambda (1 - \mathrm{label}(p_i)) \log(1 - f(x_k)) ],

where f(x_k) denotes the binary classification probability on token x_k given by the classification head, l_i ... r_i is the token span of phrase p_i, and \lambda is the class-balancing weight.

Document:
Jake: we don’t have english today
Nadia: whooooah!
Vanessa: cool! you sure though?
Jake: Smith called in sick, they couldn’t find a replacement, oh yeah

Keyphrases: Vanessa; Smith called; Jake; replacement; english today
Mistral: Jake announced that they don’t have English class today due to Smith being sick and unable to find a replacement.
+SigExt: Vanessa is excited that Jake is teaching English today instead of Smith, who called in sick and couldn’t be replaced.

Document:
Sam: Good afternoon, is this available?
Lia: Hi, how can I help you?
Sam: I’m interested in this product. Are the earphones available?
Lia: Yes, they are.
Sam: where can I collect them?
Lia: Well, you can have them delivered to you through courrier - this costs £4 (pay on delivery)
Lia: I’m afraid collecting them personally is a little difficult, as the product can only be collected in Garstang (where the company is located)
Sam: Okay.
Sam: Are they new? Do you have any other colors that white?
Lia: Completely new - we also offer grey and pink.
Sam: Then I would like the white ones please.
Lia: Okay, so I just need your address and telephone
Sam: The total costs £187 - can I lower the price to £180??
Lia: fraid not - company policy.
Sam: Oh, okay.
Sam: How long does it take for the product to arrive?
Lia: It generally arrives the next day, in rare instances it’s longer.

Keyphrases: Lia; earphones available; Garstang; £180; collected; Sam; product; white; delivered; company.
Mistral: Sam is interested in purchasing earphones and asks about their availability, collection options, and delivery time.
+SigExt: Lia confirmed that the earphones are available, can be delivered for £4, and are only available for collection in Garstang, but offered Sam the option to purchase them for £180.

Document:
Kim: I’m going to Seoul!
Agatha: wow finally!
Mark: When? Bring us some kimchi!
Kim: People on a plane will kill me if I do :D
Kim: I’m going in April <3
Agatha: Are you going with Jane?
Kim: yes, of course - she’ll be our tour guide
Mark: so jealous... I’d love to come as well
Kim: Really?
I asked you like a hundred times
Keyphrases: Seoul; Kim; Jane; Agatha; Mark; going; kimchi; tour guide; Kim.
Mistral: Kim is going to Seoul in April with Jane as their tour guide and Agatha and Mark express their excitement and jealousy.
+SigExt: Kim is going to Seoul in April with Jane as their tour guide, and they will bring some kimchi back for Mark who is jealous and wishes he could come too

Table 8: Examples where adding keyphrases in the prompt hurts the faithfulness of the summary. The failure pattern: if a keyphrase is negated in the document, the LLM ignores the negation.

Gold Summary: NHS gave out 404,500 prescriptions for suncream at a cost of £13m in 2014. Also handed out 4.7million prescriptions for indigestion pills costing £29m. Other items routinely prescribed include vitamins, Vaseline and toothpaste. Critics branded prescriptions ’ludicrous’ at time of financial crisis for NHS.
Key Phrases: toothpaste; suncream; patients; 404,500 prescriptions; NHS; indigestion pills costing £29million; NHS handed; £13million; spending millions; prescribed
Mistral-7B: The NHS is spending millions on prescriptions for suncream and other household essentials, according to a damning analysis. The figures come as the UK ranks below many European countries in health and wellness, and as the NHS faces a huge financial crisis.
+SigExt: The NHS is spending millions of pounds on prescribing toothpaste and suncream to patients, according to a damning analysis. In 2014, the NHS handed out 404,500 prescriptions for suncream at a cost of £13million and 4.7million prescriptions for indigestion pills costing £29million.

Gold Summary: Cloned camel was born in 2009 and called Inzaz, which means achievement. She was cloned from ovarian cells and born by surrogate mother. Injaz is now six years old and is said to have conceived naturally. She’s expected to give birth late this year, proving cloned animals’ fertility.
Key Phrases: Injaz; pregnant; slaughtered camel; cloned; ovarian cells; surrogate mother; first cloned camel; 2009; conceived naturally; whose name
Mistral-7B: The world’s first cloned camel, Injaz, is pregnant and expected to have a calf. This marks the first time that a cloned camel has been able to reproduce naturally, and scientists are excited to see if cloned camels are as fertile as naturally produced ones.
+SigExt: Injaz, the world’s first cloned camel born from the ovarian cells of a slaughtered camel in 2009, is now pregnant and expected to give birth to a calf later this year. The camel’s conception was natural, proving that cloned camels are fertile and capable of reproducing.

Gold Summary: Lewis Hamilton won Sunday’s Barhain Grand Prix ahead of Kimi Raikkonen. Hamilton is out of contract at the end of the year and is yet to sign new deal. It has been suggested that Hamilton could replace Raikkonen at Ferrari. But team principal Maurizio Arrivabene says he is happy with driver line-up.
Key Phrases: Kimi Raikkonen; Ferrari; next season; replace Raikkonen; Mercedes; 30; Lewis Hamilton yet; Bahrain; Ferrari team principal Maurizio Arrivabene; Vettel
Mistral-7B: Ferrari team principal Maurizio Arrivabene has revealed that he uses a carrot-and-stick method with Kimi Raikkonen to keep his desire to remain with the Maranello marque high. The Finn has arguably never been happier during his Formula One career, thanks to the atmosphere generated within the team by the arrival of Arrivabene.
+SigExt: Ferrari team principal Maurizio Arrivabene has revealed that he uses a carrot-and-stick method to keep Kimi Raikkonen’s desire to remain with the Maranello marque high. With Lewis Hamilton yet to sign a new contract, it has been suggested that he will replace Raikkonen at Ferrari for next season. In the Bahrain Grand Prix, Raikkonen finished on the podium for the first time in 26 races as runner-up to Hamilton.
Table 9: Examples of using SigExt with Mistral-7B model on CNN dataset.

Document: Jenkin: hey what is your spirit animal? Sophie: what? Jenkin: go on? Sophie: I dont know a fox lol Jenkin: are you wiley? Sophie: sometimes Jenkin: I am a Sophie: I think you are a bit mad like the mad Jenkin: I have been reading about animal spirits its quite good Sophie: you will have to tell me about the fox.. do you decide what your animal is or does someone tell you? Jenkin: There is a pack of cards and you choose the one that you are drawn to Sophie: oh right I would choose the Fox Jenkin: well I didn’t know but I was drawn to the dolphin Sophie: oh Jenkin: I will bring them over tomorrow Sophie: oh yes please that will be great
Reference: Jenkin has been reading about spirit animals and he was drawn to a dolphin. Sophie would choose a fox. Jenkin will bring pack of cards with spirit animals to Sophie tomorrow.

Document: Jacky: I think you were right yesterday. David: What about? I’m right about most things :P Jacky: Yeah, whole you ;) Jacky: About taking the blame etc. David: Okey, I remeber. We’ll talk later? Jacky: With pleasure. I’ll call you when I get home.
Reference: According to Jacky, David did the right thing taking the blame. They will talk when Jack comes back home.

Document: Jill: So bored! Nate: Well... can’t help you there Nate: Still at work Jill: ugh I need to find a job Jill: I’ve watched everything on youtube already Nate: Doubt it :P I’ll call you when I get off work
Reference: Jill is bored and has watched YouTube. Nate is at work and will call Jill when he finishes it.

Table 10: Visualization of overlapping words between the document and reference summary on the SAMSum dataset. The words are dispersed across the document, making it difficult to extract sentence-level salient information.
Dataset        Description                                   Input/Output
CNN            News article headline generation              773/58
SAMSum         Messenger-like conversations summarization    127/23
ArXiv          Research paper abstract generation            6446/166
MeetingBank    Meeting transcript summarization              3095/66

Table 11: Dataset description and input/output length. | 5 | 1 | The paper describes SigExt, which uses a fine-tuned Longformer with 433M parameters. The model is trained on a dataset consisting of 1000 to 10000 examples (for the general-purpose variant), with an unspecified batch size; given the model's size, it can be assumed that a moderate batch size like 16-32 can be used effectively on a single GPU. Given its training for 10 epochs and ability to be trained on a single GPU, I estimate the training time to be around 5 hours in total, which is reasonable for such a model with manageable dataset sizes and epochs. There are no indications of extensive resource requirements or data sizes beyond one GPU capability. Thus, the model can efficiently be trained on a single GPU under 8 hours. | yes | Yes | NLP | Salient Information Prompting to Steer Content in Prompt-based Abstractive Summarization | 2024-10-03 0:00:00 | https://github.com/amazon-science/SigExt | 1 | run the script to download and process data inside the repo | 10 min * 10 = 1hr 40 min | https://colab.research.google.com/drive/1Wzlo_ybMDNuEVs4wDC4GJq93kFwX6rMJ?usp=sharing | Yes | -- Just need to change an argument when calling the Python script and add a few lines of code to the data processing script. I have included it all in the Colab file |
ETTh1 (720) Multivariate | SparseTSF | [] | SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters | 2024-05-02T00:00:00 | https://arxiv.org/abs/2405.00946v2 | [
"https://github.com/lss-1138/SparseTSF"
] | {'MSE': '0.426'} | [
"MSE",
"MAE"
] | Given the following paper and codebase:
Paper: SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters
Codebase: https://github.com/lss-1138/SparseTSF
Improve the SparseTSF model on the ETTh1 (720) Multivariate dataset. The result
should improve on the following metrics: {'MSE': '0.426'}. You must use only the codebase provided.
| SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters

Shengsheng Lin (1), Weiwei Lin (1,2), Wentai Wu (3), Haojun Chen (1), Junjie Yang (1)

Abstract

This paper introduces SparseTSF, a novel, extremely lightweight model for Long-term Time Series Forecasting (LTSF), designed to address the challenges of modeling complex temporal dependencies over extended horizons with minimal computational resources. At the heart of SparseTSF lies the Cross-Period Sparse Forecasting technique, which simplifies the forecasting task by decoupling the periodicity and trend in time series data. This technique involves downsampling the original sequences to focus on cross-period trend prediction, effectively extracting periodic features while minimizing the model’s complexity and parameter count. Based on this technique, the SparseTSF model uses fewer than 1k parameters to achieve competitive or superior performance compared to state-of-the-art models. Furthermore, SparseTSF showcases remarkable generalization capabilities, making it well-suited for scenarios with limited computational resources, small samples, or low-quality data. The code is publicly available at this repository: https://github.com/lss-1138/SparseTSF.

1. Introduction

Time series forecasting holds significant value in domains such as traffic flow, product sales, and energy consumption, as accurate predictions enable decision-makers to plan proactively. Achieving precise forecasts typically relies on powerful yet complex deep learning models, such as RNNs (Zhang et al., 2023), TCNs (Bai et al., 2018; Franceschi et al., 2019), and Transformers (Wen et al., 2022).
(1) School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China. (2) Peng Cheng Laboratory, Shenzhen 518066, China. (3) College of Information Science and Technology, Jinan University, Guangzhou 510632, China. Correspondence to: Weiwei Lin <linww@scut.edu.cn>. Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).

In recent years, there has been a growing interest in Long-term Time Series Forecasting (LTSF), which demands models to provide an extended predictive view for advanced planning (Zhou et al., 2021). Although a longer predictive horizon offers convenience, it also introduces greater uncertainty (Lin et al., 2023b). This demands models capable of extracting more extensive temporal dependencies from longer historical windows. Consequently, modeling becomes more complex to capture these long-term temporal dependencies. For instance, Transformer-based models often have millions or tens of millions of parameters, limiting their practical usability, especially in scenarios with restricted computational resources (Deng et al., 2024).

In fact, the basis for accurate long-term time series forecasting lies in the inherent periodicity and trend of the data. For example, long-term forecasts of household electricity consumption are feasible due to the clear daily and weekly patterns in such data. Particularly for daily patterns, if we resample the electricity consumption at a certain time of the day into a daily sequence, each subsequence exhibits similar or consistent trends. In this case, the original sequence’s periodicity and trend are decomposed and transformed. That is, periodic patterns are transformed into inter-subsequence dynamics, while trend patterns are reinterpreted as intra-subsequence characteristics. This decomposition offers a novel perspective for designing lightweight LTSF models.
In this paper, we pioneer the exploration of how to utilize this inherent periodicity and decomposition in data to construct specialized lightweight time series forecasting models. Specifically, we introduce SparseTSF, an extremely lightweight LTSF model. Technically, we propose the Cross-Period Sparse Forecasting technique (hereinafter referred to as Sparse technique). It first downsamples the original sequences with constant periodicity into subsequences, then performs predictions on each downsampled subsequence, simplifying the original time series forecasting task into a cross-period trend prediction task. This approach yields two benefits: (i) effective decoupling of data periodicity and trend, enabling the model to stably identify and extract periodic features while focusing on predicting trend changes, and (ii) extreme compression of the model’s parameter size, significantly reducing the demand for computational resources. As shown in Figure 1, SparseTSF achieves near state-of-the-art prediction performance with less than 1k trainable parameters, which makes it 1∼4 orders of magnitude smaller than its counterparts.

arXiv:2405.00946v2 [cs.LG] 3 Jun 2024

[Figure 1 plot: MSE versus parameter count (log scale) for Informer (2021), Autoformer (2021), FEDformer (2022), FiLM (2022), PatchTST (2023), DLinear (2023), FITS (2024), and SparseTSF (Ours).]

Figure 1: Comparison of MSE and parameters between SparseTSF and other mainstream models on the Electricity dataset with a forecast horizon of 720.
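The resampling intuition described in the introduction can be sketched with a toy example (our illustration, not code from the SparseTSF repository): an hourly series with daily period w = 24 is reshaped so that each fixed hour-of-day forms its own daily subsequence, in which the periodic part is constant and only the trend varies.

```python
import numpy as np

# Toy illustration (ours, not from the SparseTSF repo): an hourly series with a
# daily cycle plus a slow linear trend.
hours = np.arange(7 * 24)                                # one week of hourly steps
series = np.sin(2 * np.pi * hours / 24) + 0.01 * hours   # periodicity + trend

# Resample "the value at hour h of every day": reshape to (days, 24) and
# transpose, so that row h is the hour-h subsequence across days.
sub = series.reshape(7, 24).T                            # shape (24, 7)

# Within one subsequence the periodic component cancels out, leaving the trend:
print(np.round(np.diff(sub[0]), 3))                      # constant day-to-day increment
```

Here the hour-0 subsequence increases by a constant 0.24 per day (the trend alone), which is exactly the inter-subsequence/intra-subsequence decomposition the paper exploits.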
In summary, our contributions in this paper are as follows:

• We propose a novel Cross-Period Sparse Forecasting technique, which downsamples the original sequences to focus on cross-period trend prediction, effectively extracting periodic features while minimizing the model’s complexity and parameter count.
• Based on the Sparse technique, we present the SparseTSF model, which requires fewer than 1k parameters, significantly reducing the computational resource demand of forecasting models.
• The proposed SparseTSF model not only attains competitive or superior predictive accuracy relative to the state of the art with a remarkably minimal parameter scale, but also demonstrates robust generalization capabilities.

2. Related Work

Development of Long-term Time Series Forecasting. The LTSF tasks, which aim at predicting over an extended horizon, are inherently more challenging. Initially, the Transformer architecture (Vaswani et al., 2017), known for its robust long-term dependency modeling capabilities, gained widespread attention in the LTSF domain. Models such as Informer (Zhou et al., 2021), Autoformer (Wu et al., 2021), and FEDformer (Zhou et al., 2022b) have modified the native structure of Transformer to suit time series forecasting tasks. More recent advancements, like PatchTST (Nie et al., 2023) and PETformer (Lin et al., 2023a), demonstrate that the original Transformer architecture can achieve impressive results with an appropriate patch strategy, a technique that is prevalently employed in the realm of computer vision (Dosovitskiy et al., 2020; He et al., 2022). Besides Transformer architectures, Convolutional Neural Networks (CNNs) and Multilayer Perceptrons (MLPs) are also mainstream approaches, including SCINet (Liu et al., 2022a), TimesNet (Wu et al., 2023), MICN (Wang et al., 2022), TiDE (Das et al., 2023), and HDMixer (Huang et al., 2024a).
Recent studies have shown that transferring pretrained Large Language Models (LLMs) to the time series domain can also yield commendable results (Chang et al., 2024; Jin et al., 2023; Xue & Salim, 2023). Moreover, recent works have revealed that RNN and GNN networks can also perform well in LTSF tasks, as exemplified by SegRNN (Lin et al., 2023b) and CrossGNN (Huang et al., 2024b).

Progress in Lightweight Forecasting Models. Since DLinear (Zeng et al., 2023) demonstrated that simple models could already extract strong temporal periodic dependencies, numerous studies have been pushing LTSF models towards lightweight designs, including LightTS (Zhang et al., 2022), TiDE (Das et al., 2023), TSMixer (Ekambaram et al., 2023), and HDformer (Deng et al., 2024). Recently, FITS emerged as a milestone in the lightweight LTSF process, being the first to reduce the LTSF model scale to the 10k parameter level while maintaining excellent predictive performance (Xu et al., 2024). FITS achieved this by transforming time-domain forecasting tasks into frequency-domain ones and using low-pass filters to reduce the required number of parameters. In this paper, our proposed SparseTSF model takes lightweight model design to the extreme. Utilizing the Cross-Period Sparse Forecasting technique, it is the first to reduce model parameters to below 1k.

3. Methodology

3.1. Preliminaries

Long-term Time Series Forecasting. The task of LTSF involves predicting future values over an extended horizon using previously observed multivariate time series (MTS) data. It is formalized as $\bar{x}_{t+1:t+H} = f(x_{t-L+1:t})$, where $x_{t-L+1:t} \in \mathbb{R}^{L \times C}$ and $\bar{x}_{t+1:t+H} \in \mathbb{R}^{H \times C}$. In this formulation, $L$ represents the length of the historical observation window, $C$ is the number of distinct features or channels, and $H$ is the length of the forecast horizon. The main goal of LTSF is to extend the forecast horizon $H$ as it provides richer and more advanced guidance in practical applications.
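The formulation above can be made concrete with a small sketch (ours; the function name and windowing details are illustrative, not taken from the codebase): supervision pairs are sliced from a (T, C) series as look-back windows of length L and horizons of length H.

```python
import numpy as np

# Illustrative sketch of the LTSF formulation (ours, not the repo's data loader):
# map a look-back window x in R^{L x C} to a future horizon y in R^{H x C}.
def make_windows(data, L, H):
    """Slice a (T, C) multivariate series into (x, y) supervision pairs."""
    xs, ys = [], []
    for t in range(L, len(data) - H + 1):
        xs.append(data[t - L:t])       # history x_{t-L+1:t}
        ys.append(data[t:t + H])       # target  x_{t+1:t+H}
    return np.stack(xs), np.stack(ys)

data = np.random.randn(1000, 7)        # e.g. an ETTh-like series with C = 7 channels
x, y = make_windows(data, L=720, H=96)
print(x.shape, y.shape)                # (185, 720, 7) (185, 96, 7)
```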
However, an extended forecast horizon $H$ also increases the complexity of the model, leading to a significant increase in parameters in mainstream models. To address this challenge, our research focuses on developing models that are not only extremely lightweight but also robust and effective.

Channel Independent Strategy. Recent advancements in the field of LTSF have seen a shift towards a Channel Independent (CI) approach, especially when dealing with multivariate time series data (Han et al., 2024). This strategy simplifies the forecasting process by focusing on individual univariate time series within the dataset. Instead of the traditional approach, which utilizes the entire multivariate historical data to predict future outcomes, the CI method finds a shared function $f: x^{(i)}_{t-L+1:t} \in \mathbb{R}^{L} \to \bar{x}^{(i)}_{t+1:t+H} \in \mathbb{R}^{H}$ for each univariate series. This approach provides a more targeted and simplified prediction model for each channel, reducing the complexity of accounting for inter-channel relationships. As a result, the main goal of mainstream state-of-the-art models in recent years has shifted towards effective prediction by modeling long-term dependencies, including periodicity and trends, in univariate sequences. For instance, models like DLinear achieve this by extracting dominant periodicity from univariate sequences using a single linear layer (Zeng et al., 2023). More advanced models, such as PatchTST (Nie et al., 2023) and TiDE (Das et al., 2023), employ more complex structures on single channels to extract temporal dependencies, aiming for superior predictive performance. In this paper, we adopt this CI strategy as well and focus on how to create an even more lightweight yet effective approach for capturing long-term dependencies in single-channel time series.

3.2.
SparseTSF

Given that the data to be forecasted often exhibits constant periodicity a priori (e.g., electricity consumption and traffic flow typically have fixed daily cycles), we propose the Cross-Period Sparse Forecasting technique to enhance the extraction of long-term sequential dependencies while reducing the model’s parameter scale. Utilizing a single linear layer to model the LTSF task within this framework leads to our SparseTSF model, as illustrated in Figure 2.

Cross-Period Sparse Forecasting. Assuming that the time series $x^{(i)}_{t-L+1:t}$ has a known periodicity $w$, the first step is to downsample the original series into $w$ subsequences of length $n = \lceil L/w \rceil$. A model with shared parameters is then applied to these subsequences for prediction. After prediction, the $w$ subsequences, each of length $m = \lceil H/w \rceil$, are upsampled back to a complete forecast sequence of length $H$. Intuitively, this forecasting process appears as a sliding forecast with a sparse interval of $w$, performed by a fully connected layer with parameter sharing within a constant period $w$. This can be viewed as a model performing sparse sliding prediction across periods.

Technically, the downsampling process is equivalent to reshaping $x^{(i)}_{t-L+1:t}$ into an $n \times w$ matrix, which is then transposed to a $w \times n$ matrix. The sparse sliding prediction is equivalent to applying a linear layer of size $n \times m$ on the last dimension of the matrix, resulting in a $w \times m$ matrix. The upsampling step is equivalent to transposing the $w \times m$ matrix and reshaping it back into a complete forecast sequence of length $H$.

However, this approach currently still faces two issues: (i) loss of information, as only one data point per period is utilized for prediction, while the rest are ignored; and (ii) amplification of the impact of outliers, as the presence of extreme values in the downsampled subsequences can directly affect the prediction.
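Before those two issues are addressed, the bare reshape-transpose-linear pipeline just described can be sketched as follows (a minimal sketch assuming L and H are multiples of w, as in the paper's hourly settings; not the repository's exact code):

```python
import torch
import torch.nn as nn

# Minimal sketch of Cross-Period Sparse Forecasting (assumes w divides L and H).
L, H, w = 720, 96, 24
n, m = L // w, H // w                     # per-subsequence input/output lengths
linear = nn.Linear(n, m, bias=False)      # one linear layer shared across all w phases

x = torch.randn(32, L)                    # batch of univariate look-back windows
X = x.reshape(-1, n, w).transpose(1, 2)   # downsample: (batch, w, n)
Y = linear(X)                             # sparse sliding prediction: (batch, w, m)
y = Y.transpose(1, 2).reshape(-1, H)      # upsample back to length H

print(y.shape)                                       # torch.Size([32, 96])
print(sum(p.numel() for p in linear.parameters()))   # n*m = 120 weights
```

Because the same n-to-m linear map is applied to all w phase rows, the parameter count is decoupled from w entirely, which is where the extreme compression comes from.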
To address these issues, we additionally perform a sliding aggregation on the original sequence before executing sparse prediction, as depicted in Figure 2. Each aggregated data point incorporates information from other points within its surrounding period, addressing issue (i). Moreover, as the aggregated value is essentially a weighted average of surrounding points, it mitigates the impact of outliers, thus resolving issue (ii). Technically, this sliding aggregation can be implemented using a 1D convolution with zero-padding and a kernel size of $2\lfloor w/2 \rfloor + 1$. The process can be formulated as follows:

$x^{(i)}_{t-L+1:t} = x^{(i)}_{t-L+1:t} + \mathrm{Conv1D}(x^{(i)}_{t-L+1:t})$   (1)

Instance Normalization. Time series data often exhibit distributional shifts between training and testing datasets. Recent studies have shown that employing simple sample normalization strategies between the input and output of models can help mitigate this issue (Kim et al., 2021; Zeng et al., 2023). In our work, we also utilize a straightforward normalization strategy. Specifically, we subtract the mean of the sequence from itself before it enters the model and add it back after the model’s output. This process is formulated as follows:

$x^{(i)}_{t-L+1:t} = x^{(i)}_{t-L+1:t} - \mathrm{E}_t(x^{(i)}_{t-L+1:t})$,   (2)
$\bar{x}^{(i)}_{t+1:t+H} = \bar{x}^{(i)}_{t+1:t+H} + \mathrm{E}_t(x^{(i)}_{t-L+1:t})$.   (3)

Loss Function. In alignment with current mainstream practices in the field, we adopt the classic Mean Squared Error (MSE) as the loss function for SparseTSF. This function measures the discrepancy between the predicted values $\bar{x}^{(i)}_{t+1:t+H}$ and the actual ground truth $y^{(i)}_{t+1:t+H}$. It is formulated as:

$\mathcal{L} = \frac{1}{C} \sum_{i=1}^{C} \left\| y^{(i)}_{t+1:t+H} - \bar{x}^{(i)}_{t+1:t+H} \right\|_2^2$.   (4)

[Figure 2 diagram: the input $x_{t-L+1:t} \in \mathbb{R}^{L}$ is aggregated within period $w$, downsampled by period $w$ into $X \in \mathbb{R}^{w \times n}$, forecast by a linear layer into $Y \in \mathbb{R}^{w \times m}$, and upsampled by period $w$ into $\bar{x}_{t+1:t+H} \in \mathbb{R}^{H}$.]

Figure 2: SparseTSF architecture.

3.3.
Theoretical Analysis

In this section, we provide a theoretical analysis of the SparseTSF model, focusing on its parameter efficiency and the effectiveness of the Sparse technique. The relevant theoretical proofs are provided in Appendix B.

3.3.1. Parameter Efficiency of SparseTSF

Theorem 3.1. Given a historical look-back window length $L$, a forecast horizon $H$, and a constant periodicity $w$, the total number of parameters required for the SparseTSF model is $\lceil L/w \rceil \times \lceil H/w \rceil + 2\lfloor w/2 \rfloor + 1$.

In LTSF tasks, the look-back window length $L$ and forecast horizon $H$ are usually quite large, for instance, up to 720, while the intrinsic periodicity $w$ of the data is also typically large, such as 24. In this scenario, $\lceil L/w \rceil \times \lceil H/w \rceil + 2\lfloor w/2 \rfloor + 1 \ll L \times H$. This means that the parameter scale of the SparseTSF model is much lighter than even the simplest single-layer linear model. This demonstrates the lightweight architecture of the SparseTSF model.

3.3.2. Effectiveness of SparseTSF

The time series targeted for long-term forecasting often exhibits constant periodicity. Here, we first define the representation of such a sequence $X$.

Definition 3.2. Consider a univariate time series $X$ with a known period $w$, which can be decomposed into a periodic component $P(t)$ and a trend component $T(t)$, such that $X(t) = P(t) + T(t)$. Here, $P(t)$ represents the periodic part and satisfies the condition:

$P(t) = P(t + w)$.   (5)

Furthermore, we can derive the form of the modeling task after downsampling. In the context of a truncated subsequence $x_{t-L+1:t}$ of $X(t)$ and its corresponding future sequence $x_{t+1:t+H}$ to be forecasted, the conventional approach involves using $x_{t-L+1:t}$ directly to predict $x_{t+1:t+H}$, essentially estimating the function:

$x_{t+1:t+H} = f(x_{t-L+1:t})$   (6)

However, with the application of the Sparse technique, this forecasting task transforms into predicting downsampled subsequences, as per Lemma 3.3.

Lemma 3.3.
The SparseTSF model reformulates the forecasting task into predicting downsampled subsequences, namely:

$x'_{t+1:t+m} = f(x'_{t-n+1:t})$   (7)

Combining Definition 3.2 and Lemma 3.3, we can further deduce Theorem 3.4.

Theorem 3.4. Given a time series dataset that satisfies Definition 3.2, the SparseTSF model’s formulation becomes:

$p'_{t+1:t+m} + t'_{t+1:t+m} = f(p'_{t-n+1:t} + t'_{t-n+1:t})$   (8)

where, for any $i \in [t-n+1 : t+m]$ and $j \in [t-n+1 : t+m]$, it satisfies:

$p'_i = p'_j$   (9)

Theorem 3.4 implies that the task of the SparseTSF model effectively transforms into predicting future trend components (i.e., $t'$), using the constant periodic components (i.e., $p'$) as a reference. This process effectively separates the periodic components, which are no longer explicitly modeled, allowing the model to focus more on the trend variations. Intuitively, we can further validate this finding from the perspective of autocorrelation, a powerful tool for identifying patterns such as seasonality or periodicity in time series data.

Definition 3.5 (AutoCorrelation Function (ACF) (Madsen, 2007)). Given a time series $\{X_t\}$, where $t$ represents discrete time points, the ACF at lag $k$ is defined as:

$\mathrm{ACF}(k) = \frac{\sum_{t=1}^{N-k} (X_t - \mu)(X_{t+k} - \mu)}{\sum_{t=1}^{N} (X_t - \mu)^2}$   (10)

where $N$ is the total number of observations in the time series, $X_t$ is the value of the series at time $t$, $X_{t+k}$ is the value of the series at time $t+k$, and $\mu$ is the mean of the series $\{X_t\}$.

[Figure 3 plots: autocorrelation versus lag (0-40) for (a) the original sequence and (b) the downsampled subsequence.]

Figure 3: Comparison of autocorrelation in original and downsampled subsequences for the first channel in the ETTh1 dataset.

The lag time $k$ in the ACF reveals the periodic patterns in the series, that is, when $k$ equals the periodic length of the series, the ACF value typically shows a significant peak.
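The diagnostic behind Figure 3 is easy to reproduce on synthetic data (our sketch, not the ETTh1 experiment): a series with period 24 shows a strong ACF peak at lag 24, while its downsampled subsequence keeps only a smooth, trend-dominated autocorrelation.

```python
import numpy as np

# Sample ACF per Eq. (10), demonstrated on synthetic data (ours, not ETTh1).
def acf(x, k):
    x = x - x.mean()
    return float((x[:-k] * x[k:]).sum() / (x ** 2).sum())

t = np.arange(40 * 24)
x = np.sin(2 * np.pi * t / 24) + 0.001 * t   # daily period + mild trend

print(acf(x, 24))   # strong peak at the period (close to 1)
print(acf(x, 12))   # half-period lag: strongly negative
sub = x[::24]       # downsample at one phase: the periodicity is removed
print(acf(sub, 1))  # smooth, trend-dominated autocorrelation
```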
As shown in Figure 3, the original sequence exhibits clear periodicity, while the downsampled subsequences retain only trend characteristics. This demonstrates that, through its downsampling strategy, the SparseTSF model can efficiently separate and extract accurate periodic features from time series data. This not only reduces the complexity of the model but also enables it to focus on predicting trend variations, thereby exhibiting impressive performance in LTSF tasks. In summary, the SparseTSF model’s design, characterized by its parameter efficiency and focus on decoupling periodic features, makes it well-suited for LTSF tasks, especially in scenarios where the data exhibits clear periodic patterns.

4. Experiments

In this section, we present the experimental results of SparseTSF on mainstream LTSF benchmarks. Additionally, we discuss the efficiency advantages brought by the lightweight architecture of SparseTSF. Furthermore, we conduct ablation studies and analysis to further reveal the effectiveness of the Sparse technique.

4.1. Experimental Setup

Datasets. We conducted experiments on four mainstream LTSF datasets that exhibit daily periodicity: ETTh1 & ETTh2 (https://github.com/zhouhaoyi/ETDataset), Electricity (https://archive.ics.uci.edu/ml/datasets), and Traffic (https://pems.dot.ca.gov/). The details of these datasets are presented in Table 1.

Table 1: Summary of datasets.

Datasets     ETTh1 & ETTh2    Electricity    Traffic
Channels     7                321            862
Frequency    hourly           hourly         hourly
Timesteps    17,420           26,304         17,544

Baselines. We compared our approach with state-of-the-art and representative methods in the field. These include Informer (Zhou et al., 2021), Autoformer (Wu et al., 2021), Pyraformer (Liu et al., 2022b), FEDformer (Zhou et al., 2022b), FiLM (Zhou et al., 2022a), TimesNet (Wu et al., 2023), and PatchTST (Nie et al., 2023).
Additionally, we specifically compared SparseTSF with lightweight models, namely DLinear (Zeng et al., 2023) and FITS (Xu et al., 2024). Following FITS, SparseTSF defaults to a look-back length of 720.

Environment. All experiments in this study were implemented using PyTorch (Paszke et al., 2019) and conducted on a single NVIDIA RTX 4090 GPU with 24GB of memory. More experimental details are provided in Appendix A.2.

4.2. Main Results

Table 2 presents a performance comparison between SparseTSF and other baseline models. (Footnote 4: Recent works discovered a long-standing bug in the current benchmark framework, which may affect model performance on small datasets (Xu et al., 2024; Qiu et al., 2024). We report the comparison results after fixing this bug in Appendix D.1.) It is observable that SparseTSF ranks within the top two in all scenarios, achieving or closely approaching state-of-the-art levels with a significantly smaller parameter scale. This emphatically demonstrates the superiority of the Sparse technique proposed in this paper. Specifically, the Sparse technique is capable of more effectively extracting the periodicity and trends from data, thereby enabling exceptional predictive performance in long-horizon scenarios. Additionally, the standard deviation of SparseTSF’s results is notably small. In most cases, the standard deviation across 5 runs is within 0.001, which strongly indicates the robustness of the SparseTSF model.

Table 2: MSE results of multivariate long-term time series forecasting comparing SparseTSF with other mainstream models. The top two results are highlighted in bold. The reported results of SparseTSF are averaged over 5 runs with standard deviation included. ’Imp.’ denotes the improvement compared to the best-performing baseline models. Each cell lists the MSE for horizons 96/192/336/720.

Model               ETTh1                      ETTh2                      Electricity                Traffic
Informer (2021)     0.865/1.008/1.107/1.181    3.755/5.602/4.721/3.647    0.274/0.296/0.300/0.373    0.719/0.696/0.777/0.864
Autoformer (2021)   0.449/0.500/0.521/0.514    0.358/0.456/0.482/0.515    0.201/0.222/0.231/0.254    0.613/0.616/0.622/0.660
Pyraformer (2022b)  0.664/0.790/0.891/0.963    0.645/0.788/0.907/0.963    0.386/0.386/0.378/0.376    2.085/0.867/0.869/0.881
FEDformer (2022b)   0.376/0.420/0.459/0.506    0.346/0.429/0.496/0.463    0.193/0.201/0.214/0.246    0.587/0.604/0.621/0.626
FiLM (2022a)        0.371/0.414/0.442/0.465    0.284/0.357/0.377/0.439    0.154/0.164/0.188/0.236    0.416/0.408/0.425/0.520
TimesNet (2023)     0.384/0.436/0.491/0.521    0.340/0.402/0.452/0.462    0.168/0.184/0.198/0.220    0.593/0.617/0.629/0.640
PatchTST (2023)     0.370/0.413/0.422/0.447    0.274/0.341/0.329/0.379    0.129/0.147/0.163/0.197    0.360/0.379/0.392/0.432
DLinear (2023)      0.374/0.405/0.429/0.440    0.338/0.381/0.400/0.436    0.140/0.153/0.169/0.203    0.410/0.423/0.435/0.464
FITS (2024)         0.375/0.408/0.429/0.427    0.274/0.333/0.340/0.374    0.138/0.152/0.166/0.205    0.401/0.407/0.420/0.456
SparseTSF (ours)    0.359/0.397/0.404/0.417    0.267/0.314/0.312/0.370    0.138/0.146/0.164/0.203    0.382/0.388/0.402/0.445
  (± std)           0.006/0.002/0.001/0.001    0.005/0.003/0.004/0.001    0.001/0.001/0.001/0.001    0.001/0.001/0.001/0.002
Imp.                +0.011/+0.008/+0.018/+0.010   +0.007/+0.019/+0.017/+0.004   -0.009/+0.001/-0.001/-0.006   -0.022/-0.009/-0.010/-0.013

4.3. Efficiency Advantages of SparseTSF

Beyond its powerful predictive performance, another significant benefit of the SparseTSF model is its extreme lightweight nature. Previously, Figure 1 visualized the parameter-performance comparison of SparseTSF with other mainstream models. Here, we further present a comprehensive comparison between SparseTSF and these baseline models in terms of both static and runtime metrics, including:

1. Parameters: The total number of trainable parameters in the model, representing the model’s size.
2. MACs (Multiply-Accumulate Operations): A common measure of computational complexity in neural networks, indicating the number of multiply-accumulate operations required by the model.

3. Max Memory: The maximum memory usage during the model training process.

4. Epoch Time: The training duration for a single epoch. This metric was averaged over 3 runs.

Table 3: Static and runtime metrics of SparseTSF and other mainstream models on the Electricity dataset with a forecast horizon of 720. Here, the look-back length for each model is set to be consistent with their respective official papers, such as 336 for DLinear and 720 for FITS.

Model              Parameters  MACs     Max Mem. (MB)  Epoch Time (s)
Informer (2021)    12.53 M     3.97 G   969.7          70.1
Autoformer (2021)  12.22 M     4.41 G   2631.2         107.7
FEDformer (2022b)  17.98 M     4.41 G   1102.5         238.7
FiLM (2022a)       12.22 M     4.41 G   1773.9         78.3
PatchTST (2023)    6.31 M      11.21 G  10882.3        290.3
DLinear (2023)     485.3 K     156.0 M  123.8          25.4
FITS (2024)        10.5 K      79.9 M   496.7          35.0
SparseTSF (Ours)   0.92 K      12.71 M  125.2          31.3

Table 3 displays the comparative results. It is evident that SparseTSF significantly outperforms other models in terms of static metrics like the number of parameters and MACs, being over ten times smaller than the next best model. This characteristic allows SparseTSF to be deployed on devices with very limited computational resources. Furthermore, in terms of runtime metrics, Max Memory and Epoch Time, SparseTSF significantly outperforms other mainstream models, rivaling the existing lightweight models (i.e., DLinear and FITS). Herein, DLinear benefits from a shorter look-back length, achieving the lowest overhead, while FITS and SparseTSF incur additional overhead due to extra operations (i.e., Fourier transformation and resampling).
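SparseTSF's parameter count in Table 3 (0.92 K for Electricity with look-back and horizon of 720) can be reproduced from the closed form of Theorem 3.1 (proved in Appendix B): ⌈L/w⌉ × ⌈H/w⌉ weights for the shared linear layer plus 2⌈w/2⌉ + 1 for the aggregation convolution. A minimal pure-Python sketch of that count, written independently of the official code:

```python
import math

def sparsetsf_param_count(L: int, H: int, w: int) -> int:
    """Parameter count of SparseTSF (Theorem 3.1): a shared n -> m linear
    layer over n = ceil(L/w) downsampled points, plus a 1D aggregation
    convolution with kernel size 2*ceil(w/2) + 1 (both without bias)."""
    n = math.ceil(L / w)             # downsampled look-back length
    m = math.ceil(H / w)             # downsampled horizon length
    conv = 2 * math.ceil(w / 2) + 1  # kernel size of the sliding aggregation
    return n * m + conv

# Electricity setting from Table 3: L = H = 720, w = 24 -> 925 (~0.92 K)
print(sparsetsf_param_count(720, 720, 24))  # 925
# Smallest configuration, L = H = 96 with w = 24 -> 41
print(sparsetsf_param_count(96, 96, 24))    # 41
```

These two values match the 0.92 K entry in Table 3 and the smallest SparseTSF configuration reported in the paper's parameter-scale comparison.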
Table 4: Comparison of the scale of parameters on the Electricity dataset between SparseTSF and FITS models under different configurations of look-back length and forecast horizon, where SparseTSF operates with w = 24 and FITS employs COF at the 2nd harmonic.

                     SparseTSF (Ours)        FITS (2024)
Horizon \ Look-back  96   192  336  720      96     192    336    720
96                   41   57   81   145      840    1,218  2,091  5,913
192                  57   89   137  265      1,260  1,624  2,542  6,643
336                  81   137  221  445      1,890  2,233  3,280  7,665
720                  145  265  445  925      3,570  3,857  5,125  10,512

Additionally, we conducted a comprehensive comparison with FITS, a recent milestone work in the field of LTSF model lightweight progression. The results in Table 4 reveal that SparseTSF significantly surpasses FITS in terms of parameter scale under any input-output length configuration. Therefore, SparseTSF marks another significant advancement in the journey towards lightweight LTSF models.

4.4. Ablation Studies and Analysis

Beyond its ultra-lightweight characteristics, the Sparse technique also possesses a robust capability to extract periodic features, which we will delve further into in this section.

Effectiveness of the Sparse Technique. The Sparse technique, combined with a simple single-layer linear model, forms the core of our proposed model, SparseTSF. Additionally, the Sparse technique can be integrated with other foundational models, including the Transformer (Vaswani et al., 2017) and GRU (Cho et al., 2014) models.

Table 5: Ablation MSE results of the Sparse technique. All results are collected with a unified channel-independent and instance normalization strategy. The 'Boost' indicates the percentage of performance improvement after incorporating the Sparse technique.

Dataset       ETTh1                        ETTh2
Horizon       96     192    336    720     96     192    336    720
Linear        0.371  0.460  0.417  0.424   0.257  0.337  0.336  0.391
+Sparse       0.359  0.397  0.404  0.417   0.267  0.314  0.312  0.370
Boost         3.3%   13.8%  3.1%   1.7%    -3.9%  6.9%   7.1%   5.3%
Transformer   0.697  0.732  0.714  0.770   0.340  0.376  0.366  0.468
+Sparse       0.406  0.442  0.446  0.489   0.322  0.380  0.353  0.432
Boost         41.7%  39.6%  37.5%  36.5%   5.2%   -1.0%  3.6%   7.7%
GRU           0.415  0.529  0.512  0.620   0.296  0.345  0.363  0.454
+Sparse       0.356  0.391  0.437  0.455   0.282  0.332  0.356  0.421
Boost         14.1%  26.1%  14.7%  26.7%   4.8%   3.7%   1.9%   7.2%

As demonstrated in the results of Table 5, the incorporation of the Sparse technique significantly enhances the performance of all models, including Linear, Transformer, and GRU. Specifically, the Linear model showed an average improvement of 4.7%, the Transformer by 21.4%, and the GRU by 12.4%. These results emphatically illustrate the efficacy of the Sparse technique. Therefore, the Sparse technique can substantially improve the performance of base models in LTSF tasks.

Representation Learning of the Sparse Technique. In Section 3.3, we theoretically analyzed the reasons why the Sparse technique can enhance the performance of forecasting tasks. Here, we further reveal the role of the Sparse technique from a representation learning perspective. Figure 4 shows the distribution of normalized weights for both the trained Linear model and the SparseTSF model. The weight of the Linear model is an L×H matrix, which can be directly obtained. However, as the SparseTSF model is a sparse model, we need to acquire its equivalent weights. To do this, we first input H one-hot encoded vectors of length L into the SparseTSF model (when L equals H, this can be simplified to a diagonal matrix, i.e., diagonal elements are 1, and other elements are 0). We then obtain and transpose the corresponding output to get the equivalent L×H weight matrix of SparseTSF.
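The one-hot probing described above can be sketched with plain NumPy. The snippet below builds the cross-period sparse linear map directly (omitting the mean-normalization and aggregation convolution of the full model, which is an assumption of this sketch), then recovers its dense L×H equivalent by feeding the rows of the L×L identity matrix through it:

```python
import numpy as np

L, H, w = 96, 96, 24              # look-back, horizon, a priori period
n, m = L // w, H // w             # downsampled lengths (4 and 4 here)

rng = np.random.default_rng(0)
W = rng.standard_normal((n, m))   # the single shared n -> m linear layer

def sparse_forward(x):
    """Cross-period sparse prediction: reshape -> shared linear -> reshape.
    Maps (..., L) to (..., H). Mean-normalization and the aggregation
    convolution of the full model are omitted in this sketch."""
    X = x.reshape(*x.shape[:-1], n, w)        # n periods of length w
    Y = np.swapaxes(X, -1, -2) @ W            # (..., w, n) @ (n, m)
    return np.swapaxes(Y, -1, -2).reshape(*x.shape[:-1], H)

# Probe with one-hot inputs: since L == H, feed the identity matrix.
# Row i of W_eq is the model's response to a one-hot input at position i.
W_eq = sparse_forward(np.eye(L))              # equivalent L x H weight matrix

# Input position i connects only to output positions o sharing its phase
# (i % w == o % w), and the weight is taken from the shared matrix W:
assert W_eq[5, 2 * w + 5] == W[0, 2]          # same phase (5) -> shared weight
assert W_eq[5, 2 * w + 6] == 0.0              # different phase -> no connection
print(np.count_nonzero(W_eq))                 # 384 = L * m nonzeros out of 9216
```

The recovered matrix is exactly the "sparsely connected linear layer" interpretation: nonzero weights form evenly spaced stripes at spacing w, which is the pattern visualized in Figure 4.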
When L equals H, this process is formulated as:

$$\mathrm{weight}' = \mathrm{SparseTSF}\!\left(\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}\right)^{\top}. \tag{11}$$

From the visualization in Figure 4, two observations can be made: (i) The Linear model can learn evenly spaced weight distribution stripes (i.e., periodic features) from the data, indicating that a single linear layer can already extract the primary periodic characteristics from a univariate series with the CI strategy. These findings are consistent with previous research conclusions (Zeng et al., 2023). (ii) Compared to the Linear model, SparseTSF learns more distinct evenly spaced weight distribution stripes, indicating that SparseTSF has a stronger capability in extracting periodic features. This phenomenon aligns with the conclusions of Section 3.3. Therefore, the Sparse technique can enhance the model's performance in LTSF tasks by strengthening its ability to extract periodic features from data.

Figure 4: Visualization of normalized weights of the model trained on the ETTh1 dataset with both look-back length (X-axis) and forecast horizon (Y-axis) of 96. (a) Linear; (b) SparseTSF.

Impact of the Hyperparameter w. The Sparse technique relies on the manual setting of the hyperparameter w, which represents the a priori main period. Here, we delve into the influence of different values of w on the forecast outcomes. As indicated in the results from Table 6, SparseTSF exhibits optimal performance when w = 24, aligning with the intrinsic main period of the data. Conversely, when w diverges from 24, a slight decline in performance is observed. This suggests that the hyperparameter w should ideally be set consistent with the data's a priori main period.

Table 6: MSE results of SparseTSF on ETTh1 with varied hyperparameter w.

Horizon  SparseTSF (w=6)  SparseTSF (w=12)  SparseTSF (w=24)  SparseTSF (w=48)  FITS (2024)  DLinear (2023)  PatchTST (2023)
96       0.376            0.369             0.359             0.380             0.375        0.374           0.370
192      0.410            0.402             0.397             0.400             0.408        0.405           0.413
336      0.408            0.406             0.404             0.399             0.429        0.429           0.422
720      0.427            0.423             0.417             0.427             0.427        0.440           0.447
Avg.     0.405            0.400             0.394             0.402             0.410        0.412           0.413

In practical scenarios, datasets requiring long-term forecasting often exhibit inherent periodicity, such as daily or weekly cycles, common in domains like electricity, transportation, energy, and consumer goods consumption. Therefore, empirically identifying the predominant period and setting the appropriate w for such data is both feasible and straightforward. However, for data lacking clear periodicity and patterns, such as financial data, current LTSF models may not be effective (Zeng et al., 2023). Thus, the SparseTSF model may not be the preferred choice for these types of data. Nonetheless, we will further discuss the existing limitations and potential improvements of the SparseTSF model in Section 5.1.

Generalization Ability of the SparseTSF Model. The Sparse technique enhances the model's ability to extract periodic features from data. Therefore, the generalization capability of a trained SparseTSF model on different datasets with the same principal periodicity is promising. To investigate this, we further studied the cross-domain generalization performance of the SparseTSF model (i.e., training on a dataset from one domain and testing on a dataset from another). Specifically, we examined the performance from ETTh2 to ETTh1, which are datasets of the same type but collected from different machines, each with 7 variables. Additionally, we explored the performance from Electricity to ETTh1, where these datasets originate from different domains and have a differing number of variables (i.e., Electricity has 321 variables).
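As noted above, w should match the data's a priori main period (e.g., 24 for hourly data with a daily cycle). When the period is not known in advance, one simple empirical heuristic, not part of the paper's method, is to pick the lag that maximizes the length-normalized autocorrelation of the mean-removed series:

```python
import numpy as np

def dominant_period(x, max_lag):
    """Return the lag in [2, max_lag] maximizing the length-normalized
    autocorrelation of the mean-removed series -- a rough empirical
    estimate of the main period w."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    best_lag, best_ac = 2, -np.inf
    for k in range(2, max_lag + 1):
        ac = np.dot(x[:-k], x[k:]) / (len(x) - k)  # normalize by overlap length
        if ac > best_ac:
            best_lag, best_ac = k, ac
    return best_lag

# Hourly-style series with a dominant daily (period-24) cycle
t = np.arange(24 * 40)
x = np.sin(2 * np.pi * t / 24) + 0.1 * np.cos(2 * np.pi * t / 12)
print(dominant_period(x, 36))  # 24
```

Restricting `max_lag` below twice the expected period avoids ties at integer multiples of the true cycle; for multi-period data the same probe can be rerun on the downsampled subsequences, as done for the Traffic dataset in Appendix C.1.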
On datasets with different numbers of variables, models trained with traditional non-CI strategies (like Informer) cannot transfer, whereas those trained with CI strategies (like PatchTST) can, due to the decoupling of CI strategies from channel relationships. These datasets all have a daily periodicity, i.e., an a priori predominant period of w = 24.

Table 7: Comparison of generalization capabilities between SparseTSF and other mainstream models. 'Dataset A → Dataset B' indicates training and validation on the training and validation sets of Dataset A, followed by testing on the test set of Dataset B.

Dataset            ETTh2 → ETTh1               Electricity → ETTh1
Horizon            96     192    336    720    96     192    336    720
Informer (2021)    0.844  0.921  0.898  0.829  \      \      \      \
Autoformer (2021)  0.978  1.058  0.944  0.921  \      \      \      \
FEDformer (2022b)  0.878  0.927  0.939  0.967  \      \      \      \
FiLM (2022a)       0.876  0.904  0.919  0.925  \      \      \      \
PatchTST (2023)    0.449  0.478  0.482  0.476  0.400  0.424  0.475  0.472
DLinear (2023)     0.430  0.478  0.458  0.506  0.397  0.428  0.447  0.470
FITS (2024)        0.419  0.427  0.428  0.445  0.380  0.414  0.440  0.448
SparseTSF (Ours)   0.370  0.401  0.412  0.419  0.373  0.409  0.433  0.439

Experimental results, as shown in Table 7, reveal that SparseTSF outperforms other models in both similar-domain generalization (ETTh2 to ETTh1) and less similar domain generalization (Electricity to ETTh1). It is expected that performance on ETTh2 to ETTh1 would be superior to Electricity to ETTh1. Furthermore, in both scenarios, the generalization performance of SparseTSF is nearly on par with the performance of direct modeling in the source domain as shown in Table 2, and surpasses other baselines that model directly in the source domain. This robustly demonstrates the generalization capability of SparseTSF, indirectly proving the Sparse technique's ability to extract stable periodic features. Therefore, the SparseTSF model exhibits outstanding generalization capabilities.
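The channel-independent transfer described above works because a CI model only ever sees univariate windows: the channel dimension is folded into the batch, so a model trained on 321-channel Electricity applies unchanged to 7-channel ETTh1. A minimal NumPy sketch of this folding (the function name and shapes are illustrative, not from the official code):

```python
import numpy as np

def apply_channel_independent(model, batch):
    """Apply a univariate model (mapping (N, L) -> (N, H)) to a
    multivariate batch of shape (B, L, C) by treating every channel
    as an independent sample."""
    B, L, C = batch.shape
    flat = batch.transpose(0, 2, 1).reshape(B * C, L)  # (B*C, L)
    out = model(flat)                                  # (B*C, H)
    H = out.shape[-1]
    return out.reshape(B, C, H).transpose(0, 2, 1)     # (B, H, C)

# A toy univariate "model": repeat the last observed value H = 4 times
toy = lambda x: np.repeat(x[:, -1:], 4, axis=1)

y7 = apply_channel_independent(toy, np.ones((2, 24, 7)))      # ETT-like: 7 channels
y321 = apply_channel_independent(toy, np.ones((2, 24, 321)))  # Electricity-like
print(y7.shape, y321.shape)  # (2, 4, 7) (2, 4, 321)
```

The same wrapper serves both channel counts, which is exactly why non-CI models in Table 7 have no Electricity → ETTh1 entries while CI models do.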
This characteristic is highly beneficial for the application of the SparseTSF model in scenarios involving small samples and low-quality data.

5. Discussion

5.1. Limitations and Future Work

The SparseTSF model proposed in this paper excels in handling data with a stable main period, demonstrating enhanced feature extraction capabilities and an extremely lightweight architecture. However, there are two scenarios where SparseTSF may not be as effective:

1. Ultra-Long Periods: In cases involving ultra-long periods (for example, periods exceeding 100), the Sparse technique results in overly sparse parameter connections. Consequently, SparseTSF does not perform optimally in such scenarios.

2. Multiple Periods: SparseTSF may struggle with data that intertwines multiple periods, as the Sparse technique can only downsample and decompose one main period.

We have further investigated the performance of SparseTSF in these scenarios in Appendix C and concluded that: (1) in ultra-long period scenarios, a denser connected model would be a better choice; (2) SparseTSF can still perform excellently in some multi-period scenarios (such as daily periods superimposed with weekly periods).

Finally, one of our key future research directions is to further address these potential limitations by designing additional modules to enhance SparseTSF's ability, thus achieving a balance between performance and parameter size.

5.2. Differences Compared to Existing Methods

The Sparse technique proposed in this paper involves downsampling/upsampling to achieve periodicity/trend decoupling. It may share a similar idea with existing methods, as downsampling/upsampling and periodic/trend decomposition techniques are prevalent in related literature nowadays.
Specifically, we provide a detailed analysis of the differences with respect to N-HiTS (Challu et al., 2023) and OneShotSTL (He et al., 2023) as follows, and present the comparison results in Appendix D.4.

SparseTSF Compared to N-HiTS. N-HiTS incorporates novel hierarchical interpolation and multi-rate data sampling techniques to achieve better results (Challu et al., 2023). The downsampling and upsampling techniques proposed in SparseTSF are indeed quite different from those used in N-HiTS, including:

• The downsampling and upsampling in SparseTSF occur before and after the model's prediction process, respectively, whereas N-HiTS conducts these operations within internally stacked modules.

• SparseTSF's downsampling involves resampling by a factor of w to w subsequences of length L/w, which is technically equivalent to matrix reshaping and transposition, whereas N-HiTS employs downsampling through max-pooling.

• SparseTSF's upsampling involves transposing and reshaping the predicted subsequences back to the original sequence, whereas N-HiTS achieves upsampling through interpolation.

SparseTSF Compared to OneShotSTL. Seasonal-trend decomposition (STD) is a classical and powerful tool for time series forecasting, and OneShotSTL makes a great contribution to advancing the lightweight long-term forecasting process, featuring fast, lightweight, and powerful capabilities (He et al., 2023). However, SparseTSF differs significantly from OneShotSTL in several aspects:

• SparseTSF is a neural network model, while OneShotSTL is a non-neural-network method focused on online forecasting.

• OneShotSTL minimizes residuals and calculates trend and seasonal subseries separately from the original sequence with lengths of L, whereas our SparseTSF resamples the original sequence into w subseries of length L/w with a constant period w.
• OneShotSTL accelerates inference by optimizing the original computation for online processing, while SparseTSF achieves lightweighting by using parameter-sharing linear layers for prediction across all subseries.

6. Conclusion

In this paper, we introduce the Cross-Period Sparse Forecasting technique and the corresponding SparseTSF model. Through detailed theoretical analysis and experimental validation, we demonstrated the lightweight nature of the SparseTSF model and its capability to extract periodic features effectively. Achieving competitive or even surpassing performance compared with current state-of-the-art models at a minimal parameter scale, SparseTSF emerges as a strong contender for deployment in computation-resource-constrained environments. Additionally, SparseTSF exhibits potent generalization capabilities, opening new possibilities for applications in transferring to small-sample and low-quality data scenarios. SparseTSF stands as another milestone in the journey towards lightweight models in the field of long-term time series forecasting. Finally, we aim to further tackle the challenges associated with extracting features from ultra-long-periodic and multi-periodic data in the future, striving to achieve an optimal balance between model performance and parameter size.

Acknowledgements

This work is supported by Guangdong Major Project of Basic and Applied Basic Research (2019B030302002), National Natural Science Foundation of China (62072187), Guangzhou Development Zone Science and Technology Project (2021GH10) and the Major Key Project of PCL, China under Grant PCL2023A09.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.

References

Bai, S., Kolter, J. Z., and Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling.
arXiv preprint arXiv:1803.01271, 2018.

Challu, C., Olivares, K. G., Oreshkin, B. N., Ramirez, F. G., Canseco, M. M., and Dubrawski, A. Nhits: Neural hierarchical interpolation for time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 6989–6997, 2023.

Chang, C., Wang, W.-Y., Peng, W.-C., and Chen, T.-F. Llm4ts: Aligning pre-trained llms as data-efficient time-series forecasters, 2024.

Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Das, A., Kong, W., Leach, A., Mathur, S., Sen, R., and Yu, R. Long-term forecasting with tide: Time-series dense encoder. arXiv preprint arXiv:2304.08424, 2023.

Deng, J., Song, X., Tsang, I. W., and Xiong, H. The bigger the better? rethinking the effective model scale in long-term time series forecasting. arXiv preprint arXiv:2401.11929, 2024.

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Ekambaram, V., Jati, A., Nguyen, N., Sinthong, P., and Kalagnanam, J. Tsmixer: Lightweight mlp-mixer model for multivariate time series forecasting. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 459–469, 2023.

Franceschi, J.-Y., Dieuleveut, A., and Jaggi, M. Unsupervised scalable representation learning for multivariate time series. Advances in Neural Information Processing Systems, 32, 2019.

Han, L., Ye, H.-J., and Zhan, D.-C. The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting.
IEEE Transactions on Knowledge and Data Engineering, 2024.

He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022.

He, X., Li, Y., Tan, J., Wu, B., and Li, F. Oneshotstl: One-shot seasonal-trend decomposition for online time series anomaly detection and forecasting. arXiv preprint arXiv:2304.01506, 2023.

Huang, Q., Shen, L., Zhang, R., Cheng, J., Ding, S., Zhou, Z., and Wang, Y. Hdmixer: Hierarchical dependency with extendable patch for multivariate time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 12608–12616, 2024a.

Huang, Q., Shen, L., Zhang, R., Ding, S., Wang, B., Zhou, Z., and Wang, Y. Crossgnn: Confronting noisy multivariate time series via cross interaction refinement. Advances in Neural Information Processing Systems, 36, 2024b.

Jin, M., Wang, S., Ma, L., Chu, Z., Zhang, J. Y., Shi, X., Chen, P.-Y., Liang, Y., Li, Y.-F., Pan, S., et al. Time-llm: Time series forecasting by reprogramming large language models. arXiv preprint arXiv:2310.01728, 2023.

Kim, T., Kim, J., Tae, Y., Park, C., Choi, J.-H., and Choo, J. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2021.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Lin, S., Lin, W., Wu, W., Wang, S., and Wang, Y. Petformer: Long-term time series forecasting via placeholder-enhanced transformer. arXiv preprint arXiv:2308.04791, 2023a.

Lin, S., Lin, W., Wu, W., Zhao, F., Mo, R., and Zhang, H. Segrnn: Segment recurrent neural network for long-term time series forecasting. arXiv preprint arXiv:2308.11200, 2023b.

Liu, M., Zeng, A., Chen, M., Xu, Z., Lai, Q., Ma, L., and Xu, Q.
Scinet: Time series modeling and forecasting with sample convolution and interaction. Advances in Neural Information Processing Systems, 35:5816–5828, 2022a.

Liu, S., Yu, H., Liao, C., Li, J., Lin, W., Liu, A. X., and Dustdar, S. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations, 2022b.

Madsen, H. Time Series Analysis. CRC Press, 2007.

Nie, Y., H. Nguyen, N., Sinthong, P., and Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. In International Conference on Learning Representations, 2023.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.

Qiu, X., Hu, J., Zhou, L., Wu, X., Du, J., Zhang, B., Guo, C., Zhou, A., Jensen, C. S., Sheng, Z., et al. Tfb: Towards comprehensive and fair benchmarking of time series forecasting methods. arXiv preprint arXiv:2403.20150, 2024.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

Wang, H., Peng, J., Huang, F., Wang, J., Chen, J., and Xiao, Y. Micn: Multi-scale local and global context modeling for long-term series forecasting. In The Eleventh International Conference on Learning Representations, 2022.

Wen, Q., Zhou, T., Zhang, C., Chen, W., Ma, Z., Yan, J., and Sun, L. Transformers in time series: A survey. arXiv preprint arXiv:2202.07125, 2022.

Wu, H., Xu, J., Wang, J., and Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34:22419–22430, 2021.
Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., and Long, M. Timesnet: Temporal 2d-variation modeling for general time series analysis. In International Conference on Learning Representations, 2023.

Xu, Z., Zeng, A., and Xu, Q. Fits: Modeling time series with 10k parameters. In The Twelfth International Conference on Learning Representations, 2024.

Xue, H. and Salim, F. D. Promptcast: A new prompt-based learning paradigm for time series forecasting. IEEE Transactions on Knowledge and Data Engineering, 2023.

Zeng, A., Chen, M., Zhang, L., and Xu, Q. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 11121–11128, 2023.

Zhang, T., Zhang, Y., Cao, W., Bian, J., Yi, X., Zheng, S., and Li, J. Less is more: Fast multivariate time series forecasting with light sampling-oriented mlp structures. arXiv preprint arXiv:2207.01186, 2022.

Zhang, X., Zhong, C., Zhang, J., Wang, T., and Ng, W. W. Robust recurrent neural networks for time series forecasting. Neurocomputing, 526:143–157, 2023.

Zhou, H., Zhang, S., Peng, J., Zhang, S., Li, J., Xiong, H., and Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11106–11115, 2021.

Zhou, T., Ma, Z., Wen, Q., Sun, L., Yao, T., Yin, W., Jin, R., et al. Film: Frequency improved legendre memory model for long-term time series forecasting. Advances in Neural Information Processing Systems, 35:12677–12690, 2022a.

Zhou, T., Ma, Z., Wen, Q., Wang, X., Sun, L., and Jin, R. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, pp. 27268–27286. PMLR, 2022b.

A. More Details of SparseTSF

A.1. Overall Workflow

The complete workflow of SparseTSF is outlined in Algorithm 1, which takes a univariate historical look-back window x_{t−L+1:t} as input and outputs the corresponding forecast results x̄_{t+1:t+H}. By integrating the CI strategy, i.e., modeling multiple channels using a model with shared parameters, multivariate time series forecasting can be effectively achieved.

Algorithm 1: The Overall Pseudocode of SparseTSF
Require: historical look-back window x_{t−L+1:t} ∈ ℝ^L
Ensure: forecasting horizon x̄_{t+1:t+H} ∈ ℝ^H
1: e_t ← (1/L) Σ_{i=t−L+1}^{t} x_i            /* calculate the mean of the look-back window */
2: x_{t−L+1:t} ← x_{t−L+1:t} − e_t            /* subtract the mean from each element */
3: x_{t−L+1:t} ← Conv1d(x_{t−L+1:t}, 2⌈w/2⌉+1) + x_{t−L+1:t}   /* apply 1D convolution on the original window, with a residual connection */
4: X ← Reshape(x_{t−L+1:t}, (n, w))           /* reshape x_{t−L+1:t} into an n×w matrix */
5: Y ← Linear(X^⊤)^⊤                          /* transpose, apply the shared linear map n → m, and transpose back */
6: x̄_{t+1:t+H} ← Reshape(Y, (H))             /* reshape Y back into a length-H sequence */
7: x̄_{t+1:t+H} ← x̄_{t+1:t+H} + e_t           /* add the mean back to each element */

Additionally, intuitively, SparseTSF can be perceived as a sparsely connected linear layer performing sliding prediction across periods, as depicted in Figure 5.

Figure 5: Schematic illustration of SparseTSF.

A.2. Experimental Details

We implemented SparseTSF in PyTorch (Paszke et al., 2019) and trained it using the Adam optimizer (Kingma & Ba, 2014) for 30 epochs, with a learning rate decay of 0.8 after the initial 3 epochs, and early stopping with a patience of 5. The dataset splitting follows the procedures of FITS and Autoformer, where the ETT datasets are divided into proportions of 6:2:2, while the other datasets are split into proportions of 7:1:2. SparseTSF has minimal hyperparameters due to its simple design.
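Algorithm 1 can be sketched in a few lines of NumPy. The official implementation is in PyTorch; this sketch assumes L and H are divisible by w and replaces the learned 1D convolution of step 3 with a fixed moving-average kernel, purely for illustration:

```python
import numpy as np

def sparsetsf_forward(x, W, w):
    """One univariate forward pass following Algorithm 1.
    x: look-back window of length L; W: shared (n, m) linear weight;
    w: a priori main period. Returns a forecast of length H = m * w."""
    L, (n, m) = len(x), W.shape
    assert L == n * w
    e = x.mean()                                     # 1: mean of the window
    xc = x - e                                       # 2: subtract the mean
    k = 2 * (w // 2) + 1                             # kernel size 2*ceil(w/2)+1 (w even)
    kernel = np.ones(k) / k                          # fixed moving average, standing in
    xc = np.convolve(xc, kernel, mode="same") + xc   # 3: for the learned Conv1d, + residual
    X = xc.reshape(n, w)                             # 4: n periods of length w
    Y = (X.T @ W).T                                  # 5: shared linear n -> m per phase
    return Y.reshape(m * w) + e                      # 6-7: flatten, add the mean back

# L = 720, H = 96, w = 24  ->  a 30 x 4 shared weight, 120 + 25 = 145 parameters
rng = np.random.default_rng(0)
W = rng.standard_normal((30, 4))
y = sparsetsf_forward(np.sin(2 * np.pi * np.arange(720) / 24), W, 24)
print(y.shape)  # (96,)
```

Note how step 7 guarantees that a constant input is forecast as that same constant regardless of W, since steps 2–5 then operate on an all-zero residual; this is the instance-normalization behavior of the mean subtraction.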
The period w is set to the inherent cycle of the data (e.g., w = 24 for ETTh1) or to a smaller value if the data has an extremely long cycle (e.g., w = 4 for ETTm1). The choice of batch size depends on the size of the data samples (i.e., the number of channels). For datasets with fewer than 100 channels (such as ETTh1), the batch size is set to 256, while for datasets with fewer than 300 channels (such as Electricity), the batch size is set to 128. This setting maximizes the utilization of GPU parallel computing capabilities while avoiding GPU out-of-memory issues (i.e., with an NVIDIA RTX 4090, 24GB). Additionally, the learning rate needs to be set relatively large (i.e., 0.02) due to the very small number of learnable parameters in SparseTSF. The complete details can be found in our official repository (footnote 5).

The baseline results in this paper are from the first version of the FITS paper (footnote 6), where FITS adopted a uniform input length of 720 (we also use an input length of 720 for fair comparison with it). Here, the input lengths of other baselines are set to be consistent with their respective official input lengths.

B. Theoretical Proofs

Proof of Theorem 3.1

Proof. The SparseTSF model consists of two main components: a 1D convolutional layer for sliding aggregation and a linear layer for sparse sliding prediction. The number of parameters in the 1D convolutional layer (without bias) is determined by the kernel size, which is $2 \times \lceil w/2 \rceil + 1$. For the linear layer (without bias), the number of parameters is the product of the input and output sizes, which are $n = \lceil L/w \rceil$ and $m = \lceil H/w \rceil$, respectively. Thus, the total number of parameters in the linear layer is $n \times m$. By combining the parameters from both layers, the total count is:

$$n \times m + 2 \times \lceil w/2 \rceil + 1 = \lceil L/w \rceil \times \lceil H/w \rceil + 2 \times \lceil w/2 \rceil + 1.$$

Proof of Lemma 3.3

Proof. Given the original time series $x_{t-L+1:t}$ with length L, the downsampling process segments it into w subsequences, each of which contains every w-th data point from the original series.
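The training recipe above (Adam, 30 epochs, learning rate 0.02 decayed by 0.8 after the initial 3 epochs) can be expressed as a small schedule function. The exact decay rule below is our reading of the text, so treat it as an assumption rather than the official configuration:

```python
def lr_at_epoch(epoch, base_lr=0.02, gamma=0.8, warm_epochs=3):
    """Learning rate for a 0-indexed epoch: constant for the first
    `warm_epochs` epochs, then multiplied by `gamma` every epoch."""
    if epoch < warm_epochs:
        return base_lr
    return base_lr * gamma ** (epoch - warm_epochs + 1)

print([round(lr_at_epoch(e), 5) for e in range(6)])
# [0.02, 0.02, 0.02, 0.016, 0.0128, 0.01024]
```

In PyTorch the same schedule could be wired up with `torch.optim.lr_scheduler.LambdaLR`, passing a lambda that returns the multiplicative factor for each epoch.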
The length of each downsampled subsequence, denoted as n, is therefore $\lceil L/w \rceil$, as it collects one data point from every w time steps of the original series of length L.

(Footnote 5: https://github.com/lss-1138/SparseTSF. Footnote 6: https://arxiv.org/pdf/2307.03756v1.pdf.)

The SparseTSF model then applies a forecasting function f on each of these downsampled subsequences. The forecasting function f is designed to predict future values of the time series based on its past values. Specifically, it predicts the future subsequence $x'_{t+1:t+m}$ using the past subsequence $x'_{t-n+1:t}$. Here, m is the length of the forecast horizon for the downsampled subsequences and is given by $\lceil H/w \rceil$, where H is the original forecast horizon.

Therefore, the SparseTSF model effectively reformulates the original forecasting task of predicting $x_{t+1:t+H}$ from $x_{t-L+1:t}$ into a series of smaller tasks. Each of these smaller tasks involves using the downsampled past subsequence $x'_{t-n+1:t}$ to predict the downsampled future subsequence $x'_{t+1:t+m}$. This is represented mathematically as:

$$x'_{t+1:t+m} = f(x'_{t-n+1:t}). \tag{12}$$

Proof of Theorem 3.4

Proof. Theorem 3.4 is established based on the assumption of a time series dataset that can be decomposed into a periodic component P(t) and a trend component T(t), as defined in Definition 3.2. This decomposition implies that any time point in the series X(t) can be expressed as the sum of its periodic and trend components, i.e., X(t) = P(t) + T(t). Therefore, for the downsampled subsequences $x'_{t-n+1:t}$ and $x'_{t+1:t+m}$ based on a periodicity w, we have:

$$x'_{t-n+1:t} = p'_{t-n+1:t} + t'_{t-n+1:t}, \tag{13}$$
$$x'_{t+1:t+m} = p'_{t+1:t+m} + t'_{t+1:t+m}. \tag{14}$$
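The downsampling in Lemma 3.3 is exactly a reshape-and-transpose: a window of length L = n·w becomes w subsequences of length n, where the i-th subsequence contains every w-th point starting at phase i. A quick NumPy illustration with toy sizes:

```python
import numpy as np

L, w = 12, 4             # look-back length and period
n = L // w               # length of each downsampled subsequence

x = np.arange(L)                 # [0, 1, ..., 11]
subseqs = x.reshape(n, w).T      # shape (w, n): one row per phase

print(subseqs)
# [[ 0  4  8]
#  [ 1  5  9]
#  [ 2  6 10]
#  [ 3  7 11]]
```

Each row is one of the w downsampled subsequences on which the shared forecasting function f operates; the inverse upsampling is simply the transpose and reshape applied in reverse.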
Hence, by combining with Lemma 3.3, the task formulation of the SparseTSF model can be expressed as:

$$p'_{t+1:t+m} + t'_{t+1:t+m} = f(p'_{t-n+1:t} + t'_{t-n+1:t}). \tag{15}$$

Due to the periodic nature of P(t) as defined in Equation 5, for any two points i and j in the downsampled sequence (where i, j ∈ [t−n+1 : t+m]), the periodic component remains constant, i.e., $p'_i = p'_j$. This indicates that the task of the SparseTSF model is to predict future trend components while utilizing a constant periodic component as a reference.

C. Case Study

C.1. Multi-Period Scenarios

In this section, we specifically examine the performance of the SparseTSF model in scenarios involving multiple periods. Specifically, we study its performance on the Traffic dataset, as traffic flow data not only exhibits distinct daily periodicity but also demonstrates significant weekly cycles. For instance, the morning and evening rush hours represent intra-day cycles, while the different patterns between weekdays and weekends exemplify weekly cycles.

Figure 6 displays the autocorrelation in the original and day-period downsampled traffic flow data. It can be observed that even after downsampling with a daily period, the data still exhibits a clear weekly cycle (w′ = 7). Under these circumstances, with SparseTSF only decoupling the primary daily cycle, will it outperform the original fully connected linear model?

Figure 6: Comparison of autocorrelation in original and downsampled subsequences for the last channel in the Traffic dataset. (a) Original; (b) Downsampled.

The results, as shown in Figure 7, indicate that the SparseTSF model captures stronger daily and weekly periodic patterns (evident as more pronounced equidistant stripes) compared to the original approach. This is because, in the original method, a single linear layer is tasked with extracting both daily and weekly periodic patterns.
In contrast, the SparseTSF model, by decoupling the daily cycle, simplifies the task for its inherent linear layer to only extract the remaining weekly periodic features. Therefore, even in scenarios with multiple periods, SparseTSF can still achieve remarkable performance.

C.2. Ultra-Long Period Scenarios

This section is dedicated to examining the SparseTSF model's performance in scenarios characterized by ultra-long periods. Specifically, our focus is on the ETTm1 & ETTm2 [7] and Weather [8] datasets, as detailed in Table 8. These datasets are distinguished by their primary periods extending up to 96 and 144, respectively. We evaluate the SparseTSF model's performance under various settings of the hyperparameter w.

7 https://github.com/zhouhaoyi/ETDataset
8 https://www.bgc-jena.mpg.de/wetter

[Figure 7: Visualization of normalized weights of the model trained on the Traffic dataset with both look-back length (X-axis) and forecast horizon (Y-axis) of 336. Panels: (a) Linear, (b) SparseTSF.]

Table 8: Summary of datasets with ultra-long periods.

Datasets    ETTm1    ETTm2    Weather
Channels    7        7        21
Frequency   15 mins  15 mins  10 mins
Timesteps   69,680   69,680   52,696

As illustrated in Table 9, when w is set to a large value (for instance, 144, which aligns with the intrinsic primary period of the Weather dataset), the performance of the SparseTSF model tends to deteriorate. This decline is attributed to the excessive sparsity in connections caused by a large w, limiting the information available for the model to base its predictions on, thereby impairing its performance. Interestingly, as w decreases, there is a noticeable improvement in the SparseTSF model's performance.
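The weekly cycle that survives daily downsampling (Figure 6b) is easy to reproduce on synthetic data. The sketch below is an assumption-laden toy, not the paper's plotting code: it builds an hourly series from a daily sine plus a hypothetical day-of-week offset, downsamples with stride w = 24, and checks that the lag-7 autocorrelation of the result stays high.

```python
import math

def autocorr(x, lag):
    """Sample autocorrelation of x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

# Hourly series: daily sine (period 24) + weekly day-of-week pattern
weekly = [0.0, 0.1, 0.1, 0.2, 0.3, 1.0, 1.2]   # hypothetical day offsets
x = [math.sin(2 * math.pi * t / 24) + weekly[(t // 24) % 7]
     for t in range(24 * 7 * 8)]                # 8 weeks of data

down = x[::24]                 # daily downsampling (w = 24)
print(autocorr(down, 7))       # high: the weekly cycle (w' = 7) remains
print(autocorr(down, 3))       # much weaker off-period correlation
```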
This observation suggests that employing denser connections within the SparseTSF framework could be a more viable option for datasets with longer periods.

Furthermore, an intriguing phenomenon is observed when w = 1, which corresponds to the scenario of employing a fully connected linear layer for prediction. The performance in this case is inferior compared to sparse connection-based predictions. This indicates that an appropriate level of sparsity in connections (even when the sparse interval does not match the dataset's inherent primary period) can enhance the model's predictive accuracy. This could be due to the redundant nature of time series data, especially when data sampling is dense. In such cases, executing sparse predictions might help eliminate some redundant information. However, these findings necessitate further investigation and exploration in future work.

Table 9: MSE results of SparseTSF on ultra-long period datasets with varied hyperparameter w. The forecast horizon is set as 720.

Dataset   w=144  w=72   w=48   w=24   w=12   w=6    w=2    w=1
ETTm1     0.450  0.450  0.422  0.422  0.421  0.415  0.415  0.429
ETTm2     0.375  0.371  0.373  0.352  0.354  0.349  0.349  0.357
Weather   0.332  0.329  0.325  0.321  0.319  0.319  0.318  0.322

The findings above suggest that employing a denser sparse strategy would be beneficial in such cases. Therefore, we present in Table 10 a comparative performance of SparseTSF against other models under the setting of w = 4, where SparseTSF ranks within the top 3 in most cases. In this scenario, SparseTSF remains significantly lighter compared to other mainstream models. This indicates that the sparse forecasting technique not only effectively reduces parameter size but also enhances prediction accuracy in most scenarios.

D. More Results and Analysis

D.1. Comparison Results after Fixing the Code Bug

Recent research has discovered a long-standing bug in the popular codebase used in the field since the introduction of the Informer (Zhou et al., 2021).
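The trade-off in Table 9 has a direct structural reading: the only large weight in SparseTSF is the shared linear map of shape (L/w) × (H/w), so the parameter count shrinks quadratically as w grows. A back-of-the-envelope helper (this ignores bias terms and the small 1D-convolution kernel the model also carries, so the numbers are approximate):

```python
def sparse_linear_params(L, H, w):
    """Weights in SparseTSF's shared linear layer: (L/w) x (H/w)."""
    assert L % w == 0 and H % w == 0, "look-back/horizon must divide by w"
    return (L // w) * (H // w)

# Look-back 720, horizon 720, daily period w=24: roughly the "1k
# parameters" of the paper title.
print(sparse_linear_params(720, 720, 24))   # -> 900
# Denser connections for ultra-long-period data (w=4, cf. Table 10)
print(sparse_linear_params(720, 720, 4))    # -> 32400
# w=1 degenerates to a fully connected linear layer
print(sparse_linear_params(720, 720, 1))    # -> 518400
```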
This bug, which affected the calculation of test set metrics, caused the data that did not fill an entire batch to be discarded (Qiu et al., 2024). As a result, the batch size setting influenced the results. Theo- retically, the larger the batch size, the more test data might be discarded, leading to incorrect results. This bug signif- icantly improved the performance on ETTh1 and ETTh2 datasets when the batch size was large, while the impact on other datasets was relatively minor. To reassess the performance of SparseTSF, we present the performance of SparseTSF and existing models after fixing 14 SparseTSF: Modeling LTSF with 1k Parameters Table 10: MSE results on ultra-long period datasets comparing SparseTSF ( w= 4) with other mainstream models. The ranking of SparseTSF’s performance is shown in parentheses. Dataset ETTm1 ETTm2 Weather Horizon 96 192 336 720 96 192 336 720 96 192 336 720 Informer (2021) 0.672 0.795 1.212 1.166 0.365 0.533 1.363 3.379 0.300 0.598 0.578 1.059 Autoformer (2021) 0.505 0.553 0.621 0.671 0.255 0.281 0.339 0.433 0.266 0.307 0.359 0.419 Pyraformer (2022b) 0.543 0.557 0.754 0.908 0.435 0.730 1.201 3.625 0.896 0.622 0.739 1.004 FEDformer (2022b) 0.379 0.426 0.445 0.543 0.203 0.269 0.325 0.421 0.217 0.276 0.339 0.403 TimesNet (2023) 0.338 0.374 0.410 0.478 0.187 0.249 0.321 0.408 0.172 0.219 0.280 0.365 PatchTST (2023) 0.293 0.333 0.369 0.416 0.166 0.223 0.274 0.362 0.149 0.194 0.245 0.314 DLinear (2023) 0.299 0.335 0.369 0.425 0.167 0.221 0.274 0.368 0.176 0.218 0.262 0.323 FITS (2024) 0.305 0.339 0.367 0.418 0.164 0.217 0.269 0.347 0.145 0.188 0.236 0.308 SparseTSF (ours) 0.314(4) 0.343(4) 0.369(2) 0.418(2) 0.165(2) 0.218(2) 0.272(2) 0.35(2) 0.172(3) 0.215(3) 0.26(3) 0.318(3) Table 11: MSE results of multivariate long-term time series forecasting comparing SparseTSF with other mainstream models after fixing code bug. The top two results are highlighted in bold . 
Dataset ETTh1 ETTh2 Electricity Traffic Horizon 96 192 336 720 96 192 336 720 96 192 336 720 96 192 336 720 FEDformer (2022b) 0.375 0.427 0.459 0.484 0.340 0.433 0.508 0.480 0.188 0.197 0.212 0.244 0.573 0.611 0.621 0.630 TimesNet (2023) 0.384 0.436 0.491 0.521 0.340 0.402 0.452 0.462 0.168 0.184 0.198 0.220 0.593 0.617 0.629 0.640 PatchTST (2023) 0.385 0.413 0.440 0.456 0.274 0.338 0.367 0.391 0.129 0.149 0.166 0.210 0.366 0.388 0.398 0.457 DLinear (2023) 0.384 0.443 0.446 0.504 0.282 0.350 0.414 0.588 0.140 0.153 0.169 0.204 0.413 0.423 0.437 0.466 FITS (2024) 0.382 0.417 0.436 0.433 0.272 0.333 0.355 0.378 0.145 0.159 0.175 0.212 0.398 0.409 0.421 0.457 SparseTSF (ours) 0.362 0.403 0.434 0.426 0.294 0.339 0.359 0.383 0.138 0.151 0.166 0.205 0.389 0.398 0.411 0.448 this bug in Table 11. Here, we reran FITS under the condi- tions of lookback L= 720 and cutoff frequency COF = 5 (where the parameter count of SparseTSF is still tens of times smaller than that of FITS) for a fair comparison with SparseTSF. The results for other baselines were sourced from FITS’ reproduction, where they reran the baselines’ results after fixing the bug (Xu et al., 2024). As shown, after fixing the code bug, SparseTSF still achieves impres- sive performance with minimal overhead, aligning with the conclusions of Table 2. D.2. Impacts of Varying Look-Back Length The look-back length determines the richness of historical information the model can utilize. Generally, models are expected to perform better with longer input lengths if they possess robust long-term dependency modeling capabilities. Table 12 presents the performance of SparseTSF at different look-back lengths. 
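The test-metric bug discussed in D.1 comes from evaluating with a data loader that drops the last incomplete batch. The toy sketch below (not the Informer codebase itself) shows how a drop_last-style loop silently discards up to batch_size − 1 test samples, so the reported average depends on the batch size.

```python
def evaluate(errors, batch_size, drop_last):
    """Average per-sample error computed over batches.
    drop_last=True mimics the bug: the incomplete final batch is discarded,
    so its samples never enter the test metric."""
    batches = [errors[i:i + batch_size]
               for i in range(0, len(errors), batch_size)]
    if drop_last and len(batches[-1]) < batch_size:
        batches = batches[:-1]          # the bug: tail samples dropped
    kept = [e for b in batches for e in b]
    return sum(kept) / len(kept), len(kept)

# 100 test samples; the last 10 happen to be the hardest ones
errs = [1.0] * 90 + [5.0] * 10
print(evaluate(errs, batch_size=32, drop_last=True))   # -> (1.25, 96), looks better
print(evaluate(errs, batch_size=32, drop_last=False))  # -> (1.4, 100), correct
```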
It can be observed that two phenomena occur: (i) longer look-back windows perform better, indicating SparseTSF’s ability in long-term dependency modeling, and (ii) the per- formance of the ETTh1 & ETTh2 datasets remains relatively stable across different look-back windows, while the per- formance of the Traffic & Electricity datasets varies signifi- cantly, especially with a look-back of 96, where the accuracy notably decreases. In fact, we can further discuss the reasons behind the second point. As illustrated in Figure 3, ETTh1 only exhibits asignificant daily periodic pattern ( w= 24 ). In this case, look-back lengths of 96 can achieve good results because they fully encompass the daily periodic pattern. However, as shown in Figure 7, Traffic not only has a significant daily periodic pattern ( w= 24 ) but also a noticeable weekly periodic pattern ( w= 168 ). In this case, a look-back of 96 cannot cover the entire weekly periodic pattern, leading to a significant performance drop. This underscores the necessity of sufficiently long look-back lengths (at least covering the entire cycle length) for accurate prediction. Given the extreme lightweight nature of SparseTSF, we strongly recommend providing sufficiently long look-back windows whenever feasible. D.3. Impacts of Instance Normalization Instance Normalization (IN) strategy has become popular in mainstream methods. We also employ this strategy in SparseTSF to enhance its performance on datasets with significant distribution drift. We showcase the impact of the IN strategy in Table 13. It can be observed that IN is necessary for smaller datasets, namely ETTh1 and ETTh2 datasets. However, its effect is relatively limited on larger datasets such as Traffic and Electricity datasets. It must be clarified that, although the IN strategy is one of the factors contributing to SparseTSF’s success, it is not the key differentiator of SparseTSF’s core contributions compared to other models. 
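The IN strategy in Table 13 is commonly implemented by removing each input window's own statistics before the model and restoring them afterwards. A minimal sketch, assuming the simple mean-only variant (the SparseTSF code may additionally divide by the standard deviation, or subtract only the last value):

```python
def forecast_with_in(window, model):
    """Instance Normalization wrapper: normalize each window by its own
    mean, forecast, then de-normalize, so level shifts between train and
    test windows (distribution drift) are reduced.
    `model` maps a normalized window to a list of forecast values."""
    mu = sum(window) / len(window)
    normalized = [v - mu for v in window]
    return [p + mu for p in model(normalized)]

# Toy model: predicts the mean of its normalized input (always ~0)
naive = lambda xs: [sum(xs) / len(xs)]
# Two windows with the same shape but a shifted level
print(forecast_with_in([1.0, 2.0, 3.0], naive))        # -> [2.0]
print(forecast_with_in([101.0, 102.0, 103.0], naive))  # -> [102.0]
```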
15 SparseTSF: Modeling LTSF with 1k Parameters Table 12: MSE results of SparseTSF with varied look-back lengths. Dataset ETTh1 ETTh2 Electricity Traffic HorizonLook-back96 192 336 720 96 192 336 720 96 192 336 720 96 192 336 720 96 0.380 0.371 0.393 0.354 0.288 0.285 0.272 0.278 0.209 0.160 0.146 0.138 0.672 0.455 0.412 0.383 192 0.433 0.434 0.418 0.398 0.363 0.346 0.323 0.315 0.202 0.166 0.154 0.147 0.608 0.453 0.415 0.388 336 0.447 0.420 0.390 0.405 0.366 0.335 0.314 0.311 0.217 0.184 0.172 0.164 0.609 0.468 0.428 0.403 720 0.451 0.426 0.413 0.418 0.407 0.389 0.372 0.371 0.259 0.223 0.210 0.205 0.650 0.493 0.462 0.446 Avg. 0.428 0.413 0.404 0.394 0.356 0.339 0.320 0.319 0.222 0.183 0.171 0.163 0.635 0.467 0.429 0.405 Table 13: Ablation results of IN strategy in SparseTSF. Dataset ETTh1 ETTh2 Electricity Traffic Horizon w/ IN w/o IN w/ IN w/o IN w/ IN w/o IN w/ IN w/o IN 96 0.359 0.37 0.267 0.327 0.138 0.138 0.382 0.382 192 0.397 0.413 0.314 0.426 0.146 0.146 0.388 0.387 336 0.404 0.431 0.312 0.482 0.164 0.163 0.402 0.401 720 0.417 0.462 0.37 0.866 0.203 0.198 0.445 0.444 D.4. Comparison Results with N-HiTS and OneShotSTL Table 14: Comparison Results with N-HiTS and OneShot- STL. In this comparison, SparseTSF and N-HiTS are evalu- ated based on multivariate prediction results (MSE), while SparseTSF and OneShotSTL are compared using univariate prediction results (MAE). Their results are sourced from their respective official papers. Dataset Horizon Nhit SparseTSF OneShotSTL SparseTSF ETTm296 0.176 0.165 0.211 0.187 192 0.245 0.218 0.244 0.233 336 0.295 0.272 0.273 0.268 720 0.401 0.350 0.321 0.324 Electricity96 0.147 0.138 0.331 0.314 192 0.167 0.146 0.355 0.334 336 0.186 0.164 0.389 0.366 720 0.243 0.203 0.444 0.416 Traffic96 0.402 0.382 0.181 0.179 192 0.42 0.388 0.181 0.175 336 0.448 0.402 0.182 0.184 720 0.539 0.445 0.199 0.203 Here, we present the comparison results between SparseTSF and N-HiTS and OneShotSTL in Table 14. 
It can be ob- served that in most cases, SparseTSF outperforms these methods, demonstrating the superiority of the SparseTSF approach. 16 | 5 | 1 | The SparseTSF model has less than 1,000 parameters, making it significantly lighter than most deep learning models typically trained on time series data. Given the dataset sizes (up to 26,304 timesteps for the Electricity dataset with multiple channels), it's reasonable to estimate that training may require about 5 hours on a single RTX 4090 GPU, which has 24GB of memory. The batch size can be reasonably set to accommodate the model's lightweight nature while ensuring efficient training, likely between 16 to 64 depending on specific requirements for memory utilization and model performance. Since there were no indications of requiring multiple GPUs or distributed training, a single GPU configuration suffices to handle the training within the estimated time frame. Additionally, considering the model has been designed for efficient memory usage with resizing and downsampling techniques, the hardware can accommodate it comfortably. Therefore, it is trainable under the 8-hour mark on a single GPU. | yes | Yes | Time Series | SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters | 2024-05-02 0:00:00 | https://github.com/lss-1138/SparseTSF | 1 | https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy | 2min | https://colab.research.google.com/drive/1OgVLdCqFrODVu7AdHeI2_N-1qGZe2AcD?usp=sharing | Yes | -- Just download the dataset and run. |
Peptides-struct | GCN+ | [] | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00 | https://arxiv.org/abs/2502.09263v1 | [
"https://github.com/LUOyk1999/GNNPlus"
] | {'MAE': '0.2421 ± 0.0016'} | [
"MAE"
] | Given the following paper and codebase:
Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
Codebase: https://github.com/LUOyk1999/GNNPlus
Improve the GCN+ model on the Peptides-struct dataset. The result
should improve on the following metrics: {'MAE': '0.2421 ± 0.0016'}. You must use only the codebase provided.
| Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1 2Lei Shi*1Xiao-Ming Wu*2 Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expres- siveness, issues like over-smoothing and over- squashing, and challenges in capturing long-range dependencies, while Graph Transformers (GTs) are considered superior due to their global atten- tion mechanisms. Literature frequently suggests that GTs outperform GNNs, particularly in graph- level tasks such as graph classification and re- gression. In this study, we explore the untapped potential of GNNs through an enhanced frame- work, GNN+, which integrates six widely used techniques: edge feature integration, normaliza- tion, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic evaluation of three classic GNNs—GCN, GIN, and GatedGCN—enhanced by the GNN+frame- work across 14 well-known graph-level datasets. Our results show that, contrary to the prevailing belief, classic GNNs excel in graph-level tasks, securing top three rankings across all datasets and achieving first place in eight, while also demonstrating greater efficiency than GTs. This highlights the potential of simple GNN architec- tures, challenging the belief that complex mech- anisms in GTs are essential for superior graph- level performance. Our source code is available at https://github.com/LUOyk1999/tunedGNN-G. 1. Introduction Graph machine learning addresses both graph-level tasks and node-level tasks, as illustrated in Figure 1. These tasks fundamentally differ in their choice of the basic unit for dataset composition, splitting, and training, with graph-level tasks focusing on the entire graph, while node-level tasks focus on individual nodes. Graph-level tasks (Dwivedi et al., 1Beihang University2The Hong Kong Polytechnic University. 
*Corresponding authors: Lei Shi <{leishi, luoyk }@buaa.edu.cn >, Xiao-Ming Wu <xiao-ming.wu@polyu.edu.hk >. Preprint. Figure 1. Differences between graph-level and node-level tasks. 2023; Hu et al., 2020; Luo et al., 2023b;a) often involve the classification of relatively small molecular graphs in chem- istry (Morris et al., 2020) or the prediction of protein proper- ties in biology (Dwivedi et al., 2022). In contrast, node-level tasks typically involve large social networks (Tang et al., 2009) or citation networks (Yang et al., 2016), where the primary goal is node classification. This distinction in the fundamental unit of dataset leads to differences in method- ologies, training strategies, and application domains. Message-passing Graph Neural Networks (GNNs) (Gilmer et al., 2017), which iteratively aggregate information from local neighborhoods to learn node representations, have be- come the predominant approach for both graph-level and node-level tasks (Niepert et al., 2016; Kipf & Welling, 2017; Veliˇckovi ´c et al., 2018; Xu et al., 2018; Bresson & Laurent, 2017; Wu et al., 2020). Despite their widespread success, GNNs exhibit several inherent limitations, including re- stricted expressiveness (Xu et al., 2018; Morris et al., 2019), over-smoothing (Li et al., 2018; Chen et al., 2020), over- squashing (Alon & Yahav, 2020), and a limited capacity to capture long-range dependencies (Dwivedi et al., 2022). 
A prevalent perspective is that Graph Transformers (GTs) (M¨uller et al., 2023; Min et al., 2022; Hoang et al., 2024), as an alternative to GNNs, leverage global attention mech- anisms that enable each node to attend to all others (Yun et al., 2019; Dwivedi & Bresson, 2020), effectively model- 1arXiv:2502.09263v1 [cs.LG] 13 Feb 2025 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence ing long-range interactions and addressing issues such as over-smoothing, over-squashing, and limited expressiveness (Kreuzer et al., 2021; Ying et al., 2021; Zhang et al., 2023; Luo et al., 2023c; 2024b). However, the quadratic com- plexity of global attention mechanisms limits the scalability of GTs in large-scale, real-world applications (Behrouz & Hashemi, 2024; Sancak et al., 2024; Ding et al., 2024). Moreover, it has been noted that many state-of-the-art GTs (Chen et al., 2022; Ramp ´aˇsek et al., 2022; Shirzad et al., 2023; Ma et al., 2023) still rely—either explicitly or implic- itly—on the message passing mechanism of GNNs to learn local node representations, thereby enhancing performance. Recent studies (Luo et al., 2024a; 2025a;b) have shown that, contrary to common belief, classic GNNs such as GCN (Kipf & Welling, 2017), GAT (Veli ˇckovi ´c et al., 2018), and GraphSAGE (Hamilton et al., 2017) can achieve perfor- mance comparable to, or even exceeding, that of state-of-the- art GTs for node-level tasks. However, a similar conclusion has not yet been established for graph-level tasks. While T¨onshoff et al. (2023) conducted pioneering research demon- strating that tuning a few hyperparameters can significantly enhance the performance of classic GNNs, their results indi- cate that these models still do not match the overall perfor- mance of GTs. Furthermore, their investigation is limited to the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022). 
This raises an important question: “Can classic GNNs also excel in graph-level tasks?” To thoroughly investigate this question, we introduce GNN+, an enhanced GNN framework that incorporates es- tablished techniques into the message-passing mechanism, to effectively address graph-level tasks. As illustrated in Fig. 2, GNN+integrates six widely used techniques: the incorporation of edge features (Gilmer et al., 2017), normal- ization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014), residual connections (He et al., 2016), feed-forward networks (FFN) (Vaswani et al., 2017), and positional en- coding (Vaswani et al., 2017). Each technique serves as a hyperparameter that can be tuned to optimize performance. We systematically evaluate 3 classic GNNs—GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bres- son & Laurent, 2017)—enhanced by the GNN+frame- work across 14 well-known graph-level datasets from GNN Benchmark (Dwivedi et al., 2023), LRGB (Dwivedi et al., 2022), and OGB (Hu et al., 2020). The results demonstrate that the enhanced versions of classic GNNs match or even outperform state-of-the-art (SOTA) GTs, achieving rankings in the top three , including first place in eight datasets , while exhibiting superior efficiency. These findings pro- vide a positive answer to the previously posed question, suggesting that the true potential of GNNs for graph-level applications has been previously underestimated, and the GNN+framework effectively unlocks this potential whileaddressing their inherent limitations. Our ablation study also highlights the importance of each technique used in GNN+and offers valuable insights for future research. 2. Classic GNNs for Graph-level Tasks Define a graph as G= (V,E,X,E), where Vis the set of nodes, and E ⊆ V × V is the set of edges. The node feature matrix is X∈R|V|×dV, where |V|is the number of nodes, anddVis the dimension of the node features. 
The edge feature matrix is E ∈ R^{|E|×d_E}, where |E| is the number of edges and d_E is the dimension of the edge features. Let A ∈ R^{|V|×|V|} denote the adjacency matrix of G.

Message-passing Graph Neural Networks (GNNs) compute node representations h_v^l at each layer l via a message-passing mechanism, defined by Gilmer et al. (2017):

h_v^l = \mathrm{UPDATE}^l\left(h_v^{l-1}, \mathrm{AGG}^l\left\{h_u^{l-1} \mid u \in \mathcal{N}(v)\right\}\right),  (1)

where N(v) represents the neighboring nodes adjacent to v, AGG^l is the message aggregation function, and UPDATE^l is the update function. Initially, each node v is assigned a feature vector h_v^0 = x_v ∈ R^d. The function AGG^l is then used to aggregate information from the neighbors of v to update its representation. The output of the last layer L, i.e., GNN(v, A, X) = h_v^L, is the representation of v produced by the GNN. In this work, we focus on three classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017), which differ in their approach to learning the node representation h_v^l.

Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), the vanilla GCN model, is formulated as:

h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l\Big),  (2)

where \hat{d}_v = 1 + \sum_{u \in \mathcal{N}(v)} 1, \sum_{u \in \mathcal{N}(v)} 1 denotes the degree of node v, W^l is the trainable weight matrix in layer l, and σ is the activation function, e.g., ReLU(·) = max(0, ·).

Graph Isomorphism Networks (GIN) (Xu et al., 2018) learn node representations through a different approach:

h_v^l = \mathrm{MLP}^l\Big((1 + \epsilon) \cdot h_v^{l-1} + \sum_{u \in \mathcal{N}(v)} h_u^{l-1}\Big),  (3)

where ε is a constant, typically set to 0, and MLP^l denotes a multi-layer perceptron, which usually consists of 2 layers.
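As a concrete illustration of the GIN update (Eq. 3), the layer can be written in a few lines of pure Python with scalar node features for readability. The `mlp` here is a hypothetical single ReLU layer standing in for the 2-layer MLP^l used in practice:

```python
def gin_update(h, neighbors, eps=0.0, mlp=lambda x: max(0.0, 2.0 * x)):
    """GIN layer over all nodes (Eq. 3), scalar features.

    h:         list of node features h^{l-1}
    neighbors: neighbors[v] = list of nodes adjacent to v
    mlp:       stand-in for MLP^l (hypothetical 1-layer ReLU net)
    """
    return [mlp((1 + eps) * h[v] + sum(h[u] for u in neighbors[v]))
            for v in range(len(h))]

# Path graph 0-1-2 with features [1, 2, 3]
h = [1.0, 2.0, 3.0]
nbrs = [[1], [0, 2], [1]]
print(gin_update(h, nbrs))   # -> [6.0, 12.0, 10.0]
```

With eps = 0 the update is a plain sum over the node and its neighbors, which is what makes GIN as expressive as the 1-WL test.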
Residual Gated Graph Convolutional Networks (Gat- edGCN) (Bresson & Laurent, 2017) enhance traditional graph convolutions by incorporating gating mechanisms, improving adaptability and expressiveness: hl v=hl−1 vWl 1+X u∈N(v)ηv,u⊙hl−1 uWl 2, (4) 2 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence where ηv,u=σ(hl−1 vWl 3+hl−1 uWl 4)is the gating func- tion, and σdenotes the sigmoid activation function. This gating function determines how much each neighboring node contributes to updating the representation of the cur- rent node. The matrices Wl 1,Wl 2,Wl 3,Wl 4are trainable weight matrices specific to the layer l. Graph-level tasks treat the entire graph, rather than indi- vidual nodes or edges, as the fundamental unit for dataset composition, splitting, and training. Formally, given a la- beled graph dataset Γ ={(Gi,yi)}n i=1, each graph Giis associated with a label vector yi, representing either cat- egorical labels for classification or continuous values for regression. Next, the dataset Γis typically split into training, validation, and test sets, denoted as Γ = Γ train∪Γval∪Γtest. Graph-level tasks encompass inductive prediction tasks that operate on entire graphs, as well as on individual nodes or edges (Dwivedi et al., 2022), with each corresponding to a distinct label vector yi. Each type of task requires a tai- lored graph readout function R, which aggregates the output representations to compute the readout result, expressed as: hreadout i = Rn hL v:v∈ Vio , (5) where Virepresents the set of nodes in the graph Gi. For example, for graph prediction tasks , which aim to make predictions about the entire graph, the readout function R often operates as a global mean pooling function. Finally, for any graph Gi, the readout result is passed through a prediction head g(·)to obtain the predicted label ˆyi= g(hreadout i). The training objective is to minimize the total lossL(θ) =P Gi∈Γtrainℓ(ˆyi,yi)w.r.t. 
all graphs in the training set Γ_train, where y_i represents the ground-truth label of G_i and θ denotes the trainable GNN parameters.

3. GNN+: Enhancing Classic GNNs for Graph-level Tasks

We propose an enhancement to classic GNNs for graph-level tasks by incorporating six popular techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks (FFN), and positional encoding. The enhanced framework, GNN+, is illustrated in Figure 2.

3.1. Edge Feature Integration

Edge features were initially incorporated into some GNN frameworks (Gilmer et al., 2017; Hu et al., 2019) by directly integrating them into the message-passing process to enhance information propagation between nodes. Following this practice, GraphGPS (Rampášek et al., 2022) and subsequent GTs encode edge features within their local modules to enrich node representations.

[Figure 2: The architecture of GNN+.]

Taking GCN (Eq. 2) as an example, the edge features are integrated into the message-passing process as follows:

h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big),  (6)

where W_e^l is the trainable weight matrix in layer l, and e_{uv} is the feature vector of the edge between u and v.

3.2. Normalization

Normalization techniques play a critical role in stabilizing the training of GNNs by mitigating the effects of covariate shift, where the distribution of node embeddings changes across layers during training. By normalizing node embeddings at each layer, the training process becomes more stable, enabling the use of higher learning rates and achieving faster convergence (Cai et al., 2021).

Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Layer Normalization (LN) (Ba et al., 2016) are widely used techniques, typically applied to the output of each layer before the activation function σ(·). Here, we use BN:

h_v^l = \sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big).  (7)

3.3.
Dropout

Dropout (Srivastava et al., 2014), a technique widely used in convolutional neural networks (CNNs) to address overfitting by reducing co-adaptation among hidden neurons (Hinton et al., 2012; Yosinski et al., 2014), has also been found to be effective in addressing similar issues in GNNs (Shu et al., 2022), where the co-adaptation effects propagate and accumulate via message passing among different nodes. Typically, dropout is applied to the embeddings after activation:

h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big).  (8)

3.4. Residual Connection

Residual connections (He et al., 2016) significantly enhance CNN performance by directly connecting the input of a layer to its output, thus alleviating the problem of vanishing gradient. They were first adopted by the vanilla GCN (Kipf & Welling, 2017) and have since been incorporated into subsequent works such as GatedGCN (Bresson & Laurent, 2017) and DeepGCNs (Li et al., 2019). Formally, residual connections can be integrated into GNNs as follows:

h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big) + h_v^{l-1}.  (9)

While deeper networks, such as deep CNNs (He et al., 2016; Huang et al., 2017), are capable of extracting more complex features, GNNs encounter challenges like over-smoothing (Li et al., 2018), where deeper models lead to indistinguishable node representations. Consequently, most GNNs are shallow, typically with 2 to 5 layers. However, by incorporating residual connections, we show that deeper GNNs, ranging from 3 to 20 layers, can achieve strong performance.

3.5. Feed-Forward Network

GTs incorporate a feed-forward network (FFN) as a crucial component within each of their layers. The FFN enhances the model's ability to perform complex feature transformations and introduces non-linearity, thereby increasing the network's expressive power.
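The composition order of the GNN+ techniques (message passing with edge features, then normalization, activation, dropout, residual, then an FFN) can be sketched end to end. This is a shape-level toy, not the GNN+ implementation: features are scalars, BatchNorm is replaced by a per-graph standardization, dropout is a precomputed 0/1 mask, and the edge list is directed.

```python
import math

def gnnplus_layer(h, edges, Wn, We, Wf1, Wf2, drop_mask):
    """One GNN+-style layer in scalar form: message passing with edge
    features -> normalization (per-graph standardization standing in for
    BatchNorm) -> ReLU -> dropout mask -> residual -> FFN with its own
    residual. Toy sketch of the composition order only.

    edges: directed (u, v, edge_feature) triples.
    """
    n = len(h)
    deg = [1 + sum(1 for u, v, _ in edges if v == i) for i in range(n)]
    # message passing with a self-loop term and an edge-feature term
    msg = [h[i] / deg[i] * Wn for i in range(n)]
    for u, v, e in edges:
        msg[v] += h[u] / math.sqrt(deg[u] * deg[v]) * Wn + e * We
    # normalize -> activate -> drop -> add residual
    mu = sum(msg) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in msg) / n) or 1.0
    z = [max(0.0, (x - mu) / sd) * d + h[i]
         for i, (x, d) in enumerate(zip(msg, drop_mask))]
    # feed-forward network with residual
    return [max(0.0, x * Wf1) * Wf2 + x for x in z]

# With all transform weights zero, the two residual paths make the layer
# an exact identity -- the property that lets GNN+ stack 3-20 layers.
print(gnnplus_layer([1.0, 2.0, 3.0], [(0, 1, 0.5)],
                    0.0, 0.0, 0.0, 5.0, [1, 1, 1]))   # -> [1.0, 2.0, 3.0]
print(gnnplus_layer([1.0, 2.0, 3.0], [(0, 1, 0.5)],
                    0.3, 0.2, 0.5, 0.5, [1, 0, 1]))
```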
Inspired by this, we propose appending a fully-connected FFN at the end of each layer of GNNs, defined as: FFN(h) =BN(σ(hWl FFN 1)Wl FFN 2+h), (10) where Wl FFN 1andWl FFN 2are the trainable weight matrices of the FFN at the l-th GNN layer. The node embeddings output by the FFN are then computed as: hl v=FFN(Dropout (σ(BN(X u∈N(v)∪{v}1pˆduˆdvhl−1 uWl +euvWl e))) +hl−1 v). (11) 3.6. Positional Encoding Positional encoding (PE) was introduced in the Transformer model (Vaswani et al., 2017) to represent the positions of tokens within a sequence for language modeling. In GTs,Table 1. Overview of the datasets used for graph-level tasks. Dataset # graphs Avg. # nodes Avg. # edges Task Type ZINC 12,000 23.2 24.9 Graph regression MNIST 70,000 70.6 564.5 Graph classification CIFAR10 60,000 117.6 941.1 Graph classification PATTERN 14,000 118.9 3,039.3 Inductive node cls. CLUSTER 12,000 117.2 2,150.9 Inductive node cls. Peptides-func 15,535 150.9 307.3 Graph classification Peptides-struct 15,535 150.9 307.3 Graph regression PascalVOC-SP 11,355 479.4 2,710.5 Inductive node cls. COCO-SP 123,286 476.9 2,693.7 Inductive node cls. MalNet-Tiny 5,000 1,410.3 2,859.9 Graph classification ogbg-molhiv 41,127 25.5 27.5 Graph classification ogbg-molpcba 437,929 26.0 28.1 Graph classification ogbg-ppa 158,100 243.4 2,266.1 Graph classification ogbg-code2 452,741 125.2 124.2 Graph classification PE is used to incorporate graph positional or structural infor- mation. The encodings are typically added or concatenated to the input node features xvbefore being fed into the GTs. 
Various PE methods have been proposed, such as Laplacian Positional Encoding (LapPE) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), Weisfeiler-Lehman Positional Encod- ing (WLPE) (Zhang et al., 2020), Random Walk Structural Encoding (RWSE) (Li et al., 2020; Dwivedi et al., 2021; Ramp ´aˇsek et al., 2022), Learnable Structural and Positional Encodings (LSPE) (Dwivedi et al., 2021), and Relative Ran- dom Walk Probabilities (RRWP) (Ma et al., 2023). Follow- ing the practice, we use RWSE, one of the most efficient PE methods, to improve the performance of GNNs as follows: xv= [xv∥xRWSE v]WPE, (12) where [·∥·]denotes concatenation, xRWSE v represents the RWSE of node v, andWPEis the trainable weight matrix. 4. Assessment: Experimental Setup Datasets, Table 1 . We use widely adopted graph-level datasets in our experiments, including ZINC ,MNIST , CIFAR10 ,PATTERN , and CLUSTER from the GNN Benchmark (Dwivedi et al., 2023); Peptides-func ,Peptides- struct ,PascalVOC-SP ,COCO-SP , and MalNet-Tiny from Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021); and ogbg-molhiv ,ogbg- molpcba ,ogbg-ppa , and ogbg-code2 from Open Graph Benchmark (OGB) (Hu et al., 2020). We follow their re- spective standard evaluation protocols including the splits and metrics. For further details, refer to the Appendix A.2. Baselines. Our main focus lies on classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018; Hu et al., 2019), GatedGCN (Bresson & Laurent, 2017), the SOTA GTs: GT (2020), GraphTrans (2021), SAN (2021), Graphormer (2021), SAT (2022), EGT (2022), GraphGPS (2022; 2023), GRPE (2022), Graphormer-URPE (2022), Graphormer-GD (2023), Specformer (2023), LGI- GT (2023), GPTrans-Nano (2023b), Graph ViT/MLP-Mixer (2023), NAGphormer (2023a), DIFFormer (2023), MGT 4 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 2. Test performance on five benchmarks from (Dwivedi et al., 2023) (%). 
Shown is the mean ±s.d. of 5 runs with different random seeds.+denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for ZINC, PATTERN, and CLUSTER, and ∼100K for MNIST and CIFAR10. The top 1st,2ndand3rdresults are highlighted. ZINC MNIST CIFAR10 PATTERN CLUSTER # graphs 12,000 70,000 60,000 14,000 12,000 Avg. # nodes 23.2 70.6 117.6 118.9 117.2 Avg. # edges 24.9 564.5 941.1 3039.3 2150.9 Metric MAE↓ Accuracy ↑ Accuracy ↑ Accuracy ↑ Accuracy ↑ GT (2020) 0.226 ±0.014 90.831 ±0.161 59.753 ±0.293 84.808 ±0.068 73.169 ±0.622 SAN (2021) 0.139 ±0.006 – – 86.581 ±0.037 76.691 ±0.650 Graphormer (2021) 0.122 ±0.006 – – – – SAT (2022) 0.094 ±0.008 – – 86.848 ±0.037 77.856 ±0.104 EGT (2022) 0.108 ±0.009 98.173 ±0.087 68.702 ±0.409 86.821 ±0.020 79.232 ±0.348 GraphGPS (2022) 0.070 ±0.004 98.051 ±0.126 72.298 ±0.356 86.685 ±0.059 78.016 ±0.180 GRPE (2022) 0.094 ±0.002 – – 87.020 ±0.042 – Graphormer-URPE (2022) 0.086 ±0.007 – – – – Graphormer-GD (2023) 0.081 ±0.009 – – – – Specformer (2023) 0.066 ±0.003 – – – – LGI-GT (2023) – – – 86.930 ±0.040 – GPTrans-Nano (2023b) – – – 86.731 ±0.085 – Graph ViT/MLP-Mixer (2023) 0.073 ±0.001 98.460 ±0.090 73.960 ±0.330 – – Exphormer (2023) – 98.414 ±0.038 74.754 ±0.194 86.734 ±0.008 – GRIT (2023) 0.059 ±0.002 98.108 ±0.111 76.468 ±0.881 87.196 ±0.076 80.026 ±0.277 GRED (2024) 0.077 ±0.002 98.383 ±0.012 76.853 ±0.185 86.759 ±0.020 78.495 ±0.103 GEAET (2024) – 98.513 ±0.086 76.634 ±0.427 86.993 ±0.026 – TIGT (2024) 0.057 ±0.002 98.231 ±0.132 73.963 ±0.361 86.681 ±0.062 78.025 ±0.223 Cluster-GT (2024a) 0.071 ±0.004 – – – – GMN (2024) – 98.391 ±0.182 74.560 ±0.381 87.090 ±1.260 – Graph-Mamba (2024) – 98.420 ±0.080 73.700 ±0.340 86.710 ±0.050 76.800 ±0.360 GCN 0.367 ±0.011 90.705 ±0.218 55.710 ±0.381 71.892 ±0.334 68.498 ±0.976 GCN+0.076 ±0.00979.3%↓98.382 ±0.0958.5%↑69.824 ±0.41325.4%↑87.021 ±0.09521.1%↑77.109 ±0.87212.6%↑ GIN 0.526 ±0.051 96.485 ±0.252 55.255 ±1.527 85.387 
±0.136 64.716 ±1.553 GIN+0.065 ±0.00487.6%↓98.285 ±0.1031.9%↑69.592 ±0.28725.9%↑86.842 ±0.0481.7%↑ 74.794 ±0.21315.6%↑ GatedGCN 0.282 ±0.015 97.340 ±0.143 67.312 ±0.311 85.568 ±0.088 73.840 ±0.326 GatedGCN+0.077 ±0.00572.7%↓98.712 ±0.1371.4%↑77.218 ±0.38114.7%↑87.029 ±0.0371.7%↑ 79.128 ±0.2357.1%↑ Time (epoch) of GraphGPS 21s 76s 64s 32s 86s Time (epoch) of GCN+7s 60s 40s 19s 29s (2023), DRew (2023), Exphormer (2023), GRIT (2023), GRED (2024), GEAET (2024), Subgraphormer (2024), TIGT (2024), GECO (2024), GPNN (2024), Cluster-GT (2024a), and the SOTA graph state space models (GSSMs): GMN (2024), Graph-Mamba (2024), GSSC (2024b). Fur- thermore, various other GTs exist in related surveys (Hoang et al., 2024; Shehzad et al., 2024; M ¨uller et al., 2023), empir- ically shown to be inferior to the GTs we compared against for graph-level tasks. We report the performance results of baselines primarily from (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023), with the remaining obtained from their re- spective original papers or official leaderboards whenever possible, as those results are obtained by well-tuned models. Hyperparameter Configurations. We conduct hyperpa- rameter tuning on 3 classic GNNs, consistent with the hy- perparameter search space of GraphGPS (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023). Specifically, we utilize the AdamW optimizer (Loshchilov, 2017) with a learning rate from{0.0001,0.0005,0.001}and an epoch limit of 2000. As discussed in Section 3, we focus on whether to use the edge feature module, normalization (BN), residual connections, FFN, PE (RWSE), and dropout rates from {0.05,0.1,0.15,0.2,0.3}, the number of layers from 3 to 20. Considering the large number of hyperparameters anddatasets, we do not perform an exhaustive search. Addition- ally, we retrain baseline GTs using the same hyperparam- eter search space and training environments as the classic GNNs. 
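The search space described above can be enumerated with a simple grid. The sketch below lists the values stated in the text (learning rates, dropout rates, layer counts, and two of the toggleable modules); the traversal code itself is illustrative, not the paper's actual tuning script:

```python
import itertools

# Hyperparameter grid matching the values given in the text; the two
# boolean toggles (use_ffn, use_rwse) are illustrative stand-ins for
# the module on/off choices described in Section 3.
search_space = {
    "lr": [0.0001, 0.0005, 0.001],
    "dropout": [0.05, 0.1, 0.15, 0.2, 0.3],
    "num_layers": list(range(3, 21)),   # 3 to 20 layers
    "use_ffn": [True, False],
    "use_rwse": [True, False],
}

def iter_configs(space):
    """Yield every configuration in the Cartesian product of the grid."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(iter_configs(search_space))
# 3 * 5 * 18 * 2 * 2 = 1080 candidates -- large enough that, as noted,
# an exhaustive search over all hyperparameters and datasets is avoided.
```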
Since the retrained results did not surpass those in their original papers, we present the results from those sources. GNN+ denotes the enhanced version. We report mean scores and standard deviations over 5 independent runs with different random seeds. Detailed hyperparameters are provided in Appendix A.

5. Assessment: Results and Findings

5.1. Overall Performance

We evaluate the performance of the enhanced versions of 3 classic GNNs across 14 well-known graph-level datasets. The enhanced versions of classic GNNs achieved state-of-the-art performance, ranking in the top three across 14 datasets, including first place in 8 of them, while also demonstrating superior efficiency. This suggests that the GNN+ framework effectively harnesses the potential of classic GNNs for graph-level tasks and successfully mitigates their inherent limitations.

Table 3. Test performance on five datasets from the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). + denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for all. Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny # graphs 15,535 15,535 11,355 123,286 5,000 Avg. # nodes 150.9 150.9 479.4 476.9 1,410.3 Avg. # edges 307.3 307.3 2,710.5 2,693.7 2,859.9 Metric Avg.
Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ GT (2020) 0.6326 ±0.0126 0.2529 ±0.0016 0.2694 ±0.0098 0.2618 ±0.0031 – SAN (2021) 0.6439 ±0.0075 0.2545 ±0.0012 0.3230 ±0.0039 0.2592 ±0.0158 – GraphGPS (2022) 0.6535 ±0.0041 0.2500 ±0.0005 0.3748 ±0.0109 0.3412 ±0.0044 0.9350 ±0.0041 GraphGPS (2023) 0.6534 ±0.0091 0.2509 ±0.0014 0.4440 ±0.0065 0.3884 ±0.0055 0.9350 ±0.0041 NAGphormer (2023a) – – 0.4006 ±0.0061 0.3458 ±0.0070 – DIFFormer (2023) – – 0.3988 ±0.0045 0.3620 ±0.0012 – MGT (2023) 0.6817 ±0.0064 0.2453 ±0.0025 – – – DRew (2023) 0.7150 ±0.0044 0.2536 ±0.0015 0.3314 ±0.0024 – – Graph ViT/MLP-Mixer (2023) 0.6970 ±0.0080 0.2449 ±0.0016 – – – Exphormer (2023) 0.6258 ±0.0092 0.2512 ±0.0025 0.3446 ±0.0064 0.3430 ±0.0108 0.9402 ±0.0021 GRIT (2023) 0.6988 ±0.0082 0.2460 ±0.0012 – – – Subgraphormer (2024) 0.6415 ±0.0052 0.2475 ±0.0007 – – – GRED (2024) 0.7133 ±0.0011 0.2455 ±0.0013 – – – GEAET (2024) 0.6485 ±0.0035 0.2547 ±0.0009 0.3933 ±0.0027 0.3219 ±0.0052 – TIGT (2024) 0.6679 ±0.0074 0.2485 ±0.0015 – – – GECO (2024) 0.6975 ±0.0025 0.2464 ±0.0009 0.4210 ±0.0080 0.3320 ±0.0032 – GPNN (2024) 0.6955 ±0.0057 0.2454 ±0.0003 – – – Graph-Mamba (2024) 0.6739 ±0.0087 0.2478 ±0.0016 0.4191 ±0.0126 0.3960 ±0.0175 0.9340 ±0.0027 GSSC (2024b) 0.7081 ±0.0062 0.2459 ±0.0020 0.4561 ±0.0039 – 0.9406 ±0.0064 GCN 0.6860 ±0.0050 0.2460 ±0.0007 0.2078 ±0.0031 0.1338 ±0.0007 0.8100 ±0.0081 GCN+0.7261 ±0.0067 5.9%↑0.2421 ±0.0016 1.6%↓0.3357 ±0.0087 62.0%↑0.2733 ±0.0041 104.9% ↑0.9354 ±0.0045 15.5%↑ GIN 0.6621 ±0.0067 0.2473 ±0.0017 0.2718 ±0.0054 0.2125 ±0.0009 0.8898 ±0.0055 GIN+0.7059 ±0.0089 6.6%↑0.2429 ±0.0019 1.8%↓0.3189 ±0.0105 17.3%↑0.2483 ±0.0046 16.9%↑ 0.9325 ±0.0040 4.8%↑ GatedGCN 0.6765 ±0.0047 0.2477 ±0.0009 0.3880 ±0.0040 0.2922 ±0.0018 0.9223 ±0.0065 GatedGCN+0.7006 ±0.0033 3.6%↑0.2431 ±0.0020 1.9%↓0.4263 ±0.0057 9.9%↑ 0.3802 ±0.0015 30.1%↑ 0.9460 ±0.0057 2.6%↑ Time (epoch) of GraphGPS 6s 6s 17s 213s 46s Time (epoch) of GCN+6s 6s 12s 162s 6s GNN Benchmark, Table 2. 
We observe that our GNN+ implementation substantially enhances the performance of classic GNNs, with the most significant improvements on ZINC, PATTERN, and CLUSTER. On MNIST and CIFAR10, GatedGCN+ outperforms SOTA models such as GEAET and GRED, securing top rankings.

Long-Range Graph Benchmark (LRGB), Table 3. The results reveal that classic GNNs can achieve strong performance across LRGB datasets. Specifically, GCN+ excels on the Peptides-func and Peptides-struct datasets. On the other hand, GatedGCN+ achieves the highest accuracy on MalNet-Tiny. Furthermore, on PascalVOC-SP and COCO-SP, GatedGCN+ significantly improves performance, securing the third-best model ranking overall. These results highlight the potential of classic GNNs in capturing long-range interactions in graph-level tasks.

Open Graph Benchmark (OGB), Table 4. Finally, we test our method on four OGB datasets. As shown in Table 4, GatedGCN+ consistently ranks among the top three models and achieves top performance on three out of the four datasets. On ogbg-ppa, GatedGCN+ shows an improvement of approximately 9%, ranking first on the OGB leaderboard. On ogbg-molhiv and ogbg-molpcba, GatedGCN+ even matches the performance of Graphormer and EGT pre-trained on other datasets. Additionally, on ogbg-code2, GatedGCN+ secures the third-highest performance, underscoring the potential of GNNs for large-scale OGB datasets.

5.2. Ablation Study

To examine the unique contribution of each technique used in GNN+, we conduct a series of ablation analyses by selectively removing components such as the edge feature module (Edge.), normalization (Norm), dropout, residual connections (RC), FFN, and PE from GCN+, GIN+, and GatedGCN+. The effect of these ablations is assessed across the GNN Benchmark (see Table 5), LRGB, and OGB (see Table 6) datasets.
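Among the ablated components, PE is instantiated as RWSE (Eq. 12 above): the return probabilities diag(M^k) of a k-step random walk, concatenated with node features and projected by a trainable weight. A minimal NumPy sketch — illustrative only; `rwse`, `W_PE`, and the toy graph are our own names, not the paper's code — might look like:

```python
import numpy as np

def rwse(A, k_max=8):
    """Random Walk Structural Encoding: for each node v, the return
    probabilities (M^k)_vv for k = 1..k_max, where M = D^-1 A is the
    random-walk transition matrix."""
    deg = A.sum(axis=1)
    M = A / np.maximum(deg, 1)[:, None]   # row-normalized adjacency
    Mk = np.eye(A.shape[0])
    feats = []
    for _ in range(k_max):
        Mk = Mk @ M
        feats.append(np.diag(Mk))         # return probability after k steps
    return np.stack(feats, axis=1)        # shape (num_nodes, k_max)

# Toy 4-cycle graph (bipartite, so odd-step return probabilities are 0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

X = np.random.default_rng(0).normal(size=(4, 16))   # node features
P = rwse(A, k_max=8)

# Eq. (12): x_v <- [x_v || x_v^RWSE] W_PE
W_PE = np.random.default_rng(1).normal(size=(16 + 8, 16))
X_pe = np.concatenate([X, P], axis=1) @ W_PE
```

On the 4-cycle, a walk can only return to its start after an even number of steps, so the first RWSE coordinate is 0 and the second is 0.5 for every node.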
Our ablation study demonstrates that each module incor- porated in GNN+—including edge feature integration, normalization, dropout, residual connections, FFN, and PE—is indispensable ; the removal of any single com- ponent results in a degradation of overall performance. Observation 1: The integration of edge features is par- ticularly effective in molecular and image superpixel datasets, where these features carry critical information. In molecular graphs such as ZINC and ogbg-molhiv, edge features represent chemical bond information, which is es- sential for molecular properties. Removing this module leads to a significant performance drop. In protein networks ogbg-ppa, edges represent normalized associations between proteins. Removing the edge feature module results in a sub- 6 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 4. Test performance in four benchmarks from Open Graph Benchmark (OGB) (Hu et al., 2020).+denotes the enhanced version, while the baseline results were obtained from their respective original papers.†indicates the use of additional pretraining datasets, included here for reference only and excluded from ranking. ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # graphs 41,127 437,929 158,100 452,741 Avg. # nodes 25.5 26.0 243.4 125.2 Avg. # edges 27.5 28.1 2,266.1 124.2 Metric AUROC ↑ Avg. 
Precision ↑ Accuracy ↑ F1 score ↑ GT (2020) – – 0.6454 ±0.0033 0.1670 ±0.0015 GraphTrans (2021) – 0.2761 ±0.0029 – 0.1830 ±0.0024 SAN (2021) 0.7785 ±0.2470 0.2765 ±0.0042 – – Graphormer (pre-trained) (2021) 0.8051 ±0.0053†– – – SAT (2022) – – 0.7522 ±0.0056 0.1937 ±0.0028 EGT (pre-trained) (2022) 0.8060 ±0.0065†0.2961 ±0.0024†– – GraphGPS (2022) 0.7880 ±0.0101 0.2907 ±0.0028 0.8015 ±0.0033 0.1894 ±0.0024 Specformer (2023) 0.7889 ±0.0124 0.2972 ±0.0023 – – Graph ViT/MLP-Mixer (2023) 0.7997 ±0.0102 – – – Exphormer (2023) 0.7834 ±0.0044 0.2849 ±0.0025 – – GRIT (2023) 0.7835 ±0.0054 0.2362 ±0.0020 – – Subgraphormer (2024) 0.8038 ±0.0192 – – – GECO (2024) 0.7980 ±0.0200 0.2961 ±0.0008 0.7982 ±0.0042 0.1915 ±0.0020 GSSC (2024b) 0.8035 ±0.0142 – – – GCN 0.7606 ±0.0097 0.2020 ±0.0024 0.6839 ±0.0084 0.1507 ±0.0018 GCN+0.8012 ±0.0124 5.4%↑0.2721 ±0.0046 34.7%↑0.8077 ±0.0041 18.1%↑0.1787 ±0.0026 18.6%↑ GIN 0.7835 ±0.0125 0.2266 ±0.0028 0.6892 ±0.0100 0.1495 ±0.0023 GIN+0.7928 ±0.0099 1.2%↑0.2703 ±0.0024 19.3%↑0.8107 ±0.0053 17.7%↑0.1803 ±0.0019 20.6%↑ GatedGCN 0.7687 ±0.0136 0.2670 ±0.0020 0.7531 ±0.0083 0.1606 ±0.0015 GatedGCN+0.8040 ±0.0164 4.6%↑0.2981 ±0.0024 11.6%↑0.8258 ±0.0055 9.7%↑ 0.1896 ±0.0024 18.1%↑ Time (epoch/s) of GraphGPS 96s 196s 276s 1919s Time (epoch/s) of GCN+16s 91s 178s 476s Table 5. Ablation study on GNN Benchmark (Dwivedi et al., 2023) (%). - indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance. ZINC MNIST CIFAR10 PATTERN CLUSTER Metric MAE↓ Accuracy ↑Accuracy ↑Accuracy ↑Accuracy ↑ GCN+0.076 ±0.009 98.382 ±0.095 69.824 ±0.413 87.021 ±0.095 77.109 ±0.872 (-) Edge. 
0.135 ±0.004 98.153 ±0.042 68.256 ±0.357 86.854 ±0.054 – (-) Norm 0.107 ±0.011 97.886 ±0.066 60.765 ±0.829 52.769 ±0.874 16.563 ±0.134 (-) Dropout – 97.897 ±0.071 65.693 ±0.461 86.764 ±0.045 74.926 ±0.469 (-) RC 0.159 ±0.016 95.929 ±0.169 58.186 ±0.295 86.059 ±0.274 16.508 ±0.615 (-) FFN 0.132 ±0.021 97.174 ±0.063 63.573 ±0.346 86.746 ±0.088 72.606 ±1.243 (-) PE 0.127 ±0.010 – – 85.597 ±0.241 75.568 ±1.147 GIN+0.065 ±0.004 98.285 ±0.103 69.592 ±0.287 86.842 ±0.048 74.794 ±0.213 (-) Edge. 0.122 ±0.009 97.655 ±0.075 68.196 ±0.107 86.714 ±0.036 65.895 ±3.425 (-) Norm 0.096 ±0.006 97.695 ±0.065 64.918 ±0.059 86.815 ±0.855 72.119 ±0.359 (-) Dropout – 98.214 ±0.064 66.638 ±0.873 86.836 ±0.053 73.316 ±0.355 (-) RC 0.137 ±0.031 97.675 ±0.175 64.910 ±0.102 86.645 ±0.125 16.800 ±0.088 (-) FFN 0.104 ±0.003 11.350 ±0.008 60.582 ±0.395 58.511 ±0.016 62.175 ±2.895 (-) PE 0.123 ±0.014 – – 86.592 ±0.049 73.925 ±0.165 GatedGCN+0.077 ±0.005 98.712 ±0.137 77.218 ±0.381 87.029 ±0.037 79.128 ±0.235 (-) Edge. 0.119 ±0.001 98.085 ±0.045 72.128 ±0.275 86.879 ±0.017 76.075 ±0.845 (-) Norm 0.088 ±0.003 98.275 ±0.045 71.995 ±0.445 86.942 ±0.023 78.495 ±0.155 (-) Dropout 0.089 ±0.003 98.225 ±0.095 70.383 ±0.429 86.802 ±0.034 77.597 ±0.126 (-) RC 0.106 ±0.002 98.442 ±0.067 75.149 ±0.155 86.845 ±0.025 16.670 ±0.307 (-) FFN 0.098 ±0.005 98.438 ±0.151 76.243 ±0.131 86.935 ±0.025 78.975 ±0.145 (-) PE 0.174 ±0.009 – – 85.595 ±0.065 77.515 ±0.265 stantial accuracy decline, ranging from 0.5083 to 0.7310 for classic GNNs. Similarly, in image superpixel datasets like CIFAR-10, PascalVOC-SP, and COCO-SP, edge features encode spatial relationships between superpixels, which are crucial for maintaining image coherence. However, in codegraphs such as ogbg-code2 and MalNet-Tiny, where edges represent call types, edge features are less relevant to the prediction tasks, and their removal has minimal impact. 
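As a concrete illustration of edge-feature integration, a GINE-style update (Hu et al., 2019) adds edge embeddings to neighbor features inside every message, so bond or spatial information influences aggregation directly. The sketch below is a simplified single-layer version with illustrative names, not the paper's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gine_layer(X, edge_index, E, W, eps=0.0):
    """GINE-style update:
    x_v <- MLP((1 + eps) * x_v + sum_{u in N(v)} ReLU(x_u + e_uv)),
    i.e., edge features are injected into each message before summation.
    Here the "MLP" is a single linear layer + ReLU for brevity."""
    src, dst = edge_index                   # directed edges u -> v
    msgs = relu(X[src] + E)                 # edge features enter the message
    agg = np.zeros_like(X)
    np.add.at(agg, dst, msgs)               # scatter-add messages per target
    return relu(((1 + eps) * X + agg) @ W)

# Toy path graph 0 - 1 - 2: 2 undirected edges as 4 directed ones
edge_index = np.array([[0, 1, 1, 2],
                       [1, 0, 2, 1]])
X = np.ones((3, 4))                         # node features
E = np.full((4, 4), 0.5)                    # edge (e.g., bond) embeddings
W = np.eye(4)
out = gine_layer(X, edge_index, E, W)
```

Dropping `E` from the message (i.e., `relu(X[src])`) would discard exactly the bond/superpixel-relationship information whose removal the ablation shows to be costly on molecular and image datasets.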
Observation 2: Normalization tends to have a greater impact on larger-scale datasets, whereas its impact is less significant on smaller datasets. For large-scale datasets such as CIFAR 10, COCO-SP, and the OGB datasets, removing normalization leads to signifi- cant performance drops. Specifically, on ogbg-ppa, which has 158,100 graphs, ablating normalization results in an accuracy drop of around 15% for three classic GNNs. This result is consistent with Luo et al. (2024a), who found that normalization is more important for GNNs in node clas- sification on large graphs. In such datasets, where node feature distributions are more complex, normalizing node embeddings is essential for stabilizing the training process. Observation 3: Dropout proves advantageous for most datasets, with a very low dropout rate being sufficient and optimal . Our analysis highlights the crucial role of dropout in main- taining the performance of classic GNNs on GNN Bench- mark and LRGB and large-scale OGB datasets, with its ablation causing significant declines—for instance, an 8.8% relative decrease for GatedGCN+on CIFAR-10 and a 20.4% relative decrease on PascalVOC-SP. This trend continues in 7 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 6. Ablation study on LRGB and OGB datasets. - indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance. Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 Metric Avg. Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ AUROC ↑Avg. Precision ↑Accuracy ↑ F1 score ↑ GCN+0.7261 ±0.0067 0.2421 ±0.0016 0.3357 ±0.0087 0.2733 ±0.0041 0.9354 ±0.0045 0.8012 ±0.0124 0.2721 ±0.0046 0.8077 ±0.0041 0.1787 ±0.0026 (-) Edge. 
0.7191 ±0.0036 – 0.2942 ±0.0043 0.2219 ±0.0060 0.9292 ±0.0034 0.7714 ±0.0204 0.2628 ±0.0019 0.2994 ±0.0062 0.1785 ±0.0033 (-) Norm 0.7107 ±0.0027 0.2509 ±0.0026 0.1802 ±0.0111 0.2332 ±0.0079 0.9236 ±0.0054 0.7753 ±0.0049 0.2528 ±0.0016 0.6705 ±0.0104 0.1679 ±0.0027 (-) Dropout 0.6748 ±0.0055 0.2549 ±0.0025 0.3072 ±0.0069 0.2601 ±0.0046 – 0.7431 ±0.0185 0.2405 ±0.0047 0.7893 ±0.0052 0.1641 ±0.0043 (-) RC – – 0.2734 ±0.0036 0.1948 ±0.0096 0.8916 ±0.0048 – – 0.7520 ±0.0157 0.1785 ±0.0029 (-) FFN – – 0.2786 ±0.0068 0.2314 ±0.0073 0.9118 ±0.0078 0.7432 ±0.0052 0.2621 ±0.0019 0.7672 ±0.0071 0.1594 ±0.0020 (-) PE 0.7069 ±0.0093 0.2447 ±0.0015 – – – 0.7593 ±0.0051 0.2667 ±0.0034 – – GIN+0.7059 ±0.0089 0.2429 ±0.0019 0.3189 ±0.0105 0.2483 ±0.0046 0.9325 ±0.0040 0.7928 ±0.0099 0.2703 ±0.0024 0.8107 ±0.0053 0.1803 ±0.0019 (-) Edge. 0.7033 ±0.0015 0.2442 ±0.0028 0.2956 ±0.0047 0.2259 ±0.0053 0.9286 ±0.0049 0.7597 ±0.0103 0.2702 ±0.0021 0.2789 ±0.0031 0.1752 ±0.0020 (-) Norm 0.6934 ±0.0077 0.2444 ±0.0015 0.2707 ±0.0037 0.2244 ±0.0063 0.9322 ±0.0025 0.7874 ±0.0114 0.2556 ±0.0026 0.6484 ±0.0246 0.1722 ±0.0034 (-) Dropout 0.6384 ±0.0094 0.2531 ±0.0030 0.3153 ±0.0113 – – – 0.2545 ±0.0068 0.7673 ±0.0059 0.1730 ±0.0018 (-) RC 0.6975 ±0.0038 0.2527 ±0.0015 0.2350 ±0.0044 0.1741 ±0.0085 0.9150 ±0.0047 0.7733 ±0.0122 0.1454 ±0.0061 – 0.1617 ±0.0026 (-) FFN – – 0.2393 ±0.0049 0.1599 ±0.0081 0.8944 ±0.0074 – 0.2534 ±0.0033 0.6676 ±0.0039 0.1491 ±0.0016 (-) PE 0.6855 ±0.0027 0.2455 ±0.0019 0.3141 ±0.0031 – – 0.7791 ±0.0268 0.2601 ±0.0023 – – GatedGCN+0.7006 ±0.0033 0.2431 ±0.0020 0.4263 ±0.0057 0.3802 ±0.0015 0.9460 ±0.0057 0.8040 ±0.0164 0.2981 ±0.0024 0.8258 ±0.0055 0.1896 ±0.0024 (-) Edge. 
0.6882 ±0.0028 0.2466 ±0.0018 0.3764 ±0.0117 0.3172 ±0.0109 0.9372 ±0.0062 0.7831 ±0.0157 0.2951 ±0.0028 0.0948 ±0.0000 0.1891 ±0.0021 (-) Norm 0.6733 ±0.0026 0.2474 ±0.0015 0.3628 ±0.0043 0.3527 ±0.0051 0.9326 ±0.0056 0.7879 ±0.0178 0.2748 ±0.0012 0.6864 ±0.0165 0.1743 ±0.0026 (-) Dropout 0.6695 ±0.0101 0.2508 ±0.0014 0.3389 ±0.0066 0.3393 ±0.0051 – – 0.2582 ±0.0036 0.8088 ±0.0062 0.1724 ±0.0027 (-) RC – 0.2498 ±0.0034 0.4075 ±0.0052 0.3475 ±0.0064 0.9402 ±0.0054 0.7833 ±0.0177 0.2897 ±0.0016 0.8099 ±0.0053 0.1844 ±0.0025 (-) FFN – – – 0.3508 ±0.0049 0.9364 ±0.0059 – 0.2875 ±0.0022 – 0.1718 ±0.0024 (-) PE 0.6729 ±0.0084 0.2461 ±0.0025 0.4052 ±0.0031 – – 0.7771 ±0.0057 0.2813 ±0.0022 – – large-scale OGB datasets, where removing dropout results in a 5–13% performance drop across 3 classic GNNs on ogbg-molpcba. Notably, 97% of the optimal dropout rates are≤0.2, and 64% are ≤0.1, indicating that a very low dropout rate is both sufficient and optimal for graph-level tasks. Interestingly, this finding for graph-level tasks con- trasts with Luo et al. (2024a)’s observations for node-level tasks, where a higher dropout rate is typically required. Observation 4: Residual connections are generally es- sential, except in shallow GNNs applied to small graphs. Removing residual connections generally leads to signifi- cant performance drops across datasets, with the only excep- tions being found in the peptide datasets. Although similar in the number of nodes to CLUSTER and PATTERN, pep- tide datasets involve GNNs with only 3-5 layers, while the others use deeper networks with over 10 layers. For shallow networks in small graphs, residual connections may not be as beneficial and can even hurt performance by disrupting feature flow. In contrast, deeper networks in larger graphs rely on residual connections to maintain gradient flow and enable stable, reliable long-range information exchange. 
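The benefit of residual connections in deep stacks can be seen in a toy over-smoothing experiment: a plain mean-aggregation stack collapses all node embeddings to a single vector, while a residual stack preserves inter-node differences. This is a deliberately simplified illustration (complete-graph aggregation, no learned weights), not the paper's architecture:

```python
import numpy as np

n, d, depth = 8, 4, 20
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))

# Mean aggregation on a complete graph with self-loops: every node
# receives the same neighborhood average -- the worst case for
# over-smoothing, since one layer already makes all nodes identical.
M = np.full((n, n), 1.0 / n)

plain = X.copy()
resid = X.copy()
for _ in range(depth):
    plain = M @ plain             # x <- mean of neighbors
    resid = resid + M @ resid     # x <- x + mean of neighbors (residual)

# Without residuals, all rows of `plain` are identical (spread ~ 0).
# With residuals, the aggregation adds the *same* vector to every node,
# so pairwise node differences are carried through the whole stack.
collapse = np.ptp(plain, axis=0).max()
diff_kept = np.allclose(resid[0] - resid[1], X[0] - X[1])
```

Deeper stacks on larger graphs amplify this effect, consistent with the observation that residual connections matter most for the 10+-layer configurations and are dispensable in the shallow 3-5 layer peptide models.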
Observation 5: FFN is crucial for GIN+ and GCN+, greatly impacting their performance across datasets.

Ablating FFN leads to substantial performance declines for GIN+ and GCN+ across almost all datasets, highlighting its essential role in graph-level tasks. Notably, on MNIST, removing FFN leads to an 88% relative accuracy drop for GIN+. This is likely because the architectures of GIN+ and GCN+ rely heavily on FFN for learning complex node feature representations. In contrast, GatedGCN+ uses gating mechanisms to adaptively adjust the importance of neighboring nodes' information, reducing the need for additional feature transformations. The only exceptions are observed in the peptides datasets, where FFN is not used in any of the three models. This may be due to the shallow GNN architecture, where complex feature transformations are less necessary.

Observation 6: PE is particularly effective for small-scale datasets, but negligible for large-scale datasets.

Removing PE significantly reduces performance for classic GNNs on small-scale datasets like ZINC, PATTERN, CLUSTER, Peptides-func, and ogbg-molhiv, which contain only 10,000-40,000 graphs. By contrast, on large-scale datasets like ogbg-code2, ogbg-molpcba, ogbg-ppa, and COCO-SP (over 100,000 graphs), the impact of PE is less pronounced. This may be because smaller datasets rely more on PE to capture graph structure, whereas larger datasets benefit from the abundance of data, reducing the need for PE.

6. Conclusion

This study highlights the often-overlooked potential of classic GNNs in tackling graph-level tasks. By integrating six widely used techniques into a unified GNN+ framework, we enhance three classic GNNs for graph-level tasks. Evaluations on 14 benchmark datasets reveal that these enhanced GNNs match or outperform GTs, while also demonstrating greater efficiency.
These findings challenge the prevailing belief that GTs are inherently superior, reaffirming the capa- bility of simple GNN structures as powerful models. 8 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Impact Statements This paper presents work whose goal is to advance the field of Graph Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. References Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205 , 2020. Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450 , 2016. Bar-Shalom, G., Bevilacqua, B., and Maron, H. Sub- graphormer: Unifying subgraph gnns and graph transformers via graph products. arXiv preprint arXiv:2402.08450 , 2024. Behrouz, A. and Hashemi, F. Graph mamba: Towards learn- ing on graphs with state space models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining , pp. 119–130, 2024. Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. arXiv preprint arXiv:2303.01028 , 2023. Bresson, X. and Laurent, T. Residual gated graph convnets. arXiv preprint arXiv:1711.07553 , 2017. Cai, T., Luo, S., Xu, K., He, D., Liu, T.-y., and Wang, L. Graphnorm: A principled approach to accelerating graph neural network training. In International Conference on Machine Learning , pp. 1204–1215. PMLR, 2021. Chen, D., Lin, Y ., Li, W., Li, P., Zhou, J., and Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelli- gence , volume 34, pp. 3438–3445, 2020. Chen, D., O’Bray, L., and Borgwardt, K. Structure-aware transformer for graph representation learning. In Interna- tional Conference on Machine Learning , pp. 3469–3489. 
PMLR, 2022. Chen, J., Gao, K., Li, G., and He, K. NAGphormer: A tokenized graph transformer for node classification in large graphs. In The Eleventh International Confer- ence on Learning Representations , 2023a. URL https: //openreview.net/forum?id=8KYeilT3Ow. Chen, Z., Tan, H., Wang, T., Shen, T., Lu, T., Peng, Q., Cheng, C., and Qi, Y . Graph propagation trans- former for graph representation learning. arXiv preprint arXiv:2305.11424 , 2023b.Choi, Y . Y ., Park, S. W., Lee, M., and Woo, Y . Topology-informed graph transformer. arXiv preprint arXiv:2402.02005 , 2024. Ding, Y ., Orvieto, A., He, B., and Hofmann, T. Recurrent distance-encoding neural networks for graph representa- tion learning, 2024. URL https://openreview.net/forum? id=lNIj5FdXsC. Dwivedi, V . P. and Bresson, X. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699 , 2020. Dwivedi, V . P., Luu, A. T., Laurent, T., Bengio, Y ., and Bres- son, X. Graph neural networks with learnable structural and positional representations. In International Confer- ence on Learning Representations , 2021. Dwivedi, V . P., Ramp ´aˇsek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph bench- mark. arXiv preprint arXiv:2206.08164 , 2022. Dwivedi, V . P., Joshi, C. K., Luu, A. T., Laurent, T., Ben- gio, Y ., and Bresson, X. Benchmarking graph neural networks. Journal of Machine Learning Research , 24 (43):1–48, 2023. Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428 , 2019. Freitas, S. and Dong, Y . A large-scale database for graph representation learning. Advances in neural information processing systems , 2021. Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chem- istry. In International conference on machine learning , pp. 1263–1272. PMLR, 2017. Gutteridge, B., Dong, X., Bronstein, M. M., and Di Gio- vanni, F. 
Drew: Dynamically rewired message pass- ing with delay. In International Conference on Machine Learning , pp. 12252–12267. PMLR, 2023. Hamilton, W., Ying, Z., and Leskovec, J. Inductive repre- sentation learning on large graphs. Advances in neural information processing systems , 30, 2017. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learn- ing for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 770–778, 2016. He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y ., and Bresson, X. A generalization of vit/mlp-mixer to graphs. InInternational conference on machine learning , pp. 12724–12745. PMLR, 2023. 9 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 , 2012. Hoang, V . T., Lee, O., et al. A survey on structure-preserving graph transformers. arXiv preprint arXiv:2401.16176 , 2024. Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V ., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265 , 2019. Hu, W., Fey, M., Zitnik, M., Dong, Y ., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems , 33:22118–22133, 2020. Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 4700–4708, 2017. Huang, S., Song, Y ., Zhou, J., and Lin, Z. Cluster-wise graph transformer with dual-granularity kernelized at- tention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024a. URL https://openreview.net/forum?id=3j2nasmKkP. Huang, Y ., Miao, S., and Li, P. 
What can we learn from state space models for machine learning on graphs? arXiv preprint arXiv:2406.05815 , 2024b. Hussain, M. S., Zaki, M. J., and Subramanian, D. Global self-attention as a replacement for graph convolution. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining , pp. 655–665, 2022. Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. InInternational conference on machine learning , pp. 448– 456. pmlr, 2015. Kipf, T. N. and Welling, M. Semi-supervised classifica- tion with graph convolutional networks. In International Conference on Learning Representations , 2017. URL https://openreview.net/forum?id=SJU4ayYgl. Kreuzer, D., Beaini, D., Hamilton, W., L ´etourneau, V ., and Tossou, P. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems , 34:21618–21629, 2021. Li, G., Muller, M., Thabet, A., and Ghanem, B. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF international conference on computer vision , pp. 9267–9276, 2019.Li, P., Wang, Y ., Wang, H., and Leskovec, J. Distance en- coding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems , 33:4465–4478, 2020. Li, Q., Han, Z., and Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI conference on artificial intelligence , 2018. Liang, J., Chen, M., and Liang, J. Graph external attention enhanced transformer. arXiv preprint arXiv:2405.21061 , 2024. Lin, C., Ma, L., Chen, Y ., Ouyang, W., Bronstein, M. M., and Torr, P. Understanding graph transformers by gen- eralized propagation, 2024. URL https://openreview.net/ forum?id=JfjduOxrTY. Loshchilov, I. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. Luo, S., Li, S., Zheng, S., Liu, T.-Y ., Wang, L., and He, D. 
Your transformer may not be as powerful as you expect. Advances in Neural Information Processing Systems , 35: 4301–4315, 2022. Luo, Y ., Shi, L., and Thost, V . Improving self-supervised molecular representation learning using persistent homol- ogy. In Thirty-seventh Conference on Neural Information Processing Systems , 2023a. URL https://openreview.net/ forum?id=wEiUGpcr0M. Luo, Y ., Shi, L., Xu, M., Ji, Y ., Xiao, F., Hu, C., and Shan, Z. Impact-oriented contextual scholar profiling using self-citation graphs. arXiv preprint arXiv:2304.12217 , 2023b. Luo, Y ., Thost, V ., and Shi, L. Transformers over directed acyclic graphs. In Thirty-seventh Conference on Neural Information Processing Systems , 2023c. URL https:// openreview.net/forum?id=g49s1N5nmO. Luo, Y ., Shi, L., and Wu, X.-M. Classic GNNs are strong baselines: Reassessing GNNs for node classification. In The Thirty-eight Conference on Neural Information Pro- cessing Systems Datasets and Benchmarks Track , 2024a. URL https://openreview.net/forum?id=xkljKdGe4E. Luo, Y ., Thost, V ., and Shi, L. Transformers over directed acyclic graphs. Advances in Neural Information Process- ing Systems , 36, 2024b. Luo, Y ., Li, H., Liu, Q., Shi, L., and Wu, X.-M. Node identifiers: Compact, discrete representations for effi- cient graph learning. In The Thirteenth International Conference on Learning Representations , 2025a. URL https://openreview.net/forum?id=t9lS1lX9FQ. 10 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Luo, Y ., Wu, X.-M., and Zhu, H. Beyond random masking: When dropout meets graph convolutional networks. In The Thirteenth International Conference on Learning Representations , 2025b. URL https://openreview.net/ forum?id=PwxYoMvmvy. Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P., and Lim, S.-N. Graph inductive biases in transformers without message passing. arXiv preprint arXiv:2305.17589 , 2023. 
Min, E., Chen, R., Bian, Y., Xu, T., Zhao, K., Huang, W., Zhao, P., Huang, J., Ananiadou, S., and Rong, Y. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022.
Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019.
Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M. TUDataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020.
Müller, L., Galkin, M., Morris, C., and Rampášek, L. Attending to graph transformers. arXiv preprint arXiv:2302.04181, 2023.
Ngo, N. K., Hy, T. S., and Kondor, R. Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics, 159(3), 2023.
Niepert, M., Ahmed, M., and Kutzkov, K. Learning convolutional neural networks for graphs. In International Conference on Machine Learning, pp. 2014–2023. PMLR, 2016.
Park, W., Chang, W., Lee, D., Kim, J., and Hwang, S.-w. GRPE: Relative positional encoding for graph transformer. arXiv preprint arXiv:2201.12787, 2022.
Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. arXiv preprint arXiv:2205.12454, 2022.
Sancak, K., Hua, Z., Fang, J., Xie, Y., Malevich, A., Long, B., Balin, M. F., and Çatalyürek, Ü. V. A scalable and effective alternative to graph transformers. arXiv preprint arXiv:2406.12059, 2024.
Shehzad, A., Xia, F., Abid, S., Peng, C., Yu, S., Zhang, D., and Verspoor, K. Graph transformers: A survey. arXiv preprint arXiv:2407.09777, 2024.
Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. arXiv preprint arXiv:2303.06147, 2023.
Shu, J., Xi, B., Li, Y., Wu, F., Kamhoua, C., and Ma, J. Understanding dropout for graph neural networks. In Companion Proceedings of the Web Conference 2022, pp. 1128–1138, 2022.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
Tang, J., Sun, J., Wang, C., and Yang, Z. Social influence analysis in large-scale networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 807–816, 2009.
Tönshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? Reassessing the long-range graph benchmark. arXiv preprint arXiv:2309.00367, 2023.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., and Bengio, Y. Graph attention networks. In International Conference on Learning Representations, 2018.
Wang, C., Tsepa, O., Ma, J., and Wang, B. Graph-Mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv preprint arXiv:2402.00789, 2024.
Wu, Q., Yang, C., Zhao, W., He, Y., Wipf, D., and Yan, J. DIFFormer: Scalable (graph) transformers induced by energy constrained diffusion. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=j6zUzrapY3L.
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020.
Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems, 34:13266–13279, 2021.
Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Yang, Z., Cohen, W., and Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40–48. PMLR, 2016.
Yin, S. and Zhong, G. LGI-GT: Graph transformers with local and global operators interleaving. 2023.
Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T.-Y. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877–28888, 2021.
Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. How transferable are features in deep neural networks? Advances in Neural Information Processing Systems, 27, 2014.
Yun, S., Jeong, M., Kim, R., Kang, J., and Kim, H. J. Graph transformer networks. Advances in Neural Information Processing Systems, 32, 2019.
Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of GNNs via graph biconnectivity. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=r9hNv76KoT3.
Zhang, J., Zhang, H., Xia, C., and Sun, L. Graph-BERT: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
A. Datasets and Experimental Details
A.1. Computing Environment
Our implementation is based on PyG (Fey & Lenssen, 2019). The experiments are conducted on a single workstation with 8 RTX 3090 GPUs.
A.2. Datasets
Table 7 presents a summary of the statistics and characteristics of the datasets.
•GNN Benchmark (Dwivedi et al., 2023). ZINC contains molecular graphs with node features representing atoms and edge features representing bonds. The task is to regress the constrained solubility (logP) of the molecule. MNIST and CIFAR10 are adapted from image classification datasets, where each image is represented as an 8-nearest-neighbor graph of SLIC superpixels, with nodes representing superpixels and edges representing spatial relationships. The 10-class classification tasks follow the original image classification tasks. PATTERN and CLUSTER are synthetic datasets sampled from the Stochastic Block Model (SBM) for inductive node classification, with tasks involving sub-graph pattern recognition and cluster ID inference. For all datasets, we adhere to the respective training protocols and standard evaluation splits (Dwivedi et al., 2023).
•Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). Peptides-func and Peptides-struct are atomic graphs of peptides from SATPdb, with tasks of multi-label graph classification into 10 peptide functional classes and graph regression for 11 3D structural properties, respectively. PascalVOC-SP and COCO-SP are node classification datasets derived from the Pascal VOC and MS COCO images by SLIC superpixelization, where each superpixel node belongs to a particular object class. We did not use PCQM-Contact in (Dwivedi et al., 2022) as its download link was no longer valid. MalNet-Tiny (Freitas & Dong, 2021) is a subset of MalNet with 5,000 function call graphs (FCGs) from Android APKs, where the task is to predict software type based on structure alone. For each dataset, we follow standard training protocols and splits (Dwivedi et al., 2022; Freitas & Dong, 2021).
•Open Graph Benchmark (OGB) (Hu et al., 2020). We also consider a collection of larger-scale datasets from OGB, containing graphs in the range of hundreds of thousands to millions: ogbg-molhiv and ogbg-molpcba are molecular property prediction datasets from MoleculeNet. ogbg-molhiv involves binary classification of HIV inhibition, while ogbg-molpcba predicts results of 128 bioassays in a multi-task setting. ogbg-ppa contains protein-protein association networks, where nodes represent proteins and edges encode normalized associations between them; the task is to classify the origin of the network among 37 taxonomic groups. ogbg-code2 consists of abstract syntax trees (ASTs) from Python source code, with the task of predicting the first 5 subtokens of the function's name. We maintain all the OGB standard evaluation settings (Hu et al., 2020).
Table 7. Overview of the datasets used for graph-level tasks (Dwivedi et al., 2023; 2022; Hu et al., 2020; Freitas & Dong, 2021).
| Dataset | # graphs | Avg. # nodes | Avg. # edges | # node/edge feats | Prediction level | Prediction task | Metric |
|---|---|---|---|---|---|---|---|
| ZINC | 12,000 | 23.2 | 24.9 | 28/1 | graph | regression | MAE |
| MNIST | 70,000 | 70.6 | 564.5 | 3/1 | graph | 10-class classif. | Accuracy |
| CIFAR10 | 60,000 | 117.6 | 941.1 | 5/1 | graph | 10-class classif. | Accuracy |
| PATTERN | 14,000 | 118.9 | 3,039.3 | 3/1 | inductive node | binary classif. | Accuracy |
| CLUSTER | 12,000 | 117.2 | 2,150.9 | 7/1 | inductive node | 6-class classif. | Accuracy |
| Peptides-func | 15,535 | 150.9 | 307.3 | 9/3 | graph | 10-task classif. | Avg. Precision |
| Peptides-struct | 15,535 | 150.9 | 307.3 | 9/3 | graph | 11-task regression | MAE |
| PascalVOC-SP | 11,355 | 479.4 | 2,710.5 | 14/2 | inductive node | 21-class classif. | F1 score |
| COCO-SP | 123,286 | 476.9 | 2,693.7 | 14/2 | inductive node | 81-class classif. | F1 score |
| MalNet-Tiny | 5,000 | 1,410.3 | 2,859.9 | 5/1 | graph | 5-class classif. | Accuracy |
| ogbg-molhiv | 41,127 | 25.5 | 27.5 | 9/3 | graph | binary classif. | AUROC |
| ogbg-molpcba | 437,929 | 26.0 | 28.1 | 9/3 | graph | 128-task classif. | Avg. Precision |
| ogbg-ppa | 158,100 | 243.4 | 2,266.1 | 1/7 | graph | 37-task classif. | Accuracy |
| ogbg-code2 | 452,741 | 125.2 | 124.2 | 2/2 | graph | 5 token sequence | F1 score |
A.3. Hyperparameters and Reproducibility
Please note that we mainly follow the experiment settings of GraphGPS (Rampášek et al., 2022; Tönshoff et al., 2023). For the hyperparameter selections of classic GNNs, in addition to what we have covered, we list other settings in Tables 8, 9, 10, 11, 12, 13. Further details regarding hyperparameters can be found in our code. In all experiments, we use the validation set to select the best hyperparameters. GNN+ denotes the enhanced implementation of the GNN model. Our code is available under the MIT License.
Table 8. Hyperparameter settings of GCN+ on benchmarks from (Dwivedi et al., 2023).
| Hyperparameter | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # GNN Layers | 12 | 6 | 5 | 12 | 12 |
| Edge Feature Module | True | True | True | True | False |
| Normalization | BN | BN | BN | BN | BN |
| Dropout | 0.0 | 0.15 | 0.05 | 0.05 | 0.1 |
| Residual Connections | True | True | True | True | True |
| FFN | True | True | True | True | True |
| PE | RWSE-32 | False | False | RWSE-32 | RWSE-20 |
| Hidden Dim | 64 | 60 | 65 | 90 | 90 |
| Graph Pooling | add | mean | mean | – | – |
| Batch Size | 32 | 16 | 16 | 32 | 16 |
| Learning Rate | 0.001 | 0.0005 | 0.001 | 0.001 | 0.001 |
| # Epochs | 2000 | 200 | 200 | 200 | 100 |
| # Warmup Epochs | 50 | 5 | 5 | 5 | 5 |
| Weight Decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| # Parameters | 260,177 | 112,570 | 114,345 | 517,219 | 516,674 |
| Time (epoch) | 7.6s | 60.1s | 40.2s | 19.5s | 29.7s |
Table 9. Hyperparameter settings of GCN+ on LRGB and OGB datasets.
| Hyperparameter | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|---|---|---|---|---|
| # GNN Layers | 3 | 5 | 14 | 18 | 8 | 4 | 10 | 4 | 4 |
| Edge Feature Module | True | False | True | True | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN | BN | BN | BN | BN |
| Dropout | 0.2 | 0.2 | 0.1 | 0.05 | 0.0 | 0.1 | 0.2 | 0.2 | 0.2 |
| Residual Connections | False | False | True | True | True | False | False | True | True |
| FFN | False | False | True | True | True | True | True | True | True |
| PE | RWSE-32 | RWSE-32 | False | False | False | RWSE-20 | RWSE-16 | False | False |
| Hidden Dim | 275 | 255 | 85 | 70 | 110 | 256 | 512 | 512 | 512 |
| Graph Pooling | mean | mean | – | – | max | mean | mean | mean | mean |
| Batch Size | 16 | 32 | 50 | 50 | 16 | 32 | 512 | 32 | 32 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0001 | 0.0005 | 0.0003 | 0.0001 |
| # Epochs | 300 | 300 | 200 | 300 | 150 | 100 | 100 | 400 | 30 |
| # Warmup Epochs | 5 | 5 | 10 | 10 | 10 | 5 | 5 | 10 | 2 |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-6 |
| # Parameters | 507,351 | 506,127 | 520,986 | 460,611 | 494,235 | 1,407,641 | 13,316,700 | 5,549,605 | 23,291,826 |
| Time (epoch) | 6.9s | 6.6s | 12.5s | 162.5s | 6.6s | 16.3s | 91.4s | 178.2s | 476.3s |
Table 10. Hyperparameter settings of GIN+ on benchmarks from (Dwivedi et al., 2023).
| Hyperparameter | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # GNN Layers | 12 | 5 | 5 | 8 | 10 |
| Edge Feature Module | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN |
| Dropout | 0.0 | 0.1 | 0.05 | 0.05 | 0.05 |
| Residual Connections | True | True | True | True | True |
| FFN | True | True | True | True | True |
| PE | RWSE-20 | False | False | RWSE-32 | RWSE-20 |
| Hidden Dim | 80 | 60 | 60 | 100 | 90 |
| Graph Pooling | sum | mean | mean | – | – |
| Batch Size | 32 | 16 | 16 | 32 | 16 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.0005 |
| # Epochs | 2000 | 200 | 200 | 200 | 100 |
| # Warmup Epochs | 50 | 5 | 5 | 5 | 5 |
| Weight Decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| # Parameters | 477,241 | 118,990 | 115,450 | 511,829 | 497,594 |
| Time (epoch) | 9.4s | 56.8s | 46.3s | 18.5s | 20.5s |
Table 11. Hyperparameter settings of GIN+ on LRGB and OGB datasets.
| Hyperparameter | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|---|---|---|---|---|
| # GNN Layers | 3 | 5 | 16 | 16 | 5 | 3 | 16 | 5 | 4 |
| Edge Feature Module | True | True | True | True | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN | BN | BN | BN | BN |
| Dropout | 0.2 | 0.2 | 0.1 | 0.0 | 0.0 | 0.0 | 0.3 | 0.15 | 0.1 |
| Residual Connections | True | True | True | True | True | True | True | False | True |
| FFN | False | False | True | True | True | False | True | True | True |
| PE | RWSE-32 | RWSE-32 | RWSE-32 | False | False | RWSE-20 | RWSE-16 | False | False |
| Hidden Dim | 240 | 200 | 70 | 70 | 130 | 256 | 300 | 512 | 512 |
| Graph Pooling | mean | mean | – | – | max | mean | mean | mean | mean |
| Batch Size | 16 | 32 | 50 | 50 | 16 | 32 | 512 | 32 | 32 |
| Learning Rate | 0.0005 | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0001 | 0.0005 | 0.0003 | 0.0001 |
| # Epochs | 300 | 250 | 200 | 300 | 150 | 100 | 100 | 300 | 30 |
| # Warmup Epochs | 5 | 5 | 10 | 10 | 10 | 5 | 5 | 10 | 2 |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-6 |
| # Parameters | 506,126 | 518,127 | 486,039 | 487,491 | 514,545 | 481,433 | 8,774,720 | 8,173,605 | 24,338,354 |
| Time (epoch) | 7.4s | 6.1s | 14.8s | 169.2s | 5.9s | 10.9s | 89.2s | 213.9s | 489.8s |
Table 12. Hyperparameter settings of GatedGCN+ on benchmarks from (Dwivedi et al., 2023).
| Hyperparameter | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # GNN Layers | 9 | 10 | 10 | 12 | 16 |
| Edge Feature Module | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN |
| Dropout | 0.05 | 0.05 | 0.15 | 0.2 | 0.2 |
| Residual Connections | True | True | True | True | True |
| FFN | True | True | True | True | True |
| PE | RWSE-20 | False | False | RWSE-32 | RWSE-20 |
| Hidden Dim | 70 | 35 | 35 | 64 | 56 |
| Graph Pooling | sum | mean | mean | – | – |
| Batch Size | 32 | 16 | 16 | 32 | 16 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0005 |
| # Epochs | 2000 | 200 | 200 | 200 | 100 |
| # Warmup Epochs | 50 | 5 | 5 | 5 | 5 |
| Weight Decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| # Parameters | 413,355 | 118,940 | 116,490 | 466,001 | 474,574 |
| Time (epoch) | 10.5s | 137.9s | 115.0s | 32.6s | 34.1s |
Table 13. Hyperparameter settings of GatedGCN+ on LRGB and OGB datasets.
| Hyperparameter | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|---|---|---|---|---|
| # GNN Layers | 5 | 4 | 12 | 20 | 6 | 3 | 10 | 4 | 5 |
| Edge Feature Module | True | True | True | True | True | True | True | True | True |
| Normalization | BN | BN | BN | BN | BN | BN | BN | BN | BN |
| Dropout | 0.05 | 0.2 | 0.15 | 0.05 | 0.0 | 0.0 | 0.2 | 0.15 | 0.2 |
| Residual Connections | False | True | True | True | True | True | True | True | True |
| FFN | False | False | False | True | True | False | True | False | True |
| PE | RWSE-32 | RWSE-32 | RWSE-32 | False | False | RWSE-20 | RWSE-16 | False | False |
| Hidden Dim | 135 | 145 | 95 | 52 | 100 | 256 | 256 | 512 | 512 |
| Graph Pooling | mean | mean | – | – | max | mean | mean | mean | mean |
| Batch Size | 16 | 32 | 32 | 50 | 16 | 32 | 512 | 32 | 32 |
| Learning Rate | 0.0005 | 0.001 | 0.001 | 0.001 | 0.0005 | 0.0001 | 0.0005 | 0.0003 | 0.0001 |
| # Epochs | 300 | 300 | 200 | 300 | 150 | 100 | 100 | 300 | 30 |
| # Warmup Epochs | 5 | 5 | 10 | 10 | 10 | 5 | 5 | 10 | 2 |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-6 |
| # Parameters | 521,141 | 492,897 | 559,094 | 508,589 | 550,905 | 1,076,633 | 6,016,860 | 5,547,557 | 29,865,906 |
| Time (epoch) | 17.3s | 8.0s | 21.3s | 208.8s | 8.9s | 15.1s | 85.1s | 479.8s | 640.1s |
| 6 | 2 | The model architectures described in the paper (GNN+, GCN, GIN, GatedGCN) are enhanced versions of classic GNNs with parameter counts estimated around 100K to 500K. Given that they are trained on 14 datasets with various sizes—each containing multiple graphs with hundreds of nodes and edges—it's reasonable to estimate a total training time of approximately 6 hours across multiple GPUs. Standard batch sizes for such tasks often range from 32 to 256, and due to the complexity of GNN operations, 2 GPUs are likely required to maintain manageable memory consumption and expedite training. The model with enhancements like residual layers and dropout would not escalate memory requirements excessively. Overall, it can be estimated that training could be completed in under 8 hours but may require 2 GPUs for efficiency. 
| yes | Yes | Graph | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00.000Z | [https://github.com/LUOyk1999/GNNPlus] | 1 | https://www.dropbox.com/s/ol2v01usvaxbsr8/peptide_multi_class_dataset.csv.gz?dl=1, https://www.dropbox.com/s/j4zcnx2eipuo0xz/splits_random_stratified_peptide.pickle?dl=1 | ETA - Under 1 hour as per model desc approx 0.8 hour | https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing | Yes | - Clone the repo and install requirements; the dataset is downloaded from Dropbox when the training code is executed. |
Tiny-ImageNet | PRO-DSC | [] | Exploring a Principled Framework for Deep Subspace Clustering | 2025-03-21T00:00:00 | https://arxiv.org/abs/2503.17288v1 | [
"https://github.com/mengxianghan123/PRO-DSC"
] | {'Accuracy': '0.698', 'NMI': '0.805'} | [
"Accuracy",
"NMI",
"ARI"
] | Given the following paper and codebase:
Paper: Exploring a Principled Framework for Deep Subspace Clustering
Codebase: https://github.com/mengxianghan123/PRO-DSC
Improve the PRO-DSC model on the Tiny-ImageNet dataset. The result
should improve on the following metrics: {'Accuracy': '0.698', 'NMI': '0.805'}. You must use only the codebase provided.
Published as a conference paper at ICLR 2025
EXPLORING A PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING
Xianghan Meng†, Zhiyuan Huang† & Wei He
Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China
{mengxianghan,huangzhiyuan,wei.he}@bupt.edu.cn
Xianbiao Qi & Rong Xiao
Intellifusion, Shenzhen, P.R. China
Chun-Guang Li∗
Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China
lichunguang@bupt.edu.cn
ABSTRACT
Subspace clustering is a classical unsupervised learning task, built on the basic assumption that high-dimensional data can be approximated by a union of subspaces (UoS). Nevertheless, real-world data often deviate from the UoS assumption. To address this challenge, state-of-the-art deep subspace clustering algorithms attempt to jointly learn UoS representations and self-expressive coefficients. However, the general framework of the existing algorithms suffers from catastrophic feature collapse and lacks a theoretical guarantee to learn the desired UoS representation. In this paper, we present a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. Specifically, in PRO-DSC, we incorporate an effective regularization on the learned representations into the self-expressive model, prove that the regularized self-expressive model is able to prevent feature space collapse, and demonstrate that the learned optimal representations under certain conditions lie on a union of orthogonal subspaces. Moreover, we provide a scalable and efficient approach to implement our PRO-DSC and conduct extensive experiments to verify our theoretical findings and demonstrate the superior performance of our proposed deep subspace clustering approach. The code is available at: https://github.com/mengxianghan123/PRO-DSC. 
1 INTRODUCTION
Subspace clustering is an unsupervised learning task, aiming to partition high-dimensional data that are approximately lying on a union of subspaces (UoS), and finds wide-ranging applications, such as motion segmentation (Costeira & Kanade, 1998; Vidal et al., 2008; Rao et al., 2010), hybrid system identification (Vidal, 2004; Bako & Vidal, 2008), image representation and clustering (Hong et al., 2006; Lu et al., 2012), gene expression clustering (McWilliams & Montana, 2014), and so on.
Existing subspace clustering algorithms can be roughly divided into four categories: iterative methods (Tseng, 2000; Ho et al., 2003; Zhang et al., 2009), algebraic-geometry based methods (Vidal et al., 2005; Tsakiris & Vidal, 2017), statistical methods (Fischler & Bolles, 1981), and spectral clustering-based methods (Chen & Lerman, 2009; Elhamifar & Vidal, 2009; Liu et al., 2010; Lu et al., 2012; You et al., 2016a; Zhang et al., 2021). Among them, spectral clustering-based methods gain the most popularity due to their broad theoretical guarantees and superior performance.
∗Corresponding author. †These two authors contributed equally.
arXiv:2503.17288v1 [cs.CV] 21 Mar 2025
The vital component in spectral clustering-based methods is the so-called self-expressive model (Elhamifar & Vidal, 2009; 2013). Formally, given a dataset X := {x_1, ..., x_N} where x_j ∈ R^D, the self-expressive model expresses each data point x_j as a linear combination of the other points, i.e.,
x_j = Σ_{i≠j} c_ij x_i,  (1)
where c_ij is the corresponding self-expressive coefficient. The most intriguing merit of the self-expressive model is that the solution of the self-expressive model, under a proper regularizer on the coefficients c_ij, is guaranteed to satisfy a subspace-preserving property, namely, c_ij ≠ 0 only if x_i and x_j are in the same subspace (Elhamifar & Vidal, 2013; Soltanolkotabi & Candes, 2012; Li et al., 2018). 
Having obtained the optimal self-expressive coefficients {c_ij}_{i,j=1}^N, the data affinity can be induced by |c_ij| + |c_ji|, to which spectral clustering is applied to yield the partition of the data. Despite the broad theoretical guarantees, the vanilla self-expressive model still faces great challenges when applied to complex real-world data that may not align well with the UoS assumption. Earlier works address this deficiency by learning a linear transform of the data (Patel et al., 2013; 2015) or introducing a nonlinear kernel mapping (Patel & Vidal, 2014) under which the representations of the data are supposed to be aligned with the UoS assumption. However, there is a lack of a principled mechanism to guide the learning of the linear transforms or the design of the nonlinear kernels so as to guarantee that the representations of the data form a UoS structure. To handle complex real-world data, in the past few years there has been a surge of interest in designing deep subspace clustering frameworks, e.g., (Ji et al., 2017; Peng et al., 2018; Zhou et al., 2018; Zhang et al., 2019a; Dang et al., 2020; Peng et al., 2020; Lv et al., 2021; Wang et al., 2023b; Zhao et al., 2024). In these works, a deep neural network-based representation learning module is usually integrated into the self-expressive model, to learn the representations Z ∈ R^{d×N} and the self-expressive coefficients C = {c_ij}_{i,j=1}^N in a joint optimization framework. However, as analyzed in (Haeffele et al., 2021), the optimal representations Z of these methods tend to catastrophically collapse into subspaces with dimensions much lower than the ambient space, which is detrimental to subspace clustering, and there is no evidence that the learned representations form a UoS structure. In this paper, we propose a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is able to simultaneously learn structured representations and self-expressive coefficients. 
Specifically, in PRO-DSC, we incorporate an effective regularization on the learned representations into the self-expressive model and prove that our PRO-DSC can effectively prevent feature collapse. Moreover, we demonstrate that our PRO-DSC under certain conditions can yield structured representations forming a UoS structure, and provide a scalable and efficient approach to implement PRO-DSC. We conduct extensive experiments on synthetic data and six benchmark datasets to verify our theoretical findings and the superior performance of our proposed approach.
Contributions. The contributions of the paper are highlighted as follows.
1. We propose a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC) that learns both structured representations and self-expressive coefficients simultaneously, in which an effective regularization on the learned representations is incorporated to prevent feature space collapse.
2. We provide a rigorous analysis of the optimal solution of our PRO-DSC, derive a sufficient condition that guarantees the learned representations escape from feature collapse, and further demonstrate that our PRO-DSC under certain conditions can yield structured representations of a UoS structure.
3. We conduct extensive experiments to verify our theoretical findings and to demonstrate the superior performance of the proposed approach. To the best of our knowledge, this is the first principled framework for deep subspace clustering that is guaranteed to prevent the feature collapse problem and is shown to yield UoS representations. 
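As background for the self-expressive pipeline of Eq. (1) — solve for coefficients, symmetrize into an affinity, then spectrally cluster — the first two steps can be sketched in a few lines of NumPy. The ridge (Tikhonov) regularizer, the toy two-line data, and the value of `lam` below are illustrative choices for this sketch, not the regularizer or data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 1-D subspaces (lines) in R^3, 10 points each.
b1 = rng.standard_normal(3); b1 /= np.linalg.norm(b1)
b2 = rng.standard_normal(3); b2 /= np.linalg.norm(b2)
X = np.hstack([b1[:, None] * rng.uniform(1, 2, 10),
               b2[:, None] * rng.uniform(1, 2, 10)])
X /= np.linalg.norm(X, axis=0)          # unit-norm columns
N = X.shape[1]

# Ridge-regularized self-expression with c_jj = 0 (closed form per column):
# c_j = argmin ||x_j - X_{-j} c||^2 + lam * ||c||^2
lam = 0.1
C = np.zeros((N, N))
for j in range(N):
    mask = np.arange(N) != j
    Xj = X[:, mask]
    c = np.linalg.solve(Xj.T @ Xj + lam * np.eye(N - 1), Xj.T @ X[:, j])
    C[mask, j] = c

A = np.abs(C) + np.abs(C.T)             # affinity |c_ij| + |c_ji|

# Subspace-preserving check: within-subspace affinity should dominate.
within = A[:10, :10].sum() + A[10:, 10:].sum()
cross = A[:10, 10:].sum() + A[10:, :10].sum()
```

Spectral clustering on `A` (e.g. via scikit-learn) would then recover the two groups; the sketch stops at the affinity to stay dependency-free.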
2 DEEP SUBSPACE CLUSTERING: A PRINCIPLED FRAMEWORK, JUSTIFICATION, AND IMPLEMENTATION
In this section, we first review the popular framework for deep subspace clustering, called Self-Expressive Deep Subspace Clustering (SEDSC), then present our principled framework for deep subspace clustering and provide a rigorous characterization of the optimal solution and the property of the learned structured representations. Finally, we describe a scalable implementation based on differential programming for the proposed framework. Please refer to Appendix A for the detailed proofs of our theoretical results.
2.1 PREREQUISITE
To apply subspace clustering to complex real-world data that may not align well with the UoS assumption, there has been a surge of interest in exploiting deep neural networks to learn representations and then apply the self-expressive model to the learned representations, e.g., (Peng et al., 2018; Ji et al., 2017; Zhou et al., 2018; Zhang et al., 2019a; Dang et al., 2020; Peng et al., 2020; Lv et al., 2021; Wang et al., 2023b; Zhao et al., 2024). Formally, the optimization problem of these SEDSC models can be formulated as follows:¹
min_{Z,C} (1/2)‖Z − ZC‖_F² + β·r(C)  s.t. ‖Z‖_F² = N,  (2)
where Z ∈ R^{d×N} denotes the learned representation, C ∈ R^{N×N} denotes the self-expressive coefficient matrix, and β > 0 is a hyper-parameter. The following lemma characterizes the property of the optimal solution Z for problem (2).
Lemma 1 (Haeffele et al., 2021). The rows of the optimal solution Z for problem (2) are the eigenvectors that associate with the smallest eigenvalues of (I − C)(I − C)^⊤. 
In other words, the optimal representation Z in SEDSC is restricted to an extremely "narrow" subspace whose dimension is much smaller than d, leading to an undesirable collapsed solution.²
2.2 OUR PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING
In this paper, we attempt to propose a principled framework for deep subspace clustering that provably learns structured representations with maximal intrinsic dimensions. To be specific, we try to optimize the self-expressive model (2) while preserving the intrinsic dimension of the representation space. Rather than using the rank, which is a common measure of the dimension, inspired by (Fazel et al., 2003; Ma et al., 2007; Yu et al., 2020; Liu et al., 2022), we propose to prevent feature space collapse by incorporating the log det(·)-based concave smooth surrogate, which is defined as follows:
R(Z; α) := log det(I + α Z^⊤Z),  (3)
where α > 0 is a hyper-parameter. Unlike the commonly used nuclear norm, which is a convex surrogate of the rank, the log det(·)-based function is concave and differentiable, offers a tighter approximation, and encourages learning subspaces with maximal intrinsic dimensions.³
By incorporating the maximization of R(Z; α) as a regularizer into the formulation of SEDSC in (2), we have a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC):
min_{Z,C} −(1/2) log det(I + α Z^⊤Z) + (γ/2)‖Z − ZC‖_F² + β·r(C)  s.t. ‖Z‖_F² = N,  (4)
where γ > 0 is a hyper-parameter. Now, we will give our theoretical findings for problem (4).
Theorem 1 (Eigenspace Alignment). Denote the optimal solution of PRO-DSC in (4) as (Z⋆, C⋆), G⋆ := Z⋆^⊤Z⋆ and M⋆ := (I − C⋆)(I − C⋆)^⊤. Then G⋆ and M⋆ share eigenspaces, i.e., G⋆ and M⋆ can be diagonalized simultaneously by U ∈ O(N), where O(N) is an orthogonal group.
Note that Theorem 1 provides a perspective from eigenspace alignment for analyzing the property of the optimal solution. 
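To see why the log det regularizer in Eq. (3) discourages collapse, a small NumPy check can compare R(Z; α) for a full-rank and a rank-1 ("collapsed") Z of equal Frobenius norm. The dimensions, α, and the rank-1 construction below are arbitrary illustrative choices; the sketch also uses the identity det(I_N + α Z^⊤Z) = det(I_d + α Z Z^⊤) to work with the smaller d×d Gram matrix:

```python
import numpy as np

def logdet_reg(Z, alpha=0.5):
    """R(Z; alpha) = log det(I + alpha * Z^T Z), computed via the d x d Gram."""
    d, _ = Z.shape
    sign, val = np.linalg.slogdet(np.eye(d) + alpha * (Z @ Z.T))
    return val

rng = np.random.default_rng(1)
d, N = 8, 64

# Full-rank random features, rescaled so that ||Z||_F^2 = N (the constraint in (4)).
Z_full = rng.standard_normal((d, N))
Z_full *= np.sqrt(N) / np.linalg.norm(Z_full)

# Rank-1 "collapsed" features with the same Frobenius norm.
u = rng.standard_normal((d, 1))
Z_collapsed = u @ rng.standard_normal((1, N))
Z_collapsed *= np.sqrt(N) / np.linalg.norm(Z_collapsed)
```

Because log(1 + αx) is concave, spreading a fixed total energy across many singular values yields a strictly larger R(Z; α) than concentrating it on one, so maximizing R pushes against collapse.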
Figure 1(a) and (b) show empirical evidence that this alignment occurs during the training period.
1 Without loss of generality, we omit the constraint diag(C) = 0 throughout the analysis.
2 The dimension equals the multiplicity of the smallest eigenvalues of (I − C)(I − C)^⊤.
3 Please refer to (Ma et al., 2007) for a packing-ball interpretation.
[Figure 1: Empirical Validation of Eigenspace Alignment and Noncollapse Representation in Mini-batch on CIFAR-100. (a) Alignment error curve ‖G_b M_b − M_b G_b‖_F / n_b during the training period. (b) Eigenspace correlation curves measured via ⟨u_j, G_b u_j / ‖G_b u_j‖_2⟩ for j = 1, ..., n_b. (c) and (d) Eigenvalue curves λ(G_b) and λ(M_b).]
[Figure 2: Empirical Validation of Noncollapse Representation on CIFAR-10 and CIFAR-100. Clustering accuracy (ACC%) and subspace-preserving representation error (SRE%) are displayed under varying α and γ.]
When collapse occurs, both ACC and SRE dramatically degenerate; the perceivable phase transition phenomenon is consistent with the condition to avoid collapse. In Figures 1 and 2, G_b = Z_b^⊤Z_b and M_b = (I − C_b)(I − C_b)^⊤, with Z_b ∈ R^{d×n_b}, C_b ∈ R^{n_b×n_b} and batch size n_b, are computed in mini-batch training at different epochs.
Next, we will analyze problem (4) from the perspective of alternating optimization. When Z is fixed, the optimization problem with respect to (w.r.t.) C reduces to a standard self-expressive model, which has been extensively studied in (Soltanolkotabi & Candes, 2012; Pimentel-Alarcon & Nowak, 2016; Wang & Xu, 2016; Li et al., 2018; Tsakiris & Vidal, 2018). On the other hand, when C is fixed, the optimization problem w.r.t. Z becomes:
min_Z −(1/2) log det(I + α Z^⊤Z) + (γ/2)‖Z − ZC‖_F²  s.t. ‖Z‖_F² = N,  (5)
which is a non-convex optimization problem, whose optimal solution remains under-explored. In light of the fact that G and M converge to share eigenspaces, we decompose G and M as U Diag(λ_G^(1), ..., λ_G^(N)) U^⊤ and U Diag(λ_M^(1), ..., λ_M^(N)) U^⊤, respectively. Recalling that G := Z^⊤Z and M := (I − C)(I − C)^⊤, by using the eigenvalue decomposition we reformulate problem (5) into a convex problem w.r.t. {λ_G^(i)}_{i=1}^{min{d,N}} (see Appendix A) and have the following result.
Theorem 2 (Noncollapse Representation). Suppose that G and M are aligned in the same eigenspaces and γ < (1/λ_max(M)) · α² / (α + min{d/N, 1}). Then we have: a) rank(Z⋆) = min{d, N}, and b) the singular values σ_{Z⋆}^(i) = sqrt( 1/(γ λ_M^(i) + ν⋆) − 1/α ) for all i = 1, ..., min{d, N}, where Z⋆ and ν⋆ are the optimal primal solution and dual solution, respectively.
Theorem 2 characterizes the optimal solution for problem (5). Recall that SEDSC in (2) yields a collapsed solution, where rank(Z⋆) ≪ min{d, N}; whereas the rank of the minimizers for PRO-DSC in (5) satisfies rank(Z⋆) = min{d, N}. 
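The eigenspace alignment of Theorem 1 can be monitored numerically through the commutator norm ‖GM − MG‖_F plotted in Figure 1(a): for symmetric matrices this norm vanishes exactly when the two matrices are simultaneously diagonalizable. A minimal NumPy illustration on synthetic matrices (not learned representations; sizes and eigenvalue ranges are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32

def commutator_norm(G, M):
    """||GM - MG||_F: zero iff symmetric G, M share an eigenbasis."""
    return np.linalg.norm(G @ M - M @ G)

# A shared orthonormal eigenbasis U, as in Theorem 1.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
G_aligned = Q @ np.diag(rng.uniform(0.5, 2.0, n)) @ Q.T
M_aligned = Q @ np.diag(rng.uniform(0.5, 2.0, n)) @ Q.T

# A misaligned pair for comparison: a different eigenbasis.
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
M_mis = Q2 @ np.diag(rng.uniform(0.5, 2.0, n)) @ Q2.T

# commutator_norm(G_aligned, M_aligned) is ~0 up to floating-point error,
# while commutator_norm(G_aligned, M_mis) is bounded well away from zero.
```

In training, tracking this quantity on mini-batch matrices G_b and M_b (as the paper does) gives a cheap diagnostic of whether the learned solution approaches the structure predicted by Theorem 1.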
In Figure 1(c) and (d), we show the curves of the eigenvalues of G_b and M_b, which are computed in mini-batch training at different epochs, demonstrating that the learned representation no longer collapses. In Figure 2, we show the subspace clustering accuracy (ACC) and the subspace-preserving representation error⁴ (SRE) as a function of the parameters α and γ. The phase transition phenomenon around γ < (1/λ_max(M)) · α² / (α + min{d/N, 1}) well illustrates the sufficient condition in Theorem 2 to avoid representation collapse.
4 For each column c_j in C, SRE is computed as (100/N) Σ_j (1 − Σ_i w_ij·|c_ij| / ‖c_j‖_1), where w_ij ∈ {0, 1} is the ground-truth affinity.
[Figure 3: Empirical Validation of Structured Representation on CIFAR-10. Gram matrices |X^⊤X| for the CLIP features X and |Z^⊤Z| for the learned representations Z are shown in (a) and (b); visualizations of the samples from three categories (Airplane, Automobile, Dog), X^(3) and Z^(3) via PCA, are shown in (c) and (d), respectively.]
Furthermore, from the perspective of jointly optimizing Z and C, the following theorem demonstrates that PRO-DSC promotes a union-of-orthogonal-subspaces representation Z and a block-diagonal self-expressive matrix C under certain conditions.
Theorem 3. Suppose that the sufficient conditions to prevent feature collapse are satisfied. Without loss of generality, we further assume that the columns of Z are arranged into k blocks according to a certain N×N permutation matrix Γ, i.e., Z = [Z_1, Z_2, ..., Z_k]. Then the condition under which PRO-DSC promotes the optimal solution (Z⋆, C⋆) to have the desired structure (i.e., Z⋆^⊤Z⋆ and C⋆ are both block-diagonal) is that ⟨(I − C)(I − C)^⊤, G − G*⟩ → 0, where G* := Diag(G_11, G_22, ..., G_kk) and G_jj is the block Gram matrix corresponding to Z_j. 
[Figure 4: Empirical validation of Theorem 3 in mini-batch on CIFAR-10. The mean curves of the absolute values of the in-block-diagonal entries (thick) and the off-block-diagonal entries (thin) of |C_b*|, |G_b*|, |C_b − C_b*|, |G_b − G_b*| are displayed, along with the CSC condition ⟨(I − C_b)(I − C_b)^⊤, G_b − G_b*⟩ (gray), during training PRO-DSC.]
Remark 1. Theorem 3 suggests that our PRO-DSC is able to promote learning representations and a self-expressive matrix with the desired structures, i.e., the representations form a union of orthogonal subspaces and accordingly the self-expressive matrix is block-diagonal, when the condition ⟨(I − C)(I − C)^⊤, G − G*⟩ → 0 is met. We call this condition compatibly structured coherence (CSC), which relates to the properties of the distribution of the representations in Z and the self-expressive coefficients in C. While it is not possible for us to give a theoretical justification of when the CSC condition will be satisfied in general, we do have empirical evidence that our implementation of PRO-DSC with careful designs approximately satisfies such a condition and thus yields representations and a self-expressive matrix with the desired structure (see Figure 3).⁵
In Figure 4, we show the curves for the compatibly structured coherence (CSC) condition, and for the average values of the entries in |G_b*|, |G_b − G_b*|, |C_b*|, |C_b − C_b*| computed in mini-batch during training PRO-DSC on CIFAR-10. As illustrated, the CSC condition is progressively satisfied, and consequently the average off-block values |G_b − G_b*| and |C_b − C_b*| gradually decrease, while the average in-block values |G_b*| and |C_b*| gradually increase, which empirically validates that PRO-DSC promotes block-diagonal G_b and C_b.
5 Please refer to Appendix B.2 for more details about Figures 1–4. 
2.3 SCALABLE IMPLEMENTATION

Existing SEDSC models typically use autoencoders to learn the representations and learn the self-expressive matrix C through an N×N fully-connected layer (Ji et al., 2017; Peng et al., 2018; Zhou et al., 2018; Zhang et al., 2019a). While such an implementation is straightforward, it has two major drawbacks: a) since the number of self-expressive coefficients is quadratic in the number of data points, solving for these coefficients incurs an expensive computational burden; b) the learning process is transductive, i.e., the network parameters cannot be generalized to unseen data.

To address these issues, similar to (Zhang et al., 2021), we reparameterize the self-expressive coefficients c_ij by a neural network. Specifically, the input data x_i is fed into a neural network h(·;Ψ): R^D → R^d to yield normalized representations, i.e.,

    y_i := h(x_i;Ψ)/∥h(x_i;Ψ)∥₂,  (6)

where Ψ denotes all the parameters in h(·). Then, the parameterized self-expressive matrix C_Ψ is generated by

    C_Ψ := P(Y⊤Y),  (7)

where Y := [y_1, . . . , y_N] ∈ R^{d×N} and P(·) is the Sinkhorn projection (Cuturi, 2013), which has been widely applied in deep clustering (Caron et al., 2020; Ding et al., 2023).⁶ To enable efficient representation learning, we introduce another learnable mapping f(·;Θ): R^D → R^d, for which

    z_j := f(x_j;Θ)/∥f(x_j;Θ)∥₂  (8)

is the learned representation for the input x_j, where Θ denotes the parameters in f(·) used to learn the structured representation Z_Θ := [z_1, . . . , z_N] ∈ R^{d×N}.

Therefore, our principled framework for deep subspace clustering (PRO-DSC) in (4) can be reparameterized and reformulated as follows:

    min_{Θ,Ψ} L(Θ,Ψ) := −(1/2) log det(I + α Z_Θ⊤ Z_Θ) + (γ/2) ∥Z_Θ − Z_Θ C_Ψ∥²_F + β·r(C_Ψ).  (9)

To strengthen the block-diagonal structure of the self-expressive matrix, we choose the block-diagonal regularizer (Lu et al., 2018) for r(C_Ψ).
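The pipeline of Eqs. (6)–(9) can be sketched in a few lines of numpy. This is an illustrative sketch only: the temperature, iteration count, and diagonal handling of the Sinkhorn projection in the released codebase may differ, and `reg` stands in for the regularizer r(·) defined next.

```python
import numpy as np

def sinkhorn(S, n_iters=10, tau=0.1):
    """Minimal Sinkhorn projection P(.): alternately normalize rows and
    columns of exp(S/tau) toward a (near) doubly stochastic matrix.
    Temperature and iteration count are illustrative choices."""
    K = np.exp(S / tau)
    for _ in range(n_iters):
        K /= K.sum(axis=1, keepdims=True)
        K /= K.sum(axis=0, keepdims=True)
    return K

def pro_dsc_loss(Z, C, alpha, gamma, beta, reg):
    """Loss of Eq. (9):
    -1/2 logdet(I + alpha Z^T Z) + gamma/2 ||Z - ZC||_F^2 + beta r(C)."""
    N = Z.shape[1]
    logdet = np.linalg.slogdet(np.eye(N) + alpha * Z.T @ Z)[1]
    se = np.linalg.norm(Z - Z @ C, 'fro') ** 2
    return -0.5 * logdet + 0.5 * gamma * se + beta * reg(C)
```

In the paper's setting, Y⊤Y (with unit-norm columns y_i) is passed through `sinkhorn` to form C_Ψ, and the loss is minimized over the network parameters by SGD.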
To be specific, given the data affinity A_Ψ, induced by default as A_Ψ := (1/2)(|C_Ψ| + |C_Ψ⊤|), the block-diagonal regularizer is defined as

    r(C_Ψ) := ∥A_Ψ∥_κ,  (10)

where ∥A_Ψ∥_κ is the sum of the k smallest eigenvalues of the Laplacian matrix of the affinity A_Ψ.⁷ Consequently, the parameters Θ and Ψ of the reparameterized PRO-DSC can be trained by Stochastic Gradient Descent (SGD) with the loss function L(Θ,Ψ) defined in (9). For clarity, we summarize the training and testing procedure of our PRO-DSC in Algorithm 1.

Remark 2. We note that all the commonly used regularizers with the extended block-diagonal property for the self-expressive model, as discussed in (Lu et al., 2018), can be used to improve the block-diagonal structure of the self-expressive matrix. More interestingly, the specific type of regularizer is not essential owing to the learned structured representation (please refer to Table 3 for details); indeed, even using no explicit regularizer is viable, since SGD-based optimization also induces some implicit regularization, e.g., low-rankness (Gunasekar et al., 2017; Arora et al., 2019).

3 EXPERIMENTS

To validate our theoretical findings and to demonstrate the performance of our proposed framework, we conduct extensive experiments on synthetic data (Sec. 3.1) and real-world data (Sec. 3.2). Implementation details and more results are provided in Appendices B.1 and B.3, respectively.

⁶In practice, we set diag(C_Ψ) = 0 to prevent the trivial solution C_Ψ = I.
⁷Recall that the number of zero eigenvalues of the Laplacian matrix equals the number of connected components in the graph (von Luxburg, 2007).

Algorithm 1 Scalable & Efficient Implementation of PRO-DSC via Differential Programming
Input: Dataset X = X_train ∪ X_test, batch size n_b, hyper-parameters α, β, γ, number of iterations T, learning rate η
Initialization: Randomly initialize the parameters Ψ, Θ of the networks h(·;Ψ) and f(·;Θ)
Training:
1: for t = 1, . . . , T do
2:   Sample a batch X_b ∈ R^{D×n_b} from X_train
     # Forward propagation
3:   Compute the self-expressive matrix C_b ∈ R^{n_b×n_b} by Eqs. (6–7)
4:   Compute the representations Z_b ∈ R^{d×n_b} by Eq. (8)
     # Backward propagation
5:   Compute gradients: ∇_Ψ := ∂L/∂Ψ, ∇_Θ := ∂L/∂Θ
6:   Update Ψ and Θ via: Ψ ← Ψ − η·∇_Ψ, Θ ← Θ − η·∇_Θ
7: end for
Testing:
8: Compute the self-expressive matrix C_test by Eqs. (6–7) for X_test
9: Apply spectral clustering on the affinity A_test

3.1 EXPERIMENTS ON SYNTHETIC DATA

To validate whether PRO-DSC resolves the collapse issue in SEDSC and learns representations with a UoS structure, we first follow the procedure in (Ding et al., 2023) to generate two sets of synthetic data, shown in the first column of Figure 5, and then visualize in Figure 5(b)–(e) the representations learned by different methods on these synthetic data.

We observe that the SEDSC model overly compresses all the representations onto a closed curve on the hypersphere, and with an increased weight (i.e., γ↑) on the self-expressive term, the representations collapse to a few points. MLC (Ding et al., 2023) yields representations that lie only approximately on orthogonal subspaces. Our PRO-DSC yields linearized representations lying on orthogonal subspaces in both cases, confirming the effectiveness of our approach.

Figure 5: Visualization Experiments on Synthetic Data. (a) Input Data (b) SEDSC (c) SEDSC (γ↑) (d) MLC (e) PRO-DSC
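The block-diagonal regularizer of Eq. (10) reduces to a small eigenvalue computation. A minimal numpy sketch (function name ours; the codebase may use a truncated eigensolver instead of a full decomposition):

```python
import numpy as np

def bd_regularizer(C, k):
    """||A||_k of Eq. (10): sum of the k smallest eigenvalues of the
    unnormalized Laplacian of the symmetrized affinity A = (|C|+|C^T|)/2."""
    A = 0.5 * (np.abs(C) + np.abs(C).T)
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    eigvals = np.linalg.eigvalsh(L)      # eigenvalues in ascending order
    return float(eigvals[:k].sum())
```

By footnote 7, the regularizer is exactly zero when the affinity graph has at least k connected components, i.e., when C is (permuted) block-diagonal with k blocks.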
3.2 EXPERIMENTS ON REAL-WORLD DATA

To evaluate the performance of our proposed approach, we conduct experiments on six real-world image datasets, including CIFAR-10, CIFAR-20, CIFAR-100, ImageNet-Dogs-15, Tiny-ImageNet-200, and ImageNet-1k, with pretrained CLIP features⁸ (Radford et al., 2021), and compare against several baseline methods, including classical clustering algorithms, e.g., k-means (MacQueen, 1967) and spectral clustering (Shi & Malik, 2000), subspace clustering algorithms, e.g., EnSC (You et al., 2016a) and SENet (Zhang et al., 2021), deep clustering algorithms, e.g., SCAN (Van Gansbeke et al., 2020), TEMI (Adaloglou et al., 2023) and CPP (Chu et al., 2024), and deep subspace clustering algorithms, e.g., DSCNet (Ji et al., 2017) and EDESC (Cai et al., 2022). We measure clustering performance using clustering accuracy (ACC) and normalized mutual information (NMI), and report the experimental results in Table 1, where the results of our PRO-DSC are averaged over 10 trials (with ±std). Since for most baselines, except TEMI, the clustering performance with CLIP features has not been reported, we conduct experiments using the implementations provided by the authors; for TEMI, we directly cite the results from (Adaloglou et al., 2023).

⁸Please refer to Appendix B.3 for the results on other pre-trained models.

Table 1: Clustering performance comparison on the CLIP features. The best results are in bold and the second best results are underlined. "OOM" means out of GPU memory.

Method   | CIFAR-10          | CIFAR-20          | CIFAR-100         | TinyImgNet-200    | ImgNetDogs-15     | ImageNet-1k
         | ACC  NMI          | ACC  NMI          | ACC  NMI          | ACC  NMI          | ACC  NMI          | ACC  NMI
k-means  | 83.5 84.1         | 46.9 49.4         | 52.8 66.8         | 54.1 73.4         | 52.7 53.6         | 53.9 79.8
SC       | 79.8 84.8         | 53.3 61.6         | 66.4 77.0         | 62.8 77.0         | 48.3 45.7         | 56.0 81.2
SSCOMP   | 85.5 83.0         | 61.4 63.4         | 55.6 69.7         | 56.7 72.7         | 25.6 15.9         | 44.1 74.4
EnSC     | 95.4 90.3         | 61.0 68.7         | 67.0 77.1         | 64.5 77.7         | 57.9 56.0         | 59.7 83.7
SENet    | 91.2 82.5         | 65.3 68.6         | 67.0 74.7         | 63.9 76.6         | 58.7 55.3         | 53.2 78.1
SCAN     | 95.1 90.3         | 60.8 61.8         | 64.1 70.8         | 56.5 72.7         | 70.5 68.2         | 54.4 76.8
TEMI     | 96.9 92.6         | 61.8 64.5         | 73.7 79.9         | -    -            | -    -            | 64.0 -
CPP      | 96.8 92.3         | 67.7 70.5         | 75.4 82.0         | 63.4 75.5         | 83.0 81.5         | 62.0 82.1
EDESC    | 84.2 79.3         | 48.7 49.1         | 53.1 68.6         | 51.3 68.8         | 53.3 47.9         | 46.5 75.5
DSCNet   | 78.5 73.6         | 38.6 45.7         | 39.2 53.4         | 62.3 68.3         | 40.5 30.1         | OOM  OOM
PRO-DSC  | 97.2±0.2 92.8±0.4 | 71.6±1.2 73.2±0.5 | 77.3±1.0 82.4±0.5 | 69.8±1.1 80.5±0.7 | 84.0±0.6 81.2±0.8 | 65.0±1.2 83.4±0.6

Performance comparison. As shown in Table 1, our PRO-DSC significantly outperforms subspace clustering algorithms, e.g., SSCOMP, EnSC and SENet, and deep subspace clustering algorithms, e.g., DSCNet and EDESC. Moreover, our PRO-DSC obtains better performance than state-of-the-art deep clustering and deep manifold clustering methods, e.g., SCAN, TEMI and CPP.

Validation of the theoretical results.
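The clustering accuracy (ACC) reported above is the standard label-matching accuracy. A small self-contained sketch (function name ours; a brute-force matching is shown, while practical implementations use the Hungarian algorithm for many clusters):

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Clustering ACC: best agreement over all one-to-one matchings of
    predicted cluster ids to ground-truth class ids. Brute force over
    permutations, assuming equal numbers of clusters and classes; use
    scipy's linear_sum_assignment (Hungarian) when k is large."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(y_true)
    clusters = np.unique(y_pred)
    best = 0.0
    for perm in permutations(classes):
        mapping = dict(zip(clusters, perm))
        acc = np.mean([mapping[p] == t for p, t in zip(y_pred, y_true)])
        best = max(best, acc)
    return best
```

Because cluster labels are arbitrary, a prediction that swaps the two cluster ids still scores 100%.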
To validate whether the alignment emerges and whether representation collapse occurs during training, we compute G_b = Z_b⊤Z_b and M_b = (I−C_b)(I−C_b)⊤ on mini-batches at different epochs, and then measure the alignment error via ∥G_b M_b − M_b G_b∥_F and the eigenspace correlation via ⟨u_j, G_b u_j / ∥G_b u_j∥₂⟩, where u_j is the j-th ending eigenvector⁹ of M_b for j = 1, ···, n_b; we also plot the eigenvalues of G_b and M_b, where n_b is the sample size per mini-batch. Moreover, we record the empirical performance (ACC) and SRE on CIFAR-10 and CIFAR-100 under varying hyper-parameters α and γ to validate the condition in Theorem 2 for avoiding collapse. Experimental results are displayed in Figures 1 and 2. We observe that G_b and M_b become increasingly aligned and that the representations no longer collapse, provided that the parameters are properly set. More details are provided in Section B.2.

Evaluation of the learned representations. To quantitatively evaluate the effectiveness of the learned representations, we run k-means (MacQueen, 1967), spectral clustering (Shi & Malik, 2000), and EnSC (You et al., 2016a) on four datasets with three different features: a) the CLIP features, b) the representations learned via CPP, and c) the representations learned by our PRO-DSC. Experimental results are shown in Figure 6 (and more results are given in Table B.4 of Appendix B.3). We observe that the representations learned by our PRO-DSC outperform the CLIP features and the CPP representations in most cases across different clustering algorithms and datasets. Notably, the clustering accuracy with the representations learned by our PRO-DSC exceeds 90% on CIFAR-10 and 75% on CIFAR-100, whichever clustering algorithm is used. Besides, the clustering performance is further improved by using the learnable mapping h(·;Ψ), indicating good generalization ability.
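The alignment error used above is a plain commutator norm. A minimal numpy sketch (function name ours):

```python
import numpy as np

def alignment_error(Z, C):
    """||G M - M G||_F with G = Z^T Z and M = (I-C)(I-C)^T. The error is
    zero iff G and M commute, i.e., share an eigenbasis ("aligned")."""
    n = C.shape[0]
    G = Z.T @ Z
    M = (np.eye(n) - C) @ (np.eye(n) - C).T
    return float(np.linalg.norm(G @ M - M @ G, 'fro'))
```

For example, C = 0 gives M = I, which commutes with any G, so the error is exactly zero.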
Figure 6: Clustering accuracy with CLIP features and learned representations, for k-means, SC, EnSC, and h(·;Ψ), on (a) CIFAR-10, (b) CIFAR-100, (c) CIFAR-20, and (d) TinyImageNet-200.

⁹The eigenvectors are sorted according to the eigenvalues of M_b in ascending order.

Sensitivity to hyper-parameters. In Figure 2, we verify that our PRO-DSC yields satisfactory results when the conditions in Theorem 2 to avoid collapse are met. Moreover, we evaluate the performance sensitivity to the hyper-parameters γ and β by experiments on the CLIP features of CIFAR-10, CIFAR-100 and TinyImageNet-200 with varying γ and β. In Figure 7, we observe that the clustering performance remains satisfactory under a broad range of γ and β.

Figure 7: Evaluation of sensitivity to hyper-parameters γ and β on three datasets: (a) CIFAR-10, (b) CIFAR-100, (c) TinyImageNet.

Time and memory cost. The most time-consuming operations in our PRO-DSC are computing the term involving log det(·) and the term ∥A∥_κ involving eigenvalue decomposition. The time complexity for log det(·) is O(min{n_b³, d³}) due to the commutative property of the log det(·) function (Yu et al., 2020), and the time complexity for ∥A∥_κ is O(kn_b²).¹⁰ Therefore, the overall time complexity of our PRO-DSC is O(kn_b² + min{n_b³, d³}).
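The min{n_b³, d³} term comes from Sylvester's determinant identity, log det(I_{n_b} + α Z⊤Z) = log det(I_d + α ZZ⊤), so one can always work with the smaller of the two Gram matrices. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
d, nb = 8, 200                      # feature dim much smaller than batch size
Z = rng.standard_normal((d, nb))
alpha = 0.5

# Evaluating the logdet on the nb x nb matrix costs O(nb^3) ...
big = np.linalg.slogdet(np.eye(nb) + alpha * Z.T @ Z)[1]
# ... but the d x d version gives the identical value at O(d^3) cost.
small = np.linalg.slogdet(np.eye(d) + alpha * Z @ Z.T)[1]
assert np.isclose(big, small)
```

Here d = 8 versus n_b = 200, so the second form is roughly (200/8)³ ≈ 1.5×10⁴ times cheaper in floating-point work.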
Note that TEMI (Adaloglou et al., 2023) employs H = 50 cluster heads during training, adding further time and memory costs, and CPP (Chu et al., 2024) involves computing log det(·) n_b + 1 times, leading to complexity O((n_b + 1) min{n_b³, d³}). The computation time and memory costs are shown in Table 2; all experiments are conducted on a single NVIDIA RTX 3090 GPU and an Intel Xeon Platinum 8255C CPU. We see that our PRO-DSC significantly reduces the time consumption, particularly for datasets with a large number of clusters.

Table 2: Comparison on time (s) and memory cost (MiB). "OOM" means out of GPU memory.

Methods  | Complexity                 | CIFAR-10 Time/Mem | CIFAR-100 Time/Mem | ImageNet-1k Time/Mem
SEDSC    | O(N²d)                     | - / OOM           | - / OOM            | - / OOM
TEMI     | O(H n_b d²)                | 6.9 / 1,766       | 5.1 / 2,394        | 262.1 / 2,858
CPP      | O((n_b+1) min{n_b³, d³})   | 3.5 / 3,802       | 7.1 / 10,374       | 1441.2 / 22,433
PRO-DSC  | O(k n_b² + min{n_b³, d³})  | 4.5 / 2,158       | 4.0 / 2,328        | 90.0 / 2,335

Table 3: Ablation studies on different loss functions and regularizers, where L1 := −(1/2) log det(I + α Z_Θ⊤ Z_Θ) and L2 := (1/2)∥Z_Θ − Z_Θ C_Ψ∥²_F.

Loss terms                 | CIFAR-10 ACC/NMI | CIFAR-100 ACC/NMI | ImgNetDogs-15 ACC/NMI
L2 + ∥A∥_κ (w/o L1)        | 56.9 / 47.7      | 54.6 / 60.9       | 46.7 / 37.1
L1 + ∥A∥_κ (w/o L2)        | 69.6 / 56.4      | 64.7 / 71.7       | 10.5 / 1.7
L1 + L2 (no regularizer)   | 97.0 / 93.0      | 74.6 / 80.9       | 80.9 / 78.8
L1 + L2 + ∥C∥₁             | 97.0 / 92.6      | 75.2 / 81.1       | 81.3 / 79.1
L1 + L2 + ∥C∥²_F           | 97.0 / 92.6      | 75.2 / 80.9       | 80.9 / 78.8
L1 + L2 + ∥C∥∗             | 96.7 / 91.9      | 76.4 / 81.8       | 81.0 / 78.8
L1 + L2 + ∥A∥_κ (ours)     | 97.2 / 92.8      | 77.3 / 82.4       | 84.0 / 81.2

Ablation study. To verify the effectiveness of each component of the loss function of our PRO-DSC, we conduct a set of ablation studies with the CLIP features on CIFAR-10, CIFAR-100, and ImageNetDogs-15, and report the results in Table 3. The absence of the term L1 leads to catastrophic feature collapse (as demonstrated in Sec. 2.1), whereas without the self-expressive term L2, the model lacks a loss function for learning the self-expressive coefficients. In both cases, clustering performance drops significantly. More interestingly, when we replace the block-diagonal regularizer ∥A∥_κ with ∥C∥₁, ∥C∥∗, or ∥C∥²_F, or even drop the explicit regularizer r(·), the clustering performance remains satisfactory. This confirms that the choice of regularizer is not essential, owing to the structured representations learned by our PRO-DSC.

¹⁰For an N×N matrix, the complexity of computing its k eigenvalues by the Lanczos algorithm is O(kN²), and the complexity of computing its det(·) is O(N³).

4 RELATED WORK

Deep subspace clustering. To tackle complex real-world data, a number of Self-Expressive Deep Subspace Clustering (SEDSC) methods have been developed in the past few years, e.g., (Ji et al., 2017; Peng et al., 2018; Zhou et al., 2018; Zhang et al., 2019a;b; Dang et al., 2020; Peng et al., 2020; Lv et al., 2021; Cai et al., 2022; Wang et al., 2023b). The key step in SEDSC is to adopt a deep learning module to embed the input data into a feature space. For example, a deep autoencoder network is adopted in (Peng et al., 2018), and deep convolutional autoencoder networks are used in (Ji et al., 2017; Zhou et al., 2018; Zhang et al., 2019a). Unfortunately, as pointed out in (Haeffele et al., 2021), the existing SEDSC methods suffer from catastrophic feature collapse, and there is no evidence that the learned representations align with a UoS structure. To date, however, a principled deep subspace clustering framework has not been proposed.

Deep clustering.
Recently, most state-of-the-art deep clustering methods adopt a two-step procedure: in the first step, self-supervised pre-training, e.g., SimCLR (Chen et al., 2020a), MoCo (He et al., 2020), BYOL (Grill et al., 2020) and SwAV (Caron et al., 2020), is adopted to learn the representations; then deep clustering methods are incorporated to refine the representations via, e.g., pseudo-labeling (Caron et al., 2018; Van Gansbeke et al., 2020; Park et al., 2021; Niu et al., 2022), cluster-level contrastive learning (Li et al., 2021), local and global neighbor matching (Dang et al., 2021), graph contrastive learning (Zhong et al., 2021), or self-distillation (Adaloglou et al., 2023). Though the clustering performance has been improved remarkably, the underlying geometric structure of the learned representations remains unclear and ignored.

Representation learning with a UoS structure. Methods for representation learning that favor a UoS structure were pioneered in the supervised setting, e.g., (Lezama et al., 2018; Yu et al., 2020). In (Lezama et al., 2018), a nuclear-norm-based geometric loss is proposed to learn representations that lie on a union of orthogonal subspaces; in (Yu et al., 2020), a principled framework called Maximal Coding Rate Reduction (MCR²) is proposed to learn representations that favor the structure of a union of orthogonal subspaces (Wang et al., 2024). More recently, the MCR² framework has been modified to develop deep manifold clustering methods, e.g., NMCE (Li et al., 2022), MLC (Ding et al., 2023) and CPP (Chu et al., 2024).
In (Li et al., 2022), the MCR² framework is combined with contrastive learning to perform manifold clustering and representation learning; in (Ding et al., 2023), the MCR² framework is combined with doubly stochastic affinity learning to perform manifold linearizing and clustering; and in (Chu et al., 2024), features from a large pre-trained model (e.g., CLIP) are adopted to evaluate the performance of (Ding et al., 2023). While the MCR² framework has been modified in these methods for manifold clustering, none of them provides theoretical justification for yielding structured representations. Though our PRO-DSC shares the regularizer defined in Eq. (3) with MLC (Ding et al., 2023), we are the first to adopt it into the SEDSC framework to attack the catastrophic feature collapse issue with theoretical analysis.

5 CONCLUSION

We presented a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which jointly learns structured representations and self-expressive coefficients. Specifically, our PRO-DSC incorporates an effective regularization into the self-expressive model to prevent catastrophic representation collapse, with theoretical justification. Moreover, we demonstrated that our PRO-DSC is able to learn structured representations that form a desirable UoS structure, and we developed an efficient implementation based on reparameterization and differential programming. We conducted extensive experiments on synthetic data and six benchmark datasets to verify the effectiveness of our proposed approach and to validate our theoretical findings.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous reviewers for their constructive comments. This work is supported by the National Natural Science Foundation of China under Grant 61876022.
ETHICS STATEMENT

In this work, we aim to extend traditional subspace clustering algorithms by leveraging deep learning techniques to enhance their representation learning capabilities. Our research does not involve any human subjects, and we have carefully ensured that it poses no potential risks or harms. Additionally, there are no conflicts of interest, sponsorship concerns, or issues related to discrimination, bias, or fairness associated with this study. We have taken steps to address privacy and security concerns, and all data used comply with legal and ethical standards. Our work fully adheres to research integrity principles, and no ethical concerns have arisen during the course of this study.

REPRODUCIBILITY STATEMENT

To ensure the reproducibility of our work, we have released the source code. Theoretical proofs of the claims made in this paper are provided in Appendix A, and the empirical validation of these theoretical results is shown in Figures 2–4, with further detailed explanations in Appendix B.2. All datasets used in our experiments are publicly available, and we have provided a comprehensive description of the data processing steps in Appendix B.1. Additionally, detailed experimental settings and configurations are outlined in Appendix B.1 to facilitate the reproduction of our results.

REFERENCES

Nikolas Adaloglou, Felix Michels, Hamza Kalisch, and Markus Kollmann. Exploring the limits of deep image clustering using pretrained models. In British Machine Vision Conference, pp. 297–299, 2023.

Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 32:7411–7422, 2019.

Laurent Bako and René Vidal. Algebraic identification of MIMO SARX models. In International Workshop on Hybrid Systems: Computation and Control, pp. 43–57, 2008.

Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, pp. 153–160, 2006.

Jinyu Cai, Jicong Fan, Wenzhong Guo, Shiping Wang, Yunhe Zhang, and Zhao Zhang. Efficient deep embedded subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21–30, 2022.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In European Conference on Computer Vision, pp. 132–149, 2018.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9650–9660, 2021.

Jianlong Chang, Gaofeng Meng, Lingfeng Wang, Shiming Xiang, and Chunhong Pan. Deep self-evolution clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):809–823, 2018.

Guangliang Chen and Gilad Lerman. Spectral curvature clustering (SCC). International Journal of Computer Vision, 81(3):317–330, 2009.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597–1607, 2020a.

Ying Chen, Chun-Guang Li, and Chong You. Stochastic sparse subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4155–4164, 2020b.

Tianzhe Chu, Shengbang Tong, Tianjiao Ding, Xili Dai, Benjamin David Haeffele, René Vidal, and Yi Ma. Image clustering via the principle of rate reduction in the age of pretrained models. In International Conference on Learning Representations, 2024.

Joao Paulo Costeira and Takeo Kanade. A multibody factorization method for independently moving objects. International Journal of Computer Vision, 29:159–179, 1998.

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26:2292–2300, 2013.

Zhiyuan Dang, Cheng Deng, Xu Yang, and Heng Huang. Multi-scale fusion subspace clustering using similarity constraint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6657–6666, 2020.

Zhiyuan Dang, Cheng Deng, Xu Yang, Kun Wei, and Heng Huang. Nearest neighbor matching for deep clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13693–13702, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.

Li Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012.

Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, and Benjamin D. Haeffele. Unsupervised manifold linearizing and clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5450–5461, October 2023.

Ehsan Elhamifar and René Vidal. Sparse subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2790–2797, 2009.

Ehsan Elhamifar and René Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2765–2781, 2013.

Maryam Fazel, Haitham Hindi, and Stephen P Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In American Control Conference, volume 3, pp. 2156–2162, 2003.

Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284, 2020.

Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. Advances in Neural Information Processing Systems, pp. 6151–6159, 2017.

Benjamin D Haeffele, Chong You, and René Vidal. A critique of self-expressive deep subspace clustering. In International Conference on Learning Representations, 2021.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738, 2020.

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022.

Jeffrey Ho, Ming-Husang Yang, Jongwoo Lim, Kuang-Chih Lee, and David Kriegman. Clustering appearances of objects under varying illumination conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11–18, 2003.

Wei Hong, John Wright, Kun Huang, and Yi Ma. Multiscale hybrid linear models for lossy image representation. IEEE Transactions on Image Processing, 15(12):3655–3671, 2006.
Zhizhong Huang, Jie Chen, Junping Zhang, and Hongming Shan. Learning representation for clustering via prototype scattering and positive sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7509–7524, 2023.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456, 2015.

Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian Reid. Deep subspace clustering networks. Advances in Neural Information Processing Systems, pp. 24–33, 2017.

Yuheng Jia, Jianhong Cheng, Hui Liu, and Junhui Hou. Towards calibrated deep clustering network. In International Conference on Learning Representations, 2025.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.

José Lezama, Qiang Qiu, Pablo Musé, and Guillermo Sapiro. OLE: Orthogonal low-rank embedding - a plug and play geometric loss for deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8109–8118, 2018.

Chun-Guang Li, Chong You, and René Vidal. Structured sparse subspace clustering: A joint affinity learning and subspace clustering framework. IEEE Transactions on Image Processing, 26(6):2988–3001, 2017.

Chun-Guang Li, Chong You, and René Vidal. On geometric analysis of affine sparse subspace clustering. IEEE Journal on Selected Topics in Signal Processing, 12(6):1520–1533, 2018.

Yunfan Li, Peng Hu, Zitao Liu, Dezhong Peng, Joey Tianyi Zhou, and Xi Peng. Contrastive clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8547–8555, 2021.

Zengyi Li, Yubei Chen, Yann LeCun, and Friedrich T Sommer. Neural manifold clustering and embedding. arXiv preprint arXiv:2201.10000, 2022.

Derek Lim, René Vidal, and Benjamin D Haeffele. Doubly stochastic subspace clustering. arXiv preprint arXiv:2011.14859, 2020.

Guangcan Liu, Zhouchen Lin, and Yong Yu. Robust subspace segmentation by low-rank representation. In International Conference on Machine Learning, pp. 663–670, 2010.

Xin Liu, Zhongdao Wang, Ya-Li Li, and Shengjin Wang. Self-supervised learning via maximum entropy coding. Advances in Neural Information Processing Systems, 35:34091–34105, 2022.

Canyi Lu, Hai Min, Zhong-Qiu Zhao, Lin Zhu, De-Shuang Huang, and Shuicheng Yan. Robust and efficient subspace segmentation via least squares regression. In European Conference on Computer Vision, pp. 347–360, 2012.

Canyi Lu, Jiashi Feng, Zhouchen Lin, Tao Mei, and Shuicheng Yan. Subspace clustering by block diagonal representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):487–501, 2018.

Juncheng Lv, Zhao Kang, Xiao Lu, and Zenglin Xu. Pseudo-supervised deep subspace clustering. IEEE Transactions on Image Processing, 30:5252–5263, 2021.

Yi Ma, Harm Derksen, Wei Hong, and John Wright. Segmentation of multivariate mixed data via lossy coding and compression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9):1546–1562, 2007.

J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297, 1967.

Ryan McConville, Raul Santos-Rodriguez, Robert J Piechocki, and Ian Craddock. N2D: (not too) deep clustering via clustering the local manifold of an autoencoded embedding. In Proceedings of the International Conference on Pattern Recognition, pp. 5145–5152, 2021.

Brian McWilliams and Giovanni Montana. Subspace clustering of high dimensional data: a predictive approach. Data Mining and Knowledge Discovery, 28(3):736–772, 2014.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning, pp. 807–814, 2010.

Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729, 2008.

Chuang Niu, Hongming Shan, and Ge Wang. SPICE: Semantic pseudo-labeling for image clustering. IEEE Transactions on Image Processing, 31:7264–7278, 2022.

Foivos Ntelemis, Yaochu Jin, and Spencer A Thomas. Information maximization clustering via multi-view self-labelling. Knowledge-Based Systems, 250:109042, 2022.

Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mido Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. DINOv2: Learning robust visual features without supervision. Transactions on Machine Learning Research, 2024.

Sungwon Park, Sungwon Han, Sundong Kim, Danu Kim, Sungkyu Park, Seunghoon Hong, and Meeyoung Cha. Improving unsupervised image clustering with robust learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12278–12287, 2021.

Vishal M Patel and René Vidal. Kernel sparse subspace clustering. In Proceedings of the IEEE International Conference on Image Processing, pp. 2849–2853, 2014.

Vishal M Patel, Hien Van Nguyen, and René Vidal. Latent space sparse subspace clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 225–232, 2013.

Vishal M Patel, Hien Van Nguyen, and René Vidal. Latent space sparse and low-rank subspace clustering. IEEE Journal of Selected Topics in Signal Processing, 9(4):691–701, 2015.
Xi Peng, Jiashi Feng, Shijie Xiao, Wei-Yun Yau, Joey Tianyi Zhou, and Songfan Yang. Structured autoencoders for subspace clustering. IEEE Transactions on Image Processing, 27(10):5076–5086, 2018.

Xi Peng, Jiashi Feng, Joey Tianyi Zhou, Yingjie Lei, and Shuicheng Yan. Deep subspace clustering. IEEE Transactions on Neural Networks and Learning Systems, 31(12):5509–5521, 2020.

Daniel Pimentel-Alarcon and Robert Nowak. The information-theoretic requirements of subspace clustering with missing data. In International Conference on Machine Learning, pp. 802–810, 2016.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763, 2021.

Shankar Rao, Roberto Tron, René Vidal, and Yi Ma. Motion segmentation in the presence of outlying, incomplete, or corrupted trajectories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1832–1845, 2010.

Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.

Mahdi Soltanolkotabi and Emmanuel J Candes. A geometric analysis of subspace clustering with outliers. Annals of Statistics, 40(4):2195–2238, 2012.

Manolis Tsakiris and René Vidal. Algebraic clustering of affine subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2):482–489, 2017.

Manolis Tsakiris and René Vidal. Theoretical analysis of sparse subspace clustering with missing entries. In International Conference on Machine Learning, pp. 4975–4984, 2018.

Paul Tseng. Nearest q-flat to m points. Journal of Optimization Theory and Applications, 105(1):249–252, 2000.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE.
Journal of Machine Learning Research, 9(11), 2008.

Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. SCAN: Learning to classify images without labels. In European Conference on Computer Vision, pp. 268–285, 2020.

René Vidal. Identification of PWARX hybrid models with unknown and possibly different orders. In Proceedings of the American Control Conference, pp. 547–552, 2004.

René Vidal, Yi Ma, and Shankar Sastry. Generalized Principal Component Analysis (GPCA). IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(12):1–15, 2005.

René Vidal, Roberto Tron, and Richard Hartley. Multiframe motion segmentation with missing data using PowerFactorization, and GPCA. International Journal of Computer Vision, 79(1):85–105, 2008.

Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.

Libin Wang, Yulong Wang, Hao Deng, and Hong Chen. Attention reweighted sparse subspace clustering. Pattern Recognition, 139:109438, 2023a.

Peng Wang, Huikang Liu, Druv Pai, Yaodong Yu, Zhihui Zhu, Qing Qu, and Yi Ma. A global geometric analysis of maximal coding rate reduction. In International Conference on Machine Learning, 2024.

Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, and Guoren Wang. Self-supervised information bottleneck for deep multi-view subspace clustering. IEEE Transactions on Image Processing, 32:1555–1567, 2023b.

Yu-Xiang Wang and Huan Xu. Noisy sparse subspace clustering. Journal of Machine Learning Research, 17(12):1–41, 2016.

Lai Wei, Zhengwei Chen, Jun Yin, Changming Zhu, Rigui Zhou, and Jin Liu. Adaptive graph convolutional subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6262–6271, 2023.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms.
arXiv preprint arXiv:1708.07747, 2017.

Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, pp. 478–487, 2016.

Jianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5147–5156, 2016.

Chong You, Chun-Guang Li, Daniel Robinson, and René Vidal. Oracle based active set algorithm for scalable elastic net subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3928–3937, 2016a.

Chong You, Daniel Robinson, and René Vidal. Scalable sparse subspace clustering by orthogonal matching pursuit. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3918–3927, 2016b.

Yaodong Yu, Kwan Ho Ryan Chan, Chong You, Chaobing Song, and Yi Ma. Learning diverse and discriminative representations via the principle of maximal coding rate reduction. Advances in Neural Information Processing Systems, 33:9422–9434, 2020.

Pengxin Zeng, Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, and Xi Peng. Deep fair clustering via maximizing and minimizing mutual information: Theory, algorithm and metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 23986–23995, 2023.

Hongjing Zhang and Ian Davidson. Deep fair discriminative clustering. arXiv preprint arXiv:2105.14146, 2021.

Junjian Zhang, Chun-Guang Li, Chong You, Xianbiao Qi, Honggang Zhang, Jun Guo, and Zhouchen Lin. Self-supervised convolutional subspace clustering network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5473–5482, 2019a.

Shangzhi Zhang, Chong You, René Vidal, and Chun-Guang Li. Learning a self-expressive network for subspace clustering.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12393–12403, 2021.

Teng Zhang, Arthur Szlam, and Gilad Lerman. Median k-flats for hybrid linear modeling with many outliers. In IEEE/CVF International Conference on Computer Vision Workshops, pp. 234–241, 2009.

Tong Zhang, Pan Ji, Mehrtash Harandi, Wenbing Huang, and Hongdong Li. Neural collaborative subspace clustering. In International Conference on Machine Learning, pp. 7384–7393, 2019b.

Chen Zhao, Chun-Guang Li, Wei He, and Chong You. Deep self-expressive learning. In The First Conference on Parsimony and Learning, volume 234, pp. 228–247, 2024.

Huasong Zhong, Jianlong Wu, Chong Chen, Jianqiang Huang, Minghua Deng, Liqiang Nie, Zhouchen Lin, and Xian-Sheng Hua. Graph contrastive clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9224–9233, 2021.

Pan Zhou, Yunqing Hou, and Jiashi Feng. Deep adversarial subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1596–1604, 2018.

SUPPLEMENTARY MATERIAL FOR "EXPLORING A PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING"

The supplementary materials are divided into three parts. In Section A, we present the proofs of our theoretical results. In Section B, we present the supplementary materials for experiments, including experimental details (Sec. B.1), empirical validation of our theoretical results (Sec. B.2), and more experimental results (Sec. B.3). In Section C, we discuss the limitations and failure cases of PRO-DSC.

A PROOFS OF MAIN RESULTS

As a preliminary, we start by introducing a lemma from (Haeffele et al., 2021) and provide its proof for the convenience of the readers.

Lemma 1 (Haeffele et al., 2021). The rows of the optimal solution $Z$ of problem (2) are the eigenvectors associated with the smallest eigenvalues of $(I-C)(I-C)^\top$.

Proof.
We note that
$$\|Z - ZC\|_F^2 = \mathrm{Tr}\left(Z(I-C)(I-C)^\top Z^\top\right) = \sum_{i=1}^{d} z^{(i)} (I-C)(I-C)^\top z^{(i)\top},$$
where $z^{(i)}$ is the $i$th row of $Z$; thus problem (2) is reformulated as
$$\min_{\{z^{(i)}\}_{i=1}^{d},\,C} \ \frac{1}{2}\sum_{i=1}^{d} z^{(i)}(I-C)(I-C)^\top z^{(i)\top} + \beta \cdot r(C) \quad \text{s.t.} \quad \|Z\|_F^2 = N. \tag{11}$$
Without loss of generality, the magnitude of each row of $Z$ is assumed to be fixed, i.e., $\|z^{(i)}\|_2^2 = \tau_i$, $i = 1, \ldots, d$, where $\sum_{i=1}^{d} \tau_i = N$. Then, the optimization problem becomes
$$\min_{\{z^{(i)}\}_{i=1}^{d},\,C} \ \frac{1}{2}\sum_{i=1}^{d} z^{(i)}(I-C)(I-C)^\top z^{(i)\top} + \beta \cdot r(C) \quad \text{s.t.} \quad \|z^{(i)}\|_2^2 = \tau_i,\ i = 1, \ldots, d. \tag{12}$$
The Lagrangian of problem (12) is
$$\mathcal{L}(\{z^{(i)}\}_{i=1}^{d}, C, \{\nu_i\}_{i=1}^{d}) := \frac{1}{2}\sum_{i=1}^{d} z^{(i)}(I-C)(I-C)^\top z^{(i)\top} + \beta \cdot r(C) + \frac{1}{2}\sum_{i=1}^{d} \nu_i \left(\|z^{(i)}\|_2^2 - \tau_i\right), \tag{13}$$
where $\{\nu_i\}_{i=1}^{d}$ are the Lagrangian multipliers. The necessary conditions for an optimal solution are
$$\nabla_{z^{(i)}} \mathcal{L} = z^{(i)}(I-C)(I-C)^\top + \nu_i z^{(i)} = \mathbf{0}, \quad \|z^{(i)}\|_2^2 = \tau_i, \quad i = 1, \ldots, d, \tag{14}$$
which implies that the optimal solutions $z^{(i)}$ are eigenvectors of $(I-C)(I-C)^\top$. By further considering the objective function, the optimal $z^{(i)}$ should be the eigenvectors w.r.t. the smallest eigenvalues of $(I-C)(I-C)^\top$ for all $i \in \{1, \ldots, d\}$. The corresponding optimal value is $\frac{1}{2}\lambda_{\min}\!\left((I-C)(I-C)^\top\right)\sum_{i=1}^{d}\tau_i + \beta \cdot r(C) = \frac{N}{2}\lambda_{\min}\!\left((I-C)(I-C)^\top\right) + \beta \cdot r(C)$, which is irrelevant to $\{\tau_i\}_{i=1}^{d}$. Therefore, we conclude that the rows of the optimal solution $Z$ to problem (2) are eigenvectors associated with the smallest eigenvalues of $(I-C)(I-C)^\top$.

Lemma A1. Suppose that matrices $A, B \in \mathbb{R}^{n \times n}$ are symmetric. Then $AB = BA$ if and only if $A$ and $B$ can be diagonalized simultaneously by some $U \in \mathcal{O}(n)$, where $\mathcal{O}(n)$ is the orthogonal group.

Now we present our theorem about the optimal solution of problem PRO-DSC in (4) with its proof.

Theorem 1. Denote the optimal solution of PRO-DSC in (4) as $(Z_\star, C_\star)$. Then $G_\star$ and $M_\star$ share eigenspaces, where $G_\star := Z_\star^\top Z_\star$ and $M_\star := (I-C_\star)(I-C_\star)^\top$, i.e., $G_\star$ and $M_\star$ can be diagonalized simultaneously by some $U \in \mathcal{O}(N)$, where $\mathcal{O}(N)$ is the orthogonal group.

Proof.
We first consider the subproblem of the PRO-DSC problem in (4) with respect to $Z$ and prove that for all $C \in \mathbb{R}^{N \times N}$, the corresponding optimal $Z_{\star,C}$ satisfies $G_{\star,C} M = M G_{\star,C}$, where $G_{\star,C} = Z_{\star,C}^\top Z_{\star,C}$ and $M := (I-C)(I-C)^\top$, implying that $G_{\star,C}$ and $M$ share eigenspaces. Then, we will demonstrate that $G_\star$ and $M_\star$ share eigenspaces.

The subproblem with respect to $Z_C$ is reformulated into the following semi-definite program:
$$\min_{G_C} \ -\frac{1}{2}\log\det(I + \alpha G_C) + \frac{\gamma}{2}\mathrm{tr}(G_C M) \quad \text{s.t.} \quad G_C \succeq 0, \ \mathrm{tr}(G_C) = N, \tag{15}$$
which has the Lagrangian
$$\mathcal{L}(G_C, \Delta, \nu) := -\frac{1}{2}\log\det(I + \alpha G_C) + \frac{\gamma}{2}\mathrm{tr}(G_C M) - \mathrm{tr}(\Delta G_C) + \frac{\nu}{2}\left(\mathrm{tr}(G_C) - N\right), \tag{16}$$
where the scalar $\nu$ and the $N \times N$ symmetric matrix $\Delta$ are Lagrange multipliers. The KKT conditions are:
$$-\frac{\alpha}{2}(I + \alpha G_{\star,C})^{-1} + \frac{\gamma}{2} M - \Delta_\star + \frac{\nu_\star}{2} I = \mathbf{0}, \tag{17}$$
$$G_{\star,C} \succeq 0, \tag{18}$$
$$\mathrm{tr}(G_{\star,C}) = N, \tag{19}$$
$$\Delta_\star \succeq 0, \tag{20}$$
$$\Delta_\star G_{\star,C} = \mathbf{0}, \tag{21}$$
which are sufficient and necessary conditions for the global optimality of the solution $G_{\star,C}$. From Eqs. (18), (20) and (21), we have that $\Delta_\star G_{\star,C} - G_{\star,C}\Delta_\star = \Delta_\star G_{\star,C} - (\Delta_\star G_{\star,C})^\top = \mathbf{0}$, implying that $\Delta_\star$ and $G_{\star,C}$ share eigenspaces. By the eigenvalue decompositions $\Delta_\star = Q\Lambda_{\Delta_\star}Q^\top$ and $G_{\star,C} = Q\Lambda_{G_{\star,C}}Q^\top$, where $\Lambda_{\Delta_\star}$ and $\Lambda_{G_{\star,C}}$ are diagonal matrices, we have
$$2 \cdot Q\Lambda_{\Delta_\star}Q^\top = -\alpha Q(I + \alpha\Lambda_{G_{\star,C}})^{-1}Q^\top + \gamma M + \nu_\star I \tag{22}$$
$$\Rightarrow \quad \gamma M + \nu_\star I = Q\left(2\Lambda_{\Delta_\star} + \alpha\left(I + \alpha\Lambda_{G_{\star,C}}\right)^{-1}\right)Q^\top, \tag{23}$$
where the first equality follows from Eq. (17). Since $2\Lambda_{\Delta_\star} + \alpha(I + \alpha\Lambda_{G_{\star,C}})^{-1}$ is a diagonal matrix, $\gamma M + \nu_\star I$ can be diagonalized by $Q$. In other words, for every $M \in \mathbb{S}_N^+$ in problem (15), $M$ shares eigenspaces with the corresponding optimal solution $G_{\star,C}$.

Next, denote $(Z_\star, C_\star)$ as the optimal solution of problem (4), $\mathcal{C} := \{Z \mid \|Z\|_F^2 = N\}$ as the feasible set, and $f(\cdot,\cdot)$ as the objective function. Since $Z_\star = \arg\min_{Z \in \mathcal{C}} f(Z, C_\star)$ — otherwise the optimality of $(Z_\star, C_\star)$ would be contradicted — we conclude that $G_\star$ and $M_\star$ share eigenspaces, where $M_\star := (I-C_\star)(I-C_\star)^\top$.

Theorem 2. Suppose that $G$ and $M$ are aligned in the same eigenspaces and $\gamma < \frac{1}{\lambda_{\max}(M)}\frac{\alpha^2}{\alpha + \min\{d/N,\,1\}}$. Then we have that: a) $\mathrm{rank}(Z_\star) = \min\{d, N\}$, and b) the singular values $\sigma^{(i)}_{Z_\star} = \sqrt{\frac{1}{\gamma\lambda^{(i)}_M + \nu_\star} - \frac{1}{\alpha}}$ for all $i = 1, \ldots, \min\{d, N\}$, where $Z_\star$ and $\nu_\star$ are the primal optimal solution and dual optimal solution, respectively.
Proof. Since $\|Z - ZC\|_F^2 = \mathrm{Tr}\left(Z^\top Z(I-C)(I-C)^\top\right)$ and $\|Z\|_F^2 = \mathrm{Tr}(Z^\top Z)$, problem (5) is equivalent to
$$\min_{G} \ -\frac{1}{2}\log\det(I + \alpha G) + \frac{\gamma}{2}\mathrm{Tr}(GM) \quad \text{s.t.} \quad \mathrm{Tr}(G) = N, \ G \succeq 0, \tag{24}$$
where $G := Z^\top Z$ and $M := (I-C)(I-C)^\top$. Since $G$ and $M$ have aligned eigenspaces, we can diagonalize $G$ and $M$ simultaneously by an orthogonal matrix $U$, i.e., $G = U\Lambda_G U^\top$ and $M = U\Lambda_M U^\top$. Therefore, problem (24) can be transformed into the following eigenvalue optimization problem:
$$\min_{\{\lambda^{(i)}_G\}_{i=1}^{\min\{d,N\}}} \ \sum_{i=1}^{\min\{d,N\}}\left(-\frac{1}{2}\log\left(1 + \alpha\lambda^{(i)}_G\right) + \frac{\gamma}{2}\lambda^{(i)}_M\lambda^{(i)}_G\right) \quad \text{s.t.} \quad \sum_{i=1}^{\min\{d,N\}}\lambda^{(i)}_G = N, \quad \lambda^{(i)}_G \geq 0 \ \text{ for all } i = 1, \ldots, \min\{d,N\}, \tag{25}$$
where $\{\lambda^{(1)}_M, \cdots, \lambda^{(\min\{d,N\})}_M\}$ are the diagonal entries of $\Lambda_M$ and $\{\lambda^{(1)}_G, \cdots, \lambda^{(\min\{d,N\})}_G\}$ are the diagonal entries of $\Lambda_G$. Remarkably, problem (25) is a convex optimization problem; thus the KKT conditions are sufficient and necessary for a global minimizer. The Lagrangian of problem (25) is
$$\mathcal{L}\left(\{\lambda^{(i)}_G\}_{i=1}^{\min\{d,N\}}, \{\mu_i\}_{i=1}^{\min\{d,N\}}, \nu\right) := \sum_{i=1}^{\min\{d,N\}}\left(-\frac{1}{2}\log\left(1 + \alpha\lambda^{(i)}_G\right) + \frac{\gamma}{2}\lambda^{(i)}_M\lambda^{(i)}_G - \mu_i\lambda^{(i)}_G\right) + \frac{\nu}{2}\left(\sum_{i=1}^{\min\{d,N\}}\lambda^{(i)}_G - N\right), \tag{26}$$
where $\mu_i \geq 0$, $i = 1, \ldots, \min\{d,N\}$, and $\nu$ are the Lagrangian multipliers. The KKT conditions are as follows:
$$\nabla_{\lambda^{(i)}_{G_\star}}\mathcal{L} = 0, \quad \forall i = 1, \ldots, \min\{d,N\}, \tag{27}$$
$$\lambda^{(i)}_{G_\star} \geq 0, \quad \forall i = 1, \ldots, \min\{d,N\}, \tag{28}$$
$$\sum_{i=1}^{\min\{d,N\}}\lambda^{(i)}_{G_\star} = N, \tag{29}$$
$$\mu_{i\star} \geq 0, \quad \forall i = 1, \ldots, \min\{d,N\}, \tag{30}$$
$$\mu_{i\star}\lambda^{(i)}_{G_\star} = 0, \quad \forall i = 1, \ldots, \min\{d,N\}. \tag{31}$$
The stationarity condition in (27) is equivalent to
$$\mu_{i\star} = \frac{1}{2}\left(\nu_\star + \gamma\lambda^{(i)}_M - \frac{\alpha}{1 + \alpha\lambda^{(i)}_{G_\star}}\right). \tag{32}$$
Using Eqs. (28) and (30)–(32), we arrive at the following two cases:
$$\mu_{i\star} > 0 \ \Rightarrow \ \lambda^{(i)}_{G_\star} = 0, \quad \frac{1}{\nu_\star + \gamma\lambda^{(i)}_M} - \frac{1}{\alpha} < 0, \tag{33}$$
$$\mu_{i\star} = 0 \ \Rightarrow \ \lambda^{(i)}_{G_\star} > 0, \quad \lambda^{(i)}_{G_\star} = \frac{1}{\nu_\star + \gamma\lambda^{(i)}_M} - \frac{1}{\alpha} > 0. \tag{34}$$
From the above two cases, we conclude that
$$\lambda^{(i)}_{G_\star} = \max\left\{0, \ \frac{1}{\nu_\star + \gamma\lambda^{(i)}_M} - \frac{1}{\alpha}\right\}, \tag{35}$$
where $\nu_\star$ satisfies
$$\sum_{i=1}^{\min\{d,N\}}\max\left\{0, \ \frac{1}{\nu_\star + \gamma\lambda^{(i)}_M} - \frac{1}{\alpha}\right\} = N. \tag{36}$$
Given that $\gamma < (\alpha - \nu_\star)/\lambda_{\max}(M)$, we have $\frac{1}{\nu_\star + \gamma\lambda^{(i)}_M} - \frac{1}{\alpha} > 0$ for all $i = 1, \ldots, \min\{d,N\}$.
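Equations (35)–(36) amount to a one-dimensional root-finding problem in $\nu_\star$: the left-hand side of (36) is monotonically decreasing in $\nu_\star$, so bisection suffices. A minimal numerical sketch, using numpy and toy eigenvalues of $M$ (not the paper's training code; the function name `waterfill_eigvals` is ours):

```python
import numpy as np

def waterfill_eigvals(lam_M, N, alpha, gamma):
    """Solve Eqs. (35)-(36): lam_G(i) = max(0, 1/(nu + gamma*lam_M(i)) - 1/alpha),
    with nu chosen by bisection so that sum_i lam_G(i) = N."""
    def total(nu):
        return np.maximum(0.0, 1.0 / (nu + gamma * lam_M) - 1.0 / alpha).sum()
    lo, hi = 1e-8, 1e6          # total(nu) is decreasing in nu on this bracket
    for _ in range(200):        # bisection on the trace constraint
        mid = 0.5 * (lo + hi)
        if total(mid) > N:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    return np.maximum(0.0, 1.0 / (nu + gamma * lam_M) - 1.0 / alpha), nu

lam_M = np.linspace(0.1, 1.0, 8)   # toy eigenvalues of (I-C)(I-C)^T, ascending
lam_G, nu = waterfill_eigvals(lam_M, N=8.0, alpha=5.0, gamma=0.3)
assert abs(lam_G.sum() - 8.0) < 1e-4   # trace constraint Tr(G) = N holds
assert (lam_G > 0).all()               # gamma satisfies the no-collapse condition
```

Because $\gamma = 0.3 < \alpha^2/(\alpha + 1) \approx 4.17$ here (with $\lambda_{\max}(M) = 1$), all $\lambda^{(i)}_{G_\star}$ come out strictly positive, matching part a) of Theorem 2.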
Therefore, for the optimal solution $Z_\star$ of problem (5), we conclude that $\mathrm{rank}(Z_\star) = \min\{d, N\}$ and the singular values are $\sigma^{(i)}_{Z_\star} = \sqrt{\frac{1}{\gamma\lambda^{(i)}_M + \nu_\star} - \frac{1}{\alpha}}$ for all $i = 1, \ldots, \min\{d, N\}$.

Note that the results established above rely on the condition $\gamma\lambda_{\max}(M) < \alpha - \nu_\star$, where $\nu_\star$ is the optimal Lagrangian multiplier, a fixed value determined by $\alpha$, $\gamma$, and $\lambda_{\max}(M)$. Next, we develop an upper bound for $\nu_\star$ and show that the condition $\gamma\lambda_{\max}(M) < \alpha - \nu_\star$ can be ensured by adjusting only the hyper-parameters $\alpha$ and $\gamma$. From Eq. (36), we can easily find an upper bound for $\nu_\star$:
$$N = \sum_{i=1}^{\min\{d,N\}}\max\left\{0, \ \frac{1}{\nu_\star + \gamma\lambda^{(i)}_M} - \frac{1}{\alpha}\right\} \leq \frac{\min\{d, N\}}{\nu_\star + \gamma\lambda_{\min}(M)} - \frac{\min\{d, N\}}{\alpha} \tag{37}$$
$$\Rightarrow \quad \nu_\star \leq \frac{1}{\frac{N}{\min\{d,N\}} + \frac{1}{\alpha}} - \gamma\lambda_{\min}(M). \tag{38}$$
Therefore, we can find a tighter bound relating $\gamma\lambda_{\max}(M)$ and $\alpha - \nu_\star$:
$$\gamma\lambda_{\max}(M) < \frac{\alpha^2}{\alpha + \min\left\{\frac{d}{N}, 1\right\}} < \frac{\alpha^2}{\alpha + \min\left\{\frac{d}{N}, 1\right\}} + \gamma\lambda_{\min}(M) \leq \alpha - \nu_\star, \tag{39}$$
which means that the condition $\gamma\lambda_{\max}(M) < \alpha - \nu_\star$ can be rewritten as
$$\gamma < \frac{1}{\lambda_{\max}(M)}\,\frac{\alpha^2}{\alpha + \min\left\{\frac{d}{N}, 1\right\}}. \tag{40}$$

Remark 3. We note that (25) is a reverse water-filling problem, where the water level is controlled by $1/\alpha$, as shown in Figure A.1. When $G$ and $M$ have aligned eigenspaces and $\gamma < (\alpha - \nu_\star)/\lambda_{\max}(M)$, we have $\mathrm{rank}(Z_\star) = \min\{d, N\}$ and $\lambda^{(i)}_{G_\star} \neq 0$ for all $i \leq \min\{d, N\}$. When $\gamma \geq (\alpha - \nu_\star)/\lambda_{\max}(M)$, the non-zero $\lambda^{(i)}_G$ vanish first for the larger $\lambda^{(i)}_M$.

[Figure A.1: Illustration of the optimal solution for problem (25). The primal problem can be transformed into a classical reverse water-filling problem, with the water level controlled by $1/\alpha$.]

Theorem 3. Suppose that the sufficient conditions to prevent catastrophic feature collapse are satisfied. Without loss of generality, we further assume that the columns of the matrix $Z$ are arranged into $k$ blocks according to a certain $N \times N$ permutation matrix $\Gamma$, i.e., $Z = [Z_1, Z_2, \cdots, Z_k]$.
Then the condition under which PRO-DSC promotes the optimal solution $(Z_\star, C_\star)$ to have the desired structure (i.e., $Z_\star^\top Z_\star$ and $C_\star$ are block-diagonal) is that $\langle (I-C)(I-C)^\top, G - G^* \rangle \to 0$, where
$$G^* := \mathrm{Diag}\left(G_{11}, G_{22}, \cdots, G_{kk}\right),$$
and $G_{jj}$ is the block Gram matrix corresponding to $Z_j$.

Proof. We begin with an analysis of the first two terms of the loss function $\tilde{\mathcal{L}} := \mathcal{L}_1 + \gamma\mathcal{L}_2$, where
$$\mathcal{L}_1 := -\frac{1}{2}\log\det\left(I + \alpha(Z\Gamma)^\top(Z\Gamma)\right) = -\frac{1}{2}\log\det(I + \alpha G),$$
$$\mathcal{L}_2 := \frac{1}{2}\|Z\Gamma - Z\Gamma\Gamma^\top C\Gamma\|_F^2 = \frac{1}{2}\|Z - ZC\|_F^2 = \frac{1}{2}\mathrm{Tr}\left(G(I-C)(I-C)^\top\right),$$
since $\Gamma^\top\Gamma = \Gamma\Gamma^\top = I$. Thus, we have
$$\tilde{\mathcal{L}}(G, C) = \frac{\gamma}{2}\mathrm{Tr}\left(G(I-C)(I-C)^\top\right) - \frac{1}{2}\log\det(I + \alpha G), \tag{41}$$
which is a convex function with respect to (w.r.t.) $G$ and $C$ separately. By the property of convex functions w.r.t. $C$, we have
$$\tilde{\mathcal{L}}(G, C) \geq \tilde{\mathcal{L}}(G^*, C^*) + \left\langle \nabla_C\tilde{\mathcal{L}}(G^*, C^*),\ C - C^* \right\rangle + \left\langle \frac{\gamma}{2}(I-C)(I-C)^\top,\ G - G^* \right\rangle$$
$$= \tilde{\mathcal{L}}(G^*, C^*) + \left\langle -\gamma G^*(I-C^*),\ C - C^* \right\rangle + \left\langle \frac{\gamma}{2}(I-C)(I-C)^\top,\ G - G^* \right\rangle,$$
where $C^* = \mathrm{Diag}\left(C_{11}, C_{22}, \cdots, C_{kk}\right)$ with the blocks associated with the partition $Z = [Z_1, Z_2, \cdots, Z_k]$. Since $\langle G^*(I-C^*), C - C^* \rangle = 0$ due to the complementarity between $G^*$ and $I - C^*$, we have
$$\tilde{\mathcal{L}}(G, C) \geq \tilde{\mathcal{L}}(G^*, C^*) + \left\langle \frac{\gamma}{2}(I-C)(I-C)^\top,\ G - G^* \right\rangle.$$
It is easy to see that if $\langle (I-C)(I-C)^\top, G - G^* \rangle \to 0$, then we will have
$$\tilde{\mathcal{L}}(G, C) \geq \tilde{\mathcal{L}}(G^*, C^*), \tag{42}$$
where the equality holds only when $G = G^*$ and $C = C^*$. Furthermore, if the regularizer $r(\cdot)$ satisfies the extended block-diagonal condition defined in (Lu et al., 2018), then we have $r(C) \geq r(C^*)$, where the equality holds if and only if $C = C^*$. Therefore, we have
$$\mathcal{L}(G, C) = \tilde{\mathcal{L}}(G, C) + \beta \cdot r(C) \geq \tilde{\mathcal{L}}(G^*, C^*) + \beta \cdot r(C^*) = \mathcal{L}(G^*, C^*). \tag{43}$$
Thus we conclude that minimizing the loss function $\mathcal{L}(G, C) = \tilde{\mathcal{L}}(G, C) + \beta \cdot r(C)$ promotes the optimal solution $(G_\star, C_\star)$ to have block-diagonal structure. We note that the Gram matrix being block-diagonal, i.e., $G_\star = G^*$, implies that $Z_{\star,j_1}^\top Z_{\star,j_2} = \mathbf{0}$ for all $1 \leq j_1 < j_2 \leq k$, which corresponds to the subspaces associated with the blocks $Z_{\star,j}$ being orthogonal to each other.
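As a quick sanity check of the first step of this proof — that both loss terms are invariant under the permutation $\Gamma$ — the identities for $\mathcal{L}_1$ and $\mathcal{L}_2$ can be confirmed numerically with random data (a numpy sketch under assumed toy dimensions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 6
Z = rng.standard_normal((d, N))
C = rng.standard_normal((N, N))
alpha = 2.0

perm = rng.permutation(N)
Gamma = np.eye(N)[:, perm]   # N x N permutation matrix

# L1: -1/2 log det(I + alpha (Z Gamma)^T (Z Gamma)) equals the unpermuted value,
# since Gamma^T (I + alpha Z^T Z) Gamma is a similarity transform.
L1  = -0.5 * np.linalg.slogdet(np.eye(N) + alpha * Z.T @ Z)[1]
L1p = -0.5 * np.linalg.slogdet(np.eye(N) + alpha * (Z @ Gamma).T @ (Z @ Gamma))[1]

# L2: 1/2 ||Z Gamma - Z Gamma (Gamma^T C Gamma)||_F^2 equals 1/2 ||Z - Z C||_F^2,
# since (Z - Z C) Gamma has the same Frobenius norm as Z - Z C.
L2  = 0.5 * np.linalg.norm(Z - Z @ C, "fro") ** 2
L2p = 0.5 * np.linalg.norm(Z @ Gamma - Z @ Gamma @ (Gamma.T @ C @ Gamma), "fro") ** 2

assert abs(L1 - L1p) < 1e-8 and abs(L2 - L2p) < 1e-8
```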
B EXPERIMENTAL SUPPLEMENTARY MATERIAL

B.1 EXPERIMENTAL DETAILS

B.1.1 SYNTHETIC DATA

As shown in Figure 5a (top row), data points are generated from two manifolds. The first manifold (colored in purple) is generated by sampling 100 data points from
$$x = \begin{bmatrix} \cos\left(\frac{1}{5}\sin(5\varphi)\right)\cos\varphi \\ \cos\left(\frac{1}{5}\sin(5\varphi)\right)\sin\varphi \\ \sin\left(\frac{1}{5}\sin(5\varphi)\right) \end{bmatrix} + \epsilon, \tag{44}$$
where $\varphi$ is taken uniformly from $[0, 2\pi]$ and $\epsilon \sim \mathcal{N}(0, 0.05 I_3)$ is the additive noise. The second manifold (colored in blue) is generated by sampling 100 data points from a Gaussian distribution $\mathcal{N}([0, 0, 1]^\top, 0.05 I_3)$. To further test more complicated cases, we generate the second manifold by sampling 50 data points from a Gaussian distribution $\mathcal{N}([0, 0, 1]^\top, 0.05 I_3)$ and 50 data points from another Gaussian distribution $\mathcal{N}([0, 0, -1]^\top, 0.05 I_3)$, as shown in Figure 5a (bottom row).

In PRO-DSC, the learnable mappings $h(\cdot;\Psi)$ and $f(\cdot;\Theta)$ are implemented with two MLPs with Rectified Linear Units (ReLU) (Nair & Hinton, 2010) as the activation function. The hidden dimension and output dimension of the MLPs are set to 100 and 3, respectively. We train PRO-DSC with batch size $n_b = 200$ and learning rate $\eta = 5\times10^{-3}$ for 1000 epochs. We set $\gamma = 0.5$, $\beta = 1000$, and $\alpha = \frac{3}{0.1 \cdot 200}$.

We use DSCNet (Ji et al., 2017) as the representative of the SEDSC methods. In Figure 5b, we set $\gamma = 1$ for both cases, whereas in Figure 5c, $\gamma$ is set to 5 and 100 for the two cases, respectively. The encoder and decoder of DSCNet are MLPs with two hidden layers, with the hidden and output dimensions set to 100 and 3, respectively. We train DSCNet with batch size $n_b = 200$ and learning rate $\eta = 1\times10^{-4}$ for 1000 epochs.

B.1.2 REAL-WORLD DATASETS

Datasets description. CIFAR-10 and CIFAR-100 are classic image datasets consisting of 50,000 images for training and 10,000 images for testing, split into 10 and 100 classes, respectively. CIFAR-20 shares the same images as CIFAR-100 while taking its 20 super-classes as labels. ImageNet-Dogs consists of 19,500 images of 15 different dog species.
Tiny-ImageNet consists of 100,000 images from 200 different classes. ImageNet-1k is the superset of the two datasets, containing more than 1,280,000 real-world images from 1000 classes. For all the datasets except ImageNet-Dogs, we train the network implementing PRO-DSC on the train set and evaluate it on the test set to validate the generalization of the learned model. For the ImageNet-Dogs dataset, which does not have a test set, we train the network on the train set and report the clustering performance on the training set. For a direct comparison, we summarize the basic information of these datasets in Table B.1. To leverage the CLIP features for training, the input images are first resized to 224 with respect to the smaller edge, then center-cropped to 224×224 and fed into the CLIP pre-trained image encoder to obtain fixed features.[11] The subsequent training of PRO-DSC takes the extracted features as input, instead of loading the entire CLIP pre-trained model.

Network architecture and hyper-parameters. The learnable mappings $h(\cdot;\Psi)$ and $f(\cdot;\Theta)$ are two fully-connected layers with the same output dimension $d$. Following (Chu et al., 2024), for the experiments on real-world data, we stack a pre-feature layer before the learnable mappings, which is composed of two fully-connected layers with ReLU and batch normalization (Ioffe & Szegedy, 2015). We train the network with the SGD optimizer with the learning rate set to $\eta = 10^{-4}$, and the weight decay parameters of $f(\cdot;\Theta)$ and $h(\cdot;\Psi)$ set to $10^{-4}$ and $5\times10^{-3}$, respectively.

[11] We use the ViT L/14 pre-trained model provided by https://github.com/openai/CLIP for 768-dimensional features.

Table B.1: Basic statistical information of datasets. We summarize the information in terms of the train and test splits, as well as the number of classes involved.
Datasets         # Train     # Test    # Classes
CIFAR-10          50,000     10,000           10
CIFAR-20          50,000     10,000           20
CIFAR-100         50,000     10,000          100
ImageNet-Dogs     19,500        N/A           15
TinyImageNet     100,000     10,000          200
ImageNet       1,281,167     50,000         1000

Following (Chu et al., 2024), we warm up the training of $f(\cdot;\Theta)$ by diversifying the features with $\mathcal{L}_1 = -\log\det(I + \alpha Z_\Theta^\top Z_\Theta)$ for a few iterations and share the weights with $h(\cdot;\Psi)$. We set $\alpha = \frac{d}{0.1 \cdot n_b}$ for all the experiments. We summarize the hyper-parameters for training the network to implement PRO-DSC in Table B.2.

Table B.2: Hyper-parameters configuration for training the network to implement PRO-DSC with CLIP pre-trained features. Here η is the learning rate, d_pre is the hidden and output dimension of the pre-feature layer, d is the output dimension of h and f, n_b is the batch size for training, and "# warm-up" is the number of iterations of the warm-up stage.

                  η        d_pre    d     # epochs   n_b    # warm-up   γ        β
CIFAR-10          1×10⁻⁴   4096     128   10         1024   200         300/n_b  600
CIFAR-20          1×10⁻⁴   4096     256   50         1500   0           600/n_b  300
CIFAR-100         1×10⁻⁴   4096     128   100        1500   200         150/n_b  500
ImageNet-Dogs     1×10⁻⁴   4096     128   200        1024   0           300/n_b  400
TinyImageNet      1×10⁻⁴   4096     256   100        1500   0           200/n_b  400
ImageNet          1×10⁻⁴   4096     1024  100        2048   2000        800/n_b  400
MNIST             1×10⁻⁴   4096     128   100        1024   200         700/n_b  400
F-MNIST           1×10⁻⁴   1024     128   200        1024   400         50/n_b   100
Flowers           1×10⁻⁴   1024     256   200        1024   200         400/n_b  200

Running other algorithms. Since k-means (MacQueen, 1967), spectral clustering (Shi & Malik, 2000), EnSC (You et al., 2016a), SSCOMP (You et al., 2016b), and DSCNet (Ji et al., 2017) are based on transductive learning, we evaluate these models directly on the test set for all the experiments.

• For EnSC, we tune the hyper-parameter γ ∈ {1, 2, 5, 10, 20, 50, 100, 200, 400, 800, 1600, 3200} and the hyper-parameter τ in τ‖·‖₁ + ((1−τ)/2)‖·‖₂², which balances the ℓ₁ and ℓ₂ norms, over {0.9, 0.95, 1}, and report the best clustering result.
• For SSCOMP, we tune the hyper-parameter controlling the sparsity, k_max ∈ {1, 2, 5, 10, 20, 50, 100, 200}, and the residual ϵ ∈ {10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷}, and report the best clustering result.

• To apply DSCNet to the CLIP features, we use MLPs with two hidden layers to replace the convolutional encoder and decoder. The hidden dimension of the MLPs is set to 128. We tune the balancing hyper-parameters γ ∈ {1, 2, 3, 4} and β ∈ {1, 5, 25, 50, 75, 100}, and train the model for 100 epochs with learning rate η = 1×10⁻⁴ and batch size n_b equal to the number of samples in the test set.

• As the performance of CPP is evaluated by averaging the ACC and NMI metrics tested on each batch, we reproduce the results with their open-source implementation and report the results on the entire test set. The authors provide two implementations (see https://github.com/LeslieTrue/CPP/blob/main/main.py and https://github.com/LeslieTrue/CPP/blob/main/main_efficient.py), where one optimizes the cluster head and the feature head separately and the other shares weights between the two heads. In this paper, we test both cases and report the better results.

• For k-means and spectral clustering (including when spectral clustering is used as the final step in subspace clustering), we repeat the clustering 10 times with different random initializations (by setting n_init = 10 in scikit-learn) and report the best results.

• For SENet, SCAN and EDESC, we adjust the hyper-parameters and repeat each experiment three times, with only the best results reported.

B.2 EMPIRICAL VALIDATION OF THEORETICAL RESULTS

Empirical Validation of Theorem 1. To validate Theorem 1 empirically, we conduct experiments on CIFAR-100 with the same training configurations as described in Section B.1.2, but change the training period to 1000 epochs.
For each epoch, we compute $G_b = Z_b^\top Z_b$ and $M_b = (I - C_b)(I - C_b)^\top$ from the learned representations $Z_b$ and the self-expressive matrix $C_b$ of a mini-batch of size $n_b$ after the last iteration of the epoch. Then, to quantify the eigenspace alignment of $G_b$ and $M_b$, we plot the alignment error, computed via the Frobenius norm of the commutator $L := \|G_b M_b - M_b G_b\|_F$ on mini-batches of size $n_b$, over the training period in Figure 1a. We also show the standard deviation as a shaded region after repeating the experiments with 5 random seeds. As can be seen, the alignment error decreases monotonically during training, implying that the eigenspaces are progressively aligned. Moreover, we find the eigenvectors $\{u_j\}$ of $M_b$ by eigenvalue decomposition, where $u_j$ denotes the $j$-th eigenvector, with the eigenvalues sorted in ascending order, and calculate the normalized correlation coefficient, defined as $\langle u_j, G_b u_j / \|G_b u_j\|_2 \rangle$. Note that when the eigenspace alignment holds, one can verify that
$$\left\langle u_j, \frac{G_b u_j}{\|G_b u_j\|_2} \right\rangle = \begin{cases} 1, & \lambda^{(j)}_{G_b} \neq 0 \\ 0, & \lambda^{(j)}_{G_b} = 0 \end{cases} \quad \text{for all } j = 1, 2, \ldots, n_b. \tag{45}$$
As shown in Figure 1b, the normalized correlation curves associated with the first $d = 128$ eigenvectors converge to 1, whereas the rest converge to 0, implying progressive alignment between $G_b$ and $M_b$.

Empirical Validation of Theorem 2. To verify Theorem 2, we conduct experiments on CIFAR-10 and CIFAR-100. The experimental setup is the same as described in Section B.1.2. In each epoch, we compute $G_b = Z_b^\top Z_b$ and $M_b = (I - C_b)(I - C_b)^\top$ from $Z_b$ and $C_b$ of a mini-batch after the last iteration, and then find the eigenvalues of $G_b$ and $M_b$. We display the eigenvalue curves in Figures 1c and 1d, respectively. To enhance the clarity of the visualization, the eigenvalues of $G_b$ and $M_b$ are sorted in descending and ascending order, respectively. As can be observed, there are $\min\{d, n_b\} = 128$ non-zero eigenvalues of $G_b$, approximately inversely proportional to the smallest 128 eigenvalues of $M_b$.
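Both diagnostics — the commutator norm and the normalized correlation of Eq. (45) — take only a few lines to compute. The sketch below builds toy matrices $G_b$ and $M_b$ that share an eigenbasis by construction and checks that the alignment error vanishes and Eq. (45) holds (numpy, assumed toy dimensions; this is not the training code):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 32, 8                                       # toy batch size and feature dim
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # shared orthonormal eigenbasis

lam_G = np.concatenate([rng.uniform(0.5, 2.0, d), np.zeros(n - d)])  # rank-d Gram spectrum
lam_M = rng.uniform(0.1, 1.0, n)
G = Q @ np.diag(lam_G) @ Q.T
M = Q @ np.diag(lam_M) @ Q.T

# Alignment error: Frobenius norm of the commutator G M - M G (zero when aligned)
align_err = np.linalg.norm(G @ M - M @ G, "fro")
assert align_err < 1e-10

# Eq. (45): the normalized correlation is 1 on the range of G, 0 on its null space
for j in range(n):
    u = Q[:, j]
    Gu = G @ u
    corr = 0.0 if np.linalg.norm(Gu) < 1e-12 else u @ (Gu / np.linalg.norm(Gu))
    assert abs(corr - (1.0 if lam_G[j] > 0 else 0.0)) < 1e-8
```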
These results empirically demonstrate that $\mathrm{rank}(Z_\star) = \min\{d, N\}$ and $\lambda^{(i)}_{G_\star} = \frac{1}{\gamma\lambda^{(i)}_M + \nu_\star} - \frac{1}{\alpha}$ for the minimizers.

Furthermore, to verify the sufficient condition for PRO-DSC to prevent feature space collapse, we conduct experiments on CIFAR-10 and CIFAR-100 with varying $\alpha$ and $\gamma$, keeping all the other hyper-parameters consistent with Table B.2. As can be seen in Figure 2, Theorem 2 is verified, since $\gamma < \frac{1}{\lambda_{\max}(M_b)}\frac{\alpha^2}{\alpha + \min\{d/N,\,1\}}$ yields satisfactory clustering accuracy (ACC%) and subspace-preserving representation error (SRE%). The satisfactory ACC and SRE confirm that PRO-DSC avoids catastrophic collapse when $\gamma < \frac{1}{\lambda_{\max}(M)}\frac{\alpha^2}{\alpha + \min\{d/N,\,1\}}$ holds. When $\gamma \geq \frac{1}{\lambda_{\max}(M_b)}\frac{\alpha^2}{\alpha + \min\{d/N,\,1\}}$, PRO-DSC yields significantly worse ACC and SRE. There is a phase transition phenomenon corresponding to the sufficient condition to prevent collapse.[12]

Empirical Validation of Theorem 3. To intuitively visualize the structured representations learned by PRO-DSC, we visualize the Gram matrices $|Z^\top Z|$ and Principal Component Analysis (PCA) results for both the CLIP features and the learned representations on CIFAR-10. The experimental setup is the same as described in Section B.1.2. The Gram matrix shows the similarities between representations within the same class (indicated by the in-block diagonal values) and across different classes (indicated by the off-block diagonal values).

[12] In experiments, we estimate that $\lambda_{\max}(M_b) = 1$, and thus the condition reduces to $\gamma < \frac{\alpha^2}{\alpha + \min\{d/N,\,1\}}$.

Moreover, we display the dimensionality reduction results via PCA for the CLIP features and the learned representations of samples from three categories in CIFAR-10. We use PCA for dimensionality reduction as it performs a linear projection, which well preserves the underlying structure. As can be observed in Figure 3, the CLIP features from the three classes approximately lie on different subspaces.
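The block-diagonal structure that the Gram matrix $|Z^\top Z|$ exhibits when representations lie in mutually orthogonal subspaces — the structure Theorem 3 promotes — can be illustrated on synthetic data (a numpy sketch with assumed toy sizes, not the CIFAR-10 features):

```python
import numpy as np

rng = np.random.default_rng(2)
k, per, dim = 3, 10, 12                  # toy: 3 classes, 10 points each, ambient dim 12

# Draw representations from mutually orthogonal 4-dimensional subspaces
basis, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
Z = np.hstack([basis[:, 4*j:4*j+4] @ rng.standard_normal((4, per)) for j in range(k)])
Z /= np.linalg.norm(Z, axis=0)           # unit-norm columns

gram = np.abs(Z.T @ Z)                   # |Z^T Z|, the matrix visualized in Figure 3
labels = np.repeat(np.arange(k), per)
same = labels[:, None] == labels[None, :]
within = gram[same].mean()               # in-block diagonal similarities
between = gram[~same].mean()             # off-block diagonal similarities
assert between < 1e-8 < within           # clear block-diagonal structure
```

For orthogonal subspaces the off-block entries are exactly zero, which is the clean block-diagonal pattern the learned representations approach after training.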
Despite the structured nature of the features, the underlying subspaces are not orthogonal. In the Gram matrix of the CLIP features, the average similarity between features from different classes is greater than 0.6, resulting in an unclear block-diagonal structure. After training with PRO-DSC, the subspaces spanned by the learned representations become orthogonal.[13] Additionally, the off-block diagonal values of the Gram matrix decrease significantly, revealing a clear block-diagonal structure. These visualization results qualitatively verify that PRO-DSC aligns the representations with a union of orthogonal subspaces.[14]

B.3 MORE EXPERIMENTAL RESULTS

B.3.1 MORE RESULTS OF PRO-DSC ON SYNTHETIC DATA

To explore the learning ability of PRO-DSC, we run experiments on synthetic data with an additional subspace, as presented in Figure B.1.

In case 1, we sample 100 points from the Gaussian distribution $x \sim \mathcal{N}([\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}]^\top, 0.05 I_3)$ and 100 points from $x \sim \mathcal{N}([-\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}]^\top, 0.05 I_3)$, respectively. We train PRO-DSC with batch size $n_b = 300$ and learning rate $\eta = 5\times10^{-3}$ for 5000 epochs, and set $\gamma = 1.3$, $\beta = 500$, $\alpha = \frac{3}{0.1 \cdot 300}$. We observe that PRO-DSC successfully eliminates the nonlinearity in the representations and maximally separates the different subspaces.

In case 2, we add a vertical curve
$$x = \begin{bmatrix} \cos\left(\frac{1}{5}\sin(5\varphi)\right)\cos\varphi \\ \sin\left(\frac{1}{5}\cos(5\varphi)\right) \\ \cos\left(\frac{1}{5}\sin(5\varphi)\right)\sin\varphi \end{bmatrix} + \epsilon, \tag{46}$$
from which 100 points are sampled, where $\epsilon \sim \mathcal{N}(0, 0.05 I_3)$. We use $\sin(\frac{1}{5}\cos(5\varphi))$ to avoid overlap at the intersection of the two curves. We train PRO-DSC with batch size $n_b = 200$ and learning rate $\eta = 5\times10^{-3}$ for 8000 epochs, and set $\gamma = 0.5$, $\beta = 500$, $\alpha = \frac{3}{0.1 \cdot 200}$. We observe that PRO-DSC has difficulty learning representations for data located at the intersections of the subspaces. However, data points away from the intersections are linearized well.
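Sampling the curve manifolds of Eqs. (44) and (46) is straightforward; a numpy sketch for Eq. (44) follows. Since $\mathcal{N}(0, 0.05 I_3)$ may denote either the variance or the standard deviation of the noise, the scale used below is an assumption:

```python
import numpy as np

def curve_points(n=100, noise=0.05, seed=0):
    """Sample the 1-D curved manifold of Eq. (44):
    x = [cos(sin(5p)/5)*cos(p), cos(sin(5p)/5)*sin(p), sin(sin(5p)/5)] + eps,
    with p uniform on [0, 2*pi]. `noise` is the per-coordinate std (assumed)."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    t = np.sin(5.0 * phi) / 5.0
    x = np.stack([np.cos(t) * np.cos(phi), np.cos(t) * np.sin(phi), np.sin(t)], axis=1)
    return x + noise * rng.standard_normal((n, 3))

X = curve_points()
assert X.shape == (100, 3)
# The noise-free curve lies on the unit sphere:
# cos(t)^2 cos(p)^2 + cos(t)^2 sin(p)^2 + sin(t)^2 = 1
r = np.linalg.norm(curve_points(noise=0.0), axis=1)
assert np.allclose(r, 1.0)
```

The same pattern with the middle component replaced by $\sin(\frac{1}{5}\cos(5\varphi))$ gives the vertical curve of Eq. (46).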
Figure B.1: Additional results on synthetic data. (a) Case 1: input data. (b) Case 1: learned Z. (c) Case 2: input data. (d) Case 2: learned Z.

[13] The dimension of each subspace is much greater than one (see Figure B.4). The 1-dimensional subspaces observed in the PCA results are a consequence of dimensionality reduction.

[14] Please refer to Figures B.3 and B.7 for the results on other datasets and the visualization of the bases of each subspace.

B.3.2 EXPERIMENTS WITH BYOL PRE-TRAINING

To validate the effectiveness of PRO-DSC without using CLIP features, we conduct a fair comparison with existing deep clustering approaches. Similar to most deep clustering algorithms, we divide the training process into two steps. We begin by pre-training the parameters of the backbone with BYOL (Grill et al., 2020). Then, we leverage the parameters pre-trained in the first stage and fine-tune the model with the proposed PRO-DSC loss function. Specifically, we set the learning rate $\eta = 0.05$ and the batch size $n_b = 256$. The output feature dimension $d$ is consistent with the setting for training with the CLIP features. Following (Li et al., 2021; Huang et al., 2023), we use ResNet-18 as the backbone for the experiments on CIFAR-10 and CIFAR-20, use ResNet-34 as the backbone for the experiments on the other datasets, and replace the first convolution filter with one of size 3×3 and stride 1. We apply the commonly used data augmentations to the input images, which are listed as follows:

    transforms.RandomResizedCrop(size=img_size, scale=(0.08, 1)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=1.0)
When re-implementing other baselines, we use the code provided by the respective authors and report the best performance after fine-tuning the hyper-parameters. We report the clustering results based on BYOL pre-training in Table B.3. As can be read from Table B.3, our PRO-DSC outperforms all the deep clustering baselines, including CC (Li et al., 2021), GCC (Zhong et al., 2021), NNM (Dang et al., 2021), SCAN (Van Gansbeke et al., 2020), NMCE (Li et al., 2022), IMC-SwAV (Ntelemis et al., 2022), and MLC (Ding et al., 2023).

Table B.3: Clustering performance comparison on BYOL pre-training. The best results are in bold font and the second best results are underlined. Performance marked with "*" is based on our re-implementation.

Method       | CIFAR-10    | CIFAR-20    | CIFAR-100   | TinyImgNet-200 | ImgNetDogs-15
             | ACC    NMI  | ACC    NMI  | ACC    NMI  | ACC    NMI     | ACC    NMI
k-means      | 22.9   8.7  | 13.0   8.4  | 9.2    23.0 | 2.5    6.5     | 10.5   5.5
SC           | 24.7   10.3 | 13.6   9.0  | 7.0    17.0 | 2.2    6.3     | 11.1   3.8
CC           | 79.0   70.5 | 42.9   43.1 | 26.9*  48.1*| 14.0   34.0    | 42.9   44.5
GCC          | 85.6   76.4 | 47.2   47.2 | 28.2*  49.9*| 13.8   34.7    | 52.6   49.0
NNM          | 84.3   74.8 | 47.7   48.4 | 41.2   55.1 | -      -       | 31.1*  34.3*
SCAN         | 88.3   79.7 | 50.7   48.6 | 34.3   55.7 | -      -       | 29.6*  30.3*
NMCE         | 89.1   81.2 | 53.1   52.4 | 40.0*  53.9*| 21.6*  40.0*   | 39.8   39.3
IMC-SwAV     | 89.7   81.8 | 51.9   52.7 | 45.1   67.5 | 28.2   52.6    | -      -
MLC          | 92.2   85.5 | 58.3   59.6 | 49.4   68.3 | 28.7*  52.2*   | 71.0*  68.3*
Our PRO-DSC  | 93.0±0.6 86.5±0.2 | 58.3±0.9 60.1±0.6 | 56.3±0.6 66.7±1.0 | 31.1±0.3 46.0±1.0 | 74.1±0.5 69.5±0.6

B.3.3 MORE EXPERIMENTS ON CLIP, DINO AND MAE PRE-TRAINED FEATURES

Clustering on learned representations. To quantitatively validate the effectiveness of the structured representations learned by PRO-DSC, we illustrate the clustering accuracy of representations learned by various algorithms in Figure 6. Here, to compare with the representations learned by SEDSC methods, we additionally conduct experiments on DSCNet (Ji et al., 2017) and report the performance in Table B.4.
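Throughout these tables, ACC denotes best-match clustering accuracy and NMI denotes normalized mutual information. A minimal numpy/scipy sketch of both metrics follows; this is our illustration, not the paper's evaluation code, and the NMI normalization (arithmetic mean of the entropies) is one of several common conventions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy: permute predicted cluster ids via the Hungarian algorithm."""
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    row, col = linear_sum_assignment(-cost)  # negate to maximize matched pairs
    return cost[row, col].sum() / len(y_true)

def nmi(y_true, y_pred):
    """NMI with arithmetic-mean normalization of the label entropies."""
    n = len(y_true)
    joint = np.zeros((y_true.max() + 1, y_pred.max() + 1))
    for t, p in zip(y_true, y_pred):
        joint[t, p] += 1
    joint /= n
    pt, pp = joint.sum(1), joint.sum(0)
    mask = joint > 0
    mi = (joint[mask] * np.log(joint[mask] / np.outer(pt, pp)[mask])).sum()
    ht = -(pt[pt > 0] * np.log(pt[pt > 0])).sum()
    hp = -(pp[pp > 0] * np.log(pp[pp > 0])).sum()
    return mi / ((ht + hp) / 2)

y = np.array([0, 0, 1, 1, 2, 2])
yhat = np.array([1, 1, 0, 0, 2, 2])  # same partition, permuted labels
print(clustering_accuracy(y, yhat), nmi(y, yhat))  # both 1.0 up to float error
```

Because ACC maximizes over label permutations, a clustering that recovers the true partition under renamed labels still scores 1.0.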
To apply DSCNet to CLIP features, we use MLPs with two hidden layers to replace the stacked convolutional encoder and decoder. As described in Sec. B.1, we report the best clustering results after tuning the hyper-parameters. As analyzed in (Haeffele et al., 2021) and Section 2.1, DSCNet overly compresses the representations and yields unsatisfactory clustering results.

Out-of-domain datasets. We evaluate the capability to refine features by training PRO-DSC with pre-trained CLIP features on out-of-domain datasets, namely MNIST (Deng, 2012), Fashion MNIST (Xiao et al., 2017), and Oxford Flowers (Nilsback & Zisserman, 2008). As shown in Table B.5, CPP (Chu et al., 2024) refines the CLIP features and yields better clustering performance compared with spectral clustering (Shi & Malik, 2000) and EnSC (You et al., 2016a). Our PRO-DSC further demonstrates the best performance on all benchmarks, validating its effectiveness in refining input features.

Table B.4: Clustering accuracy of CLIP features and learned representations. We apply k-means, spectral clustering, and EnSC to cluster the representations.

         | CIFAR-10            | CIFAR-100           | CIFAR-20            | TinyImgNet-200
         | k-means  SC   EnSC  | k-means  SC   EnSC  | k-means  SC   EnSC  | k-means  SC   EnSC
CLIP     | 74.7    70.2  95.4  | 52.8    66.4  67.0  | 46.9    49.2  60.8  | 54.1    62.8  64.5
SEDSC    | 16.4    18.9  16.9  | 5.4     4.9   5.3   | 11.7    10.6  12.8  | 5.7     3.9   7.2
CPP      | 71.3    70.3  95.6  | 75.3    75.0  77.5  | 55.5    43.6  58.3  | 62.1    58.0  67.0
PRO-DSC  | 93.4    92.1  95.5  | 76.5    75.2  77.6  | 66.0    59.7  60.0  | 67.6    67.0  69.5

Table B.5: Experiments on out-of-domain datasets.
Methods                                  | MNIST       | F-MNIST     | Flowers
                                         | ACC    NMI  | ACC    NMI  | ACC    NMI
Spectral Clustering (Shi & Malik, 2000)  | 74.5   67.0 | 64.3   56.8 | 85.6   94.6
EnSC (You et al., 2016a)                 | 91.0   85.3 | 69.1   65.1 | 90.0   95.9
CPP (Chu et al., 2024)                   | 95.7   90.4 | 70.9   68.8 | 91.3   96.4
PRO-DSC                                  | 96.1   90.9 | 71.3   70.3 | 92.0   97.4

Experiments on the block diagonal regularizer with different k. To test the robustness of the block diagonal regularizer ∥A∥κ to different k, we vary k and report the clustering performance in Table B.6. As illustrated, k does not necessarily need to equal the number of clusters; there exists an interval within which the regularizer works effectively.

Table B.6: Clustering performance with different k in the block diagonal regularizer.

CIFAR-10   k   | 2     5     10    15    20    25    30
           ACC | 97.2  97.2  97.4  96.3  96.3  95.4  94.0
           NMI | 93.2  93.2  93.5  92.0  92.0  90.7  88.6
CIFAR-100  k   | 25    50    75    100   125   150   200
           ACC | 74.3  76.7  78.1  78.2  78.9  76.4  74.8
           NMI | 80.9  82.3  83.2  82.9  83.2  82.2  81.5

However, if k is significantly smaller than the number of clusters, the effect of the block diagonal regularizer will be subtle, and the performance of PRO-DSC will be similar to that of PRO-DSC without a regularizer (see ablation studies in Section 3). On the contrary, if k is significantly larger than the number of clusters, the affinity matrix will be over-segmented, which negatively impacts the subsequent clustering performance.

Clustering on ImageNet-1k with DINO and MAE. To test the performance of PRO-DSC based on pre-trained features other than CLIP (Radford et al., 2021), we further conduct experiments on ImageNet-1k (Deng et al., 2009) with features pre-trained by DINO (Caron et al., 2021) and MAE (He et al., 2022) (see Table B.7). DINO and MAE are pre-trained on ImageNet-1k without leveraging external training data, thus their performance with PRO-DSC is lower than CLIP.
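The behavior of the block diagonal regularizer discussed above can be sketched in a few lines of numpy. We assume here that ∥A∥κ denotes the standard block diagonal regularizer, i.e., the sum of the k smallest eigenvalues of the graph Laplacian of the (symmetrized) affinity matrix, which vanishes exactly when the affinity graph has at least k connected components; the toy affinity below (with self-connections) is our own example.

```python
import numpy as np

def block_diag_reg(A, k):
    """Sum of the k smallest Laplacian eigenvalues of |A|.
    Zero iff the affinity graph has at least k connected components."""
    W = (np.abs(A) + np.abs(A).T) / 2          # symmetrize the affinity
    L = np.diag(W.sum(1)) - W                  # unnormalized graph Laplacian
    eig = np.linalg.eigvalsh(L)                # eigenvalues in ascending order
    return eig[:k].sum()

# Perfectly block diagonal affinity over two clusters of three points each.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
print(block_diag_reg(A, 2))   # ~0: the graph has two connected components
print(block_diag_reg(A, 3))   # > 0: only two blocks exist, third eigenvalue is nonzero
```

This matches the trend in Table B.6: choosing k below the true number of clusters makes the penalty vanish too easily (subtle effect), while choosing k far above it penalizes connectivity inside genuine clusters and over-segments the affinity matrix.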
Similar to the observations in CPP (Chu et al., 2024), DINO initializes PRO-DSC well, yet MAE fails; this is attributed to the fact that features from MAE favor fine-tuning with labels and are less suitable for learning inter-cluster discriminative representations (Oquab et al., 2024). We further extract features from the validation set of ImageNet-1k and visualize them through t-SNE (Van der Maaten & Hinton, 2008) to validate this hypothesis (see Figure B.2).

Table B.7: Clustering performance of PRO-DSC based on DINO, MAE, and CLIP pre-trained features on ImageNet-1k.

Method                       | Backbone | PRO-DSC      | k-means
                             |          | ACC    NMI   | ACC    NMI
MAE (He et al., 2022)        | ViT-L/16 | 9.0    49.1  | 9.4    49.3
DINO (Caron et al., 2021)    | ViT-B/16 | 57.3   79.3  | 52.2   79.2
DINO (Caron et al., 2021)    | ViT-B/8  | 59.7   80.8  | 54.6   80.5
CLIP (Radford et al., 2021)  | ViT-L/14 | 65.1   83.6  | 52.5   79.7

(a) CLIP (b) MAE
Figure B.2: The t-SNE visualization of CLIP and MAE features on the validation set of ImageNet-1k.

B.3.4 EXPERIMENTS WITHOUT USING PRE-TRAINED MODELS

Experiments on Reuters and UCI HAR. During the rebuttal, we conducted extra experiments on the Reuters and UCI HAR datasets. Reuters-10k consists of four text classes, containing 10,000 samples of dimension 2,000. UCI HAR is a time-series dataset consisting of six classes and 10,299 samples of dimension 561. We take EDESC (Cai et al., 2022) as the baseline method for deep subspace clustering on Reuters-10k, and take N2D (McConville et al., 2021) and FCMI (Zeng et al., 2023) as the baseline methods for UCI HAR; their results are directly cited from the respective papers. We conducted experiments with PRO-DSC on Reuters and UCI HAR following the same data-processing protocol as the baseline methods. We train and test PRO-DSC on the entire dataset and report the results over 10 trials.
Experimental results are provided in Table B.8, and the hyper-parameters used for PRO-DSC are listed in Table B.9.

Table B.8: Experimental results on the Reuters and UCI HAR datasets over 10 trials. The results of other methods are cited from the respective papers.

Dataset                          | REUTERS-10k       | UCI HAR
                                 | ACC      NMI      | ACC      NMI
k-means (MacQueen, 1967)         | 52.4     31.2     | 59.9     58.8
SC (Shi & Malik, 2000)           | 40.2     37.5     | 53.8     74.1
AE (Bengio et al., 2006)         | 59.7     32.3     | 66.3     60.7
VAE (Kingma & Welling, 2014)     | 62.5     32.9     | -        -
JULE (Yang et al., 2016)         | 62.6     40.5     | -        -
DEC (Xie et al., 2016)           | 75.6     68.3     | 57.1     65.5
DSEC (Chang et al., 2018)        | 78.3     70.8     | -        -
EDESC (Cai et al., 2022)         | 82.5     61.1     | -        -
DFDC (Zhang & Davidson, 2021)    | -        -        | 86.2     84.5
N2D (McConville et al., 2021)    | -        -        | 82.8     71.7
FCMI (Zeng et al., 2023)         | -        -        | 88.2     80.7
PRO-DSC                          | 85.7±1.3 64.6±1.3 | 87.1±0.4 80.9±1.2

Table B.9: Configuration of hyper-parameters for the experiments on Reuters, UCI HAR, EYale-B, ORL, and COIL-100.

Dataset      | η     | d_pre  | d    | #epochs | n_b   | #warm-up | γ    | β
REUTERS-10k  | 10⁻⁴  | 1024   | 128  | 100     | 1024  | 50       | 50   | 200
UCI HAR      | 10⁻⁴  | 1024   | 128  | 100     | 2048  | 20       | 100  | 300
EYale-B      | 10⁻⁴  | 1080   | 256  | 10000   | 2432  | 100      | 200  | 50
ORL          | 10⁻⁴  | 80     | 64   | 5000    | 400   | 100      | 75   | 10
COIL-100     | 10⁻⁴  | 12800  | 100  | 10000   | 7200  | 100      | 200  | 100

Comparison to AGCSC and ARSSC on Extended Yale B, ORL, and COIL-100. During the rebuttal, we conducted more experiments against two state-of-the-art subspace clustering methods, AGCSC (Wei et al., 2023) and ARSSC (Wang et al., 2023a). Since both methods cannot handle the datasets used for evaluating our PRO-DSC, we conducted experiments on the Extended Yale B (EYaleB), ORL, and COIL-100 datasets. We set the architecture of the pre-feature layer in PRO-DSC to be the same as the encoder of DSCNet (Ji et al., 2017). The hyper-parameter configuration for training PRO-DSC is summarized in Table B.9. We repeated the experiments for 10 trials and report the average with standard deviation in Table B.10.
As baseline methods, we use EnSC (You et al., 2016a), SSCOMP (You et al., 2016b), S3COMP (Chen et al., 2020b), DSCNet, DSSC (Lim et al., 2020), and DELVE (Zhao et al., 2024). The results of these methods, except for S3COMP and DELVE, are directly cited from DSSC (Lim et al., 2020); the results of S3COMP and DELVE are cited from their own papers.

• Comparison to AGCSC. Our method surpasses AGCSC on the Extended Yale B dataset and achieves comparable results on the ORL dataset. However, AGCSC cannot produce a result on COIL-100 within 24 hours.

• Comparison to ARSSC. ARSSC employs three different non-convex regularizers: the ℓγ-norm penalty (LP), the Log-Sum Penalty (LSP), and the Minimax Concave Penalty (MCP). While ARSSC-MCP performs the best on Extended Yale B, our PRO-DSC outperforms ARSSC-MCP on ORL. While AGCSC performs the best on ORL, it yields inferior results on Extended Yale B and cannot produce results on COIL-100 within 24 hours; thus, we did not report the results of AGCSC on COIL-100 and marked them as Out of Time (OOT). Our PRO-DSC achieves the second-best results on Extended Yale B and ORL, and the best results on COIL-100. Since we have not found open-source code for ARSSC, we are unable to obtain its results on COIL-100. This comparison also confirms the scalability of our PRO-DSC, which is due to the re-parametrization (similar to SENet).

Table B.10: Experiments on Extended Yale B, ORL and COIL-100.
                                 | EYale-B           | ORL               | COIL-100
                                 | ACC      NMI      | ACC      NMI      | ACC      NMI
EnSC                             | 65.2     73.4     | 77.4     90.3     | 68.0     90.1
SSCOMP                           | 78.0     84.4     | 66.4     83.2     | 31.3     58.8
S3COMP-C (Chen et al., 2020b)    | 87.4     -        | -        -        | 78.9     -
DSCNet                           | 69.1     74.6     | 75.8     87.8     | 49.3     75.2
DELVE (Zhao et al., 2024)        | 89.8     90.1     | -        -        | 79.0     93.9
J-DSSC (Lim et al., 2020)        | 92.4     95.2     | 78.5     90.6     | 79.6     94.3
A-DSSC (Lim et al., 2020)        | 91.7     94.7     | 79.0     91.0     | 82.4     94.6
AGCSC (Wei et al., 2023)         | 92.3     94.0     | 86.3     92.8     | OOT      OOT
ARSSC-LP (Wang et al., 2023a)    | 95.7     -        | 75.5     -        | -        -
ARSSC-LSP (Wang et al., 2023a)   | 95.9     -        | 71.3     -        | -        -
ARSSC-MCP (Wang et al., 2023a)   | 99.3     -        | 72.0     -        | -        -
PRO-DSC                          | 96.0±0.3 95.7±0.8 | 83.2±2.2 92.7±0.6 | 82.8±0.9 95.0±0.6

B.4 MORE VISUALIZATION RESULTS

Gram matrices and PCA visualizations. To qualitatively validate that PRO-DSC learns representations aligning with a union-of-orthogonal-subspaces distribution, we visualize the Gram matrices and the PCA dimension-reduction results of the CLIP features and the representations learned by PRO-DSC for each dataset. As shown in Figure B.3, the off-block diagonal values decrease significantly, implying orthogonality between representations from different classes. The orthogonality between subspaces can also be observed from the PCA dimension-reduction results.

Singular values visualization. To show the intrinsic dimension of the CLIP features and the representations of PRO-DSC, we plot their singular values in Figure B.4. Specifically, the singular values of the features from all samples are illustrated on the left, and the singular values of the features within each class are illustrated in the middle and on the right. As can be seen, the singular values of PRO-DSC decrease much more slowly than those of CLIP, implying that the features of PRO-DSC enjoy a higher intrinsic dimension and a more isotropic structure in the ambient space.

Learning curves.
We plot the learning curves with respect to the loss values and the performance of PRO-DSC on CIFAR-100, CIFAR-20, and ImageNet-1k in Figures B.5a, B.5b, and B.5c, respectively. Recall that

L1 := −(1/2) log det(I + α Z_Θ⊤ Z_Θ),  L2 := (1/2) ∥Z_Θ − Z_Θ C_Ψ∥²_F,  and  L3 := ∥A_Ψ∥κ.

Since L1 is the only loss function used in the warm-up stage, we plot all the curves starting from the iteration at which warm-up ends. As illustrated, the clustering performance of PRO-DSC steadily increases as the loss values gradually decrease, which shows the effectiveness of the proposed loss functions in PRO-DSC.

t-SNE visualization of learned representations. We visualize the CLIP features and the cluster representations learned by PRO-DSC using t-SNE (Van der Maaten & Hinton, 2008) in Figure B.6. As illustrated, the learned cluster representations are significantly more compact than the CLIP features, which contributes to the improved clustering performance.

Subspace visualization. We visualize the principal components of the subspaces learned by PRO-DSC in Figure B.7. For each cluster in the dataset, we apply Principal Component Analysis (PCA) to the learned representations and select the top eight principal components to represent the learned subspaces. Then, for each principal component, we display the eight images whose representations are most closely aligned with it. Interestingly, we can observe specific semantic meanings in the principal components learned by PRO-DSC. For instance, the third row of Figure B.7a consists of stealth fighters, whereas the fifth row shows airliners. The second row of Figure B.7c consists of birds standing and resting, while the sixth row shows flying eagles. While Figure B.7j consists of all kinds of trucks, the first row shows fire trucks.

C LIMITATIONS AND FAILURE CASES

Limitations: In this paper, we explore an effective framework for deep subspace clustering with theoretical justification.
However, it is not clear how to develop a geometric guarantee for our PRO-DSC framework to yield a correct subspace-preserving solution. Moreover, since it is an unsupervised learning framework, we leave the extension to the semi-supervised setting as future work.

Failure Cases: In this paper, we evaluate our PRO-DSC framework on four scenarios of synthetic data (Figures 5 and B.1), six benchmark datasets with CLIP features (Table 1), five benchmark datasets with BYOL pre-trained features (Table B.3), and three out-of-domain datasets (Table B.5), using four different regularization terms (Table 3), different feature extractors (Table B.7), and varying hyper-parameters (Figure 7 and Table B.6). We also conduct experiments on two face image datasets (Table B.10) and on text and time-series datasets (Table B.8). However, as demonstrated in Figure 1, our PRO-DSC will fail if the sufficient condition to prevent catastrophic collapse is not satisfied due to improper hyper-parameters γ and α.

Extensibility: As a general framework for self-expressive-model-based deep subspace clustering, our PRO-DSC is reasonable, scalable, and flexible to miscellaneous extensions. For example, rather than using log det(·), there are other methods to solve the feature-collapse issue, e.g., the nuclear norm. In addition, it is also worthwhile to incorporate supervision information from pseudo-labels, e.g., (Huang et al., 2023; Jia et al., 2025; Li et al., 2017), to further improve the performance of our PRO-DSC.
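The nuclear-norm alternative mentioned above can be illustrated with a toy numpy comparison. This is our own sketch, not the paper's implementation: it only shows that both the log det expansion term (as in L1) and a negated nuclear norm assign a larger loss to collapsed, low-rank representations than to diverse, full-rank ones.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1

def logdet_reg(Z):
    # Expansion term in the style of L1: -1/2 log det(I + alpha * Z^T Z).
    d = Z.shape[1]
    sign, logdet = np.linalg.slogdet(np.eye(d) + alpha * Z.T @ Z)
    return -0.5 * logdet

def nuclear_reg(Z):
    # Nuclear-norm alternative: negated sum of singular values,
    # so minimizing it also encourages expanded (high-rank) features.
    return -np.linalg.svd(Z, compute_uv=False).sum()

Z_diverse = rng.normal(size=(100, 8))                              # full-rank features
Z_collapsed = np.outer(rng.normal(size=100), rng.normal(size=8))   # rank-1 collapse

print(logdet_reg(Z_collapsed) > logdet_reg(Z_diverse))
print(nuclear_reg(Z_collapsed) > nuclear_reg(Z_diverse))
```

Both comparisons hold because a rank-1 matrix concentrates its spectrum in a single singular value, so both log det(I + αZ⊤Z) and the nuclear norm are smaller than for an isotropic, full-rank Z.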
(a) CIFAR-20 (Aquatic, Fish, Furniture) (b) CIFAR-100 (Apple, Aquarium, Baby) (c) TinyImageNet-200 (Goldfish, Salamander, Bullfrog) (d) ImageNet-Dogs-15 (Maltese, Pekinese, Toy Terrier) (e) ImageNet-1k (Tench, Cliff Dwelling, Tissue)
Figure B.3: Visualization of the union-of-orthogonal-subspaces structure of the learned representations via Gram matrix and PCA dimension reduction on three categories. Left: |X⊤X|. Mid-left: |Z⊤Z|. Mid-right: X(3) via PCA. Right: Z(3) via PCA.
(a) CIFAR-100 (b) TinyImageNet-200 (c) ImageNet-1k
Figure B.4: Singular values of features from all samples (left) and features from each class (mid and right). For better clarity, we plot the singular values for the first ten classes.

(a) CIFAR-20 (b) CIFAR-100 (c) ImageNet-1k
Figure B.5: The learning curves w.r.t. loss values and evaluation performance of PRO-DSC on the CIFAR-20, CIFAR-100 and ImageNet-1k datasets.

(a) CLIP CIFAR-10 (b) PRO-DSC CIFAR-10 (c) CLIP CIFAR-100 (d) PRO-DSC CIFAR-100
Figure B.6: t-SNE visualization of CLIP features and PRO-DSC's learned representations. The experiments are conducted on the CIFAR-10 and CIFAR-100 datasets.

(a) Cluster 1 (b) Cluster 2 (c) Cluster 3 (d) Cluster 4 (e) Cluster 5 (f) Cluster 6 (g) Cluster 7 (h) Cluster 8 (i) Cluster 9 (j) Cluster 10
Figure B.7: Visualization of the principal components in the CIFAR-10 dataset. For each cluster, we display the most similar images to its principal components.
| 6 | 1 | The paper presents a deep subspace clustering framework whose experiments involve potentially large models, given the complexity of the tasks (high-dimensional images from datasets such as CIFAR and ImageNet). Since experiments span multiple datasets, training is likely compute-intensive but still manageable in a single-GPU setting. Considering the CLIP-based feature extraction and the usual burdens of deep learning training, roughly 6 hours on a single GPU appears plausible, particularly since the proposed regularization may stabilize training and reduce training time. | yes | Yes | CV | Exploring a Principled Framework for Deep Subspace Clustering | 2025-03-21T00:00:00.000Z | [https://github.com/mengxianghan123/PRO-DSC] | 1 | Dataset found at: [https://drive.google.com/drive/folders/1C4qlqYOW4-YulIwgkNfqMM7dZ2O5-BK_], [https://drive.google.com/drive/folders/1L9jH8zRF3To6Hb_B0UZ6PbknhgusWm5_] | 20 | https://colab.research.google.com/drive/1D4PwvmROZazdEKuhZj7QkfBKOqY9Jb0r?usp=sharing | YES! SUCCESSFULLY RUN | All things fine! Successfully run |
CIFAR-100 | PRO-DSC | [] | Exploring a Principled Framework for Deep Subspace Clustering | 2025-03-21T00:00:00 | https://arxiv.org/abs/2503.17288v1 | [
"https://github.com/mengxianghan123/PRO-DSC"
] | {'Accuracy': '0.773', 'NMI': '0.824'} | [
"Accuracy",
"NMI",
"ARI",
"Train Set",
"Backbone"
] | Given the following paper and codebase:
Paper: Exploring a Principled Framework for Deep Subspace Clustering
Codebase: https://github.com/mengxianghan123/PRO-DSC
Improve the PRO-DSC model on the CIFAR-100 dataset. The result
should improve on the following metrics: {'Accuracy': '0.773', 'NMI': '0.824'}. You must use only the codebase provided.
| Published as a conference paper at ICLR 2025. EXPLORING A PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING. Xianghan Meng†, Zhiyuan Huang† & Wei He, Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China, {mengxianghan,huangzhiyuan,wei.he}@bupt.edu.cn; Xianbiao Qi & Rong Xiao, Intellifusion, Shenzhen, P.R. China; Chun-Guang Li∗, Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China, lichunguang@bupt.edu.cn. ABSTRACT: Subspace clustering is a classical unsupervised learning task, built on the basic assumption that high-dimensional data can be approximated by a union of subspaces (UoS). Nevertheless, real-world data often deviate from the UoS assumption. To address this challenge, state-of-the-art deep subspace clustering algorithms attempt to jointly learn UoS representations and self-expressive coefficients. However, the general framework of the existing algorithms suffers from catastrophic feature collapse and lacks a theoretical guarantee to learn the desired UoS representation. In this paper, we present a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. Specifically, in PRO-DSC, we incorporate an effective regularization on the learned representations into the self-expressive model, prove that the regularized self-expressive model is able to prevent feature space collapse, and demonstrate that the learned optimal representations under certain conditions lie on a union of orthogonal subspaces. Moreover, we provide a scalable and efficient approach to implement our PRO-DSC and conduct extensive experiments to verify our theoretical findings and demonstrate the superior performance of our proposed deep subspace clustering approach. The code is available at: https://github.com/mengxianghan123/PRO-DSC.
1 INTRODUCTION. Subspace clustering is an unsupervised learning task that aims to partition high-dimensional data approximately lying on a union of subspaces (UoS), and it finds wide-ranging applications, such as motion segmentation (Costeira & Kanade, 1998; Vidal et al., 2008; Rao et al., 2010), hybrid system identification (Vidal, 2004; Bako & Vidal, 2008), image representation and clustering (Hong et al., 2006; Lu et al., 2012), gene expression clustering (McWilliams & Montana, 2014), and so on. Existing subspace clustering algorithms can be roughly divided into four categories: iterative methods (Tseng, 2000; Ho et al., 2003; Zhang et al., 2009), algebraic-geometry-based methods (Vidal et al., 2005; Tsakiris & Vidal, 2017), statistical methods (Fischler & Bolles, 1981), and spectral-clustering-based methods (Chen & Lerman, 2009; Elhamifar & Vidal, 2009; Liu et al., 2010; Lu et al., 2012; You et al., 2016a; Zhang et al., 2021). Among them, spectral-clustering-based methods are the most popular due to their broad theoretical guarantees and superior performance. (∗Corresponding author. †These two authors contributed equally.) The vital component in spectral-clustering-based methods is the so-called self-expressive model (Elhamifar & Vidal, 2009; 2013). Formally, given a dataset $X := \{x_1, \dots, x_N\}$ where $x_j \in \mathbb{R}^D$, the self-expressive model expresses each data point $x_j$ as a linear combination of the other points, i.e.,
$x_j = \sum_{i \neq j} c_{ij}\, x_i,$ (1)
where $c_{ij}$ is the corresponding self-expressive coefficient. The most intriguing merit of the self-expressive model is that its solution, under a proper regularizer on the coefficients $c_{ij}$, is guaranteed to satisfy a subspace-preserving property, namely, $c_{ij} \neq 0$ only if $x_i$ and $x_j$ are in the same subspace (Elhamifar & Vidal, 2013; Soltanolkotabi & Candes, 2012; Li et al., 2018).
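As a concrete illustration of Eq. (1) and the subspace-preserving property, the following NumPy sketch (a hypothetical toy example, not code from the paper's repository) solves a ridge-regularized self-expression for points drawn from two orthogonal 1-D subspaces and checks that the coefficient mass of each column stays within that point's own subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N points drawn from two orthogonal 1-D subspaces (lines) in R^3.
N_per = 5
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
X = np.hstack([np.outer(u1, rng.uniform(1, 2, N_per)),
               np.outer(u2, rng.uniform(1, 2, N_per))])  # shape (3, 10)
N = X.shape[1]

# Ridge-regularized self-expression: for each x_j, solve
#   min_c ||x_j - X_{-j} c||^2 + tau ||c||^2  (a simple stand-in for r(C))
tau = 1e-3
C = np.zeros((N, N))
for j in range(N):
    idx = [i for i in range(N) if i != j]
    A = X[:, idx]                                       # D x (N-1)
    c = np.linalg.solve(A.T @ A + tau * np.eye(N - 1), A.T @ X[:, j])
    C[idx, j] = c

# Subspace-preserving check: fraction of coefficient mass within-subspace.
labels = np.array([0] * N_per + [1] * N_per)
within = sum(np.abs(C[labels == labels[j], j]).sum() for j in range(N))
total = np.abs(C).sum()
print(round(within / total, 3))  # -> 1.0 (up to numerical precision)
```

Because the two subspaces are orthogonal here, the cross-subspace coefficients vanish and the affinity $|c_{ij}| + |c_{ji}|$ is perfectly subspace-preserving; for real data the guarantee instead rests on the regularizer, as the cited works show.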
Given the optimal self-expressive coefficients $\{c_{ij}\}_{i,j=1}^N$, a data affinity can be induced by $|c_{ij}| + |c_{ji}|$, to which spectral clustering is applied to yield the partition of the data. Despite the broad theoretical guarantees, the vanilla self-expressive model still faces great challenges when applied to complex real-world data that may not align well with the UoS assumption. Earlier works address this deficiency by learning a linear transform of the data (Patel et al., 2013; 2015) or by introducing a nonlinear kernel mapping (Patel & Vidal, 2014), under which the representations of the data are supposed to align with the UoS assumption. However, there is no principled mechanism to guide the learning of the linear transforms or the design of the nonlinear kernels that guarantees the representations of the data to form a UoS structure. To handle complex real-world data, in the past few years there has been a surge of interest in designing deep subspace clustering frameworks, e.g., (Ji et al., 2017; Peng et al., 2018; Zhou et al., 2018; Zhang et al., 2019a; Dang et al., 2020; Peng et al., 2020; Lv et al., 2021; Wang et al., 2023b; Zhao et al., 2024). In these works, a deep neural-network-based representation learning module is usually integrated into the self-expressive model, to learn the representations $Z \in \mathbb{R}^{d \times N}$ and the self-expressive coefficients $C = \{c_{ij}\}_{i,j=1}^N$ in a joint optimization framework. However, as analyzed in (Haeffele et al., 2021), the optimal representations $Z$ of these methods tend to catastrophically collapse into subspaces with dimensions much lower than the ambient space, which is detrimental to subspace clustering, and there is no evidence that the learned representations form a UoS structure. In this paper, we propose a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is able to simultaneously learn structured representations and self-expressive coefficients.
Specifically, in PRO-DSC, we incorporate an effective regularization on the learned representations into the self-expressive model and prove that PRO-DSC can effectively prevent feature collapse. Moreover, we demonstrate that PRO-DSC under certain conditions yields structured representations forming a UoS structure, and we provide a scalable and efficient approach to implement it. We conduct extensive experiments on synthetic data and six benchmark datasets to verify our theoretical findings and the superior performance of our proposed approach.
Contributions. The contributions of the paper are highlighted as follows.
1. We propose a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC) that learns both structured representations and self-expressive coefficients simultaneously, in which an effective regularization on the learned representations is incorporated to prevent feature space collapse.
2. We provide a rigorous analysis of the optimal solution of PRO-DSC, derive a sufficient condition that guarantees the learned representations escape from feature collapse, and further demonstrate that PRO-DSC under certain conditions yields structured representations with a UoS structure.
3. We conduct extensive experiments to verify our theoretical findings and to demonstrate the superior performance of the proposed approach.
To the best of our knowledge, this is the first principled framework for deep subspace clustering that is guaranteed to prevent the feature collapse problem and is shown to yield UoS representations.
2 DEEP SUBSPACE CLUSTERING: A PRINCIPLED FRAMEWORK, JUSTIFICATION, AND IMPLEMENTATION. In this section, we first review the popular framework for deep subspace clustering, called Self-Expressive Deep Subspace Clustering (SEDSC), then present our principled framework for deep subspace clustering and provide a rigorous characterization of the optimal solution and the property of the learned structured representations. Finally, we describe a scalable implementation based on differential programming for the proposed framework. Please refer to Appendix A for the detailed proofs of our theoretical results.
2.1 PREREQUISITE. To apply subspace clustering to complex real-world data that may not align well with the UoS assumption, there has been a surge of interest in exploiting deep neural networks to learn representations and then applying the self-expressive model to the learned representations, e.g., (Peng et al., 2018; Ji et al., 2017; Zhou et al., 2018; Zhang et al., 2019a; Dang et al., 2020; Peng et al., 2020; Lv et al., 2021; Wang et al., 2023b; Zhao et al., 2024). Formally, the optimization problem of these SEDSC models can be formulated as follows:¹
$\min_{Z,C} \ \frac{1}{2}\|Z - ZC\|_F^2 + \beta \cdot r(C) \quad \text{s.t.} \ \|Z\|_F^2 = N,$ (2)
where $Z \in \mathbb{R}^{d \times N}$ denotes the learned representation, $C \in \mathbb{R}^{N \times N}$ denotes the self-expressive coefficient matrix, and $\beta > 0$ is a hyper-parameter. The following lemma characterizes the property of the optimal solution $Z$ of problem (2).
Lemma 1 (Haeffele et al., 2021). The rows of the optimal solution $Z$ of problem (2) are the eigenvectors associated with the smallest eigenvalues of $(I - C)(I - C)^\top$.
In other words, the optimal representation $Z$ in SEDSC is restricted to an extremely "narrow" subspace whose dimension is much smaller than $d$, leading to an undesirable collapsed solution.²
2.2 OUR PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING. In this paper, we propose a principled framework for deep subspace clustering that provably learns structured representations with maximal intrinsic dimensions. To be specific, we optimize the self-expressive model (2) while preserving the intrinsic dimension of the representation space. Rather than using the rank, which is a common measure of dimension, inspired by (Fazel et al., 2003; Ma et al., 2007; Yu et al., 2020; Liu et al., 2022), we propose to prevent feature space collapse by incorporating a $\log\det(\cdot)$-based concave smooth surrogate, defined as follows:
$R(Z; \alpha) := \log\det(I + \alpha Z^\top Z),$ (3)
where $\alpha > 0$ is a hyper-parameter. Unlike the commonly used nuclear norm, which is a convex surrogate of the rank, the $\log\det(\cdot)$-based function is concave and differentiable, offers a tighter approximation, and encourages learning subspaces with maximal intrinsic dimensions.³
By incorporating the maximization of $R(Z; \alpha)$ as a regularizer into the SEDSC formulation (2), we obtain the Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC):
$\min_{Z,C} \ -\frac{1}{2}\log\det(I + \alpha Z^\top Z) + \frac{\gamma}{2}\|Z - ZC\|_F^2 + \beta \cdot r(C) \quad \text{s.t.} \ \|Z\|_F^2 = N,$ (4)
where $\gamma > 0$ is a hyper-parameter. We now give our theoretical findings for problem (4).
Theorem 1 (Eigenspace Alignment). Denote the optimal solution of PRO-DSC in (4) by $(Z_\star, C_\star)$, and let $G_\star := Z_\star^\top Z_\star$ and $M_\star := (I - C_\star)(I - C_\star)^\top$. Then $G_\star$ and $M_\star$ share eigenspaces, i.e., $G_\star$ and $M_\star$ can be diagonalized simultaneously by $U \in \mathcal{O}(N)$, where $\mathcal{O}(N)$ is the orthogonal group.
Note that Theorem 1 provides a perspective from eigenspace alignment for analyzing the property of the optimal solution.
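The anti-collapse effect of the regularizer in Eq. (3) can be checked numerically. The NumPy sketch below (an illustrative calculation, not code from the paper's repository) compares $R(Z;\alpha)$ for a fully collapsed representation versus a spread, full-rank one under the same norm budget $\|Z\|_F^2 = N$:

```python
import numpy as np

def R(Z, alpha=0.5):
    # R(Z; alpha) = log det(I + alpha Z^T Z), via eigenvalues of the Gram matrix
    evals = np.linalg.eigvalsh(Z.T @ Z)
    return float(np.sum(np.log1p(alpha * np.clip(evals, 0.0, None))))

rng = np.random.default_rng(0)
d, N = 8, 32

# Collapsed representation: every column on one direction, ||Z||_F^2 = N.
v = np.zeros(d); v[0] = 1.0
Z_collapsed = np.outer(v, np.ones(N))

# Spread representation: random unit-norm columns, ||Z||_F^2 = N.
Z_spread = rng.standard_normal((d, N))
Z_spread /= np.linalg.norm(Z_spread, axis=0)

print(round(R(Z_collapsed), 3), round(R(Z_spread), 3))
# R is much larger for the spread, full-rank Z, so maximizing R(Z; alpha)
# in problem (4) pushes the representation away from collapsed solutions.
```

For the collapsed case the Gram matrix has a single nonzero eigenvalue $N$, giving $R = \log(1 + \alpha N)$, while spreading the same Frobenius mass over $\min\{d, N\}$ directions yields a strictly larger value by concavity of $\log$.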
Figure 1(a) and (b) show empirical evidence demonstrating that alignment occurs during the training period, where $G_b = Z_b^\top Z_b$, $M_b = (I - C_b)(I - C_b)^\top$, $Z_b \in \mathbb{R}^{d \times n_b}$, $C_b \in \mathbb{R}^{n_b \times n_b}$, and $n_b$ is the batch size, computed in mini-batch training at different epochs.
¹Without loss of generality, we omit the constraint $\mathrm{diag}(C) = \mathbf{0}$ throughout the analysis.
²The dimension equals the multiplicity of the smallest eigenvalues of $(I - C)(I - C)^\top$.
³Please refer to (Ma et al., 2007) for a packing-ball interpretation.
[Figure 1: Empirical Validation of Eigenspace Alignment and Noncollapsed Representation in Mini-batch on CIFAR-100. (a): alignment error curve $\|G_b M_b - M_b G_b\|_F$ during the training period. (b): eigenspace correlation curves measured via $\langle u_j, G_b u_j / \|G_b u_j\|_2 \rangle$ for $j = 1, \dots, n_b$. (c) and (d): eigenvalue curves of $G_b$ and $M_b$.]
[Figure 2: Empirical Validation of Noncollapsed Representation on CIFAR-10 and CIFAR-100. Clustering accuracy (ACC%) and subspace-preserving representation error (SRE%) are displayed under varying $\alpha$ and $\gamma$. When collapse occurs, both ACC and SRE dramatically degenerate; the perceivable phase transition phenomenon is consistent with the condition to avoid collapse.]
Next, we analyze problem (4) from the perspective of alternating optimization. When $Z$ is fixed, the optimization problem with respect to (w.r.t.) $C$ reduces to a standard self-expressive model, which has been extensively studied in (Soltanolkotabi & Candes, 2012; Pimentel-Alarcon & Nowak, 2016; Wang & Xu, 2016; Li et al., 2018; Tsakiris & Vidal, 2018). On the other hand, when $C$ is fixed, the optimization problem w.r.t. $Z$ becomes:
$\min_{Z} \ -\frac{1}{2}\log\det(I + \alpha Z^\top Z) + \frac{\gamma}{2}\|Z - ZC\|_F^2 \quad \text{s.t.} \ \|Z\|_F^2 = N,$ (5)
which is a non-convex optimization problem whose optimal solution remains under-explored. In light of the fact that $G$ and $M$ converge to shared eigenspaces, we decompose $G$ and $M$ into $U \mathrm{Diag}(\lambda_G^{(1)}, \dots, \lambda_G^{(N)}) U^\top$ and $U \mathrm{Diag}(\lambda_M^{(1)}, \dots, \lambda_M^{(N)}) U^\top$, respectively. Recalling that $G := Z^\top Z$ and $M := (I - C)(I - C)^\top$, by using the eigenvalue decomposition we reformulate problem (5) into a convex problem w.r.t. $\{\lambda_G^{(i)}\}_{i=1}^{\min\{d,N\}}$ (see Appendix A) and obtain the following result.
Theorem 2 (Noncollapse Representation). Suppose that $G$ and $M$ are aligned in the same eigenspaces and $\gamma < \frac{1}{\lambda_{\max}(M)} \cdot \frac{\alpha^2}{\alpha + \min\{d/N,\, 1\}}$. Then we have: a) $\mathrm{rank}(Z_\star) = \min\{d, N\}$, and b) the singular values $\sigma_{Z_\star}^{(i)} = \sqrt{\frac{1}{\gamma \lambda_M^{(i)} + \nu_\star} - \frac{1}{\alpha}}$ for all $i = 1, \dots, \min\{d, N\}$, where $Z_\star$ and $\nu_\star$ are the optimal primal and dual solutions, respectively.
Theorem 2 characterizes the optimal solution of problem (5). Recall that SEDSC in (2) yields a collapsed solution, where $\mathrm{rank}(Z_\star) \ll \min\{d, N\}$; in contrast, the rank of the minimizers of PRO-DSC in (5) satisfies $\mathrm{rank}(Z_\star) = \min\{d, N\}$.
In Figure 1(c) and (d), we show the curves of the eigenvalues of $G_b$ and $M_b$, computed in mini-batch training at different epochs, demonstrating that the learned representation no longer collapses. In Figure 2, we show the subspace clustering accuracy (ACC) and subspace-preserving representation error⁴ (SRE) as a function of the parameters $\alpha$ and $\gamma$. The phase transition phenomenon around $\gamma < \frac{1}{\lambda_{\max}(M)} \cdot \frac{\alpha^2}{\alpha + \min\{d/N,\, 1\}}$ well illustrates the sufficient condition in Theorem 2 to avoid representation collapse. Furthermore, from the perspective of jointly optimizing $Z$ and $C$, the following theorem demonstrates that PRO-DSC promotes a union-of-orthogonal-subspaces representation $Z$ and a block-diagonal self-expressive matrix $C$ under a certain condition.
⁴For each column $c_j$ in $C$, SRE is computed by $\frac{100}{N}\sum_j \big(1 - \sum_i w_{ij} \cdot |c_{ij}| / \|c_j\|_1\big)$, where $w_{ij} \in \{0, 1\}$ is the ground-truth affinity.
[Figure 3: Empirical Validation of Structured Representation on CIFAR-10. Gram matrices of the CLIP features $|X^\top X|$ and of the learned representations $|Z^\top Z|$ are shown in (a) and (b); PCA visualizations of the samples from three categories (Airplane, Automobile, Dog), $X^{(3)}$ and $Z^{(3)}$, are shown in (c) and (d), respectively.]
Theorem 3. Suppose that the sufficient conditions to prevent feature collapse are satisfied. Without loss of generality, further assume that the columns of $Z$ are arranged into $k$ blocks according to a certain $N \times N$ permutation matrix $\Gamma$, i.e., $Z = [Z_1, Z_2, \dots, Z_k]$. Then the condition under which PRO-DSC promotes the optimal solution $(Z_\star, C_\star)$ to have the desired structure (i.e., $Z_\star^\top Z_\star$ and $C_\star$ are both block-diagonal) is that $\langle (I - C)(I - C)^\top, G - G^* \rangle \to 0$, where $G^* := \mathrm{Diag}(G_{11}, G_{22}, \dots, G_{kk})$ and $G_{jj}$ is the block Gram matrix corresponding to $Z_j$.
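The SRE metric defined in footnote 4 above can be sketched directly; the following NumPy helper (an illustrative implementation of the footnote's formula, not code from the paper's repository) evaluates SRE on two hand-built coefficient matrices with known structure:

```python
import numpy as np

def sre(C, labels):
    """Subspace-preserving representation error (%), per footnote 4:
    100/N * sum_j (1 - sum_i w_ij * |c_ij| / ||c_j||_1),
    where w_ij = 1 iff points i and j share a ground-truth label."""
    N = C.shape[1]
    err = 0.0
    for j in range(N):
        cj = np.abs(C[:, j])
        l1 = cj.sum()
        if l1 == 0:
            err += 1.0  # empty column: all mass is "misplaced"
            continue
        within = cj[labels == labels[j]].sum()
        err += 1.0 - within / l1
    return 100.0 * err / N

# Perfectly subspace-preserving C -> SRE = 0; fully wrong support -> SRE = 100.
labels = np.array([0, 0, 1, 1])
C_good = np.array([[0, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
C_bad = 1.0 - np.eye(4) - C_good  # mass only on the other class
print(sre(C_good, labels), sre(C_bad, labels))  # -> 0.0 100.0
```

Low SRE means each column's coefficient mass stays within its own ground-truth class, which is exactly the regime in which the induced affinity supports correct spectral clustering.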
[Figure 4: Empirical validation of Theorem 3 in Mini-batch on CIFAR-10. The mean curves of the absolute values of the in-block-diagonal entries (thick) and the off-block-diagonal entries (thin) of $|C_b^*|$, $|G_b^*|$, $|C_b - C_b^*|$, $|G_b - G_b^*|$ are displayed along with the CSC condition $\langle (I - C_b)(I - C_b)^\top, G_b - G_b^* \rangle$ (gray) during training PRO-DSC.]
Remark 1. Theorem 3 suggests that our PRO-DSC is able to promote learning representations and a self-expressive matrix with the desired structures, i.e., the representations form a union of orthogonal subspaces and accordingly the self-expressive matrix is block-diagonal, when the condition $\langle (I - C)(I - C)^\top, G - G^* \rangle \to 0$ is met. We call this condition compatibly structured coherence (CSC); it relates to the distribution of the representations in $Z$ and the self-expressive coefficients in $C$. While we cannot theoretically characterize in general when the CSC condition will be satisfied, we have empirical evidence that our carefully designed implementation of PRO-DSC approximately satisfies this condition and thus yields representations and a self-expressive matrix with the desired structure (see Figure 3).⁵
In Figure 4, we show the curves of the compatibly structured coherence (CSC) condition and of the average values of the entries in $|G_b^*|$, $|G_b - G_b^*|$, $|C_b^*|$, $|C_b - C_b^*|$, computed in mini-batch during training PRO-DSC on CIFAR-10. As illustrated, the CSC condition is progressively satisfied; consequently, the average off-block values $|G_b - G_b^*|$ and $|C_b - C_b^*|$ gradually decrease while the average in-block values $|G_b^*|$ and $|C_b^*|$ gradually increase, which empirically validates that PRO-DSC promotes block-diagonal $G_b$ and $C_b$.
⁵Please refer to Appendix B.2 for more details about Figures 1-4.
2.3 SCALABLE IMPLEMENTATION. Existing SEDSC models typically use autoencoders to learn the representations and learn the self-expressive matrix $C$ through an $N \times N$ fully-connected layer (Ji et al., 2017; Peng et al., 2018; Zhou et al., 2018; Zhang et al., 2019a). While such an implementation is straightforward, it has two major drawbacks: a) since the number of self-expressive coefficients is quadratic in the number of data points, solving for these coefficients incurs an expensive computational burden; b) the learning process is transductive, i.e., the network parameters cannot be generalized to unseen data.
To address these issues, similar to (Zhang et al., 2021), we reparameterize the self-expressive coefficients $c_{ij}$ by a neural network. Specifically, the input data $x_i$ is fed into a neural network $h(\cdot; \Psi) : \mathbb{R}^D \to \mathbb{R}^d$ to yield normalized representations, i.e.,
$y_i := h(x_i; \Psi) / \|h(x_i; \Psi)\|_2,$ (6)
where $\Psi$ denotes all the parameters in $h(\cdot)$. Then, the parameterized self-expressive matrix $C_\Psi$ is generated by:
$C_\Psi := \mathcal{P}(Y^\top Y),$ (7)
where $Y := [y_1, \dots, y_N] \in \mathbb{R}^{d \times N}$ and $\mathcal{P}(\cdot)$ is the Sinkhorn projection (Cuturi, 2013), which has been widely applied in deep clustering (Caron et al., 2020; Ding et al., 2023).⁶ To enable efficient representation learning, we introduce another learnable mapping $f(\cdot; \Theta) : \mathbb{R}^D \to \mathbb{R}^d$, for which
$z_j := f(x_j; \Theta) / \|f(x_j; \Theta)\|_2$ (8)
is the learned representation of the input $x_j$, where $\Theta$ denotes the parameters in $f(\cdot)$ learning the structured representation $Z_\Theta := [z_1, \dots, z_N] \in \mathbb{R}^{d \times N}$.
Therefore, our principled framework for deep subspace clustering (PRO-DSC) in (4) can be reparameterized and reformulated as follows:
$\min_{\Theta, \Psi} \ \mathcal{L}(\Theta, \Psi) := -\frac{1}{2}\log\det(I + \alpha Z_\Theta^\top Z_\Theta) + \frac{\gamma}{2}\|Z_\Theta - Z_\Theta C_\Psi\|_F^2 + \beta \cdot r(C_\Psi).$ (9)
To strengthen the block-diagonal structure of the self-expressive matrix, we choose the block-diagonal regularizer (Lu et al., 2018) for $r(C_\Psi)$.
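The construction of $C_\Psi$ in Eqs. (6)-(7) can be sketched as follows. Note that the Sinkhorn projection here is one common form (exponentiation followed by alternating row/column normalization, after Cuturi, 2013); the exact projection used in the PRO-DSC codebase may differ in details, so treat this as an assumption-laden illustration:

```python
import numpy as np

def sinkhorn(S, n_iters=50, eps=0.1):
    # An assumed form of the Sinkhorn projection P(.): exponentiate the
    # similarity matrix, then alternately normalize rows and columns so the
    # result is (approximately) doubly stochastic.
    K = np.exp(S / eps)
    for _ in range(n_iters):
        K = K / K.sum(axis=1, keepdims=True)   # rows sum to 1
        K = K / K.sum(axis=0, keepdims=True)   # columns sum to 1
    return K

rng = np.random.default_rng(0)
d, N = 16, 8
Y = rng.standard_normal((d, N))
Y /= np.linalg.norm(Y, axis=0)        # normalized features y_i, as in Eq. (6)
C = sinkhorn(Y.T @ Y)                 # C_Psi = P(Y^T Y), as in Eq. (7)
print(bool(np.allclose(C.sum(axis=0), 1.0)))  # columns normalized -> True
np.fill_diagonal(C, 0.0)              # footnote 6: diag(C_Psi) = 0 avoids C = I
```

Because $C_\Psi$ is produced by a network $h(\cdot;\Psi)$ plus a fixed projection rather than by $N^2$ free parameters, the cost of the self-expressive module scales with the batch and the model is inductive: the same $\Psi$ can produce coefficients for unseen test data.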
To be specific, given the data affinity $A_\Psi$, induced by default as $A_\Psi := \frac{1}{2}(|C_\Psi| + |C_\Psi^\top|)$, the block-diagonal regularizer is defined as:
$r(C_\Psi) := \|A_\Psi\|_\kappa,$ (10)
where $\|A_\Psi\|_\kappa$ is the sum of the $k$ smallest eigenvalues of the Laplacian matrix of the affinity $A_\Psi$.⁷ Consequently, the parameters $\Theta$ and $\Psi$ of the reparameterized PRO-DSC can be trained by Stochastic Gradient Descent (SGD) with the loss function $\mathcal{L}(\Theta, \Psi)$ defined in (9). For clarity, we summarize the training and testing procedure of our PRO-DSC in Algorithm 1.
Remark 2. We note that all the commonly used regularizers with the extended block-diagonal property for the self-expressive model, as discussed in (Lu et al., 2018), can be used to improve the block-diagonal structure of the self-expressive matrix. More interestingly, the specific type of regularizer is not essential owing to the learned structured representation (please refer to Table 3 for details), and using a specific regularizer at all is also not essential, since SGD-based optimization induces some implicit regularization, e.g., low rank (Gunasekar et al., 2017; Arora et al., 2019).
⁶In practice, we set $\mathrm{diag}(C_\Psi) = \mathbf{0}$ to prevent the trivial solution $C_\Psi = I$.
⁷Recall that the number of zero eigenvalues of the Laplacian matrix equals the number of connected components in the graph (von Luxburg, 2007).
Algorithm 1 Scalable & Efficient Implementation of PRO-DSC via Differential Programming
Input: Dataset $X = X_{\text{train}} \cup X_{\text{test}}$, batch size $n_b$, hyper-parameters $\alpha, \beta, \gamma$, number of iterations $T$, learning rate $\eta$
Initialization: Randomly initialize the parameters $\Psi, \Theta$ of the networks $h(\cdot; \Psi)$ and $f(\cdot; \Theta)$
Training:
1: for $t = 1, \dots, T$ do
2:   Sample a batch $X_b \in \mathbb{R}^{D \times n_b}$ from $X_{\text{train}}$
     # Forward propagation
3:   Compute the self-expressive matrix $C_b \in \mathbb{R}^{n_b \times n_b}$ by Eqs. (6)-(7)
4:   Compute the representations $Z_b \in \mathbb{R}^{d \times n_b}$ by Eq. (8)
     # Backward propagation
5:   Compute the gradients $\nabla_\Psi := \partial\mathcal{L}/\partial\Psi$, $\nabla_\Theta := \partial\mathcal{L}/\partial\Theta$
6:   Update $\Psi \leftarrow \Psi - \eta \cdot \nabla_\Psi$ and $\Theta \leftarrow \Theta - \eta \cdot \nabla_\Theta$
7: end for
Testing:
8: Compute the self-expressive matrix $C_{\text{test}}$ by Eqs. (6)-(7) for $X_{\text{test}}$
9: Apply spectral clustering on the affinity $A_{\text{test}}$
3 EXPERIMENTS. To validate our theoretical findings and to demonstrate the performance of our proposed framework, we conduct extensive experiments on synthetic data (Sec. 3.1) and real-world data (Sec. 3.2). Implementation details and more results are provided in Appendices B.1 and B.3, respectively.
3.1 EXPERIMENTS ON SYNTHETIC DATA. To validate whether PRO-DSC resolves the collapse issue in SEDSC and learns representations with a UoS structure, we first follow the procedure in (Ding et al., 2023) to generate two sets of synthetic data, as shown in the first column of Figure 5, and then visualize in Figure 5(b)-(e) the representations learned by different methods on these synthetic data. We observe that the SEDSC model overly compresses all the representations onto a closed curve on the hypersphere, and with increased weight (i.e., $\gamma \uparrow$) on the self-expressive term, the representations collapse to a few points. Our PRO-DSC yields linearized representations lying on orthogonal subspaces in both cases, confirming the effectiveness of our approach; MLC (Ding et al., 2023) yields representations only approximately on orthogonal subspaces.
[Figure 5: Visualization Experiments on Synthetic Data. (a) Input Data; (b) SEDSC; (c) SEDSC ($\gamma \uparrow$); (d) MLC; (e) PRO-DSC.]
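The loss evaluated inside the training loop of Algorithm 1 combines Eq. (9) with the regularizer of Eq. (10). A minimal NumPy sketch of that computation (an illustration of the formulas, not the codebase's actual mini-batch implementation) is:

```python
import numpy as np

def pro_dsc_loss(Z, C, alpha=0.5, gamma=0.2, beta=0.01, k=2):
    """Sketch of the training loss in Eq. (9) with r(C) from Eq. (10).
    Z: (d, n_b) batch representations; C: (n_b, n_b) self-expressive matrix."""
    n_b = Z.shape[1]
    # L1 term: -1/2 log det(I + alpha Z^T Z)
    l1 = -0.5 * np.linalg.slogdet(np.eye(n_b) + alpha * Z.T @ Z)[1]
    # L2 term: gamma/2 ||Z - ZC||_F^2
    l2 = 0.5 * gamma * np.linalg.norm(Z - Z @ C) ** 2
    # r(C) = ||A||_kappa: sum of the k smallest eigenvalues of the Laplacian
    # of the affinity A = (|C| + |C^T|) / 2
    A = 0.5 * (np.abs(C) + np.abs(C).T)
    L = np.diag(A.sum(axis=1)) - A
    r = float(np.sort(np.linalg.eigvalsh(L))[:k].sum())
    return l1 + l2 + beta * r

rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 6)); Z /= np.linalg.norm(Z, axis=0)
C = np.abs(rng.standard_normal((6, 6))); np.fill_diagonal(C, 0.0)
loss = pro_dsc_loss(Z, C)
print(round(loss, 4))  # a finite scalar; in training this is minimized by SGD
```

In the actual framework the eigenvalue and log-determinant terms are differentiated through automatically (differential programming), so steps 5-6 of Algorithm 1 reduce to a standard autograd backward pass over this scalar.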
3.2 EXPERIMENTS ON REAL-WORLD DATA. To evaluate the performance of our proposed approach, we conduct experiments on six real-world image datasets, including CIFAR-10, CIFAR-20, CIFAR-100, ImageNet-Dogs-15, Tiny-ImageNet-200, and ImageNet-1k, with the pretrained CLIP features⁸ (Radford et al., 2021), and compare against several baseline methods, including classical clustering algorithms, e.g., k-means (MacQueen, 1967) and spectral clustering (Shi & Malik, 2000); subspace clustering algorithms, e.g., EnSC (You et al., 2016a) and SENet (Zhang et al., 2021); deep clustering algorithms, e.g., SCAN (Van Gansbeke et al., 2020), TEMI (Adaloglou et al., 2023) and CPP (Chu et al., 2024); and deep subspace clustering algorithms, e.g., DSCNet (Ji et al., 2017) and EDESC (Cai et al., 2022). We measure clustering performance using clustering accuracy (ACC) and normalized mutual information (NMI), and report the experimental results in Table 1, where the results of our PRO-DSC are averaged over 10 trials (±std). Since for most baselines, except TEMI, the clustering performance with the CLIP features has not been reported, we conduct experiments using the implementations provided by the authors; for TEMI, we cite the results directly from (Adaloglou et al., 2023).
⁸Please refer to Appendix B.3 for the results on other pre-trained models.
Table 1: Clustering performance comparison on the CLIP features (ACC / NMI, %). The best results are in bold and the second best results are underlined in the original paper. "OOM" means out of GPU memory.
| Method | CIFAR-10 | CIFAR-20 | CIFAR-100 | TinyImgNet-200 | ImgNetDogs-15 | ImageNet-1k |
|---|---|---|---|---|---|---|
| k-means | 83.5 / 84.1 | 46.9 / 49.4 | 52.8 / 66.8 | 54.1 / 73.4 | 52.7 / 53.6 | 53.9 / 79.8 |
| SC | 79.8 / 84.8 | 53.3 / 61.6 | 66.4 / 77.0 | 62.8 / 77.0 | 48.3 / 45.7 | 56.0 / 81.2 |
| SSCOMP | 85.5 / 83.0 | 61.4 / 63.4 | 55.6 / 69.7 | 56.7 / 72.7 | 25.6 / 15.9 | 44.1 / 74.4 |
| EnSC | 95.4 / 90.3 | 61.0 / 68.7 | 67.0 / 77.1 | 64.5 / 77.7 | 57.9 / 56.0 | 59.7 / 83.7 |
| SENet | 91.2 / 82.5 | 65.3 / 68.6 | 67.0 / 74.7 | 63.9 / 76.6 | 58.7 / 55.3 | 53.2 / 78.1 |
| SCAN | 95.1 / 90.3 | 60.8 / 61.8 | 64.1 / 70.8 | 56.5 / 72.7 | 70.5 / 68.2 | 54.4 / 76.8 |
| TEMI | 96.9 / 92.6 | 61.8 / 64.5 | 73.7 / 79.9 | - / - | - / - | 64.0 / - |
| CPP | 96.8 / 92.3 | 67.7 / 70.5 | 75.4 / 82.0 | 63.4 / 75.5 | 83.0 / 81.5 | 62.0 / 82.1 |
| EDESC | 84.2 / 79.3 | 48.7 / 49.1 | 53.1 / 68.6 | 51.3 / 68.8 | 53.3 / 47.9 | 46.5 / 75.5 |
| DSCNet | 78.5 / 73.6 | 38.6 / 45.7 | 39.2 / 53.4 | 62.3 / 68.3 | 40.5 / 30.1 | OOM / OOM |
| PRO-DSC (ours) | 97.2±0.2 / 92.8±0.4 | 71.6±1.2 / 73.2±0.5 | 77.3±1.0 / 82.4±0.5 | 69.8±1.1 / 80.5±0.7 | 84.0±0.6 / 81.2±0.8 | 65.0±1.2 / 83.4±0.6 |
Performance comparison. As shown in Table 1, our PRO-DSC significantly outperforms the subspace clustering algorithms, e.g., SSCOMP, EnSC and SENet, and the deep subspace clustering algorithms, e.g., DSCNet and EDESC. Moreover, PRO-DSC obtains better performance than the state-of-the-art deep clustering and deep manifold clustering methods, e.g., SCAN, TEMI and CPP.
Validation of the theoretical results.
To validate whether alignment emerges and whether representation collapse occurs during training, we compute $G_b = Z_b^\top Z_b$ and $M_b = (I - C_b)(I - C_b)^\top$ in mini-batch at different epochs, measure the alignment error via $\|G_b M_b - M_b G_b\|_F$ and the eigenspace correlation via $\langle u_j, G_b u_j / \|G_b u_j\|_2 \rangle$, where $u_j$ is the $j$-th ending eigenvector⁹ of $M_b$ for $j = 1, \dots, n_b$, and plot the eigenvalues of $G_b$ and $M_b$, where $n_b$ is the sample size per mini-batch. Moreover, we record the empirical performance (ACC and SRE) on CIFAR-10 and CIFAR-100 under varying hyper-parameters $\alpha$ and $\gamma$ to validate the condition in Theorem 2 for avoiding collapse. Experimental results are displayed in Figures 1 and 2. We observe that $G_b$ and $M_b$ become increasingly aligned and that the representations no longer collapse provided the parameters are properly set. More details are provided in Section B.2.
Evaluation of the learned representations. To quantitatively evaluate the effectiveness of the learned representations, we run k-means (MacQueen, 1967), spectral clustering (Shi & Malik, 2000), and EnSC (You et al., 2016a) on four datasets with three different features: a) the CLIP features, b) the representations learned via CPP, and c) the representations learned by our PRO-DSC. Experimental results are shown in Figure 6 (more results are given in Table B.4 of Appendix B.3). We observe that the representations learned by our PRO-DSC outperform the CLIP features and the CPP representations in most cases across different clustering algorithms and datasets. Notably, the clustering accuracy with the representations learned by our PRO-DSC exceeds 90% on CIFAR-10 and 75% on CIFAR-100, whichever clustering algorithm is used. Besides, the clustering performance is further improved by using the learnable mapping $h(\cdot; \Psi)$, indicating good generalization ability.
Figure 6: Clustering accuracy with CLIP features and learned representations (k-means, SC, EnSC, and the learnable mapping h(·; Ψ)) on (a) CIFAR-10, (b) CIFAR-100, (c) CIFAR-20, and (d) TinyImageNet-200, comparing CLIP, CPP, and PRO-DSC.
⁹ The eigenvectors are sorted according to the eigenvalues of M_b in ascending order.
Published as a conference paper at ICLR 2025
Sensitivity to hyper-parameters. In Figure 2, we verify that our PRO-DSC yields satisfactory results when the conditions in Theorem 2 for avoiding collapse are met. Moreover, we evaluate the performance sensitivity to the hyper-parameters γ and β by experiments on the CLIP features of CIFAR-10, CIFAR-100, and TinyImageNet-200 with varying γ and β. In Figure 7, we observe that the clustering performance remains satisfactory over a broad range of γ and β.
Figure 7: Evaluation of sensitivity to hyper-parameters γ and β on three datasets: (a) CIFAR-10, (b) CIFAR-100, and (c) TinyImageNet.
Time and memory cost. The most time-consuming operations in our PRO-DSC are computing the term involving log det(·) and the term ∥A∥κ involving eigenvalue decomposition. The time complexity for log det(·) is O(min{n_b³, d³}) due to the commutative property of the log det(·) function (Yu et al., 2020), and the time complexity for ∥A∥κ is O(kn_b²).¹⁰ Therefore, the overall time complexity of our PRO-DSC is O(kn_b² + min{n_b³, d³}).
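The min{n_b³, d³} term rests on the determinant identity det(I_n + αZ⊤Z) = det(I_d + αZZ⊤), which lets the log det be evaluated on whichever Gram matrix is smaller. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def logdet_term(Z, alpha):
    """log det(I + alpha * Z^T Z) for Z of shape (d, n), evaluated on
    the smaller of the two Gram matrices Z Z^T (d x d) or Z^T Z (n x n),
    so the cost is O(min{n, d}^3) rather than O(n^3)."""
    d, n = Z.shape
    G = Z @ Z.T if d <= n else Z.T @ Z
    sign, logabsdet = np.linalg.slogdet(np.eye(G.shape[0]) + alpha * G)
    return logabsdet  # I + alpha*G is positive definite, so sign == 1
```

Both evaluations return the same value; only the cube that dominates the cost changes.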
Note that TEMI (Adaloglou et al., 2023) employs H = 50 cluster heads during training, adding further time and memory cost, and CPP (Chu et al., 2024) involves computing log det(·) n_b + 1 times, leading to complexity O((n_b + 1) min{n_b³, d³}). The computation time and memory costs are shown in Table 2; all experiments are conducted on a single NVIDIA RTX 3090 GPU and an Intel Xeon Platinum 8255C CPU. We see that our PRO-DSC significantly reduces the time consumption, particularly for datasets with a large number of clusters.

Table 2: Comparison on time (s) and memory cost (MiB). "OOM" means out of GPU memory.

| Methods | Complexity | CIFAR-10 Time | CIFAR-10 Memory | CIFAR-100 Time | CIFAR-100 Memory | ImageNet-1k Time | ImageNet-1k Memory |
|---|---|---|---|---|---|---|---|
| SEDSC | O(N²d) | - | OOM | - | OOM | - | OOM |
| TEMI | O(Hn_b d²) | 6.9 | 1,766 | 5.1 | 2,394 | 262.1 | 2,858 |
| CPP | O((n_b+1) min{n_b³, d³}) | 3.5 | 3,802 | 7.1 | 10,374 | 1441.2 | 22,433 |
| PRO-DSC | O(kn_b² + min{n_b³, d³}) | 4.5 | 2,158 | 4.0 | 2,328 | 90.0 | 2,335 |

Table 3: Ablation studies on different loss functions and regularizers (ACC / NMI, %).

| L1 | L2 | Regularizer | CIFAR-10 | CIFAR-100 | ImgNetDogs-15 |
|---|---|---|---|---|---|
| - | √ | ∥A∥κ | 56.9 / 47.7 | 54.6 / 60.9 | 46.7 / 37.1 |
| √ | - | ∥A∥κ | 69.6 / 56.4 | 64.7 / 71.7 | 10.5 / 1.7 |
| √ | √ | none | 97.0 / 93.0 | 74.6 / 80.9 | 80.9 / 78.8 |
| √ | √ | ∥C∥₁ | 97.0 / 92.6 | 75.2 / 81.1 | 81.3 / 79.1 |
| √ | √ | ∥C∥²_F | 97.0 / 92.6 | 75.2 / 80.9 | 80.9 / 78.8 |
| √ | √ | ∥C∥∗ | 96.7 / 91.9 | 76.4 / 81.8 | 81.0 / 78.8 |
| √ | √ | ∥A∥κ | 97.2 / 92.8 | 77.3 / 82.4 | 84.0 / 81.2 |

Ablation study. To verify the effectiveness of each component in the loss function of our PRO-DSC, we conduct a set of ablation studies with the CLIP features on CIFAR-10, CIFAR-100, and ImageNetDogs-15, and report the results in Table 3, where L1 := −(1/2) log det(I + αZ_Θ⊤Z_Θ) and L2 := (1/2)∥Z_Θ − Z_ΘC_Ψ∥²_F. The absence of the term L1 leads to catastrophic feature collapse (as demonstrated in Sec. 2.1); whereas without the self-expressive term L2, the model lacks a loss function
¹⁰ For an N×N matrix, the complexity of computing its k eigenvalues by the Lanczos algorithm is O(kN²), and the complexity of computing its det(·) is O(N³).
for learning the self-expressive coefficients. In both cases, clustering performance drops significantly. More interestingly, when we replace the block-diagonal regularizer ∥A∥κ with ∥C∥₁, ∥C∥∗, or ∥C∥²_F, or even drop the explicit regularizer r(·), the clustering performance remains satisfactory. This confirms that the choice of the regularizer is not essential, owing to the structured representations learned by our PRO-DSC.
4 RELATED WORK
Deep subspace clustering. To tackle complex real-world data, a number of Self-Expressive Deep Subspace Clustering (SEDSC) methods have been developed in the past few years, e.g., (Ji et al., 2017; Peng et al., 2018; Zhou et al., 2018; Zhang et al., 2019a;b; Dang et al., 2020; Peng et al., 2020; Lv et al., 2021; Cai et al., 2022; Wang et al., 2023b). The key step in SEDSC is to adopt a deep learning module to embed the input data into a feature space. For example, a deep autoencoder network is adopted in (Peng et al., 2018), and deep convolutional autoencoder networks are used in (Ji et al., 2017; Zhou et al., 2018; Zhang et al., 2019a). Unfortunately, as pointed out in (Haeffele et al., 2021), the existing SEDSC methods suffer from a catastrophic feature collapse, and there is no evidence that the learned representations align with a UoS structure. To date, however, a principled deep subspace clustering framework has not been proposed.
Deep clustering.
Recently, most state-of-the-art deep clustering methods adopt a two-step procedure: in the first step, self-supervised pre-training, e.g., SimCLR (Chen et al., 2020a), MoCo (He et al., 2020), BYOL (Grill et al., 2020), and SwAV (Caron et al., 2020), is adopted to learn the representations; then deep clustering methods are incorporated to refine the representations via, e.g., pseudo-labeling (Caron et al., 2018; Van Gansbeke et al., 2020; Park et al., 2021; Niu et al., 2022), cluster-level contrastive learning (Li et al., 2021), local and global neighbor matching (Dang et al., 2021), graph contrastive learning (Zhong et al., 2021), or self-distillation (Adaloglou et al., 2023). Though the clustering performance has improved remarkably, the underlying geometric structure of the learned representations remains unclear and ignored.
Representation learning with a UoS structure. Methods for representation learning that favor a UoS structure were pioneered in the supervised setting, e.g., (Lezama et al., 2018; Yu et al., 2020). In (Lezama et al., 2018), a nuclear-norm-based geometric loss is proposed to learn representations that lie on a union of orthogonal subspaces; in (Yu et al., 2020), a principled framework called Maximal Coding Rate Reduction (MCR²) is proposed to learn representations that favor the structure of a union of orthogonal subspaces (Wang et al., 2024). More recently, the MCR² framework has been modified to develop deep manifold clustering methods, e.g., NMCE (Li et al., 2022), MLC (Ding et al., 2023), and CPP (Chu et al., 2024).
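For reference, the central quantity in the MCR² framework of Yu et al. (2020) is the coding rate R(Z, ε) = (1/2) log det(I + (d/(Nε²)) Z Z⊤) for features Z ∈ R^{d×N}. A minimal sketch of this standard formula (variable names ours; not the PRO-DSC objective itself):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z, eps) = 1/2 log det(I + d/(N eps^2) Z Z^T)
    for features Z of shape (d, N), as in the MCR^2 framework
    (Yu et al., 2020)."""
    d, N = Z.shape
    sign, logabsdet = np.linalg.slogdet(
        np.eye(d) + (d / (N * eps ** 2)) * (Z @ Z.T)
    )
    return 0.5 * logabsdet
```

The rate is zero for collapsed (all-zero) features and grows as the features spread over more directions, which is why log det terms of this form act as collapse-prevention regularizers.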
In (Li et al., 2022), the MCR² framework is combined with contrastive learning to perform manifold clustering and representation learning; in (Ding et al., 2023), the MCR² framework is combined with doubly stochastic affinity learning to perform manifold linearizing and clustering; and in (Chu et al., 2024), features from a large pre-trained model (e.g., CLIP) are adopted to evaluate the performance of (Ding et al., 2023). While the MCR² framework has been modified in these methods for manifold clustering, none of them provides theoretical justification for yielding structured representations. Though our PRO-DSC shares the regularizer defined in Eq. (3) with MLC (Ding et al., 2023), we are the first to adopt it within the SEDSC framework to attack the catastrophic feature collapse issue with theoretical analysis.
5 CONCLUSION
We presented a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which jointly learns structured representations and self-expressive coefficients. Specifically, our PRO-DSC incorporates an effective regularization into the self-expressive model to prevent catastrophic representation collapse, with theoretical justification. Moreover, we demonstrated that our PRO-DSC is able to learn structured representations that form a desirable UoS structure, and we developed an efficient implementation based on reparameterization and differentiable programming. We conducted extensive experiments on synthetic data and six benchmark datasets to verify the effectiveness of our proposed approach and validate our theoretical findings.
ACKNOWLEDGMENTS
The authors would like to thank the anonymous reviewers for their constructive comments. This work is supported by the National Natural Science Foundation of China under Grant 61876022.
ETHICS STATEMENT
In this work, we aim to extend traditional subspace clustering algorithms by leveraging deep learning techniques to enhance their representation learning capabilities. Our research does not involve any human subjects, and we have carefully ensured that it poses no potential risks or harms. Additionally, there are no conflicts of interest, sponsorship concerns, or issues related to discrimination, bias, or fairness associated with this study. We have taken steps to address privacy and security concerns, and all data used comply with legal and ethical standards. Our work fully adheres to research integrity principles, and no ethical concerns have arisen during the course of this study.
REPRODUCIBILITY STATEMENT
To ensure the reproducibility of our work, we have released the source code. Theoretical proofs of the claims made in this paper are provided in Appendix A, and the empirical validation of these theoretical results is shown in Figures 2-4, with further detailed explanations in Appendix B.2. All datasets used in our experiments are publicly available, and we have provided a comprehensive description of the data processing steps in Appendix B.1. Additionally, detailed experimental settings and configurations are outlined in Appendix B.1 to facilitate the reproduction of our results.
REFERENCES
Nikolas Adaloglou, Felix Michels, Hamza Kalisch, and Markus Kollmann. Exploring the limits of deep image clustering using pretrained models. In British Machine Vision Conference, pp. 297–299, 2023.
Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 32:7411–7422, 2019.
Laurent Bako and René Vidal. Algebraic identification of MIMO SARX models. In International Workshop on Hybrid Systems: Computation and Control, pp. 43–57, 2008.
Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks.
Advances in Neural Information Processing Systems, pp. 153–160, 2006.
Jinyu Cai, Jicong Fan, Wenzhong Guo, Shiping Wang, Yunhe Zhang, and Zhao Zhang. Efficient deep embedded subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21–30, 2022.
Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In European Conference on Computer Vision, pp. 132–149, 2018.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9650–9660, 2021.
Jianlong Chang, Gaofeng Meng, Lingfeng Wang, Shiming Xiang, and Chunhong Pan. Deep self-evolution clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):809–823, 2018.
Guangliang Chen and Gilad Lerman. Spectral curvature clustering (SCC). International Journal of Computer Vision, 81(3):317–330, 2009.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597–1607, 2020a.
Ying Chen, Chun-Guang Li, and Chong You. Stochastic sparse subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4155–4164, 2020b.
Tianzhe Chu, Shengbang Tong, Tianjiao Ding, Xili Dai, Benjamin David Haeffele, René Vidal, and Yi Ma. Image clustering via the principle of rate reduction in the age of pretrained models.
In International Conference on Learning Representations, 2024.
Joao Paulo Costeira and Takeo Kanade. A multibody factorization method for independently moving objects. International Journal of Computer Vision, 29:159–179, 1998.
Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26:2292–2300, 2013.
Zhiyuan Dang, Cheng Deng, Xu Yang, and Heng Huang. Multi-scale fusion subspace clustering using similarity constraint. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6657–6666, 2020.
Zhiyuan Dang, Cheng Deng, Xu Yang, Kun Wei, and Heng Huang. Nearest neighbor matching for deep clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13693–13702, 2021.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 248–255, 2009.
Li Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, and Benjamin D. Haeffele. Unsupervised manifold linearizing and clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5450–5461, October 2023.
Ehsan Elhamifar and René Vidal. Sparse subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2790–2797, 2009.
Ehsan Elhamifar and René Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2765–2781, 2013.
Maryam Fazel, Haitham Hindi, and Stephen P Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices.
In American Control Conference, volume 3, pp. 2156–2162, 2003.
Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284, 2020.
Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. Advances in Neural Information Processing Systems, pp. 6151–6159, 2017.
Benjamin D Haeffele, Chong You, and René Vidal. A critique of self-expressive deep subspace clustering. In International Conference on Learning Representations, 2021.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738, 2020.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009, 2022.
Jeffrey Ho, Ming-Hsuan Yang, Jongwoo Lim, Kuang-Chih Lee, and David Kriegman. Clustering appearances of objects under varying illumination conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11–18, 2003.
Wei Hong, John Wright, Kun Huang, and Yi Ma. Multiscale hybrid linear models for lossy image representation. IEEE Transactions on Image Processing, 15(12):3655–3671, 2006.
Zhizhong Huang, Jie Chen, Junping Zhang, and Hongming Shan. Learning representation for clustering via prototype scattering and positive sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7509–7524, 2023.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456, 2015.
Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian Reid. Deep subspace clustering networks. Advances in Neural Information Processing Systems, pp. 24–33, 2017.
Yuheng Jia, Jianhong Cheng, Hui Liu, and Junhui Hou. Towards calibrated deep clustering network. In International Conference on Learning Representations, 2025.
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.
José Lezama, Qiang Qiu, Pablo Musé, and Guillermo Sapiro. OLE: Orthogonal low-rank embedding - a plug and play geometric loss for deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8109–8118, 2018.
Chun-Guang Li, Chong You, and René Vidal. Structured sparse subspace clustering: A joint affinity learning and subspace clustering framework. IEEE Transactions on Image Processing, 26(6):2988–3001, 2017.
Chun-Guang Li, Chong You, and René Vidal. On geometric analysis of affine sparse subspace clustering. IEEE Journal on Selected Topics in Signal Processing, 12(6):1520–1533, 2018.
Yunfan Li, Peng Hu, Zitao Liu, Dezhong Peng, Joey Tianyi Zhou, and Xi Peng. Contrastive clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8547–8555, 2021.
Zengyi Li, Yubei Chen, Yann LeCun, and Friedrich T Sommer. Neural manifold clustering and embedding. arXiv preprint arXiv:2201.10000, 2022.
Derek Lim, René Vidal, and Benjamin D Haeffele. Doubly stochastic subspace clustering.
arXiv preprint arXiv:2011.14859, 2020.
Guangcan Liu, Zhouchen Lin, and Yong Yu. Robust subspace segmentation by low-rank representation. In International Conference on Machine Learning, pp. 663–670, 2010.
Xin Liu, Zhongdao Wang, Ya-Li Li, and Shengjin Wang. Self-supervised learning via maximum entropy coding. Advances in Neural Information Processing Systems, 35:34091–34105, 2022.
Canyi Lu, Hai Min, Zhong-Qiu Zhao, Lin Zhu, De-Shuang Huang, and Shuicheng Yan. Robust and efficient subspace segmentation via least squares regression. In European Conference on Computer Vision, pp. 347–360, 2012.
Canyi Lu, Jiashi Feng, Zhouchen Lin, Tao Mei, and Shuicheng Yan. Subspace clustering by block diagonal representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):487–501, 2018.
Juncheng Lv, Zhao Kang, Xiao Lu, and Zenglin Xu. Pseudo-supervised deep subspace clustering. IEEE Transactions on Image Processing, 30:5252–5263, 2021.
Yi Ma, Harm Derksen, Wei Hong, and John Wright. Segmentation of multivariate mixed data via lossy coding and compression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9):1546–1562, 2007.
J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, pp. 281–297, 1967.
Ryan McConville, Raul Santos-Rodriguez, Robert J Piechocki, and Ian Craddock. N2D: (not too) deep clustering via clustering the local manifold of an autoencoded embedding. In Proceedings of the International Conference on Pattern Recognition, pp. 5145–5152, 2021.
Brian McWilliams and Giovanni Montana. Subspace clustering of high dimensional data: a predictive approach. Data Mining and Knowledge Discovery, 28(3):736–772, 2014.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines.
In International Conference on Machine Learning, pp. 807–814, 2010.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729, 2008.
Chuang Niu, Hongming Shan, and Ge Wang. SPICE: Semantic pseudo-labeling for image clustering. IEEE Transactions on Image Processing, 31:7264–7278, 2022.
Foivos Ntelemis, Yaochu Jin, and Spencer A Thomas. Information maximization clustering via multi-view self-labelling. Knowledge-Based Systems, 250:109042, 2022.
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mido Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jégou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. DINOv2: Learning robust visual features without supervision. Transactions on Machine Learning Research, 2024.
Sungwon Park, Sungwon Han, Sundong Kim, Danu Kim, Sungkyu Park, Seunghoon Hong, and Meeyoung Cha. Improving unsupervised image clustering with robust learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12278–12287, 2021.
Vishal M Patel and René Vidal. Kernel sparse subspace clustering. In Proceedings of the IEEE International Conference on Image Processing, pp. 2849–2853, 2014.
Vishal M Patel, Hien Van Nguyen, and René Vidal. Latent space sparse subspace clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 225–232, 2013.
Vishal M Patel, Hien Van Nguyen, and René Vidal. Latent space sparse and low-rank subspace clustering. IEEE Journal of Selected Topics in Signal Processing, 9(4):691–701, 2015.
Xi Peng, Jiashi Feng, Shijie Xiao, Wei-Yun Yau, Joey Tianyi Zhou, and Songfan Yang. Structured autoencoders for subspace clustering. IEEE Transactions on Image Processing, 27(10):5076–5086, 2018.
Xi Peng, Jiashi Feng, Joey Tianyi Zhou, Yingjie Lei, and Shuicheng Yan. Deep subspace clustering. IEEE Transactions on Neural Networks and Learning Systems, 31(12):5509–5521, 2020.
Daniel Pimentel-Alarcon and Robert Nowak. The information-theoretic requirements of subspace clustering with missing data. In International Conference on Machine Learning, pp. 802–810, 2016.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748–8763, 2021.
Shankar Rao, Roberto Tron, René Vidal, and Yi Ma. Motion segmentation in the presence of outlying, incomplete, or corrupted trajectories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1832–1845, 2010.
Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
Mahdi Soltanolkotabi and Emmanuel J Candes. A geometric analysis of subspace clustering with outliers. Annals of Statistics, 40(4):2195–2238, 2012.
Manolis Tsakiris and René Vidal. Algebraic clustering of affine subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(2):482–489, 2017.
Manolis Tsakiris and René Vidal. Theoretical analysis of sparse subspace clustering with missing entries. In International Conference on Machine Learning, pp. 4975–4984, 2018.
Paul Tseng. Nearest q-flat to m points. Journal of Optimization Theory and Applications, 105(1):249–252, 2000.
Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE.
Journal of Machine Learning Research, 9(11), 2008.
Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. SCAN: Learning to classify images without labels. In European Conference on Computer Vision, pp. 268–285, 2020.
René Vidal. Identification of PWARX hybrid models with unknown and possibly different orders. In Proceedings of the American Control Conference, pp. 547–552, 2004.
René Vidal, Yi Ma, and Shankar Sastry. Generalized Principal Component Analysis (GPCA). IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(12):1–15, 2005.
René Vidal, Roberto Tron, and Richard Hartley. Multiframe motion segmentation with missing data using PowerFactorization, and GPCA. International Journal of Computer Vision, 79(1):85–105, 2008.
Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
Libin Wang, Yulong Wang, Hao Deng, and Hong Chen. Attention reweighted sparse subspace clustering. Pattern Recognition, 139:109438, 2023a.
Peng Wang, Huikang Liu, Druv Pai, Yaodong Yu, Zhihui Zhu, Qing Qu, and Yi Ma. A global geometric analysis of maximal coding rate reduction. In International Conference on Machine Learning, 2024.
Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, and Guoren Wang. Self-supervised information bottleneck for deep multi-view subspace clustering. IEEE Transactions on Image Processing, 32:1555–1567, 2023b.
Yu-Xiang Wang and Huan Xu. Noisy sparse subspace clustering. Journal of Machine Learning Research, 17(12):1–41, 2016.
Lai Wei, Zhengwei Chen, Jun Yin, Changming Zhu, Rigui Zhou, and Jin Liu. Adaptive graph convolutional subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6262–6271, 2023.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms.
arXiv preprint arXiv:1708.07747, 2017.
Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, pp. 478–487, 2016.
Jianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5147–5156, 2016.
Chong You, Chun-Guang Li, Daniel Robinson, and René Vidal. Oracle based active set algorithm for scalable elastic net subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3928–3937, 2016a.
Chong You, Daniel Robinson, and René Vidal. Scalable sparse subspace clustering by orthogonal matching pursuit. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3918–3927, 2016b.
Yaodong Yu, Kwan Ho Ryan Chan, Chong You, Chaobing Song, and Yi Ma. Learning diverse and discriminative representations via the principle of maximal coding rate reduction. Advances in Neural Information Processing Systems, 33:9422–9434, 2020.
Pengxin Zeng, Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, and Xi Peng. Deep fair clustering via maximizing and minimizing mutual information: Theory, algorithm and metric. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 23986–23995, 2023.
Hongjing Zhang and Ian Davidson. Deep fair discriminative clustering. arXiv preprint arXiv:2105.14146, 2021.
Junjian Zhang, Chun-Guang Li, Chong You, Xianbiao Qi, Honggang Zhang, Jun Guo, and Zhouchen Lin. Self-supervised convolutional subspace clustering network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5473–5482, 2019a.
Shangzhi Zhang, Chong You, René Vidal, and Chun-Guang Li. Learning a self-expressive network for subspace clustering.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12393–12403, 2021.
Teng Zhang, Arthur Szlam, and Gilad Lerman. Median k-flats for hybrid linear modeling with many outliers. In IEEE/CVF International Conference on Computer Vision Workshops, pp. 234–241, 2009.
Tong Zhang, Pan Ji, Mehrtash Harandi, Wenbing Huang, and Hongdong Li. Neural collaborative subspace clustering. In International Conference on Machine Learning, pp. 7384–7393, 2019b.
Chen Zhao, Chun-Guang Li, Wei He, and Chong You. Deep self-expressive learning. In The First Conference on Parsimony and Learning, volume 234, pp. 228–247, 2024.
Huasong Zhong, Jianlong Wu, Chong Chen, Jianqiang Huang, Minghua Deng, Liqiang Nie, Zhouchen Lin, and Xian-Sheng Hua. Graph contrastive clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9224–9233, 2021.
Pan Zhou, Yunqing Hou, and Jiashi Feng. Deep adversarial subspace clustering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1596–1604, 2018.
SUPPLEMENTARY MATERIAL FOR "EXPLORING A PRINCIPLED FRAMEWORK FOR DEEP SUBSPACE CLUSTERING"
The supplementary materials are divided into three parts. In Section A, we present the proofs of our theoretical results. In Section B, we present the supplementary materials for experiments, including experimental details (Sec. B.1), empirical validation of our theoretical results (Sec. B.2), and more experimental results (Sec. B.3). In Section C, we discuss the limitations and failure cases of PRO-DSC.
A PROOFS OF MAIN RESULTS
As a preliminary, we start by introducing a lemma from (Haeffele et al., 2021) and provide its proof for the convenience of the readers.
Lemma 1 (Haeffele et al., 2021). The rows of the optimal solution Z for problem (2) are the eigenvectors associated with the smallest eigenvalues of (I−C)(I−C)⊤.
Proof.
We note that
∥Z − ZC∥²_F = Tr(Z(I−C)(I−C)⊤Z⊤) = Σ_{i=1}^{d} z^(i)(I−C)(I−C)⊤ z^(i)⊤,
where z^(i) is the i-th row of Z; thus problem (2) is reformulated as:
min_{ {z^(i)}_{i=1}^{d}, C } (1/2) Σ_{i=1}^{d} z^(i)(I−C)(I−C)⊤ z^(i)⊤ + β·r(C)  s.t. ∥Z∥²_F = N. (11)
Without loss of generality, the magnitude of each row of Z is assumed to be fixed, i.e., ∥z^(i)∥²₂ = τ_i, i = 1, …, d, where Σ_{i=1}^{d} τ_i = N. Then the optimization problem becomes:
min_{ {z^(i)}_{i=1}^{d}, C } (1/2) Σ_{i=1}^{d} z^(i)(I−C)(I−C)⊤ z^(i)⊤ + β·r(C)  s.t. ∥z^(i)∥²₂ = τ_i, i = 1, …, d. (12)
The Lagrangian of problem (12) is:
L({z^(i)}_{i=1}^{d}, C, {ν_i}_{i=1}^{d}) := (1/2) Σ_{i=1}^{d} z^(i)(I−C)(I−C)⊤ z^(i)⊤ + β·r(C) + (1/2) Σ_{i=1}^{d} ν_i (∥z^(i)∥²₂ − τ_i), (13)
where {ν_i}_{i=1}^{d} are the Lagrange multipliers. The necessary conditions for an optimal solution are:
∇_{z^(i)} L = z^(i)(I−C)(I−C)⊤ + ν_i z^(i) = 0,  ∥z^(i)∥²₂ = τ_i,  i = 1, …, d, (14)
which implies that the optimal solutions z^(i) are eigenvectors of (I−C)(I−C)⊤. By further considering the objective function, the optimal z^(i) should be the eigenvectors associated with the smallest eigenvalues of (I−C)(I−C)⊤ for all i ∈ {1, …, d}. The corresponding optimal value is (1/2) λ_min((I−C)(I−C)⊤) Σ_{i=1}^{d} τ_i + β·r(C) = (N/2) λ_min((I−C)(I−C)⊤) + β·r(C), which is independent of {τ_i}_{i=1}^{d}. Therefore, we conclude that the rows of the optimal solution Z to problem (2) are eigenvectors associated with the smallest eigenvalues of (I−C)(I−C)⊤.
Lemma A1. Suppose that the matrices A, B ∈ R^{n×n} are symmetric. Then AB = BA if and only if A and B can be diagonalized simultaneously by some U ∈ O(n), where O(n) is the orthogonal group.
Now we present our theorem about the optimal solution of the PRO-DSC problem in (4) with its proof.
Theorem 1. Denote the optimal solution of PRO-DSC in (4) as (Z⋆, C⋆). Then G⋆ and M⋆ share eigenspaces, where G⋆ := Z⋆⊤Z⋆ and M⋆ := (I − C⋆)(I − C⋆)⊤; i.e., G⋆ and M⋆ can be diagonalized simultaneously by some U ∈ O(N), where O(N) is the orthogonal matrix group.
Proof.
We first consider the subproblem of the PRO-DSC problem in (4) with respect to Z and prove that, for every C ∈ R^{N×N}, the corresponding optimal Z_{⋆,C} satisfies G_{⋆,C}M = MG_{⋆,C}, where G_{⋆,C} = Z_{⋆,C}^⊤ Z_{⋆,C} and M := (I − C)(I − C)^⊤, implying that G_{⋆,C} and M share an eigenspace. We then demonstrate that G⋆ and M⋆ share an eigenspace.

The subproblem with respect to Z_C is reformulated into the following semidefinite program:

  min_{G_C}  −(1/2) log det(I + αG_C) + (γ/2) tr(G_C M)   s.t.  G_C ⪰ 0,  tr(G_C) = N,   (15)

whose Lagrangian is

  L(G_C, Δ, ν) := −(1/2) log det(I + αG_C) + (γ/2) tr(G_C M) − tr(ΔG_C) + (ν/2)(tr(G_C) − N),   (16)

where the scalar ν and the N×N symmetric matrix Δ are Lagrange multipliers. The KKT conditions are:

  −(α/2)(I + αG_{⋆,C})^{−1} + (γ/2)M − Δ⋆ + (ν⋆/2)I = 0,   (17)
  G_{⋆,C} ⪰ 0,   (18)
  tr(G_{⋆,C}) = N,   (19)
  Δ⋆ ⪰ 0,   (20)
  Δ⋆ G_{⋆,C} = 0,   (21)

which are sufficient and necessary for the global optimality of the solution G_{⋆,C}. From Eqs. (18), (20), and (21), we have Δ⋆G_{⋆,C} − G_{⋆,C}Δ⋆ = Δ⋆G_{⋆,C} − (Δ⋆G_{⋆,C})^⊤ = 0, implying that Δ⋆ and G_{⋆,C} share an eigenspace. By the eigenvalue decompositions Δ⋆ = QΛ_{Δ⋆}Q^⊤ and G_{⋆,C} = QΛ_{G_{⋆,C}}Q^⊤, where Λ_{Δ⋆} and Λ_{G_{⋆,C}} are diagonal matrices, we have

  2·QΛ_{Δ⋆}Q^⊤ = −αQ(I + αΛ_{G_{⋆,C}})^{−1}Q^⊤ + γM + ν⋆I   (22)
  ⇒  γM + ν⋆I = Q(2Λ_{Δ⋆} + α(I + αΛ_{G_{⋆,C}})^{−1})Q^⊤,   (23)

where the first equality follows from Eq. (17). Since 2Λ_{Δ⋆} + α(I + αΛ_{G_{⋆,C}})^{−1} is a diagonal matrix, γM + ν⋆I can be diagonalized by Q. In other words, every M ∈ S_N^+ in problem (15) shares an eigenspace with the corresponding optimal solution G_{⋆,C}.

Next, denote by (Z⋆, C⋆) the optimal solution of problem (4), by C := {Z | ‖Z‖_F² = N} the feasible set, and by f(·,·) the objective function. Since Z⋆ = argmin_{Z∈C} f(Z, C⋆) (otherwise the optimality of (Z⋆, C⋆) would be contradicted), we conclude that G⋆ and M⋆ share an eigenspace, where M⋆ := (I − C⋆)(I − C⋆)^⊤.

Theorem 2. Suppose that G and M are aligned in the same eigenspaces and γ < (1/λ_max(M)) · α²/(α + min{d/N, 1}). Then we have that: a) rank(Z⋆) = min{d, N}, and b) the singular values satisfy σ_{Z⋆}^{(i)} = sqrt(1/(γλ_M^{(i)} + ν⋆) − 1/α) for all i = 1, …, min{d, N}, where Z⋆ and ν⋆ are the primal optimal solution and the dual optimal solution, respectively.
Proof. Since ‖Z − ZC‖_F² = Tr(Z^⊤Z(I − C)(I − C)^⊤) and ‖Z‖_F² = Tr(Z^⊤Z), problem (5) is equivalent to

  min_G  −(1/2) log det(I + αG) + (γ/2) Tr(GM)   s.t.  Tr(G) = N,  G ⪰ 0,   (24)

where G := Z^⊤Z and M := (I − C)(I − C)^⊤. Since G and M have aligned eigenspaces, they can be diagonalized simultaneously by an orthogonal matrix U, i.e., G = UΛ_G U^⊤ and M = UΛ_M U^⊤. Therefore, problem (24) can be transformed into the following eigenvalue optimization problem:

  min_{{λ_G^{(i)}}}  Σ_{i=1}^{min{d,N}} [ −(1/2) log(1 + αλ_G^{(i)}) + (γ/2) λ_M^{(i)} λ_G^{(i)} ]
  s.t.  Σ_{i=1}^{min{d,N}} λ_G^{(i)} = N,  λ_G^{(i)} ≥ 0 for all i = 1, …, min{d, N},   (25)

where {λ_M^{(1)}, …, λ_M^{(min{d,N})}} are the diagonal entries of Λ_M and {λ_G^{(1)}, …, λ_G^{(min{d,N})}} are the diagonal entries of Λ_G. Perhaps surprisingly, problem (25) is a convex optimization problem; hence the KKT conditions are sufficient and necessary for a global minimizer. The Lagrangian of problem (25) is

  L({λ_G^{(i)}}, {μ_i}, ν) := Σ_{i=1}^{min{d,N}} [ −(1/2) log(1 + αλ_G^{(i)}) + (γ/2) λ_M^{(i)} λ_G^{(i)} − μ_i λ_G^{(i)} ] + (ν/2)(Σ_{i=1}^{min{d,N}} λ_G^{(i)} − N),   (26)

where μ_i ≥ 0, i = 1, …, min{d, N}, and ν are the Lagrange multipliers. The KKT conditions are as follows:

  ∇_{λ_{G⋆}^{(i)}} L = 0,  for all i = 1, …, min{d, N},   (27)
  λ_{G⋆}^{(i)} ≥ 0,  for all i = 1, …, min{d, N},   (28)
  Σ_{i=1}^{min{d,N}} λ_{G⋆}^{(i)} = N,   (29)
  μ_{i⋆} ≥ 0,  for all i = 1, …, min{d, N},   (30)
  μ_{i⋆} λ_{G⋆}^{(i)} = 0,  for all i = 1, …, min{d, N}.   (31)

The stationarity condition in (27) is equivalent to

  μ_{i⋆} = (1/2) [ ν⋆ + γλ_M^{(i)} − α/(1 + αλ_{G⋆}^{(i)}) ].   (32)

Using Eqs. (28) and (30)–(32), we arrive at the following two cases:

  μ_{i⋆} > 0  ⇒  λ_{G⋆}^{(i)} = 0,  1/(ν⋆ + γλ_M^{(i)}) − 1/α < 0,   (33)
  μ_{i⋆} = 0  ⇒  λ_{G⋆}^{(i)} > 0,  λ_{G⋆}^{(i)} = 1/(ν⋆ + γλ_M^{(i)}) − 1/α > 0.   (34)

From these two cases, we conclude that

  λ_{G⋆}^{(i)} = max{ 0,  1/(ν⋆ + γλ_M^{(i)}) − 1/α },   (35)

where ν⋆ satisfies

  Σ_{i=1}^{min{d,N}} max{ 0,  1/(ν⋆ + γλ_M^{(i)}) − 1/α } = N.   (36)

Given that γ < (α − ν⋆)/λ_max(M), we have 1/(ν⋆ + γλ_M^{(i)}) − 1/α > 0 for all i = 1, …, min{d, N}.
Therefore, for the optimal solution Z⋆ of problem (5), we conclude that rank(Z⋆) = min{d, N} and the singular values satisfy σ_{Z⋆}^{(i)} = sqrt(1/(γλ_M^{(i)} + ν⋆) − 1/α) for all i = 1, …, min{d, N}.

Note that the results established above rely on the condition γλ_max(M) < α − ν⋆, where ν⋆ is the optimal Lagrange multiplier, a fixed value determined by α, γ, and λ_max(M). Next, we develop an upper bound for ν⋆ and show that the condition γλ_max(M) < α − ν⋆ can be ensured by adjusting only the hyper-parameters α and γ. From Eq. (36), we can easily derive an upper bound on ν⋆:

  N = Σ_{i=1}^{min{d,N}} max{ 0, 1/(ν⋆ + γλ_M^{(i)}) − 1/α } ≤ min{d, N}/(ν⋆ + γλ_min(M)) − min{d, N}/α,   (37)
  ⇒  ν⋆ ≤ 1/( N/min{d, N} + 1/α ) − γλ_min(M).   (38)

Therefore, we can obtain a tighter bound relating α − ν⋆ and γλ_max(M):

  γλ_max(M) < α²/(α + min{d/N, 1}) < α²/(α + min{d/N, 1}) + γλ_min(M) ≤ α − ν⋆,   (39)

which means that the condition γλ_max(M) < α − ν⋆ can be reformulated as

  γ < (1/λ_max(M)) · α²/(α + min{d/N, 1}).   (40)

Remark 3. We note that (25) is a reverse water-filling problem in which the water level is controlled by 1/α, as shown in Figure A.1. When G and M have aligned eigenspaces and γ < (α − ν⋆)/λ_max(M), we have rank(Z⋆) = min{d, N} and λ_{G⋆}^{(i)} ≠ 0 for all i ≤ min{d, N}. When γ ≥ (α − ν⋆)/λ_max(M), the nonzero λ_G^{(i)} first vanish for the larger λ_M^{(i)}.

Figure A.1: Illustration of the optimal solution for problem (25). The primal problem can be transformed into a classical reverse water-filling problem.

Theorem 3. Suppose that the sufficient conditions to prevent catastrophic feature collapse are satisfied. Without loss of generality, we further assume that the columns of the matrix Z are arranged into k blocks according to a certain N×N permutation matrix Γ, i.e., Z = [Z₁, Z₂, ···, Z_k].
Then the condition under which PRO-DSC promotes the optimal solution (Z⋆, C⋆) to have the desired structure (i.e., Z⋆^⊤Z⋆ and C⋆ are block-diagonal) is that ⟨(I − C)(I − C)^⊤, G − G*⟩ → 0, where G* := Diag(G₁₁, G₂₂, ···, G_kk) and G_jj is the block Gram matrix corresponding to Z_j.

Proof. We begin by analyzing the first two terms of the loss function L̃ := L₁ + γL₂, where

  L₁ := −(1/2) log det(I + α(ZΓ)^⊤(ZΓ)) = −(1/2) log det(I + αG),
  L₂ := (1/2) ‖ZΓ − ZΓΓ^⊤CΓ‖_F² = (1/2) ‖Z − ZC‖_F² = (1/2) Tr(G(I − C)(I − C)^⊤),

since Γ^⊤Γ = ΓΓ^⊤ = I. Thus we have

  L̃(G, C) = (γ/2) Tr(G(I − C)(I − C)^⊤) − (1/2) log det(I + αG),   (41)

which is a convex function with respect to (w.r.t.) G and C, separately. By the property of a convex function w.r.t. C, we have

  L̃(G, C) ≥ L̃(G*, C*) + ⟨∇_C L̃(G*, C*), C − C*⟩ + ⟨(γ/2)(I − C)(I − C)^⊤, G − G*⟩
          = L̃(G*, C*) + ⟨−γG*(I − C*), C − C*⟩ + ⟨(γ/2)(I − C)(I − C)^⊤, G − G*⟩,

where C* = Diag(C₁₁, C₂₂, ···, C_kk) with the blocks associated with the partition Z = [Z₁, Z₂, ···, Z_k]. Since ⟨G*(I − C*), C − C*⟩ = 0 due to the complementarity between G* and I − C*, we have

  L̃(G, C) ≥ L̃(G*, C*) + ⟨(γ/2)(I − C)(I − C)^⊤, G − G*⟩.

It is easy to see that if ⟨(I − C)(I − C)^⊤, G − G*⟩ → 0, then we will have

  L̃(G, C) ≥ L̃(G*, C*),   (42)

where equality holds only when G = G* and C = C*. Furthermore, if the regularizer r(·) satisfies the extended block-diagonal condition defined in (Lu et al., 2018), then r(C) ≥ r(C*), where equality holds if and only if C = C*. Therefore, we have

  L(G, C) = L̃(G, C) + β·r(C) ≥ L̃(G*, C*) + β·r(C*) = L(G*, C*).   (43)

Thus we conclude that minimizing the loss function L(G, C) = L̃(G, C) + β·r(C) promotes the optimal solution (G⋆, C⋆) to have a block-diagonal structure. We note that the Gram matrix being block-diagonal, i.e., G⋆ = G*, implies that Z_{⋆,j₁}^⊤ Z_{⋆,j₂} = 0 for all 1 ≤ j₁ < j₂ ≤ k, which corresponds to the subspaces associated with the blocks Z_{⋆,j} being orthogonal to each other.
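The structural claim of Theorem 3, namely that mutually orthogonal blocks yield an exactly block-diagonal Gram matrix, can be illustrated with a minimal NumPy sketch. This is our illustration with hypothetical toy dimensions, not part of the original experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: k = 2 clusters of representations in R^6 whose columns span
# mutually orthogonal subspaces (block 1 lives in the first three
# coordinates, block 2 in the last three).
Z1 = np.vstack([rng.standard_normal((3, 4)), np.zeros((3, 4))])  # 4 points
Z2 = np.vstack([np.zeros((3, 5)), rng.standard_normal((3, 5))])  # 5 points
Z = np.hstack([Z1, Z2])                                          # d = 6, N = 9

G = Z.T @ Z  # Gram matrix of the representations

# Cross-cluster inner products vanish, so G = Diag(G11, G22) exactly.
off_block = np.linalg.norm(G[:4, 4:])
print(off_block)  # 0.0
```

Conversely, any nonzero off-block entry of G would witness a pair of representations from different clusters that are not orthogonal.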
B EXPERIMENTAL SUPPLEMENTARY MATERIAL

B.1 EXPERIMENTAL DETAILS

B.1.1 SYNTHETIC DATA

As shown in Figure 5a (top row), data points are generated from two manifolds. The first manifold (colored in purple) is generated by sampling 100 data points from

  x = [ cos((1/5)sin(5φ))·cosφ,  cos((1/5)sin(5φ))·sinφ,  sin((1/5)sin(5φ)) ]^⊤ + ε,   (44)

where φ is drawn uniformly from [0, 2π] and ε ∼ N(0, 0.05·I₃) is additive noise. The second manifold (colored in blue) is generated by sampling 100 data points from a Gaussian distribution N([0, 0, 1]^⊤, 0.05·I₃). To further test more complicated cases, we also generate the second manifold by sampling 50 data points from the Gaussian distribution N([0, 0, 1]^⊤, 0.05·I₃) and 50 data points from another Gaussian distribution N([0, 0, −1]^⊤, 0.05·I₃), as shown in Figure 5a (bottom row).

In PRO-DSC, the learnable mappings h(·;Ψ) and f(·;Θ) are implemented as two MLPs with Rectified Linear Units (ReLU) (Nair & Hinton, 2010) as the activation function. The hidden and output dimensions of the MLPs are set to 100 and 3, respectively. We train PRO-DSC with batch size n_b = 200 and learning rate η = 5×10⁻³ for 1000 epochs. We set γ = 0.5, β = 1000, and α = 3/(0.1·200).

We use DSCNet (Ji et al., 2017) as the representative of the SEDSC methods. In Figure 5b, we set γ = 1 for both cases, whereas in Figure 5c, γ is set to 5 and 100 for the two cases, respectively. The encoder and decoder of DSCNet are MLPs with two hidden layers, with the hidden and output dimensions set to 100 and 3, respectively. We train DSCNet with batch size n_b = 200 and learning rate η = 1×10⁻⁴ for 1000 epochs.

B.1.2 REAL-WORLD DATASETS

Datasets description. CIFAR-10 and CIFAR-100 are classic image datasets consisting of 50,000 images for training and 10,000 images for testing, split into 10 and 100 classes, respectively. CIFAR-20 shares the same images with CIFAR-100 while taking the 20 super-classes as labels. ImageNet-Dogs consists of 19,500 images of 15 different dog breeds.
Tiny-ImageNet consists of 100,000 images from 200 different classes. ImageNet-1k is a superset of the previous two datasets, containing more than 1,280,000 real-world images from 1,000 classes. For all the datasets except ImageNet-Dogs, we train the network implementing PRO-DSC on the train set and evaluate it on the test set to validate the generalization of the learned model. For the ImageNet-Dogs dataset, which does not have a test set, we train the network implementing PRO-DSC on the train set and report the clustering performance on the training set. For a direct comparison, we summarize the basic information of these datasets in Table B.1.

To leverage the CLIP features for training, the input images are first resized to 224 with respect to the smaller edge, then center-cropped to 224×224 and fed into the CLIP pre-trained image encoder to obtain fixed features. (Footnote 11: We use the ViT-L/14 pre-trained model provided by https://github.com/openai/CLIP for 768-dimensional features.) The subsequent training of PRO-DSC takes the extracted features as input, instead of loading the entire CLIP pre-trained model.

Network architecture and hyper-parameters. The learnable mappings h(·;Ψ) and f(·;Θ) are two fully-connected layers with the same output dimension d. Following (Chu et al., 2024), for the experiments on real-world data, we stack a pre-feature layer before the learnable mappings, which is composed of two fully-connected layers with ReLU and batch normalization (Ioffe & Szegedy, 2015). We train the network with the SGD optimizer with the learning rate set to η = 10⁻⁴, and the weight-decay parameters of f(·;Θ) and h(·;Ψ) set to 10⁻⁴ and 5×10⁻³, respectively.

Table B.1: Basic statistical information of the datasets. We summarize each dataset in terms of its train and test splits and the number of classes.
Datasets         # Train     # Test    # Classes
CIFAR-10          50,000     10,000           10
CIFAR-20          50,000     10,000           20
CIFAR-100         50,000     10,000          100
ImageNet-Dogs     19,500        N/A           15
TinyImageNet     100,000     10,000          200
ImageNet       1,281,167     50,000         1000

Following (Chu et al., 2024), we warm up the training of f(·;Θ) by diversifying the features with L₁ = −log det(I + αZ_Θ^⊤Z_Θ) for a few iterations and share the weights with h(·;Ψ). We set α = d/(0.1·n_b) for all the experiments. We summarize the hyper-parameters for training the network implementing PRO-DSC in Table B.2.

Table B.2: Hyper-parameter configuration for training the network to implement PRO-DSC with CLIP pre-trained features. Here η is the learning rate, d_pre is the hidden and output dimension of the pre-feature layer, d is the output dimension of h and f, n_b is the batch size for training, and "# warm-up" is the number of iterations of the warm-up stage.

                  η        d_pre     d    #epochs    n_b    #warm-up      γ       β
CIFAR-10        1×10⁻⁴      4096    128       10     1024        200   300/n_b   600
CIFAR-20        1×10⁻⁴      4096    256       50     1500          0   600/n_b   300
CIFAR-100       1×10⁻⁴      4096    128      100     1500        200   150/n_b   500
ImageNet-Dogs   1×10⁻⁴      4096    128      200     1024          0   300/n_b   400
TinyImageNet    1×10⁻⁴      4096    256      100     1500          0   200/n_b   400
ImageNet        1×10⁻⁴      4096   1024      100     2048       2000   800/n_b   400
MNIST           1×10⁻⁴      4096    128      100     1024        200   700/n_b   400
F-MNIST         1×10⁻⁴      1024    128      200     1024        400    50/n_b   100
Flowers         1×10⁻⁴      1024    256      200     1024        200   400/n_b   200

Running other algorithms. Since k-means (MacQueen, 1967), spectral clustering (Shi & Malik, 2000), EnSC (You et al., 2016a), SSCOMP (You et al., 2016b), and DSCNet (Ji et al., 2017) are based on transductive learning, we evaluate these models directly on the test set for all the experiments.

• For EnSC, we tune the hyper-parameter γ ∈ {1, 2, 5, 10, 20, 50, 100, 200, 400, 800, 1600, 3200} and the hyper-parameter τ in τ‖·‖₁ + ((1−τ)/2)‖·‖₂², which balances the ℓ₁ and ℓ₂ norms, over {0.9, 0.95, 1}, and report the best clustering result.
• For SSCOMP, we tune the hyper-parameter controlling the sparsity, k_max ∈ {1, 2, 5, 10, 20, 50, 100, 200}, and the residual ϵ ∈ {10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷}, and report the best clustering result.

• To apply DSCNet to the CLIP features, we use MLPs with two hidden layers to replace the convolutional encoder and decoder. The hidden dimension of the MLPs is set to 128. We tune the balancing hyper-parameters γ ∈ {1, 2, 3, 4} and β ∈ {1, 5, 25, 50, 75, 100}, and train the model for 100 epochs with learning rate η = 1×10⁻⁴ and batch size n_b equal to the number of samples in the test set.

• As the performance of CPP is evaluated by averaging the ACC and NMI metrics tested on each batch, we reproduce the results with their open-source implementation and report the results on the entire test set. The authors provide two implementations (see https://github.com/LeslieTrue/CPP/blob/main/main.py and https://github.com/LeslieTrue/CPP/blob/main/main_efficient.py), where one optimizes the cluster head and the feature head separately and the other shares weights between the two heads. In this paper, we test both cases and report the better results.

• For k-means and spectral clustering (including when spectral clustering is used as the final step in subspace clustering), we repeat the clustering 10 times with different random initializations (by setting n_init = 10 in scikit-learn) and report the best results.

• For SENet, SCAN, and EDESC, we adjust the hyper-parameters, repeat the experiments three times, and report only the best results.

B.2 EMPIRICAL VALIDATION OF THEORETICAL RESULTS

Empirical Validation on Theorem 1. To validate Theorem 1 empirically, we conduct experiments on CIFAR-100 with the same training configurations as described in Section B.1.2, but change the training period to 1000 epochs.
For each epoch, we compute G_b = Z_b^⊤Z_b and M_b = (I − C_b)(I − C_b)^⊤ from the learned representations Z_b and the self-expressive matrix C_b of a mini-batch of size n_b after the last iteration of the epoch. Then, to quantify the eigenspace alignment of G_b and M_b, we plot the alignment error, computed as the Frobenius norm of the commutator L := ‖G_bM_b − M_bG_b‖_F over the training period, in Figure 1a. We also show the standard deviation as a shaded region after repeating the experiments with 5 random seeds. As can be read, the alignment error decreases monotonically during training, implying that the eigenspaces are progressively aligned. Moreover, we find the eigenvectors {u_j} of M_b by eigenvalue decomposition, where u_j denotes the j-th eigenvector and the eigenvectors are sorted by their eigenvalues in ascending order, and calculate the normalized correlation coefficient, defined as ⟨u_j, G_bu_j/‖G_bu_j‖₂⟩. Note that when the eigenspace alignment holds, one can verify that

  ⟨u_j, G_bu_j/‖G_bu_j‖₂⟩ = 1 if λ_{G_b}^{(j)} ≠ 0, and 0 if λ_{G_b}^{(j)} = 0, for all j = 1, 2, …, n_b.   (45)

As shown in Figure 1b, the normalized correlation curves associated with the first d = 128 eigenvectors converge to 1, whereas the rest converge to 0, implying the progressive alignment between G_b and M_b.

Empirical Validation on Theorem 2. To verify Theorem 2, we conduct experiments on CIFAR-10 and CIFAR-100. The experimental setup is the same as described in Section B.1.2. In each epoch, we compute G_b = Z_b^⊤Z_b and M_b = (I − C_b)(I − C_b)^⊤ from Z_b and C_b of a mini-batch after the last iteration, and then compute the eigenvalues of G_b and M_b. We display the eigenvalue curves in Figures 1c and 1d, respectively. To enhance the clarity of the visualization, the eigenvalues of G_b and M_b are sorted in descending and ascending order, respectively. As can be observed, there are min{d, n_b} = 128 nonzero eigenvalues in G_b, approximately inversely proportional to the smallest 128 eigenvalues of M_b.
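The diagnostics used above can be sketched in a few lines of NumPy. The following is our illustrative toy with hypothetical values of n_b, d, α, and γ (not the paper's training code): we build M from a random C, construct a perfectly aligned G by solving Eq. (36) for ν with bisection and plugging into Eq. (35), and then check that the commutator norm vanishes and the normalized correlation of Eq. (45) equals 1 on the populated eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
nb, d, alpha, gamma = 12, 4, 0.85, 0.1   # hypothetical toy hyper-parameters
N = float(nb)

# Surrogate mini-batch quantities: M = (I - C)(I - C)^T for a random C.
C = 0.1 * rng.standard_normal((nb, nb))
M = (np.eye(nb) - C) @ (np.eye(nb) - C).T
lam_M, U = np.linalg.eigh(M)                      # ascending eigenvalues

def total(nu):                                    # left-hand side of Eq. (36)
    return np.maximum(0.0, 1.0 / (nu + gamma * lam_M[:d]) - 1.0 / alpha).sum()

# Bisection on nu: total() is decreasing, total(lo) is huge, total(alpha) = 0.
lo, hi = -gamma * lam_M[:d].min() + 1e-12, alpha
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if total(mid) > N else (lo, mid)
nu = 0.5 * (lo + hi)

# Optimal eigenvalues of G (Eq. 35), placed on the smallest eigenvalues of M.
lam_G = np.zeros(nb)
lam_G[:d] = np.maximum(0.0, 1.0 / (nu + gamma * lam_M[:d]) - 1.0 / alpha)
G = U @ np.diag(lam_G) @ U.T                      # rank-d, aligned with M

# Diagnostic 1: alignment error, the Frobenius norm of the commutator.
align_err = np.linalg.norm(G @ M - M @ G)

# Diagnostic 2: normalized correlation of Eq. (45) on the first d eigenvectors.
corr = np.array([U[:, j] @ G @ U[:, j] / np.linalg.norm(G @ U[:, j])
                 for j in range(d)])

assert align_err < 1e-8 and np.allclose(corr, 1.0)
assert np.isclose(lam_G.sum(), N)                 # trace constraint in Eq. (36)
```

By construction the eigenspaces coincide exactly, so the commutator norm is at machine precision; during actual training one would instead watch this quantity decrease, as in Figure 1a.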
These results empirically demonstrate that rank(Z⋆) = min{d, N} and λ_{G⋆}^{(i)} = 1/(γλ_M^{(i)} + ν⋆) − 1/α at the minimizers. Furthermore, to verify the sufficient condition for PRO-DSC to prevent feature-space collapse, we conduct experiments on CIFAR-10 and CIFAR-100 with varying α and γ, keeping all the other hyper-parameters consistent with Table B.2. As can be read from Figure 2, Theorem 2 is verified, since γ < (1/λ_max(M_b)) · α²/(α + min{d/N, 1}) yields satisfactory clustering accuracy (ACC%) and subspace-preserving representation error (SRE%). The satisfactory ACC and SRE confirm that PRO-DSC avoids catastrophic collapse when γ < (1/λ_max(M)) · α²/(α + min{d/N, 1}) holds. When γ ≥ (1/λ_max(M_b)) · α²/(α + min{d/N, 1}), PRO-DSC yields significantly worse ACC and SRE. There is a phase-transition phenomenon that corresponds to the sufficient condition to prevent collapse. (Footnote 12: In experiments, we estimate that λ_max(M_b) = 1, and thus the condition reduces to γ < α²/(α + min{d/N, 1}).)

Empirical Validation on Theorem 3. To intuitively visualize the structured representations learned by PRO-DSC, we visualize the Gram matrices |Z^⊤Z| and the Principal Component Analysis (PCA) results for both the CLIP features and the learned representations on CIFAR-10. The experimental setup is the same as described in Section B.1.2. The Gram matrix shows the similarities between representations within the same class (indicated by the in-block diagonal values) and across different classes (indicated by the off-block diagonal values).

Moreover, we display the PCA dimensionality-reduction results for the CLIP features and the learned representations of samples from three categories in CIFAR-10. We use PCA for dimensionality reduction since it performs a linear projection, which well preserves the underlying structure. As can be observed in Figure 3, the CLIP features from the three classes approximately lie on different subspaces.
Despite the structured nature of the features, the underlying subspaces are not orthogonal. In the Gram matrix of the CLIP features, the average similarity between features from different classes is greater than 0.6, resulting in an unclear block-diagonal structure. After training with PRO-DSC, the subspaces spanned by the learned representations become orthogonal. Additionally, the off-block diagonal values of the Gram matrix decrease significantly, revealing a clear block-diagonal structure. These visualization results qualitatively verify that PRO-DSC aligns the representations with a union of orthogonal subspaces.

B.3 MORE EXPERIMENTAL RESULTS

B.3.1 MORE RESULTS OF PRO-DSC ON SYNTHETIC DATA

To explore the learning ability of PRO-DSC, we conduct experiments on synthetic data with an additional subspace, as presented in Figure B.1.

In case 1, we sample 100 points from the Gaussian distribution x ∼ N([1/√2, 0, 1/√2]^⊤, 0.05·I₃) and 100 points from x ∼ N([−1/√2, 0, 1/√2]^⊤, 0.05·I₃), respectively. We train PRO-DSC with batch size n_b = 300 and learning rate η = 5×10⁻³ for 5000 epochs, and set γ = 1.3, β = 500, and α = 3/(0.1·300). We observe that PRO-DSC successfully eliminates the nonlinearity in the representations and maximally separates the different subspaces.

In case 2, we add a vertical curve

  x = [ cos((1/5)sin(5φ))·cosφ,  sin((1/5)cos(5φ)),  cos((1/5)sin(5φ))·sinφ ]^⊤ + ε,   (46)

from which 100 points are sampled, where ε ∼ N(0, 0.05·I₃). We use sin((1/5)cos(5φ)) to avoid overlap at the intersection of the two curves. We train PRO-DSC with batch size n_b = 200 and learning rate η = 5×10⁻³ for 8000 epochs, and set γ = 0.5, β = 500, and α = 3/(0.1·200). We observe that PRO-DSC has difficulty learning representations for data points located at the intersections of the subspaces; however, the data points away from the intersections are linearized well.
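For reference, the synthetic manifolds of Eqs. (44) and (46) can be generated with a short NumPy sketch. This is our illustration, not the authors' code; note that ε ∼ N(0, 0.05·I₃) corresponds to a per-coordinate standard deviation of √0.05:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
phi = rng.uniform(0.0, 2.0 * np.pi, n)
r = np.cos(np.sin(5.0 * phi) / 5.0)

# Curve of Eq. (44): [r*cos(phi), r*sin(phi), sin(sin(5*phi)/5)] + noise.
curve1 = np.stack([r * np.cos(phi),
                   r * np.sin(phi),
                   np.sin(np.sin(5.0 * phi) / 5.0)], axis=1)
curve1 += np.sqrt(0.05) * rng.standard_normal((n, 3))

# Vertical curve of Eq. (46); sin(cos(5*phi)/5) in the second coordinate
# avoids overlap at the intersection of the two curves.
curve2 = np.stack([r * np.cos(phi),
                   np.sin(np.cos(5.0 * phi) / 5.0),
                   r * np.sin(phi)], axis=1)
curve2 += np.sqrt(0.05) * rng.standard_normal((n, 3))

X = np.vstack([curve1, curve2])   # 200 x 3 input data matrix
print(X.shape)  # (200, 3)
```

The matrix X stacks both manifolds and would serve as the input batch for the toy experiments above.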
Figure B.1: Additional results on synthetic data. (a) Case 1: input data; (b) Case 1: learned Z; (c) Case 2: input data; (d) Case 2: learned Z.

(Footnote 13: The dimension of each subspace is much greater than one (see Figure B.4). The 1-dimensional subspaces observed in the PCA results are a consequence of dimensionality reduction. Footnote 14: Please refer to Figures B.3 and B.7 for the results on other datasets and the visualization of the bases of each subspace.)

B.3.2 EXPERIMENTS WITH BYOL PRE-TRAINING

To validate the effectiveness of PRO-DSC without using CLIP features, we conduct a fair comparison with existing deep clustering approaches. As in most deep clustering algorithms, we divide the training process into two steps. We begin by pre-training the parameters of the backbone with BYOL (Grill et al., 2020). Then, we take the parameters pre-trained in the first stage and fine-tune the model with the proposed PRO-DSC loss function. Specifically, we set the learning rate η = 0.05 and the batch size n_b = 256. The output feature dimension d is consistent with the setting used for training with the CLIP features. Following (Li et al., 2021; Huang et al., 2023), we use ResNet-18 as the backbone for the experiments on CIFAR-10 and CIFAR-20, use ResNet-34 as the backbone for the experiments on the other datasets, and use a convolution filter of size 3×3 with stride 1 to replace the first convolution filter. We apply the commonly used data augmentation methods to the input images, listed as follows:

  transforms.RandomResizedCrop(size=img_size, scale=(0.08, 1)),
  transforms.RandomHorizontalFlip(),
  transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
  transforms.RandomGrayscale(p=0.2),
  transforms.RandomApply([transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=1.0).
When re-implementing other baselines, we use the code provided by the respective authors and report the best performance after fine-tuning the hyper-parameters. We report the clustering results based on BYOL pre-training in Table B.3. As can be read from Table B.3, PRO-DSC outperforms all the deep clustering baselines, including CC (Li et al., 2021), GCC (Zhong et al., 2021), NNM (Dang et al., 2021), SCAN (Van Gansbeke et al., 2020), NMCE (Li et al., 2022), IMC-SwAV (Ntelemis et al., 2022), and MLC (Ding et al., 2023).

Table B.3: Clustering performance comparison with BYOL pre-training. The best results are in bold font and the second-best results are underlined. Performance marked with "*" is based on our re-implementation. Each dataset reports ACC / NMI.

Method      CIFAR-10        CIFAR-20        CIFAR-100       TinyImgNet-200   ImgNetDogs-15
k-means     22.9 / 8.7      13.0 / 8.4      9.2 / 23.0      2.5 / 6.5        10.5 / 5.5
SC          24.7 / 10.3     13.6 / 9.0      7.0 / 17.0      2.2 / 6.3        11.1 / 3.8
CC          79.0 / 70.5     42.9 / 43.1     26.9* / 48.1*   14.0 / 34.0      42.9 / 44.5
GCC         85.6 / 76.4     47.2 / 47.2     28.2* / 49.9*   13.8 / 34.7      52.6 / 49.0
NNM         84.3 / 74.8     47.7 / 48.4     41.2 / 55.1     - / -            31.1* / 34.3*
SCAN        88.3 / 79.7     50.7 / 48.6     34.3 / 55.7     - / -            29.6* / 30.3*
NMCE        89.1 / 81.2     53.1 / 52.4     40.0* / 53.9*   21.6* / 40.0*    39.8 / 39.3
IMC-SwAV    89.7 / 81.8     51.9 / 52.7     45.1 / 67.5     28.2 / 52.6      - / -
MLC         92.2 / 85.5     58.3 / 59.6     49.4 / 68.3     28.7* / 52.2*    71.0* / 68.3*
PRO-DSC     93.0±0.6 / 86.5±0.2   58.3±0.9 / 60.1±0.6   56.3±0.6 / 66.7±1.0   31.1±0.3 / 46.0±1.0   74.1±0.5 / 69.5±0.6

B.3.3 MORE EXPERIMENTS ON CLIP, DINO, AND MAE PRE-TRAINED FEATURES

Clustering on learned representations. To quantitatively validate the effectiveness of the structured representations learned by PRO-DSC, we illustrate the clustering accuracy of representations learned by various algorithms in Figure 6. Here, to compare with the representations learned by SEDSC methods, we additionally conduct experiments on DSCNet (Ji et al., 2017) and report the performance in Table B.4.
To apply DSCNet to the CLIP features, we use MLPs with two hidden layers to replace the stacked convolutional encoder and decoder. As described in Section B.1, we report the best clustering results after tuning the hyper-parameters. As analyzed in (Haeffele et al., 2021) and Section 2.1, DSCNet overly compresses the representations and yields unsatisfactory clustering results.

Out-of-domain datasets. We evaluate the capability to refine features by training PRO-DSC with pre-trained CLIP features on out-of-domain datasets, namely MNIST (Deng, 2012), Fashion-MNIST (Xiao et al., 2017), and Oxford Flowers (Nilsback & Zisserman, 2008). As shown in Table B.5, CPP (Chu et al., 2024) refines the CLIP features and yields better clustering performance compared with spectral clustering (Shi & Malik, 2000) and EnSC (You et al., 2016a). PRO-DSC further demonstrates the best performance on all benchmarks, validating its effectiveness in refining input features.

Table B.4: Clustering accuracy of CLIP features and learned representations. We apply k-means, spectral clustering (SC), and EnSC to cluster the representations.

           CIFAR-10               CIFAR-100              CIFAR-20               TinyImgNet-200
           k-means  SC    EnSC    k-means  SC    EnSC    k-means  SC    EnSC    k-means  SC    EnSC
CLIP       74.7     70.2  95.4    52.8     66.4  67.0    46.9     49.2  60.8    54.1     62.8  64.5
SEDSC      16.4     18.9  16.9    5.4      4.9   5.3     11.7     10.6  12.8    5.7      3.9   7.2
CPP        71.3     70.3  95.6    75.3     75.0  77.5    55.5     43.6  58.3    62.1     58.0  67.0
PRO-DSC    93.4     92.1  95.5    76.5     75.2  77.6    66.0     59.7  60.0    67.6     67.0  69.5

Table B.5: Experiments on out-of-domain datasets.
Method                                     MNIST          F-MNIST        Flowers
                                           ACC    NMI     ACC    NMI     ACC    NMI
Spectral Clustering (Shi & Malik, 2000)    74.5   67.0    64.3   56.8    85.6   94.6
EnSC (You et al., 2016a)                   91.0   85.3    69.1   65.1    90.0   95.9
CPP (Chu et al., 2024)                     95.7   90.4    70.9   68.8    91.3   96.4
PRO-DSC                                    96.1   90.9    71.3   70.3    92.0   97.4

Experiments on the block-diagonal regularizer with different k. To test the robustness of the block-diagonal regularizer ‖A‖_κ to different k, we vary k and report the clustering performance in Table B.6. As illustrated, k does not necessarily need to equal the number of clusters; there exists an interval within which the regularizer works effectively.

Table B.6: Clustering performance with different k in the block-diagonal regularizer.

CIFAR-10:   k     2     5     10    15    20    25    30
            ACC   97.2  97.2  97.4  96.3  96.3  95.4  94.0
            NMI   93.2  93.2  93.5  92.0  92.0  90.7  88.6

CIFAR-100:  k     25    50    75    100   125   150   200
            ACC   74.3  76.7  78.1  78.2  78.9  76.4  74.8
            NMI   80.9  82.3  83.2  82.9  83.2  82.2  81.5

If k is significantly smaller than the number of clusters, however, the effect of the block-diagonal regularizer is subtle, so the performance of PRO-DSC is similar to that of PRO-DSC without a regularizer (see the ablation studies in Section 3). On the contrary, if k is significantly larger than the number of clusters, the affinity matrix is over-segmented, which has a negative impact on the subsequent clustering performance.

Clustering on ImageNet-1k with DINO and MAE. To test the performance of PRO-DSC with pre-trained features other than CLIP (Radford et al., 2021), we further conduct experiments on ImageNet-1k (Deng et al., 2009) with features pre-trained by DINO (Caron et al., 2021) and MAE (He et al., 2022) (see Table B.7). DINO and MAE are pre-trained on ImageNet-1k without leveraging external training data; thus their performance with PRO-DSC is lower than with CLIP.
Similar to the observations in CPP (Chu et al., 2024), DINO initializes PRO-DSC well, whereas MAE fails; this is attributed to the fact that features from MAE favor fine-tuning with labels and are less suitable for learning inter-cluster discriminative representations (Oquab et al., 2024). We further extract features from the validation set of ImageNet-1k and visualize them with t-SNE (Van der Maaten & Hinton, 2008) to validate this hypothesis (see Figure B.2).

Table B.7: Clustering performance of PRO-DSC based on MAE, DINO, and CLIP pre-trained features on ImageNet-1k. Each method reports ACC / NMI.

Method                         Backbone    PRO-DSC          k-means
MAE (He et al., 2022)          ViT-L/16    9.0 / 49.1       9.4 / 49.3
DINO (Caron et al., 2021)      ViT-B/16    57.3 / 79.3      52.2 / 79.2
DINO (Caron et al., 2021)      ViT-B/8     59.7 / 80.8      54.6 / 80.5
CLIP (Radford et al., 2021)    ViT-L/14    65.1 / 83.6      52.5 / 79.7

Figure B.2: The t-SNE visualization of CLIP and MAE features on the validation set of ImageNet-1k. (a) CLIP; (b) MAE.

B.3.4 EXPERIMENTS WITHOUT USING PRE-TRAINED MODELS

Experiments on Reuters and UCI HAR. During the rebuttal, we conducted extra experiments on the Reuters and UCI HAR datasets. Reuters-10k consists of four text classes, containing 10,000 samples of dimension 2,000. UCI HAR is a time-series dataset consisting of six classes, with 10,299 samples of dimension 561. We take EDESC (Cai et al., 2022) as the baseline method for deep subspace clustering on Reuters-10k, and N2D (McConville et al., 2021) and FCMI (Zeng et al., 2023) as the baseline methods for UCI HAR, where the results are directly cited from the respective papers. We conducted experiments with PRO-DSC on Reuters and UCI HAR following the same data-processing protocol as the baseline methods. We train and test PRO-DSC on the entire dataset and report the results over 10 trials.
Experimental results are provided in Table B.8. The hyper-parameters used for PRO-DSC are listed in Table B.9.

Table B.8: Experimental results on the Reuters and UCI HAR datasets over 10 trials. The results of the other methods are cited from the respective papers.

Dataset                            REUTERS-10k          UCI HAR
                                   ACC       NMI        ACC       NMI
k-means (MacQueen, 1967)           52.4      31.2       59.9      58.8
SC (Shi & Malik, 2000)             40.2      37.5       53.8      74.1
AE (Bengio et al., 2006)           59.7      32.3       66.3      60.7
VAE (Kingma & Welling, 2014)       62.5      32.9       -         -
JULE (Yang et al., 2016)           62.6      40.5       -         -
DEC (Xie et al., 2016)             75.6      68.3       57.1      65.5
DSEC (Chang et al., 2018)          78.3      70.8       -         -
EDESC (Cai et al., 2022)           82.5      61.1       -         -
DFDC (Zhang & Davidson, 2021)      -         -          86.2      84.5
N2D (McConville et al., 2021)      -         -          82.8      71.7
FCMI (Zeng et al., 2023)           -         -          88.2      80.7
PRO-DSC                            85.7±1.3  64.6±1.3   87.1±0.4  80.9±1.2

Table B.9: Configuration of hyper-parameters for the experiments on Reuters, UCI HAR, EYale-B, ORL, and COIL-100.

Dataset        η       d_pre     d     #epochs    n_b    #warm-up    γ     β
REUTERS-10k    10⁻⁴     1024    128        100    1024         50    50   200
UCI HAR        10⁻⁴     1024    128        100    2048         20   100   300
EYale-B        10⁻⁴     1080    256      10000    2432        100   200    50
ORL            10⁻⁴       80     64       5000     400        100    75    10
COIL-100       10⁻⁴    12800    100      10000    7200        100   200   100

Comparison to AGCSC and ARSSC on Extended Yale B, ORL, and COIL-100. During the rebuttal, we conducted further experiments on two state-of-the-art subspace clustering methods, AGCSC (Wei et al., 2023) and ARSSC (Wang et al., 2023a). Since neither method can handle the datasets used for evaluating PRO-DSC, we conducted experiments on the Extended Yale B (EYaleB), ORL, and COIL-100 datasets. We set the architecture of the pre-feature layer in PRO-DSC to be the same as the encoder of DSCNet (Ji et al., 2017). The hyper-parameter configuration for training PRO-DSC is summarized in Table B.9. We repeated the experiments for 10 trials and report the average with standard deviation in Table B.10.
As baseline methods, we use EnSC (You et al., 2016a), SSCOMP (You et al., 2016b), S3COMP (Chen et al., 2020b), DSCNet, DSSC (Lim et al., 2020), and DELVE (Zhao et al., 2024). The results of these methods, except for S3COMP and DELVE, are cited directly from DSSC (Lim et al., 2020); the results of S3COMP and DELVE are cited from their own papers.

• Comparison to AGCSC. Our method surpasses AGCSC on the Extended Yale B dataset and achieves comparable results on the ORL dataset. However, AGCSC cannot yield a result on COIL-100 within 24 hours; we therefore did not report its results on COIL-100 and marked it as Out of Time (OOT).

• Comparison to ARSSC. ARSSC employs three different non-convex regularizers: the ℓγ-norm penalty (LP), the Log-Sum Penalty (LSP), and the Minimax Concave Penalty (MCP). While ARSSC-MCP performs the best on Extended Yale B, PRO-DSC outperforms ARSSC-MCP on ORL. AGCSC performs the best on ORL, but yields inferior results on Extended Yale B. PRO-DSC achieves the second-best results on Extended Yale B and ORL, and the best results on COIL-100. Since we have not found open-source code for ARSSC, we are unable to obtain its results on COIL-100. This comparison also confirms the scalability of PRO-DSC, which is due to the re-parametrization (similar to SENet).

Table B.10: Experiments on Extended Yale B, ORL, and COIL-100.
Method                           EYale-B             ORL                 COIL-100
                                 ACC      NMI        ACC      NMI        ACC      NMI
EnSC                             65.2     73.4       77.4     90.3       68.0     90.1
SSCOMP                           78.0     84.4       66.4     83.2       31.3     58.8
S3COMP-C (Chen et al., 2020b)    87.4     -          -        -          78.9     -
DSCNet                           69.1     74.6       75.8     87.8       49.3     75.2
DELVE (Zhao et al., 2024)        89.8     90.1       -        -          79.0     93.9
J-DSSC (Lim et al., 2020)        92.4     95.2       78.5     90.6       79.6     94.3
A-DSSC (Lim et al., 2020)        91.7     94.7       79.0     91.0       82.4     94.6
AGCSC (Wei et al., 2023)         92.3     94.0       86.3     92.8       OOT      OOT
ARSSC-LP (Wang et al., 2023a)    95.7     -          75.5     -          -        -
ARSSC-LSP (Wang et al., 2023a)   95.9     -          71.3     -          -        -
ARSSC-MCP (Wang et al., 2023a)   99.3     -          72.0     -          -        -
PRO-DSC                          96.0±0.3 95.7±0.8   83.2±2.2 92.7±0.6   82.8±0.9 95.0±0.6

B.4 MORE VISUALIZATION RESULTS
Gram matrices and PCA visualizations. To qualitatively validate that PRO-DSC learns representations aligning with a union-of-orthogonal-subspaces distribution, we visualize the Gram matrices and PCA dimension reduction results of the CLIP features and the learned representations from PRO-DSC for each dataset. As shown in Figure B.3, the off-block-diagonal values decrease significantly, implying orthogonality between representations from different classes. The orthogonality between subspaces can also be observed from the PCA dimension reduction results.
Singular values visualization. To show the intrinsic dimension of the CLIP features and the representations of PRO-DSC, we plot the singular values of both in Figure B.4. Specifically, the singular values of features from all the samples are illustrated on the left, and the singular values of features within each class are illustrated in the middle and on the right. As can be seen, the singular values of PRO-DSC decrease much more slowly than those of CLIP, implying that the features of PRO-DSC enjoy a higher intrinsic dimension and a more isotropic structure in the ambient space.
Learning curves.
We plot the learning curves with respect to loss values and performance of PRO-DSC on CIFAR-100, CIFAR-20 and ImageNet-1k in Figure B.5a, Figure B.5b and Figure B.5c, respectively. Recall that L1 := -(1/2) log det(I + α Z_Θ^⊤ Z_Θ), L2 := (1/2) ∥Z_Θ - Z_Θ C_Ψ∥_F^2, and L3 := ∥A_Ψ∥_κ. Since L1 is the only loss function used in the warm-up stage, we plot all the curves starting from the iteration at which warm-up ends. As illustrated, the clustering performance of PRO-DSC steadily increases as the loss values gradually decrease, which shows the effectiveness of the proposed loss functions in PRO-DSC.
t-SNE visualization of learned representations. We visualize the CLIP features and the cluster representations learned by PRO-DSC using t-SNE (Van der Maaten & Hinton, 2008) in Figure B.6. As illustrated, the learned cluster representations are significantly more compact than the CLIP features, which contributes to the improved clustering performance.
Subspace visualization. We visualize the principal components of the subspaces learned by PRO-DSC in Figure B.7. For each cluster in the dataset, we apply Principal Component Analysis (PCA) to the learned representations and select the top eight principal components to represent the learned subspaces. Then, for each principal component, we display the eight images whose representations are most closely aligned with it. Interestingly, we can observe specific semantic meanings in the principal components learned by PRO-DSC. For instance, the third row of Figure B.7a consists of stealth fighters, whereas the fifth row shows airliners. The second row of Figure B.7c consists of birds standing and resting, while the sixth row shows flying eagles. While Figure B.7j consists of all kinds of trucks, the first row shows fire trucks.
C LIMITATIONS AND FAILURE CASES
Limitations: In this paper, we explore an effective framework for deep subspace clustering with theoretical justification.
However, it is not clear how to develop a geometric guarantee that our PRO-DSC framework yields a correct subspace-preserving solution. Moreover, PRO-DSC is an unsupervised learning framework; we leave the extension to the semi-supervised setting as future work.
Failure Cases: In this paper, we evaluate our PRO-DSC framework on four scenarios of synthetic data (Fig. 5 and B.1), six benchmark datasets with CLIP features (Table 1), five benchmark datasets with BYOL pre-trained features (Table B.3), and three out-of-domain datasets (Table B.5), using four different regularization terms (Table 3), different feature extractors (Table B.7) and varying hyper-parameters (Fig. 7 and Table B.6). We also conduct experiments on two face image datasets (Table B.10) and on a text and a temporal dataset (Table B.8). However, as demonstrated in Fig. 1, our PRO-DSC will fail if the sufficient condition to prevent catastrophic collapse is not satisfied, i.e., when improper hyper-parameters γ and α are used.
Extensibility: As a general framework for self-expressive-model-based deep subspace clustering, our PRO-DSC is principled, scalable and flexible enough to admit various extensions. For example, rather than using log det(·), there are other ways to address the feature collapse issue, e.g., the nuclear norm. In addition, it is also worthwhile to incorporate supervision from pseudo-labels, e.g., (Huang et al., 2023; Jia et al., 2025; Li et al., 2017), to further improve the performance of our PRO-DSC.
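The three loss terms L1, L2 and L3 recalled in the learning-curve discussion above can be written out in a few lines. This is a sketch of the formulas only: the shape convention (columns of Z as samples) and the use of the nuclear norm as a stand-in for the ∥·∥_κ regularizer are my assumptions, not the paper's released code.

```python
import numpy as np

def loss_expansion(Z: np.ndarray, alpha: float) -> float:
    """L1 = -(1/2) * log det(I + alpha * Z^T Z), the collapse-preventing term."""
    n = Z.shape[1]
    # slogdet is numerically stabler than log(det(...)) for large matrices.
    _, logdet = np.linalg.slogdet(np.eye(n) + alpha * (Z.T @ Z))
    return -0.5 * logdet

def loss_self_expression(Z: np.ndarray, C: np.ndarray) -> float:
    """L2 = (1/2) * ||Z - Z C||_F^2, the self-expressive residual."""
    return 0.5 * np.linalg.norm(Z - Z @ C, "fro") ** 2

def loss_regularizer(A: np.ndarray) -> float:
    """L3 = ||A||_kappa; the nuclear norm is used here purely for illustration."""
    return float(np.linalg.svd(A, compute_uv=False).sum())
```

With C equal to the identity the self-expressive residual vanishes, and L1 becomes more negative as the representations spread out, matching the intuition that it rewards expansion.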
[Figure B.3: Visualization of the union-of-orthogonal-subspaces structure of the learned representations via Gram matrix and PCA dimension reduction on three categories per dataset: (a) CIFAR-20 (Aquatic, Fish, Furniture), (b) CIFAR-100 (Apple, Aquarium, Baby), (c) TinyImageNet-200 (Goldfish, Salamander, Bullfrog), (d) ImageNet-Dogs-15 (Maltese, Pekinese, Toy Terrier), (e) ImageNet-1k (Tench, Cliff Dwelling, Tissue). Left: |X⊤X|. Mid-left: |Z⊤Z|. Mid-right: X(3) via PCA. Right: Z(3) via PCA.]
[Figure B.4 panels: (a) CIFAR-100, (b) TinyImageNet-200, (c) ImageNet-1k. Each panel plots normalized singular values against components for CLIP and PRO-DSC: singular values over all samples, and singular values of the CLIP and PRO-DSC features from each class (Class 1-10).]
Figure B.4: Singular values of features from all samples (left) and
features from each class (mid and right). For better clarity, we plot the singular values for the first ten classes.
[Figure B.5: The learning curves w.r.t. loss values and evaluation performance of PRO-DSC on (a) CIFAR-20, (b) CIFAR-100 and (c) ImageNet-1k: the three loss values over iterations and ACC/NMI over epochs.]
[Figure B.6: t-SNE visualization of CLIP features and PRO-DSC's learned representations on CIFAR-10 and CIFAR-100: (a) CLIP CIFAR-10, (b) PRO-DSC CIFAR-10, (c) CLIP CIFAR-100, (d) PRO-DSC CIFAR-100.]
[Figure B.7: Visualization of the principal components in the CIFAR-10 dataset, one panel per cluster (Clusters 1-10). For each cluster, we display the images most similar to its principal components.] | 6 | 1 | The proposed framework is based on deep learning techniques, which typically require significant computational resources.
The paper mentions extensive experiments on multiple datasets including CIFAR-10, CIFAR-20, and CIFAR-100, which are standard benchmarks in the field, often used in deep learning model training. Given the complexity of the proposed model with both learned representations and self-expressive coefficients, an estimate of around 6 hours of training seems reasonable for a setup on a single high-end GPU (like an NVIDIA V100 or A100). This estimate considers that it is a new model architecture and the potential for some optimizations through mini-batching and differentiable programming. Given that extensive training is conducted as per ICLR standards, it is also reasonable to assume that with a single modern GPU, the training can be completed within 8 hours based on the techniques described. | yes | Yes | CV | Exploring a Principled Framework for Deep Subspace Clustering | 2025-03-21T00:00:00.000Z | [https://github.com/mengxianghan123/PRO-DSC] | 1 | Dataset found at: [https://drive.google.com/drive/folders/1C4qlqYOW4-YulIwgkNfqMM7dZ2O5-BK_], [https://drive.google.com/drive/folders/1L9jH8zRF3To6Hb_B0UZ6PbknhgusWm5_] | 20 | https://colab.research.google.com/drive/1D4PwvmROZazdEKuhZj7QkfBKOqY9Jb0r?usp=sharing | YES! SUCCESSFULLY RUN | All things fine! successfully run |
FB15k-237 | DaBR | [] | Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04076v2 | [
"https://github.com/llqy123/dabr"
] | {'MRR': '0.373', 'Hits@10': '0.572', 'Hits@3': '0.410', 'Hits@1': '0.247', 'MR': '83'} | [
"Hits@1",
"Hits@3",
"Hits@10",
"MRR",
"MR",
"training time (s)",
"Hit@1",
"Hit@10"
] | Given the following paper and codebase:
Paper: Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
Codebase: https://github.com/llqy123/dabr
Improve the DaBR model on the FB15k-237 dataset. The result
should improve on the following metrics: {'MRR': '0.373', 'Hits@10': '0.572', 'Hits@3': '0.410', 'Hits@1': '0.247', 'MR': '83'}. You must use only the codebase provided.
Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
Weihua Wang 1,2,3,*, Qiuyu Liang 1, Feilong Bao 1,2,3, Guanglai Gao 1,2,3
1 College of Computer Science, Inner Mongolia University, Hohhot, China
2 National and Local Joint Engineering Research Center of Intelligent Information Processing Technology for Mongolian, Hohhot, China
3 Inner Mongolia Key Laboratory of Multilingual Artificial Intelligence Technology, Hohhot, China
Abstract
A quaternion contains one real part and three imaginary parts, providing a more expressive hypercomplex space for learning knowledge graphs. Existing quaternion embedding models measure the plausibility of a triplet through either semantic matching or geometric distance scoring functions. However, it appears that semantic matching diminishes the separability of entities, while the distance scoring function weakens the semantics of entities. To address this issue, we propose a novel quaternion knowledge graph embedding model. Our model combines semantic matching with the geometric distance of entities to better measure the plausibility of triplets. Specifically, in the quaternion space, we perform a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features. We then utilize distance-adaptive translations to learn the geometric distance between entities. Furthermore, we provide mathematical proofs to demonstrate that our model can handle complex logical relationships. Extensive experimental results and analyses show our model significantly outperforms previous models on well-known knowledge graph completion benchmark datasets. Our code is available at https://github.com/llqy123/DaBR.
1 Introduction
Knowledge graphs (KGs) (Liang et al., 2024a) are powerful tools for representing valid factual triplets by capturing entities and their relationships in a graphical format.
Owing to the well-structured nature of graphs, KGs are often used for various Natural Language Processing tasks, such as question answering (Mendes et al., 2024; Faldu et al., 2024), entity alignment (Wang et al., 2024a,b), KG-based recommendation (Liang et al., 2024c) and KG-enhanced Large Language Models (Wen et al., 2024).
*Corresponding Author. Email: wangwh@imu.edu.cn.
[Figure 1: Visualization of the embeddings of the (a) QuatE and (b) TransERR models after 100 epochs of training. Points in the same color represent tail entities that share the same (h, r) (query) context.]
However, KGs are usually incomplete, and this incompleteness limits their application. As an effective tool for predicting missing facts, knowledge graph completion (KGC) has received considerable attention from researchers. Typically, researchers transform KGC tasks into knowledge graph embeddings (KGEs). KGE refers to learning representations of entities and relations in a low-dimensional space while preserving the graph's inherent structure and semantic properties. In this representation space, a scoring function can be defined to measure the plausibility of each triplet, where valid triplets should receive higher scores than invalid ones.
A quaternion contains one real part and three imaginary parts, providing a more expressive space for learning embeddings of entities and relations. Rotation in quaternion space is often used to model KGs. For example, QuatE (Zhang et al., 2019) learns semantic information about entities by treating relations as rotations from head entities to tail entities. TransERR (Li et al., 2024) encodes the KG by rotating the head and tail entities with their corresponding unit quaternions. These models use either semantic matching or distance scoring functions, respectively, to measure the plausibility of a triplet.
However, it appears that semantic matching diminishes the separability of entities, while the distance scoring function weakens the semantics of entities. For example, we visualized the results for the same query in Figure 1. Specifically, as shown in Figure 1, we observe that the QuatE model overlaps some queries when using semantic matching as a scoring function. The entities of TransERR, which uses the distance scoring function, are also indistinguishable for each query.
To address this issue, we propose a Distance-adaptive quaternion knowledge graph embedding with Bidirectional Rotation model, named DaBR. Our model combines semantic matching with the geometric distance of entities to better measure the plausibility of triplets. Specifically, in the quaternion space, we perform a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features. This process is called bidirectional rotation. We conducted extensive experiments on multiple well-known benchmark datasets for the knowledge graph completion task. The experimental results and analyses demonstrate the effectiveness and robustness of our model. Our contributions are summarized as follows:
•We propose performing a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features.
•We propose learning the embedding distance between entities by incorporating distance-adaptive translations.
•We provide mathematical proofs to demonstrate that our model can handle rich logical relationships.
•Extensive experiments show that our model provides consistent and significant improvements over previous models on most metrics.
2 Related Work
For KGE models, the design of the scoring function directly affects their performance and effectiveness.
Based on the scoring functions used in previous models, KGE scoring functions can mainly be categorized as semantic matching-based or geometric distance-based.
1For more information about queries, see Section 6.4.
Semantic matching. Semantic matching scoring functions capture the interactions between entities and relations through inner products on embedding vectors. The hypothesis is that entities connected by relations are close to each other in the semantic space. For example, QuatE (Zhang et al., 2019) obtains semantic information about entities through the Hamilton rotation of the head entity by the relation in quaternion space. DualE (Cao et al., 2021) further extends QuatE to model knowledge graphs in dual quaternion space. QuatRE (Nguyen et al., 2022) associates each relation with two relation-aware rotations, which are used to rotate the quaternion embeddings of the head and tail entities, respectively. ConvQE (Liang et al., 2024d) investigates the potential of quaternion convolution in knowledge graph embedding.
A common feature of these models is the computation of the inner product between the head entity and the tail entity after a relation transformation. However, these models overlook the geometric distance properties between entities in the knowledge graph, which leads to distorted embeddings of the learned entities.
Geometric distance. Geometric distance scoring functions assess the plausibility of triplets by calculating the distances between embedding vectors in the representation space. The goal of this scoring function is to keep the head/tail entity vector close to the tail/head entity vector after the latter is transformed through the relation vector. For example, TransE (Bordes et al., 2013), considered the first model to employ a geometric distance scoring function, assumes that triplets (h, r, t) in knowledge graphs should satisfy the expression h + r ≈ t.
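The translational assumption h + r ≈ t can be sketched as a scoring function in a few lines; the embedding values and the choice of the L2 norm below are illustrative, not tied to any particular implementation:

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray, p: int = 2) -> float:
    """Geometric-distance score -||h + r - t||_p: the closer h + r lands
    to t, the higher (less negative) the score of the triplet (h, r, t)."""
    return -float(np.linalg.norm(h + r - t, ord=p))

# A triplet that satisfies h + r = t exactly attains the maximal score 0.
h = np.array([0.2, -0.1, 0.4])
r = np.array([0.3, 0.5, -0.2])
t = h + r
```

Ranking candidate tails by this score is how such models are evaluated: the true tail should receive a strictly higher score than perturbed ones.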
However, TransE struggles with more complex relation types, such as one-to-many (1-to-N), many-to-one (N-to-1) and many-to-many (N-to-N). To address this limitation, several models using distance-based scoring functions have been proposed. For example, Rotate3D (Gao et al., 2020) maps entities to a 3D space, defining the relation as a rotation from the head entity to the tail entity. Trans4E (Nayyeri et al., 2021) performs rotations and translations in a quaternion space. RotateCT (Dong et al., 2022) transforms entity coordinates and represents each relation as a rotation in complex space. Rotate4D (Le et al., 2023) employs two distinct rotational transformations to align the head embedding with the tail embedding. DCNE (Dong et al., 2024) maps entities to the dual complex number space, using rotations in the 2D space through the multiplication of dual complex numbers to represent relations. TransERR (Li et al., 2024) encodes knowledge graphs by rotating the head and tail entities with their corresponding unit quaternions.

A common feature of these models is that the plausibility of the triplets is evaluated by calculating the distance between the head entity and the tail entity after transformation. However, these models do not consider information about entities within the semantic space, leading to performance degradation.

3 Preliminaries

This section begins with a definition of the knowledge graph completion task, followed by a brief background on quaternion algebra.

3.1 Knowledge Graph Completion

Knowledge graph completion is the task of predicting missing elements in a triplet (h, r, t). This task can be broken down into three sub-tasks: predicting the head entity (?, r, t), predicting the relation (h, ?, t), and predicting the tail entity (h, r, ?). Following previous research, our work focuses on predicting the head (?, r, t) and tail (h, r, ?) entities, because relation information is needed during training.
3.2 Quaternion Algebra

The quaternion extends the complex number system to four dimensions. In n-dimensional quaternion space Qⁿ, a quaternion p ∈ Qⁿ consists of one real component and three imaginary components. It can be formalized as p = a + bi + cj + dk, where a, b, c, d ∈ Rⁿ and i, j, k are imaginary units. The imaginary part satisfies Hamilton's rules (Hamilton, 1844): i² = j² = k² = ijk = −1.

Addition. Given two quaternions p = a + bi + cj + dk and q = e + fi + gj + hk ∈ Qⁿ, quaternion addition is defined as:

p + q = (a + e) + (b + f)i + (c + g)j + (d + h)k.  (1)

Norm. The norm of a quaternion p ∈ Qⁿ is defined as:

∥p∥ = √(a² + b² + c² + d²).  (2)

Inverse. The inverse of a quaternion p ∈ Qⁿ is defined as:

p⁻¹ = p̄ / ∥p∥², with p̄ = a − bi − cj − dk,  (3)

where p̄ ∈ Qⁿ is the conjugate of p ∈ Qⁿ.

Hamilton product. Given two quaternions p and q, the quaternion rotation of these two quaternions can be performed by the Hamilton product:

p ⊗ q = (a◦e − b◦f − c◦g − d◦h)
      + (b◦e + a◦f + c◦h − d◦g)i
      + (c◦e + a◦g + d◦f − b◦h)j
      + (d◦e + a◦h + b◦g − c◦f)k,  (4)

where ◦ denotes the element-wise product.

4 Methodology

In this section, we describe our model in detail, which consists of two main parts:

•Bidirectional rotation: performing a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features.
•Distance-adaptation: incorporating a distance-adaptive translation to learn the geometric distance between entity embeddings.

4.1 Symbol Description

A knowledge graph G = {(h, r, t)} ⊆ E × R × E is a collection of triplets, where E and R are the entity set and relation set. |E| and |R| represent the number of entities and relations, respectively.
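The quaternion operations of Section 3.2 (Equations 1–4) can be sketched in a few lines; each coefficient is a real number (or vector), so ◦ reduces to ordinary element-wise multiplication. This is an illustration of the algebra, not the released implementation:

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product p ⊗ q (Eq. 4); p, q are (a, b, c, d) coefficient tuples."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            b*e + a*f + c*h - d*g,
            c*e + a*g + d*f - b*h,
            d*e + a*h + b*g - c*f)

def quat_norm(p):
    """||p|| from Eq. (2)."""
    a, b, c, d = p
    return np.sqrt(a*a + b*b + c*c + d*d)

def quat_conjugate(p):
    """p̄ = a - bi - cj - dk; for a unit quaternion this equals p^{-1} (Eq. 3)."""
    a, b, c, d = p
    return (a, -b, -c, -d)

# Hamilton's rule i ⊗ j = k
i = (0., 1., 0., 0.)
j = (0., 0., 1., 0.)
print(hamilton_product(i, j))  # -> (0.0, 0.0, 0.0, 1.0), i.e. k
```

The same functions work unchanged when a, b, c, d are NumPy vectors, which is exactly the setting of the embedding model below.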
Given a triplet (h, r, t), the embeddings of the head entity h, relation r and tail entity t can be represented by quaternions:

h = a_h + b_h i + c_h j + d_h k
r = p + qi + uj + vk
t = a_t + b_t i + c_t j + d_t k  (5)

4.2 Part One: Bidirectional Rotation

In Figure 2, we show the differences between our proposed bidirectional rotation and previous methods when modeling entity semantics. Specifically, QuatE (Figure 2(a)) performs a right rotation on the head entity. QuatRE (Figure 2(b)) performs two right rotations on the head entity and a right rotation on the tail entity. Our model (Figure 2(c)) performs a right rotation on the head entity and a reverse rotation on the tail entity.

[Figure 2: The comparison of modeling entity semantics of (a) QuatE, (b) QuatRE and (c) DaBR (ours). These models learn the embeddings of knowledge graphs in quaternion spaces. ⊗ denotes the Hamilton product (Equation 4).]

We first normalize the relation quaternion r to a unit quaternion to eliminate the scaling effect, by dividing by its norm (Equation 2):

r = r / ∥r∥ = (p + qi + uj + vk) / √(p² + q² + u² + v²).  (6)

Then, the head entity h is right rotated using the relation r, i.e., the entity vector and the relation vector undergo a Hamilton product (Equation 4):

h′ = h ⊗ r.  (7)

Similarly, the inverse of the relation unit quaternion r is used to make a reverse rotation of the tail entity t:

t′ = t ⊗ r⁻¹.  (8)

Since r is a unit quaternion, we have

t′ = t ⊗ r⁻¹ = t ⊗ r̄,  (9)

where r̄ is the conjugate of r.

Therefore, the scoring function s(h, r, t) for the bidirectional rotation modeling entity semantics is defined by:

s(h, r, t) = h′ · t′ = (h ⊗ r) · (t ⊗ r̄).  (10)

4.3 Part Two: Distance-Adaptation

As shown in Figure 2, the previous QuatE (Figure 2(a)) and QuatRE (Figure 2(b)) can only learn the semantic information of an entity but ignore the geometric distance attribute of an entity. Our DaBR effectively addresses this limitation by adding a distance-adaptation (Figure 2(c)).
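Putting Equations 6–10 together, the semantic part of the score can be sketched as follows, with quaternion coefficients stored as a (4, n) NumPy array. This is a hedged illustration of the formulas, not the authors' training code:

```python
import numpy as np

def hamilton(p, q):
    # Hamilton product (Eq. 4); p, q are (4, n) arrays holding (a, b, c, d) rows
    a, b, c, d = p
    e, f, g, h = q
    return np.stack([a*e - b*f - c*g - d*h,
                     b*e + a*f + c*h - d*g,
                     c*e + a*g + d*f - b*h,
                     d*e + a*h + b*g - c*f])

def bidirectional_score(h, r, t):
    """s(h, r, t) = (h ⊗ r) · (t ⊗ r̄), Eqs. (6)-(10)."""
    r = r / np.linalg.norm(r, axis=0, keepdims=True)     # unit quaternion, Eq. (6)
    r_conj = r * np.array([[1.], [-1.], [-1.], [-1.]])   # r̄ = r^{-1} for unit r, Eq. (9)
    return float(np.sum(hamilton(h, r) * hamilton(t, r_conj)))  # inner product, Eq. (10)

# With the identity relation r = 1 + 0i + 0j + 0k, the score reduces to <h, t>
h = np.array([[1.], [2.], [3.], [4.]])
r_id = np.array([[1.], [0.], [0.], [0.]])
print(bidirectional_score(h, r_id, h))  # -> 30.0
```

Because Equation 6 normalizes r inside the score, scaling the relation embedding leaves s(h, r, t) unchanged, which is the stated purpose of the normalization step.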
Therefore, to model the geometric distance information, we initialize a distance-adaptive relation embedding r_d = p_d + q_d i + u_d j + v_d k. Finally, the geometric distance part scoring function d(h, r, t) is defined as:

d(h, r, t) = ∥h + r_d − t∥₁,  (11)

where ∥·∥₁ represents the ℓ1 norm. Despite its simplicity, we find that the proposed method is effective enough in providing distance information for our model.

4.4 Scoring Function

After obtaining the scoring functions for modeling entity semantics and entity geometric distances, we fuse them into a new scoring function for model training:

φ(h, r, t) = s(h, r, t) + λ d(h, r, t)
           = (h ⊗ r) · (t ⊗ r̄) + λ∥h + r_d − t∥₁,  (12)

where s(h, r, t) represents the semantic matching scoring function, d(h, r, t) represents the geometric distance scoring function, and λ ∈ R is an adaptive parameter learned by our model.

4.5 Loss Function

Following Trouillon et al. (2016), we formulate the task as a classification problem, and the model parameters are learned by minimizing the following regularized logistic loss:

L = Σ_{(h,r,t) ∈ Ω ∪ Ω⁻} log(1 + exp(−Y_{hrt} φ(h, r, t))) + η₁∥E∥₂² + η₂∥R∥₂²,  (13)

where E and R denote the embeddings of all entities and relations. Here we use the ℓ2 norm with regularization rates η₁ and η₂ to regularize E and R, respectively. Ω⁻ is sampled from the unobserved set Ω′ using uniform sampling. Y_{hrt} ∈ {−1, 1} represents the corresponding label of the triplet (h, r, t).

4.6 Discussion

As described in Chami et al. (2020), there are complex logical relationships (such as symmetry, antisymmetry, inversion and composition relationships) in the knowledge graph. In this part, we analyze the ability of our DaBR to infer these relationships.

Lemma 1 DaBR can infer the symmetry relationship pattern.
(See proof in Appendix A.1)

Lemma 2 DaBR can infer the antisymmetry relationship pattern. (See proof in Appendix A.2)

Lemma 3 DaBR can infer the inversion relationship pattern. (See proof in Appendix A.3)

Lemma 4 DaBR can infer the composition relationship pattern. (See proof in Appendix A.4)

SF  Model             WN18RR                          FB15k-237
                      MR(↓)  MRR   H@10  H@3   H@1   MR(↓)  MRR   H@10  H@3   H@1
SM  TuckER (2019)     -      .470  .526  .482  .443  -      .358  .544  .394  .266
    QuatE (2019)      2314   .488  .582  .508  .438  87     .348  .550  .382  .248
    DualE (2021)      2270   .492  .584  .513  .444  91     .365  .559  .400  .268
    QuatRE (2022)     1986   .493  .592  .519  .439  88     .367  .563  .404  .269
    ConvQE (2024d)    -      .487  .563  .502  .447  -      .366  .551  .402  .273
GD  ATTH (2020)       -      .486  .573  .499  .443  -      .348  .540  .384  .252
    Rotate3D (2020)   3328   .489  .579  .505  .442  165    .347  .543  .385  .250
    Trans4E (2021)    1755   .469  .577  .487  .416  158    .332  .527  .366  .236
    RotateCT (2022)   3285   .492  .579  .507  .448  171    .347  .537  .382  .251
    Rotate4D (2023)   3167   .499  .587  .518  .455  181    .353  .547  .391  .257
    CompoundE (2023)  -      .491  .576  .508  .450  -      .357  .545  .393  .264
    HAQE (2024e)      -      .496  .584  .512  .450  -      .343  .535  .379  .247
    DCNE (2024)       3244   .492  .581  .510  .448  169    .354  .547  .393  .257
    FHRE (2024b)      -      .494  .563  .510  .450  -      .345  .528  .375  .255
    TransERR (2024)   1167   .501  .605  .520  .450  125    .360  .555  .396  .264
SG  DaBR (ours)       899    .510  .622  .538  .450  83     .373  .572  .410  .274

Table 1: Knowledge graph completion results on WN18RR and FB15k-237 datasets. Best results are in bold and second best results are underlined in the original. SF indicates the scoring function, SM indicates the semantic matching scoring function, GD indicates the geometric distance scoring function, and SG indicates our combined semantic matching and geometric distance scoring function. "-" indicates that no result was reported. The same settings apply to Table 2.

5 Experiments

In this section, we first introduce the datasets, evaluation protocol, implementation details and baselines. Subsequently, we evaluate our model on four benchmark datasets.

Datasets.
To verify the effectiveness and robustness of our model, we conducted extensive experiments on four standard knowledge graph completion datasets: WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova and Chen, 2015), WN18 (Bordes et al., 2013) and FB15k (Bordes et al., 2013). The WN18 and FB15k datasets are known to suffer from a data leakage problem, which allows models to easily infer test triplets and consequently perform well on the metrics. WN18RR and FB15k-237 were derived as subsets of WN18 and FB15k, respectively; they are designed to address the data leakage concerns and thereby present a more realistic prediction task. The detailed statistics of the four standard datasets are shown in Appendix B.

Evaluation protocol. Similar to previous work (Zhang et al., 2019; Li et al., 2024), we employed the filtered evaluation setup of Bordes et al. (2013) to filter out true triplets during the evaluation process. This was done to avoid flawed evaluations. The evaluation metrics encompass Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hits@n (n = 1, 3 or 10), where a smaller MR indicates a better model. The model evaluated on the test set is the one with the highest Hits@10 score on the validation set.

Implementation details. We conduct all our experiments on a single NVIDIA GeForce RTX 4090 with 24GB of memory. The ranges of the hyper-parameters for the grid search are set as follows: the embedding dimension (dim) is selected from {300, 400, 500}; the learning rate (lr) is chosen from {0.01, 0.02, 0.05, 0.1}; and the number of negative triplets sampled (neg) per training triplet is selected from {5, 10}. The regularization rates η1 and η2 are adjusted within {0.01, 0.05, 0.1, 0.5}. We split the training samples into 100 batches for each dataset. We optimize the loss function using Adagrad (Duchi et al., 2011). All our hyper-parameters are provided in Appendix C.
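The filtered evaluation protocol above can be sketched for a single query as follows. This is a minimal illustration, not the evaluation code of any of the compared models; `known_idx` holds the other true answers for the query, which the filtered setup masks out before ranking:

```python
import numpy as np

def filtered_metrics(scores, target_idx, known_idx=(), ks=(1, 3, 10)):
    """Filtered rank of the gold entity for one query (h, r, ?) or (?, r, t).

    scores: model scores for every candidate entity (higher = more plausible);
    other known true answers are masked out before ranking, following the
    filtered setup of Bordes et al. (2013).
    """
    scores = scores.astype(float)
    target_score = scores[target_idx]
    scores[list(known_idx)] = -np.inf            # filter out other true triplets
    rank = int(1 + np.sum(scores > target_score))
    out = {"MR": rank, "MRR": 1.0 / rank}
    out.update({f"H@{k}": float(rank <= k) for k in ks})
    return out

m = filtered_metrics(np.array([0.9, 0.5, 0.8, 0.7]), target_idx=1, known_idx=[0])
print(m["MR"], round(m["MRR"], 3), m["H@3"])  # -> 3 0.333 1.0
```

Averaging `MR`, `MRR` and the `H@k` indicators over all test queries yields the table metrics reported below; without the `known_idx` mask, entity 0 (another true answer) would unfairly push the gold entity to rank 4.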
It is worth noting that our models do not employ the training strategies of self-adversarial negative sampling (Sun et al., 2019) or N3 regularization with reciprocal learning (Lacroix et al., 2018).

SF  Model            WN18                            FB15k
                     MR(↓)  MRR   H@10  H@3   H@1   MR(↓)  MRR   H@10  H@3   H@1
SM  TuckER (2019)    -      .953  .958  .955  .949  -      .795  .892  .833  .741
    QuatE (2019)     162    .950  .959  .954  .945  17     .782  .900  .835  .711
    DualE (2021)     156    .952  .962  .956  .946  21     .813  .896  .850  .766
    QuatRE (2022)    116    .939  .963  .953  .946  21     .808  .896  .851  .751
GD  Rotate3D (2020)  214    .951  .961  .953  .945  39     .789  .887  .832  .728
    Trans4E (2021)   175    .950  .960  .953  .944  47     .767  .892  .834  .681
    RotateCT (2022)  201    .951  .963  .956  .944  34     .794  .888  .834  .737
    Rotate4D (2023)  173    .952  .963  .956  .946  37     .790  .887  .831  .732
    DCNE (2024)      192    .952  .963  .955  .945  34     .798  .888  .835  .745
    TransERR (2024)  82     .953  .965  .957  .945  41     .815  .896  .848  .767
SG  DaBR (ours)      56     .954  .966  .959  .946  18     .819  .900  .854  .769

Table 2: Knowledge graph completion results on WN18 and FB15k datasets.

Model       WN18RR                  FB15k-237               WN18                    FB15k
            MRR   H@10  H@3   H@1   MRR   H@10  H@3   H@1   MRR   H@10  H@3   H@1   MRR   H@10  H@3   H@1
DaBR        .510  .622  .538  .450  .373  .572  .410  .274  .954  .966  .959  .946  .819  .900  .854  .769
Variant I   .505  .617  .532  .445  .370  .569  .404  .272  .953  .964  .956  .943  .816  .894  .844  .766
Variant II  .495  .580  .512  .445  .368  .566  .402  .270  .947  .960  .954  .937  .801  .890  .847  .751

Table 3: Ablation results for all datasets.

Baselines. To verify the effectiveness of our model, we compared DaBR with several powerful baseline models, including both well-known and recently proposed ones with outstanding results. We divide these models according to the scoring function: 1) Semantic Matching: TuckER (Balazevic et al., 2019), QuatE (Zhang et al., 2019), DualE (Cao et al., 2021), QuatRE (Nguyen et al., 2022) and ConvQE (Liang et al., 2024d).
2) Geometric Distance: ATTH (Chami et al., 2020), Rotate3D (Gao et al., 2020), Trans4E (Nayyeri et al., 2021), RotateCT (Dong et al., 2022), Rotate4D (Le et al., 2023), CompoundE (Ge et al., 2023), HAQE (Liang et al., 2024e), DCNE (Dong et al., 2024), FHRE (Liang et al., 2024b) and TransERR (Li et al., 2024). For a fair comparison, we report the optimal results for these baselines from the original papers.

5.1 Main Results

The main results of our DaBR and the baselines on the WN18RR and FB15k-237 datasets are listed in Table 1. We categorize the baseline models into two main groups based on scoring functions, namely semantic matching and geometric distance. The semantic matching models are listed in the upper part of the table, while the geometric distance methods are listed in the lower part. It is worth noting that our model's scoring function is the only one that simultaneously measures both semantics and geometric distance.

From Table 1 we can clearly see that our model achieves the best results on both datasets, except for the H@1 metric on the WN18RR dataset. Specifically, compared to the best-performing semantic matching model, QuatRE, our model reduces MR from 1986 to 899 and achieves relative improvements of 3.4%, 5.0%, 3.6% and 2.5% on the MRR, H@10, H@3 and H@1 metrics on the WN18RR dataset. On the FB15k-237 dataset, our model reduces MR from 88 to 83 and achieves relative improvements of 1.6%, 1.5%, 1.4% and 1.8% on the MRR, H@10, H@3 and H@1 metrics.

Compared to the latest and best-performing geometric distance model, TransERR, our model reduces MR from 1167 to 899 and achieves relative improvements of 1.8%, 2.8% and 3.4% on the MRR, H@10 and H@3 metrics on the WN18RR dataset.
On the FB15k-237 dataset, our model reduces MR from 125 to 83 and achieves relative improvements of 3.6%, 3.0%, 3.5% and 3.7% on the MRR, H@10, H@3 and H@1 metrics, respectively.

The KGC results on the WN18 and FB15k datasets are shown in Table 2. Table 2 illustrates our model's superiority over all previous models on the FB15k dataset. On the WN18 dataset, our model achieves the best results on all metrics, except for the H@1 metric, where it achieves second place. In conclusion, our model not only achieves optimal results compared to semantic matching models, but also achieves competitive results compared to geometric distance models.

[Figure 3: MRR scores for the QuatE, QuatRE and our DaBR models over 0 to 5200 training epochs on (a) 1-to-N, (b) N-to-1 and (c) N-to-N relations.]

6 Analysis

To demonstrate the superiority of our model, we conducted in-depth analysis experiments from various aspects. The obtained experimental results and analyses are as follows:

6.1 Ablation Analysis

In this section, we evaluate the efficacy of bidirectional rotation and distance-adaptation within our DaBR. We designed the following model variants:

Variant I: We remove the rotation of the tail entity and keep the rotation of the head entity.

Variant II: We remove the distance-adaptation, so DaBR degenerates into a semantic matching model.

We show the results of the ablation experiments in Table 3. From the table, we can draw the following conclusions: 1) The rotation of the tail entity and the distance-adaptation are important parts of our model. 2) When our model removes the tail rotation, the resulting model (i.e., Variant I) still achieves the best results compared to the models in Table 1 and Table 2. We attribute this to the fact that our model can measure both the semantics of entities and the embedding distance of entities. 3) When our model removes the distance-adaptation, the performance of the resulting model (i.e., Variant II) decreases dramatically on all datasets.
It is worth noting that Variant II still achieves optimal results on most datasets compared to the semantic matching models.

6.2 Parameter Comparison Analysis

To analyze the number of parameters, we compared our DaBR with the best semantic matching model (QuatRE) and the best geometric distance model (TransERR). Given the same embedding dimension n, QuatRE and TransERR have (|E| × n + 3 × |R| × n) parameters, while our DaBR has (|E| × n + 2 × |R| × n) parameters, where E and R are the entity set and relation set. Compared to QuatRE and TransERR, our model achieves better results with fewer parameters.

6.3 Relationship Type Analysis

To explore the robustness of our model in the face of different relation types (one-to-many (1-to-N), many-to-one (N-to-1) and many-to-many (N-to-N)), we compared DaBR with QuatE and QuatRE on the WN18RR dataset. For the QuatE and QuatRE results, we reproduced these models following the hyper-parameter settings of their papers.

In accordance with the calculation rules set out in Bordes et al. (2013), the test set of WN18RR has been divided into three categories: 1-to-N, N-to-1 and N-to-N. The division results are shown in Appendix D, where η_h and η_t represent the average degree of head and tail entities, respectively.

We show the MRR scores for the QuatE, QuatRE and DaBR models over 0 to 5200 training epochs in Figure 3. This demonstrates the effectiveness of our model in modeling different types of relationships. In particular, the model is superior in dealing with 1-to-N relationships. "1-to-N" means that a head entity can form a fact triplet with multiple tail entities. We attribute this enhancement to the distance-adaptive embedding of our model.

6.4 Visualization Analysis

In this section, to explore the embedding results of our model after distance-adaptive embedding, we visualize the tail entity embeddings using t-SNE (van der Maaten and Hinton, 2008).
Suppose (h_i, r_j) is a query, where h_i and r_j are the head entity and the relation, respectively. If (h_i, r_j, t_k) is valid, the entity t_k is an answer to the query (h_i, r_j).

[Figure 4: Visualization of the embeddings of tail entities using t-SNE for QuatE, QuatRE, TransERR and DaBR at epochs 1 and 100. A point represents a tail entity. Points in the same color represent tail entities that have the same (h_i, r_j) context.]

We selected 9 queries in the FB15k-237 dataset, each of which has 50 answers. For more details about the 9 queries, please refer to Appendix E. We then use t-SNE to visualize the answer embeddings generated at epoch 1 and epoch 100 by the semantic matching models QuatE and QuatRE, the geometric distance model TransERR, and our combined semantic and geometric distance model DaBR. Figure 4 shows the visualization results.² Each entity is represented by a 2D point, and points in the same color represent tail entities with the same (h_i, r_j) context (i.e., query).

Specifically, our model (Figure 4(g)) already demonstrates better embeddings in the first epoch compared to QuatE, QuatRE and TransERR. At epoch 100, our model (Figure 4(h)) shows clear inter-cluster separability, with entities within each cluster (intra-cluster) also being well-separated from one another. However, the semantic matching models QuatE (Figure 4(b)) and QuatRE (Figure 4(d)) heavily overlap entities within clusters despite inter-cluster separability. For the geometric distance model TransERR (Figure 4(f)), clusters are indistinguishable from each other, while entities within the clusters (intra-cluster) are distinguishable. Table 4 summarizes this analysis, which we attribute to the fact that our model combines semantic matching with entity geometric distance to better measure the plausibility of triplets.
²Refer to Appendix F for more visualization results.

Model      intra-cluster  inter-cluster
QuatE                     ✓
QuatRE                    ✓
TransERR   ✓
DaBR       ✓              ✓

Table 4: ✓ indicates a separable ability.

[Figure 5: DaBR (a) with and (b) without distance-adaptation.]

6.5 Visualization Ablation Analysis

In Figure 5, we visualize our model in the first epoch with the distance-adaptive embedding removed. We find that the visualization without the distance-adaptive embedding (Figure 5(b)) is worse than the one with it (Figure 5(a)). By visualizing the ablation experiments, we further illustrate the advantage of distance-adaptive embedding.

7 Conclusion

We note that existing quaternion models based on semantic matching diminish the separability of entities, while the distance scoring function weakens the semantics of entities. To address this issue, we propose a novel quaternion knowledge graph embedding model. By combining semantic matching with entity geometric distance, our model provides a robust and comprehensive framework for knowledge graph embedding. We provide mathematical proofs to demonstrate that our model can handle complex logical relationships. Visualization results show that our model can learn the geometric distance property between entities to achieve both inter-cluster and intra-cluster separability.

Limitations

The H@1 performance of our model on the WN18 and WN18RR datasets is not optimal. In addition, like most knowledge graph embedding models, our model is unable to predict new entities that do not exist in the training data.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 62066033); the Inner Mongolia Natural Science Foundation (Nos. 2024MS06013, 2022JQ05); and the Inner Mongolia Autonomous Region Science and Technology Programme Project (Nos. 2023YFSW0001, 2022YFDZ0059, 2021GG0158). We also thank all anonymous reviewers for their insightful comments.
References

Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5185–5194, Hong Kong, China. Association for Computational Linguistics.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 2787–2795. Curran Associates Inc.

Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, and Qingming Huang. 2021. Dual quaternion knowledge graph embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8):6894–6902.

Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher Ré. 2020. Low-dimensional hyperbolic knowledge graph embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6901–6914, Online. Association for Computational Linguistics.

Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Press.

Yao Dong, Qingchao Kong, Lei Wang, and Yin Luo. 2024. Dual complex number knowledge graph embeddings. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5391–5400, Torino, Italia. ELRA and ICCL.

Yao Dong, Lei Wang, Ji Xiang, Xiaobo Guo, and Yuqiang Xie. 2022. RotateCT: Knowledge graph embedding by rotation and coordinate transformation in complex space. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4918–4932, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.

Prayushi Faldu, Indrajit Bhattacharya, and Mausam. 2024. RetinaQA: A robust knowledge base question answering model for both answerable and unanswerable questions. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6643–6656, Bangkok, Thailand. Association for Computational Linguistics.

Chang Gao, Chengjie Sun, Lili Shan, Lei Lin, and Mingjiang Wang. 2020. Rotate3D: Representing relations as rotations in three-dimensional space for knowledge graph embedding. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, CIKM '20, pages 385–394, New York, NY, USA. Association for Computing Machinery.

Xiou Ge, Yun Cheng Wang, Bin Wang, and C.-C. Jay Kuo. 2023. Compounding geometric operations for knowledge graph completion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6947–6965, Toronto, Canada. Association for Computational Linguistics.

Wm R Hamilton. 1844. Theory of quaternions. Proceedings of the Royal Irish Academy (1836–1869), 3:1–16.

Timothee Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2863–2872. PMLR.

Thanh Le, Huy Tran, and Bac Le. 2023. Knowledge graph embedding with the special orthogonal group in quaternion space for link prediction. Knowledge-Based Systems, 266:1–26.
Jiang Li, Xiangdong Su, Fujun Zhang, and Guanglai Gao. 2024. TransERR: Translation-based knowledge graph embedding via efficient relation rotation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16727–16737, Torino, Italia. ELRA and ICCL.

Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, Fuchun Sun, and Kunlun He. 2024a. A survey of knowledge graph reasoning on graph types: Static, dynamic, and multi-modal. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–20.

Qiuyu Liang, Weihua Wang, Feilong Bao, and Guanglai Gao. 2024b. Fully hyperbolic rotation for knowledge graph embedding. In ECAI 2024 - 27th European Conference on Artificial Intelligence, 19-24 October 2024, Santiago de Compostela, Spain, pages 1615–1622. IOS Press.

Qiuyu Liang, Weihua Wang, Lei Lv, and Feilong Bao. 2024c. Knowledge graph-enhanced recommendation with box embeddings. In Chinese Computational Linguistics, pages 274–288.

Qiuyu Liang, Weihua Wang, Jie Yu, and Feilong Bao. 2024d. Effective knowledge graph embedding with quaternion convolutional networks. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 183–196. Springer.

Qiuyu Liang, Weihua Wang, Jie Yu, and Feilong Bao. 2024e. Hierarchy-aware quaternion embedding for knowledge graph completion. In 2024 International Joint Conference on Neural Networks (IJCNN), pages 1–8.

Renê Mendes, Dimas Oliveira, and Victor Garcia. 2024. Application of generative AI as an enterprise wikibase knowledge graph Q&A system. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 35–42, Bangkok, Thailand. Association for Computational Linguistics.
Mojtaba Nayyeri, Gokce Muge Cil, Sahar Vahdati, Francesco Osborne, Mahfuzur Rahman, Simone Angioni, Angelo Salatino, Diego Reforgiato Recupero, Nadezhda Vassilyeva, Enrico Motta, and Jens Lehmann. 2021. Trans4E: Link prediction on scholarly knowledge graphs. Neurocomputing, 461:530–542.

Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, and Dinh Phung. 2022. QuatRE: Relation-aware quaternions for knowledge graph embeddings. In Companion Proceedings of the Web Conference 2022, WWW '22, pages 189–192, New York, NY, USA. Association for Computing Machinery.

Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations.

Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66. Association for Computational Linguistics.

Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pages 2071–2080. JMLR.org.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605.

Cunda Wang, Weihua Wang, Qiuyu Liang, Feilong Bao, and Guanglai Gao. 2024a. Unifying dual-space embedding for entity alignment via contrastive learning. arXiv preprint arXiv:2412.05028.

Cunda Wang, Weihua Wang, Qiuyu Liang, Jie Yu, and Guanglai Gao. 2024b. GSEA: Global structure-aware graph neural networks for entity alignment. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 187–199. Springer.

Yilin Wen, Zifeng Wang, and Jimeng Sun. 2024. MindMap: Knowledge graph prompting sparks graph of thoughts in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10370–10388, Bangkok, Thailand. Association for Computational Linguistics.

Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA. Curran Associates Inc.

Appendix

A Proof

Given h = a_h + b_h i + c_h j + d_h k, r = p + qi + uj + vk, and t = a_t + b_t i + c_t j + d_t k, where r is a unit quaternion after the normalization operation. Setting λ = 0, our scoring function simplifies to:

φ(h, r, t) = h ⊗ r · t ⊗ r̄
= [(a_h◦p − b_h◦q − c_h◦u − d_h◦v)
 + (a_h◦q + b_h◦p + c_h◦v − d_h◦u)i
 + (a_h◦u − b_h◦v + c_h◦p + d_h◦q)j
 + (a_h◦v + b_h◦u − c_h◦q + d_h◦p)k]
· [(a_t◦p + b_t◦q + c_t◦u + d_t◦v)
 + (−a_t◦q + b_t◦p − c_t◦v + d_t◦u)i
 + (−a_t◦u + b_t◦v + c_t◦p − d_t◦q)j
 + (−a_t◦v − b_t◦u + c_t◦q + d_t◦p)k]  (14)

where ⊗ is the Hamilton product, ◦ denotes the element-wise product, and "·" is the inner product.

A.1 Proof of Symmetry pattern

In order to prove the symmetry pattern, we need to prove the following equality:

h ⊗ r · t ⊗ r̄ = t ⊗ r · h ⊗ r̄.  (15)

The symmetry property of DaBR can be proved by setting the imaginary parts of r to zero.

A.2 Proof of Antisymmetry pattern

In order to prove the antisymmetry pattern, we need to prove the following inequality when the imaginary components are nonzero:

h ⊗ r · t ⊗ r̄ ≠ t ⊗ r · h ⊗ r̄.  (16)

We expand the right term:

t ⊗ r · h ⊗ r̄
= [(a_t◦p − b_t◦q − c_t◦u − d_t◦v)
 + (a_t◦q + b_t◦p + c_t◦v − d_t◦u)i
 + (a_t◦u − b_t◦v + c_t◦p + d_t◦q)j
 + (a_t◦v + b_t◦u − c_t◦q + d_t◦p)k]
· [(a_h◦p + b_h◦q + c_h◦u + d_h◦v)
 + (−a_h◦q + b_h◦p − c_h◦v + d_h◦u)i
 + (−a_h◦u + b_h◦v + c_h◦p − d_h◦q)j
 + (−a_h◦v − b_h◦u + c_h◦q + d_h◦p)k].  (17)

We can easily see that these two terms are not equal, as the signs of some terms differ.

A.3 Proof of Inversion pattern

To prove the inversion pattern, we need to prove that:

h ⊗ r · t ⊗ r̄ = t ⊗ r̄ · h ⊗ r̄⁻¹.
(18) We expand the right term: t⊗¯ r·h⊗¯ r−1 =t⊗¯ r·h⊗r = [(at◦p+bt◦q+ct◦u+dt◦v) + (−at◦q+bt◦p−ct◦v+dt◦u)i + (−at◦u+bt◦v+ct◦p−dt◦q)j + (−at◦v−bt◦u+ct◦q+dt◦p)k] ·[(ah◦p−bh◦q−ch◦u−dh◦v) + (ah◦q+bh◦p+ch◦v−dh◦u)i + (ah◦u−bh◦v+ch◦p+dh◦q)j + (ah◦v+bh◦u−ch◦q+dh◦p)k].(19) We can easily check the equality of these two terms. Since ris a unit quaternion, we have r−1=¯ r. A.4 Proof of Composition pattern For composition relationships, we can get that: (h⊗r2)⊗r3·(t⊗¯ r2)⊗¯ r3 =h⊗(r2⊗r3)·t⊗(¯ r2⊗¯ r3) =h⊗r1·t⊗¯ r1(20) B Dataset statistics The detailed statistics of the four standard datasets are shown in Table 6. C Optimal hyper-parameters Table 7 shows the optimal hyperparameter settings for our model on the four benchmark datasets. The optimal parameters come from the highest scores of our model on the validation dataset. D Classification rules The classification rules and classification results for WN18RR dataset in the Table 8. E The queries in t-SNE visualization In Table 5, we list the nine queries used in the t- SNE visualization (Section 6.4 in the main text). Note that a query is represented as (h, r,?), where hdenotes the head entity and rdenotes the relation. F More visualization results Figure 6 shows more visualization results. Index Query 1 (political drama, /media_common /netflix_genre /titles, ?) 2 (Academy Award for Best Original Song, /award /award_category /winners. /award /award_honor /ceremony, ?) 3 (Germany, /location /location /contains, ?) 4 (Master’s Degree, /education /educational_degree /people_with_this_degree. /education /education /major_field_of_study, ?) 5 (broccoli, /food/food/nutrients. /food/nutrition_fact /nutrient, ?) 6 (shooting sport, /olympics /olympic_sport /athletes. /olympics /olympic_athlete_affiliation /country,?) 7 (synthpop, /music /genre /artists, ?) 8 (Italian American, /people /ethnicity /people, ?) 9 (organ, /music /performance_role /track_performances. /music /track_contribution /role, ?) 
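The pattern proofs above can also be checked numerically. Below is a minimal NumPy sketch of the λ = 0 scoring function φ(h, r, t) = h ⊗ r · t ⊗ r̄; the embedding dimension, seed, and helper names (`hamilton`, `conj`, `score`) are illustrative assumptions, not code from the DaBR release.

```python
import numpy as np

def hamilton(x, y):
    """Element-wise Hamilton product of quaternion embeddings (4 coefficient arrays)."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(x):
    # Quaternion conjugate: negate the three imaginary components
    a, b, c, d = x
    return (a, -b, -c, -d)

def score(h, r, t):
    """phi(h, r, t) = <h (x) r, t (x) conj(r)>, the lambda = 0 case of Eq. (14)."""
    left, right = hamilton(h, r), hamilton(t, conj(r))
    return sum((u * v).sum() for u, v in zip(left, right))

rng = np.random.default_rng(0)
rand_q = lambda: tuple(rng.standard_normal(8) for _ in range(4))
h, t, r = rand_q(), rand_q(), rand_q()

# Normalize r to a unit quaternion, so that r^{-1} = conj(r)  (cf. A.3)
norm = np.sqrt(sum(c**2 for c in r))
r = tuple(c / norm for c in r)

# A.2 Antisymmetry: with nonzero imaginary parts the score is not symmetric
assert not np.isclose(score(h, r, t), score(t, r, h))
# A.3 Inversion: phi(h, r, t) = phi(t, conj(r), h)
assert np.isclose(score(h, r, t), score(t, conj(r), h))
# A.1 Symmetry: a purely real r makes the score symmetric in h and t
r_real = (rng.standard_normal(8), np.zeros(8), np.zeros(8), np.zeros(8))
assert np.isclose(score(h, r_real, t), score(t, r_real, h))
```

The sign pattern of `hamilton` matches the expansions in Eqs. (14), (17), and (19) term by term.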
Table 5: The queries in the t-SNE visualizations.

Table 6: Dataset statistics on the four datasets.

Dataset     #Ent   #Rel   #Train   #Valid   #Test
WN18RR      40k    11     86k      3k       3k
FB15k-237   14k    237    272k     17k      20k
WN18        40k    18     141k     5k       5k
FB15k       14k    1345   483k     50k      59k

Table 7: Optimal hyper-parameters for our DaBR on each dataset.

Dataset     lr     neg   dim   η1     η2
WN18RR      0.1    5     500   0.5    0.01
FB15k-237   0.05   10    500   0.5    0.01
WN18        0.05   5     300   0.05   0.01
FB15k       0.02   10    400   0.05   0.01

Table 8: Classification rules and classification results for WN18RR. The last column is the number after division.

Category   ηh     ηt     #triplets
1-to-N     <1.5   >1.5   475
N-to-1     >1.5   <1.5   1487
N-to-N     >1.5   >1.5   1130

Figure 6: Visualization of the embeddings of tail entities using t-SNE. A point represents a tail entity. Points in the same color represent tail entities that have the same (hr, rj) context. (Panels (a)-(l) show QuatE, QuatRE, TransERR, and DaBR at epochs 1, 50, and 100.) | 6 | 1 | The DaBR model has a unique architecture involving quaternion embeddings and bidirectional rotations, likely making it smaller than multi-layered transformer models but larger than simple embeddings. The paper does not mention exact parameter counts, but the embedding size varies (300-500) and scales with the entity and relation counts. Per Table 6, WN18RR alone has roughly 40k entities and 11 relations; with quaternion embeddings of dimension 500 (four components each), the embedding tables come to roughly 80 million parameters. The model was trained on a single NVIDIA GeForce RTX 4090, which has enough memory to handle a batch size of approximately 100 to 200. The choice of hyperparameters and extensive experimental results indicates many epochs (potentially in the hundreds), but since the paper does not specify epoch counts, we estimate around 300 epochs for robust validation. 
Given all this and standard computational practices in the field, training could be expected to take around 6 hours. Since this fits within the 8-hour limit on the aforementioned GPU, the model can indeed be trained in under 8 hours on a single GPU. | yes | Yes | Graph | Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation | 2024-12-05T00:00:00.000Z | [https://github.com/llqy123/dabr] | 1 |
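As a quick sanity check of such parameter estimates, the embedding tables implied by Table 6 (WN18RR: ~40k entities, 11 relations) and Table 7 (dim = 500) can be counted directly. This sketch assumes the entity and relation quaternion embeddings are the only major parameter groups (any extra per-relation distance parameters in DaBR are ignored):

```python
# Quaternion embeddings store 4 coefficient vectors per entity/relation.
entities, relations, dim, components = 40_000, 11, 500, 4
entity_params = entities * dim * components     # 80,000,000
relation_params = relations * dim * components  # 22,000
total = entity_params + relation_params
assert total == 80_022_000
```

Even at this size, the model easily fits in a single RTX 4090's memory, consistent with the training-time estimate above.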
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
| 20 min | https://colab.research.google.com/drive/1nML0U1finrLk-EkU2gHBF3GiLpUh6rLK?usp=sharing | Yes | we fixed some issues and it runs successfully |
MM-Vet | FlashSloth-HD | [] | FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04317v1 | [
"https://github.com/codefanw/flashsloth"
] | {'GPT-4 score': '49.0', 'Params': '3.2B'} | [
"GPT-4 score",
"Params"
] | Given the following paper and codebase:
Paper: FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression
Codebase: https://github.com/codefanw/flashsloth
Improve the FlashSloth-HD model on the MM-Vet dataset. The result
should improve on the following metrics: {'GPT-4 score': '49.0', 'Params': '3.2B'}. You must use only the codebase provided.
| FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression

Bo Tong1, Bokai Lai1, Yiyi Zhou1*, Gen Luo3, Yunhang Shen2, Ke Li2, Xiaoshuai Sun1, Rongrong Ji1
1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China. 2Youtu Lab, Tencent, P.R. China. 3OpenGVLab, Shanghai AI Laboratory.
{tongbo,laibokai}@stu.xmu.edu.cn, {zhouyiyi, xssun, rrji}@xmu.edu.cn, luogen@pjlab.org.cn, shenyunhang01@gmail.com, tristanli@tencent.com

Abstract

Despite a big leap forward in capability, multimodal large language models (MLLMs) tend to behave like a sloth in practical use, i.e., slow response and large latency. Recent efforts are devoted to building tiny MLLMs for better efficiency, but the plethora of visual tokens still used limits their actual speedup. In this paper, we propose a powerful and fast tiny MLLM called FlashSloth. Different from previous efforts, FlashSloth focuses on improving the descriptive power of visual tokens in the process of compressing their redundant semantics. In particular, FlashSloth introduces embedded visual compression designs to capture both visually salient and instruction-related image information, so as to achieve superior multimodal performance with fewer visual tokens. Extensive experiments are conducted to validate the proposed FlashSloth, and a bunch of tiny but strong MLLMs are also comprehensively compared, e.g., InternVL2, MiniCPM-V2 and Qwen2-VL. The experimental results show that compared with these advanced tiny MLLMs, our FlashSloth can greatly reduce the number of visual tokens, training memory and computation complexity while retaining high performance on various VL tasks. Our code is released at: https://github.com/codefanw/FlashSloth.

1.
Introduction

Recent years have witnessed the remarkable breakthroughs made by extending large language models (LLMs) [20, 56, 71] to more modalities, e.g., building multimodal large language models (MLLMs) for vision-language tasks [28, 35, 42]. (*Corresponding Author. arXiv:2412.04317v1 [cs.CV], 5 Dec 2024.)

Figure 1. Comparison between FlashSloth and recent MLLMs on MMB in terms of performance, response time (the prediction of the first token) and GPU memory overhead (the circle size). Advanced tiny MLLMs [10, 12, 17, 31, 51, 58] can already exhibit strong capability against common MLLMs like LLaVA-1.5-7B [33], but their actual speedup is greatly limited by the excessive use of visual tokens. Our FlashSloth is a powerful and tiny MLLM that offers a decent balance between performance and efficiency.

Among these advancements, one main research focus is on enhancing the visual perception of MLLMs, and the widely recognized solution is to use a larger number of visual tokens [32, 34, 45]. For instance, LLaVA-NeXT [34] uses 5 times more visual tokens than LLaVA-1.5 [33] by subdividing input images into multiple tiles. Similarly, recent MLLMs, such as InternVL1.5 [10] and Qwen2-VL [58], can support up to thousands of visual tokens for high-resolution image understanding via dynamic-resolution encoding. Although effective, the excessive use of visual tokens further exacerbates the already high computation of MLLMs, limiting practical use.

In this case, more and more efforts are devoted to the research of lightweight and efficient MLLMs [12, 16, 51, 65]. In particular, with the emergence of small-scale LLMs, e.g., Phi [20] and Gemma [4], recent endeavors start to explore their use in building tiny MLLMs, such as MobileVLM [11, 12], Imp [51] and Mini-Gemini [31]. Meanwhile, representative MLLM families also launch their slim versions for better mobile applications, e.g., Qwen2-VL [58], InternVL [10] and MiniCPM-V [16].
With a much smaller LLM structure, these tiny MLLMs typically scale to about 2-3 billion parameters, so their training expenditure as well as memory overhead are also much cheaper than those of previous MLLMs [28, 33, 42]. However, to retain general multimodal capability, most tiny MLLMs [10, 51, 65] still adopt a large number of visual tokens, making it hard to achieve actual speedup. As shown in Fig. 1, with more visual tokens used, tiny MLLMs even have a slower response time (i.e., the time for the first answer token) than common MLLMs like LLaVA-1.5-7B [33].

By revisiting the development of vision-language research [2, 43, 59, 60, 73], we can see that the way to achieve better visual capability is not confined to a singular paradigm. In principle, the key to addressing visual shortcomings is to make "vision" matter in MLLMs [15, 52], thereby helping them better understand visual information and also reducing the impact of language bias [22, 72]. From this perspective, the use of enough visual tokens does contribute more to self-attention modeling in MLLMs, but recent studies [7, 62] also show that this paradigm is often inefficient and obviously redundant. In addition, various attempts were successfully made to improve visual capability before the era of MLLMs, for instance, enriching the visual semantics [21, 44, 70] or refining complex image information based on visual saliency or question dependency through various attention-based approaches [2, 41, 64, 73]. To this end, we believe that a good balance between the performance and efficiency of MLLMs is feasible.

In this paper, we propose a tiny and fast MLLM called FlashSloth. The principle of FlashSloth is to improve the descriptive power of visual tokens in the process of refining and compressing their redundant semantics, thereby achieving actual speedup during inference. Concretely, FlashSloth first introduces a spatial-aware attentive pooling to compress the redundant image information while capturing visually salient semantics. Meanwhile, a novel and lightweight query module is equipped to grasp instruction-related image information, thereby compensating for the loss of image details in the attention pooling process. Notably, this query module is embedded into the architecture of FlashSloth rather than serving as an independent bridge branch that requires another language model [13, 61, 74], e.g., Q-Former [28]. Thus, we term it EmbQ. In addition to its compact structure, EmbQ also incurs much lower training and inference costs, well serving the efficiency goal of FlashSloth. For instance, EmbQ does not require dedicated large-scale VL alignment pretraining [28]. With these intuitive designs, FlashSloth can not only greatly reduce the number of input visual tokens but also improve their discrimination for better multimodal reasoning.

To validate the proposed FlashSloth, we conduct extensive experiments on a set of highly competitive VL and MLLM benchmarks [14, 25, 30, 36, 66], and compare it with a bunch of the latest tiny MLLMs, including Qwen2-VL-2B [58], Intern-VL-2 [10], MiniCPM-V2 [17], and MM1.5 [68]. Experimental results demonstrate that compared to these advanced tiny MLLMs, our FlashSloth can reduce the number of visual tokens, training memory and inference computation by 80-89%, 61-80% and 70-98%, respectively, while shortening the actual response time by about 2x to 5x. Retaining high efficiency, FlashSloth also exhibits competitiveness against these SOTA methods, and even performs slightly better on several common VL tasks, e.g., MMB [36] and MMMU [66], well confirming our motivation and the designs of FlashSloth.

In summary, our contributions are threefold:
• We propose a strong and fast tiny MLLM, coined FlashSloth, showing that a good balance between performance and efficiency is feasible.
• In FlashSloth, we introduce embedded visual compression designs to efficiently capture both visually salient and instruction-related semantics, namely the Spatial Attention Pooling and Embedded Query modules.
• Extensive experiments not only show the strong multimodal capability of FlashSloth, but also confirm its competitiveness with a set of advanced MLLMs while retaining higher efficiency.

2. Related Works

2.1. Multimodal Large Language Models

Based on the rapid development of large language models (LLMs) [20, 56, 71] and visual encoders [48, 50], multimodal large language models (MLLMs) have also achieved significant strides in various vision-language (VL) tasks. Numerous open-source MLLMs [27, 33, 42] have emerged in recent years, some of which even achieve outstanding capability comparable to GPT-4 [1] in specific fields. However, this advancement is often supported by increasingly larger parameter sizes, which also places a heavy burden on the training and application of MLLMs. Therefore, more recent research resorts to smaller LLMs, such as Phi [20], Gemma [4], and Qwen2 [3], to build tiny MLLMs. For instance, MobileVLM [11] first realized the attempt of extending tiny LLMs to multimodal tasks with a simple yet effective visual projection after the image encoder. Additionally, more efforts are devoted to exploring the designs and training strategies of tiny MLLMs based on small LLMs, such as LLaVA-Phi [75], Imp [51], and PaliGemma [9]. Meanwhile, the influential MLLM families, such as MiniCPM-V [17], Qwen2-VL [58] and InternVL [10], also develop their slim but powerful versions via exploring high-resolution image encoding and high-quality data collection. Overall, the advancement of tiny MLLMs well facilitates the real-world applications of MLLMs. However, the magnitude of visual tokens still used also slows down the response time of tiny MLLMs, i.e., the first token prediction, in addition to the high expenditure [7, 29], hindering application.

Figure 2. The overall framework of the proposed FlashSloth. The visual tokens extracted by the vision encoder are first refined and compressed by a Spatial Attention Pooling (SAP) module and then fed to FlashSloth. In addition to visual and text tokens, a set of learnable query tokens are also padded to query instruction-related image information via the Embedded Query Module (EmbQ) after some layers of FlashSloth. In particular, SAP captures the visually salient semantics in image regions via uni-modal visual attention, as depicted on the left. EmbQ is a lightweight and embedded module for visual enhancement in FlashSloth, which requires no additional language modeling and no dedicated alignment pretraining, as shown on the right.

2.2. Visual Token Compression

Most existing MLLMs [10, 31, 45, 53, 58], whether large or tiny, usually rely on a large number of visual tokens for superior visual capability. However, this paradigm is often criticized for excessive computation and obvious visual redundancy, which has attracted an influx of interest in efficient visual learning for MLLMs [7, 18, 62]. In terms of network designs, Q-Former-like methods [13, 28, 61, 74] use learnable tokens to control the number of visual tokens, with the purpose of capturing instruction-related visual information via visual compression. However, they often use another language model like BERT [57] to interpret the text instruction and require dedicated vision-language pretraining. Some methods like Abstractor [6] and LDP [11, 12] employ convolutional layers to learn local visual compression. Similarly, methods like InternVL [10] apply pixel shuffle to directly reduce the number of visual tokens. However, the information loss in these local compression methods is often not further compensated. The other main paradigm for visual efficiency is to apply external methods for effective visual token pruning during inference [7, 18, 62]. For example, FastV [7] determines each token's importance based on average attention, and FitPrune [62] selects retained features by minimizing the attention distribution difference before and after pruning. However, the contributions of token pruning methods are orthogonal to this paper, and we focus on improving the discrimination of visual tokens via investigating network structure designs.

3. Method

3.1. Overall

In this paper, we propose a tiny and fast MLLM called FlashSloth, whose framework is illustrated in Fig. 2. FlashSloth aims to improve the descriptive power of visual tokens with embedded visual compressions, i.e., the spatial attention pooling (SAP) and embedded query (EmbQ) modules, thereby achieving superior multimodal capability with a shorter visual token sequence.

Concretely, given an input image $I$, FlashSloth uses the image encoder to extract its visual token features, denoted $F_v \in \mathbb{R}^{(h \times w) \times d}$, where $h \times w$ denotes the resolution and $d$ is the feature dimension. The input text instruction $T$ is first truncated into a set of tokens, which are then vectorized by the corresponding word embeddings, denoted $F_t \in \mathbb{R}^{l \times d}$, where $l$ is the length of the text sequence.

In existing MLLMs [33, 51], the number of directly output visual tokens $F_v$ is often large, especially for high-resolution images [10, 17, 34]. Thus, we apply the SAP module to attentively capture the salient visual semantics while compressing the token redundancy. The visual tokens processed by SAP are denoted $F^s_v \in \mathbb{R}^{\frac{h \times w}{s^2} \times d}$, which contains a much smaller number of tokens than $F_v$.

Afterwards, the compact visual tokens $F^s_v$ after linear projection and the text tokens $F_t$ are fed to the MLLM structure, which is also padded with $n$ learnable query tokens at the end of the sequence, denoted $F_q \in \mathbb{R}^{n \times d}$. In FlashSloth, $F_q$ serves to supplement $F^s_v$ with instruction-related image details. Particularly, to avoid another language model [13, 61, 74], $F_q$ will first attend the multimodal interaction in the MLLM, and then engage in EmbQ for visual querying at the $k$-th layer:

$$F^{(k)}_q = F^{(k)}_q + \mathrm{EmbQ}\big(F_v, F^{(k)}_t, F^{(k)}_q\big). \tag{1}$$

Here, the output of EmbQ consists of pure visual attention features $F^q_v \in \mathbb{R}^{n \times d}$ with the same length as $F_q$. Overall, the objective of FlashSloth's decoding can be defined by:

$$p(A \mid F^s_v, F_t, F_q) = \prod_{i=1}^{L} p\big(a_i \mid F^s_v, F_t, F_q, F_{a<i}\big), \tag{2}$$

where $p$ is the prediction probability distribution, $A = \{a_1, \ldots, a_L\}$ is the answer sequence, $L$ is its length, and $F_{a<i}$ denotes the answer tokens before the $i$-th step.
From the above introduction, we can see that FlashSloth differs from most MLLMs [10, 33, 58] in two main aspects. First, FlashSloth refines and compresses visual tokens in terms of both visual saliency and instruction-related semantics, which can well collaborate with each other across different VL tasks. Second, all compression designs are lightweight and embedded in the architecture of FlashSloth without the requirement of specific tuning or pretraining [28]. In the following sections, we describe them in detail.

3.2. Embedded Visual Compression

As discussed above, the main principle of FlashSloth is to improve the discrimination of visual tokens while squeezing their length. To approach this target, we perform the visual compression in two aspects, i.e., visual saliency and instruction dependency. Moreover, these designs are lightweight and can be embedded into the architecture of FlashSloth, serving the target of model efficiency.

3.2.1 Visually Salient Compression

We first introduce a spatial attention pooling (SAP) method to refine and compress the semantics in local visual tokens, borrowing the successful attention designs from previous VL research [73]. The intuition is that the visual tokens extracted by encoders like ViT [49, 67] already have a large receptive field as well as obviously overlapping information. Thus, an MLLM can mainly focus on the most visually salient information in each image region.

Specifically, given the extracted visual tokens $F_v \in \mathbb{R}^{(h \times w) \times d}$, we first spatially divide them into a set of region tokens of size $s \times s$, denoted $F^i_v \in \mathbb{R}^{s^2 \times d}$. For each image region, SAP directly uses a two-layer $\mathrm{mlp}(\cdot)$ to predict its visual attention weights $\alpha \in \mathbb{R}^{s^2}$:

$$\alpha = \mathrm{Softmax}\big(\mathrm{mlp}(F^i_v)\big). \tag{3}$$

Then, the visually salient feature of each image region $f^s_v \in \mathbb{R}^d$ is directly obtained via a weighted combination:

$$f^s_v = \sum_{i=1}^{s^2} \alpha_i \cdot f^i_v. \tag{4}$$

Lastly, those salient features are tiled to form the new visual tokens $F^s_v \in \mathbb{R}^{\frac{h \times w}{s^2} \times d}$ and fed to FlashSloth.
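The pooling in Eqs. (3)-(4) can be sketched in a few lines of NumPy. The MLP width, ReLU activation, and random weights below are illustrative assumptions (not the released implementation); the region partition and the per-region softmax-weighted sum follow the equations directly.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention_pool(F_v, s, W1, b1, W2, b2):
    """SAP sketch: F_v is an (h, w, d) token grid; each s x s region -> 1 token."""
    h, w, d = F_v.shape
    # Partition the grid into non-overlapping s x s regions: (num_regions, s^2, d)
    regions = F_v.reshape(h // s, s, w // s, s, d).transpose(0, 2, 1, 3, 4)
    regions = regions.reshape(-1, s * s, d)
    hidden = np.maximum(regions @ W1 + b1, 0.0)        # two-layer mlp(.) of Eq. (3)
    alpha = softmax((hidden @ W2 + b2).squeeze(-1))    # per-region weights, (num_regions, s^2)
    return (alpha[..., None] * regions).sum(axis=1)    # Eq. (4): weighted combination

rng = np.random.default_rng(0)
h = w = 27; d = 16; s = 3                              # toy dims; s = 3 as in Sec. 4.1
F_v = rng.standard_normal((h, w, d))
W1 = rng.standard_normal((d, 32)); b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)); b2 = np.zeros(1)
F_s = spatial_attention_pool(F_v, s, W1, b1, W2, b2)
assert F_s.shape == (81, d)                            # 27x27 tokens -> 81 pooled tokens
```

With a 27×27 grid and s = 3, the 729 input tokens collapse to the 81 visual tokens reported for the default FlashSloth.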
3.2.2 Instruction-aware Visual Compression

Considering the varying difficulties of VL tasks [8, 15, 23], the salient semantics provided by SAP are prone to be insufficient for multimodal reasoning. In this case, we further propose an embedded query (EmbQ) module for instruction-aware visual compression. In broad terms, EmbQ is similar to previous attempts like Q-Former [28], i.e., querying the text-related visual information, but it still exhibits obvious differences in design and operation.

Above all, our requirement for EmbQ is to accomplish coarse-grained visual grounding rather than accurate VL alignment. By revisiting previous VL research [59, 60], we note that this requirement is easy to achieve without a complex network structure and large-scale pretraining. Therefore, the design of EmbQ is neat and efficient, and it is directly embedded into FlashSloth, as shown in Fig. 2.

Concretely, a set of learnable tokens $F_q \in \mathbb{R}^{n \times d}$ are used as queries and padded into the input sequence of FlashSloth. After $k$ layers of transformation, $F^{(k)}_q$ are fed to EmbQ for visual querying. In particular, we expect this operation to allow $F_q$ to obtain enough instruction information from the text tokens via self-attention. But during experiments, we note that more visual semantics are received, since the visual token sequence is much longer than the text one, which contradicts the target of EmbQ.

Thus, we first interact $F^{(k)}_q$ and $F^{(k)}_t$ via cross-attention:

$$F^q_t = \mathrm{Softmax}\!\left(\frac{(F^{(k)}_q W_q)(F^{(k)}_t W_k)^T}{\sqrt{d_k}}\right) F^{(k)}_t W_v, \tag{5}$$

where $F^q_t \in \mathbb{R}^{n \times d}$ are the obtained text queries, $W_q, W_k, W_v$ are the projection weight matrices, and $d_k$ denotes their dimension. Then, we can use $F^q_t$ to query visual information from the uncompressed visual tokens $F_v$, defined by

$$F^q_v = \mathrm{Softmax}\!\left(\frac{(F^q_t W_q)(F_v W_k)^T}{\sqrt{d_k}}\right) F_v W_v. \tag{6}$$

Lastly, $F^q_v$ are up-projected and then combined with $F^{(k)}_q$ for the following multimodal inference of FlashSloth, as described in Sec. 3.1.
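The two-step querying of Eqs. (5)-(6), followed by the residual update of Eq. (1), can be illustrated with a minimal single-head NumPy sketch. Dimensions are toy values, and the module's actual down-projection to 576 and up-projection back are collapsed into a single matrix here, so this is an assumed simplification rather than the released code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, Wq, Wk, Wv):
    """Single-head cross-attention: Q queries the token set K."""
    q, k, v = Q @ Wq, K @ Wk, K @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

rng = np.random.default_rng(0)
n, l, m, d = 9, 12, 729, 32          # query, text, raw visual token counts; toy dim
Fq = rng.standard_normal((n, d))      # learnable queries after k decoding layers
Ft = rng.standard_normal((l, d))      # text tokens
Fv = rng.standard_normal((m, d))      # uncompressed visual tokens
W = lambda: rng.standard_normal((d, d)) / np.sqrt(d)

Fq_t = cross_attention(Fq, Ft, W(), W(), W())    # Eq. (5): gather instruction info
Fq_v = cross_attention(Fq_t, Fv, W(), W(), W())  # Eq. (6): query raw visual tokens
Fq_new = Fq + Fq_v @ W()                         # Eq. (1): residual update (projection collapsed)
assert Fq_new.shape == (n, d)
```

Note how the text tokens only serve as an intermediate query signal; the information ultimately injected back into the sequence comes from the uncompressed visual tokens.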
Notably, the process of EmbQ takes into account the discrimination of the well-learned visual tokens, so only one up-projection is used to scale the visual tokens to the MLLM's dimension. Besides, we also use the embedding of the 'dot' token to initialize the queries, making them easier to accommodate to the semantic space of the MLLM.

3.3. Training and Other Settings

Under the default setting, FlashSloth applies a two-stage training paradigm [35].

The pretraining stage. Only the projector and the spatial attention pooling are optimized for the alignment between visual and text tokens, while the LLM is fixed.

The SFT tuning stage. Except for the vision encoder, FlashSloth is optimized, including the LLM and EmbQ.

To tackle OCR tasks with a high demand on image resolution [10, 26, 34], we also propose a high-resolution version, termed FlashSloth-HD. In particular, FlashSloth-HD inputs images of 768×768 resolution. In terms of image processing, we follow LLaVA-NeXT [34] to divide the images into four parts and a low-resolution thumbnail, whose visual tokens are extracted in parallel. Similarly, FlashSloth uses SAP to greatly squeeze their length and EmbQ for visual querying. To save training expenditure, we only use high-resolution images in the SFT tuning of FlashSloth-HD, where the vision encoder is unfrozen for better accommodation. Details can be found in our project.

4. Experiment

4.1. Implementation Details

For the default FlashSloth, we use siglip-so400m [67] as the visual encoder with an input resolution of 384, and phi2-2.7B [20] as the LLM. The downsampling rate of SAP is set to 3, generating 81 visual tokens.
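The visual token budget stated here (and the 405 tokens of FlashSloth-HD reported later in this section) follows from simple arithmetic, assuming the encoder yields a 27×27 token grid at 384px:

```python
grid = 27                       # assumed siglip-so400m grid at 384px: 27 x 27 = 729 tokens
s = 3                           # SAP downsampling rate
base_tokens = (grid // s) ** 2  # each 3x3 region pooled to one token
hd_tokens = 5 * base_tokens     # FlashSloth-HD: four 384px tiles + one thumbnail
assert base_tokens == 81 and hd_tokens == 405
```

Both counts match the figures reported in the paper (81 for FlashSloth, 405 for FlashSloth-HD).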
Figure 3. Comparison between FlashSloth and three MLLMs in terms of training efficiency. The results are obtained using LLaVA-665k [33] for fair comparisons. FlashSloth is superior in both GPU memory overhead and training time costs.

The number of queries in EmbQ is 9, and an embedded query module with a dimension of 576 is inserted at the 8th layer of the
The model is trained by AdamW [37] optimizer and cosine learning rate scheduler for a total of 1 epoch. The initial learning rates for pre-training and instruc- tion tuning are 1e-3 and 2e-5 with batch sizes of 256 and 128, respectively. All training is conducted on 8 NVIDIA A800 GPUs. For the FlashSloth-HD, the input image reso- lution is 768, and the image tokens are compressed to 405, while the other settings remain the same as FlashSloth. 4.2. Benchmarks and Metrics We evaluate the model on seven multimodal bench- mark datasets, including MMB [36], MME [14], mm- vet [63], Pope [30], SEED [25], MMMU [66], and Math- Vista [40]. And seven general visual-language datasets, in- cluding SQA [39], AI2D [24], GQA [19], TextVQA [54], ChartQA [46], DocVQA [47], and RealWorldQA. These benchmarks assess MLLMs from diverse perspectives, such as hallucination, multimodal perception and cognition, and multidisciplinary question answering. All evaluations are conducted using the lmms-eval [69]. 4.3. Quantitative Analysis 4.3.1 Efficiency Comparison with Existing MLLMs We first compare the efficiency of FlashSloth with ad- vanced tiny MLLMs, and also use the representative MLLM LLaV A-1.5-7B [33] as reference. Inference Efficiency. We first compare the inference effi- ciency of FlashSloth with three advanced tiny MLLMs [10, 51, 58] in Tab. 1. For better comparisons, we use LLaV A- 1.5-7B [33] as reference. From these statics, we can first observe that tiny MLLMs have a lower requirement of GPU memory than LLaV A due to the use of much smaller LLMs. Likewise, their theoretical computation (FLOPS) is also less than LLaV A. However, their actual inferences are not obviously faster, i.e., the response time or throughput. To explain, the large number of visual tokens will greatly slow down the decoding of first token, i.e., response time, mak- 5 Table 1. A comparison of FlashSloth with the latest MLLMs in terms of inference efficiency across five tasks. 
The comparison metrics include the number of visual tokens, first-round inference TFLOPs, inference memory usage (GB), model response time (seconds), and throughput (samples/second). The percentages in the table show the change of tiny MLLMs relative to LLaVA, with red arrows indicating a decrease in performance and green arrows indicating an improvement. The best results are bold and the second-best results are underlined.

Task / Metric: LLaVA [33] V1.5-7B | IMP [51] 3.1B | Qwen2-VL [58] 2B | InternVL2 [10] 2B | FlashSloth-HD 3.2B | FlashSloth 3.2B

GQA
Token number 576 | 729↑27% | 352↓39% | 1699↑195% | 414↓28% | 90↓84%
TFLOPs 4.41 | 0.30↓93% | 1.50↓66% | 5.10↑16% | 2.71↓39% | 0.08↓98%
GPU memory 15.3 | 8.4↓45% | 9.3↓39% | 9.3↓39% | 7.8↓49% | 7.6↓50%
Response time 0.11 | 0.12↑9% | 0.13↑18% | 0.24↑118% | 0.11 0% | 0.05↓55%
Throughput ↑ 7.4 | 9.2↑24% | 5.5↓26% | 3.7↓50% | 7.1↓4% | 12.6↑70%

TextVQA
Token number 576 | 729↑27% | 977↑70% | 1668↑190% | 414↓28% | 90↓84%
TFLOPs 4.76 | 0.31↓93% | 4.14↓13% | 5.09↑7% | 2.83↓41% | 0.10↓98%
GPU memory 16.0 | 8.4↓48% | 40.8↑155% | 8.8↓45% | 8.1↓49% | 7.6↓53%
Response time 0.11 | 0.12↑9% | 0.56↑409% | 0.24↑118% | 0.11 0% | 0.05↓55%
Throughput ↑ 5.4 | 7.1↑31% | 1.4↓74% | 3.2↓41% | 5.6↑4% | 9.8↑81%

MME
Token number 576 | 729↑27% | 646↑12% | 1478↑157% | 414↓28% | 90↓84%
TFLOPs 4.43 | 0.30↓93% | 2.72↓39% | 4.45 0% | 2.72↓39% | 0.09↓98%
GPU memory 15.3 | 8.4↓45% | 59.2↑287% | 10.0↓35% | 7.8↓49% | 7.6↓50%
Response time 0.11 | 0.12↑9% | 0.41↑273% | 0.21↑91% | 0.11 0% | 0.05↓55%
Throughput ↑ 5.6 | 9.1↑63% | 1.8↓68% | 3.7↓34% | 7.3↑30% | 11.6↑107%

MMB
Token number 576 | 729↑27% | 385↓33% | 1296↑125% | 414↓28% | 90↓84%
TFLOPs 4.77 | 0.31↓94% | 0.84↓82% | 3.98↓17% | 1.98↓58% | 0.09↓98%
GPU memory 16.4 | 8.4↓49% | 8.0↓51% | 11.5↓30% | 8.7↓47% | 7.6↓54%
Response time 0.11 | 0.12↑9% | 0.09↓18% | 0.19↑73% | 0.11 0% | 0.05↓55%
Throughput ↑ 6.1 | 8.4↑38% | 7.0↑15% | 4.7↓23% | 5.1↓16% | 11.7↑92%

POPE
Token number 576 | 729↑27% | 353↓39% | 1666↑189% | 414↓28% | 90↓84%
TFLOPs 4.40 | 0.30↓93% | 1.51↓66% | 4.99↑13% | 2.71↓38% | 0.08↓98%
GPU memory 15.3 | 8.4↓45% | 7.8↓49% | 7.4↓52% | 7.8↓49% | 7.6↓50%
Response time 0.11 | 0.12↑9% | 0.13↑18% | 0.23↑109% | 0.11 0% | 0.05↓55%
Throughput ↑ 7.8 | 9.4↑21%
5.0↓36% | 3.8↓51% | 7.3↓6% | 13.2↑69%

Average
Token number 576 | 729↑27% | 466↓19% | 1561↑171% | 414↓28% | 90↓84%
TFLOPs 4.55 | 0.30↓93% | 2.14↓53% | 4.72↑4% | 2.59↓43% | 0.09↓98%
GPU memory 15.7 | 8.4↓46% | 30.8↑96% | 9.4↓40% | 8.0↓49% | 7.6↓52%
Response time 0.11 | 0.12↑9% | 0.26↑136% | 0.22↑100% | 0.11 0% | 0.05↓55%
Throughput ↑ 6.5 | 8.6↑32% | 4.1↓37% | 3.8↓42% | 6.5 0% | 11.8↑82%

ing their advantages in KV-cache-based decoding [5] much less obvious, especially since the answers to VL examples are often short [46, 47, 54]. We can also see that the dynamic resolution designs of Qwen2-VL and InternVL help adjust the number of visual tokens for different images, but they still keep a relatively large number, which also results in large latency. Lastly, we can see that with about 80-89% fewer tokens, FlashSloth exhibits obvious merits over these MLLMs in all inference metrics. For instance, its response time is about 2 and 5 times faster than LLaVA and InternVL, respectively. As for FlashSloth-HD, its overall efficiency is also superior to the compared MLLMs, even though it uses more visual tokens than FlashSloth. These results well confirm the advantages of FlashSloth in inference and visual compression.

Training Efficiency. We further report the training expenditures of FlashSloth and the other four MLLMs in Fig. 3. For a quick comparison, we use the pretraining and tuning splits of LLaVA [33], and the per-GPU batch size is set to 32 for pretraining and 8 for instruction tuning. From these plots, we first find that FlashSloth consumes much less training time than the other MLLMs, especially during pretraining. In practice, its pretraining on the LLaVA split only takes 6.4 GPU hours, about 76% and 68% less than LLaVA and IMP, respectively. Its SFT tuning time (52 GPU hours) is longer due to the larger number of examples used, and it is also slightly affected by the input queries. However, the cost is still lower than that of IMP (65.6 GPU hours) and MobileVLM (96 GPU hours).
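The training recipe described earlier (AdamW with a cosine learning-rate schedule, initial LRs of 1e-3 for pre-training and 2e-5 for instruction tuning) boils down to a simple decay curve. A minimal, framework-free sketch of such a schedule; the linear-warmup option is a common addition of our own and is not stated in the paper:

```python
import math

def cosine_lr(step, total_steps, base_lr, min_lr=0.0, warmup_steps=0):
    """Cosine learning-rate decay with optional linear warmup.

    Mirrors the schedule commonly paired with AdamW: the LR starts at
    base_lr (after warmup) and decays smoothly to min_lr over total_steps.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear ramp-up
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Pre-training setting from the paper: base LR 1e-3, a single epoch
# (1000 steps here is an illustrative count, not the paper's).
lrs = [cosine_lr(s, total_steps=1000, base_lr=1e-3) for s in range(1000)]
```

At step 0 the rate equals the initial learning rate and decays toward zero by the end of the single training epoch; the same curve with `base_lr=2e-5` would cover the instruction-tuning stage.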
Similarly, with a well-designed training scheme, FlashSloth-HD has a slightly longer training time (6.4+65.6 GPU hours), which is still cheaper than the other MLLMs. The other observation from Fig. 3 is that tiny MLLMs require GPU memory close to that of LLaVA, except our FlashSloth. During pretraining, both FlashSloth and FlashSloth-HD use only about 81 visual tokens, making their memory overhead much lower than that of the

Table 2. Performance comparison between FlashSloth and the latest tiny MLLMs on seven general multimodal benchmarks and seven common visual-language benchmarks. The best results are bold and the second-best results are underlined.

Model | Params | Data | Multimodal benchmarks for MLLM: POPE, MME, MMB, MM-Vet, SEED-I, MMMU, MathVista | Common vision-language benchmarks: GQA, SQA, TextVQA, AI2D, ChartQA, DocVQA, RealWorldQA

MLLMs (LLM >3B)
LLaVA-1.5 [33] | 7B | 1.2M | 85.9 1511 64.3 30.5 58.6 - - | 62.0 66.8 58.2 - - - -
LLaVA-NeXT [34] | 8B | 1.4M | 86.5 1519 67.4 - 70.2 35.8 34.6 | 64.2 70.1 64.9 - - - -
Eagle-X5 [53] | 7B | 1.2M | 88.8 1528 68.4 - 73.9 36.3 37.0 | 64.9 70.0 71.2 - 67.7 - -
DeepSeek-VL [38] | 7B | >100M | 88.1 - 73.2 41.5 70.4 36.6 - | - - - - - - -

Lightweight MLLMs (LLM <3B)
MobileVLM-V2 [12] | 1.7B | 3.6M | 84.3 1302 57.7 - - - - | 59.3 66.7 52.1 - - - -
LLaVA-Phi [75] | 3B | 1.2M | 85.0 1335 59.8 - - - - | - 68.4 48.6 - - - -
MobileVLM-V2 [12] | 3B | 3.6M | 84.7 1441 63.2 - - - - | 61.1 70.0 57.5 - - - -
DeepSeek-VL [38] | 1.7B | >100M | 87.6 1532 64.6 34.8 66.7 32.2 31.1 | - - - - - - 49.7
mini-gemini [31] | 2.2B | 2.7M | - 1653 59.8 31.1 - 31.7 29.4 | - - 56.2 - - 34.2 -
PaliGemma [55] | 3B | >1B | - 1686 71.0 33.1 69.6 34.9 28.7 | - - 68.1 68.3 - 34.2 -
MiniCPM-V-2 [17] | 2.8B | >500M | 87.8 1809 69.6 41.0 67.1 38.2 38.7 | - 80.7 74.1 62.9 59.8 71.9 55.8
Imp-v1 [51] | 3B | 1.2M | 88.0 1434 66.5 - - - - | - 70.0 59.4 - - - -
InternVL1.5 [10] | 2.2B | >5M | 1902 70.9 39.3 69.8 34.6 41.1 | 61.6 84.9 70.5 69.8 74.8 85.0 -
InternVL2 [10] | 2.2B | >5M | 85.2 1877 73.2 44.6 71.6 36.3 46.3 | - 94.1 73.4 74.1 76.2 86.9 57.3
MM1.5 [68] | 3B | >45M | 88.1 1798 - 41.0
72.4 37.1 44.4 | - 82.1 76.5 65.7 74.2 87.7 56.9
Qwen2-VL [58] | 2B | - | - 1872 74.9 49.5 - 41.1 43.0 | - - 79.7 74.7 73.5 90.1 62.9
FlashSloth (ours) | 3.2B | 3.7M | 86.3 1702 73.0 41.9 68.0 39.7 42.5 | 61.1 88.6 64.6 72.5 51.0 48.6 54.8
FlashSloth-HD (ours) | 3.2B | 3.7M | 87.2 1745 75.7 49.0 71.2 37.8 40.6 | 62.5 91.1 71.0 75.3 69.8 74.8 59.9

other MLLMs. During instruction tuning, their GPU memory usage increases greatly. To explain, the queries are given for each instruction, and an SFT example is often a multi-round conversation, so the multiple paddings bring in a longer sequence. With effective compression, this overhead is still lower than that of the compared methods. Overall, these results show the merits of FlashSloth in training efficiency.

4.3.2 Performance Comparison with Existing MLLMs

We compare FlashSloth-HD against a range of advanced tiny MLLMs on 14 highly competitive VL and MLLM benchmarks, as shown in Table 2. From this table, we can first observe that FlashSloth is very competitive on common VL tasks with a much smaller number of visual tokens. For instance, its performance on MMB [36], GQA [19] and SQA [39] is much better than that of several previous tiny MLLMs with similar amounts of training data, e.g., MobileVLM [12], Mini-Gemini [31] and Imp [51]. Compared to the SOTA tiny MLLMs, such as InternVL [10] and Qwen2-VL [58], FlashSloth also exhibits good competitiveness on these tasks, but it still lags behind them on OCR tasks like DocVQA [47], which require high-resolution image inputs. In this case, its HD version, i.e., FlashSloth-HD, can well compensate for this shortcoming. Overall, while retaining better efficiency, FlashSloth-HD can generally reach the capability of InternVL2 and considerably shortens its gap to Qwen2-VL. Moreover, FlashSloth can even achieve new SOTA performance among tiny MLLMs on several benchmarks, such as MMB, GQA and AI2D.
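The bold/underline marks in Tables 1 and 2 amount to a per-benchmark ranking of the available scores. A small sketch of that bookkeeping, using the MMB column of Table 2; the model names and scores are taken from the table, while the helper function itself is our own illustration:

```python
def best_and_second(scores):
    """Return the top-2 model names for one benchmark (higher is better)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0], ranked[1]

# MMB scores of selected tiny MLLMs from Table 2 (entries marked "-" omitted).
mmb = {
    "MobileVLM-V2-3B": 63.2,
    "DeepSeek-VL-1.7B": 64.6,
    "MiniCPM-V-2": 69.6,
    "InternVL2-2B": 73.2,
    "Qwen2-VL-2B": 74.9,
    "FlashSloth": 73.0,
    "FlashSloth-HD": 75.7,
}
best, second = best_and_second(mmb)  # best -> bold, second -> underlined
```

Running this reproduces the markup in the MMB column: FlashSloth-HD (75.7) is bolded and Qwen2-VL-2B (74.9) underlined.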
Considering the much smaller amount of training data for FlashSloth-HD, these results are in fact very notable, well confirming our motivation and designs.

Table 3. Ablation study on the component design of FlashSloth, the initialization method of the query tokens, and the number of query tokens, conducted on four benchmarks plus TFLOPs. The method marked with ‡ represents our final selected setting.

Choices | GQA | POPE | MME | MMB | TFLOPs
Visual compression designs
baseline (729) | 61.8 | 87.8 | 1491.4 | 67.9 | 0.30
+ Avg.Pool (81) | 59.6 | 84.4 | 1440.3 | 62.7 | 0.05
+ Att.Pool (81) | 60.2 | 86.4 | 1444.8 | 63.9 | 0.06
+ Att.Pool & EmbQ‡ (90) | 60.8 | 87.8 | 1491.0 | 67.9 | 0.09
Query initialization
Fixed Dot Token | 60.6 | 86.4 | 1423.0 | 66.1 | 0.09
Random Init | 60.5 | 86.9 | 1469.7 | 65.6 | 0.09
Dot init‡ | 60.8 | 87.8 | 1491.0 | 67.9 | 0.09
Number of queries in Emb.Q
0 | 60.3 | 86.4 | 1444.8 | 63.9 | 0.06
6 | 60.5 | 86.8 | 1453.7 | 65.8 | 0.08
9‡ | 60.8 | 87.8 | 1491.0 | 67.9 | 0.09
12 | 60.7 | 86.9 | 1437.3 | 67.8 | 0.10
15 | 60.8 | 87.2 | 1468.5 | 67.0 | 0.11

4.3.3 Ablation Study

In the first block of Tab. 3, we first compare different visual compression designs. The simplest solution is average pooling, but it tends to lose key visual information, leading to obvious declines on most benchmarks, e.g., MME and MMB for multimodal perception and recognition. In contrast, our SAP can well preserve the visual saliency, thus obtaining better performance than simple pooling. In addition, we can also see that with the combination of EmbQ and SAP, FlashSloth suffers only very marginal performance drops compared to the baseline, while its efficiency is much better. This result confirms the supplement of EmbQ to

[Figure 4a examples: attention maps for three queries (Expr.1: "What animals are in the picture and how many are there?", Expr.2: "What kind of environment is the man in and what sport does he play?", Expr.3: "What color is the hair of the little girl in the picture?"), each visualized for the original image, SAP, Emb.Q, and SAP + Emb.Q.]
(a) Visualization of attention maps for SAP and Emb.Q in FlashSloth.
[Figure 4b examples: three VL queries ("What is stored under the egg?", "What class is on Wednesday at 11:00?", and "What role does zooplankton play in this food chain?") with the predictions and response times of Qwen2-VL (~4.1s), InternVL2 (0.16-0.25s), FlashSloth (0.05s), and FlashSloth-HD (0.11s) on high-resolution inputs (up to 2760×1837).]
(b) Predictions of FlashSloth, FlashSloth-HD and SOTA tiny MLLMs.
Figure 4. Visualized results of FlashSloth with Qwen2-VL-2B and InternVL2-2.2B. Subfigure-(a) shows the attention maps of different visual compressions for FlashSloth, demonstrating the abilities of SAP in visual saliency compression and EmbQ in instruction-related visual querying.
Subfigure-(b) shows FlashSloth's rapid response time and its performance on common tasks, which is comparable to or better than that of the SOTA tiny MLLMs. The clock time represents the response time of the model. Incorrect answers are in RED.

SAP. In the second and last blocks of Tab. 3, we examine the settings of EmbQ. We can first see that the initialization of the queries has an impact on EmbQ. Random initialization can serve the purpose of EmbQ, but initializing from the text dot token further improves performance slightly, suggesting better interactions with the other input tokens of the MLLM. Besides, we can also see that the number of queries required by EmbQ is very small: 9 tokens are enough for visual querying. To explain, EmbQ serves to capture instruction-related information at a coarse granularity, as discussed above. As a supplement to SAP, this design only requires a few queries, especially considering the short questions in MLLM tasks. Overall, these results well confirm the designs of EmbQ.

4.4. Qualitative Analysis

To gain deeper insight into FlashSloth's process of enhancing visual feature description, we visualize the attention results of SAP and EmbQ, as shown in Figure 4a. As observed, SAP distributes attention more broadly, allowing the model to focus on salient information from different regions of the image. This helps the model capture salient details in images, such as the three small birds in the left image, the seabirds in the middle image, and the tiny text in the right image. In contrast, the embedded query focuses more narrowly on key, text-related information in the image, such as the elephant in the left image, the surfer in the middle image, and the hair in the right image. By combining these two attention mechanisms, the model can effectively prioritize the most important information in the image, enhancing the expressiveness of visual features.
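Both the attention pooling in the ablation and the embedded queries reduce to the same core operation: each query produces one softmax-weighted average of the visual tokens. A framework-free sketch of that shared core, ignoring multi-head and linear projections (the toy query/token values below are our own illustration, not the model's):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(queries, tokens):
    """Each query yields one pooled vector: a softmax-weighted average
    of the visual tokens, weighted by scaled dot-product similarity."""
    dim = len(tokens[0])
    pooled = []
    for q in queries:
        scores = [sum(qi * ti for qi, ti in zip(q, t)) / math.sqrt(dim)
                  for t in tokens]
        weights = softmax(scores)
        pooled.append([sum(w * t[i] for w, t in zip(weights, tokens))
                       for i in range(dim)])
    return pooled

# 9 query vectors (the query count chosen in Tab. 3) pooling over
# 81 compressed visual tokens of dimension 4 (toy sizes).
queries = [[0.1 * (j + 1)] * 4 for j in range(9)]
tokens = [[1.0, 2.0, 3.0, 4.0]] * 81
out = attention_pool(queries, tokens)
```

With identical tokens the weights are uniform and each pooled vector equals the token itself, which is a handy sanity check; in the real model the queries are either learnable (EmbQ) or derived from the image (attention pooling).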
This demonstrates that the synergy between SAP and EmbQ allows FlashSloth to fully leverage visual information, resulting in improved performance across multi-task scenarios.

In Figure 4b, we visualize the predictions of FlashSloth, FlashSloth-HD, Qwen2-VL and InternVL2 for different VL examples. First, FlashSloth exhibits extremely fast response times, with latency significantly lower than the other two models, providing a good user experience. Second, for coarse-grained real-world QA and scientific QA tasks, FlashSloth's performance is on par with or even surpasses that of the other two models. In the top-left example, FlashSloth identifies grapes that the other models miss, and FlashSloth-HD answers in more detail. In the bottom example, FlashSloth provides the most accurate answer to a biological question. However, due to its lower resolution, FlashSloth underperforms on OCR tasks. For instance, in the top-right example, FlashSloth fails to recognize the text correctly, but upon increasing the input resolution, FlashSloth-HD handles fine-grained OCR tasks effectively.

5. Conclusion

In this paper, we introduce FlashSloth, a powerful and fast tiny MLLM. By incorporating effective embedded visual compression designs, FlashSloth effectively captures both visual saliency and instruction-related semantics, achieving an optimal balance between performance and efficiency. Extensive comparisons with existing tiny MLLMs on various benchmarks demonstrate that FlashSloth significantly enhances both training and inference efficiency while maintaining competitive performance, which well validates its motivation and designs.

6. Acknowledgments

This work was supported by the National Science and Technology Major Project (No. 2022ZD0118201), the National Science Fund for Distinguished Young Scholars (No. 62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. 623B2088, No. U23A20383, No. U21A20472, No. 62176222, No.
62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), and the Natural Science Foundation of Fujian Province of China (No. 2021J06003, No. 2022J06001).

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2
[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086, 2018. 2
[3] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023. 2
[4] Jeanine Banks and Tris Warkentin. Gemma: Introducing new state-of-the-art open models. Google. Available online at: https://blog.google/technology/developers/gemma-open-models/ (accessed 6 April, 2024), 2024. 2
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, pages 1877-1901. Curran Associates, Inc., 2020. 6
[6] Junbum Cha, Wooyoung Kang, Jonghwan Mun, and Byungseok Roh. Honeybee: Locality-enhanced projector for multimodal llm.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13817-13827, 2024. 3
[7] Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. arXiv preprint arXiv:2403.06764, 2024. 2, 3
[8] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server, 2015. 4
[9] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022. 3
[10] Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Botian Shi, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Bo Zhang, Pinlong Cai, Licheng Wen, Xiangchao Yan, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. How far are we to gpt-4v? Closing the gap to commercial multimodal models with open-source suites, 2024. 1, 2, 3, 4, 5, 6, 7, 13
[11] Xiangxiang Chu, Limeng Qiao, Xinyang Lin, Shuang Xu, Yang Yang, Yiming Hu, Fei Wei, Xinyu Zhang, Bo Zhang, Xiaolin Wei, et al. Mobilevlm: A fast, strong and open vision language assistant for mobile devices. arXiv preprint arXiv:2312.16886, 2023. 2, 3
[12] Xiangxiang Chu, Limeng Qiao, Xinyu Zhang, Shuang Xu, Fei Wei, Yang Yang, Xiaofei Sun, Yiming Hu, Xinyang Lin, Bo Zhang, et al. Mobilevlm v2: Faster and stronger baseline for vision language model. arXiv preprint arXiv:2402.03766, 2024.
1, 2, 3, 7, 13
[13] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. 2, 3, 4
[14] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. 2, 5
[15] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913, 2017. 2, 4
[16] Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, Xinrong Zhang, Zheng Leng Thai, Kaihuo Zhang, Chongyi Wang, Yuan Yao, Chenyang Zhao, Jie Zhou, Jie Cai, Zhongwu Zhai, Ning Ding, Chao Jia, Guoyang Zeng, Dahai Li, Zhiyuan Liu, and Maosong Sun. Minicpm: Unveiling the potential of small language models with scalable training strategies, 2024. 2
[17] Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, et al. Minicpm: Unveiling the potential of small language models with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024. 1, 2, 3, 4, 7
[18] Taihang Hu, Linxuan Li, Joost van de Weijer, Hongcheng Gao, Fahad Shahbaz Khan, Jian Yang, Ming-Ming Cheng, Kai Wang, and Yaxing Wang. Token merging for training-free semantic binding in text-to-image synthesis, 2024. 3
[19] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6700-6709, 2019.
5, 7
[20] Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio César Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, et al. Phi-2: The surprising power of small language models. Microsoft Research Blog, 2023. 1, 2, 5
[21] Huaizu Jiang, Ishan Misra, Marcus Rohrbach, Erik Learned-Miller, and Xinlei Chen. In defense of grid features for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10267-10276, 2020. 2
[22] Yash Kant, Abhinav Moudgil, Dhruv Batra, Devi Parikh, and Harsh Agrawal. Contrast and classify: Training robust vqa models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1604-1613, 2021. 2
[23] Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787-798, 2014. 4
[24] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV 14, pages 235-251. Springer, 2016. 5
[25] Bohao Li, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, and Ying Shan. Seed-bench: Benchmarking multimodal large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13299-13308, 2024. 2, 5
[26] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 5
[27] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi.
Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888-12900. PMLR, 2022. 2
[28] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, pages 19730-19742. PMLR, 2023. 1, 2, 3, 4
[29] Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jianke Zhu, and Lei Zhang. Tokenpacker: Efficient visual projector for multimodal llm. arXiv preprint arXiv:2407.02392, 2024. 3
[30] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023. 2, 5
[31] Yanwei Li, Yuechen Zhang, Chengyao Wang, Zhisheng Zhong, Yixin Chen, Ruihang Chu, Shaoteng Liu, and Jiaya Jia. Mini-gemini: Mining the potential of multi-modality vision language models. arXiv preprint arXiv:2403.18814, 2024. 1, 2, 3, 7
[32] Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and Xiang Bai. Monkey: Image resolution and text label are important things for large multi-modal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26763-26773, 2024. 1
[33] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024. 1, 2, 4, 5, 6, 7
[34] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, 2024. 1, 4, 5, 7
[35] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning.
Advances in Neural Information Processing Systems, 36, 2024. 1, 5
[36] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: Is your multi-modal model an all-around player?, 2024. 2, 5, 7
[37] Ilya Loshchilov, Frank Hutter, et al. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 5, 2017. 5
[38] Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, et al. Deepseek-vl: Towards real-world vision-language understanding. arXiv preprint arXiv:2403.05525, 2024. 7
[39] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521, 2022. 5, 7
[40] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. 5
[41] Gen Luo, Yiyi Zhou, Xiaoshuai Sun, Yan Wang, Liujuan Cao, Yongjian Wu, Feiyue Huang, and Rongrong Ji. Towards lightweight transformer via group-wise transformation for vision-and-language tasks. IEEE Transactions on Image Processing, 31:3386-3398, 2022. 2
[42] Gen Luo, Yiyi Zhou, Tianhe Ren, Shengxin Chen, Xiaoshuai Sun, and Rongrong Ji. Cheap and quick: Efficient vision-language instruction tuning for large language models. In Advances in Neural Information Processing Systems, pages 29615-29627. Curran Associates, Inc., 2023. 1, 2
[43] Gen Luo, Yiyi Zhou, Minglang Huang, Tianhe Ren, Xiaoshuai Sun, and Rongrong Ji. Moil: Momentum imitation learning for efficient vision-language adaptation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 2
[44] Gen Luo, Yiyi Zhou, Xiaoshuai Sun, Yongjian Wu, Yue Gao, and Rongrong Ji. Towards language-guided visual recognition via dynamic convolutions. International Journal of Computer Vision, 132(1):1-19, 2024. 2
[45] Gen Luo, Yiyi Zhou, Yuxin Zhang, Xiawu Zheng, Xiaoshuai Sun, and Rongrong Ji. Feast your eyes: Mixture-of-resolution adaptation for multimodal large language models. arXiv preprint arXiv:2403.03003, 2024. 1, 3, 13
[46] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning, 2022. 5, 6
[47] Minesh Mathew, Dimosthenis Karatzas, and C. V. Jawahar. Docvqa: A dataset for vqa on document images, 2021. 5, 6, 7
[48] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 2
[49] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. 4
[50] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. 2
[51] Zhenwei Shao, Zhou Yu, Jun Yu, Xuecheng Ouyang, Lihao Zheng, Zhenbiao Gai, Mingyang Wang, and Jiajun Ding. Imp: Highly capable large multimodal models for mobile devices. arXiv preprint arXiv:2405.12107, 2024. 1, 2, 3, 4, 5, 6, 7
[52] Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, and Trevor Darrell.
When do we not need larger vision models?, 2024. 2
[53] Min Shi, Fuxiao Liu, Shihao Wang, Shijia Liao, Subhashree Radhakrishnan, De-An Huang, Hongxu Yin, Karan Sapra, Yaser Yacoob, Humphrey Shi, Bryan Catanzaro, Andrew Tao, Jan Kautz, Zhiding Yu, and Guilin Liu. Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv:2408.15998, 2024. 3, 7
[54] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8317-8326, 2019. 5, 6
[55] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. 7
[56] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. 1, 2
[57] Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962v2, 2019. 3
[58] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 1, 2, 3, 4, 5, 6, 7
[59] Kelvin Xu. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015. 2, 4
[60] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21-29, 2016. 2, 4
[61] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. mplug-owl: Modularization empowers large language models with multimodality, 2024. 2, 3, 4
[62] Weihao Ye, Qiong Wu, Wenhao Lin, and Yiyi Zhou. Fit and prune: Fast and training-free visual token pruning for multi-modal large language models. arXiv preprint arXiv:2409.10197, 2024. 2, 3
[63] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities, 2023. 5
[64] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6281-6290, 2019. 2
[65] Zhengqing Yuan, Zhaoxu Li, Weiran Huang, Yanfang Ye, and Lichao Sun. Tinygpt-v: Efficient multimodal large language model via small backbones, 2024. 2
[66] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of CVPR, 2024. 2, 5
[67] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training, 2023. 4, 5
[68] Haotian Zhang, Mingfei Gao, Zhe Gan, Philipp Dufter, Nina Wenzel, Forrest Huang, Dhruti Shah, Xianzhi Du, Bowen Zhang, Yanghao Li, et al. Mm1.5: Methods, analysis & insights from multimodal llm fine-tuning.
arXiv preprint arXiv:2409.20566 , 2024. 2, 7 [69] Kaichen Zhang, Bo Li, Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, Kairui Hu, Shuai Liu, Yuanhan Zhang, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Lmms- eval: Reality check on the evaluation of large multimodal models, 2024. 5 [70] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 5579–5588, 2021. 2 [71] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained trans- former language models. arXiv preprint arXiv:2205.01068 , 2022. 1, 2 [72] Yiyi Zhou, Rongrong Ji, Jinsong Su, Xiangming Li, and Xiaoshuai Sun. Free vqa models from knowledge iner- tia by pairwise inconformity learning. In Proceedings of the AAAI Conference on Artificial Intelligence , pages 9316– 9323, 2019. 2 [73] Yiyi Zhou, Tianhe Ren, Chaoyang Zhu, Xiaoshuai Sun, Jianzhuang Liu, Xinghao Ding, Mingliang Xu, and Ron- grong Ji. Trar: Routing the attention spans in transformer for visual question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) , pages 2074–2084, 2021. 2, 4[74] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mo- hamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models, 2023. 2, 3, 4 [75] Minjie Zhu, Yichen Zhu, Xin Liu, Ning Liu, Zhiyuan Xu, Chaomin Shen, Yaxin Peng, Zhicai Ou, Feifei Feng, and Jian Tang. A comprehensive overhaul of multimodal assistant with small language models. arXiv preprint arXiv:2403.06199 , 2024. 3, 7 12 A. Quantitative analysis A.1. Impact of EmbQ configuration Table 4 examines the impact of EmbQ’s dimensions, num- ber of layers, and insertion positions within the LLM on performance. 
The first and second blocks show that EmbQ achieves optimal results with a simple configuration of 576 dimensions and a single layer, aligning with FlashSloth's efficiency goals. The third block explores the effects of inserting EmbQ into shallow, middle, deep, or multiple layers. Excessively shallow insertions limit Query Tokens to self-interaction, restricting their ability to capture rich image and text features, while overly deep insertions hinder effective propagation of newly learned supplementary information to generated tokens.

Table 4. Ablation study on the EmbQ configuration, analyzing the effects of the EmbQ dimensions, the number of EmbQ layers, and the EmbQ insertion layers. The experiments are conducted on four benchmarks. The configuration marked with ‡ represents the final selected setting.

| Choices | GQA | POPE | MME | MMB |
|---|---|---|---|---|
| Dimension of EmbQ: 576 ‡ | 60.8 | 87.8 | 1490.7 | 67.9 |
| 768 | 60.6 | 86.8 | 1446.7 | 66.6 |
| 1152 | 60.6 | 86.8 | 1455.5 | 66.4 |
| 2560 | 60.5 | 87.1 | 1420.8 | 67.1 |
| Number of EmbQ layers: 1 ‡ | 60.8 | 87.8 | 1490.7 | 67.9 |
| 2 | 60.6 | 86.9 | 1488.7 | 67.1 |
| 3 | 60.6 | 87.2 | 1457.5 | 67.3 |
| EmbQ insertion layer: 4 | 60.7 | 86.9 | 1433.3 | 66.2 |
| 8 ‡ | 60.8 | 87.8 | 1490.7 | 67.9 |
| 16 | 60.4 | 87.3 | 1399.2 | 66.5 |
| 24 | 60.8 | 87.1 | 1433.7 | 67.8 |
| 8/16/24 | 60.8 | 86.1 | 1414.5 | 67.0 |

A.2. Impact of Feature Fusion Methods

Table 5 compares methods for fusing features learned by EmbQ with Query Token features: direct replacement, addition, and gated fusion [45], which adjusts the contribution of each feature. Results show that direct addition achieves the best performance by effectively integrating EmbQ's features while retaining the original feature information.

Table 5. Ablation experiment results on four benchmarks under different feature fusion methods.

| Feature Fusion | GQA | POPE | MME | MMB |
|---|---|---|---|---|
| Add | 60.8 | 87.8 | 1490.7 | 67.9 |
| Replace | 60.5 | 86.3 | 1443.7 | 66.4 |
| Gate fusion | 60.7 | 86.1 | 1468.1 | 66.1 |

A.3. Comparison with Other Vision Compression Methods

Table 6 compares our method with various visual feature compression approaches and evaluates the performance gains from incorporating EmbQ.

Table 6. Experimental results with EmbQ combined with various vision compression methods to compress visual tokens to 81.

| Method | GQA | MME | MMB | POPE |
|---|---|---|---|---|
| Avg. pooling | 59.6 | 1440.3 | 62.7 | 84.4 |
| + EmbQ | 59.8 | 1454.7 | 66.4 | 86.4 |
| Att. pooling | 60.3 | 1444.8 | 63.9 | 86.4 |
| + EmbQ | 60.8 | 1490.7 | 67.9 | 87.8 |
| Pixel shuffle [10] | 58.6 | 1443.5 | 65.8 | 85.0 |
| + EmbQ | 59.2 | 1445.3 | 66.0 | 86.1 |
| LDP-V2 [12] | 59.4 | 1438.5 | 65.3 | 85.6 |
| + EmbQ | 59.9 | 1464.6 | 66.6 | 85.9 |

The combination of SAP and EmbQ achieves the best results, validating the integration of SAP's saliency features with EmbQ's instruction-related features. Additionally, embedding EmbQ into any compression method consistently improves performance, highlighting its effectiveness.

B. Qualitative Analysis

This section showcases practical examples of FlashSloth and FlashSloth-HD, demonstrating their real-world performance in multidisciplinary question answering, code generation, real-world scene reasoning, fine-grained text information extraction, and chart analysis reasoning. FlashSloth delivers accurate responses and exceptional performance across these tasks.
USER: Please read the text in this image and return the information in the following JSON format (note xxx is placeholder, if the information is not available in the image, put 'NONE' instead). {"airport": xxx, "flight_number": xxx, "from": xxx, "destination": xxx, "class": xxx, "seat": xxx, "date": xxx, "boarding_time": xxx, "gate": xxx, "name": xxx, "ticket_number": xxx}

FlashSloth: {"airport": "World Vector Airlines", "flight_number": "28", "from": "Bangkok", "destination": "Dubai", "class": "economy", "seat": "S-224", "date": "15 AUG 2019", "boarding_time": "16:00", "gate": "A-23", "name": "JOHN VOGAGE", "ticket_number": "NONE"}

FlashSloth-HD: {"airport": "World Vector Airlines", "flight_number": "ABC1234", "from": "DUBAI / DXB", "destination": "BANGKOK / BKK", "class": "M", "seat": "28A", "date": "15 AUG 2019", "boarding_time": "16:30", "gate": "NONE", "name": "JOHN VOYAGE", "ticket_number": "ABC1234"}

Mathematical Problem Solving

USER: Find the value of the square in the figure.

FlashSloth: To solve this brain teaser for the IQ test, we need to use the given information and the properties of shapes to find the value of the square. Given information: - The figure contains a blue circle, a red triangle, and a red square. To find the value of the square, we can use the following steps: 1. Observe the pattern: The sum of the values of the blue circle and the red triangle is 5. 2. The sum of the values of two yellow triangles is 8. 3. The sum of the values of the blue circle and the yellow square is 7. 4. Since the sum of the values of two yellow triangles is 8, each yellow triangle has a value of 4. 5. From step 3, the sum of the blue circle and the yellow square is 7. 6. From step 1, the sum of the blue circle and the red triangle is 5. Therefore, the value of the square in the figure is 2.

FlashSloth-HD: The image shows a brain teaser for an IQ test. The question asks to find the value of the square in the figure.
To solve this, we need to look at the relationships between the different shapes and numbers in the image. The image shows a series of shapes - a blue circle, a red square, a yellow triangle, and a red square. The numbers next to these shapes are 5, 8, and 7. The question states that the blue circle plus the red square equals 5. This means that the value of the blue circle is 5. The yellow triangle plus the red square equals 8. This means that the value of the yellow triangle is 8 - 5 = 3. Finally, the red square plus the blue circle equals 7. This means that the value of the red square is 7 - 5 = 2. The value of the square in the figure is 2.

Figure 5. FlashSloth's Performance in Ticket OCR Recognition and Mathematical Question Answering.

Multi-object Recognition

USER: Output the color and number of each box.

FlashSloth: The image shows a set of colorful blocks arranged in a pyramid-like structure. Each block has a number printed on it, ranging from 0 to 9. The colors of the blocks are red, orange, yellow, green, blue, purple, pink, and gray.

FlashSloth-HD: The image shows a stack of blocks with numbers and colors. The colors of the blocks are: - Red: 0 - Orange: 1 - Yellow: 2 - Green: 3 - Purple: 4 - Blue: 5 - Teal: 6 - Cyan: 7 - Light Blue: 8 - Dark Blue: 9. Each block has a number printed on it, ranging from 0 to 9. The numbers are arranged in ascending order from the bottom to the top of the stack. The colors are arranged in a rainbow order, starting with red at the bottom and ending with blue at the top.

Biological Problem Solving

USER: What is the part of the cell labeled 'B' in the image?

FlashSloth: The part of the cell labeled 'B' in the image is the nucleus. The nucleus is the control center of the cell, containing the genetic material (DNA) that directs the cell's activities. It is typically located in the center of the cell and is surrounded by a nuclear membrane. The nucleus is responsible for regulating gene expression, cell division, and other important cellular processes.
To arrive at this answer, I carefully examined the labeled diagram of the cell and identified the structure labeled 'B', which is clearly the nucleus. I then provided a brief explanation of the nucleus's role as the control center of the cell, based on my understanding of cellular biology. When answering questions about scientific diagrams or images, it's important to closely observe the details provided and match them to the relevant biological concepts. In this case, the label 'B' directly corresponds to the nucleus, so I was able to confidently identify and explain this key cellular component.

FlashSloth-HD: The part of the cell labeled 'B' in the image is the nucleus. The nucleus is typically the largest organelle in a eukaryotic cell and is often referred to as the "control center" of the cell. It contains the cell's DNA and is responsible for directing the cell's activities.

Figure 6. FlashSloth's Performance in Recognizing the Colors and Content of Multiple Objects and on Biological Question Answering Tasks.

Artistic Problem Solving

USER: Who painted this image?

FlashSloth: Vincent van Gogh

FlashSloth-HD: The painting in the image is by Vincent van Gogh, a Dutch post-impressionist painter. Van Gogh is known for his expressive and emotive style, which often included bold colors and dynamic brushstrokes. This particular painting is titled "The Starry Night Over the Rhone at Saint-Rémy-de-Provence" and was painted in 1888. It is one of his most famous works and is celebrated for its vivid depiction of the night sky and the swirling patterns of the stars. Van Gogh's work is often associated with the Post-Impressionist movement, which sought to break away from the naturalism of Impressionism and instead focus on more symbolic and emotional content.

Table Analysis

USER: Describe in detail the percentage of each section.

FlashSloth: The pie chart shows the percentage of news directors who reported changes in their local TV news budgets in 2012.
The chart is divided into three sections, each representing a different category of change. The largest section, colored in a dark red hue, represents 48.8% of news directors who reported an increase in their local TV news budgets. This indicates that nearly half of the news directors surveyed experienced an increase in their news budgets. The second largest section, colored in a light pink hue, represents 38.5% of news directors who reported a "Same" budget. This suggests that a significant portion of news directors did not see any change in their news budgets. The smallest section, colored in a light purple hue, represents 5.7% of news directors who reported a "Decreased" budget. This indicates that a small fraction of news directors experienced a decrease in their news budgets.

FlashSloth-HD: The pie chart shows the percentage of news directors who reported changes in their local TV newsroom budgets in 2012. The chart is divided into four sections, each representing a different category of change: 1. Increased: 48.8% - This section represents the news directors who reported an increase in their local TV newsroom budgets in 2012. 2. Same: 38.5% - This section represents the news directors who reported that their local TV newsroom budgets remained the same in 2012. 3. Decreased: 5.7% - This section represents the news directors who reported a decrease in their local TV newsroom budgets in 2012. 4. Not Sure: 7% - This section represents the news directors who were unsure about the change in their local TV newsroom budgets in 2012.

Figure 7. FlashSloth's Performance in Artistic Question Answering and Chart Content Analysis.

Code Generation

USER: Solve the algorithmic problem in the image.
FlashSloth-HD: The image describes a snake in a matrix, which is a common problem in computer science and mathematics. The snake is represented by a sequence of commands, where each command is either "RIGHT", "DOWN", or "LEFT". The goal is to determine the final position of the snake after executing all the commands. To solve this problem, we can use a simple algorithm that iterates through the commands and updates the snake's position accordingly. Here's a Python code that implements this algorithm:

```python
def snake_in_matrix(matrix, commands):
    # Initialize the snake's position
    snake_position = [0, 0]
    # Iterate through the commands
    for command in commands:
        # Update the snake's position based on the command
        if command == "RIGHT":
            snake_position[0] += 1
        elif command == "DOWN":
            snake_position[1] += 1
        elif command == "LEFT":
            snake_position[0] -= 1
    # Return the final position of the snake
    return snake_position
```

This function takes two arguments: the matrix and the sequence of commands. The matrix is a 2D list representing the grid, and the commands are a list of strings representing the snake's movements. The function initializes the snake's position to the top-left corner of the grid. It then iterates through the commands and updates the snake's position based on the command. If the command is "RIGHT", the snake moves one cell to the right. If the command is "DOWN", the snake moves one cell down. If the command is "LEFT", the snake moves one cell to the left. Finally, the function returns the final position of the snake.

Figure 8. FlashSloth's Performance in Solving Algorithmic Problems Using Code.

Vision Reasoning

USER: How many apples in the picture?

FlashSloth: 2

FlashSloth-HD: 2

USER: Are they the same apple and why?

FlashSloth: Yes, they are the same apple. This can be inferred from the visual similarity in their color and texture, as well as the fact that they are both held in the same hand, suggesting they are being compared or are of the same variety. The slight variations in color and texture are common in apples and do not necessarily indicate different varieties.
FlashSloth-HD: The apples appear to be the same due to their similar coloration and size. The apple in the foreground has a more pronounced blush and a slightly different pattern of red and yellow spots, which could be due to the angle of the photograph or the natural variation in the apple's skin. However, without a closer examination or additional information, it is not possible to definitively determine if they are the same variety or if they are simply apples of similar appearance.

USER: Where is the water compared to the tater tots?

FlashSloth: To the right

FlashSloth: The water is behind to the right of the tater tots.

USER: I have high cholesterol. Is it suitable for me to eat these foods? Why?

FlashSloth: It is generally not recommended for individuals with high cholesterol to consume foods that are high in saturated fats and cholesterol, as these can contribute to elevated cholesterol levels. The burger and fries in the image are likely to be high in saturated fats, which can increase LDL (bad) cholesterol levels. Additionally, the cheese and mayonnaise on the burger are also sources of saturated fats. However, it is important to note that the impact of these foods on cholesterol levels can vary depending on the individual's overall diet and lifestyle. If you have high cholesterol, it is advisable to consult with a healthcare professional or a registered dietitian to determine the best dietary choices for your specific needs. They can provide personalized recommendations and guidance on managing your cholesterol levels through diet and other lifestyle modifications. In general, it is recommended to choose healthier options such as lean proteins, whole grains, fruits, and vegetables, and to limit the intake of saturated fats and cholesterol-rich foods. This can help maintain a balanced diet and support overall health, including cholesterol management.

Figure 9. FlashSloth's Performance in Real-World Scene Reasoning Tasks.
| 6 | 2 | The FlashSloth model is based on a smaller-scale architecture with about 2-3 billion parameters (similar to other tiny MLLMs mentioned such as Qwen2-VL-2B). Given the complexity of multimodal tasks, it may require more time than training single-modality models. The dataset size is assumed to be large as it is optimized for vision-language tasks, but not explicitly provided, which leads to the estimate of 2-3 epochs for training. Using 2 GPUs balances memory needs due to the visual token processing and ensures the model training occurs within a reasonable timeframe. Considering the training optimizations, it could be feasible to train in under 8 hours on a single GPU with a simplified batch size but it is likely more efficient on multiple GPUs to meet performance goals effectively. | yes | Yes | Multimodal | FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression | 2024-12-05T00:00:00.000Z | [https://github.com/codefanw/flashsloth] | 2 | https://github.com/codefanw/FlashSloth/tree/main/scripts/eval | 20 min | https://colab.research.google.com/drive/1EbXpI0FmQ27nGKgRtKQCVz3m1EpBKiDY?usp=sharing | Yes | Successfully run |
CIFAR-10 | ABNet-2G-R0 | [] | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | [
"https://github.com/dvssajay/New_World"
] | {'Percentage correct': '94.118'} | [
"Percentage correct",
"Top-1 Accuracy",
"Accuracy",
"Parameters",
"Top 1 Accuracy",
"F1",
"Cross Entropy Loss"
] | Given the following paper and codebase:
Paper: ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
Codebase: https://github.com/dvssajay/New_World
Improve the ABNet-2G-R0 model on the CIFAR-10 dataset. The result
should improve on the following metrics: {'Percentage correct': '94.118'}. You must use only the codebase provided.
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities

Venkata Satya Sai Ajay Daliparthi
Blekinge Institute of Technology, Karlskrona, Sweden
venkatasatyasaiajay.daliparthi@bth.se

Abstract

Inspired by Many-Worlds Interpretation (MWI), this work introduces a novel neural network architecture that splits the same input signal into parallel branches at each layer, utilizing a Hyper Rectified Activation, referred to as ANDHRA. The branched layers do not merge and form a separate network path, leading to multiple network heads for output prediction. For a network with branching factor 2 at three levels, the total heads are 2^3 = 8. The individual heads are jointly trained by combining their respective loss values. However, the proposed architecture requires additional parameters and memory during training due to the additional branches. During inference, the experimental results on CIFAR-10/100 demonstrate that there exists one individual head that outperforms the baseline accuracy, achieving statistically significant improvement with equal parameters and computational cost.

1. Introduction

As the depth of neural networks (NNs) starts increasing, the training complexity increases due to the vanishing gradient problem [10]. As the gradients pass through each layer, they shrink, leading to an ineffective update of weights in the earlier layers (close to input). The existing solutions investigated this problem through different dimensions that include non-linear activations (ReLU [21]), initialization techniques (Xavier [6] and He [7]), batch normalization [14], stochastic optimization (Adam [16]), and network architectures (residual [8] and dense [12] connections). In the network architectures landscape, the prominent ResNets [8] introduced skip-connections between layers to facilitate direct gradient flow in deeper architectures.
The DenseNet [12] connects each layer to every other layer, thus providing each layer with direct access to gradients from all previous layers. Nevertheless, in many cases NNs are trained using a single loss function attached to the final output layer; this is due to the traditional network architecture style. To mention, some earlier works introduced methods like Companion objective [19] and Auxiliary loss [18, 23], where an additional loss function is attached to the earlier layers for improvement in gradient flow. However, the placement of these auxiliary losses remains arbitrary [19, 25], and the auxiliary prediction is often discarded at the inference stage.

Figure 1. Comparison of training accuracy progression in baseline and proposed method AB (ANDHRA Bandersnatch), in log-scale graph.

To address the vanishing gradient problem through network architectures, inspired by Many-Worlds Interpretation (MWI), this work proposes a novel NN architecture that grows exponentially by forming branches/splits at each layer, where different branches independently handle the flow of information, resulting in multiple parallel heads (output layers). A loss function is attached to the individual heads, and the whole network is jointly trained by aggregating the individual head losses. The main contributions of this work are as follows:
• A non-merging splitting/branching network module called ANDHRA.
• A network architecture named ANDHRA Bandersnatch (AB) that uses the ANDHRA module at different levels to create network branches.

“The key idea is that by splitting the network into multiple independent branches at each level, the flow of gradients is no longer confined to a single path. This should allow the network to effectively propagate gradients through the layers, as multiple paths are available to carry the gradient backward during training.
”

arXiv:2411.19213v1 [cs.CV] 28 Nov 2024

Figure 1 presents the training accuracy progression of the proposed architecture in comparison with the baseline, where the baseline (Baseline 1GR3) network is equivalent to a traditional feed-forward ResNet [8], and the proposed network is ANDHRA Bandersnatch (AB 2GR3). The AB 2GR3 network has a branching factor of 2 at 3 levels; the total number of heads for this network is 2^3 = 8. Here, one head in AB 2GR3 is equivalent to the baseline in terms of parameters and computational cost. Thus, in Figure 1, the Baseline 1GR3 curve should be compared with AB 2GR3 one head, and AB 2GR3 combined is an ensemble prediction that is inherent to the proposed architecture. The experimental results on CIFAR-10/100 demonstrate the effectiveness of the proposed architecture by showing statistically significant accuracy improvements over the baseline networks.

2. Method

This section provides background on the source of inspiration for the proposed method, then introduces the proposed ANDHRA module, the Bandersnatch network, and the definition of the training loss for the proposed method.

Source of Inspiration: The Many-Worlds Interpretation (MWI) of quantum mechanics assumes that every quantum measurement leads to multiple branches of reality, with each branch representing a different outcome of a quantum event. It assumes that all possible outcomes of a quantum event actually occur, but in different, non-interacting branches of the universe. These parallel realities exist simultaneously, each one corresponding to a different possibility that could have occurred, leading to the idea that parallel universes are created for every quantum decision. According to MWI, the popular quantum paradox of Schrödinger's cat is interpreted as both outcomes (the cat being dead and the cat being alive) occurring, but in separate branches of the universe.
There is no collapse of the wave function; the universe simply splits into two branches, one where the cat is dead and one where the cat is alive. A similar idea of parallel realities arising from decisions (as in human choice or action, rather than purely quantum events) has been explored in various ways, often in the context of multiverse theories or alternate realities in science fiction (the Netflix shows Bandersnatch and Dark).

2.1. Ajay N' Daliparthi Hyper Rectified Activation (ANDHRA)

Idea: ”The idea is to implement a NN architecture based on MWI where the network splits into multiple “branches” or “heads” (representing different paths) that process the same input signal in parallel, each corresponding to different possible outcomes. Akin to how MWI suggests parallel universes in their treatment of parallelism and branching, the NN architecture involves computational paths that exist simultaneously, and those outcomes are handled independently (separate branches or worlds)”, as depicted in Figure 2.

Figure 2. MWI based state changes.

The intuition behind the idea is that by designing a network that grows exponentially, the parent layers are shared among the individual branches; thus the shallow/earlier layers (close to input) receive multiple gradient updates from each individual branch. Since these individual branches are identical, the updates from multiple branches shouldn't deviate much from the ideal one.

Proposed method: Based on the idea, this work proposes a network module referred to as ANDHRA that splits the given input signal into N (branching factor) parallel branches. The A N'D stands for Ajay and Daliparthi, and HRA stands for Hyper Rectified Activation.
Since the activation function adds non-linearity to the network, this work interprets the activation function as a decision-making point and makes a design decision to introduce the splitting function at the activation layer, the one before reducing the spatial dimensions and passing them to the next level, meaning one module per level. By introducing the ANDHRA module, the network grows exponentially in terms of the number of outputs, parameters, and computational complexity. Let's assume that each layer uses one ANDHRA module, N is the branching factor, and L is the level of the NN. The number of heads H at level L can be expressed as in Eq. 1:

H_L = N^L    (1)

The total number of layers can be expressed as the sum of the layers at each level of the network, as in Eq. 2:

Layers up to level L = H_0 + H_1 + H_2 + ... + H_L    (2)
The baseline ar- chitecture is implemented by replicating ResNet[8], and the Bandersnatch-2G is implemented to match the baseline for a given individual head, this can also be observed from the Figure 3. Using eq 1, the total heads for a 3-leveled network with branching factor 2 is 2ˆ3 = 8. Thus, the Bandersnatch- 2G network consists of 8 identical heads, and the baseline is identical to an individual head in terms of parameters and computational complexity. In Figure 3, the Conv layer at level-0 (with 3 in filters, and 64 out filters), also the first Conv layer, receives gradi- ent updates from eight heads, the two Conv layers at level- 1; each receives gradient updates from four heads, .... (the pattern repeats until the end) Network Notation: Each Conv block is followed by a ResBlock (R), the depth of the ResBlock will be decided during experimentation (R-Depth). A network with R0 means zero residual blocks are present in a network. For networks with R value 3, three residual blocks are stacked on top of each other, each residual block consists of two Conv layers and a skip-connection. For any given Res- Block, the number of input filters an output filters are same. The Conv layers represented in Figure 3 have stride 2, and a point-wise (1x1 Conv) skip connection. Before pass- ing the individual heads into linear layers, there is an av- erage pooling layer with kernel size 4. Since there are 8heads, during inference, the individual head predictions are majority-voted to get the combined prediction. 
Calculating the number of layers: using Equations 2-4, the total number of Conv layers for levels 0, 1, 2, and 3 in a Bandersnatch-2G network can be calculated as follows. For each level:

H_0 = 1, H_1 = 2, H_2 = 4, H_3 = 8

The total number of Conv layers up to level 3 is:

Total layers up to level 3 = 1 + 2 + 4 + 8 = 15

Using the geometric sum formula:

Total layers up to level 3 = (2^(3+1) − 1) / (2 − 1) = (16 − 1) / 1 = 15

Thus, the total number of layers up to level 3 is 15; this can also be manually verified by counting the number of Conv blocks at each level of the Bandersnatch-2G network in Figure 3.

2.3. Training the ANDHRA Bandersnatch network

While training, each head is assigned a loss function, and these individual losses are combined by summing and averaging. Let L_1, L_2, ..., L_n be the individual losses for the n heads, where each L_i corresponds to the loss computed for the i-th head of the network. The final loss L_total passed for back-propagation is the average of all individual losses, represented in Equation 6:

L_total = (1/n) · Σ_{i=1}^{n} L_i    (6)

The reason for summing and averaging the losses is to create a global loss that represents the overall error across all heads. The averaging ensures that the optimization process treats each head equally, which might help avoid overfitting to any one branch of the network, ensuring that each head contributes equally to the final loss. For the Bandersnatch network with 8 heads, the total loss from Eq. 6 can be written as:

L_total = 0.125 · (L_1 + L_2 + L_3 + L_4 + L_5 + L_6 + L_7 + L_8)    (7)

3. Evaluation

3.1. Experiment Setup

Each network is trained five times, and the mean and standard deviation values are reported. The training hyper-parameters are kept the same for both the baseline and the Bandersnatch network, and experiments are conducted by replacing just the network (the training and validation functions need adjustments to support the Bandersnatch 2G network):

Figure 3.
From the left side: baseline network, the levels & output shapes chart, and the ANDHRA Bandersnatch 2G network.

• Dataset: CIFAR-10/100
• Training data transforms: RandomCrop(32, padding=4), RandomHorizontalFlip(), and Normalize. For validation data, only Normalization.
• Batch size: 128
• Epochs: 200
• Loss: CrossEntropyLoss
• Optimizer: SGD (momentum=0.9, weight decay=5e-4)
• Learning rate: 0.1
• Learning rate scheduler: Cosine Annealing (T_max=200)
• Performance metric: Top-1 accuracy

Experiment Hypothesis: Since the baseline is identical to any individual network branch (head) in the Bandersnatch 2G network (see Figure 3), if any individual head outperforms the baseline accuracy, that particular head can be detached and used for inference; this means improving the performance of the network without adding additional computation and parameter overhead.

To check whether the experiment hypothesis holds true, a statistical significance test (paired t-test) is performed between the results of each baseline variant and its corresponding top-performing head in the Bandersnatch 2G network. If the p-value is equal to or less than 0.05, then the prediction distributions (5 runs) are considered to be statistically significantly different.

3.2. Experiment results

In Tables 1 and 2, the first column represents the depth of the residual blocks placed at each level of the network (shown in Figure 3; refer to the network notation in Section 2.2); the second column represents the performance of the baseline networks; the third column represents the performance of the top-performing head out of the eight heads in the Bandersnatch 2G network; the fourth column represents the combined prediction of the 8 heads. During the comparison, the baseline performance (col-2) is matched with the top-performing head (col-3) out of the 8 heads.
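The paired t-test between the five baseline runs and the five runs of the top-performing head can be sketched without external packages. The accuracies below are made-up illustrative values (not the paper's raw runs), and 2.776 is the standard two-sided 5% critical value of the t-distribution with 4 degrees of freedom; in practice `scipy.stats.ttest_rel` would report an exact p-value:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t = mean(d) / (stdev(d) / sqrt(n)) over paired run differences."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical 5-run top-1 accuracies for one R-Depth setting.
top_head = [94.0, 94.1, 94.2, 94.1, 94.2]
baseline = [93.3, 93.5, 93.6, 93.7, 93.6]

t = paired_t_statistic(top_head, baseline)
# |t| > 2.776 (two-sided critical value, df = 4) corresponds to p <= 0.05
# for 5 paired runs, i.e. a statistically significant difference.
print(abs(t) > 2.776)
```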
Thus, in the fifth and sixth columns, the statistically significant difference and the mean squared error are measured between the 5 runs of the baseline and the top-performing head (columns 2 and 3).
Table 1 presents results on CIFAR-10, where the top-performing head in the ANDHRA Bandersnatch (2G) network outperforms the baseline at residual depths (0-3) with a statistically significant difference. The experiment hypothesis holds true in all cases, at every depth.
Table 2 presents results on CIFAR-100, where the top-performing head in ANDHRA Bandersnatch (2G) outperforms the baseline at residual depths (1-3) with a statistically significant difference. Except in the case of residual depth (0), where the proposed method slightly under-performs the baseline, so no statistically significant difference is observed. Hence, the experiment hypothesis holds true, except for row one with residual depth zero.
Furthermore, between Tables 1 and 2, the performance difference is higher in Table 2 (CIFAR-100), specifically in rows 3 and 4 with residual depths 2 & 3. This is an interesting result, demonstrating the effectiveness of the proposed method. This difference can also be observed through the high mean squared error in rows 3 and 4 (in Table 2).

4. Ablation study on ensemble prediction methods
Since the proposed architecture consists of multiple network predictions, the combined/ensemble prediction is used for the joint training of individual heads. Thus, an ablation study is conducted to compare different ensemble techniques on ANDHRA Bandersnatch (AB) Networks trained

R-Depth | Baseline (1G) | AB (2G) Top-Head | AB (2G) Combined | Significance | Mean Sq. Error
R0 | 93.546±0.190 | 94.118±0.099 | 94.738±0.090 | Yes | 0.404
R1 | 95.202±0.097 | 95.536±0.078 | 95.890±0.099 | Yes | 0.138
R2 | 95.366±0.171 | 95.900±0.127 | 96.230±0.108 | Yes | 0.334
R3 | 95.474±0.162 | 96.088±0.065 | 96.378±0.023 | Yes | 0.418
Table 1.
Experimental results on CIFAR-10 (compare columns 2 and 3)

R-Depth | Baseline (1G) | AB (2G) Top-Head | AB (2G) Combined | Significance | Mean Sq. Error
R0 | 73.982±0.184 | 73.930±0.233 | 77.186±0.153 | No | 0.143
R1 | 77.952±0.145 | 78.792±0.173 | 81.214±0.114 | Yes | 0.733
R2 | 78.676±0.324 | 80.354±0.084 | 82.422±0.113 | Yes | 2.910
R3 | 78.610±0.361 | 80.830±0.116 | 82.784±0.128 | Yes | 5.007
Table 2. Experimental results on CIFAR-100 (compare columns 2 and 3)

R-Depth | Majority Voting | Average Probability | Product of Experts (PoE) | Rank-Based Voting
R0 | 94.738±0.090 | 94.892±0.110 | 94.846±0.139 | 94.818±0.113
R1 | 94.890±0.099 | 96.052±0.119 | 96.094±0.095 | 95.918±0.098
R2 | 96.230±0.108 | 96.348±0.102 | 96.344±0.096 | 96.294±0.108
R3 | 96.378±0.023 | 96.504±0.108 | 96.508±0.101 | 96.428±0.037
Table 3. Ablation study on ensemble prediction methods of the Bandersnatch network on CIFAR-10

R-Depth | Majority Voting | Average Probability | Product of Experts (PoE) | Rank-Based Voting
R0 | 77.186±0.153 | 77.662±0.297 | 78.026±0.238 | 77.664±0.218
R1 | 81.214±0.114 | 81.506±0.180 | 81.712±0.125 | 81.516±0.132
R2 | 82.422±0.113 | 82.584±0.126 | 82.612±0.090 | 82.460±0.119
R3 | 82.784±0.128 | 82.932±0.108 | 82.950±0.079 | 82.872±0.138
Table 4. Ablation study on ensemble prediction methods of the Bandersnatch network on CIFAR-100

on CIFAR-10/100 in Section 3. Note that the default ensemble method used for the experiments in Section 3 is a simple majority voting.

4.1. Selected ensemble techniques
Let:
• $N$: Number of heads
• $y_i$: Prediction of the $i$-th head
• $p_i$: Softmax probability distribution from the $i$-th head
• $\hat{y}$: Final combined prediction

1. Majority Voting [1]
This strategy selects the class based on the most frequent vote among the multiple heads. By stacking all the predictions from the heads into a tensor, the mode across the predictions for each sample is calculated, as shown in Equation 8:

$\hat{y} = \mathrm{mode}([y_1, y_2, \ldots, y_N])$ (8)

2.
Average Probability [4]
This strategy averages the probability distributions from each head and chooses the class with the highest average probability. The probabilities from all heads are stacked, the mean is computed, and the class with the highest average probability is chosen, as shown in Equation 9:

$\hat{y} = \arg\max_c \left( \frac{1}{N} \sum_{i=1}^{N} p_i[c] \right)$ (9)

3. Product of Experts (PoE) [9]
This strategy assumes that the heads are "experts," and their probabilities are multiplied (in log space) to combine their opinions. The probabilities from all heads are stacked, the log of each is taken, they are summed, and the result is exponentiated to get the combined probability; the class with the highest combined probability is selected, as shown in Equation 10:

$\hat{y} = \arg\max_c \left( \exp\left( \sum_{i=1}^{N} \log(p_i[c] + \epsilon) \right) \right)$ (10)

4. Rank-Based Voting [2]
This strategy assigns higher weight to the top-ranked classes for each head. For each class, the rank scores are calculated across all heads. The ranking values are added to a tensor, where each class's rank gets added to its corresponding position, and the class with the highest rank score is chosen. Let $r_i[c]$ denote the rank of class $c$ for head $i$; the rank-based voting is shown in Equation 11:

$\hat{y} = \arg\max_c \sum_{i=1}^{N} \frac{1}{r_i[c]}$ (11)

4.2. Ablation study results
From Table 3, the ablation results on CIFAR-10, similar performance is observed between the techniques average probability and product of experts; they outperform majority voting and rank-based voting.
In Table 4, the ablation results on CIFAR-100, the product of experts outperforms the other techniques. Similar to Table 3, the average probability shows adequate performance.

5. Related Work
The Inception [23] module proposed to split the feature map and process the splits with parallel convolutional layers of different kernel sizes, capturing features at different scales. ResNeXt [26] extended ResNet [8] to increase the width of the network by proposing cardinality, the number of independent splits.
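For concreteness, the four ensemble rules of Equations 8-11 can be sketched with NumPy; this is a minimal re-implementation for illustration (function and variable names are ours, not the paper's code):

```python
import numpy as np

def combine(probs, method="majority"):
    """Combine per-head softmax outputs `probs` of shape (N_heads, n_classes)."""
    preds = probs.argmax(axis=1)                      # per-head class predictions
    if method == "majority":                          # Eq. 8: most frequent vote
        return np.bincount(preds, minlength=probs.shape[1]).argmax()
    if method == "average":                           # Eq. 9: mean probability
        return probs.mean(axis=0).argmax()
    if method == "poe":                               # Eq. 10: product of experts (in log space)
        return np.log(probs + 1e-12).sum(axis=0).argmax()
    if method == "rank":                              # Eq. 11: reciprocal-rank voting
        ranks = (-probs).argsort(axis=1).argsort(axis=1) + 1  # rank 1 = top class per head
        return (1.0 / ranks).sum(axis=0).argmax()
    raise ValueError(method)

# Three heads, three classes (toy probabilities)
heads = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.4, 0.45, 0.15]])
print([int(combine(heads, m)) for m in ("majority", "average", "poe", "rank")])
```

On this toy input all four rules agree on class 1, but in general they can disagree, which is what the ablation in Tables 3 and 4 measures.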
A similar concept of using multiple parallel convolutions has been investigated in Wide-ResNet [27], FractalNet [18], and Res2Net [5]. Through model architecture search methods, RegNet [22], MobileNetV3 [11], and EfficientNet [24] balance depth, width, and scaling.
Grouped Convolutions [17] are a separate branch of convolutional layers that divide the channels in an input feature map into multiple groups, where each group is processed individually, thus reducing the computational complexity of the convolutional operations. ShuffleNetV2 [20], CondenseNet [13], and MobileNetV3 [11] demonstrated the effectiveness of grouped convolutions in designing light-weight networks. In Xception [3], each channel is processed independently and a 1x1 convolution is used to combine the channels; this is a special case of grouped convolution where the number of groups is equal to the number of channels in the input feature map.
Nevertheless, the existing works merge or concatenate feature maps after parallel processing/splitting. In contrast, this work proposes to maintain an independent branch after splitting that continues until the output layer of the network, leading to multiple network heads for prediction.
On the other hand, the auxiliary loss [23, 25] concept proposes to introduce additional losses at intermediate layers to improve the training of earlier layers (close to the input). During inference, the auxiliary heads are discarded and the final output is considered for prediction; this can be viewed as a regularization technique [23]. The concept of applying multiple loss functions is prominent in multitask learning [15], where each loss learns to solve a specific task and these losses are combined with the primary loss for training on multiple tasks simultaneously.
Instead, this work proposes training a network with multiple identical heads where each head is treated with a loss function, and the total losses are summed and scaled before proceeding with gradient updates.

6. Conclusions
This work proposes a novel NN architecture that splits the network into parallel branches where the multiple network heads are jointly trained. Due to the shared parent branches, the earlier (close to input) layers in the network receive gradient updates from multiple output heads, leading to faster convergence of the individual heads (compared to the baseline, as shown in Figure 1). The experimental results on CIFAR-10/100 demonstrate a statistically significant difference from adopting the proposed architecture for simple ResNet-style baselines. Unlike traditional methods, the ensemble prediction is inherent to the proposed architecture. Moreover, the proposed method is analogous to existing network modules, thus paving a path forward for experimentation.

References
[1] Alex Krizhevsky. Learning multiple layers of features from tiny images. https://www.cs.toronto.edu/kriz/learning-features-2009-TR.pdf, 2009. 5
[2] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89–96, 2005. 6
[3] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258, 2017. 6
[4] Thomas G Dietterich. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pages 1–15. Springer, 2000. 5
[5] Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr. Res2net: A new multi-scale backbone architecture. IEEE transactions on pattern analysis and machine intelligence, 43(2):652–662, 2019.
6 [6] Xavier Glorot and Yoshua Bengio. Understanding the diffi- culty of training deep feedforward neural networks. In Pro- ceedings of the thirteenth international conference on artifi- cial intelligence and statistics , pages 249–256. JMLR Work- shop and Conference Proceedings, 2010. 1 [7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level perfor- mance on imagenet classification. In Proceedings of the IEEE international conference on computer vision , pages 1026–1034, 2015. 1 [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition , pages 770–778, 2016. 1, 2, 3, 6 [9] Geoffrey E Hinton. Training products of experts by mini- mizing contrastive divergence. Neural computation , 14(8): 1771–1800, 2002. 5 [10] Sepp Hochreiter. Recurrent neural net learning and vanishing gradient. International Journal Of Uncertainity, Fuzziness and Knowledge-Based Systems , 6(2):107–116, 1998. 1 [11] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mo- bilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision , pages 1314–1324, 2019. 6 [12] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kil- ian Q Weinberger. Densely connected convolutional net- works. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 4700–4708, 2017. 1 [13] Gao Huang, Shichen Liu, Laurens Van der Maaten, and Kil- ian Q Weinberger. Condensenet: An efficient densenet using learned group convolutions. In Proceedings of the IEEE con- ference on computer vision and pattern recognition , pages 2752–2761, 2018. 6 [14] Sergey Ioffe. Batch normalization: Accelerating deep net- work training by reducing internal covariate shift. 
arXiv preprint arXiv:1502.03167 , 2015. 1 [15] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geome- try and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 7482–7491, 2018. 6 [16] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings , 2015. 1 [17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural net- works. Advances in neural information processing systems , 25, 2012. 6 [18] Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648 , 2016. 1, 6[19] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In Arti- ficial intelligence and statistics , pages 562–570. Pmlr, 2015. 1 [20] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architec- ture design. In Proceedings of the European conference on computer vision (ECCV) , pages 116–131, 2018. 6 [21] Vinod Nair and Geoffrey E Hinton. Rectified linear units im- prove restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML- 10), pages 807–814, 2010. 1 [22] Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Doll ´ar. Designing network design spaces. In Proceedings of the IEEE/CVF conference on com- puter vision and pattern recognition , pages 10428–10436, 2020. 6 [23] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015. 1, 6
[24] Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In International conference on machine learning, pages 10096–10106. PMLR, 2021. 6
[25] Surat Teerapittayanon, Bradley McDanel, and Hsiang-Tsung Kung. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd international conference on pattern recognition (ICPR), pages 2464–2469. IEEE, 2016. 1, 6
[26] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492–1500, 2017. 6
[27] Erwan Zerhouni, Dávid Lányi, Matheus Viana, and Maria Gabrani. Wide residual networks for mitosis detection. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pages 924–928. IEEE, 2017. 6

ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
Supplementary Material
From the main paper results in Table 1 and Table 2, the network with residual depth three (R3) is selected for conducting additional experiments in the supplementary material. This selection is motivated by the accuracy of the networks with residual depth three. Just as in the main paper, each network is trained five times and the mean and standard deviation values are reported.

7. Parametric Activation
In Figure 3 (main paper), the ANDHRA module is implemented with two identical ReLU layers. However, using parametric activation functions such as PReLU, the definition of two independent layers becomes more coherent due to separate parameters for each branch. This is shown in Figure 4, where the two independent PReLU layers are defined with the number of input channels as a parameter.
A parametric version of the baseline and the Bandersnatch-2G R3 networks are implemented by replacing the ReLU layer with PReLU (Params = input channels), and the results are presented in Table 5. The results demonstrate that the top-performing head in Bandersnatch-2G outperforms the baseline networks in the parametric activation scenario, aligning with the main paper results from Table 1 and Table 2.

8. ANDHRA module at different levels
In the main paper, for the network Bandersnatch 2G (refer to Figure 3), one ANDHRA module is placed at each network level starting from levels 1-3. Thus, the network in Figure 3 consists of three ANDHRA modules, leading to 8 output heads. In this section, an ablation study is performed with:
1. One ANDHRA module = 2 network heads
2. Two ANDHRA modules = 4 network heads

8.1. One ANDHRA module and 2 output heads
Since there are three possibilities of placing the ANDHRA module at levels (1, 2, and 3), three networks (AB2GR3-2H1, AB2GR3-2H2, and AB2GR3-2H3) are implemented, as shown in Figure 5. Note: the network code presented in Figure 8 belongs to this family of networks, with one ANDHRA module placed at level 1 (AB2GR1-2H1).

8.2. Two ANDHRA modules and 4 output heads
Since there are two possibilities of placing 2 ANDHRA modules at levels (1-2 and 2-3), two networks (AB2GR3-4H1 and AB2GR3-4H2) are implemented, as shown in Figure 6.

8.3. Results
The total of five networks (three 2-head (2H) + two 4-head (4H)) are trained on CIFAR-10/100, and the results are presented in Table 6 and Table 7, along with the baseline network (from the main paper, baseline with ReLU). The statistical significance test is performed between the baseline and the top-performing head in the Bandersnatch network.
In Table 6 and Table 7, all the Bandersnatch 2G variants (2H, 4H) outperformed the baseline network in terms of top-1 accuracy with a statistically significant difference.
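The head counts above follow directly from the branching factor: each ANDHRA module doubles the number of branches, and the total branch count over all levels is the geometric sum from Section 2.2. A small sketch (helper names are ours, not from the paper's code):

```python
def num_heads(num_modules, branching=2):
    """Output heads after stacking `num_modules` ANDHRA splitting modules."""
    return branching ** num_modules

def total_branches(levels, branching=2):
    """Geometric sum 1 + b + b^2 + ... + b^levels of parallel branches."""
    return (branching ** (levels + 1) - 1) // (branching - 1)

# One module -> 2 heads, two -> 4 heads, three -> the 8-head Bandersnatch-2G;
# the geometric sum over levels 0-3 recovers the 15 branches counted in Section 2.2.
print(num_heads(1), num_heads(2), num_heads(3))
print(total_branches(3))
```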
Further, the network AB2GR3-4H1 performs best among the five Bandersnatch network variants trained in this ablation study.

9. Implementation
This section presents the implementation of the Bandersnatch-2G Network through a minimal network with the ANDHRA module placed only at level 1, meaning splitting is performed only once, thus leading to 2 output heads. In this network, the residual module depth is limited to one (R1). The PyTorch code for implementing this minimal network is presented in three parts (in Figures 7, 8, and 9):
1. Network Modules (Figure 7): consists of the three building blocks of the network, which include the ANDHRA module, a residual module with depth 1, and a residual module for pooling and feature-space expansion.
2. Bandersnatch 2G network with 2 heads (Figure 8): consists of the network definition and forward pass, where the ANDHRA module is only placed at level 1 and the network returns two outputs.
3. Training function (Figure 9): consists of the combined loss and the majority-voting prediction from the two output heads.

CIFAR | Baseline (1G-PReLU) | AB (2G-PReLU) Top-Head | AB (2G-PReLU) Combined | Significance | Mean Sq. Error
10 | 95.352±0.175 | 96.146±0.042 | 96.394±0.069 | Yes | 0.665
100 | 78.658±0.504 | 80.674±0.144 | 82.584±0.137 | Yes | 4.378
Table 5. Parametric activation results on CIFAR-10/100

Parametric ANDHRA module

class ANDHRA(nn.Module):
    def __init__(self, in_planes):
        super(ANDHRA, self).__init__()
        self.Relu1 = nn.PReLU(num_parameters=in_planes)
        self.Relu2 = nn.PReLU(num_parameters=in_planes)

    def forward(self, x):
        x1 = self.Relu1(x)
        x2 = self.Relu2(x)
        return x1, x2

Figure 4. ANDHRA module with PReLU

Network | Top-1 Accuracy (Top-Head) | Top-1 Accuracy (Combined) | Significance | Mean Sq. Error
Baseline (1GR3) | 95.474±0.162 | - | - | -
AB2GR3-2H1 | 95.844±0.117 | 95.670±0.067 | Yes | 0.142
AB2GR3-2H2 | 95.922±0.150 | 95.972±0.104 | Yes | 0.214
AB2GR3-2H3 | 95.668±0.154 | 95.670±0.163 | Yes | 0.084
AB2GR3-4H1 | 95.976±0.151 | 96.322±0.047 | Yes | 0.313
AB2GR3-4H2 | 95.906±0.160 | 95.882±0.170 | Yes | 0.249
Table 6.
Ablation study results on CIFAR-10 for ANDHRA module at different levels

Network | Top-1 Accuracy (Top-Head) | Top-1 Accuracy (Combined) | Significance | Mean Sq. Error
Baseline (1GR3) | 78.610±0.361 | - | - | -
AB2GR3-2H1 | 79.660±0.260 | 79.674±0.182 | Yes | 1.130
AB2GR3-2H2 | 80.100±0.200 | 80.140±0.036 | Yes | 2.301
AB2GR3-2H3 | 79.444±0.143 | 79.380±0.116 | Yes | 0.747
AB2GR3-4H1 | 80.484±0.141 | 82.188±0.260 | Yes | 3.621
AB2GR3-4H2 | 80.294±0.087 | 81.324±0.299 | Yes | 2.991
Table 7. Ablation study results on CIFAR-100 for ANDHRA module at different levels

Figure 5. From the left side: levels chart, AB2GR3-2H1, AB2GR3-2H2, and AB2GR3-2H3 networks
Figure 6. From the left side: levels chart, AB2GR3-4H1, and AB2GR3-4H2 networks

Network Modules

class ANDHRA(nn.Module):  # Proposed splitting module
    def __init__(self):
        super(ANDHRA, self).__init__()
        self.Relu1 = nn.ReLU(inplace=False)
        self.Relu2 = nn.ReLU(inplace=False)

    def forward(self, x):
        x1 = self.Relu1(x)
        x2 = self.Relu2(x)
        return x1, x2

class ResBlock(nn.Module):  # Residual block with equal in/out filters
    def __init__(self, in_planes):
        super(ResBlock, self).__init__()
        # residual function
        self.conv = nn.Sequential(
            nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(in_planes),
            nn.ReLU(inplace=False),
            nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(in_planes))
        # shortcut
        self.shortcut = nn.Sequential()

    def forward(self, x):
        out = self.conv(x)
        out += self.shortcut(x)
        return out

class ResBlockP(nn.Module):  # Residual block with inherent pooling that also doubles filters
    def __init__(self, in_channels, out_channels, stride):
        super(ResBlockP, self).__init__()
        # residual function
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=False),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels)
        )
        # shortcut
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return nn.ReLU(inplace=False)(self.residual_function(x) + self.shortcut(x))

Figure 7. Modules of the network

Bandersnatch 2G network with 2 heads

class AB_2GR1_2H1(nn.Module):
    def __init__(self, num_classes):
        super(AB_2GR1_2H1, self).__init__()
        self.Conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=False))
        self.Res1 = ResBlock(in_planes=64)

        self.Act1 = ANDHRA()  # Proposed splitting module

        self.Conv21 = ResBlockP(in_channels=64, out_channels=128, stride=2)  # Branch 1
        self.Res21 = ResBlock(in_planes=128)

        self.Conv22 = ResBlockP(in_channels=64, out_channels=128, stride=2)  # Branch 2
        self.Res22 = ResBlock(in_planes=128)

        self.Act21 = nn.ReLU(inplace=False)
        self.Act22 = nn.ReLU(inplace=False)

        self.Conv31 = ResBlockP(in_channels=128, out_channels=256, stride=2)
        self.Res31 = ResBlock(in_planes=256)

        self.Conv32 = ResBlockP(in_channels=128, out_channels=256, stride=2)
        self.Res32 = ResBlock(in_planes=256)

        self.Act31 = nn.ReLU(inplace=False)
        self.Act32 = nn.ReLU(inplace=False)

        self.Conv41 = ResBlockP(in_channels=256, out_channels=512, stride=2)
        self.Res41 = ResBlock(in_planes=512)

        self.Conv42 = ResBlockP(in_channels=256, out_channels=512, stride=2)
        self.Res42 = ResBlock(in_planes=512)

        self.Relu = nn.ReLU(inplace=False)
        self.pool4 = nn.AvgPool2d(kernel_size=4)

        self.Linear1 = nn.Linear(512, num_classes)
        self.Linear2 = nn.Linear(512, num_classes)

    def forward(self, x):
        out = self.Res1(self.Conv1(x))

        out1, out2 = self.Act1(out)  # Splitting at level 1

        out1 = self.Res21(self.Conv21(out1))  # Branch 1
        out2 = self.Res22(self.Conv22(out2))  # Branch 2

        out1 = self.Act21(out1)
        out2 = self.Act22(out2)

        out1 = self.Res31(self.Conv31(out1))
        out2 = self.Res32(self.Conv32(out2))

        out1 = self.Act31(out1)
        out2 = self.Act32(out2)

        out1 = self.Linear1(self.pool4(self.Relu(self.Res41(self.Conv41(out1)))).view(out1.size(0), -1))
        out2 = self.Linear2(self.pool4(self.Relu(self.Res42(self.Conv42(out2)))).view(out2.size(0), -1))

        return out1, out2

Figure 8. Network initialization and forward pass; the ANDHRA module is only placed at level 1

Training function with Combined Loss and Majority Voting

from scipy import stats
import torch

def train(epoch):  # Training function
    print('\nEpoch: %d' % epoch)
    net.train()
    train_loss = 0
    correct = 0
    total = 0

    # Initialize counters for individual model accuracies
    correct_individual = [0] * 2
    total_individual = 0

    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        out1, out2 = net(inputs)

        # Calculate losses for each output
        loss1 = criterion(out1, targets)
        loss2 = criterion(out2, targets)

        # Combine losses and backpropagate
        loss = 0.5 * (loss1 + loss2)
        loss.backward()
        optimizer.step()

        train_loss += loss.item()

        # Predictions and majority voting
        outputs = [out1, out2]
        individual_predictions = [output.max(1)[1] for output in outputs]

        # Majority vote prediction
        p = torch.stack(individual_predictions, dim=0).cpu().detach().numpy()
        m = stats.mode(p)
        predicted_majority = torch.from_numpy(m[0]).squeeze().cuda()

        # Update majority correct count
        total += targets.size(0)
        correct += predicted_majority.eq(targets).sum().item()

        # Update individual model correct counts
        for i, pred in enumerate(individual_predictions):
            correct_individual[i] += pred.eq(targets).
sum().item() 44 total_individual += targets.size(0) Figure 9. Training function with Combined Loss and Majority V oting 13 | 6 | 1 | The ANDHRA Bandersnatch architecture implemented with a branching factor of 2 at three levels results in 8 heads, with a total of 15 convolutional layers based on the geometric formula presented in the paper. Given that the model is intended to be used on the CIFAR-10/100 datasets, which consist of 60,000 (for CIFAR-10) and 100,000 (for CIFAR-100) images, and following the training plan consisting of 200 epochs, each with a batch size of 128, the total number of iterations would be substantial. Models similar to ResNets typically exhibit a training time of approximately 5 to 10 hours on a single GPU for similar datasets and epochs. Given the complexity due to the additional heads and branching, I estimate around 6 hours of training time. The memory requirements, particularly for 8 heads, could be managed with a single high-end GPU (like NVIDIA V100, A100, or similar). Therefore, a single GPU setup can handle these calculations within the 8-hour window based on the architecture and dataset stated. | yes | Yes | CV | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00.000Z | [https://github.com/dvssajay/New_World] | 1 | dataset or example for training or testing found at: [https://github.com/dvssajay/New_World] | 20 | https://colab.research.google.com/drive/16oyFcqCzN797OOwZbD6L9uZm818KurD6?usp=sharing | YES, Successfully run on | But it run on just training set complete successfully, but on testing side need to change in code or some thing is missing like model |
FB15k-237 | DaBR | [] | Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation | 2024-12-05T00:00:00 | https://arxiv.org/abs/2412.04076v2 | [
"https://github.com/llqy123/dabr"
] | {'MRR': '0.373', 'Hits@10': '0.572', 'Hits@3': '0.410', 'Hits@1': '0.247', 'MR': '83'} | [
"Hits@1",
"Hits@3",
"Hits@10",
"MRR",
"MR",
"training time (s)",
"Hit@1",
"Hit@10"
] | Given the following paper and codebase:
Paper: Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation
Codebase: https://github.com/llqy123/dabr
Improve the DaBR model on the FB15k-237 dataset. The result
should improve on the following metrics: {'MRR': '0.373', 'Hits@10': '0.572', 'Hits@3': '0.410', 'Hits@1': '0.247', 'MR': '83'}. You must use only the codebase provided.
| Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation Weihua Wang 1,2,3,*, Qiuyu Liang 1, Feilong Bao 1,2,3, Guanglai Gao 1,2,3 1 College of Computer Science, Inner Mongolia University, Hohhot, China 2 National and Local Joint Engineering Research Center of Intelligent Information Processing Technology for Mongolian, Hohhot, China 3 Inner Mongolia Key Laboratory of Multilingual Artificial Intelligence Technology, Hohhot, China

Abstract
A quaternion contains one real part and three imaginary parts, which provides a more expressive hypercomplex space for learning knowledge graphs. Existing quaternion embedding models measure the plausibility of a triplet through either semantic matching or geometric distance scoring functions. However, it appears that semantic matching diminishes the separability of entities, while the distance scoring function weakens the semantics of entities. To address this issue, we propose a novel quaternion knowledge graph embedding model. Our model combines semantic matching with the geometric distance of entities to better measure the plausibility of triplets. Specifically, in the quaternion space, we perform a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features. We then utilize distance-adaptive translations to learn the geometric distance between entities. Furthermore, we provide mathematical proofs to demonstrate that our model can handle complex logical relationships. Extensive experimental results and analyses show our model significantly outperforms previous models on well-known knowledge graph completion benchmark datasets. Our code is available at https://github.com/llqy123/DaBR.

1 Introduction
Knowledge graphs (KGs) (Liang et al., 2024a) are powerful tools for representing valid factual triplets by capturing entities and their relationships in a graphical format.
Owing to the well-structured nature of graphs, KGs are often used for various Natural Language Processing tasks, such as question answering (Mendes et al., 2024; Faldu et al., 2024), entity alignment (Wang et al., 2024a,b), KG-based recommendation (Liang et al., 2024c) and KG-enhanced Large Language Models (Wen et al., 2024).
*Corresponding Author. Email: wangwh@imu.edu.cn.
(a) QuatE (b) TransERR
Figure 1: The visualization of the embeddings of the QuatE and TransERR models after 100 epochs of training. Points in the same color represent tail entities that have the same $(h_i, r_j)$ (query) context.
However, KGs are usually incomplete, and the incompleteness limits their application. As an effective tool for predicting missing facts, knowledge graph completion (KGC) has received considerable attention from researchers. Typically, researchers transform KGC tasks into knowledge graph embeddings (KGEs). KGE refers to learning representations of entities and relations in a low-dimensional space while preserving the graph's inherent structure and semantic properties. In this representation space, a scoring function can be defined to measure the plausibility of each triplet, where valid triplets should receive higher scores than invalid ones.
A quaternion contains one real part and three imaginary parts, which provides a more expressive space for learning embeddings of entities and relations. Rotation in the quaternion space is often used to model KGs. For example, QuatE (Zhang et al., 2019) learns semantic information about entities by treating relations as rotations from head entities to tail entities. TransERR (Li et al., 2024) encodes the KG by rotating the head and tail entities with their corresponding unit quaternions. These models use either semantic matching or distance scoring functions to measure the plausibility of the triplet, respectively.
arXiv:2412.04076v2 [cs.LG] 12 Dec 2024
However, it appears that semantic matching diminishes the separability of entities, while the distance scoring function weakens the semantics of entities. For example, we visualized the results for the same query in Figure 1 (see footnote 1). Specifically, as shown in Figure 1, we observe that the QuatE model overlaps some queries when using semantic matching as a scoring function. The entities of TransERR using the distance scoring function are also indistinguishable for each query.
To address this issue, we propose a Distance-adaptive quaternion knowledge graph embedding with Bidirectional Rotation model, named DaBR. Our model combines semantic matching with the geometric distance of entities to better measure the plausibility of triplets. Specifically, in the quaternion space, we perform a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features. This process is called bidirectional rotation. We conducted extensive experiments on multiple well-known benchmark datasets for the knowledge graph completion task. The experimental results and analyses demonstrated the effectiveness and robustness of our model.
Our contributions are summarized as follows:
• We propose performing a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features.
• We propose learning the embedding distance between entities by incorporating distance-adaptive translations.
• We provide mathematical proofs to demonstrate that our model can handle rich logical relationships.
• Extensive experiments show that our model provides consistent and significant improvements over previous models on most metrics.

2 Related Work
For KGE models, the design of the scoring function directly affects these models' performance and effectiveness.
Based on how previous models compute their scoring functions, KGE scoring functions can mainly be categorized as semantic matching-based and geometric distance-based.

¹For more information about queries, see Section 6.4.

Semantic matching. Semantic matching scoring functions capture the interactions between entities and relations through inner products on embedding vectors. The hypothesis is that entities connected by relations are close to each other in the semantic space. For example, QuatE (Zhang et al., 2019) obtains semantic information about entities through the Hamiltonian rotation of the head entity on the relation in quaternion space. DualE (Cao et al., 2021) further extends QuatE to model knowledge graphs in dual quaternion space. QuatRE (Nguyen et al., 2022) associates each relation with two relation-aware rotations, which are used to rotate the quaternion embeddings of the head and tail entities, respectively. ConvQE (Liang et al., 2024d) investigates the potential of quaternion convolution in knowledge graph embedding.

A common feature of these models is the computation of the inner product between the head entity and the tail entity after a relation transformation. However, these models overlook the geometric distance properties between entities in the knowledge graph, which leads to distorted embeddings of the learned entities.

Geometric distance. Geometric distance scoring functions assess the plausibility of triplets by calculating the distances between embedding vectors in the representation space. The goal of this type of scoring function is to keep the head/tail entity vector close to the tail/head entity vector after it is transformed through the relation vector. For example, TransE (Bordes et al., 2013), considered the first model to employ a geometric distance scoring function, assumes that triplets (h, r, t) in knowledge graphs should satisfy the expression h + r ≈ t.
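The h + r ≈ t assumption can be sketched in a few lines (a toy illustration, not code from any of the cited papers). It also makes TransE's well-known weakness with 1-to-N relations visible: for a fixed (h, r), every valid tail must embed near the single point h + r.

```python
import numpy as np

def transe_score(h, r, t, p=1):
    """TransE plausibility: negative L_p distance -||h + r - t||_p."""
    return -np.linalg.norm(h + r - t, ord=p)

h = np.array([0.2, -0.1, 0.4])
r = np.array([0.3, 0.5, -0.2])
t_good = h + r                      # satisfies h + r = t exactly, score 0
t_bad = np.array([1.0, -1.0, 0.0])  # a corrupted tail, strictly lower score

assert transe_score(h, r, t_good) > transe_score(h, r, t_bad)
```

Higher (less negative) scores indicate more plausible triplets; a perfect translation scores zero, the maximum.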
However, TransE struggles with more complex relation types, such as one-to-many (1-to-N), many-to-one (N-to-1) and many-to-many (N-to-N). To address this limitation, several models using distance-based scoring functions have been proposed. For example, Rotate3D (Gao et al., 2020) maps entities to a 3D space, defining the relation as a rotation from the head entity to the tail entity. Trans4E (Nayyeri et al., 2021) performs rotations and translations in a quaternion space. RotateCT (Dong et al., 2022) transforms entity coordinates and represents each relation as a rotation in complex space. Rotate4D (Le et al., 2023) employs two distinct rotational transformations to align the head embedding with the tail embedding. DCNE (Dong et al., 2024) maps entities to the dual complex number space, using rotations in the 2D space through the multiplication of dual complex numbers to represent relations. TransERR (Li et al., 2024) encodes knowledge graphs by rotating the head and tail entities with their corresponding unit quaternions.

A common feature of these models is that the plausibility of a triplet is evaluated by calculating the distance between the head entity and the tail entity after transformation. However, these models do not consider information about entities within the semantic space, leading to performance degradation.

3 Preliminaries

This section begins with a definition of the knowledge graph completion task, followed by a brief background on quaternion algebra.

3.1 Knowledge Graph Completion

Knowledge graph completion is the task of predicting missing elements in a triplet (h, r, t). This task can be broken down into three sub-tasks: predicting the head entity (?, r, t), predicting the relation (h, ?, t), and predicting the tail entity (h, r, ?). Following previous research, our work focuses on predicting the head (?, r, t) and tail (h, r, ?) entities, because relation information is needed during training.
3.2 Quaternion Algebra

The quaternion extends the complex number system to four dimensions. In n-dimensional quaternion space Qⁿ, a quaternion p ∈ Qⁿ consists of one real component and three imaginary components. It can be formalized as p = a + bi + cj + dk, where a, b, c, d ∈ Rⁿ and i, j, k are imaginary units. The imaginary parts satisfy Hamilton's rules (Hamilton, 1844): i² = j² = k² = ijk = −1.

Addition. Given two quaternions p = a + bi + cj + dk and q = e + fi + gj + hk ∈ Qⁿ, quaternion addition is defined as:

p + q = (a + e) + (b + f)i + (c + g)j + (d + h)k. (1)

Norm. The norm of a quaternion p ∈ Qⁿ is defined as:

∥p∥ = √(a² + b² + c² + d²). (2)

Inverse. The inverse of a quaternion p ∈ Qⁿ is defined as:

p⁻¹ = p̄ / ∥p∥², with p̄ = a − bi − cj − dk, (3)

where p̄ ∈ Qⁿ is the conjugate of p.

Hamilton product. Given two quaternions p and q, the rotation composing them is computed by the Hamilton product:

p ⊗ q = (a∘e − b∘f − c∘g − d∘h)
      + (b∘e + a∘f + c∘h − d∘g)i
      + (c∘e + a∘g + d∘f − b∘h)j
      + (d∘e + a∘h + b∘g − c∘f)k, (4)

where ∘ denotes the element-wise product.

4 Methodology

In this section, we describe our model in detail, which consists of two main parts:

• Bidirectional rotation: performing a right rotation on the head entity and a reverse rotation on the tail entity to learn rich semantic features.
• Distance-adaptation: incorporating a distance-adaptive translation to learn the geometric distance between entity embeddings.

4.1 Symbol Description

A knowledge graph G = {(h, r, t)} ⊆ E × R × E is a collection of triplets, where E and R are the entity set and relation set. |E| and |R| denote the number of entities and relations, respectively.
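The quaternion operations of Section 3.2 map directly to array code when each component a, b, c, d is a vector in Rⁿ. A minimal sketch (the (4, n) component layout is our own convention, not from the paper):

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of Eq. (4); p and q are (4, n) arrays of (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.stack([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        b1 * a2 + a1 * b2 + c1 * d2 - d1 * c2,  # i
        c1 * a2 + a1 * c2 + d1 * b2 - b1 * d2,  # j
        d1 * a2 + a1 * d2 + b1 * c2 - c1 * b2,  # k
    ])

def conjugate(p):
    """p-bar = a - bi - cj - dk, as in Eq. (3)."""
    return p * np.array([1.0, -1.0, -1.0, -1.0])[:, None]

def normalize(p):
    """Divide by the element-wise norm of Eq. (2) to obtain a unit quaternion."""
    return p / np.sqrt(np.sum(p ** 2, axis=0, keepdims=True))

# sanity check: p (x) p-bar has real part ||p||^2 and zero imaginary parts
p = np.random.default_rng(0).standard_normal((4, 3))
pp = hamilton_product(p, conjugate(p))
```

The final check mirrors the identity p ⊗ p̄ = ∥p∥² that underlies the inverse in Eq. (3).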
Given a triplet (h, r, t), the embeddings of the head entity h, relation r and tail entity t can be represented by quaternions:

h = a_h + b_h i + c_h j + d_h k
r = p + qi + uj + vk (5)
t = a_t + b_t i + c_t j + d_t k

4.2 Part One: Bidirectional Rotation

In Figure 2, we show the differences between our proposed bidirectional rotation and previous methods when modeling entity semantics. Specifically, QuatE (Figure 2(a)) performs a right rotation on the head entity. QuatRE (Figure 2(b)) performs two right rotations on the head entity and one right rotation on the tail entity. Our model (Figure 2(c)) performs a right rotation on the head entity and a reverse rotation on the tail entity.

Figure 2: Comparison of how QuatE, QuatRE and DaBR model entity semantics. These models learn the embeddings of knowledge graphs in quaternion spaces; ⊗ denotes the Hamilton product (Equation 4). Panels: (a) QuatE, (b) QuatRE, (c) DaBR (ours).

We first normalize the relation quaternion r to a unit quaternion, eliminating the scaling effect by dividing by its norm (Equation 2):

r = r / ∥r∥ = (p + qi + uj + vk) / √(p² + q² + u² + v²). (6)

Then, the head entity h is right-rotated using the relation r, i.e., the entity vector and the relation vector undergo a Hamilton product (Equation 4):

h′ = h ⊗ r. (7)

Similarly, the inverse of the relation unit quaternion r is used to perform a reverse rotation of the tail entity t:

t′ = t ⊗ r⁻¹. (8)

Since r is a unit quaternion, we have:

t′ = t ⊗ r⁻¹ = t ⊗ r̄, (9)

where r̄ is the conjugate of r.

Therefore, the scoring function s(h, r, t) for bidirectional rotation modeling entity semantics is defined as:

s(h, r, t) = h′ · t′ = (h ⊗ r) · (t ⊗ r̄). (10)

4.3 Part Two: Distance-Adaptation

As shown in Figure 2, the previous models QuatE (Figure 2(a)) and QuatRE (Figure 2(b)) can only learn the semantic information of an entity but ignore its geometric distance attribute. Our DaBR effectively addresses this limitation by adding a distance-adaptation (Figure 2(c)).
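Equations (6)-(10) compose into a short scoring routine. This is a sketch under our own (4, n) component-array convention, not the paper's released code; note that with a purely real relation the score is symmetric in h and t, which is exactly the symmetry pattern analyzed in the discussion and Appendix A.

```python
import numpy as np

def hprod(p, q):
    """Hamilton product (Eq. 4); p, q are (4, n) component arrays."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.stack([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     b1*a2 + a1*b2 + c1*d2 - d1*c2,
                     c1*a2 + a1*c2 + d1*b2 - b1*d2,
                     d1*a2 + a1*d2 + b1*c2 - c1*b2])

def bidirectional_score(h, r, t):
    """s(h, r, t) = (h x r) . (t x r-bar), with r normalized to a unit quaternion."""
    r = r / np.sqrt(np.sum(r ** 2, axis=0, keepdims=True))    # Eq. (6)
    r_conj = r * np.array([1.0, -1.0, -1.0, -1.0])[:, None]   # r-bar = r^-1 for unit r
    return np.sum(hprod(h, r) * hprod(t, r_conj))             # Eqs. (7)-(10)

rng = np.random.default_rng(0)
h, t = rng.standard_normal((2, 4, 8))
r_real = np.zeros((4, 8))
r_real[0] = rng.standard_normal(8)  # relation with zero imaginary parts
```

With `r_real`, `bidirectional_score(h, r_real, t)` equals `bidirectional_score(t, r_real, h)`; with a generic relation the two generally differ, giving antisymmetry.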
Therefore, to model the geometric distance information, we initialize a distance-adaptive relation embedding r_d = p_d + q_d i + u_d j + v_d k. The geometric distance scoring function d(h, r, t) is then defined as:

d(h, r, t) = ∥h + r_d − t∥₁, (11)

where ∥·∥₁ denotes the ℓ1 norm. Despite its simplicity, we find that this method is effective enough to provide distance information for our model.

4.4 Scoring Function

After obtaining the scoring functions for modeling entity semantics and entity geometric distances, we fuse them into a new scoring function for model training:

φ(h, r, t) = s(h, r, t) + λ d(h, r, t) = (h ⊗ r) · (t ⊗ r̄) + λ∥h + r_d − t∥₁, (12)

where s(h, r, t) is the semantic matching scoring function, d(h, r, t) is the geometric distance scoring function, and λ ∈ R is an adaptive parameter learned by our model.

4.5 Loss Function

Following Trouillon et al. (2016), we formulate the task as a classification problem, and the model parameters are learned by minimizing the following regularized logistic loss:

L = Σ_{(h,r,t) ∈ Ω ∪ Ω⁻} log(1 + exp(−Y_hrt φ(h, r, t))) + η₁∥E∥₂² + η₂∥R∥₂², (13)

where E and R denote the embeddings of all entities and relations, regularized with the ℓ2 norm using regularization rates η₁ and η₂, respectively. Ω⁻ is sampled from the unobserved set Ω′ using uniform sampling, and Y_hrt ∈ {−1, 1} is the label of the triplet (h, r, t).

4.6 Discussion

As described in Chami et al. (2020), knowledge graphs contain complex logical relationships, such as symmetry, antisymmetry, inversion and composition. In this part, we analyze the ability of our DaBR to infer these relationship patterns.

Lemma 1 DaBR can infer the symmetry relationship pattern.
(See proof in Appendix A.1)

Lemma 2 DaBR can infer the antisymmetry relationship pattern. (See proof in Appendix A.2)

Lemma 3 DaBR can infer the inversion relationship pattern. (See proof in Appendix A.3)

Lemma 4 DaBR can infer the composition relationship pattern. (See proof in Appendix A.4)

Table 1: Knowledge graph completion results on the WN18RR and FB15k-237 datasets. SF indicates the scoring function: SM denotes semantic matching, GD denotes geometric distance, and SG denotes our combined semantic matching and geometric distance scoring function. "-" indicates that no result was reported. The same settings apply to Table 2.

| SF | Model | WN18RR: MR(↓) / MRR / H@10 / H@3 / H@1 | FB15k-237: MR(↓) / MRR / H@10 / H@3 / H@1 |
|----|-------|----------------------------------------|-------------------------------------------|
| SM | TuckER (2019) | - / .470 / .526 / .482 / .443 | - / .358 / .544 / .394 / .266 |
| SM | QuatE (2019) | 2314 / .488 / .582 / .508 / .438 | 87 / .348 / .550 / .382 / .248 |
| SM | DualE (2021) | 2270 / .492 / .584 / .513 / .444 | 91 / .365 / .559 / .400 / .268 |
| SM | QuatRE (2022) | 1986 / .493 / .592 / .519 / .439 | 88 / .367 / .563 / .404 / .269 |
| SM | ConvQE (2024d) | - / .487 / .563 / .502 / .447 | - / .366 / .551 / .402 / .273 |
| GD | ATTH (2020) | - / .486 / .573 / .499 / .443 | - / .348 / .540 / .384 / .252 |
| GD | Rotate3D (2020) | 3328 / .489 / .579 / .505 / .442 | 165 / .347 / .543 / .385 / .250 |
| GD | Trans4E (2021) | 1755 / .469 / .577 / .487 / .416 | 158 / .332 / .527 / .366 / .236 |
| GD | RotateCT (2022) | 3285 / .492 / .579 / .507 / .448 | 171 / .347 / .537 / .382 / .251 |
| GD | Rotate4D (2023) | 3167 / .499 / .587 / .518 / .455 | 181 / .353 / .547 / .391 / .257 |
| GD | CompoundE (2023) | - / .491 / .576 / .508 / .450 | - / .357 / .545 / .393 / .264 |
| GD | HAQE (2024e) | - / .496 / .584 / .512 / .450 | - / .343 / .535 / .379 / .247 |
| GD | DCNE (2024) | 3244 / .492 / .581 / .510 / .448 | 169 / .354 / .547 / .393 / .257 |
| GD | FHRE (2024b) | - / .494 / .563 / .510 / .450 | - / .345 / .528 / .375 / .255 |
| GD | TransERR (2024) | 1167 / .501 / .605 / .520 / .450 | 125 / .360 / .555 / .396 / .264 |
| SG | DaBR (ours) | 899 / .510 / .622 / .538 / .450 | 83 / .373 / .572 / .410 / .274 |

5 Experiments

In this section, we first introduce the datasets, evaluation protocol, implementation details and baselines. Subsequently, we evaluate our model on four benchmark datasets.

Datasets.
To verify the effectiveness and robustness of our model, we conducted extensive experiments on four standard knowledge graph completion datasets: WN18RR (Dettmers et al., 2018), FB15k-237 (Toutanova and Chen, 2015), WN18 (Bordes et al., 2013) and FB15k (Bordes et al., 2013). The WN18 and FB15k datasets are known to suffer from a data leakage problem, which allows models to easily infer test triplets and consequently score well on the metrics. WN18RR and FB15k-237 were derived as subsets of WN18 and FB15k, respectively; they are designed to address the data leakage concerns and thereby present a more realistic prediction task. Detailed statistics of the four standard datasets are given in Appendix B.

Evaluation protocol. Similar to previous work (Zhang et al., 2019; Li et al., 2024), we employed the filtered evaluation setup described in Bordes et al. (2013) to filter out true triplets during the evaluation process and thus avoid flawed rankings. Our evaluation metrics are Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hits@n (n = 1, 3 or 10), where a smaller MR indicates a better model. The model evaluated on the test set is the one with the highest Hits@10 score on the validation set.

Implementation details. We conduct all our experiments on a single NVIDIA GeForce RTX 4090 with 24GB of memory. The ranges of the hyper-parameters for the grid search are set as follows: the embedding dimension (dim) is selected from {300, 400, 500}; the learning rate (lr) is chosen from {0.01, 0.02, 0.05, 0.1}; and the number of negative triplets sampled (neg) per training triplet is selected from {5, 10}. The regularization rates η₁ and η₂ are adjusted within {0.01, 0.05, 0.1, 0.5}. We create 100 batches of training samples for the different datasets and optimize the loss function with Adagrad (Duchi et al., 2011). All our hyper-parameters are provided in Appendix C.
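MR, MRR and Hits@n are simple functions of each test query's filtered rank; a minimal sketch (our own helper, not taken from the released code):

```python
import numpy as np

def rank_metrics(ranks, ks=(1, 3, 10)):
    """MR, MRR and Hits@k from filtered ranks (rank 1 = correct entity first)."""
    ranks = np.asarray(ranks, dtype=float)
    out = {"MR": float(ranks.mean()), "MRR": float((1.0 / ranks).mean())}
    for k in ks:
        out[f"H@{k}"] = float((ranks <= k).mean())
    return out

# four queries whose correct entities were ranked 1st, 2nd, 4th and 10th
m = rank_metrics([1, 2, 4, 10])
# MR = 4.25, MRR = (1 + 1/2 + 1/4 + 1/10) / 4 = 0.4625, H@10 = 1.0
```

In the filtered setting, other true triplets are removed from the candidate list before the rank is computed, so a correct answer is never pushed down by another valid one.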
It is worth noting that our models do not employ the training strategies of self-adversarial negative sampling (Sun et al., 2019) or N3 regularization with reciprocal learning (Lacroix et al., 2018).

Table 2: Knowledge graph completion results on the WN18 and FB15k datasets.

| SF | Model | WN18: MR(↓) / MRR / H@10 / H@3 / H@1 | FB15k: MR(↓) / MRR / H@10 / H@3 / H@1 |
|----|-------|--------------------------------------|---------------------------------------|
| SM | TuckER (2019) | - / .953 / .958 / .955 / .949 | - / .795 / .892 / .833 / .741 |
| SM | QuatE (2019) | 162 / .950 / .959 / .954 / .945 | 17 / .782 / .900 / .835 / .711 |
| SM | DualE (2021) | 156 / .952 / .962 / .956 / .946 | 21 / .813 / .896 / .850 / .766 |
| SM | QuatRE (2022) | 116 / .939 / .963 / .953 / .946 | 21 / .808 / .896 / .851 / .751 |
| GD | Rotate3D (2020) | 214 / .951 / .961 / .953 / .945 | 39 / .789 / .887 / .832 / .728 |
| GD | Trans4E (2021) | 175 / .950 / .960 / .953 / .944 | 47 / .767 / .892 / .834 / .681 |
| GD | RotateCT (2022) | 201 / .951 / .963 / .956 / .944 | 34 / .794 / .888 / .834 / .737 |
| GD | Rotate4D (2023) | 173 / .952 / .963 / .956 / .946 | 37 / .790 / .887 / .831 / .732 |
| GD | DCNE (2024) | 192 / .952 / .963 / .955 / .945 | 34 / .798 / .888 / .835 / .745 |
| GD | TransERR (2024) | 82 / .953 / .965 / .957 / .945 | 41 / .815 / .896 / .848 / .767 |
| SG | DaBR (ours) | 56 / .954 / .966 / .959 / .946 | 18 / .819 / .900 / .854 / .769 |

Table 3: Ablation results for all datasets (MRR / H@10 / H@3 / H@1).

| Model | WN18RR | FB15k-237 | WN18 | FB15k |
|-------|--------|-----------|------|-------|
| DaBR | .510 / .622 / .538 / .450 | .373 / .572 / .410 / .274 | .954 / .966 / .959 / .946 | .819 / .900 / .854 / .769 |
| Variant I | .505 / .617 / .532 / .445 | .370 / .569 / .404 / .272 | .953 / .964 / .956 / .943 | .816 / .894 / .844 / .766 |
| Variant II | .495 / .580 / .512 / .445 | .368 / .566 / .402 / .270 | .947 / .960 / .954 / .937 | .801 / .890 / .847 / .751 |

Baselines. To verify the effectiveness of our model, we compared DaBR with several strong baseline models, including both well-known models and recently proposed ones with outstanding results. We divide these models according to their scoring function:

1) Semantic matching: TuckER (Balazevic et al., 2019), QuatE (Zhang et al., 2019), DualE (Cao et al., 2021), QuatRE (Nguyen et al., 2022) and ConvQE (Liang et al., 2024d).
2) Geometric distance: ATTH (Chami et al., 2020), Rotate3D (Gao et al., 2020), Trans4E (Nayyeri et al., 2021), RotateCT (Dong et al., 2022), Rotate4D (Le et al., 2023), CompoundE (Ge et al., 2023), HAQE (Liang et al., 2024e), DCNE (Dong et al., 2024), FHRE (Liang et al., 2024b) and TransERR (Li et al., 2024).

For a fair comparison, we report the optimal results for these baselines from their original papers.

5.1 Main Results

The main results of our DaBR and the baselines on the WN18RR and FB15k-237 datasets are listed in Table 1. We categorize the baseline models into two groups by scoring function: semantic matching models are listed in the upper part of the table, and geometric distance models in the lower part. It is worth noting that our model's scoring function is the only one that simultaneously measures both semantics and geometric distance.

From Table 1 we can clearly see that our model achieves the best results on both datasets, except for the H@1 metric on the WN18RR dataset. Specifically, compared to the best-performing semantic matching model, QuatRE, our model improves the MR metric from 1986 to 899 and achieves relative improvements of 3.4%, 5.0%, 3.6% and 2.5% on the MRR, H@10, H@3 and H@1 metrics on the WN18RR dataset. On the FB15k-237 dataset, our model improves the MR metric from 88 to 83, with relative improvements of 1.6%, 1.5%, 1.4% and 1.8% on the MRR, H@10, H@3 and H@1 metrics. Compared to the latest and best-performing geometric distance model, TransERR, our model improves the MR metric from 1167 to 899 and achieves relative improvements of 1.8%, 2.8% and 3.4% on the MRR, H@10 and H@3 metrics on the WN18RR dataset.
On the FB15k-237 dataset, our model improves the MR metric from 125 to 83, with relative improvements of 3.6%, 3.0%, 3.5% and 3.7% on the MRR, H@10, H@3 and H@1 metrics, respectively.

The KGC results on the WN18 and FB15k datasets are shown in Table 2, which illustrates our model's superiority over all previous models on the FB15k dataset. On the WN18 dataset, our model achieves the best results on all metrics except H@1, where it achieves second place. In conclusion, our model not only achieves optimal results compared to semantic matching models, but also achieves competitive results compared to geometric distance models.

Figure 3: MRR scores of the QuatE, QuatRE and our DaBR models over 0 to 5200 training epochs. Panels: (a) 1-to-N, (b) N-to-1, (c) N-to-N.

6 Analysis

To demonstrate the superiority of our model, we conducted in-depth analysis experiments from various aspects. The experimental results and analysis are as follows:

6.1 Ablation Analysis

In this section, we evaluate the efficacy of bidirectional rotation and distance-adaptation within our DaBR. We designed the following model variants:

Variant I: we remove the rotation of the tail entity and keep the rotation of the head entity.
Variant II: we remove the distance-adaptation, so that DaBR degenerates into a semantic matching model.

The results of the ablation experiments are shown in Table 3. From the table, we draw the following conclusions: 1) The rotation of the tail entity and the distance-adaptation are both important parts of our model. 2) When the tail rotation is removed, the model (i.e., Variant I) still achieves the best results compared to the models in Table 1 and Table 2. We attribute this to the fact that our model can measure both the semantics of entities and the embedding distance between entities. 3) When the distance-adaptation is removed, the model's (i.e., Variant II) performance decreases dramatically on all datasets.
It is worth noting that even this variant still achieves optimal results on most datasets compared to the semantic matching models.

6.2 Parameter Comparison Analysis

To analyze the number of parameters, we compared our DaBR with the best semantic matching model (QuatRE) and the best geometric distance model (TransERR). Given the same embedding dimension n, QuatRE and TransERR have (|E| × n + 3 × |R| × n) parameters, while our DaBR has (|E| × n + 2 × |R| × n) parameters, where E and R are the entity set and relation set. Compared to QuatRE and TransERR, our model achieves better results with fewer parameters.

6.3 Relationship Type Analysis

To explore the robustness of our model across different relation types (one-to-many (1-to-N), many-to-one (N-to-1) and many-to-many (N-to-N)), we compared DaBR with QuatE and QuatRE on the WN18RR dataset. For QuatE and QuatRE, we reproduced the models following the hyper-parameter settings of their papers.

In accordance with the calculation rules set out in Bordes et al. (2013), the test set of WN18RR is divided into three categories: 1-to-N, N-to-1 and N-to-N. The division results are shown in Appendix D, where η_h and η_t represent the average degree of head and tail entities, respectively.

Figure 3 shows the MRR scores of the QuatE, QuatRE and DaBR models over 0 to 5200 training epochs, demonstrating the effectiveness of our model in modelling different types of relationships. In particular, the model is superior in dealing with 1-to-N relationships, where one head entity can form fact triplets with multiple tail entities. We attribute this enhancement to the distance-adaptive embedding of our model.

6.4 Visualization Analysis

In this section, to explore the embedding results of our model after distance-adaptive embedding, we visualize the tail entity embeddings using t-SNE (van der Maaten and Hinton, 2008).
Suppose (h_i, r_j) is a query, where h_i and r_j are the head entity and the relation, respectively. If (h_i, r_j, t_k) is valid, the entity t_k is an answer to the query (h_i, r_j). We selected 9 queries from the FB15k-237 dataset, each of which has 50 answers. For more details about the 9 queries, please refer to Appendix E.

Figure 4: Visualization of the embeddings of tail entities using t-SNE. A point represents a tail entity; points in the same color represent tail entities that share the same (h_i, r_j) context. Panels: (a) QuatE (epoch=1), (b) QuatE (epoch=100), (c) QuatRE (epoch=1), (d) QuatRE (epoch=100), (e) TransERR (epoch=1), (f) TransERR (epoch=100), (g) DaBR (epoch=1), (h) DaBR (epoch=100).

We then use t-SNE to visualize the answer embeddings generated at epoch 1 and epoch 100 by the semantic matching models QuatE and QuatRE, the geometric distance model TransERR, and our combined semantic and geometric distance model DaBR. Figure 4 shows the visualization results². Each entity is represented by a 2D point, and points in the same color represent tail entities with the same (h_i, r_j) context (i.e., query).

Specifically, our model (Figure 4(g)) already demonstrates better embeddings in the first epoch compared to QuatE, QuatRE and TransERR. At epoch 100, our model (Figure 4(h)) shows clear inter-cluster separability, with entities within each cluster (intra-cluster) also well-separated from one another. In contrast, the semantic matching models QuatE (Figure 4(b)) and QuatRE (Figure 4(d)) heavily overlap entities within clusters despite inter-cluster separability, while for the geometric distance model TransERR (Figure 4(f)) the clusters are indistinguishable from each other even though entities within the clusters (intra-cluster) are distinguishable.

Table 4 summarizes this analysis, which we attribute to the fact that our model combines semantic matching with entity geometric distance to better measure the plausibility of triplets.
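This kind of qualitative check is easy to reproduce: embed each query's answers, project them to 2D, and color the points by query. The sketch below substitutes a dependency-free PCA projection for the paper's t-SNE and uses synthetic embeddings; only the workflow is illustrated.

```python
import numpy as np

def project_2d(X):
    """Project rows of X to 2D via PCA (a stand-in here for t-SNE)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    return Xc @ Vt[:2].T

# synthetic 'answer embeddings': 3 queries x 50 answers each in 16-d,
# with well-separated cluster centers and small within-cluster noise
rng = np.random.default_rng(0)
centers = 5.0 * rng.standard_normal((3, 16))
X = np.concatenate([c + 0.1 * rng.standard_normal((50, 16)) for c in centers])
Y = project_2d(X)  # (150, 2): scatter-plot these points, colored by query
```

Inter-cluster separability then shows up as well-separated color groups in the 2D scatter, exactly the property Figure 4 inspects.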
²Refer to Appendix F for more visualization results.

Table 4: ✓ indicates a separable ability.

| Model | intra-cluster | inter-cluster |
|-------|---------------|---------------|
| QuatE | | ✓ |
| QuatRE | | ✓ |
| TransERR | ✓ | |
| DaBR | ✓ | ✓ |

Figure 5: DaBR (a) with distance-adaptation and (b) without.

6.5 Visualization Ablation Analysis

In Figure 5, we visualize our model in the first epoch with the distance-adaptive embedding removed. The visualization without the distance-adaptive embedding (Figure 5(b)) is clearly worse than the one with it (Figure 5(a)). This ablation visualization further illustrates the advantage of distance-adaptive embedding.

7 Conclusion

We note that existing quaternion models based on semantic matching diminish the separability of entities, while distance scoring functions weaken the semantics of entities. To address this issue, we propose a novel quaternion knowledge graph embedding model. By combining semantic matching with entity geometric distance, our model provides a robust and comprehensive framework for knowledge graph embedding. We provide mathematical proofs to demonstrate that our model can handle complex logical relationships. Visualization results show that our model can learn the geometric distance property between entities to achieve both inter-cluster and intra-cluster separability.

Limitations

The H@1 performance of our model on the WN18 and WN18RR datasets is not optimal. In addition, like most knowledge graph embedding models, our model cannot predict new entities that do not exist in the training data.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 62066033); the Inner Mongolia Natural Science Foundation (Nos. 2024MS06013, 2022JQ05); and the Inner Mongolia Autonomous Region Science and Technology Programme Projects (Nos. 2023YFSW0001, 2022YFDZ0059, 2021GG0158). We also thank all anonymous reviewers for their insightful comments.
References

Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5185–5194, Hong Kong, China. Association for Computational Linguistics.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 2787–2795. Curran Associates Inc.

Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Xiaochun Cao, and Qingming Huang. 2021. Dual quaternion knowledge graph embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8):6894–6902.

Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher Ré. 2020. Low-dimensional hyperbolic knowledge graph embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6901–6914, Online. Association for Computational Linguistics.

Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Press.

Yao Dong, Qingchao Kong, Lei Wang, and Yin Luo. 2024. Dual complex number knowledge graph embeddings. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5391–5400, Torino, Italia. ELRA and ICCL.

Yao Dong, Lei Wang, Ji Xiang, Xiaobo Guo, and Yuqiang Xie. 2022. RotateCT: Knowledge graph embedding by rotation and coordinate transformation in complex space.
In Proceedings of the 29th International Conference on Computational Linguistics, pages 4918–4932, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.

Prayushi Faldu, Indrajit Bhattacharya, and Mausam. 2024. RetinaQA: A robust knowledge base question answering model for both answerable and unanswerable questions. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6643–6656, Bangkok, Thailand. Association for Computational Linguistics.

Chang Gao, Chengjie Sun, Lili Shan, Lei Lin, and Mingjiang Wang. 2020. Rotate3D: Representing relations as rotations in three-dimensional space for knowledge graph embedding. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, CIKM '20, pages 385–394, New York, NY, USA. Association for Computing Machinery.

Xiou Ge, Yun Cheng Wang, Bin Wang, and C.-C. Jay Kuo. 2023. Compounding geometric operations for knowledge graph completion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6947–6965, Toronto, Canada. Association for Computational Linguistics.

Wm R. Hamilton. 1844. Theory of quaternions. Proceedings of the Royal Irish Academy (1836-1869), 3:1–16.

Timothee Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2863–2872. PMLR.

Thanh Le, Huy Tran, and Bac Le. 2023. Knowledge graph embedding with the special orthogonal group in quaternion space for link prediction. Knowledge-Based Systems, 266:1–26.
Jiang Li, Xiangdong Su, Fujun Zhang, and Guanglai Gao. 2024. TransERR: Translation-based knowledge graph embedding via efficient relation rotation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16727–16737, Torino, Italia. ELRA and ICCL.

Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, Fuchun Sun, and Kunlun He. 2024a. A survey of knowledge graph reasoning on graph types: Static, dynamic, and multi-modal. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–20.

Qiuyu Liang, Weihua Wang, Feilong Bao, and Guanglai Gao. 2024b. Fully hyperbolic rotation for knowledge graph embedding. In ECAI 2024 - 27th European Conference on Artificial Intelligence, 19-24 October 2024, Santiago de Compostela, Spain, pages 1615–1622. IOS Press.

Qiuyu Liang, Weihua Wang, Lei Lv, and Feilong Bao. 2024c. Knowledge graph-enhanced recommendation with box embeddings. In Chinese Computational Linguistics, pages 274–288.

Qiuyu Liang, Weihua Wang, Jie Yu, and Feilong Bao. 2024d. Effective knowledge graph embedding with quaternion convolutional networks. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 183–196. Springer.

Qiuyu Liang, Weihua Wang, Jie Yu, and Feilong Bao. 2024e. Hierarchy-aware quaternion embedding for knowledge graph completion. In 2024 International Joint Conference on Neural Networks (IJCNN), pages 1–8.

Renê Mendes, Dimas Oliveira, and Victor Garcia. 2024. Application of generative AI as an enterprise wikibase knowledge graph Q&A system. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 35–42, Bangkok, Thailand. Association for Computational Linguistics.
Mojtaba Nayyeri, Gokce Muge Cil, Sahar Vahdati, Francesco Osborne, Mahfuzur Rahman, Simone Angioni, Angelo Salatino, Diego Reforgiato Recupero, Nadezhda Vassilyeva, Enrico Motta, and Jens Lehmann. 2021. Trans4E: Link prediction on scholarly knowledge graphs. Neurocomputing, 461:530–542.

Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, and Dinh Phung. 2022. QuatRE: Relation-aware quaternions for knowledge graph embeddings. In Companion Proceedings of the Web Conference 2022, WWW '22, pages 189–192, New York, NY, USA. Association for Computing Machinery.

Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations.

Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66. Association for Computational Linguistics.

Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pages 2071–2080. JMLR.org.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605.

Cunda Wang, Weihua Wang, Qiuyu Liang, Feilong Bao, and Guanglai Gao. 2024a. Unifying dual-space embedding for entity alignment via contrastive learning. arXiv preprint arXiv:2412.05028.

Cunda Wang, Weihua Wang, Qiuyu Liang, Jie Yu, and Guanglai Gao. 2024b. GSEA: Global structure-aware graph neural networks for entity alignment. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 187–199. Springer.

Yilin Wen, Zifeng Wang, and Jimeng Sun. 2024.
MindMap: Knowledge graph prompting sparks graph of thoughts in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10370–10388, Bangkok, Thailand. Association for Computational Linguistics.
Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embeddings. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA. Curran Associates Inc.

Appendix

A Proof
Given h = a_h + b_h i + c_h j + d_h k, r = p + q i + u j + v k, t = a_t + b_t i + c_t j + d_t k, where r is a unit quaternion after the normalization operation. We can set λ = 0, and then our scoring function can be simplified as follows:

φ(h, r, t) = h ⊗ r · t ⊗ r̄
  = [ (a_h∘p − b_h∘q − c_h∘u − d_h∘v)
    + (a_h∘q + b_h∘p + c_h∘v − d_h∘u) i
    + (a_h∘u − b_h∘v + c_h∘p + d_h∘q) j
    + (a_h∘v + b_h∘u − c_h∘q + d_h∘p) k ]
  · [ (a_t∘p + b_t∘q + c_t∘u + d_t∘v)
    + (−a_t∘q + b_t∘p − c_t∘v + d_t∘u) i
    + (−a_t∘u + b_t∘v + c_t∘p − d_t∘q) j
    + (−a_t∘v − b_t∘u + c_t∘q + d_t∘p) k ]    (14)

where ⊗ is the Hamilton product, ∘ denotes the element-wise product, and "·" is the inner product.

A.1 Proof of Symmetry pattern
In order to prove the symmetry pattern, we need to prove the following equality:

h ⊗ r · t ⊗ r̄ = t ⊗ r · h ⊗ r̄.    (15)

The symmetry property of DaBR can be proved by setting the imaginary parts of r to zero.

A.2 Proof of Antisymmetry pattern
In order to prove the antisymmetry pattern, we need to prove the following inequality when the imaginary components are nonzero:

h ⊗ r · t ⊗ r̄ ≠ t ⊗ r · h ⊗ r̄.    (16)

We expand the right term:

t ⊗ r · h ⊗ r̄
  = [ (a_t∘p − b_t∘q − c_t∘u − d_t∘v)
    + (a_t∘q + b_t∘p + c_t∘v − d_t∘u) i
    + (a_t∘u − b_t∘v + c_t∘p + d_t∘q) j
    + (a_t∘v + b_t∘u − c_t∘q + d_t∘p) k ]
  · [ (a_h∘p + b_h∘q + c_h∘u + d_h∘v)
    + (−a_h∘q + b_h∘p − c_h∘v + d_h∘u) i
    + (−a_h∘u + b_h∘v + c_h∘p − d_h∘q) j
    + (−a_h∘v − b_h∘u + c_h∘q + d_h∘p) k ].    (17)

We can easily see that these two terms are not equal, since the signs of some terms differ.

A.3 Proof of Inversion pattern
To prove the inversion pattern, we need to prove that:

h ⊗ r · t ⊗ r̄ = t ⊗ r̄ · h ⊗ r̄⁻¹.    (18)

We expand the right term:

t ⊗ r̄ · h ⊗ r̄⁻¹ = t ⊗ r̄ · h ⊗ r
  = [ (a_t∘p + b_t∘q + c_t∘u + d_t∘v)
    + (−a_t∘q + b_t∘p − c_t∘v + d_t∘u) i
    + (−a_t∘u + b_t∘v + c_t∘p − d_t∘q) j
    + (−a_t∘v − b_t∘u + c_t∘q + d_t∘p) k ]
  · [ (a_h∘p − b_h∘q − c_h∘u − d_h∘v)
    + (a_h∘q + b_h∘p + c_h∘v − d_h∘u) i
    + (a_h∘u − b_h∘v + c_h∘p + d_h∘q) j
    + (a_h∘v + b_h∘u − c_h∘q + d_h∘p) k ].    (19)

We can easily check the equality of these two terms. Since r is a unit quaternion, we have r⁻¹ = r̄.

A.4 Proof of Composition pattern
For composition relationships, we can get that:

(h ⊗ r_2) ⊗ r_3 · (t ⊗ r̄_2) ⊗ r̄_3
  = h ⊗ (r_2 ⊗ r_3) · t ⊗ (r̄_2 ⊗ r̄_3)
  = h ⊗ r_1 · t ⊗ r̄_1    (20)

B Dataset statistics
The detailed statistics of the four standard datasets are shown in Table 6.

C Optimal hyper-parameters
Table 7 shows the optimal hyperparameter settings for our model on the four benchmark datasets. The optimal parameters come from the highest scores of our model on the validation dataset.

D Classification rules
The classification rules and classification results for the WN18RR dataset are in Table 8.

E The queries in t-SNE visualization
In Table 5, we list the nine queries used in the t-SNE visualization (Section 6.4 in the main text). Note that a query is represented as (h, r, ?), where h denotes the head entity and r denotes the relation.

F More visualization results
Figure 6 shows more visualization results.

Index  Query
1  (political drama, /media_common /netflix_genre /titles, ?)
2  (Academy Award for Best Original Song, /award /award_category /winners. /award /award_honor /ceremony, ?)
3  (Germany, /location /location /contains, ?)
4  (Master's Degree, /education /educational_degree /people_with_this_degree. /education /education /major_field_of_study, ?)
5  (broccoli, /food/food/nutrients. /food/nutrition_fact /nutrient, ?)
6  (shooting sport, /olympics /olympic_sport /athletes. /olympics /olympic_athlete_affiliation /country, ?)
7  (synthpop, /music /genre /artists, ?)
8  (Italian American, /people /ethnicity /people, ?)
9  (organ, /music /performance_role /track_performances. /music /track_contribution /role, ?)
Table 5: The queries in t-SNE visualizations.

Dataset    #Ent  #Rel  #Train  #Valid  #Test
WN18RR     40k   11    86k     3k      3k
FB15k-237  14k   237   272k    17k     20k
WN18       40k   18    141k    5k      5k
FB15k      14k   1345  483k    50k     59k
Table 6: Dataset statistics on four datasets.

Dataset    lr    neg  dim  η1    η2
WN18RR     0.1   5    500  0.5   0.01
FB15k-237  0.05  10   500  0.5   0.01
WN18       0.05  5    300  0.05  0.01
FB15k      0.02  10   400  0.05  0.01
Table 7: Optimal hyper-parameters for our DaBR on each dataset.

Category  ηh    ηt    #triplets
1-to-N    <1.5  >1.5  475
N-to-1    >1.5  <1.5  1487
N-to-N    >1.5  >1.5  1130
Table 8: Classification rules and classification results for WN18RR. The last column is the number after division.

[Figure 6: Visualization of the embeddings of tail entities using t-SNE; panels (a)-(l) show QuatE, QuatRE, TransERR, and DaBR at epochs 1, 50, and 100. A point represents a tail entity. Points in the same color represent tail entities that have the same (hr, rj) context.]
| 6 | 1 | The DaBR model has a unique architecture involving quaternion embeddings and bidirectional rotations, likely making it smaller than multi-layered transformer models but larger than simple embeddings. The paper does not mention exact parameter counts, but it's implied that the embedding size can vary (300-500) and relates to entity and relation counts. Assuming around 1000 entities and 500 relations, the parameter count could be roughly 1.5 million. The model was trained on a single NVIDIA GeForce RTX 4090, which has enough memory to handle a batch size of approximately 100 to 200. The choice of hyperparameters and extensive experimental results indicates many epochs (potentially in the hundreds) but since they don't specify epoch counts, we estimate around 300 epochs for robust validation.
Given all this and standard computational practices in the field, training could be expected to take around 6 hours. Since this fits within the 8-hour limit on the aforementioned GPU, the model can indeed be trained in under 8 hours on a single GPU. | yes | Yes | Graph | Distance-Adaptive Quaternion Knowledge Graph Embedding with Bidirectional Rotation | 2024-12-05 0:00:00 | https://github.com/llqy123/dabr | 1 | Dataset inside benchmark folder. | 4 and a half days: 10,000 epochs, each taking about 42 seconds. | https://drive.google.com/file/d/1XLeWvyV4sdoLDoVBzAMB6czlAbOAhB0W/view?usp=sharing | Yes | -- Straightforward: clone the repo and just run the train_FB15k-237 file. |
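The DaBR scoring function and pattern proofs in Appendix A above can be sanity-checked numerically. The sketch below (plain Python, illustrative function names; scalar quaternion components stand in for the element-wise embedding vectors) implements the Hamilton product of Eq. (14) and exercises the symmetry, antisymmetry, and inversion arguments:

```python
def hamilton(q1, q2):
    """Hamilton product q1 ⊗ q2 of quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    """Quaternion conjugate; for a unit quaternion this equals its inverse."""
    return (q[0], -q[1], -q[2], -q[3])

def score(h, r, t):
    """Eq. (14): phi(h, r, t) = (h ⊗ r) · (t ⊗ conj(r))."""
    return sum(x * y for x, y in zip(hamilton(h, r), hamilton(t, conj(r))))

h = (1.0, 2.0, 3.0, 4.0)
t = (4.0, 3.0, 2.0, 1.0)
r = (0.5, 0.5, 0.5, 0.5)        # unit quaternion with nonzero imaginary parts
r_real = (1.0, 0.0, 0.0, 0.0)   # imaginary parts set to zero
```

With these values, score(h, r, t) = -25 while score(t, r, h) = 5, matching the antisymmetry claim; with r_real the two scores coincide (symmetry), and score(h, r, t) equals score(t, conj(r), h) (inversion, since r⁻¹ = r̄ for a unit quaternion).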
CIFAR-10 | ABNet-2G-R0 | [] | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28T00:00:00 | https://arxiv.org/abs/2411.19213v1 | [
"https://github.com/dvssajay/New_World"
] | {'Percentage correct': '94.118'} | [
"Percentage correct",
"Top-1 Accuracy",
"Accuracy",
"Parameters",
"Top 1 Accuracy",
"F1",
"Cross Entropy Loss"
] | Given the following paper and codebase:
Paper: ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
Codebase: https://github.com/dvssajay/New_World
Improve the ABNet-2G-R0 model on the CIFAR-10 dataset. The result
should improve on the following metrics: {'Percentage correct': '94.118'}. You must use only the codebase provided.
| ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities

Venkata Satya Sai Ajay Daliparthi
Blekinge Institute of Technology, Karlskrona, Sweden
venkatasatyasaiajay.daliparthi@bth.se

Abstract
Inspired by the Many-Worlds Interpretation (MWI), this work introduces a novel neural network architecture that splits the same input signal into parallel branches at each layer, utilizing a Hyper Rectified Activation referred to as ANDHRA. The branched layers do not merge and form a separate network path, leading to multiple network heads for output prediction. For a network with branching factor 2 at three levels, the total number of heads is 2^3 = 8. The individual heads are jointly trained by combining their respective loss values. However, the proposed architecture requires additional parameters and memory during training due to the additional branches. During inference, the experimental results on CIFAR-10/100 demonstrate that there exists one individual head that outperforms the baseline accuracy, achieving a statistically significant improvement with equal parameters and computational cost.

1. Introduction
As the depth of neural networks (NN) increases, training complexity grows due to the vanishing gradient problem [10]. As gradients pass through each layer, they shrink, leading to ineffective updates of the weights in the earlier layers (close to the input). Existing solutions investigated this problem along different dimensions, including non-linear activations (ReLU [21]), initialization techniques (Xavier [6] and He [7]), batch normalization [14], stochastic optimization (Adam [16]), and network architectures (residual [8] and dense [12] connections). In the network architectures landscape, the prominent ResNets [8] introduced skip-connections between layers to facilitate direct gradient flow in deeper architectures.
The DenseNet [12] connects each layer to every other layer, thus providing each layer with direct access to gradients from all previous layers. Nevertheless, in many cases NNs are trained using a single loss function attached to the final output layer, owing to the traditional network architecture style. Some earlier works introduced methods like the Companion objective [19] and Auxiliary loss [18, 23], where an additional loss function is attached to the earlier layers to improve gradient flow. However, the placement of these auxiliary losses remains arbitrary [19, 25], and the auxiliary prediction is often discarded at the inference stage.

[Figure 1. Comparison of training accuracy progression in baseline and proposed method AB (ANDHRA Bandersnatch), in log-scale graph]

To address the vanishing gradient problem through network architectures, inspired by the Many-Worlds Interpretation (MWI), this work proposes a novel NN architecture that grows exponentially by forming branches/splits at each layer, where different branches independently handle the flow of information, resulting in multiple parallel heads (output layers). A loss function is attached to each individual head, and the whole network is jointly trained by aggregating the individual head losses. The main contributions of this work are as follows:
• A non-merging splitting/branching network module called ANDHRA.
• A network architecture named ANDHRA Bandersnatch (AB) that uses the ANDHRA module at different levels to create network branches.
“The key idea is that by splitting the network into multiple independent branches at each level, the flow of gradients is no longer confined to a single path. This should allow the network to effectively propagate gradients through the layers, as multiple paths are available to carry the gradient backward during training.
”

[arXiv:2411.19213v1 [cs.CV] 28 Nov 2024]

Figure 1 presents the training accuracy progression of the proposed architecture in comparison with the baseline, where the baseline (Baseline 1GR3) network is equivalent to a traditional feed-forward ResNet [8], and the proposed network is ANDHRA Bandersnatch (AB 2GR3). The AB 2GR3 network has a branching factor of 2 at 3 levels, so the total number of heads for this network is 2^3 = 8. Here, one head in AB 2GR3 is equivalent to the baseline in terms of parameters and computational cost. Thus, in Figure 1, the Baseline 1GR3 curve should be compared with AB 2GR3 one head, and AB 2GR3 combined is an ensemble prediction that is inherent to the proposed architecture.
The experimental results on CIFAR-10/100 demonstrate the effectiveness of the proposed architecture by showing statistically significant accuracy improvements over the baseline networks.

2. Method
This section provides background on the source of inspiration for the proposed method, then introduces the proposed ANDHRA module, the Bandersnatch network, and the definition of the training loss for the proposed method.

Source of Inspiration: The Many-Worlds Interpretation (MWI) of quantum mechanics assumes that every quantum measurement leads to multiple branches of reality, with each branch representing a different outcome of a quantum event. It assumes that all possible outcomes of a quantum event actually occur, but in different, non-interacting branches of the universe. These parallel realities exist simultaneously, each one corresponding to a different possibility that could have occurred, leading to the idea that parallel universes are created for every quantum decision. According to MWI, the popular quantum paradox of Schrödinger's Cat is interpreted as both outcomes (the cat being dead and the cat being alive) occurring, but in separate branches of the universe.
There is no collapse of the wave function; the universe simply splits into two branches, one where the cat is dead and one where the cat is alive. A similar idea of parallel realities arising from decisions (as in human choice or action, rather than purely quantum events) has been explored in various ways, often in the context of multiverse theories or alternate realities in science fiction (the Netflix shows Bandersnatch and Dark).

2.1. Ajay N' Daliparthi Hyper Rectified Activation (ANDHRA)
Idea: "The idea is to implement a NN architecture based on MWI where the network splits into multiple 'branches' or 'heads' (representing different paths) that process the same input signal in parallel, each corresponding to different possible outcomes. Akin to how MWI suggests parallel universes in their treatment of parallelism and branching, the NN architecture involves computational paths that exist simultaneously, and those outcomes are handled independently (separate branches or worlds).", as depicted in Figure 2.

[Figure 2. MWI based state changes]

The intuition behind the idea is that by designing a network that grows exponentially, the parent layers are shared among the individual branches; thus the shallow/earlier layers (close to the input) receive multiple gradient updates from each individual branch. Since these individual branches are identical, the updates from multiple branches shouldn't deviate much from the ideal one.

Proposed method: Based on this idea, this work proposes a network module referred to as ANDHRA that splits the given input signal into N (branching factor) parallel branches. The A N'D stands for Ajay and Daliparthi, and HRA stands for Hyper Rectified Activation.
Since the activation function adds non-linearity to the network, this work interprets the activation function as a decision-making point and makes a design decision to introduce the splitting function at the activation layer, the one before reducing the spatial dimensions and passing the signal to the next level, meaning one module per level.
By introducing the ANDHRA module, the network grows exponentially in terms of the number of outputs, parameters, and computational complexity. Let us assume that each layer uses one ANDHRA module, N is the branching factor, and L is the level of the NN. The number of heads H at level L can be expressed as in Eq. 1:

H_L = N^L    (1)

The total number of layers can be expressed as the sum of the layers at each level of the network, as in Eq. 2:

Layers up to level L = H_0 + H_1 + H_2 + ... + H_L    (2)

By substituting the formula in Eq. 1 into Eq. 2:

Layers up to level L = 1 + N + N^2 + N^3 + ... + N^L    (3)

Equation 3 resembles a classic geometric series, where the first term is 1 and the common ratio is N. The sum of the first L+1 terms of a geometric series is given by the formula:

S_L = (N^(L+1) - 1) / (N - 1)    (4)

∴ Layers up to level L = (N^(L+1) - 1) / (N - 1)    (5)

Where:
• N is the branching factor.
• L is the level number, starting from 0.

2.2. ANDHRA Bandersnatch (AB) Network
The Bandersnatch network is a NN implemented using the ANDHRA module with branching factor N = 2, denoted as ANDHRA Bandersnatch 2G (where G stands for generations, also denoting the network growth rate/common ratio). It assumes that the network splits into two outcomes at each level. Based on the dataset (input image resolution), the levels are decided in a network architecture. Figure 3 presents the baseline and Bandersnatch-2G network architectures side-by-side, in which there are four levels (based on the CIFAR input resolution of 32x32), and the ANDHRA module is placed three times, at levels 1, 2, and 3.
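Equations (1)–(5) above are easy to sanity-check with a short counting sketch (plain Python; function names are illustrative, not from the released codebase):

```python
def num_heads(n, level):
    # Eq. (1): H_L = N^L heads at a given level for branching factor N
    return n ** level

def num_layers(n, max_level):
    # Eq. (5): geometric sum 1 + N + ... + N^L = (N^(L+1) - 1) / (N - 1)
    return (n ** (max_level + 1) - 1) // (n - 1)

# Bandersnatch-2G with three branching levels: 8 heads, 15 Conv blocks total.
heads = num_heads(2, 3)
layers = num_layers(2, 3)
```

The closed form in Eq. (5) agrees with summing Eq. (1) level by level, which is exactly the manual count of Conv blocks in Figure 3.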
The baseline architecture is implemented by replicating ResNet [8], and the Bandersnatch-2G is implemented to match the baseline for a given individual head; this can also be observed from Figure 3. Using Eq. 1, the total number of heads for a 3-leveled network with branching factor 2 is 2^3 = 8. Thus, the Bandersnatch-2G network consists of 8 identical heads, and the baseline is identical to an individual head in terms of parameters and computational complexity.
In Figure 3, the Conv layer at level 0 (with 3 in-filters and 64 out-filters), also the first Conv layer, receives gradient updates from eight heads; the two Conv layers at level 1 each receive gradient updates from four heads, and the pattern repeats until the end.
Network Notation: Each Conv block is followed by a ResBlock (R); the depth of the ResBlock is decided during experimentation (R-Depth). A network with R0 means zero residual blocks are present in the network. For networks with R value 3, three residual blocks are stacked on top of each other, where each residual block consists of two Conv layers and a skip-connection. For any given ResBlock, the number of input filters and output filters are the same. The Conv layers represented in Figure 3 have stride 2 and a point-wise (1x1 Conv) skip connection. Before passing the individual heads into linear layers, there is an average pooling layer with kernel size 4. Since there are 8 heads, during inference, the individual head predictions are majority-voted to get the combined prediction.
Calculating the number of layers: Using Equations 2, 3, and 4, the total number of layers for levels 0, 1, 2, and 3 in a Bandersnatch-2G network can be calculated as follows.
For each level: H_0 = 1, H_1 = 2, H_2 = 4, H_3 = 8.
The total number of Conv layers up to level 3 is:

Total layers up to level 3 = 1 + 2 + 4 + 8 = 15

Using the geometric sum formula:

Total layers up to level 3 = (2^(3+1) - 1) / (2 - 1) = (16 - 1) / 1 = 15

Thus, the total number of layers up to level 3 is 15; this can also be manually verified by counting the number of Conv blocks at each level of the Bandersnatch-2G network in Figure 3.

2.3. Training the ANDHRA Bandersnatch network
While training, each head is assigned a loss function, and these individual losses are combined by summing and averaging. Let L_1, L_2, ..., L_N be the individual losses for the N heads, where each L_i corresponds to the loss computed for the i-th head of the network. The final loss L_total passed for back-propagation is the average of all individual losses, represented in Equation 6:

L_total = (1/N) · Σ_{i=1}^{N} L_i    (6)

The reason for summing and averaging the losses is to create a global loss that represents the overall error across all heads. The averaging ensures that the optimization process treats each head equally, which might help avoid overfitting to any one branch of the network, ensuring that each head contributes equally to the final loss. For the Bandersnatch network with 8 heads, the total loss from Eq. 6 can be written as:

L_total = 0.125 · (L_1 + L_2 + L_3 + L_4 + L_5 + L_6 + L_7 + L_8)    (7)

3. Evaluation
3.1. Experiment Setup
Each network is trained five times, and the mean and standard deviation values are reported.
The training hyper-parameters are kept the same for both the baseline and Bandersnatch networks, and experiments are conducted by replacing just the network (the training and validation functions need adjustments to support the Bandersnatch 2G network):
Figure 3.
From the left side: baseline network, the levels & output shapes chart, and the ANDHRA Bandersnatch 2G network.

• Dataset: CIFAR-10/100
• Training data transforms: RandomCrop(32, padding=4), RandomHorizontalFlip(), and Normalize. For validation data, only Normalization.
• Batch size: 128
• Epochs: 200
• Loss: CrossEntropyLoss
• Optimizer: SGD (momentum=0.9, weight decay=5e-4)
• Learning rate: 0.1
• Learning rate scheduler: Cosine Annealing (T_max=200)
• Performance metric: Top-1 accuracy

Experiment Hypothesis: Since the baseline is identical to any individual network branch (head) in the Bandersnatch 2G network (see Figure 3), if any individual head outperforms the baseline accuracy, then during inference that particular head can be detached and used for inference, meaning the performance of the network improves without adding computation and parameter overhead.
To check whether the experiment hypothesis holds true, a statistical significance test (paired t-test) is performed between the results of each baseline variant and its corresponding top-performing head in the Bandersnatch 2G network. If the p-value is equal to or less than 0.05, then the prediction distributions (5 runs) are considered statistically significant.

3.2. Experiment results
In Tables 1 and 2, the first column represents the depth of the residual blocks placed at each level (shown in Figure 3) of the network (refer to Section 2.2, network notation); the second column represents the performance of the baseline networks; the third column represents the performance of the top-performing head out of the eight heads in the Bandersnatch 2G network; and the fourth column represents the combined prediction of the 8 heads. During the comparison, the baseline performance (col. 2) is matched with the top-performing head (col. 3) out of the 8 heads.
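The paired t-test used for the significance check above can be sketched with a hand-rolled t statistic (stdlib only; the run values below are synthetic placeholders, not the paper's raw runs, and the critical value is the standard two-tailed t for alpha = 0.05 with df = 4, matching 5 paired runs):

```python
import math

def paired_t_statistic(baseline_runs, head_runs):
    """t statistic of a paired t-test over matched training runs."""
    diffs = [h - b for h, b in zip(head_runs, baseline_runs)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

T_CRIT_DF4 = 2.776  # two-tailed critical value, alpha = 0.05, df = 5 - 1

# Synthetic accuracies for 5 paired runs (illustrative only).
baseline = [93.3, 93.5, 93.6, 93.7, 93.6]
top_head = [94.0, 94.1, 94.2, 94.2, 94.1]
significant = abs(paired_t_statistic(baseline, top_head)) > T_CRIT_DF4
```

A |t| above the critical value corresponds to p ≤ 0.05, i.e. a statistically significant difference between the matched runs.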
Thus, in the fifth and sixth columns, the statistically significant difference and mean squared error are measured between the 5 runs of the baseline and the top-performing head, columns (2 and 3).
Table 1 presents results on CIFAR-10, where the top-performing head in the ANDHRA Bandersnatch (2G) network outperforms the baseline at residual depths (0-3) with a statistically significant difference. The experiment hypothesis holds true in all cases, at every depth.
Table 2 presents results on CIFAR-100, where the top-performing head in ANDHRA Bandersnatch (2G) outperforms the baseline at residual depths (1-3) with a statistically significant difference. Except in the case of residual depth (0), where the proposed method slightly under-performs the baseline and no statistically significant difference is observed, the experiment hypothesis holds true.
Furthermore, between Tables 1 and 2, the performance difference is higher in Table 2 (CIFAR-100), specifically in rows 3 and 4 with residual depths 2 & 3; this is an interesting result, demonstrating the effectiveness of the proposed method. This difference can also be observed through the high mean squared error in rows 3 and 4 (in Table 2).

4. Ablation study on ensemble prediction methods
Since the proposed architecture consists of multiple network predictions, the combined/ensemble prediction is used for the joint training of individual heads. Thus, an ablation study is conducted to compare different ensemble techniques on ANDHRA Bandersnatch (AB) networks trained

R-Depth  Baseline (1G)   AB (2G) Top-Head  AB (2G) Combined  Significance  Mean Sq. Error
R0       93.546 ±0.190   94.118 ±0.099     94.738 ±0.090     Yes           0.404
R1       95.202 ±0.097   95.536 ±0.078     95.890 ±0.099     Yes           0.138
R2       95.366 ±0.171   95.900 ±0.127     96.230 ±0.108     Yes           0.334
R3       95.474 ±0.162   96.088 ±0.065     96.378 ±0.023     Yes           0.418
Table 1.
Experimental results on CIFAR-10 (compare columns 2 and 3).

R-Depth  Baseline (1G)   AB (2G) Top-Head  AB (2G) Combined  Significance  Mean Sq. Error
R0       73.982 ±0.184   73.930 ±0.233     77.186 ±0.153     No            0.143
R1       77.952 ±0.145   78.792 ±0.173     81.214 ±0.114     Yes           0.733
R2       78.676 ±0.324   80.354 ±0.084     82.422 ±0.113     Yes           2.910
R3       78.610 ±0.361   80.830 ±0.116     82.784 ±0.128     Yes           5.007
Table 2. Experimental results on CIFAR-100 (compare columns 2 and 3).

R-Depth  Majority Voting  Average Probability  Product of Experts (PoE)  Rank-Based Voting
R0       94.738 ±0.090    94.892 ±0.110        94.846 ±0.139             94.818 ±0.113
R1       94.890 ±0.099    96.052 ±0.119        96.094 ±0.095             95.918 ±0.098
R2       96.230 ±0.108    96.348 ±0.102        96.344 ±0.096             96.294 ±0.108
R3       96.378 ±0.023    96.504 ±0.108        96.508 ±0.101             96.428 ±0.037
Table 3. Ablation study on ensemble prediction methods of the Bandersnatch network on CIFAR-10.

R-Depth  Majority Voting  Average Probability  Product of Experts (PoE)  Rank-Based Voting
R0       77.186 ±0.153    77.662 ±0.297        78.026 ±0.238             77.664 ±0.218
R1       81.214 ±0.114    81.506 ±0.180        81.712 ±0.125             81.516 ±0.132
R2       82.422 ±0.113    82.584 ±0.126        82.612 ±0.090             82.460 ±0.119
R3       82.784 ±0.128    82.932 ±0.108        82.950 ±0.079             82.872 ±0.138
Table 4. Ablation study on ensemble prediction methods of the Bandersnatch network on CIFAR-100.

on CIFAR-10/100 in Section 3. Note that the default ensemble method used for the experiments in Section 3 is simple majority voting.

4.1. Selected ensemble techniques
Let:
• N: Number of heads
• y_i: Prediction of the i-th head
• p_i: Softmax probability distribution from the i-th head
• ŷ: Final combined prediction

1. Majority Voting [1]
This strategy selects the class based on the most frequent vote among the multiple heads. By stacking all the predictions from the heads into a tensor, the mode across the predictions for each sample is calculated, as shown in Equation 8:

ŷ = mode([y_1, y_2, ..., y_N])    (8)

2.
Average Probability [4]
This strategy averages the probability distributions from each head and chooses the class with the highest average probability. The probabilities from all heads are stacked, the mean is computed, and the class with the highest average probability is chosen, as shown in Equation 9:

ŷ = argmax_c ( (1/N) · Σ_{i=1}^{N} p_i[c] )    (9)

3. Product of Experts (PoE) [9]
This strategy assumes that the heads are "experts," and their probabilities are multiplied (in log space) to combine their opinions. The probabilities from all heads are stacked; the log of each is taken, the logs are summed, and the result is exponentiated to get the combined probability, where the class with the highest combined probability is selected, as shown in Equation 10:

ŷ = argmax_c exp( Σ_{i=1}^{N} log(p_i[c] + ε) )    (10)

4. Rank-Based Voting [2]
This strategy assigns higher weight to the top-ranked classes for each head. For each class, the rank scores are calculated across all heads. The ranking values are added to a tensor, where each class's rank gets added to its corresponding position, and the class with the highest rank score is chosen. Let r_i[c] denote the rank of class c for head i; the rank-based voting is shown in Equation 11:

ŷ = argmax_c Σ_{i=1}^{N} 1 / r_i[c]    (11)

4.2. Ablation study results
From Table 3, the ablation results on CIFAR-10, similar performance is observed between the techniques average probability and product of experts; they outperform majority voting and rank-based voting.
In Table 4, the ablation results on CIFAR-100, the product of experts outperforms the other techniques. Similar to Table 3, average probability shows adequate performance.

5. Related Work
The Inception [23] module proposed to split the feature map and process it with parallel convolutional layers of different kernel sizes, for capturing features at different scales. The ResNeXt [26] extended the ResNet [8] to increase the width of the network by proposing cardinality, the number of independent splits. A similar concept of using multiple parallel convolutions has been investigated in Wide-ResNet [27], FractalNet [18], and Res2Net [5]. Through model architecture search methods, RegNet [22], MobileNetV3 [11], and EfficientNet [24] balance between depth, width, and scaling.
Grouped Convolutions [17] are a separate branch of convolutional layers that divide the channels in an input feature map into multiple groups, where each group is processed individually, thus reducing the computational complexity of the convolution operations. ShuffleNetV2 [20], CondenseNet [13], and MobileNetV3 [11] demonstrated the effectiveness of grouped convs in designing light-weight networks. In Xception [3], each channel is processed independently and a 1x1 convolution is used to combine the channels; this is a special case of grouped convolution where the number of groups is equal to the number of channels in the input feature map.
Nevertheless, the existing works merge or concatenate feature maps after parallel processing/splitting. In contrast, this work proposes to maintain an independent branch after splitting that continues until the output layer of the network, leading to multiple network heads for prediction.
On the other hand, the auxiliary loss [23, 25] concept proposes to introduce additional losses at intermediate layers to improve the training of earlier layers (close to the input). During inference, the auxiliary heads are discarded and the final output is considered for prediction; this can be viewed as a regularization technique [23]. The concept of applying multiple loss functions is prominent in multitask learning [15], where each loss learns to solve a specific task; these losses are combined with the primary loss for training on multiple tasks simultaneously.
Instead, this work proposes training a network with multiple identical heads, where each head is treated with a loss function and the total losses are summed and scaled before proceeding with gradient updates.

6. Conclusions
This work proposes a novel NN architecture that splits the network into parallel branches, where the multiple network heads are jointly trained. Due to the shared parent branches, the earlier (close to input) layers in the network receive gradient updates from multiple output heads, leading to faster convergence of the individual heads (compared to the baseline, as shown in Figure 1). The experimental results on CIFAR-10/100 demonstrate a statistically significant difference from adopting the proposed architecture for simple ResNet-style baselines. Unlike traditional methods, the ensemble prediction is inherent to the proposed architecture. Moreover, the proposed method is analogous to existing network modules, thus paving a path forward for experimentation.

References
[1] Krizhevsky Alex. Learning multiple layers of features from tiny images. https://www.cs.toronto.edu/kriz/learning-features-2009-TR.pdf, 2009. 5
[2] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89–96, 2005. 6
[3] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258, 2017. 6
[4] Thomas G Dietterich. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pages 1–15. Springer, 2000. 5
[5] Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr. Res2net: A new multi-scale backbone architecture. IEEE transactions on pattern analysis and machine intelligence, 43(2):652–662, 2019.
[6] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010. 1
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015. 1
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. 1, 2, 3, 6
[9] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002. 5
[10] Sepp Hochreiter. Recurrent neural net learning and vanishing gradient. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(2):107–116, 1998. 1
[11] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1314–1324, 2019. 6
[12] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017. 1
[13] Gao Huang, Shichen Liu, Laurens Van der Maaten, and Kilian Q Weinberger. Condensenet: An efficient densenet using learned group convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2752–2761, 2018. 6
[14] Sergey Ioffe. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 1
[15] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7482–7491, 2018. 6
[16] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. 1
[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. 6
[18] Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. 1, 6
[19] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In Artificial intelligence and statistics, pages 562–570. PMLR, 2015. 1
[20] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV), pages 116–131, 2018. 6
[21] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814, 2010. 1
[22] Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10428–10436, 2020. 6
[23] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions.
In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 1–9, 2015. 1, 6 [24] Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In International conference on machine learning , pages 10096–10106. PMLR, 2021. 6 [25] Surat Teerapittayanon, Bradley McDanel, and Hsiang-Tsung Kung. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd international con- ference on pattern recognition (ICPR) , pages 2464–2469. IEEE, 2016. 1, 6 [26] Saining Xie, Ross Girshick, Piotr Doll ´ar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 1492–1500, 2017. 6 [27] Erwan Zerhouni, D ´avid L ´anyi, Matheus Viana, and Maria Gabrani. Wide residual networks for mitosis detection. In 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017) , pages 924–928. IEEE, 2017. 6 7 ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities Supplementary Material From the main paper results in Table 1, and Table 2, the net- work with residual depth three (R3) is selected for conduct- ing additional experiments in the supplementary material. This selection is motivated by the accuracy of the networks with residual depth three. Just as in the main paper, each network is trained five times and the mean and standard de- viation values are reported. 7. Parametric Activation In Figure 3 (main paper), the ANDHRA module is imple- mented with two identical ReLU layers. However, using parametric activation functions such as PReLU, the defini- tion of two independent layers becomes more coherent due to separate parameters for each branch. As shown in Figure 4 where the two independent PReLU layers are defined with the number of input channels as a parameter. 
Parametric versions of the baseline and the Bandersnatch-2GR3 networks are implemented by replacing the ReLU layers with PReLU (num_parameters = input channels), and the results are presented in Table 5. The results demonstrate that the top-performing head in Bandersnatch-2G outperforms the baseline networks in the parametric activation scenario, aligning with the main paper results from Table 1 and Table 2.

8. ANDHRA module at different levels

In the main paper, for the network Bandersnatch 2G (refer to Figure 3), one ANDHRA module is placed at each network level from level 1 to 3. Thus, the network in Figure 3 consists of three ANDHRA modules, leading to 8 output heads. In this section, an ablation study is performed with:
1. One ANDHRA module = 2 network heads
2. Two ANDHRA modules = 4 network heads

8.1. One ANDHRA module and 2 output heads

Since there are three possibilities for placing the ANDHRA module (at level 1, 2, or 3), three networks (AB2GR3-2H1, AB2GR3-2H2, and AB2GR3-2H3) are implemented, as shown in Figure 5. Note: the network code presented in Figure 8 belongs to this family of networks, with one ANDHRA module placed at level 1 (AB2GR1-2H1).

8.2. Two ANDHRA modules and 4 output heads

Since there are two possibilities for placing two ANDHRA modules (at levels 1-2 or 2-3), two networks (AB2GR3-4H1 and AB2GR3-4H2) are implemented, as shown in Figure 6.

8.3. Results

All five networks (three 2-head (2H) and two 4-head (4H)) are trained on CIFAR-10/100, and the results are presented in Table 6 and Table 7 along with the baseline network (from the main paper, the baseline with ReLU). The statistical significance test is performed between the baseline and the top-performing head in the Bandersnatch network. In Table 6 and Table 7, all the Bandersnatch 2G variants (2H, 4H) outperformed the baseline network in terms of top-1 accuracy with a statistically significant difference.
Further, the network AB2GR3-4H1 performs best among the five Bandersnatch network variants trained in this ablation study.

9. Implementation

This section presents the implementation of the Bandersnatch-2G network through a minimal network with the ANDHRA module placed only at level 1, meaning splitting is performed only once, thus leading to 2 output heads. In this network, the residual module depth is limited to one (R1). The PyTorch code for implementing this minimal network is presented in three parts (in Figures 7, 8, and 9):
1. Network Modules (Figure 7): consists of the three building blocks of the network, namely the ANDHRA module, a residual module with depth 1, and a residual module for pooling and feature-space expansion.
2. Bandersnatch 2G network with 2 heads (Figure 8): consists of the network definition and forward pass, where the ANDHRA module is placed only at level 1 and the network returns two outputs.
3. Training function (Figure 9): consists of the combined loss and the majority-voting prediction over the two output heads.

Table 5. Parametric activation results on CIFAR-10/100, parametric ANDHRA module

CIFAR | Baseline (1G-PReLU) | 2G-PReLU Top-Head | 2G-PReLU Combined | Significance | Mean Sq. Error
10    | 95.352 ±0.175       | 96.146 ±0.042     | 96.394 ±0.069     | Yes          | 0.665
100   | 78.658 ±0.504       | 80.674 ±0.144     | 82.584 ±0.137     | Yes          | 4.378

Figure 4. ANDHRA module with PReLU:

class ANDHRA(nn.Module):
    def __init__(self, in_planes):
        super(ANDHRA, self).__init__()
        self.Relu1 = nn.PReLU(num_parameters=in_planes)
        self.Relu2 = nn.PReLU(num_parameters=in_planes)

    def forward(self, x):
        x1 = self.Relu1(x)
        x2 = self.Relu2(x)
        return x1, x2

Table 6. Ablation study results on CIFAR-10 for the ANDHRA module at different levels

Network         | Top-Head      | Combined      | Significance | Mean Sq. Error
Baseline (1GR3) | 95.474 ±0.162 | -             | -            | -
AB2GR3-2H1      | 95.844 ±0.117 | 95.670 ±0.067 | Yes          | 0.142
AB2GR3-2H2      | 95.922 ±0.150 | 95.972 ±0.104 | Yes          | 0.214
AB2GR3-2H3      | 95.668 ±0.154 | 95.670 ±0.163 | Yes          | 0.084
AB2GR3-4H1      | 95.976 ±0.151 | 96.322 ±0.047 | Yes          | 0.313
AB2GR3-4H2      | 95.906 ±0.160 | 95.882 ±0.170 | Yes          | 0.249

Table 7. Ablation study results on CIFAR-100 for the ANDHRA module at different levels

Network         | Top-Head      | Combined      | Significance | Mean Sq. Error
Baseline (1GR3) | 78.610 ±0.361 | -             | -            | -
AB2GR3-2H1      | 79.660 ±0.260 | 79.674 ±0.182 | Yes          | 1.130
AB2GR3-2H2      | 80.100 ±0.200 | 80.140 ±0.036 | Yes          | 2.301
AB2GR3-2H3      | 79.444 ±0.143 | 79.380 ±0.116 | Yes          | 0.747
AB2GR3-4H1      | 80.484 ±0.141 | 82.188 ±0.260 | Yes          | 3.621
AB2GR3-4H2      | 80.294 ±0.087 | 81.324 ±0.299 | Yes          | 2.991

Figure 5. From the left side: levels chart, AB2GR3-2H1, AB2GR3-2H2, and AB2GR3-2H3 networks
Figure 6. From the left side: levels chart, AB2GR3-4H1, and AB2GR3-4H2 networks

Figure 7. Modules of the network:

import torch.nn as nn

class ANDHRA(nn.Module):  # Proposed splitting module
    def __init__(self):
        super(ANDHRA, self).__init__()
        self.Relu1 = nn.ReLU(inplace=False)
        self.Relu2 = nn.ReLU(inplace=False)

    def forward(self, x):
        x1 = self.Relu1(x)
        x2 = self.Relu2(x)
        return x1, x2

class ResBlock(nn.Module):  # Residual block with equal in/out filters
    def __init__(self, in_planes):
        super(ResBlock, self).__init__()
        # residual function
        self.conv = nn.Sequential(
            nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(in_planes),
            nn.ReLU(inplace=False),
            nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(in_planes))
        # shortcut
        self.shortcut = nn.Sequential()

    def forward(self, x):
        out = self.conv(x)
        out += self.shortcut(x)
        return out

class ResBlockP(nn.Module):  # Residual block with inherent pooling that also doubles the filters
    def __init__(self, in_channels, out_channels, stride):
        super(ResBlockP, self).__init__()
        # residual function
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=False),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels)
        )
        # shortcut
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return nn.ReLU(inplace=False)(self.residual_function(x) + self.shortcut(x))

Figure 8. Network initialization and forward pass; the ANDHRA module is placed only at level 1:

class AB_2GR1_2H1(nn.Module):
    def __init__(self, num_classes):
        super(AB_2GR1_2H1, self).__init__()
        self.Conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=False))
        self.Res1 = ResBlock(in_planes=64)

        self.Act1 = ANDHRA()  # Proposed splitting module

        self.Conv21 = ResBlockP(in_channels=64, out_channels=128, stride=2)  # Branch 1
        self.Res21 = ResBlock(in_planes=128)

        self.Conv22 = ResBlockP(in_channels=64, out_channels=128, stride=2)  # Branch 2
        self.Res22 = ResBlock(in_planes=128)

        self.Act21 = nn.ReLU(inplace=False)
        self.Act22 = nn.ReLU(inplace=False)

        self.Conv31 = ResBlockP(in_channels=128, out_channels=256, stride=2)
        self.Res31 = ResBlock(in_planes=256)

        self.Conv32 = ResBlockP(in_channels=128, out_channels=256, stride=2)
        self.Res32 = ResBlock(in_planes=256)

        self.Act31 = nn.ReLU(inplace=False)
        self.Act32 = nn.ReLU(inplace=False)

        self.Conv41 = ResBlockP(in_channels=256, out_channels=512, stride=2)
        self.Res41 = ResBlock(in_planes=512)

        self.Conv42 = ResBlockP(in_channels=256, out_channels=512, stride=2)
        self.Res42 = ResBlock(in_planes=512)

        self.Relu = nn.ReLU(inplace=False)
        self.pool4 = nn.AvgPool2d(kernel_size=4)

        self.Linear1 = nn.Linear(512, num_classes)
        self.Linear2 = nn.Linear(512, num_classes)

    def forward(self, x):
        out = self.Res1(self.Conv1(x))

        out1, out2 = self.Act1(out)  # Splitting at level 1

        out1 = self.Res21(self.Conv21(out1))  # Branch 1
        out2 = self.Res22(self.Conv22(out2))  # Branch 2

        out1 = self.Act21(out1)
        out2 = self.Act22(out2)

        out1 = self.Res31(self.Conv31(out1))
        out2 = self.Res32(self.Conv32(out2))

        out1 = self.Act31(out1)
        out2 = self.Act32(out2)

        out1 = self.Linear1(self.pool4(self.Relu(self.Res41(self.Conv41(out1)))).view(out.size(0), -1))
        out2 = self.Linear2(self.pool4(self.Relu(self.Res42(self.Conv42(out2)))).view(out.size(0), -1))

        return out1, out2

Figure 9. Training function with combined loss and majority voting:

from scipy import stats
import torch

def train(epoch):  # Training function
    print('\nEpoch: %d' % epoch)
    net.train()
    train_loss = 0
    correct = 0
    total = 0

    # Initialize counters for individual model accuracies
    correct_individual = [0] * 2
    total_individual = 0

    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        out1, out2 = net(inputs)

        # Calculate losses for each output
        loss1 = criterion(out1, targets)
        loss2 = criterion(out2, targets)

        # Combine losses and backpropagate
        loss = 0.5 * (loss1 + loss2)
        loss.backward()
        optimizer.step()

        train_loss += loss.item()

        # Predictions and majority voting
        outputs = [out1, out2]
        individual_predictions = [output.max(1)[1] for output in outputs]

        # Majority vote prediction
        p = torch.stack(individual_predictions, dim=0).cpu().detach().numpy()
        m = stats.mode(p)
        predicted_majority = torch.from_numpy(m[0]).squeeze().cuda()

        # Update majority correct count
        total += targets.size(0)
        correct += predicted_majority.eq(targets).sum().item()

        # Update individual model correct counts
        for i, pred in enumerate(individual_predictions):
            correct_individual[i] += pred.eq(targets).sum().item()
        total_individual += targets.size(0)

| 6 | 1 | The ANDHRA Bandersnatch architecture implemented with a branching factor of 2 at three levels results in 8 heads, with a total of 15 convolutional layers based on the geometric formula presented in the paper. The model targets the CIFAR-10/100 datasets, each consisting of 60,000 images, and the training plan consists of 200 epochs with a batch size of 128, so the total number of iterations is substantial. Models similar to ResNets typically exhibit a training time of approximately 5 to 10 hours on a single GPU for similar datasets and epoch counts. Given the added complexity from the extra heads and branching, I estimate around 6 hours of training time. The memory requirements, particularly for 8 heads, can be managed with a single high-end GPU (such as an NVIDIA V100, A100, or similar). Therefore, a single-GPU setup can handle this training within the 8-hour window for the stated architecture and dataset. | yes | Yes | CV | ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities | 2024-11-28 0:00:00 | https://github.com/dvssajay/New_World | 1 | embedded inside the file to download CIFAR-10 | 200 epochs * 75 sec = 4.2 hours | https://drive.google.com/file/d/1RvV1o-KRUtLVHUwzpcTIPZYB6vpTVmHy/view?usp=sharing | Yes | Run the New_World/mainAB2GR0_10_1.py file. Each model has its own code. |
5-Datasets | CODE-CL | [] | CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.15235v2 | [
"https://github.com/mapolinario94/CODE-CL"
] | {'Average Accuracy': '93.32', 'BWT': '-0.25'} | [
"Average Accuracy",
"BWT"
] | Given the following paper and codebase:
Paper: CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning
Codebase: https://github.com/mapolinario94/CODE-CL
Improve the CODE-CL model on the 5-Datasets dataset. The result
should improve on the following metrics: {'Average Accuracy': '93.32', 'BWT': '-0.25'}. You must use only the codebase provided.
| CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning

Marco P. E. Apolinario, Sakshi Choudhary, Kaushik Roy
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906
mapolina@purdue.edu, choudh23@purdue.edu, kaushik@purdue.edu

Abstract

Continual learning (CL) – the ability to progressively acquire and integrate new concepts – is essential for intelligent systems to adapt to dynamic environments. However, deep neural networks struggle with catastrophic forgetting (CF) when learning tasks sequentially, as training for new tasks often overwrites previously learned knowledge. To address this, recent approaches constrain updates to orthogonal subspaces using gradient projection, effectively preserving important gradient directions for previous tasks. While effective in reducing forgetting, these approaches inadvertently hinder forward knowledge transfer (FWT), particularly when tasks are highly correlated. In this work, we propose Conceptor-based gradient projection for Deep Continual Learning (CODE-CL), a novel method that leverages conceptor matrix representations, a form of regularized reconstruction, to adaptively handle highly correlated tasks. CODE-CL mitigates CF by projecting gradients onto pseudo-orthogonal subspaces of previous task feature spaces while simultaneously promoting FWT. It achieves this by learning a linear combination of shared basis directions, allowing an efficient balance between stability and plasticity and transfer of knowledge between overlapping input feature representations. Extensive experiments on continual learning benchmarks validate CODE-CL's efficacy, demonstrating superior performance, reduced forgetting, and improved FWT compared to state-of-the-art methods.

1. Introduction

Humans possess the innate ability to continually acquire, retain, and update knowledge to adapt naturally to dynamically changing environments.
In contrast, while deep neural networks (DNNs) excel at leveraging massive amounts of data to generalize across various visual recognition tasks, traditional learning paradigms rely on static datasets. This misalignment with ever-evolving real-world environments underscores the necessity for these models to retain past knowledge and mitigate catastrophic forgetting, as well as utilize it to enhance learning on new tasks by encouraging knowledge transfer [6, 9, 12, 28].

To address the challenges mentioned above, extensive research has focused on enabling continual learning (CL) in DNNs. Existing techniques fall into three categories: regularization-based, expansion-based, and memory-based methods. Regularization-based methods constrain updates to important model parameters for previous tasks, preserving essential features while allowing flexibility in less critical regions of the parameter space [10, 17, 25, 26, 35]. Expansion-based methods overcome forgetting by dynamically allocating new network resources for each task [18, 21, 30, 32, 33]. Memory-based approaches, on the other hand, store representative samples for data replay or track important gradient directions from previous tasks to maintain performance on earlier data distributions [2, 3, 5, 16, 19, 23, 29, 34]. While these techniques significantly reduce catastrophic forgetting, they inherently limit the model's ability to leverage shared information across tasks. In other words, they help retain past task performance but have limited ability to utilize prior knowledge to improve learning on new tasks. Recent works have attempted to enhance knowledge transfer in continual learning scenarios by leveraging task similarities to integrate past knowledge into new learning [14, 15, 22]. However, we demonstrate that they do not fully incorporate a systematic approach, leaving room for further improvements.
In this paper, we propose Conceptor-Based Gradient Projection (CODE-CL), a novel continual learning algorithm that minimizes catastrophic forgetting while promoting forward knowledge transfer between tasks with highly correlated input activation subspaces. CODE-CL leverages conceptor matrices [8] to enforce constrained gradient updates, preventing interference with prior tasks. More precisely, conceptor matrices provide a mathematical framework for computing the basis vectors of each layer's input activation/feature space, which, in turn, identify the key gradient directions necessary to retain knowledge from past tasks [36]. We also compute similarities between previously acquired knowledge and the new incoming task to optimize forward transfer (FWT). By encoding past knowledge into conceptor matrices, CODE-CL enables a structured exploration of the input activation space, allowing learning in previously restricted regions while preserving critical directions from earlier tasks. We give an overview of our approach in Fig. 1. At the end of each task, CODE-CL computes a set of basis vectors, U, that span the input feature space and, consequently, the gradient space for each layer using the conceptor matrix C [22, 23, 36]. For a new task, gradient updates are projected in these directions U, scaled according to their importance for previous tasks with a regularization factor α. While this constrained optimization effectively mitigates catastrophic forgetting, it does not explicitly promote forward knowledge transfer. To address this, we compute the intersection between the aggregated conceptor matrix and the pre-conceptor matrix of the current task, identifying common update directions U* and encouraging learning along these directions through M, as shown in Fig. 1.

arXiv:2411.15235v2 [cs.LG] 7 Mar 2025

Figure 1. Overview of CODE-CL. (1) Before learning task t, the importance of input activation space directions for previous tasks is captured in the singular values S^{t−1}_i (blue bars) of the conceptor matrix C^{t−1}. We first identify U*, the important directions for both previous tasks and the current task t. If such shared directions exist, we define W_eff by projecting weights onto a linear combination of these common directions: W_eff = W + W U* M U*⊤. (2) During the learning phase, CODE-CL promotes forward knowledge transfer (FWT) by learning an optimal linear combination of the directions (M), while preventing forgetting by projecting gradients onto I − C^{t−1} (purple region). This ensures that the updates do not interfere with previously acquired knowledge. (3) After learning task t, the updated importance of each direction (S^t) is computed by obtaining a new conceptor matrix: C^t = C^{t−1} ∨ C*, where C* is the conceptor matrix for task t. Since a conceptor matrix can be interpreted as an ellipsoid in space, where its singular vectors (U) define the main axes and its singular values (S) determine their lengths, the operation ∨ corresponds to computing the minimal enclosing ellipsoid that encapsulates both conceptors. In this manner, CODE-CL enables efficient continual learning by balancing knowledge retention and adaptation to new tasks.

Our contributions can be summarized as follows:
• We introduce CODE-CL, a novel continual learning algorithm that leverages conceptor matrices [8] to mitigate catastrophic forgetting while effectively leveraging past learning to promote forward transfer.
• We evaluate the effectiveness of the proposed method through extensive experiments on standard CL vision benchmarks across various model architectures. Compared to state-of-the-art approaches, CODE-CL achieves about 1.15% better final accuracy, minimal forgetting, and up to 1.18% improved relative FWT.

2. Background

In this section, we outline the essential properties of conceptor matrices and provide an overview of related works in continual learning.

2.1. Conceptor Matrices

Conceptor matrices constitute a mathematical framework inspired by neuroscience to encode and control the dynamics of recurrent neural networks [8]. Given a batch of feature vectors X ∈ R^{b×n}, where b is the batch size and n is the dimension of the feature vector space, a conceptor matrix C(X, α) is defined as the solution to the following minimization problem:

C(X, α) = argmin_C (1/b) ‖X − XC‖²_F + α⁻² ‖C‖²_F    (1)

Here, α ∈ (0, ∞) is called the aperture and serves as a regularization factor. This optimization problem has the following closed-form solution [8]:

C(X, α) = (X⊤X/b) (X⊤X/b + α⁻²I)⁻¹    (2)

Therefore, given the singular value decomposition (SVD) of the matrix X⊤ = UΣV⊤, the conceptor matrix can be expressed as C = USU⊤ = UΣ²(Σ² + bα⁻²I)⁻¹U⊤. Note that the singular values of C lie between 0 and 1 (0 < S_{i,i} < 1 for all i ∈ {1, . . . , n}), representing the importance of the directions U_{:,i}. In this way, C acts as a soft projection matrix onto the linear subspace of the feature vectors of X.
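To make the closed-form solution (2) concrete, the conceptor of a feature batch can be computed in a few lines of NumPy. This is a minimal sketch: the batch size, dimension, aperture value, and the helper name `conceptor` are illustrative assumptions, not taken from the CODE-CL codebase.

```python
import numpy as np

def conceptor(X, alpha):
    # Eq. (2): C = R (R + alpha^-2 I)^-1, with R = X^T X / b the batch correlation.
    b, n = X.shape
    R = X.T @ X / b
    return R @ np.linalg.inv(R + alpha**-2 * np.eye(n))

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))   # b = 64 feature vectors of dimension n = 8
C = conceptor(X, alpha=10.0)

# C is symmetric with singular values strictly inside (0, 1): a soft projector.
s = np.linalg.svd(C, compute_uv=False)
assert np.allclose(C, C.T) and np.all((s > 0) & (s < 1))
```

Diagonalizing C recovers the SVD view above: each singular value equals σ_i²/(σ_i² + b α⁻²), where σ_i is a singular value of X, so a larger aperture α pushes the important directions closer to 1.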
We can measure the capacity, or memory usage, of a con- ceptor matrix based on the mean value of its singular values: Θ(C) =1 nnX i=0Si,i (6) A capacity of 0would indicate that the conceptor is empty and can be represented as a null matrix, while a capacity of 1would indicate that the conceptor memory is full, essen- tially becoming an identity matrix. 2.2. Related Work Continual learning (CL) techniques can be broadly clas- sified into expansion-based, regularization-based, and memory-based approaches [14, 23, 28]. Regularization-based methods mitigate forgetting by pe- nalizing changes to important model parameters [10, 17, 20, 24, 25, 35]. While effective at preserving knowledge, these methods often rely on complex heuristics to determine pa- rameter importance or require storing multiple model ver- sions, leading to significant memory overhead. Expansion-based methods address catastrophic forget- ting by dynamically increasing the model’s capacity as new tasks arrive [18, 21, 30, 32]. Although these approaches successfully isolate task representations to prevent inter- ference, they result in substantial network growth, making them impractical for resource-constrained environments. Memory-based methods mitigate forgetting by explic- itly retaining information from previous tasks, either in the form of stored samples [2, 19] or gradient-related informa- tion [3, 16]. Within this category, orthogonal gradient pro- jection methods [7, 23, 34] aim to prevent interference be- tween tasks by ensuring that the gradients are orthogonal to important directions for previous tasks. One such ap- proach is Gradient Projection Memory (GPM) [23], whichleverages the fact that gradients lie in the span of input ac- tivations [36]. Consequently, GPM utilizes singular value decomposition (SVD) on the input activations of each layer to compute and store the most important directions for each task. 
While this prevents forgetting, it also limits forward knowledge transfer (FWT) by keeping the shared directions between old and new tasks frozen, reducing adaptability and degrading accuracy. Trust Region Gradient Projection (TRGP) [14] addresses this by selectively allowing weight updates in a "trusted region". Specifically, TRGP computes the projection of new task gradients onto the important directions of previous tasks and identifies the top-k tasks with the highest projections. Weight updates are then allowed along the directions associated with these k tasks. However, this approach still lacks fine-grained adaptability, as it considers entire task subspaces rather than individual relevant directions. Building on TRGP, Continual Learning with Backward Knowledge Transfer (CUBER) [15] introduces positive backward transfer (BWT) by maintaining per-task gradient information. If a new task's gradients exhibit a positive correlation with those of the previous tasks, CUBER relaxes the orthogonality constraint and introduces a regularization term to the loss function to align updates along these correlated directions. While this promotes positive BWT, it significantly increases memory complexity due to the need to store per-task gradients. Scaled Gradient Projection (SGP) [22] takes a different approach by relaxing the strict orthogonality constraint of GPM. Instead of enforcing full orthogonality, SGP scales the importance of stored task directions, leading to better forward knowledge transfer (FWT) and higher average accuracy. However, SGP applies uniform scaling across all tasks, missing opportunities to adaptively exploit task similarities.
Other notable meth- ods include Adaptive Plasticity Improvement (API), which combines GPM’s gradient constraints with dynamic model expansion when plasticity is insufficient, and Space Decou- pling (SD) [37], which scales gradient projections based on task correlation, allowing for a more flexible gradient up- date strategy compared to TRGP and GPM. Taking a dif- ferent perspective, Data Augmented Flatness-aware Gradi- ent Projection (DFGP) [31] extends GPM by optimizing the loss as well as loss curvature from the perspective of both data and weights. This improves the generalization abil- ity for new tasks and reduces catastrophic forgetting for the past tasks. However, DFGP does not explicitly leverage task similarities to facilitate knowledge transfer. In contrast to the aforementioned works, CODE-CL con- strains gradients to pseudo-orthogonal directions through a regularized reconstruction framework based on concep- tor matrices, as shown in (2). Additionally, we perform a fine-grained analysis to identify the common important di- rections between tasks. Our proposed approach not only 3 mitigates catastrophic forgetting but also enhances FWT by intelligently reusing prior knowledge. 3. Methodology 3.1. Problem Formulation This work optimizes a DNN model to learn from tem- porally evolving data. We consider a supervised contin- ual learning setting where Ttasks are learned sequen- tially, with each task having sufficient labeled samples. We explore task-incremental learning scenarios in this super- vised setting [28]. Each task is identified by t∈T= {1,2, . . . , T }, and its associated dataset is represented as Dt={(xt i, yt i)nt i=1}, where ntis the number of samples, xt iis the input sample, and yt iis the corresponding label. Using these datasets, we train a neural network with param- etersWt={(W(l),t)L l=1}, where Lrepresents the number of layers of the model. 
The objective is to learn parameters Wtsuch that the model performs effectively across all T tasks, while mitigating catastrophic forgetting and leverag- ing task similarities for efficient knowledge transfer. 3.2. Approach We demonstrate the flow of our proposed approach, CODE- CL in Algorithm 1. For the first task ( t= 1), learning proceeds with random weight initialization and the model is trained on dataset D1by minimizing the loss function L(W;D1). Optimization is performed using minibatch stochastic gradient descent (SGD) without constraints. Af- ter training for Eepochs, we compute a conceptor matrix C1to encode the input subspace of each layer (lines 21-23, Algorithm 1). Specifically, we randomly sample binputs fromD1and perform a forward pass through the model to form X1= [x1⊤ 1,x1⊤ 2, . . . ,x1⊤ b]for each layer l, i.e. X1={(X(l),1)L l=1}. Based on (2), we compute the con- ceptor C1=C(X1, α). 3.2.1. Task Overlap Analysis Before training for the task t, we analyze the overlap be- tween its input space and that of previous tasks, represented byCt−1. To do this, we forward propagate a set of inputs Xtthrough the model, obtain layer-wise input activations Xtand compute the pre-conceptor matrix Ct,prethrough equation (2) (lines 3-4, Algorithm 1). The overlap of the input space between previous tasks and the current task t is represented by the intersection Ct,and=Ct,pre∧Ct−1, based on (4). If many directions for the current task are en- coded in Ct−1, tasks are highly correlated (or similar). Task correlation is measured by the capacity ratio between con- ceptor matrices (6), defining high (low) correlation when the ratio surpasses (falls below) a threshold ϵ. Case 1 (Θ(Ct,and) Θ(Ct−1)> ϵ): In this high correlation scenario, the directions encoded in Ct−1are important for task t.Algorithm 1 CODE-CL Input: Dt={(xt i, yt i)nt i=1},W={(W(l))L l=1}, aperture α, threshold ϵ, learning rate η, total training epochs E, number of free dimensions K. 
procedure TRAIN( )
1.  for t = 1, 2, 3, . . . , T
2.    if t > 1 then
3.      X^t ← forward(W^{t−1,eff}, d_t) for d_t ∼ D_t
4.      C^{t,pre} ← CONCEPTOR(X^t, α)
5.      C^{t,and} ← C^{t,pre} ∧ C^{t−1}   ▷ Equation (4)
6.      if Θ(C^{t,and}) / Θ(C^{t−1}) > ϵ then   ▷ for each layer l ∈ L
7.        U^t ← SVD(C^{t,and})
8.        W^{t,eff} ← W(I + U^t_{:,1:K} M^t U^{t⊤}_{:,1:K})
10.     else
11.       W^{t,eff} ← W
12.     end
13.   end
14.   W^{t,eff} ← {(W^{t,eff})_{l=1}^{L}}
15.   for e = 1, 2, 3, . . . , E
16.     ∇_W L, ∇_{M^t} L ← SGD(W^{t,eff}, d_t) for d_t ∼ D_t
17.     ∇_W L ← ∇_W L − ∇_W L C^{t−1}   ▷ for each layer l
18.     W ← W − η ∇_W L
19.     M^t ← M^t − η ∇_{M^t} L
20.   end
21.   X^t ← forward(W, d_t) for d_t ∼ D_t
22.   C^{t,post} ← CONCEPTOR(X^t, α)
23.   if t = 1 then
24.     C^t ← C^{t,post}
25.   else
26.     C^t ← C^{t,post} ∨ C^{t−1}   ▷ Equation (5)
27.   end

Hence, the model is allowed to learn in the top K directions of C^{t,and} without negatively impacting prior tasks. To achieve this, the weights are projected onto the subspace defined by these directions as follows:

W^{t,eff} = W + W U^{t,and}_{:,1:K} M^t U^{t,and⊤}_{:,1:K},   (7)

where U^{t,and}_{:,1:K} are the top-K singular vectors of C^{t,and}, and M ∈ R^{K×K} is a task-specific learnable matrix which defines the extent of learning in these directions. This formulation explicitly allows us to utilize past knowledge to improve the performance of the current task, thereby improving forward knowledge transfer.

Case 2 (Θ(C^{t,and}) / Θ(C^{t−1}) ≤ ϵ): In this case, the task overlap is minimal, leaving little possibility of forward transfer. Thus, the effective weights remain W^{t,eff} = W.

3.2.2. Constrained Gradient Updates

While learning task t, the model is trained on D_t to minimize the following loss function:

W^t, M^t := arg min_{W,M} L(W^{t,eff}; D_t)   s.t.   ∇_W L = ∇_W L (I − C^{t−1}),   (8)

where the gradients are constrained to lie in the pseudo-orthogonal subspace of the conceptor matrix, defined by ¬C^{t−1} (3), where C^{t−1} contains important directions for previous tasks, scaled by aperture α, as shown in (2).

3.2.3. Post Training Conceptor Update

After training for task t, we merge the current and past task knowledge into a new conceptor matrix C^t for each layer (line 26, Algorithm 1).
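Because equation (2) is not reproduced in this excerpt, the sketch below assumes Jaeger's standard conceptor definition C(X, α) = R(R + α⁻²I)⁻¹ with R the input correlation matrix; the function names, array shapes, and the dense-numpy formulation are illustrative assumptions rather than code from the official repository. It mirrors the two core operations of Algorithm 1: the CONCEPTOR call (lines 4 and 22) and the pseudo-orthogonal gradient projection (line 17, i.e. the constraint in (8)).

```python
import numpy as np

def conceptor(X, alpha):
    """Conceptor of layer activations X (shape (b, N), one sample per row):
    C = R (R + alpha^-2 I)^-1, with correlation matrix R = X^T X / b.
    Its singular values sigma_i / (sigma_i + alpha^-2) lie in [0, 1),
    softly encoding the directions occupied by the layer's inputs."""
    b, N = X.shape
    R = X.T @ X / b
    return R @ np.linalg.inv(R + alpha ** -2 * np.eye(N))

def project_gradient(grad, C_prev):
    """Line 17 of Algorithm 1 / constraint in (8): restrict a layer's
    gradient to the pseudo-orthogonal subspace of previous tasks,
    grad <- grad - grad C^{t-1} = grad (I - C^{t-1})."""
    return grad - grad @ C_prev
```

In the limit α → ∞ the conceptor approaches the identity and the projected gradient vanishes (no forgetting, no plasticity); small α frees more directions, which matches the aperture trade-off discussed in the ablation study.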
This is achieved by first computing the post-training conceptor matrix C^{t,post}, as shown in lines 21-22 of Algorithm 1. We then merge C^{t,post} and C^{t−1} into a new conceptor matrix, consolidating the important directions for all learned tasks based on (5).

4. Experiments

In this section, we first provide details regarding our experimental setup, and then show the efficacy of CODE-CL through extensive experiments across various continual learning benchmarks.

4.1. Experimental Setup

In this subsection, we outline the benchmarks, network architectures, training hyperparameters, and performance metrics used to evaluate and compare our method with state-of-the-art CL techniques.

4.1.1. Benchmarks and Models

We evaluate our method on widely used continual learning (CL) benchmarks, including Split CIFAR100 [11], Split miniImageNet [27], and 5-Datasets [4]. For Split CIFAR100, the original CIFAR100 dataset is divided into T groups, each containing an equal number of classes (100/T). In our experiments, we split the dataset into 10 groups, with each group representing a separate task, and train a 5-layer AlexNet model in a multi-head setting, where each head is associated with one unique task [14, 22, 23]. Similarly, the Split miniImageNet benchmark consists of a subset of 100 classes from the ImageNet dataset, divided into 20 groups. The 5-Datasets benchmark involves training a model sequentially on five different datasets: CIFAR10, MNIST, SVHN, notMNIST, and Fashion MNIST. For both Split miniImageNet and 5-Datasets, we use a reduced ResNet18 model in a multi-head setting [14, 22, 23]. To ensure a fair comparison with prior works, we refrain from using data augmentation in our experiments. The dataloaders for Split CIFAR100 and 5-Datasets are obtained from GPM [23], while the one for Split miniImageNet was provided by the Avalanche library [1]. 4.1.2.
Training Details

For all our experiments, we use stochastic gradient descent (SGD) with a learning rate scheduler and early stopping criteria [23]. Each task in Split CIFAR100 is trained for a maximum of 200 epochs with a batch size of 64 and aperture α = 6. Similarly, each task in Split miniImageNet and 5-Datasets is trained for a maximum of 100 epochs with a batch size of 64, and with α = 16 and α = 8, respectively. For all our experiments as shown in Table 1, we use K = 80. For more details on our implementation, please refer to Section C in the Supplementary Material.

4.1.3. Performance Metrics

Similar to previous works [14, 16, 22, 23], we use three metrics to evaluate the performance of our method: the average final accuracy over all tasks, Accuracy (ACC); Backward Transfer (BWT), which measures the forgetting of old tasks when learning new tasks; and relative Forward Transfer (FWT), which measures the beneficial effects of learning the previous tasks for learning a new one. ACC and BWT are defined as:

ACC = (1/T) Σ_{i=1}^{T} A_{T,i};   BWT = (1/(T−1)) Σ_{i=1}^{T−1} (A_{T,i} − A_{i,i}),   (9)

where T is the number of tasks and A_{j,i} is the accuracy of the model on the i-th task after learning the j-th task sequentially (i ≤ j). Similarly, FWT is defined as:

FWT = (1/T) Σ_{i=1}^{T} (A_{i,i} − B_{i,i}),   (10)

where B_{i,i} is the accuracy of a baseline method used for training the same model on the i-th task. In our experiments, we used GPM [23] as the baseline to compare other methods.

4.2. Results

Here, we present the performance of CODE-CL in comparison with prior approaches, along with a detailed analysis of its memory complexity. Additionally, we conduct ablation studies to assess the impact of varying the number of free dimensions, K, on the method's performance.

4.2.1. Performance Comparison

As shown in Table 1, our method achieves high accuracy with minimal forgetting across all benchmarks. Specifically, CODE-CL consistently delivers competitive results, outperforming previous methods on all three datasets.
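As a concrete reading of equations (9) and (10), the helper below computes ACC, BWT, and relative FWT from a lower-triangular accuracy matrix. The function name and the array layout (row j holds the accuracies measured after learning task j) are illustrative assumptions, not part of the released code.

```python
import numpy as np

def cl_metrics(A, B_diag):
    """ACC, BWT, and relative FWT per equations (9)-(10).

    A is a (T, T) matrix where A[j, i] is accuracy on task i after
    learning task j (valid for i <= j); B_diag[i] is the baseline's
    (e.g. GPM's) accuracy on task i, used for relative FWT."""
    acc = A[-1, :].mean()                          # (9), left
    bwt = (A[-1, :-1] - np.diag(A)[:-1]).mean()    # (9), right
    fwt = (np.diag(A) - B_diag).mean()             # (10)
    return acc, bwt, fwt
```

For example, with T = 2, final accuracies [70, 90], just-learned accuracies [80, 90], and baseline [75, 85], this yields ACC = 80, BWT = −10, FWT = +5.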
In terms of accuracy, on Split CIFAR100, CODE-CL achieves 77.21%, coming close to the upper bound set by Multitask Learning (79.58%), which serves as an ideal but unrealistic comparison point.

Table 1. Performance comparison on continual image classification datasets using multi-head networks. Accuracy and BWT (mean±std) are reported over five trials. Best results are in bold and second best are underlined. † denotes results taken from [23] and ‡ denotes results from the respective original papers. All other results are reproduced based on their official open-source implementations.

| Method | Split CIFAR100 ACC (%) | Split CIFAR100 BWT (%) | Split MiniImageNet ACC (%) | Split MiniImageNet BWT (%) | 5-Datasets ACC (%) | 5-Datasets BWT (%) |
| Multitask† | 79.58±0.54 | − | 69.46±0.62 | − | 91.54±0.28 | − |
| OWM [34]† | 50.94±0.60 | -30±1 | − | − | − | − |
| EWC [10]† | 68.80±0.88 | -2±1 | 52.01±2.53 | -12±3 | 88.64±0.26 | -4±1 |
| HAT [25]† | 72.06±0.50 | 0±0 | 59.78±0.57 | -3±0 | 91.32±0.18 | -1±0 |
| A-GEM [3]† | 63.98±1.22 | -15±2 | 57.24±0.72 | -12±1 | 84.04±0.33 | -12±1 |
| ER Res [2]† | 71.73±0.63 | -6±1 | 58.94±0.85 | -7±1 | 80.31±0.22 | -4±0 |
| API [13]‡ | − | − | 65.9±0.6 | -0.3±0.2 | 91.1±0.3 | -0.5±0.1 |
| DFGP [31]‡ | 74.59±0.33 | -0.9 | 69.92±0.9 | -1 | 92.09±0.18 | -1 |
| TRGP+SD [37]‡ | 75.50±0.35 | -2.88±0.89 | 65.8±0.16 | -0.49±0.08 | − | − |
| GPM [23] | 72.06±0.29 | -0.2±0.19 | 66.26±1.18 | -0.9±1.34 | 90.70±0.45 | -1.0±0.16 |
| TRGP [14] | 75.24±0.29 | -0.1±0.18 | 65.08±0.94 | -0.5±0.74 | 92.81±0.54 | -0.1±0.03 |
| CUBER [15] | 75.30±0.43 | 0.1±0.11 | 64.25±0.75 | -0.7±0.48 | 92.77±0.60 | -0.03±0.02 |
| SGP [22] | 75.69±0.38 | -1.4±0.17 | 68.50±2.09 | -2.0±2.10 | 90.42±0.66 | -1.61±0.31 |
| CODE-CL (Ours) | 77.21±0.32 | -1.1±0.28 | 71.16±0.32 | -1.1±0.3 | 93.51±0.13 | -0.11±0.01 |

Table 2. Comparison of relative FWT with respect to GPM [23]. Values (mean±std) are reported over five trials. Best results are in bold and second best are underlined.

| Method | S-CIFAR100 FWT (%) | S-MiniImageNet FWT (%) | 5-Datasets FWT (%) |
| TRGP [14] | 2.86±0.26 | -1.56±0.67 | 1.16±0.52 |
| CUBER [15] | 2.86±0.49 | -2.22±0.70 | 1.10±0.60 |
| SGP [22] | 4.74±0.37 | 3.37±0.88 | 0.33±0.37 |
| CODE-CL | 5.92±0.34 | 4.17±0.41 | 1.82±0.12 |
Notably, CODE-CL outperforms other state-of-the-art continual learning methods, achieving higher accuracy than all previous approaches, including API, DFGP, TRGP+SD, GPM, TRGP, CUBER, and SGP. Similarly, on Split MiniImageNet and 5-Datasets, CODE-CL once again performs exceptionally well, surpassing all other previous methods. In both cases, it even exceeds the accuracy of the Multitask Learning baseline, illustrating the beneficial effect of forward knowledge transfer when learning tasks sequentially. This further underscores CODE-CL's robustness, particularly on more challenging datasets, where competing methods tend to suffer significant performance drops.

Fig. 2 presents the model's accuracy for each task immediately after learning it (A_{i,i}) and after sequentially learning all tasks (A_{T,i}) in the Split CIFAR100 dataset. The difference between these two measures quantifies the extent of forgetting. As shown, CODE-CL achieves superior A_{i,i} and A_{T,i} compared to other methods across all tasks. This advantage arises mainly because, unlike methods such as GPM, TRGP, or CUBER, CODE-CL incorporates pseudo-orthogonal gradient projections. Additionally, in contrast to SGP, our method enables the selective release of important shared directions, further enhancing forward transfer.

Figure 2. Test accuracy of each task on the Split CIFAR100 benchmark: (left) immediately after learning the task, A_{i,i}; (right) after learning all tasks, A_{T,i}. Here, it can be seen that our method outperforms previous methods for all tasks.

To quantify this, we measure the relative FWT of key representative methods (TRGP, CUBER, and SGP) and compare them against CODE-CL using GPM as a reference. The results, presented in Table 2, demonstrate that CODE-CL consistently achieves better FWT. This can be attributed to its relaxation of gradient projections into pseudo-orthogonal spaces, unlike TRGP or CUBER, and

Table 3. Memory complexity comparison among methods.
The analysis is done for a single fully-connected layer with N inputs and M outputs, after being trained on T tasks. Also, B is the average number of important directions per task used in [14], and K is the number of free dimensions used in CODE-CL.

| Method | Memory Complexity |
| GPM | O(N^2) |
| TRGP | O(N^2 + TNB + TB^2) |
| CUBER | O(N^2 + TN^2 + TNB + TB^2) |
| SGP | O(N^2) |
| CODE-CL (Ours) | O(N^2 + TNK + TK^2) |

its fine-grained selection of the most important shared directions among tasks, unlike the other methods.

In terms of BWT, our results further illustrate CODE-CL's effectiveness in mitigating catastrophic forgetting. On Split CIFAR100, CODE-CL records a BWT of -1.1%, indicating minimal performance loss on previously learned tasks, comparable to prior works. Similarly, on Split MiniImageNet, CODE-CL achieves a BWT of -1.1%, aligning with state-of-the-art methods and demonstrating its ability to retain learned knowledge with minimal degradation. Finally, on the 5-Datasets benchmark, CODE-CL reports a BWT of -0.11%, performing similarly to TRGP.

In summary, the high accuracy, low forgetting, and improved FWT of CODE-CL highlight its ability to effectively balance the trade-off between plasticity and stability, maintaining strong performance across a range of continual learning tasks while minimizing forgetting.

4.2.2. Memory Complexity

We analyze the memory complexity of our proposed approach and compare it with the state-of-the-art techniques GPM, SGP, TRGP, and CUBER. For simplicity, we analyze a single fully connected layer with N inputs and M outputs after training on T tasks. CODE-CL's memory complexity is primarily influenced by conceptor matrices of size N^2, which encode input vector space information. Additionally, as discussed in Section 3, CODE-CL allocates a fixed number of free dimensions per task (K) to learn an optimal linear combination of the K most important directions within the subspace formed by the intersection of past and new task conceptors.
This introduces an additional memory requirement of TNK + TK^2, where TNK accounts for the storage of K key directions per task of dimension N, and TK^2 accounts for the learnable square matrices M^t. Consequently, the total memory complexity of CODE-CL is O(N^2 + TNK + TK^2). For GPM and SGP, memory usage is determined solely by the input dimension N, leading to O(N^2) complexity. TRGP shares this base complexity but also stores important directions per task and trusted-region projection subspaces, incurring an additional cost of O(TNB + TB^2), where B is the number of important directions per task. Similarly, CUBER requires O(N^2 + TN^2 + TNB + TB^2), with the additional TN^2 term arising from additional gradient storage needs.

Table 3 summarizes the memory complexity of each method. In particular, as model size (i.e., N) grows, CODE-CL maintains a fixed and significantly smaller number of free dimensions (K ≪ N), making its memory requirements comparable to GPM, SGP, and TRGP, while being significantly lower than CUBER. GPU memory usage measurements on Split CIFAR100 (Fig. 3) confirm CODE-CL's efficiency, requiring half the memory of TRGP and CUBER. Additionally, CODE-CL achieves a slightly shorter execution time than TRGP and is approximately 3× faster than CUBER. While it introduces some overhead compared to GPM and SGP, this trade-off is justified by its superior performance.

Figure 3. Execution time (left) and memory (right) comparison on the Split CIFAR100 benchmark. Lower means better. Here, CODE-CL represents a more efficient method than techniques such as TRGP or CUBER.

4.2.3. Ablation Study

In this section, we examine the impact of the number of free dimensions (K) and the aperture parameter (α) on performance. Note that we modify only one parameter at a time, keeping all other training hyperparameters fixed.

Effect of α on performance: As shown in Fig. 4, α directly influences the model's forgetting rate.
Specifically, higher values of α bring BWT closer to zero, meaning the model forgets less. This behavior aligns with the definition of conceptor matrices, as α scales the singular values of the data. When α → ∞, the conceptor matrices approximate the identity matrix, preventing forgetting entirely. However, this also means that the model loses plasticity and is unable to integrate new information. This trade-off is evident in Fig. 4: for Split CIFAR100, peak performance is achieved at α = 6, whereas for Split MiniImageNet, the optimal value is α = 16.

Figure 4. Effect of the aperture (α) parameter on ACC and BWT for the Split CIFAR-100 and Split miniImageNet benchmarks: (a) Split CIFAR100, (b) Split MiniImageNet. In both cases, results show that the greater the α (↑) parameter, the lower the BWT (↓), meaning the model forgets less.

Figure 5. Effect of the number of free dimensions (K) on the final accuracy and BWT for the Split CIFAR-100 and Split miniImageNet benchmarks: (a) Split CIFAR100, (b) Split MiniImageNet. In both cases, results show that for K > 20, the greater the K (↑), the greater the ACC (↑), while BWT does not change significantly with K.

Effect of K on performance: The results for both benchmarks, presented in Fig. 5, indicate that increasing K generally improves accuracy while maintaining low BWT. When K > 20, higher values of K lead to greater ACC, suggesting that increasing K enhances overall performance by facilitating forward knowledge transfer from previous tasks to new ones. However, its impact on BWT reduction remains minimal. While increasing K may seem advantageous, it comes with additional memory overhead, making it crucial to balance performance gains with memory efficiency.

4.2.4. Comparison on Tasks with Overlapping Classes

Most of the benchmarks used in this study consist of tasks with non-overlapping classes, although they share similarities in the feature space, as reflected in neuronal activity representations.
While these benchmarks effectively demonstrate CODE-CL's ability to identify the most relevant directions in overlapping feature spaces, evaluating our method on a benchmark with overlapping classes can further highlight its advantages. To this end, we adopted the OL-CIFAR100 benchmark [15], where the first 50 classes of CIFAR100 are split into seven tasks. Specifically, Tasks 0–6 contain the following class distributions: 0–9, 5–14, 10–19, 20–29, 25–34, 30–39, and 40–49, respectively. The results of this evaluation are summarized in Table 4. Here, CODE-CL outperforms previous methods in terms of ACC, demonstrating the benefits of our approach in scenarios with class overlap. Additionally, we compute the relative FWT with respect to GPM. The superior FWT of CODE-CL underscores the effectiveness of our fine-grained selection of important directions within overlapping input feature subspaces. This, combined with pseudo-orthogonal gradient updates, leads to more efficient forward transfer learning compared to methods like TRGP or CUBER, which rely on full task directions, or SGP, which only considers pseudo-orthogonal gradient updates.

Table 4. Comparison of method performance on OL-CIFAR100. Values (mean±std) are reported over five trials. Best results are in bold and second best are underlined.

| Method | ACC (%) | BWT (%) | FWT (%) |
| GPM [23] | 71.62±0.45 | -0.34±0.15 | 0 |
| TRGP [14] | 74.77±0.43 | -0.06±0.10 | 2.73±0.34 |
| CUBER [15] | 75.01±0.23 | -0.01±0.26 | 3.02±0.20 |
| SGP [22] | 75.00±0.68 | -1.75±0.59 | 4.79±0.42 |
| CODE-CL | 76.89±0.42 | -1.01±0.18 | 6.02±0.36 |

5. Conclusion

We introduce CODE-CL, a novel continual learning algorithm that leverages conceptor matrices to mitigate catastrophic forgetting while enhancing forward transfer. CODE-CL achieves this by projecting gradients onto pseudo-orthogonal subspaces of previous task feature spaces and learning a linear combination of shared basis directions.
This approach effectively balances sta- bility and plasticity, allowing efficient knowledge trans- fer across overlapping feature representations. Extensive experiments on standard continual learning benchmarks demonstrate CODE-CL’s effectiveness, achieving superior accuracy, minimal forgetting, and improved forward trans- fer compared to state-of-the-art methods. 8 Acknowledgments This work was supported in part by the Center for Co-design of Cognitive Systems (CoCoSys), one of the seven cen- ters in JUMP 2.0, a Semiconductor Research Corporation (SRC) program, and in part by the Department of Energy (DoE). References [1] Antonio Carta, Lorenzo Pellegrini, Andrea Cossu, Hamed Hemati, and Vincenzo Lomonaco. Avalanche: A PyTorch Library for Deep Continual Learning. Journal of Machine Learning Research , 24(363):1–6, 2023. 5 [2] Arslan Chaudhry, Marcus Rohrbach Facebook, A I Re- search, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip H S Torr, and Marc ’ Aurelio Ran- zato. On Tiny Episodic Memories in Continual Learning. arXiv:1902.10486 , 2019. 1, 3, 6 [3] Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient Lifelong Learning with A-GEM. International Conference on Learning Representa- tions , 2019. 1, 3, 6 [4] Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, and Marcus Rohrbach. Adversarial Continual Learn- ing. In Computer Vision – ECCV 2020: 16th European Con- ference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI , pages 386–402, Berlin, Heidelberg, 2020. Springer- Verlag. 5 [5] Mehrdad Farajtabar, Navid Azizan, Alex Mott, Ang Li, Deepmind Caltech, and Deepmind Deepmind. Orthogonal Gradient Descent for Continual Learning. In Proceedings of the Twenty Third International Conference on Artificial In- telligence and Statistics , pages 3762–3773. PMLR, 2020. 1 [6] Raia Hadsell, Dushyant Rao, Andrei A. Rusu, and Razvan Pascanu. 
Embracing Change: Continual Learning in Deep Neural Networks. Trends in Cognitive Sciences , 24(12): 1028–1040, 2020. 1 [7] Xu He and H. Jaeger. Overcoming Catastrophic Interfer- ence using Conceptor-Aided Backpropagation. International Conference on Learning Representations , 2018. 3 [8] Herbert Jaeger. Controlling Recurrent Neural Networks by Conceptors. arXiv:1403.3369 , 2014. 1, 2, 3 [9] Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, and Lei Shu. Achieving forgetting prevention and knowledge transfer in continual learning. In Proceedings of the 35th International Conference on Neural Information Processing Systems , Red Hook, NY , USA, 2021. Curran Associates Inc. 1 [10] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska- Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Ku- maran, and Raia Hadsell. Overcoming catastrophic for- getting in neural networks. Proceedings of the National Academy of Sciences of the United States of America , 114 (13):3521–3526, 2017. 1, 3, 6 [11] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009. 5[12] Dhireesha Kudithipudi, Mario Aguilar-Simon, Jonathan Babb, Maxim Bazhenov, Douglas Blackiston, Josh Bon- gard, Andrew P. Brna, Suraj Chakravarthi Raja, Nick Ch- eney, Jeff Clune, Anurag Daram, Stefano Fusi, Peter Helfer, Leslie Kay, Nicholas Ketz, Zsolt Kira, Soheil Kolouri, Jef- frey L. Krichmar, Sam Kriegman, Michael Levin, Sandeep Madireddy, Santosh Manicka, Ali Marjaninejad, Bruce Mc- Naughton, Risto Miikkulainen, Zaneta Navratilova, Tej Pan- dit, Alice Parker, Praveen K. Pilly, Sebastian Risi, Terrence J. Sejnowski, Andrea Soltoggio, Nicholas Soures, Andreas S. Tolias, Dar ´ıo Urbina-Mel ´endez, Francisco J. Valero-Cuevas, Gido M. van de Ven, Joshua T. V ogelstein, Felix Wang, Ron Weiss, Angel Yanguas-Gil, Xinyun Zou, and Hava Siegel- mann. Biological underpinnings for lifelong learning ma- chines. 
Nature Machine Intelligence 2022 4:3 , 4(3):196– 210, 2022. 1 [13] Yan-Shuo Liang and Wu-Jun Li. Adaptive plasticity im- provement for continual learning. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 7816–7825, 2023. 6 [14] Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. TRGP: Trust Region Gradient Projection for Continual Learning. In- ternational Conference on Learning Representations , 2022. 1, 3, 5, 6, 7, 8, 2 [15] Sen Lin, Li Yang, Deliang Fan, and Junshan Zhang. Beyond not-forgetting: continual learning with backward knowledge transfer. In Proceedings of the 36th International Conference on Neural Information Processing Systems , Red Hook, NY , USA, 2022. Curran Associates Inc. 1, 3, 6, 8 [16] David Lopez-Paz and Marc ’ Aurelio Ranzato. Gradient Episodic Memory for Continual Learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems , 2017. 1, 3, 5, 2 [17] Arun Mallya and Svetlana Lazebnik. PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning. 2018 IEEE/CVF Conference on Computer Vision and Pat- tern Recognition , pages 7765–7773, 2017. 1, 3 [18] Qi Qin, Wenpeng Hu, Han Peng, Dongyan Zhao, and Bing Liu. BNS: Building Network Structures Dynamically for Continual Learning. Advances in Neural Information Pro- cessing Systems , 34:20608–20620, 2021. 1, 3 [19] Sylvestre Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. iCaRL: Incremental Clas- sifier and Representation Learning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2017- January:5533–5542, 2017. 1, 3 [20] H. Ritter, Aleksandar Botev, and D. Barber. Online Struc- tured Laplace Approximations For Overcoming Catastrophic Forgetting. Neural Information Processing Systems , 2018. 3 [21] Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Raz- van Pascanu, and Raia Hadsell. 
Progressive Neural Net- works. arXiv preprint arXiv:1606.04671 , 2016. 1, 3 [22] Gobinda Saha and Kaushik Roy. Continual Learning with Scaled Gradient Projection. Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 , 37:9677– 9685, 2023. 1, 2, 3, 5, 6, 8 9 [23] Gobinda Saha, Isha Garg, and K. Roy. Gradient Projection Memory for Continual Learning. International Conference on Learning Representations , 2021. 1, 2, 3, 5, 6, 8 [24] Jonathan Schwarz, Wojciech M. Czarnecki, Jelena Luketina, A. Grabska-Barwinska, Y . Teh, Razvan Pascanu, and R. Hadsell. Progress & Compress: A scalable framework for continual learning. International Conference on Machine Learning , 2018. 3 [25] J. Serr `a, D ´ıdac Sur ´ıs, M. Miron, and Alexandros Karat- zoglou. Overcoming catastrophic forgetting with hard at- tention to the task. International Conference on Machine Learning , 2018. 1, 3, 6, 2 [26] Yujun Shi, Li Yuan, Yunpeng Chen, and Jiashi Feng. Con- tinual Learning via Bit-Level Information Preserving. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 16669–16678, 2021. 1 [27] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and Daan Wierstra. Matching Networks for One Shot Learning. Neural Information Processing Systems , 2016. 5 [28] Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A Comprehensive Survey of Continual Learning: The- ory, Method and Application. IEEE Transactions on Pat- tern Analysis and Machine Intelligence , 46(08):5362–5383, 2024. 1, 3, 4 [29] Shipeng Wang, Xiaorong Li, Jian Sun, and Zongben Xu. Training Networks in Null Space of Feature Covariance for Continual Learning. 2021 IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR) , pages 184– 193, 2021. 1 [30] Ju Xu and Zhanxing Zhu. Reinforced Continual Learning. In Proceedings of the 32nd International Conference on Neural Information Processing Systems , pages 907–916, 2018. 
1, 3 [31] Enneng Yang, Li Shen, Zhenyi Wang, Shiwei Liu, Guib- ing Guo, and Xingwei Wang. Data augmented flatness- aware gradient projection for continual learning. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV) , pages 5607–5616, 2023. 3, 6 [32] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong Learning with Dynamically Expandable Networks. International Conference on Learning Represen- tations , 2018. 1, 3 [33] Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang. Scalable and Order-robust Continual Learning with Additive Parameter Decomposition. International Confer- ence on Learning Representations , 2020. 1 [34] Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. Contin- ual learning of context-dependent processing in neural net- works. Nature Machine Intelligence 2019 1:8 , 1(8):364–372, 2019. 1, 3, 6 [35] Friedemann Zenke, Ben Poole, and Surya Ganguli. Contin- ual Learning Through Synaptic Intelligence. Proceedings of machine learning research , 70:3987, 2017. 1, 3 [36] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning re- quires rethinking generalization. In International Confer- ence on Learning Representations , 2017. 1, 2, 3 [37] Zhen Zhao, Zhizhong Zhang, Xin Tan, Jun Liu, Yanyun Qu, Yuan Xie, and Lizhuang Ma. Rethinking gradient projectioncontinual learning: Stability/plasticity feature space decou- pling. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 3718–3727, 2023. 3, 6 10 CODE-CL: Co nceptor-Based Gradient Projection for De ep Continual L earning Supplementary Material A. Conceptor Implementation Details We implement the conceptor operations following the equa- tions presented in Section 2, with one exception: the AND operation (4). The operation defined in (4) is only valid when the con- ceptor matrices are invertible. 
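The generalized AND operation described next (equation (11), with the basis D computed per Algorithm 2) can be sketched in a few lines of numpy. The function names and the treatment of β as a small numerical tolerance are illustrative assumptions, not taken from the official repository.

```python
import numpy as np

def intersection_basis(C, B, beta=1e-10):
    """Algorithm 2: orthonormal basis D for the intersection of the
    column spaces of C and B. Columns of each U beyond the numerical
    rank span the corresponding null space; directions orthogonal to
    both null spaces span the intersection of the ranges."""
    UC, SC, _ = np.linalg.svd(C)
    UB, SB, _ = np.linalg.svd(B)
    UCn = UC[:, int(np.sum(SC > beta)):]   # null-space directions of C
    UBn = UB[:, int(np.sum(SB > beta)):]   # null-space directions of B
    U, S, _ = np.linalg.svd(UCn @ UCn.T + UBn @ UBn.T)
    return U[:, int(np.sum(S > beta)):]

def conceptor_and(C, B, beta=1e-10):
    """Equation (11): C AND B = D (D^T (C^+ + B^+ - I) D)^-1 D^T,
    using pseudo-inverses so that rank-deficient conceptors are valid."""
    D = intersection_basis(C, B, beta)
    if D.shape[1] == 0:                    # disjoint column spaces
        return np.zeros_like(C)
    core = D.T @ (np.linalg.pinv(C) + np.linalg.pinv(B) - np.eye(C.shape[0])) @ D
    return D @ np.linalg.inv(core) @ D.T
```

For full-rank conceptors, D spans the whole space and the expression reduces to the invertible-case formula (C⁻¹ + B⁻¹ − I)⁻¹; for example, C = B = diag(0.5, 0.8) yields diag(1/3, 2/3).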
However, in practice, since we use a limited number of samples to compute the conceptors, the resulting matrices are often not full rank. To address this, we adopt a more general version of the AND operation, as proposed in [8]:

C ∧ B = D(D⊤(C† + B† − I)D)^{−1}D⊤,   (11)

Here, C† and B† denote the pseudo-inverses of C and B, respectively. The matrix D consists of columns that form an arbitrary orthonormal basis for the intersection of the column spaces of C and B. The procedure for computing D is outlined in Algorithm 2.

Algorithm 2 Computation of matrix D in (11)
Input: C, B, β (threshold), N (dimension of C and B)
Output: D
U_C, S_C ← SVD(C)   ▷ Singular value decomposition
U_B, S_B ← SVD(B)
k_C ← num_elements(S_C > β)   ▷ # of elements > β
k_B ← num_elements(S_B > β)
U′_C ← U_C[:, k_C:]   ▷ Last N − k_C columns
U′_B ← U_B[:, k_B:]
U, S ← SVD(U′_C U′_C⊤ + U′_B U′_B⊤)
k ← num_elements(S > β)
D ← U[:, k:]

B. Additional Ablation Studies

In this section, we present additional ablation studies to evaluate the impact of the number of free dimensions (K) and aperture (α) on the 5-Datasets benchmark, as well as the effect of the threshold parameter (ϵ) across all three benchmarks. Tables 5 and 6 summarize the results on the 5-Datasets benchmark. We observe that increasing α leads to a reduction in BWT, consistent with the findings in Section 4. Similarly, increasing K improves final accuracy, further validating trends observed in the other datasets. Regarding the threshold parameter (ϵ), results suggest that lower values of ϵ enhance performance by allowing more directions in the intersection of input spaces across

Table 5. Ablation studies on the aperture (α) hyperparameter on the 5-Datasets benchmark. Results are reported as mean±standard deviation over five trials. Other hyperparameters are constant as reported in Section 4.

| α | ACC (%) | BWT (%) |
| 4 | 93.32±0.13 | −0.25±0.02 |
| 8 | 93.51±0.13 | −0.11±0.01 |
| 16 | 93.46±0.16 | −0.04±0.00 |

Table 6. Ablation studies on the number of free dimensions (K) parameter on the 5-Datasets benchmark.
Results are reported as mean±standard deviation over five trials. Other hyperparameters are constant as reported in Section 4. K ACC ( %) BWT ( %) 091.67±0.31 −1.36±0.07 20 92.70±0.07 −0.43±0.01 40 93.08±0.08 −0.33±0.09 60 93.22±0.16 −0.28±0.00 8093.32±0.13−0.25±0.00 Table 7. Ablation studies on the threshold ( ϵ) across the four benchmarks. Results are reported as mean ±standard deviation over five trials. Other hyperparameters are constant as reported in Section 4. ϵ ACC ( %) BWT ( %) S-CIFAR1000.277.51±0.18−0.84±0.24 0.577.21±0.32 −1.10±0.28 0.875.71±0.40 −0.93±0.36 S-MiniImageNet0.268.61±0.94 −1.30±0.18 0.568.83±0.41−1.10±0.30 0.866.57±0.24 −0.56±0.18 5-Datasets0.293.42±0.11−0.20±0.06 0.593.32±0.13 −0.25±0.02 0.892.28±0.24 −0.71±0.18 tasks to be freed. However, this also increases memory requirements. Therefore, selecting an appropriate ϵin- volves a trade-off between performance and computational resources. C. Experimental Setup This section provides details on the architecture of all mod- els used in this work, the dataset statistics, the hyperparam- eters for each experiment, and the compute resources em- ployed. 1 Table 8. 5-Datasets statistics. Dataset CIFAR10 MNIST SVHN Fashion MNIST notMNIST Number of classes 10 10 10 10 10 Training samples 47500 57000 69595 57000 16011 Validation samples 2500 3000 3662 3000 842 Test samples 10000 10000 26032 10000 1873 Table 9. List of hyperparameters used in our experiments. Dataset Split CIFAR100 Split miniImageNet 5-Datasets Learning rate ( η) 0.01 0 .1 0 .1 Batch size ( b) 64 64 64 Batch size for conceptor comp. ( bs) 125 125 125 Min. learning rate ( ηth) 10−510−510−3 Learning rate decay factor 1/2 1 /2 1 /3 Patience 6 6 5 Number of epochs ( E) 200 100 100 Aperture ( α) 6 8 4 Threshold ( ϵ) 0.5 0 .5 0 .5 Table 10. Split CIFAR100 and Split miniImageNet datasets statis- tics. 
Dataset Split CIFAR100 Split miniImageNet Number of tasks ( T) 10 20 Sample dimensions 3×32×32 3 ×84×84 Number of classes per task 10 5 Training samples per task 4750 2375 Validation samples per task 250 125 Test samples per task 1000 500 C.1. Model Architecture In this work, we utilize two models: an AlexNet-like archi- tecture, as described in [25], and a Reduced ResNet18 [16]. The AlexNet-like model incorporates batch normaliza- tion (BN) in every layer except the classifier layer. The BN layers are trained during the first task and remain frozen for subsequent tasks. The model consists of three convolu- tional layers with 64,128, and256filters, using kernel sizes of4×4,3×3, and 2×2, respectively. These are followed by two fully connected layers, each containing 2048 neu- rons. ReLU activation functions are used throughout, along with2×2max-pooling layers after each convolutional layer. Dropout is applied with rates of 0.2for the first two layers and0.5for the remaining layers. The Reduced ResNet18 follows the architecture detailed in [23]. For the Split miniImageNet experiments, the first layer uses a stride of 2, while for the 5-Datasets benchmark, it uses a stride of 1.For all models and experiments, cross-entropy loss is employed as the loss function. C.2. Dataset Statistics The statistics for the four benchmarks used in this work for continual image classification are summarized in Table 10 and Table 8. For all benchmarks, we follow the same data partitions as those used in [14, 22, 23]. For the 5-Datasets benchmark, grayscale images are replicated across all RGB channels to ensure compatibility with the architecture. Additionally, all images are resized to 32×32pixels, resulting in an input size of 3×32×32for this benchmark. C.3. Hyperparameters The hyperparameters used in our experiments are detailed in Table 9. C.4. 
Compute resources
All experiments were conducted on a shared internal Linux server equipped with an AMD EPYC 7502 32-Core Processor, 504 GB of RAM, and four NVIDIA A40 GPUs, each with 48 GB of GDDR6 memory. Additionally, code was implemented using Python 3.9 and PyTorch 2.2.1 with CUDA 11.8. | 6 | 1 | Consider a 5-layer AlexNet model with a typical parameter count of around 5 million. The CIFAR100 dataset, with 60,000 images (train + test) divided into 10 tasks, gives 6,000 images per task, trained for 200 epochs with a batch size of 64. This entails significant computational overhead, but nothing excessive by modern standards. With early stopping and a reasonable learning-rate scheduler, it is feasible to train on a single GPU, likely an NVIDIA RTX 3090 or similar. Utilizing efficient SGD, I estimate 6 hours for the entire training process, allowing for overhead and convergence time. This should comfortably fit under 8 hours on a single GPU. | yes | Yes | CV | CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning | 2024-11-21 0:00:00 | https://github.com/mapolinario94/CODE-CL | 1 | downloaded automatically when running script | 3.5 to 4.5 hours - each epoch takes 15 ms and there are 100 epochs; for 5-Datasets the total is 3.5 to 4.5 hrs | https://colab.research.google.com/drive/1-kzSIjBoDKKhnP0x_UUcWJFSE3muxGCC?usp=sharing | Yes | Need to pass the arguments; dependencies were installed accordingly. Everything is in the Google Colab file. |
ISTD+ | RASM | [] | Regional Attention for Shadow Removal | 2024-11-21T00:00:00 | https://arxiv.org/abs/2411.14201v1 | [
"https://github.com/CalcuLuUus/RASM"
] | {'RMSE': '2.53'} | [
"RMSE",
"PSNR",
"SSIM",
"LPIPS"
] | Given the following paper and codebase:
Paper: Regional Attention for Shadow Removal
Codebase: https://github.com/CalcuLuUus/RASM
Improve the RASM model on the ISTD+ dataset. The result
should improve on the following metrics: {'RMSE': '2.53'}. You must use only the codebase provided.
| Regional Attention for Shadow Removal
Hengxing Liu, chrisliu.jz@gmail.com, Tianjin University, Tianjin, China; Mingjia Li, mingjiali@tju.edu.cn, Tianjin University, Tianjin, China; Xiaojie Guo*, xj.max.guo@gmail.com, Tianjin University, Tianjin, China. *Corresponding Author.

Figure 1: (a) Performance comparison with previous SOTA methods. Our method achieved a 40.73 dB PSNR on the shadow region of the ISTD+ dataset, surpassing the previous SOTA method by 0.90 dB; (b) Efficiency comparison with previous SOTA methods. Our method is fast and lightweight with SOTA performance on the SRD dataset; (c) Illustration of self-attentions in shadow removal. Self-attention (used in [11, 20]) has global information exchangeability but with high computational costs. To reduce the complexity, (shifted-)window attention (in [10, 33]) only exchanges the information within a pre-defined cell, but may miss useful clues. Our regional attention refines each token with its neighborhoods, reaching a good balance between effectiveness and efficiency.

Abstract
Shadow, as a natural consequence of light interacting with objects, plays a crucial role in shaping the aesthetics of an image, but it also impairs the content visibility and overall visual quality. Recent shadow removal approaches employ the mechanism of attention, due to its effectiveness, as a key component. However, they often suffer from two issues, large model size and high computational complexity, for practical use. To address these shortcomings, this work devises a lightweight yet accurate shadow removal framework. First, we analyze the characteristics of the shadow removal task to seek the key information required for reconstructing shadow regions and design a novel regional attention mechanism to effectively capture such information. Then, we customize a Regional Attention Shadow Removal Model (RASM, in short), which leverages non-shadow areas to assist in restoring shadow ones. Unlike existing attention-based models, our regional attention strategy allows each shadow region to interact more rationally with its surrounding non-shadow areas, for seeking the regional contextual correlation between shadow and non-shadow areas. Extensive experiments are conducted to demonstrate that our proposed method delivers superior performance over other state-of-the-art models in terms of accuracy and efficiency, making it appealing for practical applications. Our code can be found at https://github.com/CalcuLuUus/RASM.

MM '24, October 28-November 1, 2024, Melbourne, VIC, Australia. ©2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0686-8/24/10. https://doi.org/10.1145/3664647.3681126. arXiv:2411.14201v1 [cs.CV] 21 Nov 2024.

CCS Concepts
• Computing methodologies → Image processing; Computational photography.

Keywords
Shadow removal, Regional attention

ACM Reference Format:
Hengxing Liu, Mingjia Li, and Xiaojie Guo*. 2024. Regional Attention for Shadow Removal. In Proceedings of the 32nd ACM International Conference on Multimedia (MM '24), October 28-November 1, 2024, Melbourne, VIC, Australia. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3664647.3681126

1 Introduction
When light interacts with objects, shadows are cast.
In some cases, shadows can enhance the photographic aesthetics, while in others they act as an interference factor that degrades image quality [36] and may harm the performance of various vision and multimedia algorithms [30, 37], such as object detection, recognition, and image segmentation. Although this problem has drawn much attention from the community, with significant progress over recent years, it remains challenging in practice. Because shadow removal often serves as a preprocessing step for downstream tasks, and more and more systems prefer to process images on portable devices anytime and anywhere, its computational cost and model size, besides high accuracy, are expected to be marginal, especially when computation and memory resources are limited. In other words, a satisfactory shadow removal model shall take into consideration the removal quality, the processing cost, and the model size simultaneously. In the literature, a variety of shadow removal methods [5–7, 10, 12, 19] have been proposed over the last few years, aiming to mitigate the negative impact of shadows on image quality and enhance the performance of vision algorithms. Traditional approaches heavily rely on hand-crafted priors, e.g., intrinsic image properties, which are often violated in complex real-world scenarios and thus produce unsatisfactory results. Deep learning techniques [1, 2, 8, 11, 16, 17, 20] have emerged as powerful alternatives, enabling more robust and data-driven approaches to shadow removal. However, most existing advanced shadow removal models rely on heavy model stacking, necessitating substantial computational resources. This issue significantly limits their applicability to potential downstream tasks in real-world scenarios. Let us take a closer look at the target problem. Given an image, shadows typically occupy a part of the image.
The goal is to convert the involved shadow regions into their non-shadow versions, which should be visually consistent with the non-shadow surroundings in the given image. A natural question arises: is all the information in the entire image equally important for the reconstruction of regions affected by the shadow? Intuitively, aside from the darker color of shadowed areas, the most direct way to discern shadows is by the contrast between the shadowed regions and their neighboring non-shadow areas. From this perspective, we can reasonably assume that the critical information for repairing a certain shadow region should come largely from the non-shadow areas around the target region. Based on this assumption, we propose a novel attention mechanism called regional attention, and customize a lightweight shadow removal network, i.e., RASM. Our design is capable of balancing the efficiency and the accuracy of shadow removal in an end-to-end way. As schematically illustrated in Fig. 2, the effectiveness of regional attention obviates the necessity of complicated tricks and complex network designs. The lightweight U-shaped network RASM still demonstrates remarkable performance. Experimental comparisons on widely-used shadow removal datasets (ISTD+ [23] and SRD [31]) and ablation studies reveal the efficacy and superior performance of RASM over other SOTA methods in terms of effectiveness and efficiency. The primary contributions of this paper are summarized as:
• We rethink the key to shadow removal and propose that the information from the regions surrounding the shadows is essential for effective shadow removal. Inspired by this insight, we introduce a regional attention mechanism that allows each shadowed area to aggregate information from its adjacent non-shadowed regions.
• We develop a shadow removal network, RASM, based on a novel regional attention mechanism that optimizes the interaction between shadowed and non-shadowed areas, effectively balancing accuracy and computational efficiency.
• The comprehensive experimental evaluations conducted on the widely recognized ISTD+ [23] and SRD [31] datasets demonstrate that our proposed method achieves new state-of-the-art performance with a lightweight network architecture.

2 Related Work
Traditional methods for shadow removal rely on image properties such as chromaticity invariance [6, 7], gradient consistency [5, 12], or human interactions [9]. The early work in this category can be traced back to [6, 7], where illumination-invariant images are extracted using a pre-calibrated camera. However, the calibration process is laborious, which limits its practical application. To address this issue, work [5] proposes extracting illumination-invariant images through an optimization procedure that does not require any provenance information of the image. Work [12] aims to develop more sophisticated models for the lit and shadowed areas; however, their methods sometimes fail due to the complexity of shadow generalization and imaging procedures. Some works [12, 21] attempt to address this issue by dividing the shadow removal task into two sub-tasks: shadow detection and mask-based shadow removal. However, since these shadow detectors rely solely on hand-crafted priors, the removal modules may be affected by inaccurate detection results, which can negatively impact overall performance. Furthermore, traditional methods for removing shadows still encounter difficulties when dealing with complex distortions in real-world scenarios, particularly in the penumbra area. Deep learning has enabled significant advancements in data-driven methods, with works [3, 19] investigating unpaired shadow removal training using generative models.
However, these models are typically heavy as they model the distribution of shadowed images in a generative manner. In parallel, using a large-scale dataset, Qu et al. [31] were among the first to train an end-to-end deep network for recovering shadow regions. Wang et al. [32] proposed the ISTD shadow removal dataset, featuring manually-annotated shadow masks.

Figure 2: An illustration of our proposed framework. (a) Overview of the RASM structure. (b) Channel Attention Module. (c) Regional Attention Module. RASM employs the Channel Attention Module for global information interaction, followed by a Regional Attention Module for spatial information interaction.

However, the non-shadow area of shadow images exhibits substantial inconsistency with the corresponding shadow-free images, as noted in [23], which proposed mitigating this by transforming the non-shadow area of ground-truths using linear regression. Subsequent studies [1, 2, 8, 16–18, 28] have explored shadow removal with given shadow masks. Recently, some works have sought to improve the computational efficiency of shadow removal. Zhu et al. [39] proposed a deep unfolding framework for removing shadows efficiently. With the great help of normalizing flow, which reuses the encoder as a decoder, work [38] significantly reduced the parameter size. However, representation power was also limited since only half the features were used per layer to ensure invertibility. To address the issue of unsatisfactory boundary artifacts persisting after restoration, Guo et al. [11] proposed the first diffusion-based shadow removal model, which can gradually optimize the shadow mask while restoring the image. Work [20] leverages features extracted from pre-trained Vision Transformer models, unveiling a removal method based on adaptive attention and a ViT similarity loss.
However, such diffusion-based methods incur high time and space complexity, leading to significant computational overhead. ShadowFormer [10] proposed re-weighting the attention map in the transformer using the shadow mask to exploit global correlations between shadow and non-shadow areas. Unlike ShadowFormer, however, our emphasis lies on the information from non-shadow regions closely surrounding the shadows. We enable each shadow region to integrate information from its immediate non-shadow surroundings. This strategy offers greater flexibility than the window attention used by ShadowFormer and is more aligned with the intrinsic characteristics of shadow removal tasks. Our method achieves superior results without increasing the complexity inherent in ShadowFormer.

3 Methodology
3.1 Problem Analysis
In everyday situations, determining the location of shadowed regions often relies on the contrast between the shadowed and adjacent non-shadowed areas. Shadows affect specific areas within an image, resulting in significant differences in brightness, color, and texture compared to non-shadowed surroundings. These differences contain the crucial information required for shadow removal. Sharp transitions in lighting at the periphery of shadowed regions usually create distinct gradation zones. Therefore, it is crucial to comprehend and utilize the information from the non-shadowed regions surrounding the shadows to achieve accurate shadow removal, as shown in Fig. 3. The importance of the information from non-shadowed regions increases with proximity to shadowed areas. Analyzing the characteristics of these areas not only allows for the accurate identification of shadow boundaries but also provides the necessary reference data for shadow removal.
Regional attention mechanisms enable models to focus on the critical areas surrounding the shadows, distinguishing which features are important for the task at hand, thereby facilitating more effective information integration.

3.2 Regional Attention for Shadow Removal
Contrary to prior studies such as those by Liang et al. [25] and Wang et al. [33], which predominantly address image restoration tasks characterized by global corruption, shadow removal presents a distinct challenge due to its nature of partial corruption. In this context, the non-shadow regions are of critical importance, as they play an essential role in the restoration of shadow-affected areas. Shadow removal tasks necessitate a large receptive field to assimilate surrounding contextual information effectively. This requirement is rooted in the primary method of distinguishing shadow regions by contrasting them with adjacent non-shadowed areas. Therefore, the primary goal during restoration is to ensure that the reconstructed shadow regions closely resemble their surrounding non-shadow areas, rather than relying excessively on long-range global spatial information. This is where regional attention becomes pivotal, focusing on the information from neighboring regions to enhance the restoration process.

Figure 3: The first column of images presents scenes with shadows. The highlighted regions in the second column of images represent the non-shadowed areas immediately adjacent to the shadows. We posit that the information from these areas is crucial for the task of shadow removal.

The window attention mechanism [25], which projects queries, keys, and values from the information within a specific window to perform self-attention, is less effective for shadow removal.
This limitation arises because the non-shadow information required to address shadow discrepancies varies significantly across different locations. Unlike the window attention mechanism utilized in prior works [10, 25], which generalizes the attention within a window, our proposed regional attention mechanism is specifically designed to tailor the attention to the specific needs of each shadowed area. By doing so, it ensures that each shadow region can access and integrate distinct and locally relevant non-shadow information, thus facilitating more precise and context-aware restoration. To fulfill the outlined objectives, we have devised a Transformer-based network leveraging regional attention, termed RASM, specifically for shadow removal. Predicated on our hypothesis that non-shadow information nearer to shadows is of heightened importance, we allocate a minimal proportion of parameters and computational resources to rapidly process global feature interactions, thus allowing greater computational resources and parameters to be concentrated on regional attention processes. RASM operates as a multi-scale encoder-decoder model. Initially, we utilize the Channel Attention Module (CA Module) [15] to effectively capture global information. Subsequently, we introduce a module equipped with regional attention, which utilizes spatial and channel-wise contextual information from non-shadow areas to facilitate the restoration of shadow regions during the bottleneck phase.

3.2.1 Overall Architecture. For a shadow input $I_s \in \mathbb{R}^{3\times H\times W}$ accompanied by a shadow mask $I_m \in \mathbb{R}^{H\times W}$, a linear projection $\mathrm{LinearProj}(\cdot)$ is initially applied to generate the low-level feature embedding $X_0 \in \mathbb{R}^{C\times H\times W}$, with $C$ representing the embedding dimension. Subsequently, $X_0$ is processed through an encoder-decoder framework, each composed of CA modules designed to integrate multi-scale global features.
Within each CA module are two CA blocks and a scaling layer, specifically a down-sampling layer in the encoder and an up-sampling layer in the decoder, as depicted in Fig. 2. The CA block functions by compressing spatial information through CA and subsequently capturing long-range correlations using a feed-forward MLP [4], structured as follows:

$$\tilde{X} = \mathrm{CA}(\mathrm{LN}(X)) + X, \quad (1)$$
$$\hat{X} = \mathrm{GELU}(\mathrm{MLP}(\mathrm{LN}(\tilde{X}))) + \tilde{X}, \quad (2)$$

where $\mathrm{LN}(\cdot)$ denotes layer normalization, $\mathrm{GELU}(\cdot)$ denotes the GELU activation layer, and $\mathrm{MLP}(\cdot)$ denotes a multi-layer perceptron. After passing through $L$ modules within the encoder, we obtain the hierarchical features $\{X_1, X_2, \ldots, X_L\}$. We calculate the regional contextual correlation via the Regional Attention Module (RAM) according to the pooled feature $X_L$ in the bottleneck stage. Next, the features input to each CA module of the decoder are the concatenation of the up-sampled features and the corresponding features from the encoder through skip-connections.

3.2.2 Regional Attention Module. Given that shadow removal is a task of partial corruption, existing local attention mechanisms [25, 33] face considerable constraints during the shadow removal process, as the areas within a window may be entirely corrupted. While [10] mitigates this to a certain extent, it is still limited by the constraints of window-based attention, lacking the flexibility to provide each shadow region with unique, spatially relevant non-shadow information. To address this, we propose a novel Regional Attention Module (RAM), which enables each shadowed location to more effectively utilize regional attention information across spatial and channel dimensions. In the development of our Regional Attention Module, we draw inspiration from the Neighborhood Attention Transformer [14], adapting its core concepts to better suit the specific challenges of shadow removal.
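Eqs. (1)-(2) can be sketched as a single residual block. This is a minimal sketch under stated assumptions: the paper takes the channel-attention operator from [15] without spelling it out, so the squeeze-and-excitation-style gate and the MLP expansion ratio below are guesses, and the GELU is placed inside the MLP as is conventional.

```python
import torch
import torch.nn as nn

class CABlock(nn.Module):
    """Sketch of one CA block, Eqs. (1)-(2):
    X~ = CA(LN(X)) + X;  X^ = MLP(LN(X~)) + X~ with a GELU inside the MLP.
    The channel-attention gate is assumed to be squeeze-and-excitation style."""
    def __init__(self, dim, reduction=4, mlp_ratio=2):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.ln2 = nn.LayerNorm(dim)
        self.ca = nn.Sequential(              # spatial squeeze -> per-channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1), nn.Sigmoid(),
        )
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):                      # x: (B, H, W, C), channels-last
        y = self.ln1(x).permute(0, 3, 1, 2)    # to (B, C, H, W) for the gate
        y = (y * self.ca(y)).permute(0, 2, 3, 1)
        x = x + y                              # Eq. (1)
        return x + self.mlp(self.ln2(x))       # Eq. (2)
```

The block preserves shape, so it can be stacked in pairs inside each encoder/decoder CA module as the paper describes.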
Given a feature map $Y \in \mathbb{R}^{C\times H'\times W'}$, we normalize it by layer normalization and reshape it as $X \in \mathbb{R}^{n\times d}$, where $n = H'\times W'$ and $d = C$. $X$ is transformed into $Q$, $K$, and $V$, together with relative positional biases $B$, through linear projections. We define the attention weight for the $i$-th input within a region of size $r$ as the dot product between the $Q$ of the $i$-th input and the $K$ of the $r$ elements in the surrounding region:

$$A_i^r = \begin{bmatrix} Q_i K_{R_1(i)}^T + B_{(i,R_1(i))} \\ Q_i K_{R_2(i)}^T + B_{(i,R_2(i))} \\ \vdots \\ Q_i K_{R_r(i)}^T + B_{(i,R_r(i))} \end{bmatrix}, \quad (3)$$

where $R_j(i)$ denotes the $j$-th element in the region of the $i$-th input. The schematic illustration of $Q$ for the input and $K$ for the $r$ elements of the surrounding region is shown in Fig. 1(c). We then define the values, $V_i^r$, as a matrix containing the $r$ value projections from elements in the region of the $i$-th input:

$$V_i^r = \begin{bmatrix} V_{R_1(i)}^T & V_{R_2(i)}^T & \cdots & V_{R_r(i)}^T \end{bmatrix}^T. \quad (4)$$

Regional attention for the $i$-th token with region size $r$ is then defined as:

$$\mathrm{NA}_r(i) = \mathrm{softmax}\!\left(\frac{A_i^r}{\sqrt{d}}\right) V_i^r, \quad (5)$$

where $\sqrt{d}$ is the scaling parameter. This operation is repeated for every pixel in the feature map. Finally, the output from RAM is subjected to CA for global information interaction and feature fine-tuning. RAM can be represented as follows:

$$\tilde{X} = \mathrm{CA}(\mathrm{RAM}(\mathrm{LN}(X))) + X, \quad (6)$$
$$\hat{X} = \mathrm{GELU}(\mathrm{MLP}(\mathrm{LN}(\tilde{X}))) + \tilde{X}. \quad (7)$$

3.2.3 Regional Attention With Larger Receptive Field. Shadows are usually not isolated; they are intimately connected with their surroundings. During shadow removal, maintaining the naturalness and coherence of the image is crucial. If the receptive field is too small, the model might only see parts of the shadow or objects obscured by the shadow, potentially leading to incorrect shadow perception and removal.
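A minimal single-head sketch of the regional attention in Eqs. (3)-(5), using `F.unfold` to gather each pixel's r×r (optionally dilated) neighborhood of keys and values. The relative positional bias B, the multi-head split, and the exact border handling are simplifications relative to the Neighborhood Attention implementation the paper builds on (zero padding is used here, so border pixels attend partly to padding).

```python
import torch
import torch.nn.functional as F

def regional_attention(x, wq, wk, wv, r=11, dilation=2):
    """Single-head sketch of Eqs. (3)-(5): every pixel attends to the r*r
    (optionally dilated) region around it. Positional bias B omitted.
    x: (B, C, H, W); wq/wk/wv: (C, C) projection matrices."""
    B, C, H, W = x.shape
    q = torch.einsum('bchw,dc->bdhw', x, wq).reshape(B, C, H * W)
    k = torch.einsum('bchw,dc->bdhw', x, wk)
    v = torch.einsum('bchw,dc->bdhw', x, wv)
    pad = dilation * (r - 1) // 2                       # keep output at H*W positions
    k_n = F.unfold(k, r, dilation=dilation, padding=pad).reshape(B, C, r * r, H * W)
    v_n = F.unfold(v, r, dilation=dilation, padding=pad).reshape(B, C, r * r, H * W)
    attn = torch.einsum('bcn,bcrn->brn', q, k_n) / C ** 0.5  # Eq. (3), bias dropped
    attn = attn.softmax(dim=1)                               # softmax over the region
    out = torch.einsum('brn,bcrn->bcn', attn, v_n)           # Eq. (5)
    return out.reshape(B, C, H, W)
```

Unlike window attention, each pixel here gets its own neighborhood, so no pixel is restricted to a pre-defined cell; dilation > 1 widens the span without changing the r² attention cost.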
A larger receptive field enables a model to more comprehensively understand the relationships between shadowed and non-shadowed areas, helps the model maintain consistency and natural transitions in the surrounding environment while removing shadows, and avoids unnatural patches or color discrepancies in the processed image. Similarly, regional attention benefits from a larger receptive field, as tokens calculated within this area can access more information. Inspired by DiNAT [13], to balance model complexity and performance, we propose a regional attention mechanism with a dilation factor. Specifically, we expand the receptive field to a greater range by increasing the stride when selecting regions, thereby maintaining the overall attention span. Using the regional attention mechanism with the dilation factor allows us to extend the receptive field of regional attention without increasing model complexity, further enhancing performance.

3.3 Loss Function
We employ two loss terms: a content loss and a perceptual loss. We provide a detailed description of these loss terms below.

Content Loss. The content loss ensures consistency between the output image and the ground truth. In the image domain, we adopt the Charbonnier loss [22]. The content loss can be expressed as:

$$\mathcal{L}_{cont} = \sqrt{(\hat{I} - I_{gt})^2 + \epsilon}, \quad (8)$$

where $\hat{I}$ is the output image and $I_{gt}$ is the ground-truth shadow-free image. $\epsilon$ is set to $10^{-6}$ to ensure numerical stability.

Perceptual Loss. Perceptual loss has been widely used in various image restoration and generation tasks to preserve the high-level features and semantic information of an image while minimizing the differences between the restored image and the ground truth. We minimize the $\ell_1$ difference between the features of $\hat{I}$ and $I_{gt}$ in the {conv1_2, conv2_2, conv3_2, conv4_2, conv5_2} layers of an ImageNet-pretrained VGG-19 model.
Denoting the $i$-th feature extractor as $\Psi_i(\cdot)$, the perceptual loss we adopt can be expressed as follows:

$$\mathcal{L}_{per} = \sum_i w_i \|\Psi_i(\hat{I}) - \Psi_i(I_{gt})\|_1, \quad (9)$$

where $w_i$ is the weight among different layers, empirically set as {0.1, 0.1, 1, 1, 1}. The total loss function turns out to be:

$$\mathcal{L} = \alpha_1 \mathcal{L}_{per} + \alpha_2 \mathcal{L}_{cont}, \quad (10)$$

where $\alpha_1, \alpha_2 = \{0.001, 1\}$ are empirically set. RASM undergoes end-to-end supervised training using the loss function $\mathcal{L}$, achieving state-of-the-art results in shadow removal.

4 Experiments
4.1 Implementation Details
The proposed model is implemented using PyTorch. We train our model using the AdamW optimizer [29] with momentum (0.9, 0.999) and weight decay 0.02. The initial learning rate is set to $4\times10^{-4}$, then gradually reduced to $10^{-6}$ with cosine annealing [27]. We set the region size of the regional attention to 11 and the dilation factor to 2 in our experiments. Our RASM adopts an encoder-decoder structure ($L = 3$). We set the first feature embedding dimension as $C = 32$. During the training stage, we employed data augmentation techniques, including rotation, horizontal flipping, vertical flipping, MixUp [35], and adjustments in the H and S components of the HSV color space. To validate the performance of our model, we conduct experiments on two datasets. SRD [31] is a paired dataset with 3088 pairs of shadow and shadow-free images. We use the predicted masks provided by DHAN [2]. For the adjusted ISTD [23] dataset, we use 1330 paired shadow and shadow-free images for training and 540 for testing. Following previous works [2, 12, 17, 32], we compute the Root Mean Square Error (RMSE) between the output image and the ground-truth shadow-free image in the CIE LAB color space as a quantitative metric (the lower the better). To make the comparison more comprehensive, we also follow [10] to report the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) in RGB space (the higher the better). The FLOPs are reported on 256×256 images.
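The losses in Eqs. (8)-(10) are straightforward to express. A sketch, assuming a mean reduction over pixels (the paper does not state the reduction) and leaving the VGG-19 feature extraction of Eq. (9) as a caller-supplied function:

```python
import torch

def charbonnier_loss(pred, gt, eps=1e-6):
    """Content loss, Eq. (8): sqrt((I_hat - I_gt)^2 + eps), averaged over
    all elements (the reduction is an assumption)."""
    return torch.sqrt((pred - gt) ** 2 + eps).mean()

def total_loss(pred, gt, perceptual_fn, a1=0.001, a2=1.0):
    """Eq. (10): L = a1 * L_per + a2 * L_cont, with a1/a2 = 0.001/1 as in the
    paper. `perceptual_fn` is assumed to implement Eq. (9), i.e. the weighted
    l1 distance of VGG-19 features with layer weights {0.1, 0.1, 1, 1, 1};
    the VGG wiring itself is omitted here."""
    return a1 * perceptual_fn(pred, gt) + a2 * charbonnier_loss(pred, gt)
```

Note that with identical inputs the Charbonnier loss is sqrt(eps) ≈ 1e-3 rather than zero; the eps term is what keeps the gradient finite near zero residuals.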
4.2 Performance Evaluation
4.2.1 Quantitative Comparisons. We first compare our proposed method with the state-of-the-art shadow removal methods on the ISTD+ [23] dataset. The competitors are DC-ShadowNet [19], BMNet [38], DHAN [2], AutoExposure [8], G2R [28], ShadowFormer [10], ShadowDiffusion [11], Li et al. [24], and Liu et al. [26]. The input images are all randomly cropped to 320×320 following the existing method [10] for comparison. The results are depicted in Tab. 1. Our approach surpasses existing methods across all metrics in comparisons of Shadow Region, All Image, and also in the PSNR and RMSE metrics for Non-Shadow Regions, achieving SOTA performance. In Non-Shadow Regions, our SSIM is essentially on par with the best-reported results. We also compare our method with the state-of-the-art shadow removal methods on the SRD [31] dataset. The competitors consist of 11 methods: DSC [16], DHAN [2], AutoExposure [8], DC-ShadowNet [19], Unfolding [39], BMNet [38], ShadowFormer [10], ShadowDiffusion [11], Li et al. [24], Liu et al. [26], and DeS3 [20]. Since there exists no ground-truth mask to evaluate the performance in the shadow region and non-shadow region separately, we use a mask extracted from DHAN [2] following the existing method [10] for comparison. The results are depicted in Tab. 2.

Table 1: The quantitative results on the ISTD+ [23] dataset. Columns give PSNR↑ / SSIM↑ / RMSE↓ for the Shadow Region (S), Non-Shadow Region (NS), and All Image (ALL). The best result is in bold, while the second-best one is underlined. To make a fair comparison, we use results published by the authors. "−" indicates that the metric is missing in the original paper.

Method                 S: PSNR SSIM RMSE      NS: PSNR SSIM RMSE     ALL: PSNR SSIM RMSE
DHAN [2]               33.08  0.988  9.49     27.28  0.972  7.39     25.78  0.958  7.74
G2R [28]               33.88  0.978  8.71     35.94  0.977  2.81     30.85  0.946  3.78
DC-ShadowNet [19]      32.20  0.977  10.83    34.45  0.973  3.44     29.17  0.939  4.70
AutoExposure [8]       36.02  0.976  6.67     30.95  0.88   3.84     29.28  0.847  4.28
BMNet [38]             38.17  0.991  5.72     37.95  0.986  2.42     34.34  0.974  2.93
ShadowFormer [10]      39.67  0.992  5.21     38.82  0.983  2.30     35.46  0.973  2.80
ShadowDiffusion [11]   39.82  −      4.90     38.90  −      2.30     35.72  −      2.70
Li et al. [24]         38.46  0.989  5.93     37.27  0.977  2.90     34.14  0.960  3.39
Liu et al. [26]        38.04  0.990  5.69     39.15  0.984  2.31     34.96  0.968  2.87
Ours                   40.73  0.993  4.41     39.23  0.985  2.17     36.16  0.976  2.53

Table 2: The quantitative results on the SRD [31] dataset (same layout as Table 1). The best result is in bold, while the second-best one is underlined.

Method                 S: PSNR SSIM RMSE      NS: PSNR SSIM RMSE     ALL: PSNR SSIM RMSE
DSC [16]               30.65  0.960  8.62     31.94  0.965  4.41     27.76  0.903  5.71
DHAN [2]               32.71  0.943  6.60     33.88  0.949  3.46     29.72  0.923  4.40
AutoExposure [8]       31.34  0.933  7.90     29.74  0.916  5.21     26.99  0.869  5.95
DC-ShadowNet [19]      32.10  0.927  6.91     33.48  0.936  3.66     29.35  0.902  4.61
BMNet [38]             33.81  0.940  7.44     34.91  0.946  5.99     30.68  0.923  3.92
Unfolding [39]         34.94  0.980  7.44     35.85  0.982  3.74     31.72  0.952  4.79
ShadowFormer [10]      36.91  0.989  5.90     36.22  0.989  3.44     32.90  0.958  4.04
ShadowDiffusion [11]   38.72  0.987  4.98     37.78  0.985  3.44     34.73  0.970  3.63
Li et al. [24]         39.33  0.984  6.09     35.61  0.967  2.97     33.17  0.939  3.83
Liu et al. [26]        36.51  0.983  5.49     37.71  0.986  3.00     33.48  0.967  3.66
DeS3 [20]              37.45  0.984  5.88     38.12  0.988  2.83     34.11  0.968  3.72
Ours                   37.91  0.988  5.02     38.70  0.992  2.72     34.46  0.976  3.37

Table 3: Efficiency evaluation. Parameter counts and GFLOPs are metered with fvcore [34] on 256×256 inputs. The best result is in bold, and the second-best result is underlined.

Method                 Params (M)   GFLOPs   RMSE
DHAN [2]               16.4         126.0    4.40
AutoExposure [8]       19.7         53.0     5.95
BMNet [38]             0.4          11.6     3.92
Unfolding [39]         10.1         48.2     4.79
ShadowFormer [10]      11.4         63.1     4.04
ShadowDiffusion [11]   55.2         896.7    3.63
Li et al. [24]         23.9         68.3     3.83
Ours                   5.2          25.2     3.37

As shown in Tab.
2, our method outperforms existing techniques across all met- rics for the Non-Shadow Region. Performance in the shadow region does not quite match that of ShadowDiffusion [ 11] and Li et al . [24] , possibly due to stricter adherence to the shadow mask guidance. It is anticipated that our method would achieve a more satisfying result by employing a higher-quality shadow mask or utilizing user- provided masks. Notably, despite imprecise masks, our method is still the best among the competitors under RMSE for All Image.To validate the efficiency of our method, we also conduct a com- parison of FLOPs and parameter counts. As depicted in Tab. 3, our model has a small number of parameters and low FLOPs, utilizing a negligible amount of computational resources while achieving superior performance, demonstrating that our model effectively balances model complexity and model performance. 4.2.2 Qualitative Comparisons. This part exhibits several examples from SRD and ISTD+ datasets to compare the visual quality shown in Fig. 4 and 5. Our method achieves state-of-the-art performance with fewer residual shadow components and no visual artifact. Moreover, our total parameters are significantly fewer than most of the previous arts, which demonstrates the efficacy of our practical designs. 4.3 Model Analysis Discussion on Regional Attention and Window Attention. To validate that our proposed regional attention is more suitable for the shadow removal task than traditional window attention, we choose the baseline model and its variants for comparison. Specif- ically, we replace all the window attention in the baseline model with regional attention, maintain the same area size of regional attention and window attention, and represent them as {Window Regional Attention for Shadow Removal MM ’24, October 28-November 1, 2024, Melbourne, VIC, Australia (a) Input (b) DC [19] (c) BMNet [38] (d) SF [10] (e) SD [11] (f) Ours (g) GT Figure 4: Qualitative comparison on ISTD+ dataset. 
Please zoom in for more details. VariantShadow Region (S) All Image (ALL) PSNR↑ SSIM↑ RMSE↓ PSNR↑ SSIM↑ RMSE↓ 𝑊𝑖𝑛𝑑𝑜𝑤𝐴𝑡𝑡. 40.06 0.992 4.84 35.78 0.975 2.62 𝑅𝑒𝑔𝑖𝑜𝑛𝑎𝑙𝐴𝑡𝑡. 40.73 0.993 4.41 36.16 0.976 2.53 Table 4: Experiments for modeling on ISTD+ dataset. The best result is in bold. Att., Regional Att.}. As shown in Tab. 4, the regional attention se- lected outperforms window-based attention on all three metrics, proving the superiority of our design. Discussion on Receptive Field of Regional Attention. The size of the receptive field and the final effect of the shadow removal task are closely related. Here, we discuss two prominent parameters that control the receptive field size of our proposed regional attention mechanism: the size of the region 𝑟and the dilation factor 𝑑. We tried different region sizes and dilation factors to see how they affect the results. As shown in Tab. 5, we found that as the 𝑟increases, the perfor- mance of the model improves while the computational load also increases. When 𝑟is greater than or equal to 15×15, the bene- fits are obtained by adjusting 𝑟approach saturation. When 𝑑isRegion Size Dilation PSNR SSIM RMSE GFLOPs 7×7 1 35.74 0.974 2.67 24.7 11×11 1 35.94 0.976 2.56 25.2 15×15 1 36.01 0.976 2.60 26.0 21×21 1 36.00 0.976 2.60 27.6 11×11 1 35.94 0.976 2.56 25.2 11×11 2 36.16 0.976 2.53 25.2 11×11 3 36.02 0.976 2.59 25.2 Table 5: Experiments for region size and dilation rate on ISTD+ dataset. Our final choice is marked in bold. within the appropriate range, the model benefits most. However, when𝑑is too small, the spatial attention receptive field is limited. Conversely, when 𝑑is too large, the spatial attention will choose a sparse distribution of region elements, making it difficult to ag- gregate non-shadow information from the surrounding shadowed area, leading to performance degradation. To balance performance and computational complexity, we choose a regional attention size of11×11and a dilation rate of 2 for our model. 
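The receptive-field behavior discussed above (region size 𝑟, dilation 𝑑) can be made concrete with a small index-level sketch. This is an illustrative assumption, not the paper's implementation: `region_indices` only enumerates which token coordinates an 11×11 dilated regional window would attend to, shifting the window near image borders so it stays in bounds.

```python
def region_indices(i, j, r=11, d=2, height=64, width=64):
    """Hypothetical sketch: coordinates attended to by token (i, j) under an
    r x r regional attention window with dilation d. The window is centered
    on (i, j) and shifted to remain inside the feature map near the edges."""
    half = (r - 1) // 2 * d
    # Clamp the window origin so the full dilated grid fits in the image.
    top = min(max(i - half, 0), height - 1 - 2 * half)
    left = min(max(j - half, 0), width - 1 - 2 * half)
    return [(top + a * d, left + b * d) for a in range(r) for b in range(r)]

# Each token attends to r*r = 121 tokens, but with d = 2 the window spans
# d*(r-1)+1 = 21 pixels per side: dilation enlarges the receptive field
# without adding attended tokens (and hence without extra FLOPs).
idx = region_indices(32, 32, r=11, d=2)
rows = sorted({p[0] for p in idx})
span = rows[-1] - rows[0] + 1
```

This matches the trade-off in Tab. 5: raising 𝑑 widens the span (so shadowed tokens can reach surrounding non-shadow areas) while the attended-token count, and thus the GFLOPs, stays fixed.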
MM '24, October 28-November 1, 2024, Melbourne, VIC, Australia. Hengxing Liu, Mingjia Li, and Xiaojie Guo.
Figure 5: Qualitative comparison on the SRD dataset. (a) Input (b) DC [19] (c) BMNet [38] (d) SD [11] (e) DeS3 [20] (f) Ours (g) GT. Please zoom in for more details.
Figure 6: A visualization of our regional attention. The original image is on the left, and the star marks the selected points. The heatmaps indicate the regional attention weight of the marked tokens. Brighter colors indicate a larger attention score.
Visualization of our regional attention. To validate whether our proposed regional attention mechanism truly enables shadow areas to interact with their adjacent non-shadow areas, we selected several points on the image and visualized their attention weight allocation. As shown in Fig. 6, we can see that in completely illuminated areas or shadow areas, the attention weights of these points are relatively low and even, while points that notice shadows have a much larger attention weight when the attention area can encompass the surrounding non-shadow areas. Moreover, points with different shadow positions pay attention to different non-shadow areas, corresponding to the fact that each shadow area's information interacts with the adjacent non-shadow area's information, which is consistent with our proposed regional attention mechanism.
5 Concluding Remarks
In this work, we rethought the most significant information source for shadow removal, namely, the non-shadow areas adjacent to the shadow region, which play a crucial role in this task. Based on this, we proposed a novel regional attention mechanism and introduced a lightweight shadow removal model, RASM. The regional attention mechanism enables each shadow region to focus on specific information from surrounding non-shadow areas, thereby effectively utilizing this information for shadow removal.
We demonstrated that RASM strikes a good balance between model complexity and model performance. Our model uses fewer parameters and lower FLOPs, and achieves superior performance on the SRD and ISTD+ datasets.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant nos. 62372251 and 62072327.
References
[1] Zipei Chen, Chengjiang Long, Ling Zhang, and Chunxia Xiao. 2021. CANet: A context-aware network for shadow removal. In ICCV. 4743–4752.
[2] Xiaodong Cun, Chi-Man Pun, and Cheng Shi. 2020. Towards Ghost-Free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN. In AAAI. AAAI Press, 10680–10687.
[3] Bin Ding, Chengjiang Long, Ling Zhang, and Chunxia Xiao. 2019. ARGAN: Attentive recurrent generative adversarial network for shadow detection and removal. In ICCV. 10213–10222.
[4] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR. OpenReview.net.
[5] Graham D. Finlayson, Mark S. Drew, and Cheng Lu. 2009. Entropy Minimization for Shadow Removal. IJCV 85, 1 (2009), 35–57.
[6] Graham D. Finlayson, Steven D. Hordley, and Mark S. Drew. 2002. Removing Shadows from Images. In ECCV, Vol. 2353. 823–836.
[7] Graham D. Finlayson, Steven D. Hordley, Cheng Lu, and Mark S. Drew. 2006. On the Removal of Shadows from Images. IEEE TPAMI 28, 1 (2006), 59–68.
[8] Lan Fu, Changqing Zhou, Qing Guo, Felix Juefei-Xu, Hongkai Yu, Wei Feng, Yang Liu, and Song Wang. 2021. Auto-exposure fusion for single-image shadow removal. In CVPR. 10571–10580.
[9] Han Gong and Darren Cosker. 2014. Interactive Shadow Removal and Ground Truth for Variable Scene Categories. In BMVC.
[10] Lanqing Guo, Siyu Huang, Ding Liu, Hao Cheng, and Bihan Wen. 2023. Shadow- Former: Global Context Helps Image Shadow Removal. In AAAI . [11] Lanqing Guo, Chong Wang, Wenhan Yang, Siyu Huang, Yufei Wang, Hanspeter Pfister, and Bihan Wen. 2023. Shadowdiffusion: When degradation prior meets diffusion model for shadow removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 14049–14058. [12] Ruiqi Guo, Qieyun Dai, and Derek Hoiem. 2012. Paired regions for shadow detection and removal. IEEE TPAMI 35, 12 (2012), 2956–2967. [13] Ali Hassani and Humphrey Shi. 2022. Dilated Neighborhood Attention Trans- former. (2022). arXiv:2209.15001 [cs.CV] https://arxiv.org/abs/2209.15001 [14] Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 2023. Neigh- borhood Attention Transformer. In CVPR . IEEE, 6185–6194. [15] Jie Hu, Li Shen, and Gang Sun. 2018. Squeeze-and-Excitation Networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition . https: //doi.org/10.1109/cvpr.2018.00745 [16] Xiaowei Hu, Chi-Wing Fu, Lei Zhu, Jing Qin, and Pheng-Ann Heng. 2020. Direction-Aware Spatial Context Features for Shadow Detection and Removal. IEEE TPAMI 42, 11 (2020), 2795–2808. [17] Xiaowei Hu, Yitong Jiang, Chi-Wing Fu, and Pheng-Ann Heng. 2019. Mask- ShadowGAN: Learning to remove shadows from unpaired data. In ICCV . 2472– 2481. [18] Yeying Jin, Ruoteng Li, Wenhan Yang, and Robby T. Tan. 2023. Estimating Reflectance Layer from a Single Image: Integrating Reflectance Guidance and Shadow/Specular Aware Learning. In AAAI . AAAI Press, 1069–1077.[19] Yeying Jin, Aashish Sharma, and Robby T Tan. 2021. DC-ShadowNet: Single- Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network. In CVPR . 5027–5036. [20] Yeying Jin, Wei Ye, Wenhan Yang, Yuan Yuan, and Robby T. Tan. 2024. DeS3: Adaptive Attention-Driven Self and Soft Shadow Removal Using ViT Similarity. InAAAI . AAAI Press, 2634–2642. 
[21] Salman Hameed Khan, Mohammed Bennamoun, Ferdous Ahmed Sohel, and Roberto Togneri. 2016. Automatic Shadow Detection and Removal from a Single Image. IEEE TPAMI 38, 3 (2016), 431–446. [22] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. 2019. Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks. IEEE TPAMI 41, 11 (2019), 2599–2613. [23] Hieu M. Le and Dimitris Samaras. 2019. Shadow Removal via Shadow Image Decomposition. In ICCV . IEEE, 8577–8586. https://doi.org/10.1109/ICCV.2019. 00867 [24] Xiaoguang Li, Qing Guo, Rabab Abdelfattah, Di Lin, Wei Feng, Ivor W. Tsang, and Song Wang. 2023. Leveraging Inpainting for Single-Image Shadow Removal. InICCV . IEEE, 13009–13018. [25] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. 2021. SwinIR: Image Restoration Using Swin Transformer. In ICCVW . IEEE, 1833–1844. [26] Yuhao Liu, Zhanghan Ke, Ke Xu, Fang Liu, Zhenwei Wang, and Rynson W. H. Lau. 2024. Recasting Regional Lighting for Shadow Removal. In AAAI . AAAI Press, 3810–3818. [27] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In ICCV . IEEE, 9992–10002. [28] Zhihao Liu, Hui Yin, Xinyi Wu, Zhenyao Wu, Yang Mi, and Song Wang. 2021. From Shadow Generation to Shadow Removal. In CVPR . [29] Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. InICLR (Poster) . OpenReview.net. [30] Sohail Nadimi and Bir Bhanu. 2004. Physical Models for Moving Shadow and Object Detection in Video. IEEE TPAMI 26, 8 (2004), 1079–1087. [31] Liangqiong Qu, Jiandong Tian, Shengfeng He, Yandong Tang, and Rynson W. H. Lau. 2017. DeshadowNet: A Multi-context Embedding Deep Network for Shadow Removal. In CVPR . IEEE Computer Society, 2308–2316. https://doi.org/10.1109/ CVPR.2017.248 [32] Jifeng Wang, Xiang Li, and Jian Yang. 2018. 
Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. InCVPR . 1788–1797. [33] Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. 2022. Uformer: A General U-Shaped Transformer for Image Restoration. In CVPR . IEEE, 17662–17672. [34] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. 2019. Detectron2. https://github.com/facebookresearch/detectron2. [35] Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. 2018. mixup: Beyond Empirical Risk Minimization. In ICLR (Poster) . OpenReview.net. [36] Ling Zhang, Qing Zhang, and Chunxia Xiao. 2015. Shadow Remover: Image Shadow Removal Based on Illumination Recovering Optimization. IEEE TIP 24, 11 (2015), 4623–4636. [37] Yiqi Zhong, Xianming Liu, Deming Zhai, Junjun Jiang, and Xiangyang Ji. 2022. Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon. In CVPR . 15324–15333. [38] Yurui Zhu, Jie Huang, Xueyang Fu, Feng Zhao, Qibin Sun, and Zheng-Jun Zha. 2022. Bijective Mapping Network for Shadow Removal. In CVPR . 5627–5636. [39] Yurui Zhu, Zeyu Xiao, Yanchi Fang, Xueyang Fu, Zhiwei Xiong, and Zheng-Jun Zha. 2022. Efficient Model-Driven Network for Shadow Removal. In AAAI . | 6 | 1 | The model proposed, RASM, has a lightweight architecture aiming for efficiency, suggesting a lower parameter count than bulkier models. Given that it adopts a U-shaped encoder-decoder architecture with a feature embedding dimension of 32 and focuses on regional attention rather than full-scale attention, I estimate approximately 6 hours of training time on a standard dataset size. The datasets used are SRD (3088 images) and ISTD+ (1330 images), which are moderate sizes. 
A moderate learning-rate schedule and standard data augmentation suggest that training converges in a reasonable number of epochs, plausibly under 8 hours on a single GPU. Data augmentation and model simplicity support this timeframe. Based on similar architectures, and given that the reported FLOPs indicate modest computational demand, this model should be trainable on a single GPU, so 1 GPU is estimated to be sufficient. | yes | Yes | CV | Regional Attention for Shadow Removal | 2024-11-21 0:00:00 | https://github.com/CalcuLuUus/RASM | 1 | https://drive.usercontent.google.com/download?id=1I0qw-65KBA6np8vIZzO6oeiOvcDBttAY&export=download&authuser=0 | 6min 23 sec * 1000 = 4.4 days | https://colab.research.google.com/drive/1OqVyOBRCgHGl5p0_lPBeW1xMLYVuZys7?usp=sharing | Yes | -- I have included all the paths and commands in the Colab file. You can change the number of epochs to reduce the training time. |
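The wall-clock figure recorded in the verification field above ("6min 23 sec * 1000 = 4.4 days") reduces to simple arithmetic; a quick check, taking the per-epoch timing and the 1000-epoch count from that note as given:

```python
# Verify the estimate "6 min 23 s per epoch x 1000 epochs ~ 4.4 days".
seconds_per_epoch = 6 * 60 + 23          # 383 s measured for one epoch
total_seconds = seconds_per_epoch * 1000  # full training run
days = total_seconds / 86_400             # 86,400 seconds per day
# 383,000 s is about 4.43 days, consistent with the ~4.4-day figure.
```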
Training and validation dataset of capsule vision 2024 challenge. | BiomedCLIP+PubmedBERT | [] | A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT | 2024-10-25T00:00:00 | https://arxiv.org/abs/2410.19944v3 | [
"https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge"
] | {'Total Accuracy': '97.75'} | [
"Total Accuracy"
] | Given the following paper and codebase:
Paper: A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT
Codebase: https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge
Improve the BiomedCLIP+PubmedBERT model on the Training and validation dataset of the Capsule Vision 2024 Challenge. The result
should improve on the following metrics: {'Total Accuracy': '97.75'}. You must use only the codebase provided.
| A MULTIMODAL APPROACH FOR ENDOSCOPIC VCE IMAGE CLASSIFICATION USING BiomedCLIP-PubMedBERT A PREPRINT Dr. Nagarajan Ganapathy∗ Department of Biomedical Engineering Indian Institute of Technology Hyderabad Sangareddy, Hyderabad, India gnagarajan@bme.iith.ac.in Podakanti Satyajith Chary Department of Biomedical Engineering MedInfoLab, IIT Hyderabad Sangareddy, Hyderabad, India satyajithpodakanti@gmail.com Teja Venkata Ramana Kumar Pithani Department of Biomedical Engineering Indian Institute of Technology Hyderabad Sangareddy, Hyderabad, India pithani.tejavenkataramanakumar@gmail.com Pavan Kavati Department of Biomedical Engineering Indian Institute of Technology Hyderabad Sangareddy, Hyderabad, India bm23resch01001@iith.ac.in Arun Kumar S Department of Biomedical Engineering Indian Institute of Technology Hyderabad Sangareddy, Hyderabad, India bm24resch12001@iith.ac.in October 25, 2024 ABSTRACT This paper presents an advanced approach for fine-tuning BiomedCLIP-PubMedBERT, a multimodal model, to classify abnormalities in Video Capsule Endoscopy (VCE) frames, aiming to enhance diagnostic efficiency in gastrointestinal healthcare. By integrating the PubMedBERT language model with a Vision Transformer (ViT) to process endoscopic images, our method categorizes images into ten specific classes: angioectasia, bleeding, erosion, erythema, foreign body, lymphangiectasia, polyp, ulcer, worms, and normal. Our workflow incorporates image preprocessing and fine-tunes the BiomedCLIP model to generate high-quality embeddings for both visual and textual inputs, aligning them through similarity scoring for classification. Performance metrics, including classification accuracy, recall, and F1 score, indicate the model's strong ability to accurately identify abnormalities in endoscopic frames, showing promise for practical use in clinical diagnostics.
We are proud to share that our approach earned 2nd position in the Capsule Vision Challenge 2024, reflecting the robustness and applicability of our methodology in addressing real-world challenges in endoscopic diagnostics.
Keywords BiomedCLIP-PubMedBERT · OpenCLIP · VCE Video Frames
∗ https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge
1 Introduction
Video Capsule Endoscopy (VCE) is a widely utilized, non-invasive technique in gastrointestinal diagnostics, capturing thousands of images as it moves through the digestive tract. This method provides clinicians with valuable insights into gastrointestinal abnormalities, but the sheer volume of VCE frames poses a significant challenge for manual interpretation. The process is time-intensive, subject to human error, and requires specialized expertise, making it less efficient for clinical use at scale. Consequently, artificial intelligence (AI) methods have emerged as promising tools to automate abnormality detection, streamline diagnostic workflows, and reduce clinician workload. The accurate classification of gastrointestinal abnormalities within VCE images, however, presents unique challenges. These include handling high intra-class variability, dealing with imbalanced datasets, and ensuring that models generalize well across different patient populations and imaging conditions. Previous approaches have primarily relied on supervised learning models that require large, annotated datasets, which are often difficult to obtain in medical imaging due to privacy concerns and the specialized knowledge required for labeling. To address these challenges, this study employs a fine-tuning approach using BiomedCLIP—a multimodal vision-language model that combines the PubMedBERT language model with a Vision Transformer (ViT).
BiomedCLIP leverages the strengths of PubMedBERT’s domain-specific language processing and ViT’s ability to capture spatial features within images, enabling a powerful classification system for endoscopic images. By generating embeddings for both visual and text data, the model aligns these embeddings via similarity scoring, enabling the classification of VCE frames into ten abnormality categories: angioectasia, bleeding, erosion, erythema, foreign body, lymphangiectasia, polyp, ulcer, worms, and normal. This study aims to demonstrate the efficacy of fine-tuning BiomedCLIP on VCE images for accurate abnormality classification. We evaluate the model’s performance using key metrics such as accuracy, precision, recall, and F1 score. The findings contribute to the growing body of research on AI-driven diagnostics in gastrointestinal healthcare and underscore the potential of vision-language models to improve diagnostic efficiency and accuracy in clinical settings. 2 Dataset The dataset used in this study was provided by the Capsule Vision 2024 Challenge: Multi-Class Abnormality Classification for Video Capsule Endoscopy[2] , focusing on multi-class abnormality classification in Video Capsule Endoscopy (VCE) frames. This dataset includes labeled images across ten gastrointestinal categories: angioectasia, bleeding, erosion, erythema, foreign body, lymphangiectasia, polyp, ulcer, worms, and normal, providing a compre- hensive foundation for fine-tuning AI models to automatically classify these abnormalities. The dataset was curated from both publicly available and proprietary sources to represent diverse imaging conditions, enhancing the model’s generalizability. 2.1 Dataset Composition The dataset was developed using three publicly available VCE datasets—SEE-AI, KID, and Kvasir-Capsule, as well as one private dataset from AIIMS. 
The training dataset consists of 37,607 frames, while the validation dataset contains 16,132 frames, both mapped to the ten abnormality classes: angioectasia, bleeding, erosion, erythema, foreign body, lymphangiectasia, polyp, ulcer, worms, and normal.

Table 1: Dataset Composition
Source   Training Frames   Validation Frames
KID      376               165
Kvasir   26,511            11,581
SEE-AI   9,092             4,291
AIIMS    224               97
Total    37,607            16,132

The dataset is organized into training and validation directories, with each class (e.g., Angioectasia, Bleeding) stored in a corresponding subfolder. The images are named and stored according to their respective abnormality class. Each set is accompanied by a metadata file (training-data.xlsx and validation-data.xlsx) containing the image paths and corresponding labels. 2.2 Data Organization and Metadata Each abnormality class is represented as a subfolder within the training and validation directories, enabling systematic organization of the image data. For instance, images classified as "Angioectasia" are stored in a dedicated folder named after the class, making it easier for the model to associate images with corresponding labels. Additionally, metadata files (training-data.xlsx and validation-data.xlsx) accompany the dataset, listing each image's path and its label, which ensures smooth data retrieval during model training and validation. 2.3 Data Augmentation and Preprocessing To address class imbalances and enhance model robustness, various data augmentation techniques were applied, including rotation, flipping, and cropping. Each image was resized to 224x224 pixels, matching the input size requirements of the Vision Transformer (ViT) model. Furthermore, we utilized the preprocess function from the openclip package to transform the images into tensor format, standardizing pixel values and ensuring compatibility with the BiomedCLIP architecture.
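The pixel-standardization step performed by the `preprocess` transform can be approximated for a single pixel. The normalization constants below are the standard OpenAI-CLIP mean/std values and are an assumption here; in practice, the transform bundled with the open_clip model should be used:

```python
# Minimal sketch of the tensor-side preprocessing (resizing is handled by the
# image library in practice). CLIP_MEAN / CLIP_STD are the OpenAI-CLIP
# defaults -- an assumption, since BiomedCLIP ships its own preprocess.
CLIP_MEAN = (0.48145466, 0.4578275, 0.40821073)
CLIP_STD = (0.26862954, 0.26130258, 0.27577711)

def normalize_pixel(rgb):
    """Map one 8-bit RGB pixel to the normalized values a ViT consumes."""
    return tuple((c / 255.0 - m) / s
                 for c, m, s in zip(rgb, CLIP_MEAN, CLIP_STD))

x = normalize_pixel((128, 128, 128))  # mid-gray pixel
```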
3 Methodology This section describes the steps involved in developing and implementing a fine-tuned BiomedCLIP-PubMedBERT model to classify abnormalities in VCE images into ten categories. The methodology encompasses dataset preparation, model architecture, training process, and evaluation metrics. Each subsection provides a comprehensive view of the data flow and architecture used to achieve efficient classification in endoscopic frames. Figure 1: Pipeline of Project 3.1 Dataset Preparation The dataset is structured into two parts: a training set and a validation set. Each set contains subfolders corresponding to the ten categories of abnormalities: • Training Set: Contains labeled images within subfolders named after each of the ten categories. • Validation Set: Contains labeled images used for internal validation. Each image in the dataset corresponds to one of the ten abnormality classes. Additionally, an Excel file, training-data.xlsx and validation-data.xlsx, contains metadata about each image. The goal is to develop a model that can classify unseen test images into one of these ten categories. The following steps were used to preprocess the dataset: • Data loading: The images were loaded from the dataset folders using Python's os and PIL.Image libraries. A loop iterates over each class folder to build a list of image paths and corresponding labels. • Data Augmentation and Re-Sizing: Since medical imaging datasets are often imbalanced and small, augmentation techniques such as rotation, flipping, and cropping were employed to increase dataset variability. The images were resized to 224x224 pixels to match the input size of the transformer-based architecture of the model, which helps to reduce computational cost.
• Preprocessing: The preprocess function, provided by the openclip package, was used to transform the images into a tensor format compatible with the BiomedCLIP model. This function normalizes the pixel values and ensures that the input is in the correct shape for the model. 3.2 Model Architecture The architecture of our model is based on BiomedCLIP, a pre-trained vision-language model designed specifically for medical image classification tasks. For the Capsule Vision 2024 Challenge, we employed BiomedCLIP- PubMedBERT integrated with a Vision Transformer (ViT) for the task of abnormality classification in Video Capsule Endoscopy (VCE) images. Below, we provide a detailed explanation of each component of the model, along with the data flow, technical specifications, and how these elements work together to address the multi-class classification task. 3.2.1 Model Overview and Flow The model architecture is composed of two key components: • A Vision Transformer (ViT) , which processes the endoscopic images and extracts image features. • PubMedBERT, part of the BiomedCLIP framework, which generates embeddings from the text (i.e., abnormality class labels). In the abnormality classification pipeline, these components work together as follows: 1. Input Stage: • Image Input: VCE images from the dataset are preprocessed into a format compatible with the Vision Transformer. Each image is resized to 224x224 pixels, which is standard for ViT models. • Text Input: The text-based class labels (e.g., "Angioectasia," "Bleeding," etc.) are tokenized and transformed into embeddings using PubMedBERT. 2. Feature Extraction: • The Vision Transformer (ViT) processes the image and extracts a sequence of features. These features capture the important spatial details necessary for classifying the image. • Simultaneously, the PubMedBERT model processes the text labels, converting them into semantic embeddings. 
This step links the textual meaning of each abnormality class to its corresponding medical imagery. 3. Image-Text Matching: • BiomedCLIP then computes the similarity between the visual and textual embeddings. The model generates a similarity score for each class label, indicating how closely the extracted image features match the text embeddings. 4. Classification Output: • Finally, a softmax layer is applied to the similarity scores to generate probabilities for each of the ten abnormality classes. The class with the highest probability is selected as the predicted abnormality for that image. This fine-tuning approach allows the model to classify abnormalities without needing explicit labels for every possible abnormality in the training phase, a critical advantage for medical imaging tasks where fully labeled datasets are scarce. 3.2.2 Detailed Breakdown of Model Components Here, we explain each part of the architecture in more technical detail, specifically tailored for the task of abnormality classification in VCE images. 1. Vision Transformer (ViT) • Input: The ViT model accepts an input image of size 224x224x3 (height, width, and RGB channels). • Patch Embedding: The image is divided into 16x16 non-overlapping patches. Each patch is flattened into a single vector (16x16x3 = 768 values) and linearly projected to a 768-dimensional embedding. Given the 224x224 input, this results in 196 patches (14x14 grid), each represented as a 768-dimensional feature vector. • Position Embeddings: To retain spatial information, the model adds position embeddings to each patch embedding, maintaining information about each patch's location in the image, which is crucial for capturing spatial relationships between abnormality features. • Transformer Encoder: The patch embeddings are processed through multiple transformer layers.
Each layer consists of multi-head self-attention mechanisms and feed-forward networks, which allow the ViT to capture long-range dependencies across patches and understand global image structures. • Output: The ViT produces a sequence of 196 feature vectors, each of 768 dimensions, representing the encoded visual information across the entire image. A specific [CLS] token at the beginning of this sequence is used as a summary of the image’s features, which will be aligned with the textual embedding in the cross-modal matching stage. 2. BiomedCLIP-PubMedBERT for Text Embeddings • Input: PubMedBERT receives textual descriptions of each abnormality class (e.g., "This is an endo- scopic image of Angioectasia"), designed to generate contextualized embeddings reflective of medical terminology and concepts. • Tokenization: Text input is tokenized with a vocabulary specialized for biomedical terms, allowing recognition of domain-specific language. Each class description is tokenized up to a maximum sequence length of 256 tokens, with token embeddings initialized based on PubMedBERT’s pre-trained weights. • Contextual Embedding Layers: PubMedBERT uses a stack of 12 transformer encoder layers, each containing multi-head attention and feed-forward networks. These layers capture the semantic relation- ships between tokens, adapting BERT’s general-purpose embeddings to interpret biomedical context effectively. • Pooling and Output Representation: The [CLS] token, a special token added to the beginning of each text input, is used as a summary embedding for the entire sequence. After processing through all transformer layers, this [CLS] embedding becomes the final 768-dimensional text embedding representing each class label, capturing the biomedical semantics specific to the abnormality. 3. 
Cross-modal Similarity Matching • Embedding Alignment in a Shared Space: After generating image embeddings from ViT and text embeddings from PubMedBERT, BiomedCLIP projects both embeddings into a shared multimodal feature space. This alignment allows direct comparison between visual features and semantic representations of abnormality classes. • Similarity Calculation Using Dot Product: The model computes similarity scores between each image embedding and the ten class text embeddings using a dot product. The dot product quantifies the alignment between the visual features of the image and the semantic meaning of each class description, resulting in a logit score for each class. A Multimodal Approach for Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT 6 • Logit Scale Adjustment: A learnable scalar parameter, called the logit scale, is applied to the similarity scores. This scaling factor adjusts the logits, calibrating the model’s confidence in its predictions and helping prevent overly confident misclassifications. 4. Softmax Classification • Softmax Function for Probability Distribution: After similarity scores are computed for all classes, a softmax layer converts these scores into a probability distribution over the ten classes. Each probability represents the model’s confidence that the input image belongs to a particular class. • Class Prediction: The model selects the class with the highest probability as the final prediction for the image. This results in a 10-dimensional output vector, where each entry corresponds to the probability of the image belonging to each abnormality class. 3.2.3 Data Flow and Technical Specifications • Image Input Size: 224x224x3 (for the Vision Transformer). Text Input Size: Maximum of 256 tokens (for PubMedBERT). • Vision Transformer Output: A sequence of image features of size 196x768. • PubMedBERT Output: A semantic embedding of size 768 for each class label. 
• Similarity Calculation: Dot product between image and text embeddings, outputting a similarity score for each class. • Final Output: A 10-dimensional probability vector representing the likelihood of each class, followed by the predicted class label. Figure 2: Block Diagram of the Developed Pipeline. 3.2.4 Strengths of the Model for Abnormality Classification • Fine-Tuning Capability: The model’s ability to classify abnormalities without requiring large amounts of labeled training data makes it highly efficient in medical scenarios where labeled datasets are limited. • Vision-Text Alignment: By aligning image features with textual medical descriptions, BiomedCLIP leverages the power of large-scale pre-training, enabling it to generalize across unseen abnormalities. • Scalability: The use of transformers for both vision and language tasks makes the model highly scalable. It can be fine-tuned or adapted for other medical imaging classification tasks with minimal changes. 4 Training Process Although BiomedCLIP is a pre-trained model, additional fine-tuning was performed using the labeled training dataset. The steps involved in training are as follows: A Multimodal Approach for Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT 7 • Batch Processing: Images were processed in mini-batches of 32, allowing efficient utilization of GPU memory. For each batch, VCE frames were fed through the Vision Transformer, producing a set of image embeddings. Simultaneously, text embeddings for the ten classes were loaded and matched with each image in the batch. • Similarity Computation: For each image in the batch, similarity scores were calculated between the image embedding and each of the ten class embeddings. This process produced a set of logits, representing the model’s confidence in assigning the image to each class. • Backpropagation: Using the computed cross-entropy loss, gradients were backpropagated through the model’s layers. 
The optimizer then adjusted the weights of the ViT and PubMedBERT components, fine-tuning them to improve class alignment. • Performance Tracking: During training, accuracy and loss metrics were monitored at the end of each epoch. By evaluating these metrics on both training and validation sets, we tracked the model’s convergence and prevented overfitting. • Epochs and Early Stopping: Training was conducted over 30 epochs with early stopping criteria. If validation loss plateaued over multiple epochs, training was halted to avoid overfitting while preserving model generalizability. 5 Results This section details the results obtained by fine-tuning the classification model built on the BiomedCLIP-PubMedBERT architecture on the Capsule Vision 2024 Challenge dataset. We evaluate the model’s performance using multiple metrics, including accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). The classification results are generated for each of the ten abnormality classes. 5.1 Model Performance Metrics The classification performance of the model is evaluated on the validation dataset, which contains labeled images from the same ten abnormality classes as the training set. Below are the key evaluation metrics used to assess the model: • Accuracy: Overall classification accuracy, i.e., the percentage of correct predictions. • Precision: Measures the proportion of true positive predictions out of all positive predictions for each class. • Recall (Sensitivity): Indicates the model’s ability to correctly identify true positives out of all actual positive instances. • F1 Score: The harmonic mean of precision and recall, providing a balanced measure when both metrics are considered important. • ROC AUC: The area under the ROC curve, indicating how well the model distinguishes between classes across various decision thresholds. 
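The metric definitions above can be made concrete with a minimal one-vs-rest implementation; the helper name `precision_recall_f1` and the toy labels are illustrative, not taken from the codebase.

```python
def precision_recall_f1(y_true, y_pred, positive_class):
    """One-vs-rest precision, recall, and F1 for a single class,
    matching the definitions listed above."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive_class and p == positive_class)
    fp = sum(1 for t, p in pairs if t != positive_class and p == positive_class)
    fn = sum(1 for t, p in pairs if t == positive_class and p != positive_class)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels over 3 classes; class 1 is treated as the positive class.
p, r, f1 = precision_recall_f1([0, 1, 1, 2, 1], [0, 1, 2, 2, 1], positive_class=1)
# tp=2, fp=0, fn=1 -> precision 1.0, recall 2/3, F1 0.8
```

Averaging these one-vs-rest values over the ten classes yields the macro scores reported in the following subsections.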
5.2 Training and Validation Performance Analysis
Training Results:
Epoch 1: Train Loss: 0.3927 | Train Accuracy: 87.67%
Epoch 2: Train Loss: 0.2061 | Train Accuracy: 93.26%
Epoch 3: Train Loss: 0.1108 | Train Accuracy: 96.17%
Validation Results:
Epoch 1: Val Loss: 0.2621 | Val Accuracy: 91.46%
Epoch 2: Val Loss: 0.2369 | Val Accuracy: 92.36%
Epoch 3: Val Loss: 0.1822 | Val Accuracy: 94.04%
The final model achieved a Training Accuracy of 97.75% and a Validation Accuracy of 94.04% by the third epoch, indicating a robust training process with minimal overfitting. The model demonstrates strong performance in distinguishing between positive and negative cases across classes, as evidenced by the high AUC values observed in the ROC curves for both training and validation sets.
• The training ROC curves for each of the ten classes (Angioectasia, Bleeding, Erosion, Erythema, Foreign Body, Lymphangiectasia, Normal, Polyp, Ulcer, and Worms) show AUC values of 1.00, indicating perfect discriminatory ability during training.
• The validation ROC curves show slight performance degradation compared to the training curves but still maintain high AUC values, ranging from 0.99 to 1.00.
• This minor performance difference between training and validation suggests good generalization ability of the model. 
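The early-stopping policy from the training process above can be sketched framework-agnostically. Here `train_epoch` and `validate` are hypothetical caller-supplied callables (e.g., wrapping the actual ViT/PubMedBERT update step), and the patience value is an assumption — the text only says training halts when validation loss plateaus over multiple epochs.

```python
def train_with_early_stopping(train_epoch, validate, max_epochs=30, patience=3):
    """Run up to max_epochs; stop when validation loss has not meaningfully
    improved for `patience` consecutive epochs, mirroring the setup above."""
    best_val, wait, history = float("inf"), 0, []
    for epoch in range(max_epochs):
        train_loss = train_epoch()
        val_loss = validate()
        history.append((epoch, train_loss, val_loss))
        if val_loss < best_val - 1e-4:   # meaningful improvement resets patience
            best_val, wait = val_loss, 0
        else:
            wait += 1
            if wait >= patience:         # plateau -> early stop
                break
    return best_val, history

# Simulated validation losses that improve and then plateau.
vals = iter([0.26, 0.24, 0.18, 0.181, 0.182, 0.18, 0.18])
best, hist = train_with_early_stopping(lambda: 0.1, lambda: next(vals), patience=3)
```

With the simulated losses, training stops after the sixth epoch because the last three epochs show no improvement over the best value of 0.18.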
5.3 Class-Specific Performance
The model's performance varied across classes, with strong results in certain areas and challenges in others:
Training Set Metrics:
Balanced Accuracy: 0.9388
Mean AUC: 0.9990
Mean Average Precision (MAP): 0.9774
Mean F1 Score: 0.9389
Validation Set Metrics:
Balanced Accuracy: 0.8464
Mean AUC: 0.9940
Mean Average Precision (MAP): 0.9093
Mean F1 Score: 0.8539
Challenging Classes:
Polyp Detection: Lower AUC-ROC at 0.17, indicating difficulties in accurate detection.
Worms: AUC-ROC at 0.23, reflecting limited detection capability.
Bleeding: Moderate performance with an AUC-ROC of 0.25.
5.4 ROC Curves for Training and Validation
To assess the model's ability to distinguish between classes at various thresholds, ROC curves for both the training and validation sets were analyzed:
Training ROC Curve:
Validation ROC Curve:
The ROC curves demonstrate high AUC values, showing that the model effectively distinguishes between positive and negative cases across classes, with minor performance degradation from training to validation, suggesting good generalization.
5.5 Confusion Matrices
The confusion matrices for training and validation further illustrate the model's performance by visualizing misclassifications:
Training Confusion Matrix:
Validation Confusion Matrix:
These matrices highlight areas where the model excelled (e.g., "Foreign Body") and struggled (e.g., "Erosion" vs. "Ulcer"), revealing class-specific performance variations. Confusion matrices provide a visual representation of the model's performance by highlighting correct predictions along the diagonal and misclassifications as off-diagonal entries. 
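The diagonal/off-diagonal structure just described, and the balanced accuracy reported above, can be reproduced with a small NumPy sketch; the class counts are toy values, not the paper's, and mirror the Erosion/Ulcer style of confusion only schematically.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns predicted classes; correct predictions
    accumulate on the diagonal, misclassifications off-diagonal."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def balanced_accuracy(cm):
    """Mean per-class recall: the diagonal divided by each row's total."""
    recalls = cm.diagonal() / cm.sum(axis=1)
    return float(recalls.mean())

# Toy 3-class example: one class-1 sample is confused with class 2.
cm = confusion_matrix([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2], n_classes=3)
ba = balanced_accuracy(cm)   # per-class recalls 1.0, 0.5, 1.0 -> mean 2.5/3
```

Because balanced accuracy averages per-class recall, it penalizes the confused minority class even when overall accuracy looks high — which is why the validation balanced accuracy (0.8464) sits well below the raw validation accuracy.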
The training confusion matrix shows that the model correctly classified a significant number of instances for the "Normal" class (28498) and the "Erosion" class (2463). However, there are also some misclassifications, particularly between the "Erosion" and "Ulcer" classes. Similarly, the validation confusion matrix shows that the model performed well in classifying "Normal" instances (12088) and "Erosion" (884). The validation confusion matrix also shows some misclassifications, particularly between "Erosion" and "Ulcer."
5.6 Per-Class Metrics
To gain more insight into class-specific performance, AUC, average precision, and F1 scores for each class were calculated on both the training and validation sets. These visualizations indicate high performance for most classes, with slight drops in challenging classes such as "Erosion" and "Ulcer" on validation data.
Training Per-Class Metrics:
Validation Per-Class Metrics:
Class-Specific Training AUC-ROC and F1 Scores:
Angioectasia: AUC-ROC: 0.9984 | F1 Score: 0.9298
Bleeding: AUC-ROC: 0.9997 | F1 Score: 0.9298
Erosion: AUC-ROC: 0.9978 | F1 Score: 0.9115
Erythema: AUC-ROC: 0.9972 | F1 Score: 0.8101
Foreign Body: AUC-ROC: 0.9998 | F1 Score: 0.9612
Lymphangiectasia: AUC-ROC: 0.9992 | F1 Score: 0.9637
Normal: AUC-ROC: 0.9996 | F1 Score: 0.9948
Polyp: AUC-ROC: 0.9986 | F1 Score: 0.8998
Ulcer: AUC-ROC: 1.0000 | F1 Score: 0.9992
Worms: AUC-ROC: 1.0000 | F1 Score: 1.0000
Examining per-class metrics like AUC, average precision, and F1 scores provides a more granular understanding of the model's performance for each class. For instance, "Foreign Body," "Lymphangiectasia," "Normal," "Ulcer," and "Worms" achieve high AUC-ROC and F1 scores in both training and validation settings. 
Classes like "Erosion" and "Erythema" exhibit slightly lower performance on validation data compared to training, indicating potential challenges in accurately classifying these specific classes.
5.7 Precision-Recall Curves
The precision-recall curves for the training and validation sets provide additional information on the trade-offs between precision and recall across different thresholds:
Training Precision-Recall Curve:
Validation Precision-Recall Curve:
These curves show a good balance for most classes, though some challenging classes may need further threshold tuning to improve recall without sacrificing precision. The precision-recall curves visualize the trade-off between precision (the proportion of correctly predicted positive instances out of all instances predicted as positive) and recall (the proportion of correctly predicted positive instances out of all actual positive instances) across different thresholds. The training precision-recall curves illustrate high average precision (AP) values for most classes, with values ranging from 0.97 for Angioectasia to 1.00 for Ulcer and Worms. The validation precision-recall curves show slightly lower AP values, particularly for "Erythema" (0.71), "Foreign Body" (0.96), and "Polyp" (0.77). These curves suggest that while the model generally achieves good performance, some classes, such as "Erythema" and "Polyp", may benefit from further threshold tuning to improve recall without significantly sacrificing precision.
6 Discussion
The BiomedCLIP-PubMedBERT model demonstrated strong performance in fine-tuning for multi-class classification of abnormalities in Video Capsule Endoscopy (VCE) images. This section discusses the model’s key strengths, limitations, and potential clinical implications. 
6.1 Key Findings
The performance analysis provides valuable insights:
Classification Challenges: Varying AUC-ROC scores across abnormality types indicate the challenge of handling class imbalance. High specificity (0.90) suggests effective identification of negative cases, but the model faced challenges in sensitivity, particularly in detecting positive instances for some classes.
Performance Stability: Consistent specificity and stable balanced accuracy around 0.94 suggest that while the model reliably identifies negatives, it may benefit from further optimization to improve overall classification accuracy.
6.2 Clinical Implications
This model could significantly impact diagnostic efficiency in gastrointestinal health:
Increased Diagnostic Speed: Automating the classification of VCE frames allows clinicians to focus on images flagged as abnormal, improving diagnostic efficiency.
Reduced Human Error: The AI model’s consistent pattern recognition reduces the likelihood of oversight, a common issue in manual review.
Vendor Independence: The model's adaptability to different datasets and imaging conditions supports integration across clinical workflows.
6.3 Limitations and Challenges
Despite promising results, some challenges remain:
Data Imbalance: Underrepresented classes, such as "Lymphangiectasia" and "Foreign Body," affected model performance. Although augmentation helped, limited diversity in these classes restricted the model's generalization ability.
Visual Similarities Between Classes: Visually similar classes, such as "Erosion" and "Erythema," challenged the model, underscoring the complexities of medical image classification.
Dependence on Pre-trained Text Embeddings: PubMedBERT embeddings, while domain-specific, may lack nuanced clinical context for rarer conditions. Future advancements in medical language models could address this limitation. 
6.4 Future Directions
Incorporating Temporal Features: Given VCE’s sequential nature, integrating temporal features (e.g., via RNNs or spatio-temporal transformers) could enhance the model’s ability to differentiate similar abnormalities.
Active Learning: Implementing active learning to annotate uncertain cases would improve low-confidence regions, minimizing the need for extensive manual labeling.
Interpretability Enhancements: Exploring interpretable AI methods, such as attention-based mechanisms or visual heatmaps, could enhance transparency and clinician trust by helping clinicians understand how the model arrived at a specific diagnosis.
Expansion to Other Modalities: This model shows potential for adaptation to other medical imaging domains (e.g., CT, MRI, X-ray), widening its applicability with minimal retraining.
Cross-Attention Mechanism: Incorporating a cross-attention mechanism during similarity matching could further improve accuracy, since attention helps the model retain learned knowledge across longer dependencies.
6.5 Ethical and Regulatory Considerations
When applying AI models in medical diagnostics, it is essential to consider the ethical and regulatory implications. Any clinical application of the model would need to undergo thorough validation and regulatory approval to ensure its safety and effectiveness. Additionally, measures must be taken to ensure that patient data used in training and evaluation is handled securely and complies with relevant privacy regulations (e.g., HIPAA). In clinical practice, AI models should function as decision-support systems rather than replacing human expertise. 
The goal is to enhance clinician productivity and reduce diagnostic errors, while ensuring that the final decisions remain in the hands of trained medical professionals.
7 Conclusion
The goal of this research was to develop an automated, fine-tuned classification model for detecting abnormalities in Video Capsule Endoscopy (VCE) frames using the BiomedCLIP-PubMedBERT framework. The study aimed to address the challenges of manual image interpretation in gastrointestinal diagnostics, which is both time-consuming and prone to error. By leveraging pre-trained vision and language models, we successfully created a system capable of classifying ten distinct abnormality categories: Angioectasia, Bleeding, Erosion, Erythema, Foreign Body, Lymphangiectasia, Polyp, Ulcer, Worms, and Normal.
7.1 Key Contributions
The primary contributions of this study are:
Fine-Tuning Application: We demonstrated the effective use of BiomedCLIP-PubMedBERT for fine-tuning in medical image classification. By aligning image features with text embeddings, the model could classify abnormalities without needing large annotated datasets.
High Accuracy and Generalization: The model achieved high classification accuracy, precision, recall, and F1 scores across most abnormality classes. Its ability to generalize across unseen validation images is especially important in clinical settings where labeled data may be limited.
Integration of Vision and Language Models: The integration of a Vision Transformer (ViT) with PubMedBERT allowed the model to capture both visual and semantic information, resulting in improved classification performance. The model’s success in mapping images to text-based class labels demonstrates the potential of multimodal models for medical diagnostics.
Scalability and Vendor Independence: The model is scalable and adaptable to various clinical settings, thanks to its pre-trained nature and fine-tuning capabilities. 
It can work across different datasets and equipment without requiring extensive retraining.
7.2 Final Remarks
The development of AI-based models for medical image classification is a rapidly growing field, with the potential to revolutionize diagnostic workflows. This study contributes to that effort by introducing a novel fine-tuning approach for abnormality classification in endoscopic images. By reducing the need for extensive manual labeling, our model holds promise for improving diagnostic efficiency, reducing human error, and ensuring more timely and accurate medical diagnoses. As the healthcare field continues to embrace AI-driven tools, models like BiomedCLIP-PubMedBERT will play an increasingly important role in enhancing the capabilities of clinicians and improving patient outcomes. With further refinement and validation, such models could become an integral part of future medical imaging systems.
8 Acknowledgments
As participants in the Capsule Vision 2024 Challenge, we fully comply with the competition’s rules as outlined in [1]. Our AI model development is based exclusively on the datasets provided in the official release in [2]. We also acknowledge the developers of the BiomedCLIP and PubMedBERT models for their contributions to the open-source community, enabling us to leverage these state-of-the-art tools for medical image classification. Additionally, we would like to thank our colleagues and collaborators who provided insightful feedback during the development and evaluation stages of the project. Their input was invaluable in refining the approach and improving the overall performance of the model.
References
[1] Palak Handa, Amirreza Mahbod, Florian Schwarzhans, Ramona Woitek, Nidhi Goel, Deepti Chhabra, Shreshtha Jha, Manas Dhir, Deepak Gunjan, Jagadeesh Kakarla, et al. 
Capsule vision 2024 challenge: Multi-class abnormality classification for video capsule endoscopy. arXiv preprint arXiv:2408.04940, 2024. [2] Palak Handa, Amirreza Mahbod, Florian Schwarzhans, Ramona Woitek, Nidhi Goel, Deepti Chhabra, Shreshtha Jha, Manas Dhir, Deepak Gunjan, Jagadeesh Kakarla, and Balasubramanian Raman. Training and Validation Dataset of Capsule Vision 2024 Challenge. Figshare, July 2024. [3] Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Omid Dabeer. WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19606–19616, 2023. [4] Jiaxiang Liu, Tianxiang Hu, Yan Zhang, Xiaotang Gai, Yang Feng, and Zuozhu Liu. A ChatGPT Aided Explainable Framework for Zero-Shot Medical Image Diagnosis. ArXiv, 2023. [5] Debdoot Mahapatra, Babak Bozorgtabar, and Zongyuan Ge. Medical Image Classification Using Generalized Zero Shot Learning. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pages 3337–3346, 2021. [6] Sheng Zhang, Yanbo Xu, Naoto Usuyama, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Matthew Lungren, Tristan Naumann, and Hoifung Poon. Large-scale domain-specific pretraining for biomedical vision-language processing, 2023. [7] Qian Zhao, Wenming Yang, and Q. Liao. AFA-RN: An Abnormal Feature Attention Relation Network for Multi-class Disease Classification in gastrointestinal endoscopic images. 2021 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), pages 1–4, 2021. [8] Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, and Jiming Chen. AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection. ArXiv, 2023. | 6 | 1 | The BiomedCLIP model utilizes a Vision Transformer (ViT) and PubMedBERT, both known for their large parameter counts. 
Given the complexity of multimodal tasks (vision and language), a standard transformer model could have around 300 million parameters. Training with 37,607 frames and a batch size of 32 results in about 1,176 iterations per epoch (total of around 30 epochs for training). Even on a single GPU (such as an NVIDIA V100 or similar), training should complete within 6 hours under optimal conditions, with efficiency measures (like early stopping) in place. Data augmentation may slow training somewhat, but it would not exceed 8 hours on a single-GPU setup suitable for this task. | yes | Yes | Multimodal | A Multimodal Approach For Endoscopic VCE Image Classification Using BiomedCLIP-PubMedBERT | 2024-10-25 0:00:00 | https://github.com/Satyajithchary/MedInfoLab_Capsule_Vision_2024_Challenge | 1 | https://github.com/misahub2023/Capsule-Vision-2024-Challenge. | 1 hr × 3 epochs = 3 hours | https://colab.research.google.com/drive/19Y7kge6PwOugIf_jdkhXoxjUYkU3iqSG?usp=sharing | Yes | -- The dataset is downloaded using the script provided on GitHub. Then the dataset path in the Colab file needs to be updated. Download the medinfolab-capsule-vision-2024-challenge.ipynb file from the repo, or just run the Colab file linked here to run the code. |
Electricity (192) | CycleNet | [] | CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns | 2024-09-27T00:00:00 | https://arxiv.org/abs/2409.18479v2 | [
"https://github.com/ACAT-SCUT/CycleNet"
] | {'MSE': '0.144', 'MAE': '0.237'} | [
"MSE",
"MAE"
] | Given the following paper and codebase:
Paper: CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns
Codebase: https://github.com/ACAT-SCUT/CycleNet
Improve the CycleNet model on the Electricity (192) dataset. The result
should improve on the following metrics: {'MSE': '0.144', 'MAE': '0.237'}. You must use only the codebase provided.
| CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns Shengsheng Lin1, Weiwei Lin1,2,∗, Xinyi Hu3, Wentai Wu4, Ruichao Mo1, Haocheng Zhong1 1School of Computer Science and Engineering, South China University of Technology, China 2Pengcheng Laboratory, China 3Department of Computer Science and Engineering, The Chinese University of Hong Kong 4College of Information Science and Technology, Jinan University, China cslinshengsheng@mail.scut.edu.cn, linww@scut.edu.cn, xyhu@cse.cuhk.edu.hk, wentaiwu@jnu.edu.cn, {cs_moruichao, cshczhong}@mail.scut.edu.cn Abstract The stable periodic patterns present in time series data serve as the foundation for conducting long-horizon forecasts. In this paper, we pioneer the exploration of explicitly modeling this periodicity to enhance the performance of models in long-term time series forecasting (LTSF) tasks. Specifically, we introduce the Residual Cycle Forecasting (RCF) technique, which utilizes learnable recurrent cycles to model the inherent periodic patterns within sequences, and then performs predictions on the residual components of the modeled cycles. Combining RCF with a Linear layer or a shallow MLP forms the simple yet powerful method proposed in this paper, called CycleNet. CycleNet achieves state-of-the-art prediction accuracy in multiple domains including electricity, weather, and energy, while offering significant efficiency advantages by reducing over 90% of the required parameter quantity. Furthermore, as a novel plug-and-play technique, the RCF can also significantly improve the prediction accuracy of existing models, including PatchTST and iTransformer. The source code is available at: https://github.com/ACAT-SCUT/CycleNet. 1 Introduction Time series forecasting (TSF) plays a crucial role in various domains such as weather forecasting, transportation, and energy management, providing insights for early warnings and facilitating proactive planning. 
Particularly, accurate predictions over long horizons (e.g., spanning several days or months) offer increased convenience, referred to as Long-term Time Series Forecasting (LTSF) [59, 56, 17, 42, 6]. However, the principle enabling long-horizon prediction lies in understanding the inherent periodicity within the data [32]. Unlike short-term forecasting, long-term predictions cannot rely solely on recent temporal information (including means, trends, etc.). For instance, a user's electricity consumption thirty days ahead correlates not only with their consumption patterns in the past few days but also with longer-term periodic behavior. In such cases, long-term dependencies, or in other words, underlying stable periodicity within the data, serve as the practical foundation for conducting long-term predictions [32]. This is why existing models emphasize their capability to extract features with long-term dependencies. Models like Informer [59], Autoformer [51], and PatchTST [40] utilize the Transformer's ability for long-distance modeling to address LTSF tasks. ModernTCN [38] employs large convolutional kernels to enhance TCNs' ability to capture long-range dependencies, and SegRNN [31] uses segment-wise iterations to improve RNN methods' handling of long sequences. If a model can accurately capture long-range dependencies, it can precisely extract periodic patterns from historical long sequences, enabling more accurate long-horizon predictions. However, if the purpose of constructing deep and complex models is solely to better extract periodic features from long-range dependencies, why not directly model the patterns? As illustrated in Figure 1, electricity data exhibits clear daily periodic patterns (in addition to possible weekly patterns). We can use a globally shared daily segment to represent the periodic pattern in electricity consumption. ∗Corresponding author. 38th Conference on Neural Information Processing Systems (NeurIPS 2024). arXiv:2409.18479v2 [cs.LG] 15 Oct 2024. 
By repeating this daily segment N times, we can continuously represent the cyclic components of N days' electricity consumption sequences. Figure 1: Shared daily periodic patterns present in the Electricity dataset. Based on the above motivation, we pioneer explicit modeling of periodic patterns in the data to enhance the model's performance on LTSF tasks in this paper. Specifically, we propose the Residual Cycle Forecasting (RCF) technique. It involves using learnable recurrent cycles to explicitly model the inherent periodic patterns within time series data, followed by predicting the residual components of the modeled cycles. Combining the RCF technique with either a single-layer Linear or a dual-layer MLP results in CycleNet, a simple yet powerful method. CycleNet achieves consistent state-of-the-art performance across multiple domains and offers significant efficiency advantages. In summary, this paper contributes: • We identify the presence of shared periodic patterns in long-horizon forecasting domains and propose explicit modeling of these patterns to enhance the model's performance on LTSF tasks. • Technically, we introduce the RCF technique, which utilizes learnable recurrent cycles to explicitly model the inherent periodic patterns within time series data, followed by predicting the residual components of the modeled cycles. The RCF technique significantly enhances the performance of basic (or existing) models. • Applying RCF with a Linear layer or a shallow MLP forms the proposed simple yet powerful method, called CycleNet. CycleNet achieves consistent state-of-the-art performance across multiple domains and offers significant efficiency advantages. 2 Related work In fact, utilizing periodic information to enhance model prediction accuracy is not a novel concept. 
Numerous studies, in particular, have introduced a series of Seasonal-Trend Decomposition (STD) techniques that allow models to better leverage periodic information. Popular models such as Autoformer [51], FEDformer [60], and DLinear [56] utilize the classical STD approach to decompose the original time series into two equally sized subsequences: seasonal and trend components, which are then modeled independently. These classical STD methods typically use a basic moving average (MOV) kernel to perform a sliding aggregation to obtain the trend component. Recently, Leddam [55] proposed replacing the traditional MOV kernel in STD with a Learnable Decomposition (LD) kernel, leading to improved performance. Additionally, DEPTS [8] treats the periodicity of sequences as a parameterized function with respect to time, and learns periodic and residual components layer-wise through its periodic and local blocks. SparseTSF [32], another recent work, utilizes a cross-period sparse forecasting technique to decouple cycles and trends, achieving impressive performance at extremely low cost. The RCF technique proposed in this paper can essentially be considered a type of STD method. The key difference from existing techniques lies in its explicit modeling of global periodic patterns within independent sequences using learnable recurrent cycles. The proposed RCF technique is conceptually simple, computationally efficient, and yields significant improvements in prediction accuracy. The further proposed CycleNet, which combines the RCF technique with a simple backbone, is a Linear- or MLP-based model that is simple, efficient, and powerful for time series forecasting. To correctly position CycleNet, we have provided a detailed review of the development of different categories of time series forecasting methods (including Transformer-based, RNN-based, etc.) in Appendix A. 
3 CycleNet Given a time series $X$ with $D$ variables or channels, the objective of time series forecasting is to predict future horizons $H$ steps ahead based on past $L$ observations, mathematically represented as $f: x_{t-L+1:t} \in \mathbb{R}^{L \times D} \to \bar{x}_{t+1:t+H} \in \mathbb{R}^{H \times D}$. In fact, the inherent periodicity within time series is fundamental for accurate prediction, particularly when forecasting over large horizons, such as 96-720 steps (corresponding to several days or months). To enhance the model's performance on long-term prediction tasks, we propose the Residual Cycle Forecasting (RCF) technique. It combines a Linear layer or a shallow MLP to form a simple yet powerful method, CycleNet, as illustrated in Figure 2, with detailed pseudocode provided in Appendix B.1. Figure 2: CycleNet architecture. CycleNet/Linear and CycleNet/MLP represent using a single-layer Linear model and a dual-layer MLP model, respectively, as the backbone of CycleNet. Here, $D = 3$. 3.1 Residual cycle forecasting The RCF technique comprises two steps: the first step involves modeling the periodic patterns of sequences through learnable recurrent cycles within independent channels, and the second step entails predicting the residual components of the modeled cycles. Periodic patterns modeling Given $D$ channels with a priori cycle length $W$, we first generate learnable recurrent cycles $Q \in \mathbb{R}^{W \times D}$, all initialized to zeros. These recurrent cycles are globally shared within channels, meaning that by performing cyclic replications, we can obtain cyclic components $C$ of the sequence $X$ of the same length. 
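For a single channel, the cyclic replication just described — the roll-and-tile alignment detailed in the next subsection — can be sketched with NumPy. The helper name `cycle_component` is illustrative (not from the CycleNet codebase), and the 1-D `Q` is a single-channel simplification of the paper's $Q \in \mathbb{R}^{W \times D}$.

```python
import numpy as np

def cycle_component(Q, t, length):
    """Build the virtual cyclic sub-sequence for the `length` positions
    after index t: left-shift (roll) the learned cycle Q of length W by
    t mod W, then tile it and truncate to the requested length."""
    W = Q.shape[0]
    rolled = np.roll(Q, -(t % W))          # align: left-shift by t mod W
    reps = -(-length // W)                 # ceil(length / W)
    return np.tile(rolled, reps)[:length]  # repeat and truncate

Q = np.arange(4.0)                  # W = 4, toy learned cycle [0, 1, 2, 3]
c = cycle_component(Q, t=6, length=6)
# t mod W = 2 -> rolled = [2, 3, 0, 1]; tiled/truncated -> [2, 3, 0, 1, 2, 3]
```

Calling the same helper with `t` and `t + L` yields the input-window and horizon-window cyclic components, so neither sub-sequence ever needs to be materialized from an infinite virtual series.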
These recurrent cycles Q of length W are trained via gradient backpropagation along with the backbone module for prediction, yielding learned representations (distinct from their zero initialization) that unveil the internal cyclic patterns within the sequence. Here, the cycle length W depends on a priori characteristics of the dataset and should be set to the maximum stable cycle within the dataset. Since scenarios requiring long-term prediction usually exhibit prominent, explicit cycles (e.g., electricity consumption and traffic data show clear daily and weekly cycles), determining the specific cycle length is straightforward. Additionally, the dataset's cycles can be further examined through autocorrelation functions (ACF) [39], as shown in Appendix B.2.

Residual forecasting  Predictions are made on the residual components that remain after the modeled cycles are removed, termed residual forecasting, as follows:

1. Remove the cyclic components c_{t-L+1:t} from the original input x_{t-L+1:t} to obtain the residual components x'_{t-L+1:t}.
2. Pass x'_{t-L+1:t} through the backbone to obtain predictions of the residual components, x̄'_{t+1:t+H}.
3. Add the predicted residual components x̄'_{t+1:t+H} to the cyclic components c_{t+1:t+H} to obtain the final prediction x̄_{t+1:t+H}.

It is important to note that, since the cyclic components C are virtual sequences derived from cyclic replications of Q, we cannot directly obtain the sub-sequences c_{t-L+1:t} and c_{t+1:t+H}. Therefore, as illustrated in Figure 3, appropriate alignments and repetitions of the recurrent cycles Q are needed to obtain the equivalent sub-sequences:

(i) Left-shift Q by t mod W positions to obtain Q^(t). Here, t mod W can be viewed as the relative positional index of the current sequence sample within Q.
(ii) Repeat Q^(t) ⌊L/W⌋ times and concatenate with Q^(t)_{0:L mod W}.

Mathematically, these two equivalent sub-sequences can be represented as:

    c_{t-L+1:t} = [Q^(t), ..., Q^(t) (⌊L/W⌋ repetitions), Q^(t)_{0:L mod W}],          (1)

    c_{t+1:t+H} = [Q^(t+L), ..., Q^(t+L) (⌊H/W⌋ repetitions), Q^(t+L)_{0:H mod W}].    (2)

Figure 3: Alignments and repetitions of the recurrent cycles Q (align/roll by t mod W, then repeat and concatenate). Here, D = 1.

Backbone  The original prediction task is transformed into modeling the cyclic residual components, which amounts to ordinary sequence modeling. Therefore, any existing time series forecasting model can be employed as the backbone. In this paper, our aim is to propose and examine a method for enhancing time series prediction by explicitly modeling cycles (i.e., RCF). Thus, we opt for the most basic backbones, namely a single-layer Linear and a dual-layer MLP, forming our simple yet powerful methods CycleNet/Linear and CycleNet/MLP. Herein, each channel is modeled by the same backbone with shared parameters, which is also referred to as the Channel Independent strategy [13].

3.2 Instance normalization

The statistical properties of time series data, such as the mean, often vary over time, which is referred to as distribution shift. This can lead to poor performance when models trained on historical data are applied to future data. To address this issue, recent research has introduced instance normalization strategies such as RevIN [45, 22, 26]. Mainstream approaches such as iTransformer [37], PatchTST [40], and SparseTSF [32] have widely adopted similar techniques to enhance performance. To improve the robustness of CycleNet, we also incorporate a similar optional strategy (see the full ablation study in Appendix C.4). Specifically, we normalize the input before it enters CycleNet and restore the statistics at its output:

    x_{t-L+1:t} = (x_{t-L+1:t} - µ) / √(σ + ϵ),        (3)

    x̄_{t+1:t+H} = x̄_{t+1:t+H} × √(σ + ϵ) + µ,        (4)

where µ and σ represent the mean and standard deviation of the input window, respectively, and ϵ is a small constant for numerical stability.
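The roll-and-repeat alignment of Eq. (1)-(2) and the three residual-forecasting steps of Section 3.1 can be sketched in a few lines of NumPy. This is an illustration, not the released implementation: `align_cycle`, the zero backbone, and the convention that `t` indexes the start of the input window are assumptions of this sketch.

```python
import numpy as np

def align_cycle(Q, t, n):
    """Equivalent cyclic sub-sequence of length n starting at time index t:
    (i) left-shift Q by t mod W (align/roll), then (ii) repeat floor(n/W)
    times and append the first n mod W entries (repeat and concat)."""
    W = Q.shape[-1]
    Qt = np.roll(Q, -(t % W))                                  # step (i): align
    return np.concatenate([np.tile(Qt, n // W), Qt[: n % W]])  # step (ii): repeat

# Toy demo with a known cycle (D = 1, W = 4) and a purely periodic signal
Q = np.array([0., 1., 2., 3.])
x = np.tile(Q, 10)                              # x[i] = Q[i % 4]
t, L, H = 6, 6, 5                               # input window x[t:t+L], horizon H

x_in = x[t:t + L]
resid = x_in - align_cycle(Q, t, L)             # 1. remove cyclic components
resid_pred = np.zeros(H)                        # 2. backbone forecast (zero here)
x_pred = resid_pred + align_cycle(Q, t + L, H)  # 3. add cyclic components back
```

With Q equal to the true cycle, the residual is identically zero and `x_pred` reproduces `x[t+L:t+L+H]` exactly; in CycleNet, Q is instead initialized to zeros and learned jointly with the backbone.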
This method aligns with the RevIN variant that excludes learnable affine parameters [22].

3.3 Loss function

To remain consistent with current mainstream methods, CycleNet defaults to Mean Squared Error (MSE) as the loss function to ensure fair comparison with other methods:

    Loss = ‖x_{t+1:t+H} - x̄_{t+1:t+H}‖²₂.        (5)

4 Experiments

4.1 Setup

Datasets  We utilized widely adopted benchmark datasets, including the ETT series [59], Weather, Traffic, Electricity, and Solar-Energy [24]. Preprocessing operations on the datasets, such as dataset splitting and normalization, remained consistent with prior works (e.g., Autoformer [51], iTransformer [37]). The information of the datasets is shown in Table 1. Note that these datasets all exhibit stable cyclic patterns, such as daily and weekly cycles, which form the realistic basis for long-horizon forecasting. Combined with the sampling frequency of each dataset, we can infer its maximum cycle length, e.g., 24 for ETTh1 and 168 for Electricity. These manually inferred cycle lengths can be further confirmed through the ACF analysis detailed in Appendix B.2. The hyperparameter W of CycleNet is set by default to match the cycle length in Table 1.

Table 1: Dataset information.

    Dataset          ETTh1 & ETTh2   ETTm1 & ETTm2   Electricity      Solar-Energy   Traffic          Weather
    Timesteps        17,420          69,680          26,304           52,560         17,544           52,696
    Channels         7               7               321              137            862              21
    Frequency        1 hour          15 mins         1 hour           10 mins        1 hour           10 mins
    Cyclic Patterns  Daily           Daily           Daily & Weekly   Daily          Daily & Weekly   Daily
    Cycle Length     24              96              168              144            168              144

Baselines  We compared CycleNet against state-of-the-art models from recent years, including iTransformer [37], PatchTST [40], Crossformer [58], TiDE [5], TimesNet [52], DLinear [56], SCINet [34], FEDformer [60], and Autoformer [51]. To comprehensively evaluate CycleNet's performance, the Mean Squared Error (MSE) and Mean Absolute Error (MAE) metrics were employed.
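The ACF-based confirmation of a dataset's cycle length can be sketched as follows. This is a synthetic check, not code from the paper's repository; the `acf` helper and the tiled daily pattern are assumptions of the illustration.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation for lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# 60 days of hourly data with a repeating daily profile: the ACF should
# peak at the daily lag of 24, confirming W = 24 for this series.
rng = np.random.default_rng(0)
pattern = rng.standard_normal(24)    # one day's profile
series = np.tile(pattern, 60)

corr = acf(series, max_lag=48)
W = int(np.argmax(corr)) + 1         # lag with the strongest autocorrelation
```

The same procedure applied to the benchmark datasets recovers the cycle lengths listed in Table 1 (e.g., 168 for the hourly Electricity data with its weekly cycle).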
Environments  All experiments in this paper were implemented using PyTorch [41], trained using the Adam [23] optimizer, and executed on a single NVIDIA GeForce RTX 4090 GPU with 24 GB of memory.

4.2 Main results

Table 2 shows the comparison results of CycleNet with other models on multivariate LTSF tasks. Overall, CycleNet achieves state-of-the-art performance (except on the Traffic dataset), with CycleNet/MLP ranking first overall and CycleNet/Linear second. Owing to the nonlinear mapping capability of the MLP compared to the Linear backbone, CycleNet/MLP performs better on high-dimensional datasets such as Electricity and Solar-Energy (i.e., datasets with more than 100 channels). In summary, with the support of the RCF technique, even very simple and basic models (i.e., Linear and MLP) can achieve the best current performance, surpassing other deep models. This fully demonstrates the advantages of the RCF technique.

Table 2: Multivariate long-term time series forecasting results. The look-back length L is fixed at 96 and the results are averaged over all prediction horizons H ∈ {96, 192, 336, 720}. Full results and comparisons on longer look-back lengths are available in Appendix C.2. The results of other models are sourced from iTransformer [37] and TimeMixer [48]. The best results are highlighted in bold and the second best are underlined.
    Dataset               ETTh1        ETTh2        ETTm1        ETTm2        Electricity  Solar-Energy Traffic      Weather
    Metric                MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE
    Autoformer [2021]     0.496 0.487  0.450 0.459  0.588 0.517  0.327 0.371  0.227 0.338  0.885 0.711  0.628 0.379  0.338 0.382
    FEDformer [2022]      0.440 0.460  0.437 0.449  0.448 0.452  0.305 0.349  0.214 0.327  0.291 0.381  0.610 0.376  0.309 0.360
    SCINet [2022]         0.747 0.647  0.954 0.723  0.485 0.481  0.571 0.537  0.268 0.365  0.282 0.375  0.804 0.509  0.292 0.363
    DLinear [2023]        0.456 0.452  0.559 0.515  0.403 0.407  0.350 0.401  0.212 0.300  0.330 0.401  0.625 0.383  0.265 0.317
    TimesNet [2023]       0.458 0.450  0.414 0.427  0.400 0.406  0.291 0.333  0.192 0.295  0.301 0.319  0.620 0.336  0.259 0.287
    TiDE [2023]           0.541 0.507  0.611 0.550  0.419 0.419  0.358 0.404  0.251 0.344  0.347 0.417  0.760 0.473  0.271 0.320
    Crossformer [2023]    0.529 0.522  0.942 0.684  0.513 0.496  0.757 0.610  0.244 0.334  0.641 0.639  0.550 0.304  0.259 0.315
    PatchTST [2023]       0.469 0.454  0.387 0.407  0.387 0.400  0.281 0.326  0.205 0.290  0.270 0.307  0.481 0.304  0.259 0.281
    TimeMixer [2024]      0.447 0.440  0.364 0.395  0.381 0.395  0.275 0.323  0.182 0.272  0.216 0.280  0.484 0.297  0.240 0.271
    iTransformer [2024]   0.454 0.447  0.383 0.407  0.407 0.410  0.288 0.332  0.178 0.270  0.233 0.262  0.428 0.282  0.258 0.278
    CycleNet/Linear       0.432 0.427  0.383 0.404  0.386 0.395  0.272 0.315  0.170 0.260  0.235 0.270  0.485 0.313  0.254 0.279
    CycleNet/MLP          0.457 0.441  0.388 0.409  0.379 0.396  0.266 0.314  0.168 0.259  0.210 0.261  0.472 0.301  0.243 0.271

Furthermore, we can observe that CycleNet's performance on the Traffic dataset is inferior to iTransformer, which models multivariate relationships in time series data using an inverted Transformer. This is because the Traffic dataset exhibits spatiotemporal and temporal-lag characteristics, where the traffic flow at a certain detection point significantly affects the future values of neighboring detection points.
In such cases, modeling sufficient inter-channel relationships is necessary, and iTransformer accomplishes this. In contrast, CycleNet independently models the temporal dependencies of each channel and hence is at a disadvantage in this scenario. However, CycleNet still significantly outperforms the other baselines on the Traffic dataset, demonstrating its competitiveness. Additionally, we include further analysis of CycleNet in traffic scenarios in Appendix C.5, including a full comparison of results on the PEMS datasets.

4.3 Efficiency analysis

The proposed RCF technique, as a plug-and-play module, requires minimal overhead, needing only W × D additional learnable parameters and no additional Multiply-Accumulate Operations (MACs). The backbones of CycleNet, namely the single-layer Linear and dual-layer MLP, are also significantly more lightweight than other multi-layer stacked models. Table 3 demonstrates the efficiency comparison between CycleNet and other mainstream models, where CycleNet shows significant advantages. In particular, compared to iTransformer, which also possesses strong capabilities in modeling long-term dependencies and nonlinear learning, CycleNet/MLP has over ten times fewer parameters and MACs. As for CycleNet/Linear, which shares the same single-layer linear backbone as DLinear, it also has fewer parameters and MACs.

Table 3: Efficiency comparison between CycleNet and other models on the Electricity dataset with look-back length L = 96 and forecast horizon H = 720. Training Time denotes the average time required per epoch.

    Model                 Parameters   MACs      Training Time (s)
    Informer [2021]       12.53M       3.97G     70.1
    Autoformer [2021]     12.22M       4.41G     107.7
    FEDformer [2022]      17.98M       4.41G     238.7
    DLinear [2023]        139.6K       44.91M    18.1
    PatchTST [2023]       10.74M       25.87G    129.5
    iTransformer [2024]   5.15M        1.65G     35.1
    CycleNet/MLP          472.9K       134.84M   30.8
    CycleNet/Linear       123.7K       22.42M    29.6
    RCF part              53.9K        0         12.8
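The overhead figures above are easy to sanity-check by hand. A back-of-the-envelope count, assuming the shared Linear layer includes a bias term:

```python
# Electricity setting from Table 3: look-back L = 96, horizon H = 720,
# cycle length W = 168, channel count D = 321
L, H, W, D = 96, 720, 168, 321

rcf_params = W * D           # learnable recurrent cycles Q: one value per phase per channel
linear_params = L * H + H    # one Linear layer shared across channels (weight + bias)

print(rcf_params, rcf_params + linear_params)
```

This reproduces the "53.9K" RCF entry and the "123.7K" CycleNet/Linear entry in Table 3; the RCF part contributes no MACs because building the cyclic components only indexes and copies entries of Q.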
However, in terms of training speed, DLinear is still faster than CycleNet/Linear. This is because the RCF technique requires aligning the recurrent cycles with each data sample, which incurs additional CPU time. Overall, considering the significant improvement in prediction accuracy brought by the RCF technique, CycleNet achieves the best balance between performance and efficiency.

4.4 Ablation study and analysis

Effectiveness of RCF  To investigate the effectiveness of RCF, we conducted comprehensive ablation experiments on two datasets with significant periodicity: Electricity and Traffic. The results are shown in Table 4.

Table 4: Ablation study of the RCF technique. The Linear and MLP backbones apply the same instance normalization strategy as CycleNet by default to fully isolate the effect of the RCF technique.

    Dataset          Electricity                                             Traffic
    Horizon          96           192          336          720              96           192          336          720
    Metric           MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE        MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE
    Linear           0.197 0.274  0.197 0.277  0.212 0.292  0.253 0.324      0.645 0.383  0.598 0.361  0.605 0.362  0.643 0.381
    + RCF            0.141 0.234  0.155 0.247  0.172 0.264  0.210 0.296      0.480 0.314  0.482 0.313  0.476 0.303  0.503 0.320
    Improve          28.6% 14.6%  21.4% 10.8%  18.8% 9.5%   17.1% 8.7%       25.6% 18.0%  19.5% 13.2%  21.3% 16.2%  21.8% 16.1%
    MLP              0.175 0.259  0.181 0.265  0.197 0.282  0.240 0.317      0.500 0.325  0.496 0.321  0.509 0.325  0.542 0.342
    + RCF            0.136 0.229  0.152 0.244  0.170 0.264  0.212 0.299      0.458 0.296  0.457 0.294  0.470 0.299  0.502 0.314
    Improve          22.2% 11.6%  15.9% 8.0%   13.6% 6.3%   11.6% 5.7%       8.5%  8.9%   7.9%  8.3%   7.7%  8.0%   7.3%  8.1%
    DLinear          0.195 0.278  0.194 0.281  0.207 0.297  0.243 0.331      0.649 0.398  0.599 0.372  0.606 0.375  0.646 0.396
    + RCF            0.143 0.240  0.156 0.253  0.171 0.270  0.204 0.302      0.506 0.317  0.499 0.317  0.512 0.325  0.545 0.343
    Improve          26.6% 13.6%  19.7% 10.0%  17.4% 8.9%   16.3% 8.8%       22.1% 20.4%  16.6% 14.6%  15.4% 13.3%  15.6% 13.5%
    PatchTST         0.168 0.260  0.176 0.266  0.193 0.282  0.233 0.317      0.436 0.281  0.449 0.285  0.464 0.293  0.499 0.310
    + RCF            0.136 0.231  0.153 0.246  0.170 0.264  0.211 0.299      0.438 0.264  0.457 0.270  0.469 0.275  0.509 0.292
    Improve          19.0% 11.0%  13.0% 7.6%   11.7% 6.6%   9.4%  5.7%       -0.5% 6.1%   -1.8% 5.5%   -1.0% 6.3%   -2.0% 6.1%
    iTransformer     0.148 0.240  0.162 0.253  0.178 0.269  0.225 0.317      0.395 0.268  0.417 0.276  0.433 0.283  0.467 0.302
    + RCF            0.136 0.231  0.153 0.247  0.168 0.263  0.194 0.287      0.415 0.263  0.440 0.271  0.456 0.278  0.491 0.294
    Improve          8.1%  3.7%   5.6%  2.4%   5.8%  2.2%   13.8% 9.5%       -5.1% 1.9%   -5.5% 1.8%   -5.3% 1.8%   -5.1% 2.6%

Firstly, when the basic Linear and MLP backbones (both using instance normalization by default) are combined with the RCF technique, a significant improvement in prediction accuracy (approximately 10% to 20%) is observed. This demonstrates that the success of CycleNet is largely attributable to the RCF technique rather than to the backbones themselves or the instance normalization strategy. Overall, MLP performs better than Linear regardless of whether the RCF technique is applied. This indicates that nonlinear mapping capability is necessary when modeling high-dimensional datasets with the channel-independent strategy (sharing parameters across channels), aligning with previous research findings [26].

Secondly, we further verified whether RCF can enhance the prediction accuracy of existing models, since RCF is essentially a plug-and-play, flexible technique. Incorporating RCF still improves the performance of existing complexly designed, deeply stacked models (by approximately 5% to 10%), such as PatchTST [40] and iTransformer [37]. Even for DLinear, which already employs the classical MOV-based STD technique, RCF provides an improvement of approximately 20%. This further indicates the effectiveness and portability of RCF. However, an interesting phenomenon was observed: when PatchTST and iTransformer are combined with RCF on the Traffic dataset, the MAE decreases but the MSE increases.
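The plug-and-play usage evaluated above can be expressed as a thin wrapper around any forecaster. This is a sketch: `rcf_wrap` and the persistence backbone are illustrative names, and in CycleNet the cycle Q is learned jointly with the backbone rather than supplied up front.

```python
import numpy as np

def rcf_wrap(backbone, Q):
    """Wrap a residual forecaster `backbone(resid, H) -> pred` with RCF:
    strip the aligned cyclic component from the input, forecast only the
    residual, then add the cyclic component of the forecast window back."""
    W = Q.shape[-1]

    def cyc(t, n):  # aligned cyclic sub-sequence of length n starting at index t
        Qt = np.roll(Q, -(t % W))
        return np.concatenate([np.tile(Qt, n // W), Qt[: n % W]])

    def forecast(x_in, t, H):
        resid = x_in - cyc(t, len(x_in))
        return backbone(resid, H) + cyc(t + len(x_in), H)

    return forecast

# Usage: a naive persistence backbone becomes cycle-aware once wrapped
persistence = lambda resid, H: np.full(H, resid[-1])
Q = np.array([0., 2., 1., 3.])
model = rcf_wrap(persistence, Q)
pred = model(np.tile(Q, 3), t=0, H=6)  # purely cyclic input -> zero residual
```

Because the wrapper only changes what the inner model sees (residuals instead of raw values), the same pattern applies unchanged to Linear, MLP, PatchTST, or iTransformer backbones.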
The most important reason behind this is that the Traffic dataset contains extreme points, which can affect the effectiveness of RCF, since RCF fundamentally relies on learning the historical average cycles of the dataset. We further analyze this phenomenon in detail in Appendix C.5 and suggest potential directions for improving the RCF technique.

Comparison of different STD techniques  The proposed RCF technique is essentially a more powerful STD approach. Unlike existing methods that decompose the periodic (seasonal) component from a limited look-back window, RCF learns the global periodic component from the training set. Here, we compare the effectiveness of RCF with existing STD techniques, using a pure Linear model as the backbone (without applying any instance normalization strategies). The comparison includes LD from Leddam [55], MOV from DLinear [56], and the Sparse technique from SparseTSF [32].

As shown in Table 5, RCF significantly outperforms the other STD methods, particularly on datasets with strong periodicity, such as Electricity and Solar-Energy. In contrast, the other STD methods showed no significant advantage over the pure Linear model. There are several reasons for this. First, MOV- and LD-based STD methods estimate the trend by sliding aggregation within the look-back window, which suffers from inherent issues [27, 26]: (i) the sliding window of the moving average needs to be larger than the maximum period of the seasonal component, otherwise the decomposition may be incomplete (especially when the period length exceeds the look-back length, which can make decomposition impossible); (ii) zero-padding is required at the edges of the sequence samples to obtain equally sized moving-average sequences, leading to distortion at the sequence edges. As for the Sparse technique, being a lightweight decomposition method, it relies more on longer look-back windows and instance normalization strategies to achieve adequate performance.
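The edge-distortion issue of MOV-based decomposition can be seen in a few lines. This is a simplified sketch using the zero padding described above; actual implementations differ in their padding choices.

```python
import numpy as np

def mov_decomp(x, kernel):
    """Moving-average STD: trend = sliding mean over `kernel` steps,
    seasonal = x - trend. The series is zero-padded at both edges so the
    trend keeps the same length as x."""
    pad_left = kernel // 2
    pad_right = kernel - 1 - pad_left
    xp = np.concatenate([np.zeros(pad_left), x, np.zeros(pad_right)])
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return trend, x - trend

x = np.ones(20)  # a flat series whose true trend is 1 everywhere
trend, seasonal = mov_decomp(x, kernel=5)
# The interior trend is recovered exactly, but the zero-padded edges are
# pulled toward zero, distorting both components near the window boundaries.
```

RCF sidesteps this entirely: the cyclic component is read out of the globally learned Q by indexing, so no windowed smoothing and no edge padding are involved.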
Table 5: Comparison of different STD techniques. To directly compare the effects of STD, the configuration used here is consistent with that of DLinear [56], with a sufficient look-back length of 336 and no additional instance normalization strategies. Thus, CLinear here refers to CycleNet/Linear without RevIN. The reported results are averaged across all prediction horizons H ∈ {96, 192, 336, 720}, with full results available in Appendix C.3.

    Setup          CLinear        LDLinear       DLinear        SLinear           Linear
                   (RCF+Linear)   (LD+Linear)    (MOV+Linear)   (Sparse+Linear)
    Metric         MSE   MAE      MSE   MAE      MSE   MAE      MSE   MAE         MSE   MAE
    ETTh1          0.418 0.434    0.427 0.439    0.425 0.437    0.424 0.436       0.427 0.439
    ETTh2          0.451 0.456    0.455 0.457    0.471 0.467    0.460 0.460       0.460 0.462
    ETTm1          0.349 0.382    0.365 0.387    0.367 0.390    0.362 0.383       0.362 0.384
    ETTm2          0.266 0.330    0.273 0.336    0.280 0.341    0.290 0.352       0.269 0.331
    Electricity    0.157 0.255    0.167 0.264    0.167 0.264    0.172 0.268       0.167 0.265
    Solar-Energy   0.220 0.259    0.253 0.316    0.254 0.318    0.255 0.315       0.253 0.318
    Traffic        0.423 0.289    0.434 0.296    0.434 0.296    0.435 0.292       0.434 0.296
    Weather        0.245 0.300    0.244 0.297    0.244 0.296    0.246 0.298       0.245 0.297

Additionally, these methods that decouple trend and seasonality within the look-back window are essentially equivalent to unconstrained or weakly constrained linear regression [44], which means that after full training convergence, linear-based models combined with these methods are theoretically equivalent to pure linear models. In contrast, the periodic components obtained by the RCF technique are globally estimated from the training set, allowing it to surpass the limitations of a finite-length look-back window; thus, its capabilities extend beyond standard linear regression.

Table 6: Performance of the CycleNet/Linear model with varied W. The forecast horizon is set to 96.

    Setup          RCF/W=168    RCF/W=144    RCF/W=96     RCF/W=24     W/o RCF
    Metric         MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE
    Electricity    0.142 0.234  0.196 0.275  0.196 0.274  0.195 0.274  0.197 0.274
    Traffic        0.480 0.314  0.617 0.386  0.617 0.385  0.618 0.385  0.645 0.383
    Solar-Energy   0.289 0.376  0.208 0.256  0.276 0.365  0.287 0.375  0.286 0.375
    ETTm1          0.350 0.369  0.340 0.366  0.325 0.363  0.348 0.367  0.351 0.372
    ETTh1          0.395 0.402  0.384 0.395  0.383 0.393  0.377 0.391  0.384 0.392

Impact of hyperparameter W  The hyperparameter W determines the length of the learnable recurrent cycles Q in the RCF technique. In principle, it must match the maximum primary cycle length in the data to correctly model the periodic patterns of the sequence. We investigate the performance of the CycleNet/Linear model under different settings of W on different datasets in Table 6. When the hyperparameter W is correctly set to the maximum cycle length of the dataset (i.e., the cycle length pre-inferred in Table 1), RCF plays a significant role, yielding a large performance gap over the cases where it is set incorrectly. This indicates the necessity of inferring and setting the correct W for RCF to function properly. Furthermore, when W is incorrectly set, the model's performance is almost the same as when RCF is not used at all. This suggests that even in the worst-case scenario, RCF does not bring significant negative effects.

Visualization of the learned periodic patterns  The purpose of the RCF technique is to utilize the learnable recurrent cycles Q (initialized to zero) to model the periodic patterns in time series data. After co-training with the backbone, the recurrent cycles represent the inherent periodic patterns of the sequence. Figure 4 illustrates the different periodic patterns learned from different datasets and channels. For example, Figure 4(c) shows the daily operating pattern of solar photovoltaic generation, while Figure 4(d) displays the weekly operating pattern of traffic flow, featuring peak traffic on weekday mornings.
These periodic patterns learned from the global sequence provide important supplementary information to the prediction model, especially when the look-back window is limited and cannot provide sufficient cyclic information because the cycle length is long. Furthermore, although the cycle length is the same for different channels within the same dataset, the specific periodic patterns differ, as shown in Figure 4(e-h). In particular, Figure 4(f) demonstrates the intermittent periodicity of household electricity consumption on weekdays, while the others exhibit relatively uniform weekday patterns in their respective channels. This highlights the necessity of separately modeling the periodic patterns for each channel.

Figure 4: Visualization of the periodic patterns learned by CycleNet/Linear. Panels (a-d) display different periodic patterns learned from different datasets ((a) ETTm1, 7th channel; (b) Weather, 7th; (c) Solar-Energy, 137th; (d) Traffic, 607th), and panels (e-h) show different periodic patterns learned from different channels within the same dataset ((e) Electricity, 311th; (f) Electricity, 318th; (g) Electricity, 320th; (h) Electricity, 321st). The i-th indicates the index of the channel within the dataset.

In conclusion, these findings demonstrate that the RCF technique can effectively learn the inherent periodic patterns in time series data, serving as a crucial explanatory factor behind the state-of-the-art performance of CycleNet.
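A closed-form analogue helps illustrate both what the learned Q represents and why a mis-set W behaves like no RCF at all (Table 6). Here `mean_cycle` is an illustrative stand-in: CycleNet learns Q by gradient descent rather than by averaging.

```python
import numpy as np

def mean_cycle(x, W):
    """Historical average cycle: mean of the series grouped by phase t mod W."""
    n = (len(x) // W) * W
    return x[:n].reshape(-1, W).mean(axis=0)

x = np.tile(np.array([0., 3., 1., 2.]), 50)  # true cycle length is 4
Q_good = mean_cycle(x, W=4)  # recovers the pattern [0, 3, 1, 2]
Q_bad = mean_cycle(x, W=3)   # mis-set W mixes phases: near-flat at the series mean
```

With the correct W, the recovered cycle carries all of the periodic structure; with an incorrect W, the phases average out, so the "cyclic component" is nearly constant and subtracting it changes little, matching the observation that a wrong W performs about the same as omitting RCF entirely.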
Additionally, we include further analysis in Appendix C.1, showcasing the periodic patterns learned by RCF under different configurations to better illustrate how RCF operates.

Figure 5: Performance of CycleNet and comparative models with different look-back lengths on (a) Electricity and (b) Traffic. The forecast horizon is set to 96.

Performance with varied look-back length  The look-back length determines the richness of historical information that can be utilized. Theoretically, the larger it is, the better the model should perform, especially for models capable of capturing long-term dependencies. Figure 5 shows the performance of different models under different look-back lengths. It can be observed that CycleNet, as well as representatives of current state-of-the-art models such as iTransformer [37], PatchTST [40], and DLinear [56], all achieve better performance with longer look-back lengths. This indicates that these models all possess strong capabilities for modeling long-term dependencies.

It is worth highlighting that (i) on the Electricity dataset, CycleNet outperforms current state-of-the-art models at any look-back length; (ii) on the Traffic dataset, CycleNet still falls short of powerful existing multivariate forecasting models such as iTransformer. This indicates that in scenarios with strong periodicity but without additional spatiotemporal relationships, fully leveraging the periodic components is sufficient to achieve high-accuracy predictions. However, in more complex scenarios that require thorough modeling of relationships between variables, a simple channel-independent strategy combined with a basic backbone, like CycleNet, still struggles to fully meet the demands.
Therefore, in Appendix C.5, we further analyze the limitations of the current RCF technique in spatiotemporal scenarios (such as the traffic domain) and point out potential directions for future improvement. Finally, we also provide a comparison of CycleNet with existing models on the full datasets using longer look-back windows in Appendix C.2.

5 Discussion

Potential limitations  CycleNet demonstrates its efficacy in LTSF scenarios characterized by prominent and explicit periodic patterns. However, several potential limitations of CycleNet warrant discussion:

•Unstable cycle length: CycleNet may not be suitable for datasets where the cycle length (or frequency) varies over time, such as electrocardiogram (ECG) data, because CycleNet can only learn a fixed-length cycle.

•Varying cycle lengths across channels: When different channels within a dataset exhibit cycles of varying lengths, CycleNet may encounter challenges because it defaults to modeling all channels with the same cycle length W. Given CycleNet's channel-independent modeling strategy, one potential solution is to pre-process the dataset by splitting it based on cycle lengths, or to model each channel independently as a separate dataset.

•Impact of outliers: If the dataset contains significant outliers, CycleNet's performance may be affected. This is because the fundamental working principle of RCF is to learn the historical average cycles in the dataset. When significant outliers exist, the mean of a certain point in the cycle learned by RCF can be exaggerated, leading to inaccurate estimation of both the periodic and residual components, which subsequently impacts the prediction process.

•Long-range cycle modeling: The RCF technique is effective for modeling mid-range stable cycles (e.g., daily or weekly). However, accounting for longer dependencies (such as yearly cycles) presents a more challenging task for the RCF technique.
Although, in theory, CycleNet's W can be set to a yearly cycle length to model annual cycles, the biggest difficulty lies in collecting sufficiently long historical data to train a complete yearly cycle, which might require decades of data. In this case, future research needs to develop more advanced techniques that specifically address long-range cycle modeling.

Future work: further modeling inter-channel relationships  The RCF technique enhances the model's ability to capture the periodicity of time series data but does not explicitly consider the relationships between multiple variables. In some spatiotemporal scenarios where spatial and temporal dependencies exist between variables, these relationships are crucial. For example, recent studies such as iTransformer [37] and SOFTS [12] indicate that appropriately modeling inter-channel relationships can improve performance in traffic scenarios. However, directly applying the RCF technique to iTransformer does not lead to significant improvement (at least for the MSE metric), as demonstrated in Table 4. We believe that devising a more reasonable multivariate modeling approach combined with CycleNet could be promising and valuable, and we leave it for future exploration.

6 Conclusion

This paper reveals the presence of inherent periodic patterns in time series data and pioneers the exploration of explicitly modeling this periodicity to enhance the performance of time series forecasting models. Technically, we propose the Residual Cycle Forecasting (RCF) technique, which models the shared periodic patterns in sequences through recurrent cycles and predicts the residual cyclic components via a backbone. Furthermore, we introduce the simple yet powerful LTSF methods CycleNet/Linear and CycleNet/MLP, which combine a single-layer Linear and a dual-layer MLP, respectively, with the RCF technique.
Extensive experiments demonstrate the effectiveness of the RCF technique, and CycleNet, as a novel and simple method, achieves state-of-the-art results with significant efficiency advantages. The findings in this paper underscore the importance of periodicity as a key characteristic for accurate time series prediction, which should be given greater emphasis in the modeling process. Finally, integrating CycleNet with effective inter-channel relationship modeling methods is a promising and valuable direction for future research.

Acknowledgments

This work is supported by the Guangdong Major Project of Basic and Applied Basic Research (2019B030302002), the National Natural Science Foundation of China (62072187), the Guangzhou Development Zone Science and Technology Project (2023GH02), and the Major Key Project of PCL, China under Grant PCL2023A09.

References

[1] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.

[2] Shane Bergsma, Tim Zeyl, and Lei Guo. Sutranets: Sub-series autoregressive networks for long-sequence, probabilistic forecasting. Advances in Neural Information Processing Systems, 36:30518–30533, 2023.

[3] Defu Cao, Furong Jia, Sercan O Arik, Tomas Pfister, Yixiang Zheng, Wen Ye, and Yan Liu. Tempo: Prompt-based generative pre-trained transformer for time series forecasting. arXiv preprint arXiv:2310.04948, 2023.

[4] Cristian Challu, Kin G Olivares, Boris N Oreshkin, Federico Garza Ramirez, Max Mergenthaler Canseco, and Artur Dubrawski. Nhits: Neural hierarchical interpolation for time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 6989–6997, 2023.

[5] Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan Mathur, Rajat Sen, and Rose Yu. Long-term forecasting with tide: Time-series dense encoder. arXiv preprint arXiv:2304.08424, 2023.
[6] Jinliang Deng, Feiyang Ye, Du Yin, Xuan Song, Ivor Wai-Hung Tsang, and Hui Xiong. Parsimony or capability? Decomposition delivers both in long-term time series forecasting. 2024. URL https://api.semanticscholar.org/CorpusID:267068391.

[7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

[8] Wei Fan, Shun Zheng, Xiaohan Yi, Wei Cao, Yanjie Fu, Jiang Bian, and Tie-Yan Liu. Depts: Deep expansion learning for periodic time series forecasting. arXiv preprint arXiv:2203.07681, 2022.

[9] Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/53c6de78244e9f528eb3e1cda69699bb-Paper.pdf.

[10] Zeying Gong, Yujin Tang, and Junwei Liang. Patchmixer: A patch-mixing architecture for long-term time series forecasting. arXiv preprint arXiv:2310.00655, 2023.

[11] Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew G Wilson. Large language models are zero-shot time series forecasters. Advances in Neural Information Processing Systems, 36, 2024.

[12] Lu Han, Xu-Yang Chen, Han-Jia Ye, and De-Chuan Zhan. Softs: Efficient multivariate time series forecasting with series-core fusion. arXiv preprint arXiv:2404.14197, 2024.

[13] Lu Han, Han-Jia Ye, and De-Chuan Zhan. The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting. IEEE Transactions on Knowledge and Data Engineering, 2024.
11 [14] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 16000–16009, 2022. [15] Haowen Hou and F Richard Yu. Rwkv-ts: Beyond traditional recurrent neural network for time series tasks. arXiv preprint arXiv:2401.09093 , 2024. [16] Qihe Huang, Lei Shen, Ruixin Zhang, Jiahuan Cheng, Shouhong Ding, Zhengyang Zhou, and Yang Wang. Hdmixer: Hierarchical dependency with extendable patch for multivariate time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 38, pages 12608–12616, 2024. [17] Qihe Huang, Lei Shen, Ruixin Zhang, Shouhong Ding, Binwu Wang, Zhengyang Zhou, and Yang Wang. Crossgnn: Confronting noisy multivariate time series via cross interaction refine- ment. Advances in Neural Information Processing Systems , 36, 2024. [18] Qihe Huang, Zhengyang Zhou, Kuo Yang, Gengyu Lin, Zhongchao Yi, and Yang Wang. Leret: Language-empowered retentive network for time series forecasting. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 , 2024. [19] Yuxin Jia, Youfang Lin, Xinyan Hao, Yan Lin, Shengnan Guo, and Huaiyu Wan. Witran: Water-wave information transmission and recurrent acceleration network for long-range time series forecasting. Advances in Neural Information Processing Systems , 36, 2024. [20] Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, et al. Time-llm: Time series forecasting by reprogramming large language models. arXiv preprint arXiv:2310.01728 , 2023. [21] Ming Jin, Yifan Zhang, Wei Chen, Kexin Zhang, Yuxuan Liang, Bin Yang, Jindong Wang, Shirui Pan, and Qingsong Wen. Position: What can large language models tell us about time series analysis. 
In Forty-first International Conference on Machine Learning, 2024.
[22] Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2021.
[23] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[24] Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 95–104, 2018.
[25] Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Advances in Neural Information Processing Systems, 32, 2019.
[26] Zhe Li, Shiyi Qi, Yiduo Li, and Zenglin Xu. Revisiting long-term time series forecasting: An investigation on linear mapping. arXiv preprint arXiv:2305.10721, 2023.
[27] Zhe Li, Zhongwen Rao, Lujia Pan, Pengyun Wang, and Zenglin Xu. Ti-mae: Self-supervised masked time series autoencoders. arXiv preprint arXiv:2301.08871, 2023.
[28] Zhe Li, Zhongwen Rao, Lujia Pan, and Zenglin Xu. Mts-mixers: Multivariate time series forecasting via factorized temporal and channel mixing. arXiv preprint arXiv:2302.04501, 2023.
[29] Bryan Lim, Sercan Ö Arık, Nicolas Loeff, and Tomas Pfister. Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764, 2021.
[30] Shengsheng Lin, Weiwei Lin, Wentai Wu, Songbo Wang, and Yongxiang Wang. Petformer: Long-term time series forecasting via placeholder-enhanced transformer. arXiv preprint arXiv:2308.04791, 2023.
[31] Shengsheng Lin, Weiwei Lin, Wentai Wu, Feiyu Zhao, Ruichao Mo, and Haotong Zhang.
Segrnn: Segment recurrent neural network for long-term time series forecasting. arXiv preprint arXiv:2308.11200, 2023.
[32] Shengsheng Lin, Weiwei Lin, Wentai Wu, Haojun Chen, and Junjie Yang. Sparsetsf: Modeling long-term time series forecasting with 1k parameters. In Forty-first International Conference on Machine Learning, 2024.
[33] Haoxin Liu, Zhiyuan Zhao, Jindong Wang, Harshavardhan Kamarthi, and B Aditya Prakash. Lstprompt: Large language models as zero-shot time series forecasters by long-short-term prompting. arXiv preprint arXiv:2402.16132, 2024.
[34] Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, and Qiang Xu. Scinet: Time series modeling and forecasting with sample convolution and interaction. Advances in Neural Information Processing Systems, 35:5816–5828, 2022.
[35] Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations, 2021.
[36] Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Exploring the stationarity in time series forecasting. Advances in Neural Information Processing Systems, 35:9881–9893, 2022.
[37] Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. itransformer: Inverted transformers are effective for time series forecasting. In The Twelfth International Conference on Learning Representations, 2024.
[38] Donghao Luo and Xue Wang. Moderntcn: A modern pure convolution structure for general time series analysis. In The Twelfth International Conference on Learning Representations, 2024.
[39] Henrik Madsen. Time Series Analysis. CRC Press, 2007.
[40] Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In International Conference on Learning Representations, 2023.
[41] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
[42] Xiangfei Qiu, Jilin Hu, Lekui Zhou, Xingjian Wu, Junyang Du, Buang Zhang, Chenjuan Guo, Aoying Zhou, Christian S. Jensen, Zhenli Sheng, and Bin Yang. Tfb: Towards comprehensive and fair benchmarking of time series forecasting methods. Proc. VLDB Endow., 17(9):2363–2377, 2024.
[43] David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. Deepar: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191, 2020.
[44] William Toner and Luke Nicholas Darlow. An analysis of linear time series forecasting models. In Forty-first International Conference on Machine Learning, 2024.
[45] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
[46] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[47] Huiqiang Wang, Jian Peng, Feihu Huang, Jince Wang, Junhui Chen, and Yifei Xiao. Micn: Multi-scale local and global context modeling for long-term series forecasting. In The Eleventh International Conference on Learning Representations, 2022.
[48] Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y Zhang, and Jun Zhou. Timemixer: Decomposable multiscale mixing for time series forecasting. In The Twelfth International Conference on Learning Representations, 2024.
[49] Qingsong Wen, Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan, and Liang Sun. Transformers in time series: A survey.
arXiv preprint arXiv:2202.07125, 2022.
[50] Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, and Steven Hoi. Etsformer: Exponential smoothing transformers for time-series forecasting. arXiv preprint arXiv:2202.01381, 2022.
[51] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34:22419–22430, 2021.
[52] Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. Timesnet: Temporal 2d-variation modeling for general time series analysis. In International Conference on Learning Representations, 2023.
[53] Zhijian Xu, Ailing Zeng, and Qiang Xu. Fits: Modeling time series with 10k parameters. In The Twelfth International Conference on Learning Representations, 2024.
[54] Hao Xue and Flora D Salim. Promptcast: A new prompt-based learning paradigm for time series forecasting. IEEE Transactions on Knowledge and Data Engineering, 2023.
[55] Guoqi Yu, Jing Zou, Xiaowei Hu, Angelica I Aviles-Rivero, Jing Qin, and Shujun Wang. Revitalizing multivariate time series forecasting: Learnable decomposition with inter-series dependencies and intra-series variations modeling. In Forty-first International Conference on Machine Learning, 2024.
[56] Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 11121–11128, 2023.
[57] Tianping Zhang, Yizhuo Zhang, Wei Cao, Jiang Bian, Xiaohan Yi, Shun Zheng, and Jian Li. Less is more: Fast multivariate time series forecasting with light sampling-oriented mlp structures. arXiv preprint arXiv:2207.01186, 2022.
[58] Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In International Conference on Learning Representations, 2023.
[59] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11106–11115, 2021.
[60] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, pages 27268–27286. PMLR, 2022.
[61] Tian Zhou, Peisong Niu, Liang Sun, Rong Jin, et al. One fits all: Power general time series analysis by pretrained lm. Advances in Neural Information Processing Systems, 36, 2024.

A Development of time series forecasting

In recent years, the time series analysis community has shifted its focus from short-term forecasting to tasks with longer prediction horizons, known as long-term time series forecasting (LTSF) tasks. This shift offers greater convenience but also poses increased challenges. Mainstream approaches can be roughly classified into the following five distinct classes:

Transformer-based Models It is widely recognized that Transformers possess impressive capabilities for long-distance modeling, and researchers have therefore had high expectations for their adaptation to long time series tasks [46, 49]. Early works, such as LogTrans [25], TFT [29], Informer [59], Autoformer [51], Pyraformer [35], FEDformer [60], ETSformer [50], and NSTransformer [36], focused on optimizing the original Transformer architecture for time series analysis tasks. However, more recent research has found that satisfactory performance can be achieved by simply partitioning the series into patches, drawing inspiration from patch techniques used in the computer vision community [7, 14]. Approaches like PatchTST [40], PETformer [30], and Crossformer [58] have demonstrated promising results by adopting this patch-based approach.
Linear- and MLP-based Models Linear- and MLP-based methods are often lighter-weight, especially compared to Transformer methods that require stacking multiple blocks [57, 4]. A particularly notable breakthrough is the observation made by DLinear [56], which demonstrates that a single-layer linear model can outperform many complex Transformer designs. This observation has led to a sequence of works, including TiDE [5], MTS-Mixers [28], TSMixer [28], TimeMixer [48], HDMixer [16], SOFTS [12], FITS [53], SparseTSF [32], and SSCNN [6]. The proposed CycleNet in this paper is also a Linear- or MLP-based model that is simple, efficient, and powerful for time series forecasting.

RNN-based Models Conceptually, Recurrent Neural Networks (RNN) are considered the most suitable models for modeling time series data [24, 43]. However, due to difficulties in parallelization and in modeling long sequences, RNNs are not the most popular choice for LTSF tasks. Recent works aim to revitalize RNN models in long sequence modeling tasks, such as SegRNN [31], WITRAN [19], SutraNets [2], and RWKV-TS [15].

TCN-based Models Because of the parallelizability of convolution operations and their ability to capture features at different time scales, Temporal Convolutional Network (TCN) methods are considered strong competitors for addressing time series tasks [1, 9]. Recent works that apply TCN methods to LTSF tasks include SCINet [34], MICN [47], TimesNet [52], PatchMixer [10], and ModernTCN [38].

LLM-based Models The remarkable capabilities demonstrated by large language models (LLM) have sparked interest among researchers from various fields, including those working on time series forecasting tasks [21, 18]. Some works fine-tune pre-trained LLMs to perform time series analysis tasks, including OFA [61], Time-LLM [20], and TEMPO [3].
Other works aim to achieve zero-shot inference with large pre-trained LLMs through prompt engineering, including LLMTime [11], PromptCast [54], and LSTPrompt [33].

B More details of CycleNet

B.1 Overall pseudocode

Algorithm 1 demonstrates the implementation of modeling periodic patterns through recurrent cycles. Specifically, the first line defines the learnable parameter queue Q and initializes it to zero. Lines 2-11 define the getCycle function, which is called by CycleNet to obtain the corresponding truncated equivalent cyclic subsequences. This function takes two parameters, i and l, where i is the relative positional index for Q and l is the length of the required subsequence. Q learns the internal periodic patterns within the sequence through co-training with the backbone. Furthermore, Algorithm 2 illustrates the workflow of CycleNet. The first step is to normalize the samples based on their mean and standard deviation, then call the getCycle function to remove the cyclic components of the input data. Subsequently, the residual components are predicted by the backbone. Finally, the cyclic components of the output data are added back, and instance denormalization is performed to obtain the final prediction result. Here, the cycle index i corresponds to t mod W, as described in Section 3.1.
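As a concrete illustration, the getCycle routine and the CycleNet workflow described above can be sketched in NumPy roughly as follows. This is a minimal, illustrative sketch only: the class and function names are hypothetical, and the actual repository implements Q as a trainable PyTorch parameter updated jointly with the backbone.

```python
import numpy as np

class RecurrentCycle:
    """Learnable cycle queue Q of shape (W, D), sketched here as a plain NumPy array."""
    def __init__(self, W, D):
        self.W = W
        self.Q = np.zeros((W, D))  # initialized to zero; co-trained with the backbone in practice

    def get_cycle(self, i, l):
        """Return the length-l cyclic subsequence starting at relative index i."""
        # Roll the queue so that position i becomes the head (Algorithm 1, line 3).
        Qr = np.roll(self.Q, shift=-(i % self.W), axis=0)
        if l < self.W:
            return Qr[:l]                      # retrieve the required part directly
        n, d = divmod(l, self.W)               # n full cycles plus a remainder of d steps
        return np.concatenate([np.tile(Qr, (n, 1)), Qr[:d]], axis=0)

def cyclenet_forward(x, i, H, cycle, backbone, eps=1e-8):
    """Algorithm 2: normalize, remove the cycle, forecast the residual, restore the cycle."""
    mu, sigma = x.mean(axis=0), x.std(axis=0)
    x_norm = (x - mu) / (sigma + eps)          # remove instance-specific statistics
    residual = x_norm - cycle.get_cycle(i, x.shape[0])
    pred = backbone(residual)                  # any model mapping (L, D) -> (H, D)
    pred = pred + cycle.get_cycle(i + x.shape[0], H)
    return pred * (sigma + eps) + mu           # restore instance-specific statistics
```

With the default zero-initialized Q and an identity backbone (H = L), the forward pass reduces to reconstructing the input exactly, which makes for a convenient sanity check.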
Algorithm 1 Modeling periodic patterns through recurrent cycles
Require: Number of channels D and cycle length W
Ensure: Learned periodic patterns Q ∈ R^{W×D}
1: Initialize learnable parameters Q ← 0  ▷ Q ∈ R^{W×D}
2: function getCycle(i, l)  ▷ Define function
3:   Q′ ← Roll(Q, shifts = −i, dim = 0)  ▷ Roll the queue to the appropriate index
4:   if l < W then  ▷ Retrieve the required part directly from Q′
5:     return Q′_{0:l}
6:   else  ▷ Repeat Q′ to match the required length
7:     n ← ⌊l/W⌋
8:     d ← l mod W
9:     return Concat([Q′] × n, Q′_{0:d})  ▷ Concatenate replicated Q′ and the remaining part
10:  end if
11: end function

Algorithm 2 Workflow of CycleNet
Require: Look-back length L, forecast horizon H, cycle index i, and input x_{t−L+1:t} ∈ R^{L×D}
Ensure: Forecast output x̄_{t+1:t+H} ∈ R^{H×D}
1: if RevIN is applied then
2:   µ, σ ← Mean(x_{t−L+1:t}), STD(x_{t−L+1:t})  ▷ Compute mean and standard deviation
3:   x_{t−L+1:t} ← (x_{t−L+1:t} − µ) / (σ + ϵ)  ▷ Remove instance-specific statistics
4: end if
5: x′_{t−L+1:t} ← x_{t−L+1:t} − getCycle(i, L)  ▷ Remove the cycle component
6: x̄′_{t+1:t+H} ← Backbone(x′_{t−L+1:t})  ▷ Forecast using backbone model
7: x̄_{t+1:t+H} ← x̄′_{t+1:t+H} + getCycle(i + L, H)  ▷ Restore the cycle component
8: if RevIN is applied then
9:   x̄_{t+1:t+H} ← x̄_{t+1:t+H} × (σ + ϵ) + µ  ▷ Restore instance-specific statistics
10: end if

B.2 Utilizing ACF analysis to determine cycle length

The RCF technique utilizes recurrent cycles Q ∈ R^{W×D} to model the internal periodic patterns of sequences. Here, the hyperparameter W determines the length of the recurrent cycles, which should precisely match the length of the periodic patterns within the data. As shown in the results of Table 6, when W is not accurately set, the RCF technique fails to fulfill its intended purpose. Although, in practice, we can infer the maximum cycle length of the dataset from the data's sampling frequency and the potentially existing periodic patterns (as shown in Table 1), this manual inference may introduce errors. Therefore, we may need a more scientific and precise approach to determine the hyperparameter W.
In such cases, the autocorrelation function (ACF) [39] serves as a powerful mathematical tool to help determine the periodicity within the data. The autocorrelation function measures the correlation between a time series and its lagged values, indicating the presence of autocorrelation within the data. Mathematically, this can be expressed as:

\mathrm{ACF}(k) = \frac{\sum_{t=1}^{N-k} (x_t - \bar{x})(x_{t+k} - \bar{x})}{\sum_{t=1}^{N} (x_t - \bar{x})^2},   (6)

where N represents the total number of observations, x_t denotes the value of the time series at time t, k is the lag, and \bar{x} is the mean of the time series values. When the lag k aligns with the data's cycle, the ACF value exhibits a significant peak. Specifically, the largest peak corresponds to the lag that matches the length of the maximum cycle present in the dataset. Conversely, if the data lacks periodicity, no significant peaks or troughs will be observed.

We present the ACF results for each dataset in Figure 6. It can be observed that these datasets all display evident periodicity, indicated by prominent peaks and troughs in the plots.

[Figure 6: Visualization of ACF results on the training set of different datasets: (a) ETTh1, W = 24; (b) ETTh2, W = 24; (c) ETTm1, W = 96; (d) ETTm2, W = 96; (e) Electricity, W = 168; (f) Solar-Energy, W = 144; (g) Traffic, W = 168; (h) Weather, W = 144. The hyperparameter W should be set to the lag corresponding to the observed maximum peak.]

More importantly, the maximum cycles shown in the plots align with the pre-inferred cycle lengths from Table 1.
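The ACF-based selection of W can be reproduced with a few lines of NumPy. This is an illustrative sketch; the helper name infer_cycle_length is our own and not taken from the codebase.

```python
import numpy as np

def acf(x, max_lag):
    """Autocorrelation function of Eq. (6), evaluated at lags k = 1..max_lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()                 # center the series at its mean
    denom = (xc ** 2).sum()           # sum_{t=1}^{N} (x_t - x_bar)^2
    return np.array([(xc[:-k] * xc[k:]).sum() / denom for k in range(1, max_lag + 1)])

def infer_cycle_length(x, max_lag):
    """Pick W as the lag of the largest ACF peak within max_lag."""
    return int(np.argmax(acf(x, max_lag)) + 1)

# Sanity check on a synthetic series with a known period of 24 steps.
t = np.arange(24 * 50)
series = np.sin(2 * np.pi * t / 24)
```

On the synthetic sine above, the largest ACF value falls at lag 24, matching the construction; on real data one would inspect the full ACF curve as in Figure 6 rather than trust the argmax blindly.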
This indicates the correctness of the pre-inferred lengths, and W should be strictly set to these values.

B.3 Experimental details

We utilized widely used benchmark datasets for LTSF tasks, including the ETT series, Electricity, Solar-Energy, Traffic, and Weather. Following prior works such as Autoformer [51] and iTransformer [37], we split the ETT datasets into training, validation, and test sets with a ratio of 6:2:2, while the other datasets were split in a ratio of 7:1:2.

We implemented CycleNet using PyTorch [41] and conducted experiments on a single NVIDIA RTX 4090 GPU with 24GB of memory. CycleNet was trained for 30 epochs with early stopping based on a patience of 5 on the validation set. The batch size was set uniformly to 256 for the ETT and Weather datasets, and 64 for the remaining datasets. This adjustment was made because the latter datasets have a larger number of channels, requiring a relatively smaller batch size to avoid out-of-memory issues. The learning rate was selected from the range {0.002, 0.005, 0.01} based on performance on the validation set. The hyperparameter W was set consistently to the pre-inferred cycle length shown in Table 1. Additionally, the hidden layer size of CycleNet/MLP was uniformly set to 512.

By default, CycleNet uses RevIN without learnable affine parameters [22]. However, we found that on the Solar-Energy dataset, using RevIN leads to a significant performance drop, as shown in Appendix C.4. The primary reason may be that photovoltaic power generation data contains continuous segments of zero values (no power generation at night). When the look-back window is not an integer multiple of a day, the calculation of means in RevIN can be significantly affected, leading to decreased performance. Therefore, for this dataset, we did not apply the RevIN strategy.
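For reference, the experimental settings above can be collected into a small configuration summary. The values are taken from the text; the dictionary and function names are our own and purely illustrative.

```python
# Per-dataset settings: pre-inferred cycle length W (Table 1 / Figure 6) and batch size.
DATASET_CONFIG = {
    "ETTh1":        {"W": 24,  "batch_size": 256},
    "ETTh2":        {"W": 24,  "batch_size": 256},
    "ETTm1":        {"W": 96,  "batch_size": 256},
    "ETTm2":        {"W": 96,  "batch_size": 256},
    "Weather":      {"W": 144, "batch_size": 256},
    "Electricity":  {"W": 168, "batch_size": 64},
    "Solar-Energy": {"W": 144, "batch_size": 64},
    "Traffic":      {"W": 168, "batch_size": 64},
}

# Shared training settings.
TRAIN_CONFIG = {
    "epochs": 30,
    "early_stopping_patience": 5,              # monitored on the validation set
    "learning_rate_grid": [0.002, 0.005, 0.01],
    "mlp_hidden_size": 512,                    # CycleNet/MLP hidden layer
}

def use_revin(dataset):
    """RevIN (without learnable affine parameters) is disabled only for Solar-Energy."""
    return dataset != "Solar-Energy"
```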
C More experimental results

C.1 Periodic patterns learned under different configurations

The proposed RCF technique can effectively learn the inherent periodic patterns within time series data. This capability is a significant advantage, revealing the potential value of RCF, or its underlying cyclic modeling approach, as a tool to assist data engineers in analyzing patterns in time series data. To further elucidate the working principle behind RCF, we examine the periodic patterns learned by the RCF technique under different configurations, as illustrated in Figure 7:

• Forecast horizon H: The learned patterns remain almost unchanged as the horizon length varies. This indicates that the horizon length does not affect the learned pattern results.

[Figure 7: Periodic patterns of the 321st channel in the Electricity dataset, learned under different configurations: (a)-(d) Horizon 96/192/336/720; (e)-(h) Lookback 96/192/336/720; (i)-(l) Backbone Linear/DLinear/PatchTST/iTransformer; (m)-(p) W = 168/96/24/23.]
The basic configuration includes both a look-back and horizon length of 96, a simple Linear model as the backbone, and the correct cycle length W set to 168.

• Look-back length L: The overall pattern remains unchanged as the look-back window changes. However, on closer observation, the learned pattern becomes smoother with an increased look-back. This is because a longer look-back provides the backbone with richer periodic information, thereby reducing reliance on the learned pattern component.

• Backbone: The patterns vary somewhat with different backbones. When DLinear is used as the backbone, the learned patterns are smoother, as DLinear's decomposition technique itself extracts certain periodic features. When iTransformer is the backbone, the learned patterns differ more, as it additionally models multichannel relationships, so the learned periodic patterns may reflect multichannel feature interactions. PatchTST behaves more similarly to Linear, as it is also a regular single-channel modeling method, though with stronger nonlinear learning capabilities than the Linear model.

• Cycle length W: When W is set to 168 (the weekly cycle length for the Electricity dataset), the recurrent cycle Q learns the complete periodic pattern, including both weekly and daily cycles. When W is set to 24 (the daily cycle length), Q only learns the daily cycle pattern. When W is set to 96 (four times the daily cycle length), Q learns four repeated copies of the daily cycle pattern. However, when W is set to 23 (which matches no semantic cycle), Q fails to learn any meaningful pattern, resulting in a straight line.

C.2 Full results with different look-back lengths

Table 2 presents the comparison results of CycleNet with other models on the mean performance at look-back length L = 96 for various forecast horizons H ∈ {96, 192, 336, 720}.
Here, we further showcase the complete comparison results for different forecast horizons in Table 7.

Table 7: Full results of different models with the look-back length L = 96. The reported results with standard deviation of CycleNet are averaged from 5 runs (with different random seeds of {2024, 2025, 2026, 2027, 2028}). The results of other models are sourced from iTransformer [37]. The best results are highlighted in bold and the second best are underlined.

Dataset H | FEDformer (MSE, MAE) | TimesNet (MSE, MAE) | iTransformer (MSE, MAE) | CycleNet/Linear (MSE, MAE) | CycleNet/MLP (MSE, MAE)
ETTh1 96 | 0.376 0.419 | 0.384 0.402 | 0.386 0.405 | 0.378±0.001 0.391±0.001 | 0.375±0.001 0.395±0.001
ETTh1 192 | 0.420 0.448 | 0.436 0.429 | 0.441 0.436 | 0.426±0.001 0.419±0.001 | 0.436±0.002 0.428±0.002
ETTh1 336 | 0.459 0.465 | 0.491 0.469 | 0.487 0.458 | 0.464±0.001 0.439±0.001 | 0.496±0.001 0.455±0.003
ETTh1 720 | 0.506 0.507 | 0.521 0.500 | 0.503 0.491 | 0.461±0.001 0.460±0.001 | 0.520±0.021 0.484±0.012
ETTh1 Avg | 0.440 0.460 | 0.458 0.450 | 0.454 0.448 | 0.432±0.001 0.427±0.001 | 0.457±0.006 0.441±0.004
ETTh2 96 | 0.358 0.397 | 0.340 0.374 | 0.297 0.349 | 0.285±0.001 0.335±0.001 | 0.298±0.003 0.344±0.001
ETTh2 192 | 0.429 0.439 | 0.402 0.414 | 0.380 0.400 | 0.373±0.001 0.391±0.001 | 0.372±0.002 0.396±0.002
ETTh2 336 | 0.496 0.487 | 0.452 0.452 | 0.428 0.432 | 0.421±0.001 0.433±0.001 | 0.431±0.007 0.439±0.005
ETTh2 720 | 0.463 0.474 | 0.462 0.468 | 0.427 0.445 | 0.453±0.003 0.458±0.002 | 0.450±0.010 0.458±0.005
ETTh2 Avg | 0.437 0.449 | 0.414 0.427 | 0.383 0.407 | 0.383±0.001 0.404±0.001 | 0.388±0.005 0.409±0.003
ETTm1 96 | 0.379 0.419 | 0.338 0.375 | 0.334 0.368 | 0.325±0.001 0.363±0.001 | 0.319±0.001 0.360±0.001
ETTm1 192 | 0.426 0.441 | 0.374 0.387 | 0.377 0.391 | 0.366±0.001 0.382±0.001 | 0.360±0.002 0.381±0.001
ETTm1 336 | 0.445 0.459 | 0.410 0.411 | 0.426 0.420 | 0.396±0.001 0.401±0.001 | 0.389±0.001 0.403±0.001
ETTm1 720 | 0.543 0.490 | 0.478 0.450 | 0.491 0.459 | 0.457±0.001 0.433±0.001 | 0.447±0.001 0.441±0.001
ETTm1 Avg | 0.448 0.452 | 0.400 0.406 | 0.407 0.410 | 0.386±0.001 0.395±0.001 | 0.379±0.001 0.396±0.001
ETTm2 96 | 0.203 0.287 | 0.187 0.267 | 0.180 0.264 | 0.166±0.001 0.248±0.001 | 0.163±0.001 0.246±0.001
ETTm2 192 | 0.269 0.328 | 0.249 0.309 | 0.250 0.309 | 0.233±0.001 0.291±0.001 | 0.229±0.001 0.290±0.001
ETTm2 336 | 0.325 0.366 | 0.321 0.351 | 0.311 0.348 | 0.293±0.001 0.330±0.001 | 0.284±0.001 0.327±0.001
ETTm2 720 | 0.421 0.415 | 0.408 0.403 | 0.412 0.407 | 0.395±0.001 0.389±0.001 | 0.389±0.003 0.391±0.002
ETTm2 Avg | 0.305 0.349 | 0.291 0.333 | 0.288 0.332 | 0.272±0.001 0.315±0.001 | 0.266±0.001 0.314±0.001
Electricity 96 | 0.193 0.308 | 0.168 0.272 | 0.148 0.240 | 0.141±0.001 0.234±0.001 | 0.136±0.001 0.229±0.001
Electricity 192 | 0.201 0.315 | 0.184 0.289 | 0.162 0.253 | 0.155±0.001 0.247±0.001 | 0.152±0.001 0.244±0.001
Electricity 336 | 0.214 0.329 | 0.198 0.300 | 0.178 0.269 | 0.172±0.001 0.264±0.001 | 0.170±0.001 0.264±0.001
Electricity 720 | 0.246 0.355 | 0.220 0.320 | 0.225 0.317 | 0.210±0.001 0.296±0.001 | 0.212±0.001 0.299±0.001
Electricity Avg | 0.214 0.327 | 0.193 0.295 | 0.178 0.270 | 0.170±0.001 0.260±0.001 | 0.168±0.001 0.259±0.001
Solar-Energy 96 | 0.242 0.342 | 0.250 0.292 | 0.203 0.237 | 0.209±0.001 0.260±0.003 | 0.190±0.007 0.247±0.003
Solar-Energy 192 | 0.285 0.380 | 0.296 0.318 | 0.233 0.261 | 0.231±0.002 0.269±0.002 | 0.210±0.004 0.266±0.008
Solar-Energy 336 | 0.282 0.376 | 0.319 0.330 | 0.248 0.273 | 0.246±0.002 0.275±0.003 | 0.217±0.006 0.266±0.006
Solar-Energy 720 | 0.357 0.427 | 0.338 0.337 | 0.249 0.275 | 0.255±0.001 0.274±0.003 | 0.223±0.003 0.266±0.003
Solar-Energy Avg | 0.292 0.381 | 0.301 0.319 | 0.233 0.262 | 0.235±0.001 0.270±0.002 | 0.210±0.005 0.261±0.005
Traffic 96 | 0.587 0.366 | 0.593 0.321 | 0.395 0.268 | 0.480±0.001 0.314±0.001 | 0.458±0.001 0.296±0.001
Traffic 192 | 0.604 0.373 | 0.617 0.336 | 0.417 0.276 | 0.482±0.001 0.313±0.001 | 0.457±0.001 0.294±0.001
Traffic 336 | 0.621 0.383 | 0.629 0.336 | 0.433 0.283 | 0.476±0.001 0.303±0.001 | 0.470±0.001 0.299±0.001
Traffic 720 | 0.626 0.382 | 0.640 0.350 | 0.467 0.302 | 0.503±0.001 0.320±0.001 | 0.502±0.001 0.314±0.001
Traffic Avg | 0.610 0.376 | 0.620 0.336 | 0.428 0.282 | 0.485±0.001 0.313±0.001 | 0.472±0.001 0.301±0.001
Weather 96 | 0.217 0.296 | 0.172 0.220 | 0.174 0.214 | 0.170±0.001 0.216±0.001 | 0.158±0.001 0.203±0.001
Weather 192 | 0.276 0.336 | 0.219 0.261 | 0.221 0.254 | 0.222±0.001 0.259±0.001 | 0.207±0.001 0.247±0.001
Weather 336 | 0.339 0.380 | 0.280 0.306 | 0.278 0.296 | 0.275±0.001 0.296±0.001 | 0.262±0.001 0.289±0.001
Weather 720 | 0.403 0.428 | 0.365 0.359 | 0.358 0.349 | 0.349±0.001 0.345±0.001 | 0.344±0.001 0.344±0.001
Weather Avg | 0.309 0.360 | 0.259 0.287 | 0.258 0.278 | 0.254±0.001 0.279±0.001 | 0.243±0.001 0.271±0.001

It can be observed that in most settings, CycleNet achieves state-of-the-art results, consistent with the findings in Table 2. Additionally, the standard deviation of CycleNet's results is mostly below 0.001, which strongly indicates the robustness of CycleNet.

Additionally, the look-back length is a crucial hyperparameter that significantly impacts the performance of time series forecasting models, as it determines the richness of information the model can leverage. Initially, the community focused primarily on exploring the application of Transformers in time series forecasting tasks. Due to the inherent complexity of Transformers, using excessively long look-back windows resulted in a significant increase in runtime. As a result, many popular models at the time, such as Informer [59], Autoformer [51], and FEDformer [60], employed shorter look-back windows, typically with L = 96.

Table 8: Full results of different models with longer look-back lengths L ∈ {336, 720}. The reported results of CycleNet are averaged from 5 runs (with different random seeds of {2024, 2025, 2026, 2027, 2028}). The results of other models are reproduced after fixing a long-standing bug (discarding the last batch of data during the test phase). The best results are highlighted in bold and the second best are underlined.
Lookback L = 336: DLinear [2023] | PatchTST [2023] | CycleNet/Linear | CycleNet/MLP
Lookback L = 720: SegRNN [2023] | SparseTSF [2024] | CycleNet/Linear | CycleNet/MLP
Each entry reports MSE MAE; the first four columns use L = 336, the last four use L = 720.

ETTh1 96 | 0.374 0.398 | 0.385 0.405 | 0.374 0.396 | 0.382 0.403 | 0.351 0.392 | 0.362 0.388 | 0.379 0.403 | 0.385 0.412
ETTh1 192 | 0.430 0.440 | 0.414 0.421 | 0.406 0.415 | 0.421 0.426 | 0.390 0.418 | 0.403 0.411 | 0.416 0.425 | 0.424 0.438
ETTh1 336 | 0.442 0.445 | 0.440 0.440 | 0.431 0.430 | 0.449 0.444 | 0.449 0.452 | 0.434 0.428 | 0.447 0.445 | 0.460 0.463
ETTh1 720 | 0.497 0.507 | 0.456 0.470 | 0.450 0.464 | 0.497 0.485 | 0.492 0.494 | 0.426 0.447 | 0.477 0.483 | 0.486 0.487
ETTh1 Avg | 0.436 0.448 | 0.424 0.434 | 0.415 0.426 | 0.437 0.440 | 0.421 0.439 | 0.406 0.419 | 0.430 0.439 | 0.439 0.450
ETTh2 96 | 0.281 0.347 | 0.275 0.337 | 0.279 0.341 | 0.300 0.355 | 0.275 0.338 | 0.294 0.346 | 0.271 0.337 | 0.293 0.352
ETTh2 192 | 0.367 0.404 | 0.338 0.379 | 0.342 0.385 | 0.373 0.403 | 0.338 0.380 | 0.339 0.377 | 0.332 0.380 | 0.359 0.395
ETTh2 336 | 0.438 0.454 | 0.365 0.398 | 0.371 0.413 | 0.384 0.419 | 0.419 0.445 | 0.359 0.397 | 0.362 0.408 | 0.392 0.423
ETTh2 720 | 0.598 0.549 | 0.391 0.429 | 0.426 0.451 | 0.428 0.450 | 0.431 0.464 | 0.383 0.424 | 0.415 0.449 | 0.425 0.451
ETTh2 Avg | 0.421 0.439 | 0.342 0.386 | 0.355 0.398 | 0.371 0.407 | 0.366 0.407 | 0.344 0.386 | 0.345 0.394 | 0.367 0.405
ETTm1 96 | 0.307 0.350 | 0.291 0.343 | 0.299 0.348 | 0.297 0.351 | 0.295 0.356 | 0.312 0.354 | 0.307 0.353 | 0.301 0.357
ETTm1 192 | 0.340 0.373 | 0.334 0.370 | 0.334 0.367 | 0.338 0.377 | 0.334 0.382 | 0.347 0.376 | 0.337 0.371 | 0.341 0.377
ETTm1 336 | 0.377 0.397 | 0.367 0.392 | 0.368 0.386 | 0.374 0.400 | 0.359 0.401 | 0.367 0.386 | 0.364 0.387 | 0.376 0.396
ETTm1 720 | 0.433 0.433 | 0.422 0.426 | 0.417 0.414 | 0.436 0.431 | 0.415 0.435 | 0.419 0.413 | 0.410 0.411 | 0.431 0.425
ETTm1 Avg | 0.364 0.388 | 0.354 0.383 | 0.355 0.379 | 0.361 0.390 | 0.351 0.394 | 0.361 0.382 | 0.355 0.381 | 0.362 0.389
ETTm2 96 | 0.165 0.257 | 0.164 0.254 | 0.159 0.247 | 0.178 0.262 | 0.165 0.251 | 0.163 0.252 | 0.159 0.249 | 0.176 0.265
ETTm2 192 | 0.227 0.307 | 0.221 0.293 | 0.214 0.286 | 0.238 0.303 | 0.226 0.300 | 0.217 0.290 | 0.214 0.289 | 0.231 0.305
ETTm2 336 | 0.304 0.362 | 0.276 0.328 | 0.269 0.322 | 0.292 0.339 | 0.282 0.341 | 0.270 0.327 | 0.268 0.326 | 0.282 0.338
ETTm2 720 | 0.431 0.441 | 0.366 0.383 | 0.363 0.382 | 0.374 0.391 | 0.361 0.392 | 0.352 0.379 | 0.353 0.384 | 0.361 0.388
ETTm2 Avg | 0.282 0.342 | 0.257 0.315 | 0.251 0.309 | 0.271 0.324 | 0.259 0.321 | 0.251 0.312 | 0.249 0.312 | 0.263 0.324
Electricity 96 | 0.140 0.237 | 0.131 0.225 | 0.128 0.223 | 0.126 0.221 | 0.130 0.228 | 0.138 0.233 | 0.128 0.223 | 0.127 0.223
Electricity 192 | 0.153 0.250 | 0.148 0.240 | 0.144 0.237 | 0.144 0.237 | 0.152 0.251 | 0.151 0.244 | 0.143 0.237 | 0.144 0.239
Electricity 336 | 0.169 0.267 | 0.165 0.259 | 0.160 0.254 | 0.160 0.255 | 0.170 0.272 | 0.166 0.260 | 0.159 0.254 | 0.159 0.255
Electricity 720 | 0.203 0.299 | 0.202 0.291 | 0.198 0.287 | 0.199 0.291 | 0.203 0.304 | 0.205 0.293 | 0.197 0.287 | 0.196 0.290
Electricity Avg | 0.166 0.263 | 0.162 0.254 | 0.158 0.250 | 0.157 0.251 | 0.164 0.264 | 0.165 0.258 | 0.157 0.250 | 0.157 0.252
Solar-Energy 96 | 0.222 0.292 | 0.190 0.278 | 0.200 0.250 | 0.182 0.245 | 0.175 0.236 | 0.195 0.243 | 0.194 0.255 | 0.174 0.232
Solar-Energy 192 | 0.249 0.313 | 0.206 0.252 | 0.221 0.261 | 0.191 0.254 | 0.193 0.268 | 0.215 0.254 | 0.205 0.251 | 0.187 0.246
Solar-Energy 336 | 0.268 0.327 | 0.217 0.254 | 0.236 0.272 | 0.197 0.257 | 0.209 0.263 | 0.232 0.262 | 0.218 0.257 | 0.194 0.252
Solar-Energy 720 | 0.271 0.326 | 0.219 0.255 | 0.245 0.277 | 0.207 0.264 | 0.205 0.264 | 0.237 0.263 | 0.239 0.278 | 0.201 0.259
Solar-Energy Avg | 0.253 0.315 | 0.208 0.260 | 0.226 0.265 | 0.194 0.255 | 0.196 0.258 | 0.220 0.256 | 0.214 0.260 | 0.189 0.247
Traffic 96 | 0.410 0.282 | 0.373 0.254 | 0.397 0.278 | 0.386 0.268 | 0.356 0.255 | 0.389 0.268 | 0.381 0.266 | 0.374 0.268
Traffic 192 | 0.423 0.288 | 0.391 0.262 | 0.411 0.283 | 0.404 0.276 | 0.374 0.268 | 0.398 0.270 | 0.394 0.273 | 0.390 0.275
Traffic 336 | 0.436 0.296 | 0.404 0.269 | 0.424 0.289 | 0.416 0.281 | 0.393 0.273 | 0.411 0.275 | 0.406 0.279 | 0.405 0.282
Traffic 720 | 0.466 0.315 | 0.436 0.287 | 0.450 0.305 | 0.445 0.300 | 0.434 0.294 | 0.448 0.297 | 0.441 0.300 | 0.441 0.302
Traffic Avg | 0.434 0.295 | 0.401 0.268 | 0.421 0.289 | 0.413 0.281 | 0.389 0.273 | 0.412 0.278 | 0.406 0.280 | 0.403 0.282
Weather 96 | 0.174 0.235 | 0.155 0.204 | 0.167 0.221 | 0.148 0.200 | 0.141 0.205 | 0.169 0.223 | 0.164 0.220 | 0.149 0.203
Weather 192 | 0.219 0.281 | 0.195 0.242 | 0.212 0.258 | 0.190 0.240 | 0.185 0.250 | 0.214 0.262 | 0.209 0.258 | 0.192 0.244
Weather 336 | 0.264 0.317 | 0.249 0.283 | 0.260 0.293 | 0.243 0.283 | 0.241 0.297 | 0.257 0.293 | 0.255 0.292 | 0.242 0.283
Weather 720 | 0.324 0.363 | 0.321 0.334 | 0.328 0.339 | 0.322 0.339 | 0.318 0.352 | 0.321 0.340 | 0.320 0.338 | 0.312 0.333
Weather Avg | 0.245 0.299 | 0.230 0.266 | 0.242 0.278 | 0.226 0.266 | 0.221 0.276 | 0.240 0.280 | 0.237 0.277 | 0.224 0.266

With the recent development of model lightweighting techniques, particularly the adoption of channel-independent strategies (first applied in DLinear [56] and PatchTST [40]), more models have started to experiment with longer look-back windows in pursuit of higher predictive accuracy. For instance, DLinear and PatchTST default to using look-back windows of L = 336, while SegRNN [31] and SparseTSF [32] default to using L = 720. To explore CycleNet's performance with longer look-back windows, we compared CycleNet with these advanced models using their respective default, longer look-back windows in Table 8. It is important to note that we re-ran the official open-source code of these baselines to obtain the corresponding results, using the same MSE as the loss function (as SegRNN originally used MAE as its loss). Additionally, there was a long-standing bug in their original repositories, where the data from the last batch was discarded during testing [42, 53]. This issue could have affected the models' performance, so we fixed this problem before re-running the experiments.

It can be observed that even with a longer look-back length, CycleNet generally maintains a significant advantage, achieving state-of-the-art performance in most scenarios. This demonstrates CycleNet's excellent performance across different look-back lengths. It is worth noting that both PatchTST and SegRNN outperform CycleNet on the Traffic dataset, even though they are also channel-independent models.
This is partly because the Traffic dataset contains more outliers (see more discussion in Appendix C.5), which may impact the performance of RCF; additionally, PatchTST and SegRNN are more complex deep models with stronger nonlinear capabilities, enabling them to fit various patterns across numerous channels (the Traffic dataset has up to 862 channels).

C.3 Full results with different STD techniques

Table 9: Full results of the comparison of different STD techniques. The configuration used here is consistent with that of DLinear [56], where a pure Linear model serves as the backbone, a look-back length of 336 is employed, and no additional instance normalization strategies are applied. Thus, CLinear here refers to CycleNet/Linear without RevIN. The best results are highlighted in bold and the second best are underlined.

Dataset H | CLinear (RCF+Linear) | LDLinear (LD+Linear) | DLinear (MOV+Linear) | SLinear (Sparse+Linear) | Linear
          | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE
ETTh1 96 | 0.370 0.395 | 0.372 0.394 | 0.372 0.394 | 0.366 0.388 | 0.374 0.395
192 | 0.404 0.417 | 0.410 0.420 | 0.408 0.417 | 0.406 0.414 | 0.409 0.418
336 | 0.434 0.440 | 0.449 0.452 | 0.441 0.442 | 0.440 0.442 | 0.442 0.444
720 | 0.465 0.486 | 0.476 0.492 | 0.480 0.494 | 0.483 0.501 | 0.484 0.498
Avg | 0.418 0.434 | 0.427 0.439 | 0.425 0.437 | 0.424 0.436 | 0.427 0.439
ETTh2 96 | 0.308 0.369 | 0.292 0.357 | 0.297 0.362 | 0.340 0.389 | 0.305 0.368
192 | 0.382 0.416 | 0.372 0.409 | 0.398 0.426 | 0.379 0.413 | 0.385 0.419
336 | 0.454 0.465 | 0.479 0.480 | 0.496 0.489 | 0.404 0.437 | 0.458 0.470
720 | 0.661 0.575 | 0.675 0.582 | 0.694 0.592 | 0.720 0.600 | 0.691 0.592
Avg | 0.451 0.456 | 0.455 0.457 | 0.471 0.467 | 0.460 0.460 | 0.460 0.462
ETTm1 96 | 0.298 0.350 | 0.305 0.350 | 0.309 0.356 | 0.306 0.349 | 0.305 0.349
192 | 0.330 0.370 | 0.335 0.366 | 0.346 0.380 | 0.339 0.370 | 0.338 0.369
336 | 0.359 0.388 | 0.372 0.390 | 0.373 0.391 | 0.372 0.389 | 0.371 0.389
720 | 0.410 0.421 | 0.445 0.443 | 0.439 0.435 | 0.430 0.426 | 0.433 0.428
Avg | 0.349 0.382 | 0.365 0.387 | 0.367 0.390 | 0.362 0.383 | 0.362 0.384
ETTm2 96 | 0.164 0.260 | 0.165 0.257 | 0.165 0.257 | 0.177 0.272 | 0.166 0.259
192 | 0.225 0.304 | 0.240 0.318 | 0.232 0.310 | 0.246 0.325 | 0.228 0.305
336 | 0.271 0.332 | 0.290 0.349 | 0.295 0.356 | 0.309 0.370 | 0.275 0.334
720 | 0.406 0.423 | 0.396 0.419 | 0.427 0.442 | 0.427 0.440 | 0.407 0.425
Avg | 0.266 0.330 | 0.273 0.336 | 0.280 0.341 | 0.290 0.352 | 0.269 0.331
Electricity 96 | 0.131 0.228 | 0.140 0.237 | 0.140 0.237 | 0.148 0.243 | 0.140 0.238
192 | 0.145 0.242 | 0.154 0.250 | 0.154 0.250 | 0.159 0.254 | 0.154 0.251
336 | 0.160 0.260 | 0.170 0.268 | 0.169 0.268 | 0.173 0.271 | 0.170 0.269
720 | 0.193 0.292 | 0.204 0.300 | 0.204 0.301 | 0.207 0.303 | 0.204 0.301
Avg | 0.157 0.255 | 0.167 0.264 | 0.167 0.264 | 0.172 0.268 | 0.167 0.265
Solar-Energy 96 | 0.192 0.251 | 0.222 0.294 | 0.222 0.298 | 0.226 0.296 | 0.224 0.302
192 | 0.218 0.258 | 0.249 0.315 | 0.250 0.312 | 0.252 0.312 | 0.250 0.310
336 | 0.231 0.262 | 0.268 0.326 | 0.270 0.335 | 0.270 0.326 | 0.269 0.325
720 | 0.239 0.265 | 0.271 0.327 | 0.272 0.327 | 0.271 0.327 | 0.270 0.333
Avg | 0.220 0.259 | 0.253 0.316 | 0.254 0.318 | 0.255 0.315 | 0.253 0.318
Traffic 96 | 0.397 0.275 | 0.411 0.285 | 0.411 0.284 | 0.414 0.281 | 0.411 0.283
192 | 0.412 0.282 | 0.423 0.288 | 0.423 0.289 | 0.425 0.285 | 0.423 0.289
336 | 0.426 0.290 | 0.436 0.296 | 0.436 0.296 | 0.436 0.293 | 0.437 0.297
720 | 0.456 0.308 | 0.466 0.315 | 0.466 0.316 | 0.464 0.310 | 0.466 0.316
Avg | 0.423 0.289 | 0.434 0.296 | 0.434 0.296 | 0.435 0.292 | 0.434 0.296
Weather 96 | 0.174 0.240 | 0.174 0.235 | 0.175 0.237 | 0.176 0.235 | 0.175 0.235
192 | 0.218 0.279 | 0.215 0.271 | 0.215 0.273 | 0.218 0.277 | 0.218 0.276
336 | 0.262 0.314 | 0.263 0.315 | 0.261 0.311 | 0.265 0.316 | 0.262 0.312
720 | 0.328 0.367 | 0.325 0.365 | 0.324 0.363 | 0.325 0.363 | 0.327 0.366
Avg | 0.245 0.300 | 0.244 0.297 | 0.244 0.296 | 0.246 0.298 | 0.245 0.297

The proposed RCF technique is essentially a type of Seasonal-Trend Decomposition (STD) method. To directly compare RCF with existing related STD techniques, we adopted a strategy consistent with DLinear, using a pure Linear model as the backbone and not applying any instance normalization techniques.
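RCF removes a recurring cycle component from the look-back window and lets the simple backbone forecast only the residual. The sketch below is a deliberately simplified single-channel illustration, not the paper's implementation: it estimates the cycle as the historical per-position average (the actual RCF learns the cycle jointly with the backbone), and a naive carry-forward stands in for the Linear/MLP backbone. The helper names `average_cycle` and `rcf_forecast` are ours.

```python
def average_cycle(history, W):
    """Per-position mean over a cycle of length W: an illustrative
    stand-in for RCF's learned cycle component."""
    return [sum(history[i::W]) / len(history[i::W]) for i in range(W)]

def rcf_forecast(history, W, horizon):
    """Subtract the aligned cycle, model the residual (here: naive
    last-value carry-forward in place of a Linear/MLP backbone),
    then add the cycle back over the forecast horizon."""
    cycle = average_cycle(history, W)
    t = len(history)
    residual = history[-1] - cycle[(t - 1) % W]  # residual after de-cycling
    return [cycle[(t + h) % W] + residual for h in range(horizon)]
```

For a perfectly periodic series the residual vanishes and the forecast simply replays the cycle, which is the intuition behind why a one-layer backbone suffices once the cycle is handled separately.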
We previously reported the mean performance of these techniques across different horizons H ∈ {96, 192, 336, 720} in Table 5. Here, we further present the complete comparative results for all horizons in Table 9. The results show that the RCF technique consistently outperforms the other techniques. A notable exception is the relatively noisy Weather dataset, where RCF does not show a significant advantage. However, in this case, the performance of several STD techniques is similar to that of the pure Linear model. Overall, these findings strongly support RCF as a new STD method that enhances model performance in scenarios with strong periodicity.

C.4 Ablation study of RevIN

Table 10: Ablation results of RevIN.

Dataset H | CycleNet/L w. RevIN | CycleNet/L w/o. RevIN | RLinear [2023] | CycleNet/M w. RevIN | CycleNet/M w/o. RevIN | RMLP [2023]
          | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE
ETTh1 96 | 0.377 0.391 | 0.379 0.399 | 0.385 0.393 | 0.378 0.397 | 0.383 0.401 | 0.383 0.401
192 | 0.426 0.419 | 0.423 0.428 | 0.439 0.424 | 0.440 0.431 | 0.431 0.436 | 0.437 0.432
336 | 0.464 0.439 | 0.460 0.452 | 0.483 0.448 | 0.495 0.453 | 0.486 0.467 | 0.494 0.461
720 | 0.462 0.460 | 0.484 0.494 | 0.481 0.470 | 0.502 0.473 | 0.547 0.516 | 0.540 0.499
ETTh2 96 | 0.286 0.336 | 0.328 0.381 | 0.291 0.339 | 0.298 0.344 | 0.326 0.377 | 0.299 0.345
192 | 0.372 0.391 | 0.467 0.464 | 0.375 0.389 | 0.374 0.400 | 0.421 0.435 | 0.371 0.394
336 | 0.422 0.433 | 0.570 0.523 | 0.414 0.425 | 0.425 0.435 | 0.522 0.490 | 0.420 0.429
720 | 0.457 0.460 | 0.773 0.630 | 0.420 0.440 | 0.442 0.454 | 0.876 0.647 | 0.438 0.450
ETTm1 96 | 0.325 0.363 | 0.327 0.371 | 0.351 0.372 | 0.320 0.361 | 0.338 0.383 | 0.327 0.366
192 | 0.366 0.382 | 0.359 0.388 | 0.390 0.390 | 0.361 0.382 | 0.367 0.393 | 0.370 0.386
336 | 0.396 0.402 | 0.391 0.414 | 0.423 0.414 | 0.392 0.404 | 0.396 0.419 | 0.404 0.410
720 | 0.457 0.434 | 0.434 0.442 | 0.486 0.448 | 0.448 0.441 | 0.447 0.448 | 0.462 0.445
ETTm2 96 | 0.168 0.249 | 0.176 0.272 | 0.184 0.266 | 0.164 0.246 | 0.174 0.266 | 0.178 0.259
192 | 0.232 0.290 | 0.249 0.324 | 0.248 0.305 | 0.232 0.291 | 0.248 0.318 | 0.242 0.302
336 | 0.293 0.330 | 0.325 0.378 | 0.307 0.342 | 0.283 0.328 | 0.304 0.361 | 0.299 0.340
720 | 0.394 0.389 | 0.526 0.495 | 0.408 0.397 | 0.385 0.389 | 0.512 0.478 | 0.400 0.398
Electricity 96 | 0.142 0.234 | 0.142 0.239 | 0.198 0.275 | 0.136 0.230 | 0.138 0.235 | 0.182 0.265
192 | 0.156 0.247 | 0.155 0.252 | 0.198 0.277 | 0.153 0.245 | 0.154 0.250 | 0.187 0.270
336 | 0.173 0.265 | 0.170 0.269 | 0.212 0.293 | 0.170 0.264 | 0.171 0.269 | 0.203 0.287
720 | 0.211 0.297 | 0.199 0.298 | 0.254 0.325 | 0.212 0.300 | 0.206 0.302 | 0.244 0.319
Solar 96 | 0.250 0.277 | 0.208 0.256 | 0.308 0.332 | 0.195 0.252 | 0.187 0.245 | 0.236 0.270
192 | 0.289 0.299 | 0.231 0.269 | 0.345 0.349 | 0.225 0.272 | 0.215 0.275 | 0.270 0.290
336 | 0.338 0.323 | 0.247 0.272 | 0.387 0.364 | 0.248 0.289 | 0.212 0.257 | 0.296 0.305
720 | 0.351 0.326 | 0.258 0.275 | 0.390 0.358 | 0.253 0.286 | 0.228 0.269 | 0.296 0.303
Traffic 96 | 0.480 0.314 | 0.475 0.302 | 0.647 0.386 | 0.459 0.297 | 0.469 0.298 | 0.510 0.331
192 | 0.482 0.313 | 0.475 0.305 | 0.600 0.362 | 0.457 0.295 | 0.477 0.304 | 0.505 0.327
336 | 0.476 0.303 | 0.489 0.313 | 0.607 0.365 | 0.470 0.300 | 0.487 0.302 | 0.518 0.332
720 | 0.505 0.321 | 0.518 0.327 | 0.644 0.383 | 0.502 0.314 | 0.522 0.315 | 0.553 0.350
Weather 96 | 0.170 0.216 | 0.209 0.284 | 0.197 0.236 | 0.158 0.203 | 0.179 0.247 | 0.181 0.219
192 | 0.222 0.260 | 0.265 0.334 | 0.239 0.270 | 0.207 0.248 | 0.220 0.284 | 0.228 0.259
336 | 0.276 0.296 | 0.314 0.368 | 0.292 0.307 | 0.263 0.290 | 0.273 0.325 | 0.282 0.299
720 | 0.350 0.345 | 0.378 0.410 | 0.365 0.353 | 0.344 0.345 | 0.345 0.377 | 0.357 0.347

Instance normalization strategies constitute essential factors in the success of current models, such as PatchTST [40], TiDE [5], iTransformer [37], SparseTSF [32], etc. By default, CycleNet also adopts this strategy, namely the version of RevIN without learnable affine parameters [22]. Here, we investigate in detail the impact of RevIN on the performance of CycleNet; the results are shown in Table 10. On the ETTh2 and Weather datasets, RevIN significantly enhances the performance of CycleNet, possibly due to more severe distribution drift issues in these datasets.
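The RevIN variant used here (without learnable affine parameters) simply standardizes each look-back window by its own statistics and maps the model's output back afterwards. A minimal single-window sketch, with hypothetical helper names of our choosing:

```python
def revin_norm(x, eps=1e-5):
    """Standardize a window by its own mean/std; return the stats so the
    prediction can later be mapped back to the original scale."""
    mu = sum(x) / len(x)
    std = (sum((v - mu) ** 2 for v in x) / len(x) + eps) ** 0.5
    return [(v - mu) / std for v in x], (mu, std)

def revin_denorm(y, stats):
    """Undo the per-window standardization on the model output."""
    mu, std = stats
    return [v * std + mu for v in y]
```

Because each window supplies its own mean and standard deviation, long stretches of constant values (such as zero nighttime generation in photovoltaic data) distort the statistics, which matches the Solar failure mode discussed in this section.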
However, on the Solar dataset, RevIN leads to poorer performance, likely because the photovoltaic power generation data contains continuous segments of zero values (no power generation at night), which significantly affects the calculation of means in RevIN. Overall, in most cases, RevIN leads to better performance. We acknowledge that RevIN is an indispensable cornerstone of CycleNet's success, but it is not the key factor that sets CycleNet apart from other models in terms of performance. As shown in the comparison results in Table 10, CycleNet exhibits a significant advantage over RLinear and RMLP, which can be viewed as CycleNet without the RCF technique. This clearly demonstrates that the RCF technique is the key factor that significantly enhances the model's prediction accuracy, constituting the core contribution of this paper.

C.5 Further Analysis in Traffic Scenarios

Table 11: Comparison results on the PEMS datasets. The look-back length L is fixed at 96, and the forecast horizons are set to H ∈ {12, 24, 48, 96}. The results of other models are sourced from iTransformer [37]. The best results are highlighted in bold, and the second-best are underlined.
Dataset H | CycleNet/MLP | CycleNet/Linear | RLinear [2023] | iTransformer [2024] | PatchTST [2023] | Crossformer [2023] | DLinear [2023] | SCINet [2022]
          | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE | MSE MAE
PEMS03 12 | 0.066 0.172 | 0.080 0.192 | 0.126 0.236 | 0.071 0.174 | 0.099 0.216 | 0.090 0.203 | 0.122 0.243 | 0.066 0.172
24 | 0.089 0.201 | 0.120 0.237 | 0.246 0.334 | 0.093 0.201 | 0.142 0.259 | 0.121 0.240 | 0.201 0.317 | 0.085 0.198
48 | 0.136 0.247 | 0.156 0.258 | 0.551 0.529 | 0.125 0.236 | 0.211 0.319 | 0.202 0.317 | 0.333 0.425 | 0.127 0.238
96 | 0.182 0.282 | 0.199 0.292 | 1.057 0.787 | 0.164 0.275 | 0.269 0.370 | 0.262 0.367 | 0.457 0.515 | 0.178 0.287
PEMS04 12 | 0.078 0.186 | 0.089 0.201 | 0.138 0.252 | 0.078 0.183 | 0.105 0.224 | 0.098 0.218 | 0.148 0.272 | 0.073 0.177
24 | 0.099 0.212 | 0.127 0.245 | 0.258 0.348 | 0.095 0.205 | 0.153 0.275 | 0.131 0.256 | 0.224 0.340 | 0.084 0.193
48 | 0.133 0.248 | 0.169 0.286 | 0.572 0.544 | 0.120 0.233 | 0.229 0.339 | 0.205 0.326 | 0.355 0.437 | 0.099 0.211
96 | 0.167 0.281 | 0.189 0.293 | 1.137 0.820 | 0.150 0.262 | 0.291 0.389 | 0.402 0.457 | 0.452 0.504 | 0.114 0.227
PEMS07 12 | 0.062 0.162 | 0.075 0.183 | 0.118 0.235 | 0.067 0.165 | 0.095 0.207 | 0.094 0.200 | 0.115 0.242 | 0.068 0.171
24 | 0.086 0.192 | 0.113 0.225 | 0.242 0.341 | 0.088 0.190 | 0.150 0.262 | 0.139 0.247 | 0.210 0.329 | 0.119 0.225
48 | 0.128 0.234 | 0.157 0.254 | 0.562 0.541 | 0.110 0.215 | 0.253 0.340 | 0.311 0.369 | 0.398 0.458 | 0.149 0.237
96 | 0.176 0.268 | 0.207 0.291 | 1.096 0.795 | 0.139 0.245 | 0.346 0.404 | 0.396 0.442 | 0.594 0.553 | 0.141 0.234
PEMS08 12 | 0.082 0.185 | 0.091 0.201 | 0.133 0.247 | 0.079 0.182 | 0.168 0.232 | 0.165 0.214 | 0.154 0.276 | 0.087 0.184
24 | 0.117 0.226 | 0.140 0.251 | 0.249 0.343 | 0.115 0.219 | 0.224 0.281 | 0.215 0.260 | 0.248 0.353 | 0.122 0.221
48 | 0.169 0.268 | 0.200 0.291 | 0.569 0.544 | 0.186 0.235 | 0.321 0.354 | 0.315 0.355 | 0.440 0.470 | 0.189 0.270
96 | 0.233 0.306 | 0.272 0.328 | 1.166 0.814 | 0.221 0.267 | 0.408 0.417 | 0.377 0.397 | 0.674 0.565 | 0.236 0.300
Avg | 0.125 0.229 | 0.149 0.252 | 0.514 0.482 | 0.119 0.218 | 0.217 0.306 | 0.220 0.304 | 0.320 0.394 | 0.121 0.222

CycleNet, formed by combining the RCF technique with a simple backbone, achieved state-of-the-art performance across multiple domains but fell short in the traffic domain. To further investigate the reasons behind this, we report the complete performance of CycleNet on the PEMS datasets (the same four public subsets adopted in SCINet [34]) in Table 11. The results show that: (i) CycleNet still achieved top-tier prediction accuracy, and (ii) although CycleNet underperformed compared to iTransformer in this scenario, the MSE gap narrowed from approximately 10% on the Traffic dataset to about 5% here. Regarding the first point, it is important to highlight the effectiveness of RCF. CycleNet's backbone is merely a single-layer Linear or a two-layer MLP, without any additional design or deep stacking, yet it still delivers excellent results. Specifically, comparing CycleNet/Linear with RLinear and DLinear makes it evident that RCF is the major contributor to narrowing the gap between the simple Linear model and those state-of-the-art models.

Table 12: Statistical characteristics of the datasets, including the average number of extreme points per channel (Z-score > 6), the average maximum extreme value per channel, and the cosine similarity between channels.

Statistic | Traffic | Electricity | Solar-Energy | ETTh1 | PEMS03 | PEMS04 | PEMS07 | PEMS08
Avg. Extreme Points | 23.8 | 1.4 | 0 | 0 | 0.9 | 0.1 | 3.5 | 4.8
Avg. Max Extreme | 9.27 | 4.14 | 2.92 | 4.08 | 2.87 | 2.66 | 2.61 | 2.77
Cosine Similarity | 0.56 | 0.46 | 0.92 | 0.21 | 0.84 | 0.77 | 0.80 | 0.78

For the second point, we further analyzed the statistical characteristics of the datasets in Table 12 to explore the underlying reasons. Specifically, we examined the presence of extreme values within channels and the cosine similarity between channels. We found that the Traffic dataset contains very significant outliers, both in terms of quantity and magnitude.
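The statistics reported in Table 12 are simple per-channel quantities: extreme points are values whose absolute Z-score exceeds 6, and inter-channel similarity is the cosine similarity between channel series. A minimal sketch of both computations (the helper names are ours; the paper does not specify this code):

```python
def extreme_stats(x, thresh=6.0):
    """Count points with |Z-score| > thresh and return the maximum |Z-score|."""
    mu = sum(x) / len(x)
    std = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5
    if std == 0:
        return 0, 0.0  # constant channel: no extremes
    z = [abs(v - mu) / std for v in x]
    return sum(1 for v in z if v > thresh), max(z)

def cosine_similarity(a, b):
    """Cosine similarity between two channel series of equal length."""
    dot = sum(p * q for p, q in zip(a, b))
    norm_a = sum(p * p for p in a) ** 0.5
    norm_b = sum(q * q for q in b) ** 0.5
    return dot / (norm_a * norm_b)
```

A single large spike in an otherwise flat channel is enough to register as an extreme point, which is the situation that skews RCF's learned average cycles on the Traffic dataset.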
The presence of these outliers: (i) may affect the effectiveness of RCF. The fundamental working principle of RCF is to learn the historical average cycles in the dataset. The average cycles learned by RCF can therefore be skewed by these significant outliers, for example when the mean at a certain point in the cycle is exaggerated. Consequently, during each prediction, the original sequence subtracts a locally exaggerated average cycle, resulting in an inaccurate residual component and degrading the local point predictions within each cycle. The more inaccurate these local point predictions are, the larger the discrepancy between MSE and MAE, since MSE significantly amplifies the impact of a few large errors. This explains why, in Table 4, combining iTransformer with RCF decreases MAE but increases MSE: overall prediction accuracy improves, but anomalies appear in local point predictions. (ii) They highlight the necessity of stronger spatiotemporal relationship modeling. Models such as iTransformer and GNN-based approaches, which accurately model inter-channel relationships, are better suited to scenarios with extreme points and temporal-lag characteristics. For example, when a sudden traffic surge occurs at a certain junction, these models, having correctly modeled the spatiotemporal relationships, can accurately predict possible traffic surges at other junctions. In contrast, the current CycleNet only considers single-channel modeling, making it somewhat limited in this scenario. These underlying reasons explain why CycleNet did not achieve the best performance on the Traffic dataset and showed a relatively large performance gap. On the PEMS datasets, although they are also traffic datasets, the presence of extreme points is significantly less severe than in the Traffic dataset. Therefore, CycleNet's performance on the PEMS datasets improved compared to the Traffic dataset (the MSE gap relative to the state of the art was reduced from approximately 10% to about 5%).
This further validates the effectiveness of RCF, but it also indicates that in more complex traffic scenarios, reasonable spatiotemporal relationship modeling (or multivariate relationship modeling) is essential. Additionally, while solar scenarios might intuitively also involve significant spatiotemporal relationships, in practice these relationships are much weaker than in traffic scenarios. Firstly, the weather conditions in the same region are often similar, leading to similar power generation curves. For instance, the Solar-Energy dataset's channels have a cosine similarity as high as 0.92 (shown in Table 12), which indirectly indicates weaker spatial characteristics. Secondly, extreme points are rare in solar scenarios because photovoltaic systems have a maximum power threshold. Fewer extreme points mean that the impact of temporal-lag characteristics is smaller. This explains why, compared to the Traffic dataset, the gains from the RCF technique are much more significant on the Solar-Energy dataset. In summary, when dealing with traffic scenarios that may involve significant outliers and emphasize spatiotemporal relationship modeling, the current version of CycleNet may not be fully adequate. There are two direct and meaningful directions for improvement that could address this issue: (1) enhancing the current RCF technique to be more robust to the presence of outliers; (2) exploring a more reasonable multi-channel modeling technique within the RCF framework. We leave these challenges for future work and encourage the community to further research more robust and powerful periodic modeling techniques.

NeurIPS Paper Checklist

1.Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: The main claims in the abstract accurately reflect our contributions.
Guidelines: •The answer NA means that the abstract and introduction do not include the claims made in the paper. •The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. •The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. •It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2.Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We discuss the limitations of this work in Section 5. Guidelines: •The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. • The authors are encouraged to create a separate "Limitations" section in their paper. •The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. •The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. •The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. 
Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. •The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. •If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. •While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3.Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: This paper does not include theoretical results. Guidelines: • The answer NA means that the paper does not include theoretical results. •All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. •All assumptions should be clearly stated or referenced in the statement of any theorems. •The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. •Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. • Theorems and Lemmas that the proof relies upon should be properly referenced.
4.Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We provide complete experimental details in Appendix B.3. Additionally, we have shared the full reproducible code in an anonymous repository (link provided in the abstract). Guidelines: • The answer NA means that the paper does not include experiments. •If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. •If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. •Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. •While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution.
For example (a)If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. (b)If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c)If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d)We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5.Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: We provide an anonymous link to the code and describe how to reproduce the experimental results in the README file of the code. Guidelines: • The answer NA means that the paper does not include experiments requiring code. •Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details. •While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). •The instructions should contain the exact command and environment needed to run to reproduce the results.
See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details. •The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. •The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. •At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). •Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. 6.Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: We describe the complete experimental details and hyperparameter choices in Appendix B.3. Guidelines: • The answer NA means that the paper does not include experiments. •The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. •The full details can be provided either with the code, in appendix, or as supplemental material. 7.Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: We report the standard deviations of the results for our proposed method under different settings in Table 7. Guidelines: • The answer NA means that the paper does not include experiments.
•The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. •The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). •The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) • The assumptions made should be given (e.g., Normally distributed errors). •It should be clear whether the error bar is the standard deviation or the standard error of the mean. •It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. •For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). •If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. 8.Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: We report the computational resource requirements of our proposed method in Table 3. Guidelines: • The answer NA means that the paper does not include experiments. •The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
•The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. •The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). 9.Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: Our research aligns with the NeurIPS Code of Ethics. Guidelines: •The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. •If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. •The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). 10.Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [NA] Justification: The paper focuses on advancing the field of machine learning. While our work may have various societal implications, we believe none are significant enough to warrant specific mention here. Guidelines: • The answer NA means that there is no societal impact of the work performed. •If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. •Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
•The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. •The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. •If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). 11.Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: This paper poses no such risks. Guidelines: • The answer NA means that the paper poses no such risks. •Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. •Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. 
•We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. 12.Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: The code and datasets used in the paper are publicly available and properly credited. Guidelines: •The answer NA means that the paper does not use existing assets. •The authors should cite the original paper that produced the code package or dataset. •The authors should state which version of the asset is used and, if possible, include a URL. •The name of the license (e.g., CC-BY 4.0) should be included for each asset. •For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. •If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. •For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. •If this information is not available online, the authors are encouraged to reach out to the asset's creators. 13.New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: We will make the code publicly available upon acceptance of the paper and provide detailed documentation. Guidelines: •The answer NA means that the paper does not release new assets. 
•Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. •The paper should discuss whether and how consent was obtained from people whose asset is used. •At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. 14.Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: This work does not involve crowdsourcing nor research with human subjects. Guidelines: •The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. •Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. •According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. 15.Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: This work does not involve crowdsourcing nor research with human subjects. Guidelines: •The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. 
•Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. •We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. •For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. | 6 | 1 | The CycleNet model has a minimal parameter count (around 472.9K for MLP and 123.7K for Linear variations), which suggests a relatively lightweight architecture, allowing efficient training on a single GPU. The datasets utilized are reasonably sized, with the largest being the Electricity dataset (26,304 timesteps with 321 channels), yet the model's efficient design and usage of instance normalization contribute to manageable memory usage. Considering these factors, combined with the model's training time benchmarks relative to other models that demonstrated faster training times, an estimate of 6 hours for training is practical. Given that training was performed on an NVIDIA GeForce RTX 4090 with 24 GB memory, it supports the feasibility of training CycleNet within 8 hours on a single GPU, particularly with the efficiency advantages outlined in the paper. | yes | Yes | Time Series | CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns | 2024-09-27 0:00:00 | https://github.com/ACAT-SCUT/CycleNet | 1 | https://drive.usercontent.google.com/download?id=1bNbw1y8VYp-8pkRTqbjoW-TA-G8T0EQf&export=download&authuser=0 | 25s * 30 epochs = 12.5 min for each sequence length. There are multiple sequence lengths. | https://drive.google.com/file/d/18IdZY2MOml8pmTVAoEcMoWWuU_1fI8aT/view?usp=sharing | Yes | -- Tested just for electricity. I have included the command in the Colab files. 
A sequence length of 192 was not available, so I used 336, which was the nearest. It works; just inspect run_main.sh |
PeMSD4 | PM-DMNet(R) | [] | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12T00:00:00 | https://arxiv.org/abs/2408.07100v1 | [
"https://github.com/wengwenchao123/PM-DMNet"
] | {'12 steps MAE': '18.37', '12 steps RMSE': '30.68', '12 steps MAPE': '12.01'} | [
"12 steps MAE",
"12 steps MAPE",
"12 steps RMSE"
] | Given the following paper and codebase:
Paper: Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction
Codebase: https://github.com/wengwenchao123/PM-DMNet
Improve the PM-DMNet(R) model on the PeMSD4 dataset. The result
should improve on the following metrics: {'12 steps MAE': '18.37', '12 steps RMSE': '30.68', '12 steps MAPE': '12.01'}. You must use only the codebase provided.
| Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction 

Wenchao Weng, Mei Wu, Hanyu Jiang, Wanzeng Kong, Senior Member, IEEE, Xiangjie Kong, Senior Member, IEEE, and Feng Xia, Senior Member, IEEE 

Abstract—In recent years, deep learning has increasingly gained attention in the field of traffic prediction. Existing traffic prediction models often rely on GCNs or attention mechanisms with O(N^2) complexity to dynamically extract traffic node features, which lack efficiency and are not lightweight. Additionally, these models typically only utilize historical data for prediction, without considering the impact of the target information on the prediction. To address these issues, we propose a Pattern-Matching Dynamic Memory Network (PM-DMNet). PM-DMNet employs a novel dynamic memory network to capture traffic pattern features with only O(N) complexity, significantly reducing computational overhead while achieving excellent performance. The PM-DMNet also introduces two prediction methods: Recursive Multi-step Prediction (RMP) and Parallel Multi-step Prediction (PMP), which leverage the time features of the prediction targets to assist in the forecasting process. Furthermore, a transfer attention mechanism is integrated into PMP, transforming historical data features to better align with the predicted target states, thereby capturing trend changes more accurately and reducing errors. Extensive experiments demonstrate the superiority of the proposed model over existing benchmarks. The source codes are available at: https://github.com/wengwenchao123/PM-DMNet. 

Index Terms—Traffic Prediction, Memory Network, Transfer Attention, Traffic Pattern, Time Embedding 

I. INTRODUCTION 

With the development of society and technology, there has been a significant increase in vehicles within cities, as well as the growing popularity of services like shared bicycles and ride-hailing platforms such as Uber and Didi. 
This expansion has broadened the application of urban traffic management by governments and heightened public transportation demands. However, issues such as limited resources and inadequate scheduling systems have increasingly highlighted challenges in traffic management and the imbalance of transportation demand. As a result, accurate traffic forecasting has become a crucial issue in fields such as traffic management, urban planning, and the sharing economy. Precise traffic prediction enables governments to better allocate social resources to maintain urban transportation operations. It also allows companies to distribute resources such as shared bicycles and taxis to areas with high demand, thereby avoiding their idle presence in low-demand areas. This approach can reduce energy consumption and passenger waiting times. In recent years, researchers have conducted extensive studies in traffic prediction to promote the development of intelligent transportation systems. Early traffic prediction methods utilized statistical approaches for prediction. [Footnote: This work was supported in part by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang under Grant 2024C01214, and in part by the National Natural Science Foundation of China under Grant 62072409. (Corresponding author: Xiangjie Kong.) Wenchao Weng and Xiangjie Kong are with the College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China (e-mail: 111124120010@zjut.edu.cn; xjkong@ieee.org). Mei Wu and Hanyu Jiang are with the Hangzhou Dianzi University ITMO Joint Institute, Hangzhou Dianzi University, Hangzhou 310018, China (e-mail: 222320007@hdu.edu.cn; 22320324@hdu.edu.cn). Wanzeng Kong is with the College of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China (e-mail: kongwanzeng@hdu.edu.cn). Feng Xia is with the School of Computing Technologies, RMIT University, Melbourne, VIC 3000, Australia (e-mail: f.xia@ieee.org).] 
Auto-regressive (AR), Moving Average (MA), and Auto-Regressive Integrated Moving Average (ARIMA) models [1], as the most representative classical statistical methods, have been extensively employed in traffic prediction. Additionally, machine learning techniques represented by Support Vector Regression (SVR) [2] and Kalman filters [3] have also been applied to traffic prediction to achieve more accurate predictions and handle more complex sequences. However, these methods require data to exhibit stationarity to be effective, which limits their ability to capture the intricate non-linear spatio-temporal correlations present in traffic condition. In recent years, the advancements of deep learning in domains such as Computer Vision and Natural Language Processing have motivated researchers to explore its application in traffic prediction for improved outcomes. Early deep learning prediction models conceptualized urban traffic as images and segmented them into grids. Convolutional Neural Networks (CNNs) [4] were employed to analyze spatial correlations within these grids, while Recurrent Neural Networks (RNNs) [5], [6], [7] or CNNs [8], [9] were utilized to capture temporal dependencies. However, the structure of the transportation network can be viewed as a topological graph, containing non-Euclidean attributes. CNNs only extract features from the surrounding nodes and cannot capture features from other locations across space. As Graph Convolutional Networks (GCNs) [10] are effective in handling non-Euclidean structures, they have been widely applied in the field of transportation [11], [8], [6]. Additionally, attention mechanisms [12], [13], [14] have been incorporated for spatio-temporal feature modeling. However, current methods still possess the following limitations: 1) Lack of Effective Traffic Feature Extraction: Traffic data inherently exhibits complex spatio-temporal correlations. 
To capture these spatio-temporal correlations, researchers have employed GCN to capture spatial relationships between nodes, achieving significant success. 

[Fig. 1: Comparison between GCN and DMN: (a) Graph Convolution Network (GCN), O(N^2); (b) Dynamic Memory Network (DMN), O(N). As M is constant, the time complexity of GCN and DMN is O(N^2) and O(N), respectively.] 

As shown in Figure 1(a), current methods require evaluating the correlations between all pairs of nodes to dynamically generate the graph structure and then use GCN to extract spatio-temporal correlations [6], [11], resulting in an O(N^2) computational complexity. However, in practical scenarios, the structure of transportation networks often exhibits sparsity, meaning that nodes are only correlated with a subset of other nodes, and most nodes do not have correlations with each other. As illustrated in Figure 2(a), Nodes A, B, and C exhibit evident correlations, representing a specific traffic pattern, while Nodes D and E signify another traffic pattern. Computing similarities between Nodes A, B, C, and Nodes D, E would be meaningless and resource-intensive. Recent studies [15], [16], [17] have focused on reducing computational complexity, but they each come with limitations. For instance, STWave [17] introduces an MS-ESGAT (Multi-Scale Edge-based Spatial Graph Attention) mechanism to achieve linear complexity. However, this method relies heavily on predefined graph structures, making it unsuitable for scenarios where no predefined graph is available. 2) Uncertainty in Predicting Trend Changes: Figure 2(b) illustrates two sets of historical data and their corresponding prediction targets, where the red segment represents historical data and the yellow segment represents the prediction target. 
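The complexity argument can be illustrated with a minimal NumPy sketch (illustrative only, not taken from the PM-DMNet codebase; the sizes and random features are made up): scoring every node against every other node produces an N×N matrix, while scoring nodes against a fixed memory of M patterns produces only N×M entries, i.e. O(N) for constant M.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, M, p = 307, 10, 16              # nodes (PEMSD4 has 307), memory slots, feature dim

X = rng.standard_normal((N, p))    # per-node features
P = rng.standard_normal((M, p))    # learnable memory of M traffic patterns

# GCN/attention style: score every node against every other node -> N x N entries, O(N^2)
pairwise = softmax(X @ X.T)

# DMN style: score every node against the M patterns only -> N x M entries, O(N) for constant M
pattern = softmax(X @ P.T)

print(pairwise.shape, pattern.shape)   # (307, 307) (307, 10)
```

For N = 307 nodes the pairwise score matrix already holds ~94k entries versus ~3k for the pattern scores; the gap grows linearly in N.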
As shown, the left side's historical data and corresponding prediction targets remain within a stable trend channel. However, on the right side, although the historical data is also within a stable trend channel, the corresponding prediction target shifts into a downward trend channel. This indicates that relying solely on historical data for prediction makes it challenging to capture such trend shifts. Although current studies [18], [5], [19] have proposed various methods to extract spatiotemporal features, they rely exclusively on historical data to model traffic conditions, leading to limitations in accurately capturing the trend changes of prediction targets. 

[Fig. 2: The findings about traffic data: (a) nodes with different traffic patterns; (b) similar historical traffic conditions, different future traffic conditions.] 

To address the above issues, a novel Pattern-Matching Dynamic Memory Network (PM-DMNet) model for traffic prediction is proposed in this paper. For the first challenge, a Dynamic Memory Network (DMN) is designed to extract pattern features from nodes. Specifically, a learnable memory matrix is defined to learn representative traffic patterns within the traffic conditions. The traffic features input to the model are then used in conjunction with these embeddings to compute a pattern attention matrix, which facilitates the extraction of features from the most similar traffic patterns. Simultaneously, the DMN dynamically adjusts the representative traffic patterns at each time point by combining time embeddings with memory embeddings, thus avoiding issues related to traffic pattern homogenization. Moreover, as illustrated in Figure 1, compared to the high computational complexity of GCN, which is O(N^2), this method reduces the computational complexity to O(N), significantly enhancing computational efficiency. 
To address the second challenge, two prediction methods are designed: Recurrent Multi-step Prediction (RMP) and Parallel Multi-step Prediction (PMP). RMP uses the traditional recursive approach, where predictions are made during the decoding phase by recursively utilizing the time features and extracted hidden features for the target time points. PMP directly uses the time features for the target time points and the hidden features extracted from historical data for prediction. To mitigate the errors caused by discrepancies between historical data and prediction targets, a novel Transition Attention Mechanism is introduced in PMP. Specifically, this attention mechanism leverages the inherent periodicity in traffic data by integrating the input data, its time features, and the time features of the prediction targets. This transforms the hidden states to better align with the conditions of the target time points. This method enhances the adaptability of the extracted latent features to the prediction target states, improving accuracy. Furthermore, PMP reduces the required computation time compared to RMP, as it does not involve recursion, and it also enhances prediction performance. In summary, the contributions of this paper can be summarized as follows: •We present a new traffic prediction model, named Pattern Matching Dynamic Memory Network (PM-DMNet). This model can achieve both Parallel Multi-step Prediction (PMP) and Recurrent Multi-step Prediction (RMP) in the decoder stage depending on the requirements. Compared to RMP, PMP avoids the cyclic recursion process, thereby enhancing computational efficiency. •We propose a novel Dynamic Memory Network (DMN) module designed to learn inherent representative traffic patterns within the data associated with each node. By employing a pattern matching approach, this module identifies and extracts traffic pattern features most similar to the input data while effectively reducing computational overhead. 
•We introduce a new Transfer Attention Mechanism (TAM). TAM transforms the existing historical hidden states into latent states aligned with the prediction target features, mitigating the error caused by the discrepancy between historical data and prediction targets. •Experimental results on ten authentic datasets substantiate that our proposed framework significantly outperforms state-of-the-art methods across all datasets. 

II. RELATED WORK 

A. Spatio-Temporal Prediction 

For traffic prediction, one of the most representative tasks in spatio-temporal prediction, researchers have employed a myriad of methodologies to model the spatio-temporal characteristics within traffic condition. STGCN [20] leveraged GCN and predefined matrices to capture spatial correlations between nodes, employing Gate CNNs to model such spatial dependencies. DCRNN [7] integrated diffusion convolution with GRU to model the spatio-temporal relationships inherent in traffic condition. MTGNN [21] utilized adaptive embeddings to generate an adaptive graph structure, capturing spatial correlations among diverse nodes. CCRNN [22] introduced a novel graph convolutional structure termed CGC and employed a hierarchical coupling mechanism, linking upper-layer graph structures with underlying ones to extract temporal-spatial features. GMAN [13] harnessed three distinct attention mechanisms to capture the spatio-temporal characteristics present in traffic condition. MPGCN [15] utilized GCN to identify mobility patterns at bus stops through clustering and employed GCN2Flow to predict passenger flow based on various mobility patterns. Building on the foundation of MPGCN, MPGNNFormer [16] designed a STGNNFormer to extract both temporal and spatial dependencies. Although these spatiotemporal prediction models have achieved notable success, the GCNs and attention mechanisms they use often require O(N^2) or even higher complexity, resulting in substantial computational costs. B. 
Neural Memory Network 

The Memory Network [23] introduced an external memory mechanism, enabling it to better handle and utilize long-term information. Memory networks have found extensive applications in the domains of natural language processing and machine translation. MemN2N [24] introduced a novel end-to-end memory network framework that facilitates its straightforward application in real-world environments. Kaiser et al. [25] proposed memory networks with the capability to adapt to various zero-shot scenarios. Mem2seq [26] integrated multi-hop attention mechanisms with memory networks, enabling their deployment in dialog systems. MemAE [27] explored the application of memory networks in video anomaly detection tasks, with subsequent studies [28] validating the feasibility of this approach. MTNet [29] endeavored to apply memory networks in multi-variate time series prediction, yielding promising results. In the most recent advancements, PM-MemNet [30] devised a novel Graph Convolutional Memory Network (GCMem) to model the spatio-temporal correlations inherent in given traffic condition. Additionally, MegaCRN [31], inspired by memory network principles, designed a Meta-Graph Learner to construct dynamic graphs, addressing temporal-spatial heterogeneities. Although memory networks have been applied in traffic prediction, they still require integration with other feature extraction methods (e.g., GCN) to perform effectively. Unlike previous spatio-temporal prediction models, PM-DMNet uses a dynamic memory network to extract traffic pattern features, achieving superior performance while reducing complexity to O(N), which significantly lowers computational costs. Additionally, prior research overlooks the impact of time features corresponding to the prediction targets on the targets themselves. PM-DMNet fully considers this characteristic and designs two prediction methods to utilize these time features, leading to successful outcomes. 

III. PRELIMINARIES 

A. 
Temporal Indexing Function 

TABLE I: Example of time index transformation 
Time | d(t) | w(t)
Monday, 00:05 | 0:05:00 | Monday
Monday, 01:00 | 1:00:00 | Monday
Thursday, 01:00 | 1:00:00 | Thursday

Given that traffic condition is collected at regular time intervals, each set of traffic condition possesses unique and systematic temporal information. To harness these temporal characteristics effectively, we employ a temporal indexing function to extract time-related information. Let d(t) and w(t) represent the intra-daily and weekly indexing functions, respectively. These functions transform the temporal information of the traffic condition into corresponding intra-daily and weekly time-related attributes. For specific examples, refer to Table I. 

B. Traffic Prediction 

The objective of traffic prediction is to utilize historical traffic condition to forecast future traffic condition. We represent the traffic condition X_t ∈ R^{N×C} for N nodes in the road network at time t, where C is the dimensionality of traffic condition, signifying C types of traffic condition. We model the historical traffic condition X = [X_1, X_2, ..., X_n] ∈ R^{n×N×C} over the past n time steps using the model f to predict the traffic condition Y = [Y_{n+1}, Y_{n+2}, ..., Y_{n+m}] ∈ R^{m×N×C} for the future m time steps, which can be expressed as:

[X_1, X_2, ..., X_n] −f→ [Y_{n+1}, Y_{n+2}, ..., Y_{n+m}]   (1)

In addition, the corresponding actual values are represented by Ŷ = [Ŷ_{n+1}, Ŷ_{n+2}, ..., Ŷ_{n+m}] ∈ R^{m×N×C}.

[Fig. 3: Overview of PM-DMNet structure: a TE generator supplies time embeddings to an encoder of DPMGRU cells over time steps 1..n and, via TAM, to a decoder of DPMGRU cells over time steps n+1..n+m.] 

[Fig. 4: The construction of TE generator.] 

IV. MODEL ARCHITECTURE 

Figure 3 illustrates the comprehensive architecture of PM-DMNet, which comprises a Time Embedding Generator (TE Generator), Dynamic Pattern Matching Gated Recurrent Unit (DPMGRU), and Transfer Attention Mechanism (TAM). 
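The temporal indexing functions d(t) and w(t) exemplified in Table I can be sketched as a small helper (hypothetical code, assuming Python's datetime; the output strings simply mirror the table's h:mm:ss / weekday format):

```python
from datetime import datetime

def d(t: datetime) -> str:
    """Intra-daily index: position within the day, formatted as in Table I (h:mm:ss)."""
    return f"{t.hour}:{t.minute:02d}:{t.second:02d}"

def w(t: datetime) -> str:
    """Weekly index: the day of the week."""
    return t.strftime("%A")

t = datetime(2018, 1, 1, 0, 5)   # 2018-01-01 was a Monday
print(d(t), w(t))                # 0:05:00 Monday
```

In the model these indices are used not as strings but as integer lookups into the intra-daily and weekly embedding pools described in Section IV-A.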
In the subsequent sections, we will provide a detailed exposition of each module. 

A. Time Embedding Generator 

Traffic condition is influenced by people's travel habits and lifestyles, exhibiting clear temporal patterns such as rush hours during mornings and evenings. To fully leverage temporal features, we introduce two independent embedding pools T^D ∈ R^{Nd×p}, T^W ∈ R^{Nw×p} to learn features for intra-daily and weekly patterns. Here, Nd represents the number of time slots in a day, and Nw = 7 represents the number of days in a week. As depicted in Figure 4, based on the time information t, we derive the intra-daily index d(t) and the weekly index w(t). Utilizing d(t) and w(t), we obtain the intra-daily time feature embedding T^D_{d(t)} and the weekly time feature embedding T^W_{w(t)} corresponding to the specific time point. Ultimately, these T^D_{d(t)} ∈ R^p and T^W_{w(t)} ∈ R^p are integrated to yield a combined time embedding, which can be expressed as follows:

T_t = T^D_{d(t)} ⊙ T^W_{w(t)}   (2)

where ⊙ denotes the Hadamard product. 

B. Dynamic Memory Network 

The memory module incorporates a learnable memory matrix P = [P_1, P_2, ..., P_M] ∈ R^{M×p}, where each P_i symbolizes a unique traffic pattern. To dynamically adjust the memory matrix, thereby avoiding pattern singularization and adapting to the prevailing traffic conditions at time t, we integrate the current time embedding T_t with P. This fusion can be represented as:

P_t = P ⊙ T_t   (3)

where P_t ∈ R^{M×p} represents the memory network module at time t. Through training, P_t can learn the most representative traffic patterns at time t. By integrating the time embedding T_t dynamically, the model can adjust its memory P_t to better capture evolving traffic patterns and conditions over time. As shown in Figure 5, we extract dynamic signals from the traffic condition, which can be represented as:

F^i_t = MLP(x^i_t)   (4)

where F^i_t ∈ R^p represents the dynamic signal extracted from the traffic condition x^i_t at node i. It is used to query the memory matrix for the traffic pattern most similar to x^i_t. 
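Eqs. (2)-(3) — the Hadamard-product time embedding and the time-conditioned memory — can be sketched as follows (a NumPy toy, not the released implementation; the pool sizes, dimensions, and random initialization are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Nd, Nw, p, M = 288, 7, 8, 10   # 288 five-minute slots per day, 7 weekdays, embed dim, memory slots

TD = rng.standard_normal((Nd, p))   # intra-daily embedding pool T^D
TW = rng.standard_normal((Nw, p))   # weekly embedding pool T^W
P  = rng.standard_normal((M, p))    # static memory matrix of M traffic patterns

def time_embedding(day_slot: int, weekday: int) -> np.ndarray:
    """Eq. (2): T_t = T^D[d(t)] ⊙ T^W[w(t)] (Hadamard product of the two lookups)."""
    return TD[day_slot] * TW[weekday]

Tt = time_embedding(day_slot=96, weekday=0)   # e.g. Monday 08:00 with 5-minute slots
Pt = P * Tt                                   # eq. (3): time-conditioned memory, broadcast over the M rows
print(Tt.shape, Pt.shape)                     # (8,) (10, 8)
```

Because Tt broadcasts across the M rows of P, every memory slot is modulated by the same time signature, which is what lets the memory specialize per time of day and weekday.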
Afterwards, the similarity weight between F^i_t and the memory matrix P_t is computed through a similarity calculation:

w^i_t = softmax(F^i_t P_t^T)   (5)

where w^i_t ∈ R^M represents the similarity weight vector. 

[Fig. 5: Dynamic memory network: an MLP extracts the dynamic signal, which queries the memory module through a similarity calculation.] 

Subsequently, P_t is linearly transformed to obtain the pattern features corresponding to various traffic patterns. It is then multiplied with the similarity weight vector w^i_t to extract the pattern features most similar to x^i_t, as follows:

h^i_t = w^i_t P_t   (6)

where h_t ∈ R^{M×Fout} represents the pattern features in the memory matrix P_t, and h^i_t represents the extracted traffic pattern features. Finally, the residual connection is employed to concatenate h^i_t and x^i_t for extracting hidden features:

H^i_t = (h^i_t || x^i_t) Θ   (7)

where Θ ∈ R^{Fin×Fout} represents learnable parameters. All node hidden states H^i_t are aggregated into H_t = (H^1_t, H^2_t, ..., H^N_t), serving as the final output of the dynamic memory network. 

C. Node Adaptive Parameter Learning 

To enable each node to learn its unique traffic pattern, enhancing the model's robustness and effectiveness, we utilize two parameter matrices to optimize the learnable parameters Θ. Specifically, we use the node embedding matrix E ∈ R^{N×d} and the weight pool W ∈ R^{d×Fin×Fout} to generate Θ ∈ R^{N×Fin×Fout}, which can be expressed as:

H^i_t = (h^i_t || x^i_t) Θ = (h^i_t || x^i_t) E·W   (8)

where · represents the multiplication of matrices in different dimensions. From the perspective of an individual node, E provides d independent traffic patterns, and the node adjusts W in a data-driven way to assign appropriate weights to each pattern. These weights are combined to create the node's unique traffic pattern. 

D. 
Dynamic Pattern Matching Gated Recurrent Unit 

To capture the spatio-temporal features inherent in traffic condition, we integrate the gated recurrent unit (GRU) with a dynamic memory network to construct a framework that encapsulates both temporal dynamics and spatial correlations. Specifically, we replace the MLP layer in the GRU with a dynamic memory network, resulting in the Dynamic Pattern Matching Gated Recurrent Unit (DPMGRU). Mathematically, DPMGRU can be formulated as:

r_t = σ(ϑ_r ∗G (x_t || H_{t−1}))
u_t = σ(ϑ_u ∗G (x_t || H_{t−1}))
h_t = tanh(ϑ_h ∗G (x_t || u_t ⊙ H_{t−1}))
H_t = r_t ⊙ H_{t−1} + (1 − r_t) ⊙ h_t   (9)

where X_t and H_t denote the input and output at time step t, respectively. σ represents the sigmoid activation function. r and u correspond to the reset gate and update gate, respectively. ∗G denotes the dynamic memory network module, while ϑ_r, ϑ_u, ϑ_h are the learnable parameters associated with the relevant memory network module. 

E. Transfer Attention Mechanism 

To mitigate the errors caused by the discrepancy between historical data and the prediction target, we employ a transfer attention mechanism to transform the learned hidden features from historical data. Specifically, we first linearly transform the encoder's output H_n ∈ R^{N×D}, historical time embedding T_n ∈ R^p, and future embeddings T_F = (T_{n+1}, T_{n+2}, ..., T_{n+m}) ∈ R^{m×p} into queries, keys, and values, represented as:

Q = ∀(H_n, T_F) W_Q,  K = ∀(H_n, T_n) W_K,  V = ∀(H_n, T_n) W_V   (10)

where W_Q, W_K, W_V ∈ R^{(D+p)×d_k} serve as learnable parameters, and ∀(·) denotes a broadcasting operation. Subsequently, the transfer attention can be expressed as:

H_TA = attention(H_n, T_F, T_n) = softmax(Q K^T / √d_k) V   (11)

Finally, the feature fusion between H_n and H_TA ∈ R^{m×N×D} is achieved using residual connections to obtain the input for the decoder:

H_out = MLP(∀(H_n, H_TA))   (12)

where H_out = (H_{n+1}, H_{n+2}, ..., H_{n+m}) ∈ R^{m×N×D} corresponds to the hidden features from time points n+1 to n+m for the prediction target. 
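A toy NumPy sketch of one DPMGRU step (eq. (9)), with the dynamic memory network of eqs. (4)-(7) standing in for the ∗G operator. Assumptions to flag: a single tanh layer replaces the MLP of eq. (4), a shared Θ replaces the node-adaptive Θ = E·W of eq. (8), the gate/combination wiring follows eq. (9) exactly as printed, and all sizes are arbitrary — this is not the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, D, M, p = 4, 2, 8, 10, 8   # nodes, input channels, hidden dim, memory slots, pattern dim

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def make_dmn(in_dim):
    """Parameters of one *G operator (shared Θ instead of eq. (8)'s node-adaptive E·W)."""
    return {"Wf": rng.standard_normal((in_dim, p)) * 0.1,
            "Theta": rng.standard_normal((p + in_dim, D)) * 0.1}

def dmn(params, z, Pt):
    """Eqs. (4)-(7): query the time-conditioned memory Pt with a dynamic signal."""
    F = np.tanh(z @ params["Wf"])                             # dynamic signal (MLP stand-in), eq. (4)
    w = softmax(F @ Pt.T)                                     # similarity weights, eq. (5)
    h = w @ Pt                                                # matched pattern features, eq. (6)
    return np.concatenate([h, z], axis=-1) @ params["Theta"]  # residual concat + projection, eq. (7)

gates = {name: make_dmn(C + D) for name in ("r", "u", "h")}
Pt = rng.standard_normal((M, p))                              # time-conditioned memory, eq. (3)

def dpmgru_step(x, H_prev):
    """One recurrence of eq. (9)."""
    xh = np.concatenate([x, H_prev], axis=-1)
    r = sigmoid(dmn(gates["r"], xh, Pt))                      # reset gate
    u = sigmoid(dmn(gates["u"], xh, Pt))                      # update gate
    h = np.tanh(dmn(gates["h"], np.concatenate([x, u * H_prev], axis=-1), Pt))
    return r * H_prev + (1 - r) * h                           # combination as written in eq. (9)

H = np.zeros((N, D))
for _ in range(12):                                           # encode n = 12 historical steps
    H = dpmgru_step(rng.standard_normal((N, C)), H)
print(H.shape)                                                # (4, 8)
```

Per step, each node touches only the M memory rows rather than the other N−1 nodes, which is where the O(N) cost of the DMN shows up inside the recurrence.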
By employing T_n and T_F, these features undergo transfer learning to adapt more effectively to the state of the prediction target time points. 

F. Encoder-Decoder Architecture 

The traditional encoder-decoder architecture typically employs the Recurrent Multi-step Prediction (RMP) method for forecasting. However, recurrent decoding has inherent limitations, including: (i) error accumulation due to recurrent predictions, and (ii) the sequential nature of recursion, which restricts the model's ability for parallel computation, thus limiting the improvement of inference speed. [32] demonstrates that Parallel Multi-step Prediction (PMP) methods can achieve comparable or even better results than RMP when appropriate techniques are applied. Therefore, two variants are designed to implement and investigate these prediction methods: 

[Fig. 6: Comparison of Recurrent Multi-step Prediction (RMP) and Parallel Multi-step Prediction (PMP): (a) RMP decodes steps n+1..n+m recursively through DPMGRU; (b) PMP decodes all steps in parallel after TAM.] 

PM-DMNet(R): As illustrated in Figure 6(a), PM-DMNet(R) employs the classic Recurrent Multi-step Prediction (RMP) method, where Y_t is derived through single-step prediction. Subsequently, the predicted Y_t serves as the input for predicting Y_{t+1}, iterating this process until the complete prediction output is obtained. 

PM-DMNet(P): Inspired by [32], PM-DMNet(P) adopts the Parallel Multi-step Prediction (PMP) method. As shown in Figure 6(b), during the decoding phase, the encoder's output H_n is first processed through TAM to obtain H_out, which alleviates the discrepancy between historical data and prediction targets, aligning it more closely with the state of the prediction targets. Subsequently, H_out = (H_{n+1}, H_{n+2}, ..., H_{n+m}) and T_F = (T_{n+1}, T_{n+2}, ..., T_{n+m}) are segmented and input into DPMGRU to predict the corresponding targets Y = (Y_{n+1}, Y_{n+2}, ..., Y_{n+m}). 
Since recursive computation is not required, the prediction targets Y can be predicted in parallel, avoiding the issue of accumulating prediction errors with recursion steps. The details of model training and prediction are presented in Algorithm 1. 

V. EXPERIMENTAL SETUP 

A. Datasets & Settings 

In this section, experiments are conducted on ten real-world datasets to validate the effectiveness of the proposed PM-DMNet. The datasets used are categorized into four types: bike demand datasets include NYC-Bike14 [4], NYC-Bike15 [33], and NYC-Bike16 [22]; taxi demand datasets include NYC-Taxi15 [33] and NYC-Taxi16 [22]; traffic flow datasets include PEMSD4 [34], PEMSD7 [20], and PEMSD8 [34]; and traffic speed datasets include PEMSD7(M) and PEMSD7(L) [9]. Detailed information about the datasets and the training set divisions can be found in Table II. Moreover, unlike the traffic flow and traffic speed datasets, the traffic demand datasets have two dimensions: 'Pick-up' and 'Drop-off'. We set n = 12 historical time steps to predict m = 12 future time steps. All experiments are conducted on a server equipped with an NVIDIA GeForce GTX 4090 GPU. The Adam optimizer is used for model optimization, and the Mean Absolute Error (MAE) is adopted as the loss function. The hyper-parameter 

Algorithm 1 Training algorithm of PM-DMNet. 
Input: The traffic dataset O, encoder's function fen(·), decoder's function fde(·), TAM's function ftam(·), prediction type T, scheduled sampling function fss(·) 
1: repeat 
2: select an input X ∈ R^{n×N×C}, label Ŷ ∈ R^{m×N×C}, time information t, initialize hidden state H_0. 
3: compute T_t = T^D_{d(t)} ⊙ T^W_{w(t)} 
4: for i in 1, 2, ..., n do 
5: compute H_i = fen(X[i, ...], H_{i−1}, T_i) 
6: end for 
7: if T = PM-DMNet(P) then 
8: Initialize a zero tensor Y_in ∈ R^{m×N×C} as the input to the decoder. 
9: compute H_out = ftam(H_n, T_n, T_F) 
10: compute Y = fde(Y_in, H_out, T_F) 
11: end if 
12: if T = PM-DMNet(R) then 
13: set iter = 1; 
14: Initialize a zero tensor Y_in ∈ R^{N×C} as the input to the decoder. 
15: forqin1,2, ..., m do 16: compute Y[q,:] =fde(Yin, Hm+q−1, Tn+q) 17: compute εi=fss(iter) 18: generate a random number µ∼N(0,1). 19: ifµ < ε ithen 20: Yin=ˆY[q, ...]. 21: else 22: Yin=Y[q, ...] 23: end if 24: end for 25: end if 26: Calculate loss Lby using MAE. 27: Update model parameters according to loss L. 28:until convergence of the model is achieved Output: learned model. TABLE II: Statistics of datasets. Data type Datasets Nodes Time steps Time Range Time interval Train/Val/Test Bike DemandNYC-Bike14 128 4392 04/2014 - 09/2014 1 hour 7/1/2 NYC-Bike15 200 2880 01/2015 - 03/2015 30 min 7/1/2 NYC-Bike16 250 4368 04/2016 - 06/2016 30 min 7/1.5/1.5 Taxi DemandNYC-Taxi15 200 2880 01/2015 - 03/2015 30 min 7/1/2 NYC-Taxi16 266 4368 04/2016 - 06/2016 30 min 7/1.5/1.5 Traffic FlowPEMSD4 307 16992 01/2018 - 02/2018 5min 6/2/2 PEMSD7 883 28224 05/2017 - 08/2017 5min 6/2/2 PEMSD8 170 17856 07/2016 - 08/2016 5min 6/2/2 Traffic SpeedPEMSD7(M) 228 12672 05/2012 - 06/2012 5min 6/2/2 PEMSD7(L) 1026 12672 05/2012 - 06/2012 5min 6/2/2 settings for the model under the two different prediction methods, such as the temporal embedding dimension p, node embedding dimension d, memory network matrix dimension M, batch size, and learning rate , are detailed in Table III. During training, an early stopping strategy is employed to terminate training and prevent over-fitting. Additionally, a scheduled sampling strategy [35] is applied to PM-DMNet(R) to enhance its robustness. 7 TABLE III: Model hyper-parameter settings. DatasetsPM-DMNet(P) PM-DMNet(R)M batchsize learning ratep d p d NYC-Bike14 20 10 20 10 10 64 0.03 NYC-Bike15 12 6 12 6 10 64 0.03 NYC-Bike16 20 10 20 10 10 64 0.03 NYC-Taxi15 20 10 20 10 10 64 0.03 NYC-Taxi16 20 10 20 10 10 64 0.03 PEMSD4 24 12 20 10 10 64 0.03 PEMSD7 24 12 24 12 10 64 0.03 PEMSD8 20 10 12 6 10 64 0.03 PEMSD7(M) 8 4 10 5 10 64 0.03 PEMSD7(L) 16 8 20 10 10 64 0.03 B. 
Baselines

To compare performance, the following 24 baselines with official code are compared with PM-DMNet:

1) Traditional Models:
• HA [36]: It utilizes historical averages to iteratively predict the future.
• ARIMA [1]: It integrates moving averages into an auto-regressive model.
• VAR [37]: It is a statistical model capable of capturing spatial dependencies.

2) Machine Learning Models:
• SVR [2]: It uses support vector machines for prediction.
• XGBoost [38]: It is a classical and widely adopted machine learning model.

3) Deep Learning Models:
• LSTM [39]: It makes predictions through iterations.
• TCN [40]: It employs causal convolutions and dilated convolutions to capture temporal correlations.
• STGCN [4]: It uses graph convolution and one-dimensional convolutional neural networks to separately extract spatial and temporal correlations.
• STGCN [9]: It combines TCN with GCN to extract spatio-temporal dependencies.
• DCRNN [7]: It combines diffusion convolution and GRU to extract spatio-temporal correlations.
• STG2Seq [41]: It captures temporal dependencies from both long-term and short-term perspectives.
• GWN [8]: It integrates gated TCN and adaptive-graph GCN to capture spatio-temporal dependencies.
• ASTGCN [34]: It applies attention mechanisms to spatio-temporal convolutions to extract dynamic spatio-temporal correlations.
• LSGCN [42]: It uses graph convolutional networks and a novel cosine graph attention network to capture long-term and short-term spatial dependencies.
• STFGNN [20]: It designs a spatio-temporal fusion graph to capture local spatio-temporal correlations.
• STSGCN [20]: It constructs a three-dimensional graph for graph convolution to capture spatio-temporal correlations between nodes.
• MTGNN [21]: It employs self-learned adjacency matrices and a time convolution module to capture spatio-temporal correlations between different variables.
• CCRNN [5]: It designs a coupled layer-wise graph convolution for prediction.
• STGODE [43]: It leverages neural ODEs to reconstruct GCN, alleviating the over-smoothing problem in deep GCNs.
• GTS [44]: It learns the graph structure among multiple time series and simultaneously makes predictions using a GNN.
• ESG [19]: It designs an evolving structure learner to construct a series of adjacency matrices. These matrices not only receive information from the current input but also maintain the hidden states of historical graph structures.
• MVFN [18]: It uses graph convolution and attention mechanisms to extract local and global spatial features. Additionally, it employs multi-channel and separable temporal convolutional networks to extract overall temporal features.
• STWave [17]: It uses the DWT algorithm to decouple traffic data for modeling. Additionally, it designs a novel local graph attention network to efficiently and effectively model dynamic spatial correlations.
• MegaCRN [31]: It designs a Meta-Graph Learner to construct dynamic graphs, addressing spatio-temporal heterogeneities.

C. Metrics

The following four evaluation metrics are chosen to assess model performance: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Empirical Correlation Coefficient (CORR):

$$\mathrm{MAE} = \frac{1}{\phi}\sum_{i=1}^{\phi}\left|Y_i-\hat{Y}_i\right| \qquad (13)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{\phi}\sum_{i=1}^{\phi}\left(Y_i-\hat{Y}_i\right)^2} \qquad (14)$$

$$\mathrm{MAPE} = \frac{1}{\phi}\sum_{i=1}^{\phi}\left|\frac{Y_i-\hat{Y}_i}{Y_i}\right| \qquad (15)$$

$$\mathrm{CORR} = \frac{1}{N}\sum_{n=1}^{N}\frac{\sum_{i=1}^{\phi}\left(Y_{n,i}-\bar{Y}_n\right)\left(\hat{Y}_{n,i}-\bar{\hat{Y}}_n\right)}{\sqrt{\sum_{i=1}^{\phi}\left(Y_{n,i}-\bar{Y}_n\right)^2\sum_{i=1}^{\phi}\left(\hat{Y}_{n,i}-\bar{\hat{Y}}_n\right)^2}} \qquad (16)$$

where $\phi$ represents the length of the predicted sequence, and $\bar{Y}_n$ and $\bar{\hat{Y}}_n$ denote the mean values of the true and predicted values at node $n$, respectively. A smaller value of MAE, RMSE, and MAPE, together with a larger value of CORR, indicates higher prediction accuracy and better prediction performance.

VI. EXPERIMENTS

A. Performance Comparison

Table IV presents the results of our model and the baselines across different datasets.
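Before turning to the results, the four evaluation metrics above translate directly into NumPy. This is a sketch under illustrative assumptions: arrays are shaped $(\phi, N)$, the toy values are invented, and MAPE assumes no zero ground-truth entries.

```python
import numpy as np

def mae(y, yhat):   # Eq. (13): mean absolute error
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):  # Eq. (14): root mean square error
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):  # Eq. (15): mean absolute percentage error (assumes y != 0)
    return np.mean(np.abs((y - yhat) / y))

def corr(y, yhat):  # Eq. (16): per-node Pearson correlation, averaged over N nodes
    yc, yhc = y - y.mean(axis=0), yhat - yhat.mean(axis=0)
    num = (yc * yhc).sum(axis=0)
    den = np.sqrt((yc ** 2).sum(axis=0) * (yhc ** 2).sum(axis=0))
    return np.mean(num / den)

# Toy example: phi = 3 prediction steps, N = 2 nodes (values are illustrative).
y    = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
yhat = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
print(mae(y, yhat), rmse(y, yhat), corr(y, yhat))   # perfect prediction: 0.0 0.0 1.0
```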
Clearly, optimal results are achieved by our model across all five datasets.

TABLE IV: Performance comparison between PM-DMNet and the baselines on five traffic demand datasets. The best results are highlighted in bold, and the second-best results are underlined. Each cell reports RMSE / MAE / CORR.

Method | NYC-Bike16 | NYC-Taxi16 | NYC-Bike14 | NYC-Bike15 | NYC-Taxi15
XGBoost | 4.0494 / 2.4689 / 0.4107 | 21.1994 / 11.6806 / 0.4416 | 10.3137 / 4.8228 / 0.3322 | 8.1780 / 2.7175 / 0.1289 | 44.1421 / 14.8994 / 0.2195
DCRNN | 3.2274 / 1.8973 / 0.6601 | 14.8318 / 8.4835 / 0.6671 | 6.3259 / 2.7483 / 0.5184 | 3.8320 / 1.2645 / 0.2844 | 16.6155 / 5.6424 / 0.4909
STGCN | 3.7829 / 2.2076 / 0.5933 | 14.6473 / 7.8435 / 0.7257 | 8.5412 / 3.5833 / 0.4481 | 5.6169 / 1.6101 / 0.2529 | 28.1391 / 9.1844 / 0.3454
STG2Seq | 3.7843 / 2.2055 / 0.5413 | 19.2077 / 10.4925 / 0.5389 | 10.8561 / 4.4999 / 0.3751 | 8.2462 / 2.3272 / 0.1855 | 39.4318 / 12.8251 / 0.3764
STSGCN | 2.8846 / 1.7538 / 0.7126 | 10.9692 / 5.8299 / 0.8242 | 7.8272 / 3.2998 / 0.4656 | 5.4722 / 1.6086 / 0.2373 | 28.0221 / 8.9541 / 0.3695
MTGNN | 2.7791 / 1.6595 / 0.7353 | 10.9472 / 5.9192 / 0.8249 | 6.3548 / 2.8172 / 0.5154 | 3.9407 / 1.2947 / 0.2640 | 18.1113 / 5.9255 / 0.5284
CCRNN | 2.7674 / 1.7133 / 0.7333 | 9.8744 / 5.6636 / 0.8416 | 7.4890 / 3.5197 / 0.4861 | 4.4359 / 1.5249 / 0.2681 | 23.0052 / 8.5411 / 0.4049
GTS | 2.9258 / 1.7798 / 0.6985 | 12.7511 / 7.2095 / 0.7348 | 6.7053 / 2.9446 / 0.5044 | 4.1698 / 1.3632 / 0.2654 | 17.8672 / 6.0408 / 0.4462
ESG | 2.6727 / 1.6129 / 0.7449 | 8.9759 / 5.0344 / 0.8592 | 6.3503 / 2.7972 / 0.5175 | 3.8054 / 1.2293 / 0.2756 | 16.7635 / 5.5279 / 0.5247
MVFN | 2.6981 / 1.6565 / 0.7380 | 8.7953 / 4.9433 / 0.5607 | 6.4116 / 2.8228 / 0.5131 | 3.9282 / 1.2928 / 0.2793 | 16.2687 / 5.5613 / 0.5296
MegaCRN | 2.7480 / 1.6321 / 0.7425 | 8.7082 / 4.9082 / 0.8619 | 6.3258 / 2.8005 / 0.5185 | 3.9459 / 1.2681 / 0.2836 | 15.4985 / 5.2107 / 0.5398
PM-DMNet(P) | 2.5631 / 1.5566 / 0.7709 | 8.4699 / 4.7682 / 0.8674 | 5.8790 / 2.5687 / 0.5274 | 3.5302 / 1.1678 / 0.2849 | 14.6360 / 4.8126 / 0.5509
PM-DMNet(R) | 2.5964 / 1.5667 / 0.7638 | 8.4659 / 4.7635 / 0.8675 | 5.8656 / 2.5582 / 0.5246 | 3.7118 / 1.1947 / 0.2700 | 14.7843 / 4.8629 / 0.5429

Fig. 7: Prediction error at each horizon on five traffic demand datasets. Panels (a)-(e) show RMSE and panels (f)-(j) show MAE on NYC-Bike16, NYC-Taxi16, NYC-Bike14, NYC-Bike15, and NYC-Taxi15, respectively.

XGBoost, being a machine learning model, fails to capture the nonlinear relationships within traffic conditions, resulting in its inferior performance. DCRNN, STGCN, and STG2Seq utilize predefined graph structures to capture spatio-temporal correlations within traffic conditions, yielding satisfactory outcomes. However, because the weights in these predefined graph structures are fixed, they cannot capture dynamic correlations, leaving significant room for improvement. MTGNN and GTS demonstrate commendable progress by learning graph structures adaptively from the data. Nevertheless, these adaptive graphs remain static and fail to capture the dynamic relationships between nodes. MegaCRN employs a meta-graph learner to construct dynamic graphs for extracting correlations between nodes. However, it does not consider the influence of temporal information on traffic patterns, which limits its performance. PM-DMNet excels by leveraging a dynamic memory network to dynamically extract features, identifying the most analogous traffic patterns based on historical data. Figure 7 illustrates the prediction errors of PM-DMNet compared to three baseline models across different prediction horizons. It is observed that, except for the initial three prediction steps, PM-DMNet consistently achieves lower prediction errors than the baseline models. Additionally, the error growth rate of PM-DMNet across all time horizons is slower than that of the baseline models. Benefiting from the functionality of its evolving graph, ESG achieves short-term prediction performance comparable to PM-DMNet. However, as the prediction horizon expands, the error growth rate of ESG becomes significantly faster than that of PM-DMNet, resulting in an overall performance inferior to PM-DMNet.
By leveraging temporal information corresponding to the prediction targets, PM-DMNet substantially reduces prediction uncertainty, thereby enhancing performance. Table V presents the results of our model and the baseline models on traffic flow/speed datasets. It is observed that, except for PEMSD8, where STWave slightly outperforms PM-DMNet(P) and is comparable to PM-DMNet(R), our model achieves the best performance across all datasets. Figure 8 shows the prediction errors of PM-DMNet and the two other best baseline models at different prediction horizons. From Figure 8, it is evident that the error gaps between models are more pronounced in the flow datasets compared to the speed datasets, indicating that predicting traffic speed is more challenging than predicting traffic flow. STWave utilizes the DWT algorithm to decompose traffic data into separate low-frequency and high-frequency sequences, modeling them independently while considering the impact of temporal information, resulting in good performance on traffic flow datasets. However, on speed datasets, due to the inherent differences between traffic speed and traffic flow, the DWT algorithm struggles to decompose useful high- and low-frequency sequences, causing STWave's performance to be on par with MegaCRN.

TABLE V: Performance comparison between PM-DMNet and the baselines on five traffic flow/speed datasets. The best results are highlighted in bold, and the second-best results are underlined. Each cell reports RMSE / MAE / MAPE.

Methods | PEMSD4 | PEMSD7 | PEMSD8 | PEMSD7(M) | PEMSD7(L)
HA | 59.24 / 38.03 / 27.88% | 65.64 / 45.12 / 24.51% | 59.24 / 34.86 / 27.88% | 8.63 / 4.59 / 14.35% | 9.03 / 4.84 / 14.90%
ARIMA | 48.80 / 33.73 / 24.18% | 59.27 / 38.17 / 19.46% | 44.32 / 31.09 / 22.73% | 13.20 / 7.27 / 15.38% | 12.39 / 7.51 / 15.83%
VAR | 38.61 / 24.54 / 17.24% | 75.63 / 50.22 / 32.22% | 29.81 / 19.19 / 13.10% | 7.61 / 4.25 / 10.28% | 8.09 / 4.45 / 11.62%
SVR | 44.56 / 28.70 / 19.20% | 50.22 / 32.49 / 14.26% | 36.16 / 23.25 / 14.64% | 7.47 / 4.09 / 10.03% | 8.11 / 4.41 / 11.58%
LSTM | 40.65 / 26.77 / 18.23% | 45.94 / 29.98 / 13.20% | 35.17 / 23.09 / 14.99% | 7.51 / 4.16 / 10.10% | 8.20 / 4.66 / 11.69%
TCN | 37.26 / 23.22 / 15.59% | 42.23 / 32.72 / 14.26% | 35.79 / 22.72 / 14.03% | 7.20 / 4.36 / 9.71% | 7.29 / 4.05 / 10.43%
STGCN | 34.89 / 21.16 / 13.83% | 39.34 / 25.33 / 11.21% | 27.09 / 17.50 / 11.29% | 6.79 / 3.86 / 10.06% | 6.83 / 3.89 / 10.09%
DCRNN | 33.44 / 21.22 / 14.17% | 38.61 / 25.22 / 11.82% | 26.36 / 16.82 / 10.92% | 7.18 / 3.83 / 9.81% | 8.33 / 4.33 / 11.41%
GWN | 39.66 / 24.89 / 17.29% | 41.50 / 26.39 / 11.97% | 30.05 / 18.28 / 12.15% | 6.24 / 3.19 / 8.02% | 7.09 / 3.75 / 9.41%
ASTGCN(r) | 35.22 / 22.93 / 16.56% | 37.87 / 24.01 / 10.73% | 28.06 / 18.25 / 11.64% | 6.18 / 3.14 / 8.12% | 6.81 / 3.51 / 9.24%
LSGCN | 33.86 / 21.53 / 13.18% | 41.46 / 27.31 / 11.98% | 26.76 / 17.73 / 11.20% | 5.98 / 3.05 / 7.62% | 6.55 / 3.49 / 8.77%
STSGCN | 33.65 / 21.19 / 13.90% | 39.03 / 24.26 / 10.21% | 26.80 / 17.13 / 10.96% | 5.93 / 3.01 / 7.55% | 6.88 / 3.61 / 9.13%
AGCRN | 32.26 / 19.83 / 12.97% | 36.55 / 22.37 / 9.12% | 25.22 / 15.95 / 10.09% | 5.84 / 2.99 / 7.42% | 6.04 / 3.13 / 7.75%
STFGNN | 32.51 / 20.48 / 16.77% | 36.60 / 23.46 / 9.21% | 26.25 / 16.94 / 10.60% | 5.74 / 2.93 / 7.28% | 5.96 / 3.07 / 7.71%
STGODE | 32.82 / 20.84 / 13.77% | 37.54 / 22.59 / 10.14% | 25.97 / 16.81 / 10.62% | 5.66 / 2.97 / 7.36% | 5.98 / 3.22 / 7.94%
STWave | 30.39 / 18.50 / 12.43% | 33.88 / 19.94 / 8.38% | 23.40 / 13.42 / 8.90% | 5.39 / 2.66 / 6.76% | 5.87 / 2.88 / 7.25%
MegaCRN | 31.03 / 19.07 / 12.71% | 33.83 / 20.42 / 8.68% | 24.15 / 15.19 / 9.88% | 5.40 / 2.67 / 6.73% | 5.84 / 2.88 / 7.19%
PM-DMNet(P) | 30.36 / 18.34 / 12.05% | 33.33 / 19.35 / 8.05% | 23.35 / 13.55 / 9.04% | 5.33 / 2.61 / 6.55% | 5.79 / 2.81 / 7.13%
PM-DMNet(R) | 30.68 / 18.37 / 12.01% | 33.15 / 19.18 / 7.95% | 23.22 / 13.40 / 8.87% | 5.36 / 2.60 / 6.57% | 5.81 / 2.79 / 6.99%

Fig. 8: Prediction error at each horizon on five flow/speed datasets. Panels (a)-(e) show RMSE and panels (f)-(j) show MAE on PEMSD4, PEMSD7, PEMSD8, PEMSD7(M), and PEMSD7(L), respectively.
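The frequency decoupling attributed to STWave can be illustrated with a minimal single-level Haar DWT in NumPy. This is a sketch for intuition only, with an invented toy series; it is not STWave's actual pipeline, which uses learned models on top of the decomposed sequences.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: split a series into low- and high-frequency parts."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)    # low-frequency (trend) coefficients
    detail = (even - odd) / np.sqrt(2)    # high-frequency (fluctuation) coefficients
    return approx, detail

flow = np.array([10., 12., 11., 13., 30., 28., 12., 10.])  # toy traffic-flow series
lo, hi = haar_dwt(flow)
print(lo.round(2))   # smooth trend component
print(hi.round(2))   # rapid-fluctuation component
```

The transform is invertible (even samples are `(lo + hi) / sqrt(2)`, odd samples are `(lo - hi) / sqrt(2)`), which is what lets the two components be modeled independently and then recombined.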
PM-DMNet does not rely on sequence decomposition for modeling, thus avoiding the difficulties associated with ineffective decomposition and achieving excellent performance on both flow and speed datasets.

B. Computation Cost

To compare and demonstrate the computational efficiency of our model, we evaluate the training time, inference time, and GPU cost of selected models. The batch size for all models is set to 32. Table VI shows the computational costs of PM-DMNet compared to baseline models.

TABLE VI: The computation cost on four datasets.

Dataset     | Model       | Training Time (s/epoch) | Inference Time (s) | GPU Cost (GB)
NYC-Bike16  | PM-DMNet(P) | 4.17   | 0.29  | 1.44
            | PM-DMNet(R) | 7.26   | 0.47  | 1.46
            | ESG         | 20.83  | 1.65  | 15.60
            | MegaCRN     | 7.04   | 0.68  | 2.00
NYC-Taxi16  | PM-DMNet(P) | 4.43   | 0.29  | 1.50
            | PM-DMNet(R) | 7.53   | 0.46  | 1.50
            | ESG         | 22.93  | 1.91  | 16.50
            | MegaCRN     | 6.60   | 0.66  | 2.23
PEMSD4      | PM-DMNet(P) | 14.87  | 1.53  | 1.73
            | PM-DMNet(R) | 25.21  | 2.35  | 1.70
            | STWave      | 56.32  | 7.46  | 5.79
            | MegaCRN     | 24.60  | 3.71  | 2.44
PEMSD7      | PM-DMNet(P) | 33.77  | 4.08  | 4.79
            | PM-DMNet(R) | 41.68  | 4.09  | 4.75
            | STWave      | 272.95 | 36.53 | 16.76
            | MegaCRN     | 104.12 | 16.91 | 7.57

As observed in Table VI, the training and inference times of PM-DMNet(P) are significantly lower than those of other baselines, and it also outperforms PM-DMNet(R), demonstrating the advantages of the dynamic memory network and PMP in terms of computational speed and memory usage. Despite ESG's strong predictive performance, its high GPU cost and relatively slow processing speed present challenges for deployment. Although STWave employs a novel graph attention mechanism to optimize modeling speed, its complex network structure still demands substantial GPU resources and long training times. MegaCRN uses RMP methods while adopting a simple adaptive graph convolution method to extract spatial correlations between nodes, resulting in lower training time and GPU cost. Therefore, on datasets with fewer nodes, MegaCRN's training time is comparable to that of PM-DMNet(R), which also uses RMP methods.
However, on large-scale node datasets, the O(N^2) complexity of GCN still requires higher training time and GPU cost. In contrast, the dynamic memory network used by PM-DMNet(P) has a time complexity of O(N), significantly lower than the O(N^2) complexity of graph convolutional networks (GCNs). Consequently, on PEMSD7, PM-DMNet(P) exhibits faster training speed and lower GPU cost than MegaCRN, showcasing the computational speed advantages of our model.

C. Complexity Analysis

The computational complexity of feature aggregation in GCN is O(N^2), and both the computation of attention matrices and feature aggregation in attention mechanisms are also O(N^2). For DMN, the complexity of calculating similarity weights and aggregating features is O(NM), where M is a constant significantly smaller than N. When M is much smaller than N, the time complexity of DMN can be considered O(N). Therefore, compared to GCN and attention mechanisms, DMN exhibits notable advantages in time and memory complexity.

D. Recurrent Multi-step Prediction vs. Parallel Multi-step Prediction

In this subsection, the performance of the PMP and RMP prediction methods is compared. Tables IV and V present the results of PM-DMNet(P) and PM-DMNet(R) on traffic demand and traffic flow/speed datasets, respectively. As shown in Table IV, PM-DMNet(P) outperforms PM-DMNet(R) on three datasets for traffic demand prediction tasks and matches PM-DMNet(R) on two datasets, indicating that PM-DMNet(P) has certain advantages over PM-DMNet(R) in traffic demand prediction. This is further evidenced by the per-step prediction errors shown in Figure 7.

However, as seen in Table V, PM-DMNet(R) exhibits a performance advantage over PM-DMNet(P) in traffic flow/speed tasks. Figure 8 shows that PM-DMNet(P) has significantly larger prediction errors in the initial time steps compared to PM-DMNet(R), but the errors of both methods are comparable in the later time steps.
This phenomenon might be attributed to the different time intervals of the datasets. The traffic demand datasets are collected at 30-minute intervals, resulting in more pronounced differences between historical data and prediction targets, where the PMP method performs better than the RMP method. In contrast, the traffic flow/speed datasets are collected at 5-minute intervals, creating more continuity between historical data and prediction targets, thereby giving the RMP method an edge over the PMP method.

VII. ABLATION STUDY

A. Effectiveness Analysis of Model Components

In this section, ablation experiments are conducted on the key components of PM-DMNet to validate their effectiveness. To investigate the impact of different modules, the following variants are designed:

W/O Decoder: This variant removes the decoder component and predicts using an MLP layer applied directly to the encoder's output. Since the decoding process is omitted, this variant is identical for both PM-DMNet(P) and PM-DMNet(R).

W/O TAM: In this variant, the Transfer Attention Module (TAM) is excluded. Instead, the prediction is made using the output H_n from the encoder in place of the output H_{n+1} from the transfer attention mechanism.

W/O DMN: This variant substitutes the Dynamic Memory Network (DMN) module with an MLP layer for making predictions.

W/O NAPL: This variant removes the Node Adaptive Parameter Learning (NAPL) module and uses a linear layer instead.

TABLE VII: Ablation experiments for each module.
NYC-Bike16   | PM-DMNet(P)              | PM-DMNet(R)
Variant      | RMSE    MAE     CORR     | RMSE    MAE     CORR
PM-DMNet     | 2.5631  1.5566  0.7709   | 2.5964  1.5667  0.7638
W/O Decoder  | 2.6308  1.5949  0.7602   | 2.6308  1.5949  0.7602
W/O TAM      | 2.6341  1.5859  0.7599   | /       /       /
W/O DMN      | 3.1756  1.8078  0.6728   | 3.9676  2.2438  0.4815
W/O NAPL     | 3.1800  1.8057  0.6726   | 3.2525  1.8265  0.6689

PEMSD4       | PM-DMNet(P)              | PM-DMNet(R)
Variant      | RMSE    MAE     MAPE     | RMSE    MAE     MAPE
PM-DMNet     | 30.36   18.34   12.05%   | 30.68   18.37   12.01%
W/O Decoder  | 33.31   20.15   13.28%   | 33.31   20.15   13.28%
W/O TAM      | 30.75   18.42   12.10%   | /       /       /
W/O DMN      | 35.03   21.40   14.29%   | 39.74   25.32   17.22%
W/O NAPL     | 34.84   21.19   14.31%   | 34.98   21.30   14.29%

Table VII presents the performance of PM-DMNet(P) and PM-DMNet(R) alongside their variants. It is evident from the table that PM-DMNet(P) and PM-DMNet(R) outperform all other variants, demonstrating the effectiveness of each component.

For the W/O Decoder variant, the pattern matching process is omitted during the decoding stage, and predictions are made directly using an MLP layer. As a result, this variant can only utilize historical data information and lacks the ability to leverage the time point information of the prediction target. Consequently, its performance is inferior to both PM-DMNet(P) and PM-DMNet(R).

The performance of the W/O TAM variant also falls short of PM-DMNet(P). This indicates that the discrepancy between historical data and the prediction target leads to a performance decline, validating our proposed solution: with a suitable method, parallel prediction can achieve results comparable to or better than serial prediction.

The W/O DMN variant's performance is significantly inferior to both PM-DMNet models, highlighting the feasibility of our approach of using a memory network to match and extract the most representative traffic patterns.
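The memory-based matching just discussed can be sketched numerically. The following is an illustrative sketch, not the authors' implementation; array names and dimensions are assumptions. It contrasts dense graph-style aggregation, which mixes every node with all N nodes (O(N^2) per step), with DMN-style aggregation, which only scores each node against M memory prototypes (O(NM), effectively O(N) when M << N):

```python
import numpy as np

# N nodes, M memory slots (M << N), d-dimensional features -- assumed sizes.
N, M, d = 1000, 10, 32
rng = np.random.default_rng(0)

X = rng.standard_normal((N, d))       # node features
A = rng.standard_normal((N, N))       # dense adjacency (GCN-style): N*N entries
memory = rng.standard_normal((M, d))  # memory network matrix: M pattern slots

# GCN-style aggregation: every node mixes over all N nodes -> O(N^2 * d).
H_gcn = A @ X

# DMN-style aggregation: similarity weights against M prototypes -> O(N * M * d).
scores = X @ memory.T                           # (N, M) similarity logits
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # softmax over memory slots
H_dmn = weights @ memory                        # (N, d) matched pattern features
```

Because M stays fixed as N grows, the memory lookup scales linearly in the number of nodes, which is the source of the Table VI/VIII speed gap on large graphs.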
Similarly, the performance of the W/O NAPL variant is lower than that of the two PM-DMNet models, underscoring the necessity for the model to learn the unique traffic patterns of each node.

Fig. 9: Ablation experiment of time embedding. [Panels (a)–(d): RMSE on NYC-Bike16, NYC-Taxi16, PEMSD4, and PEMSD7(M); panels (e)–(h): MAE on the same datasets; plot data omitted.]

B. Effectiveness Analysis of GCN and DMN

To validate the differences in performance and computational cost between GCN and DMN, a variant named DGCNet is designed. This variant uses dynamic graph convolution instead of DMN.
The formula for dynamic graph convolution is expressed as follows:

E_t^d = F_t ⊙ T_t                                   (17)
A_t^d = ReLU(E_t^d (E_t^d)^T)                       (18)
H_t = (I_N + D^{-1/2} A_t^d D^{-1/2}) X Θ           (19)

where A_t^d ∈ R^{N×N} represents the dynamic graph at time point t, D is the degree matrix of A_t^d, and I_N is the identity matrix. Similar to PM-DMNet, DGCNet can be divided into two variants based on the prediction method: DGCNet(P) and DGCNet(R).

TABLE VIII: Ablation experiment of GCN and DMN.

Dataset     | Model       | RMSE  | MAE   | MAPE  | Train Time (s/epoch) | Inference Time (s) | GPU Cost (GB)
PEMSD7 (16) | DGCNet(P)   | 33.81 | 19.60 | 8.28% | 231.10 | 23.62 | 13.04
            | PM-DMNet(P) | 33.33 | 19.35 | 8.05% | 49.16  | 5.68  | 2.96
            | DGCNet(R)   | 33.38 | 19.39 | 8.06% | 237.98 | 24.00 | 12.58
            | PM-DMNet(R) | 33.15 | 19.18 | 7.95% | 81.43  | 7.67  | 3.45
PEMSD8 (64) | DGCNet(P)   | 23.99 | 13.95 | 9.29% | 7.99   | 1.00  | 3.20
            | PM-DMNet(P) | 23.35 | 13.55 | 9.04% | 7.82   | 0.84  | 1.64
            | DGCNet(R)   | 23.55 | 13.70 | 8.92% | 13.56  | 1.32  | 2.87
            | PM-DMNet(R) | 23.22 | 13.40 | 8.87% | 13.48  | 1.26  | 1.49

Experiments are conducted on PEMSD7 and PEMSD8, with a batch size of 16 for PEMSD7 and 64 for PEMSD8. Table VIII presents the results of GCN and DMN on these two datasets. It can be observed that PM-DMNet outperforms DGCNet, indicating that DMN can achieve excellent performance without relying on GCN. Additionally, while PM-DMNet's computational metrics are slightly better than DGCNet's on the smaller PEMSD8 dataset, the difference is not significant. However, on the larger PEMSD7 dataset, PM-DMNet's computational metrics are significantly superior to those of DGCNet, demonstrating the advantage of DMN's O(N) complexity over GCN's O(N^2) complexity in large-scale node scenarios.

C. Effectiveness Analysis of Time Embedding

To validate the impact of intra-daily time features and weekly time features on the model, two variants are designed for this subsection:

use day: The dynamic memory network is updated using only intra-daily time feature embeddings.

use week: The dynamic memory network is updated using only weekly time feature embeddings.
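For concreteness, Eqs. (17)–(19) can be sketched in NumPy. This is a minimal sketch under assumed shapes (F_t and T_t as N×d node/time embeddings, X as N×c inputs, Θ as a c×c' weight matrix), not the DGCNet code; a small epsilon guards against zero-degree rows:

```python
import numpy as np

def dynamic_gcn_step(F_t, T_t, X, Theta, eps=1e-12):
    """One dynamic graph convolution step, following Eqs. (17)-(19)."""
    E = F_t * T_t                                  # Eq. (17): E_t^d = F_t ⊙ T_t
    A = np.maximum(E @ E.T, 0.0)                   # Eq. (18): A_t^d = ReLU(E_t^d (E_t^d)^T)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1) + eps)
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A_t^d D^{-1/2}
    return (np.eye(len(X)) + A_norm) @ X @ Theta   # Eq. (19)

# Toy usage with assumed sizes: N=8 nodes, d=4 embedding dims, c=3 channels.
rng = np.random.default_rng(0)
N, d, c = 8, 4, 3
H = dynamic_gcn_step(rng.standard_normal((N, d)), rng.standard_normal((N, d)),
                     rng.standard_normal((N, c)), rng.standard_normal((c, c)))
```

The N×N products in Eqs. (18)–(19) are exactly where the O(N^2) cost reported in Table VIII comes from.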
Experiments are conducted on four datasets to observe the influence of time information on model performance across different types of data. Figure 9 presents the performance of PM-DMNet(P) and PM-DMNet(R) along with their variants.

It can be observed that when only one type of time feature embedding is used, the model's performance generally decreases. Except for the NYC-Taxi16 dataset, where use week outperforms use day in PM-DMNet(P), the performance of use day is superior to use week in all other cases. This indicates that intra-daily information typically has a greater impact on model performance than weekly information. Additionally, on the PEMSD7(M) dataset, the performance of use day is comparable to that of the original models, while the performance of use week varies significantly. This suggests that, unlike other types of data, traffic speed data shows less pronounced differences between weekdays and weekends, exhibiting high similarity.

VIII. HYPER-PARAMETER ANALYSIS

To validate the impact of hyperparameters on model performance, hyperparameter experiments are conducted on the PEMSD8 dataset. Specifically, we investigate the effects of the temporal embedding dimension p, the dimension d of the node embedding matrix E in the node adaptive module, and the dimension M of the memory network matrix. In these experiments, other parameters are kept constant while only the parameter under study is changed.
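As a concrete illustration of the two time-feature types behind the use day and use week variants, each can be realized as an embedding lookup keyed by the time-of-day slot and the day of the week. This is a hypothetical sketch; the table names, slot count, and the additive combination are assumptions, not the paper's code:

```python
import numpy as np

# Assumed setup: 5-minute data gives 288 slots per day; p is the temporal
# embedding dimension studied in the hyper-parameter analysis.
steps_per_day, p = 288, 10
rng = np.random.default_rng(0)
day_table = rng.standard_normal((steps_per_day, p))  # intra-daily embedding table
week_table = rng.standard_normal((7, p))             # weekly embedding table

t = 1234  # absolute time-step index
day_emb = day_table[t % steps_per_day]               # which slot of the day
week_emb = week_table[(t // steps_per_day) % 7]      # which day of the week

# Full model uses both features; the use day / use week ablations drop one.
T_t = day_emb + week_emb
```

In training, both tables would be learnable parameters rather than fixed random matrices.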
Fig. 10: Sensitivity analysis of parameter p on PEMSD8. [Panels: (a) PM-DMNet(P), (b) PM-DMNet(R); plot data omitted.]

A. Sensitivity to p

The parameter p is set to {5, 10, 15, 20, 25, 30} to evaluate its effect on model performance. Figure 10 shows the performance of the model under different values of p. It can be seen that the model performs relatively stably when p is between 10 and 25. Additionally, across the various settings, PM-DMNet(R) consistently exhibits lower errors than PM-DMNet(P).

Fig. 11: Sensitivity analysis of parameter d on PEMSD8. [Panels: (a) PM-DMNet(P), (b) PM-DMNet(R); plot data omitted.]

B. Sensitivity to d

The parameter d is set to {2, 5, 10, 20} to evaluate its effect on model performance. The performance of the model with different values of d is shown in Figure 11. It is observed that d does not significantly affect model performance; however, it greatly impacts the training speed. When d is set between 5 and 10, the model trains quickly while maintaining performance. Therefore, d is recommended to be set to around 5 to 10.

C. Sensitivity to M

The parameter M is set to {5, 10, 15, 20} to evaluate its effect on model performance. The performance of the model with different values of M is shown in Figure 12.

Fig. 12: Sensitivity analysis of parameter M on PEMSD8. [Panels: (a) PM-DMNet(P), (b) PM-DMNet(R); plot data omitted.]

It is observed that the model achieves stable and excellent performance when M is between 5 and 20. Therefore, M is set to 10.
IX. VISUALIZATION

To explore whether the Node Adaptive Parameter module captures the unique traffic patterns of each node, we use t-SNE [45] to visualize the node embedding matrix E of the module trained on the NYC-Taxi16 dataset.

Fig. 13: Visualization of node embeddings E. The nodes in the red-bordered area are nodes 215 and 222; the node in the blue-bordered area is node 26.

Figure 13 illustrates the visualization results of the node embeddings E. From the figure, it can be observed that certain nodes exhibit a clustering phenomenon, while a few nodes overlap, indicating high similarity in their traffic patterns. Moreover, some nodes are far apart, suggesting significant differences in their traffic patterns.

To further verify the high similarity in traffic patterns among nearby nodes and the differences among distant nodes, we select adjacent nodes within the red-bordered area, specifically Node 215 and Node 222, as well as a distant node within the blue-bordered area, Node 26, and visualize their traffic demand data. Figures 14(a) and 14(b) respectively illustrate the trends in the 'Pick-up' and 'Drop-off' features of the traffic demand for these three nodes. It is evident that the trends for Node 215 and Node 222 are highly similar, indicating a strong correlation between them. Meanwhile, the trend for Node 26 is notably different from the other two nodes, suggesting a significant difference in their traffic patterns.

Fig. 14: Visualization of real traffic demand on NYC-Taxi16. [Panels: (a) 'Pick-up' features, (b) 'Drop-off' features, for Nodes 215, 222, and 26; plot data omitted.]

The visualization results above confirm that the Node Adaptive Parameter module can learn the traffic patterns of individual nodes effectively.
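The projection step itself can be reproduced with off-the-shelf t-SNE. The sketch below uses scikit-learn, with a random matrix standing in for the trained embedding E; the node count and embedding dimension are assumptions, since the exact shapes are not restated here:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for the learned node embedding matrix E (N nodes x d dims).
rng = np.random.default_rng(42)
E = rng.standard_normal((266, 10))

# Project to 2-D for a scatter plot like Fig. 13; nearby points indicate
# nodes with similar learned traffic patterns.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(E)
```

Plotting `coords` with matplotlib and highlighting nodes 215, 222, and 26 would reproduce the style of Figure 13 for a trained E.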
X. CONCLUSION

This paper proposes a novel traffic prediction model, PM-DMNet. PM-DMNet employs a new dynamic memory network module that learns the most representative traffic patterns into a memory network matrix. During prediction, the model extracts pattern features by matching the current traffic pattern against the memory network matrix. Additionally, PM-DMNet supports both parallel and sequential multi-step prediction methods to meet different needs. To further enhance the accuracy of parallel multi-step prediction, a transfer attention mechanism is introduced to mitigate the disparity between historical data and prediction targets. Extensive experiments validate the effectiveness of PM-DMNet. In future work, further methods for extracting features from patterns are planned to be explored.

REFERENCES

[1] G. E. Box and D. A. Pierce, “Distribution of residual autocorrelations in autoregressive-integrated moving average time series models,” Journal of the American Statistical Association, vol. 65, no. 332, pp. 1509–1526, 1970.
[2] C.-H. Wu, J.-M. Ho, and D.-T. Lee, “Travel-time prediction with support vector regression,” IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp. 276–281, 2004.
[3] J. Guo, W. Huang, and B. M. Williams, “Adaptive Kalman filter approach for stochastic short-term traffic flow rate prediction and uncertainty quantification,” Transportation Research Part C: Emerging Technologies, vol. 43, pp. 50–64, 2014.
[4] J. Zhang, Y. Zheng, and D. Qi, “Deep spatio-temporal residual networks for citywide crowd flows prediction,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, 2017.
[5] L. Bai, L. Yao, C. Li, X. Wang, and C. Wang, “Adaptive graph convolutional recurrent network for traffic forecasting,” Advances in Neural Information Processing Systems, vol. 33, pp. 17804–17815, 2020.
[6] F. Li, J. Feng, H. Yan, G. Jin, F. Yang, F. Sun, D. Jin, and Y.
Li, “Dynamic graph convolutional recurrent network for traffic prediction: Benchmark and solution,” ACM Transactions on Knowledge Discovery from Data, vol. 17, no. 1, pp. 1–21, 2023.
[7] Y. Li, R. Yu, C. Shahabi, and Y. Liu, “Diffusion convolutional recurrent neural network: Data-driven traffic forecasting,” in International Conference on Learning Representations, 2018.
[8] Z. Wu, S. Pan, G. Long, J. Jiang, and C. Zhang, “Graph wavenet for deep spatial-temporal graph modeling,” in Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019, pp. 1907–1913.
[9] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting,” in Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018, pp. 3634–3640.
[10] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in International Conference on Learning Representations, 2016.
[11] W. Weng, J. Fan, H. Wu, Y. Hu, H. Tian, F. Zhu, and J. Wu, “A decomposition dynamic graph convolutional recurrent network for traffic forecasting,” Pattern Recognition, vol. 142, p. 109670, 2023.
[12] J. Jiang, C. Han, W. X. Zhao, and J. Wang, “Pdformer: Propagation delay-aware dynamic long-range transformer for traffic flow prediction,” in AAAI. AAAI Press, 2023.
[13] C. Zheng, X. Fan, C. Wang, and J. Qi, “Gman: A graph multi-attention network for traffic prediction,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 01, 2020, pp. 1234–1241.
[14] S. Guo, Y. Lin, L. Gong, C. Wang, Z. Zhou, Z. Shen, Y. Huang, and H. Wan, “Self-supervised spatial-temporal bottleneck attentive network for efficient long-term traffic forecasting,” in 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023, pp. 1585–1596.
[15] X. Kong, K. Wang, M. Hou, F. Xia, G. Karmakar, and J. Li, “Exploring human mobility for multi-pattern passenger prediction: A graph learning framework,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 9, pp. 16148–16160, 2022.
[16] X. Kong, Z. Shen, K. Wang, G. Shen, and Y. Fu, “Exploring bus stop mobility pattern: a multi-pattern deep learning prediction framework,” IEEE Transactions on Intelligent Transportation Systems, 2024.
[17] Y. Fang, Y. Qin, H. Luo, F. Zhao, B. Xu, L. Zeng, and C. Wang, “When spatio-temporal meet wavelets: Disentangled traffic forecasting via efficient spectral graph attention networks,” in 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023, pp. 517–529.
[18] D. Zhang and J. Li, “Multi-view fusion neural network for traffic demand prediction,” Information Sciences, vol. 646, p. 119303, 2023.
[19] J. Ye, Z. Liu, B. Du, L. Sun, W. Li, Y. Fu, and H. Xiong, “Learning the evolutionary and multi-scale graph structure for multivariate time series forecasting,” in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 2296–2306.
[20] C. Song, Y. Lin, S. Guo, and H. Wan, “Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 01, 2020, pp. 914–921.
[21] Z. Wu, S. Pan, G. Long, J. Jiang, X. Chang, and C. Zhang, “Connecting the dots: Multivariate time series forecasting with graph neural networks,” in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020, pp. 753–763.
[22] J. Ye, L. Sun, B. Du, Y. Fu, and H. Xiong, “Coupled layer-wise graph convolution for transportation demand prediction,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 5, 2021, pp. 4617–4625.
[23] J. Weston, S. Chopra, and A. Bordes, “Memory networks,” in 3rd International Conference on Learning Representations, ICLR 2015, 2015.
[24] S. Sukhbaatar, J. Weston, R. Fergus et al., “End-to-end memory networks,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[25] L. Kaiser, O. Nachum, A. Roy, and S. Bengio, “Learning to remember rare events,” in International Conference on Learning Representations, 2016.
[26] A. Madotto, C.-S. Wu, and P. Fung, “Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018, pp. 1468–1478.
[27] D. Gong, L. Liu, V. Le, B. Saha, M. R. Mansour, S. Venkatesh, and A. v. d. Hengel, “Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1705–1714.
[28] H. Lv, C. Chen, Z. Cui, C. Xu, Y. Li, and J. Yang, “Learning normal dynamics in videos with meta prototype network,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15425–15434.
[29] Y.-Y. Chang, F.-Y. Sun, Y.-H. Wu, and S.-D. Lin, “A memory-network based solution for multivariate time-series forecasting,” arXiv preprint arXiv:1809.02105, 2018.
[30] H. Lee, S. Jin, H. Chu, H. Lim, and S. Ko, “Learning to remember patterns: Pattern matching memory networks for traffic forecasting,” International Conference on Learning Representations, 2022.
[31] R. Jiang, Z. Wang, J. Yong, P. Jeph, Q. Chen, Y. Kobayashi, X. Song, S. Fukushima, and T. Suzumura, “Spatio-temporal meta-graph learning for traffic forecasting,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 7, 2023, pp. 8078–8086.
[32] S. Lin, W. Lin, W. Wu, F. Zhao, R. Mo, and H. Zhang, “Segrnn: Segment recurrent neural network for long-term time series forecasting,” arXiv preprint arXiv:2308.11200, 2023.
[33] H. Yao, X. Tang, H. Wei, G. Zheng, and Z. Li, “Revisiting spatial-temporal similarity: A deep learning framework for traffic prediction,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 5668–5675.
[34] S. Guo, Y. Lin, N. Feng, C. Song, and H. Wan, “Attention based spatial-temporal graph convolutional networks for traffic flow forecasting,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 922–929.
[35] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, “Scheduled sampling for sequence prediction with recurrent neural networks,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[36] J. D. Hamilton, Time Series Analysis. Princeton University Press, 2020.
[37] B. M. Williams and L. A. Hoel, “Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: Theoretical basis and empirical results,” Journal of Transportation Engineering, vol. 129, no. 6, pp. 664–672, 2003.
[38] T. Chen and C. Guestrin, “Xgboost: A scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785–794.
[39] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” Advances in Neural Information Processing Systems, vol. 27, 2014.
[40] S. Bai, J. Z. Kolter, and V. Koltun, “An empirical evaluation of generic convolutional and recurrent networks for sequence modeling,” arXiv preprint arXiv:1803.01271, 2018.
[41] L. Bai, L. Yao, S. S. Kanhere, X. Wang, and Q. Z. Sheng, “Stg2seq: spatial-temporal graph to sequence model for multi-step passenger demand forecasting,” in 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. International Joint Conferences on Artificial Intelligence, 2019, pp. 1981–1987.
[42] R. Huang, C. Huang, Y . Liu, G. Dai, and W. Kong, “Lsgcn: Long short- term traffic prediction with graph convolutional networks.” in IJCAI , vol. 7, 2020, pp. 2355–2361. [43] Z. Fang, Q. Long, G. Song, and K. Xie, “Spatial-temporal graph ode networks for traffic flow forecasting,” in Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining , 2021, pp. 364–373. [44] C. Shang and J. Chen, “Discrete graph structure learning for forecasting multiple time series,” in Proceedings of International Conference on Learning Representations , 2021. [45] L. Van der Maaten and G. Hinton, “Visualizing data using t-sne.” Journal of machine learning research , vol. 9, no. 11, 2008. Wenchao Weng received his Bachelor’s degree in Information and Computing Science from Zhejiang Wanli University in 2019 and his Master’s degree in Computer Technology from Hangzhou Dianzi University in 2024. He is currently pursuing a Ph.D. in Computer Science and Technology at Zhejiang University of Technology. His research interests include data mining, spatio-temporal graph neural networks, and traffic forecasting. Mei Wu received the Bachelor’s degree from Shan- dong University in China in 2022 and is currently pursuing a Master’s degree in Computer Science at Hangzhou Dianzi University. Her main research interests focus on spatiotemporal graph data mining and intelligent transportation systems. Hanyu Jiang is currently pursuing a Bachelor’s degree at Hangzhou Dianzi University. His primary research focuses on the combination of bioinformat- ics and deep learning, specifically in the areas of multimodal and deep generative models. Wanzeng Kong (Senior Member, IEEE) received the Ph.D. degree from the Department of Electrical Engineering, Zhejiang University, in 2008. He was a Visiting Research Associate with the Department of Biomedical Engineering, University of Minnesota Twin Cities, Minneapolis, MN, USA, from 2012 to 2013. 
He is currently a Full Professor and the Director of the Cognitive Computing and BCI Labo- ratory, School of Computer Science and Technology, Hangzhou Dianzi University. His current research in- terests include machine learning, pattern recognition, and cognitive computing. Xiangjie Kong (Senior Member, IEEE) received the B.Sc. and Ph.D. degrees from Zhejiang University, Hangzhou, China, in 2004 and 2009, respectively. He is a Professor with College of Computer Science and Technology, Zhejiang University of Technology, China. Previously, he was an Associate Professor with the School of Software, Dalian University of Technology, China. He has published over 200 scien- tific papers in international journals and conferences (with over 180 indexed by ISI SCIE). His research interests include urban computing, mobile comput- ing, and computational social science. He is a Senior Member of the IEEE, a Distinguished Member of CCF, and is a member of ACM. Feng Xia (Senior Member, IEEE) received the BSc and PhD degrees from Zhejiang University, Hangzhou, China. He is a Professor in School of Computing Technologies, RMIT University, Aus- tralia. Dr. Xia has published over 300 scientific papers in journals and conferences (such as IEEE TAI, TKDE, TNNLS, TC, TMC, TBD, TCSS, TNSE, TETCI, TETC, THMS, TVT, TITS, TASE, ACM TKDD, TIST, TWEB, TOMM, WWW, AAAI, ICLR, SIGIR, WSDM, CIKM, JCDL, EMNLP, and INFOCOM). His research interests include artificial intelligence, graph learning, brain science, digital health, and robotics. He is a Senior Member of IEEE and ACM, and an ACM Distinguished Speaker. | 6 | 1 | The PM-DMNet model employs a dynamic memory network with reduced computational complexity of O(N) compared to existing methods. Given the complexity of the architecture and typical dataset sizes in traffic prediction tasks, an estimated training time of 6 hours is reasonable assuming a moderate dataset size of around 10,000-100,000 observations. 
Typical deep learning models in similar domains with comparable architectures usually take around 5-10 hours to train on large datasets. A single GPU with 16GB of memory would suffice for this model due to the proposed efficiency of the architecture, along with the use of typical batch sizes ranging from 32-128. The absence of any mention of distributed training also suggests a single GPU is adequate for the training duration. | yes | Yes | Graph | Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction | 2024-08-12 0:00:00 | https://github.com/wengwenchao123/PM-DMNet | 1 | https://drive.usercontent.google.com/download?id=1Q8boyeVNmZTz_HASN_57qd9wX1JZeGem&export=download&authuser=0 | 35 s avg * 500 epochs = 5 hours approx. | https://colab.research.google.com/drive/1MGEsXeIEGO7AKBMZ6DBZoEt73bQaCTe2?usp=sharing | Yes | -- Fairly easy one. I have included the pip installation in the Colab file. This repo does not contain a requirements.txt file. |
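The time-and-compute verification field for this row states "35 s avg * 500 epochs = 5 hour approx". A minimal sketch that checks this arithmetic (the per-epoch figure is the row's measured average, not a guarantee; the helper function name is our own):

```python
# Sanity-check for the PM-DMNet compute estimate quoted in the row above:
# ~35 s per epoch over 500 epochs on a single GPU.

def training_hours(seconds_per_epoch: float, epochs: int) -> float:
    """Return total wall-clock training time in hours."""
    return seconds_per_epoch * epochs / 3600.0

hours = training_hours(35, 500)
print(round(hours, 2))  # ≈ 4.86, consistent with the "5 hours approx" note
```

The same check applied to the EffiSegNet row below (45 s × 300 epochs) gives ≈ 3.75 hours, matching its "4 hours approx" estimate.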
Kvasir-SEG | EffiSegNet-B5 | [] | EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder | 2024-07-23T00:00:00 | https://arxiv.org/abs/2407.16298v1 | [
"https://github.com/ivezakis/effisegnet"
] | {'mean Dice': '0.9488', 'mIoU': '0.9065', 'F-measure': '0.9513', 'Precision': '0.9713', 'Recall': '0.9321'} | [
"mean Dice",
"Average MAE",
"S-Measure",
"max E-Measure",
"mIoU",
"FPS",
"F-measure",
"Precision",
"Recall"
] | Given the following paper and codebase:
Paper: EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder
Codebase: https://github.com/ivezakis/effisegnet
Improve the EffiSegNet-B5 model on the Kvasir-SEG dataset. The result
should improve on the following metrics: {'mean Dice': '0.9488', 'mIoU': '0.9065', 'F-measure': '0.9513', 'Precision': '0.9713', 'Recall': '0.9321'}. You must use only the codebase provided.
| EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder Ioannis A. Vezakis TECREANDO B.V . Amsterdam, The Netherlands 0000-0003-4976-4901Konstantinos Georgas Biomedical Engineering Laboratory School of Electrical and Computer Engineering National Technical University of Athens Athens, Greece 0000-0002-2832-3747Dimitrios Fotiadis Dept. of Materials Science and Engineering University of Ioannina Ioannina, Greece 0000-0002-7362-5082 George K. Matsopoulos Biomedical Engineering Laboratory School of Electrical and Computer Engineering National Technical University of Athens Athens, Greece 0000-0002-2600-9914 Abstract —This work introduces EffiSegNet, a novel segmenta- tion framework leveraging transfer learning with a pre-trained Convolutional Neural Network (CNN) classifier as its backbone. Deviating from traditional architectures with a symmetric U- shape, EffiSegNet simplifies the decoder and utilizes full-scale feature fusion to minimize computational cost and the number of parameters. We evaluated our model on the gastrointestinal polyp segmentation task using the publicly available Kvasir- SEG dataset, achieving state-of-the-art results. Specifically, the EffiSegNet-B4 network variant achieved an F1 score of 0.9552, mean Dice (mDice) 0.9483, mean Intersection over Union (mIoU) 0.9056, Precision 0.9679, and Recall 0.9429 with a pre-trained backbone – to the best of our knowledge, the highest reported scores in the literature for this dataset. Additional training from scratch also demonstrated exceptional performance compared to previous work, achieving an F1 score of 0.9286, mDice 0.9207, mIoU 0.8668, Precision 0.9311 and Recall 0.9262. These results underscore the importance of a well-designed encoder in image segmentation networks and the effectiveness of transfer learning approaches. 
Index Terms —medical images, colonoscopy, endoscopy, polyp segmentation, semantic segmentation, convolutional neural net- works, transfer learning, efficientnet I. I NTRODUCTION Colorectal Cancer (CRC) is one of the most prevalent cancers in Europe, accounting for 12.9% of all new cancer diagnoses and 12.4% of deaths in 2022 [1]. Colonoscopy is Funded by the European Union (DIOPTRA, 101096649). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the Health and Digital Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. This work has received funding from the Swiss State Secretariat for Education, Research and Innovation (SERI). Funded by UK Research and Innovation (UKRI) under the UK government’s Horizon Europe funding guarantee [grant number 10056682].the current gold standard in the early detection and diagnosis of colorectal abnormalities, particularly in the identification of colon polyps, a potential precursor to CRC [2]. As medical imaging technologies advance, there is a growing demand for accurate and efficient tools to assist clinicians in polyp detection, as miss rates with the current manual approach are estimated between 14-30% depending on the polyp type and size [2]. In recent years, deep learning approaches have demonstrated remarkable success in various medical image analysis tasks, leveraging large datasets and pre-trained models to achieve state-of-the-art results. Transfer learning, in particular, has emerged as a promising technique to address the data scarcity issue, allowing models trained on external datasets to adapt and excel in specific medical imaging domains [3]. 
Despite the effectiveness of transfer learning, the predominant methodologies employed for colon polyp segmentation, as exemplified by widely-used networks like U-Net [4], ResUNet [5], and ResUNet++ [2], often opt for training from scratch on the task-specific dataset. Notable exceptions, mainly involving transformer networks, still fall short when compared to the current best performing network, DUCK-Net, a Convolutional Neural Network (CNN) trained from scratch [6]. This paradox is the main motivation behind revisiting transfer learning techniques for segmentation networks. Current approaches usually employ symmetric U-shaped networks, with the encoder consisting of a pre-trained CNN classifier, and the decoder a symmetric stack of convolutional layers with randomly initialized weights, that refine concatenated feature maps from previous layers [7]–[10]. Recently, Lu et al. [11] demonstrated that the divide-and-conquer strategy in the encoder of the U-Net is the main contributor to its effectiveness. In their work, they designed Half-UNet, a segmentation network that does not have the typical symmetric U-shape. Instead, their network is simplified by utilizing full-scale fusion of the encoder’s outputs, and refinement using two Ghost modules. This design achieved superior segmentation efficiency in terms of computational cost while maintaining comparable accuracy. Driven by these considerations, we propose a novel network architecture named EffiSegNet, which deviates from previous transfer learning approaches by utilizing the EfficientNet family of CNNs [12] as the encoder, and discarding the symmetric U-shape for a simplified decoder that keeps the number of added parameters and complexity to a minimum. We demonstrate EffiSegNet’s effectiveness on Kvasir-SEG, a gastrointestinal polyp segmentation dataset, where its performance surpasses current state-of-the-art models.
To ensure the reproducibility of our research, our code and dataset splits are publicly available on Zenodo (https://doi.org/10.5281/zenodo.10601024). II. NETWORK ARCHITECTURE Inspired by Half-UNet’s simplified U-Net architecture, we utilized the EfficientNet family of CNN classifiers [12] as the backbone to create several variants of a new network architecture which we named EffiSegNet. The core of the network is comprised of an EfficientNet CNN pre-trained on the ImageNet classification dataset. The overall network’s architecture has been intentionally designed to minimize the amount of non pre-trained parameters, thereby reducing the corresponding computational overhead and the number of randomly initialized weights. Using EfficientNet as the encoder of the network, the final feature maps produced before each downsampling step are extracted. In a typical U-Net architecture, these feature maps are upsampled, concatenated with the features of the previous stage along the channel dimension, and then refined using consecutive convolutional layers: x̃_s = F_s(concat(x_s, up(x_{s+1}))). (1) In this context, the stage s refers to a distinct level in the network where the spatial dimensions of the feature maps (i.e. their height H and width W) are reduced by a factor of 2. Therefore, x_s denotes the output feature maps of the stage s, and x_{s+1} the output of the subsequent stage with the spatial dimensions halved. F_s(·) is the stack of convolutional layers that refine the fused features, up(·) is the upsampling operation that doubles the spatial size, and concat(·,·) is the concatenation operation between two stacks of feature maps. The feature fusion method described in Eq. 1, although effective, results in memory and computational overhead, which previous work has avoided by performing element-wise addition instead [11], [13]. However, for the addition operation to be performed, the feature maps need to match across all dimensions (height, width, number of channels).
To this end, we employ first a simple convolutional layer, followed by batch normalization, and upsampling using nearest-neighbor interpolation, in order to equalize the dimensions across all stages. The optimal number of channels was heuristically found to be 32. We opted to perform the convolution operation before the upsampling due to the reduced computational complexity. This operation is defined as: x̃ = Σ_{s=1}^{n} up(F_s(x_s)) + F_0(x_0), (2) where n is the network depth (equal to 5 for EfficientNets), F_s(·) is the simple convolutional layer that outputs 32 feature maps, followed by batch normalization, and up(·) is the upsampling operation that increases the spatial dimensions to those of the original input’s size, instead of doubling them. Following feature fusion across all stages, two Ghost modules [14] are utilized, a strategy similarly employed in [11]. These modules effectively generate more feature maps using a limited number of parameters and operations, thereby contributing to the reduction of non pre-trained parameters. The final layer involves a simple 1×1 convolution, followed by a sigmoid activation function which produces the final output. The network’s architecture, illustrated in Fig. 1, can be easily and efficiently scaled up or down in terms of its depth, width, and resolution, by utilizing EfficientNet’s compound scaling technique [12]. Following a similar naming scheme to the original EfficientNet implementation, we named each scaled version of our network as “EffiSegNet-BN”, where N corresponds to the EfficientNet variant used as the backbone. Table I depicts the number of pre-trained and randomly initialized parameters for each network variant. Fig. 1. The EffiSegNet architecture.
A pre-trained EfficientNet model serves as the backbone of the network, scaling it up and down using compound scaling.

TABLE I: NUMBER OF PRE-TRAINED AND RANDOMLY INITIALIZED PARAMETERS
Network Variant   Pre-trained Params   Randomly Init. Params   Random to Pre-trained Ratio
EffiSegNet-B0     4.0M                 0.15M                   3.8%
EffiSegNet-B1     6.5M                 0.15M                   2.3%
EffiSegNet-B2     7.7M                 0.16M                   2.1%
EffiSegNet-B3     10.7M                0.18M                   1.7%
EffiSegNet-B4     17.5M                0.21M                   1.2%
EffiSegNet-B5     28.3M                0.24M                   0.8%
EffiSegNet-B6     40.7M                0.27M                   0.7%
EffiSegNet-B7     63.8M                0.3M                    0.5%

III. EXPERIMENTAL SETUP We tested the EffiSegNet variants on Kvasir-SEG [5], an open-access segmentation dataset containing 1000 endoscopic images of gastrointestinal polyps and their corresponding ground truth delineations. To ensure an equal comparison with current state-of-the-art, we used the 80:10:10 split into training, validation, and testing subsets provided by Dumitru et al. (2023) [6]. To the best of our knowledge, their approach using the DUCK-Net architecture is, until now, the best performing approach on this particular dataset. We trained all EffiSegNet variants using a batch size of 8 for 300 epochs. In cases where the available memory was insufficient, the maximum possible batch size was determined by performing a binary search. The Adam optimizer with decoupled weight decay regularization was used [15], with an initial learning rate of 10^-4. This was gradually reduced to 10^-5 over the course of training using cosine annealing of the learning rate. The loss function used was the average of the Dice and Cross Entropy loss. The spatial resolution of the original input images varied between 332×487 to 1920×1072 pixels. These images were resized using Lanczos interpolation to the spatial dimensions on which each particular EfficientNet variant was pre-trained on.
This is 224×224 for EfficientNetB0, 240×240 for EfficientNetB1, 260×260 for EfficientNetB2, 300×300 for EfficientNetB3, 380×380 for EfficientNetB4, 456×456 for EfficientNetB5, 528×528 for EfficientNetB6, and 600×600 for EfficientNetB7. Previous work has suggested that pre-trained EfficientNets work best on images with similar dimensions to those they were pre-trained on [16], therefore, we did not opt to resize to alternative dimensions. We followed the augmentation techniques used in [6], with the addition of elastic deformation. More specifically, during training we applied:
• Random horizontal and vertical flip.
• Color jitter with the brightness chosen randomly between 0.6 and 1.6, a contrast factor of 0.2, saturation factor 0.1 and hue factor 0.01.
• Affine transformation with scale value uniformly sampled between 0.5 and 1.5, translation up to 12.5% of the image height and width, and rotation between -90 and 90 degrees.
• Elastic deformation with the Gaussian filter sigma set to 50, alpha value of 1, and Lanczos interpolation.
Finally, all images were normalized using the channel mean and standard deviation of ImageNet: [0.485, 0.456, 0.406] and [0.229, 0.224, 0.225] for each of the RGB channels, respectively. IV. RESULTS Table II depicts the results as measured for the test subset. We have computed the F1 Score, mean Dice, mean Intersection over Union (IoU), Precision and Recall for all our network variants. However, not all of the metrics were reported in all works (n/a – not available flag).

TABLE II: SEGMENTATION RESULTS ON THE KVASIR-SEG DATASET
Model               F1 Sc.   mDice    mIoU     Precision   Recall
U-Net†[4]           0.8655   n/a      0.7629   0.8593      0.8718
ResUNet [5]         0.7878   n/a      0.7778   n/a         n/a
ResUNet++ [2]       0.8133   n/a      0.7927   0.7064      0.8774
Li-SegPNet* [9]     0.9058   n/a      0.8800   0.9424      0.9254
PraNet*†[17]        0.9094   n/a      0.8339   0.9599      0.8640
ColonFormer* [18]   n/a      0.927    0.877    n/a         n/a
DUCK-Net [6]        0.9502   n/a      0.9051   0.9628      0.9379
EffiSegNet-B0*      0.9421   0.9304   0.8794   0.9475      0.9368
EffiSegNet-B1*      0.9448   0.9288   0.8784   0.9437      0.9461
EffiSegNet-B2*      0.9464   0.9329   0.8836   0.9550      0.9380
EffiSegNet-B3*      0.9465   0.9358   0.8876   0.9613      0.9321
EffiSegNet-B4*      0.9552   0.9483   0.9056   0.9679      0.9429
EffiSegNet-B5*      0.9513   0.9488   0.9065   0.9713      0.9321
EffiSegNet-B6*      0.9531   0.9477   0.9060   0.9724      0.9334
EffiSegNet-B7*      0.8289   0.7629   0.7073   0.8957      0.7713
∗Model was pre-trained on an external dataset. †Evaluation scores from [6].

V. TRAINING FROM SCRATCH We conducted a separate experiment with EffiSegNet-B4, the best performing network variant in terms of the F1 score, and re-trained it with randomly initialized weights to determine the pre-training’s effect on the network’s performance. The results of this experiment are reported in Table III.

TABLE III: COMPARISON OF PRE-TRAINED AND RANDOMLY INITIALIZED NETWORK PERFORMANCE
Model            F1 Score   mDice    mIoU     Precision   Recall
EffiSegNet-B4*   0.9552     0.9483   0.9056   0.9679      0.9429
EffiSegNet-B4    0.9286     0.9207   0.8668   0.9311      0.9262
∗Model was pre-trained on an external dataset.

VI. DISCUSSION The effectiveness of transfer learning in improving medical image analysis on limited data has been consistently demonstrated in previous studies [3], [19]. Yet, the predominant and baseline networks trained on the Kvasir-SEG dataset did not utilize pre-training on any external datasets [2], [4]–[6]. Even exceptions to this practice, mainly involving transformer networks, still fall short in performance compared to DUCK-Net, a CNN trained from scratch [6].
In this work, we proposed a novel segmentation network, EffiSegNet, incorporating a pre-trained classifier backbone and a minimal number of parameters added on top to transition into pixel-level classification. This approach stems from the observation that the encoder’s divide-and-conquer strategy outweighs the decoder’s feature fusion significance, thereby meaning that a symmetric U-shaped network is not necessarily optimal [11]. Our results demonstrate that a pre-trained CNN is hard to beat. Specifically, our “EffiSegNet” architecture achieved state-of-the-art results on the Kvasir-SEG dataset, with larger network variants, namely EffiSegNet-B4, EffiSegNet- B5, and EffiSegNet-B6, outperforming the current state-of- the-art DUCK-Net in terms of the F1 score, mean IoU, and Precision. However, the largest variant, EffiSegNet-B7, was found to be an exception as the overly large amount of parameters, detailed in Table I, led to overfitting on the training data. This highlights the need for careful consideration of model complexity when training on limited datasets. Training EffiSegNet-B4 without any pre-training resulted in inferior results when compared to its pre-trained counterpart (F1 Score of 0.9286 vs 0.9552), but still among the highest performing networks in the literature. This further supports that a well designed encoder is much more important than the decoder, and features from the various stages can be effectively used for pixel-level classification with cheap operations and few trainable parameters. Future research could investigate the impact of each stage’s features on the final segmentation accuracy, and explore spe- cialized blocks that better capture features at each scale. More- over, given that the EffiSegNet architecture can incorporate any CNN classifier as its backbone, experimentation with different backbone models could offer new insights. VII. 
C ONCLUSION This study introduces EffiSegNet, a novel approach to gas- trointestinal polyp segmentation on endoscopy images lever- aging transfer learning and the EfficientNet family of CNNs as the model’s backbone. Our findings on the Kvasir-SEG dataset demonstrate superior performance compared to exist- ing methods, highlighting the effectiveness of incorporating pre-trained networks in the model’s architecture. The high performance achieved with and without pre-training further underscores the significance of prioritizing encoder design over decoder complexity. EffiSegNet also provides a versatile framework for integrating any CNN classifier, opening avenues for future investigation into the impact of different backbone designs. As demonstrated by our results, in the evolving field of medical image analysis, EffiSegNet can prove a useful tool for enhancing colorectal cancer screening and advancing the application of machine learning in healthcare. REFERENCES [1] (2024, Feb.) ECIS - European Cancer Information System. [Online]. Available: https://ecis.jrc.ec.europa.eu[2] D. Jha, P. H. Smedsrud, M. A. Riegler, D. Johansen, T. D. Lange, P. Halvorsen, and H. D. Johansen, “ResUNet++: An Advanced Archi- tecture for Medical Image Segmentation,” in 2019 IEEE International Symposium on Multimedia (ISM) . San Diego, CA, USA: IEEE, Dec. 2019, pp. 225–2255. [3] H. E. Kim, A. Cosa-Linan, N. Santhanam, M. Jannesari, M. E. Maros, and T. Ganslandt, “Transfer learning for medical image classification: A literature review,” BMC Med Imaging , vol. 22, no. 1, p. 69, Dec. 2022. [4] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Net- works for Biomedical Image Segmentation,” in Medical Image Com- puting and Computer-Assisted Intervention – MICCAI 2015 , N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, Eds. Cham: Springer International Publishing, 2015, vol. 9351, pp. 234–241. [5] D. Jha, P. H. Smedsrud, M. A. Riegler, P. Halvorsen, T. De Lange, D. 
Johansen, and H. D. Johansen, “Kvasir-SEG: A Segmented Polyp Dataset,” in MultiMedia Modeling , Y . M. Ro, W.-H. Cheng, J. Kim, W.- T. Chu, P. Cui, J.-W. Choi, M.-C. Hu, and W. De Neve, Eds. Cham: Springer International Publishing, 2020, vol. 11962, pp. 451–462. [6] R.-G. Dumitru, D. Peteleaza, and C. Craciun, “Using DUCK-Net for polyp image segmentation,” Sci Rep , vol. 13, no. 1, p. 9803, Jun. 2023. [7] V . Iglovikov and A. Shvets, “TernausNet: U-Net with VGG11 Encoder Pre-Trained on ImageNet for Image Segmentation,” Jan. 2018. [8] A. A. Kalinin, V . I. Iglovikov, A. Rakhlin, and A. A. Shvets, “Medical Image Segmentation Using Deep Neural Networks with Pre-trained Encoders,” in Deep Learning Applications , M. A. Wani, M. Kantardzic, and M. Sayed-Mouchaweh, Eds. Singapore: Springer Singapore, 2020, vol. 1098, pp. 39–52. [9] P. Sharma, A. Gautam, P. Maji, R. B. Pachori, and B. K. Balaban- taray, “Li-SegPNet: Encoder-Decoder Mode Lightweight Segmentation Network for Colorectal Polyps Analysis,” IEEE Trans. Biomed. Eng. , vol. 70, no. 4, pp. 1330–1339, Apr. 2023. [10] M. Bal-Ghaoui, M. H. El Yousfi Alaoui, A. Jilbab, and A. Bourouhou, “U-Net transfer learning backbones for lesions segmentation in breast ultrasound images,” IJECE , vol. 13, no. 5, p. 5747, Oct. 2023. [11] H. Lu, Y . She, J. Tie, and S. Xu, “Half-UNet: A Simplified U-Net Architecture for Medical Image Segmentation,” Front. Neuroinform. , vol. 16, p. 911679, Jun. 2022. [12] M. Tan and Q. V . Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in 36th International Conference on Machine Learning , ser. Proceedings of Machine Learning Research, vol. 97, Long Beach, California, USA, 2019, pp. 6105–6114. [13] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , Jun. 2016. [14] K. Han, Y . Wang, Q. Tian, J. Guo, C. Xu, and C. 
Xu, “GhostNet: More Features From Cheap Operations,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) . Seattle, WA, USA: IEEE, Jun. 2020, pp. 1577–1586. [15] I. Loshchilov and F. Hutter, “Decoupled Weight Decay Regularization,” inInternational Conference on Learning Representations , ser. Interna- tional Conference on Learning Representations, 2019. [16] I. A. Vezakis, G. I. Lambrou, and G. K. Matsopoulos, “Deep Learning Approaches to Osteosarcoma Diagnosis and Classification: A Compar- ative Methodological Approach,” Cancers , vol. 15, no. 8, p. 2290, Apr. 2023. [17] D.-P. Fan, G.-P. Ji, T. Zhou, G. Chen, H. Fu, J. Shen, and L. Shao, “PraNet: Parallel Reverse Attention Network for Polyp Segmentation,” inMedical Image Computing and Computer Assisted Intervention – MICCAI 2020 , A. L. Martel, P. Abolmaesumi, D. Stoyanov, D. Mateus, M. A. Zuluaga, S. K. Zhou, D. Racoceanu, and L. Joskowicz, Eds. Cham: Springer International Publishing, 2020, vol. 12266, pp. 263– 273. [18] N. T. Duc, N. T. Oanh, N. T. Thuy, T. M. Triet, and V . S. Dinh, “ColonFormer: An Efficient Transformer Based Method for Colon Polyp Segmentation,” IEEE Access , vol. 10, pp. 80 575–80 586, 2022. [19] P. Kora, C. P. Ooi, O. Faust, U. Raghavendra, A. Gudigar, W. Y . Chan, K. Meenakshi, K. Swaraja, P. Plawiak, and U. Rajendra Acharya, “Transfer learning techniques for medical image analysis: A review,” Biocybernetics and Biomedical Engineering , vol. 42, no. 1, pp. 79–107, Jan. 2022. | 6 | 2 | The EffiSegNet model has multiple variants with EfficientNet as backbone, ranging from 4.0M to 63.8M parameters. Given the Kvasir-SEG dataset of 1000 images and training for 300 epochs with a batch size of 8, the number of parameters suggests a training time of around 6 hours using 2 GPUs. 
Each image requires resizing to the EfficientNet input size (between 224x224 and 600x600 pixels), which, along with data augmentation techniques, adds complexity to training but is manageable in parallel across two GPUs. Considering memory constraints, a single V100 GPU with 32GB RAM could feasibly handle the training with the stated configuration. The heavier variants might require multiple GPUs to train effectively due to high memory usage, so the choice of 2 GPUs seems appropriate. | yes | Yes | CV | EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder | 2024-07-23 0:00:00 | https://github.com/ivezakis/effisegnet | 2 | Inside the repo | 45 sec/epoch * 300 epochs ≈ 4 hours | https://colab.research.google.com/drive/1YzKf-VnfFVZW67_SYj2295KmuwYAFgUB?usp=sharing | Yes | Fairly easy: just create the env and run |
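The training-time estimate in the row above can be reproduced with simple arithmetic. This is a sketch under the assumptions stated in the run notes (1000 Kvasir-SEG images, batch size 8, 300 epochs, ~45 seconds per epoch); the per-epoch time is an observed value from one run, not a guarantee.

```python
# Rough training-time estimate for EffiSegNet on Kvasir-SEG.
# seconds_per_epoch (~45 s) is an assumption taken from the run notes above;
# actual throughput depends on the GPU, model variant, and input size.
num_images = 1000
batch_size = 8
epochs = 300

steps_per_epoch = -(-num_images // batch_size)  # ceiling division -> 125 steps
seconds_per_epoch = 45                          # assumed, from the run notes
total_hours = epochs * seconds_per_epoch / 3600

print(f"{steps_per_epoch} steps/epoch, ~{total_hours:.2f} h total")  # 125 steps/epoch, ~3.75 h total
```

The 3.75-hour figure is consistent with the "around 4 hours" noted in the verification field.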
clintox | BiLSTM | [] | Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data | 2024-07-08T00:00:00 | https://arxiv.org/abs/2407.18919v1 | [
"https://github.com/kvrsid/toxic"
] | {'AUC': '0.97'} | [
"AUC"
] | Given the following paper and codebase:
Paper: Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data
Codebase: https://github.com/kvrsid/toxic
Improve the BiLSTM model on the clintox dataset. The result
should improve on the following metrics: {'AUC': '0.97'}. You must use only the codebase provided.
393 Vol. 21, No. 1, (2024) ISSN: 1005-0930 Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data K. Venkateswara Rao1, Dr. Kunjam Nageswara Rao2, Dr. G. Sita Ratnam3 1 Research Scholar, 2 Professor, Department of Computer Science and Systems Engineering, Andhra University College of Engineering AUCE(A), Visakhapatnam-530003, Andhra Pradesh, India. 3 Professor, Chaitanya Engineering College, Madhurawada, Visakhapatnam, Andhra Pradesh-530048, India. Abstract — Computational methods are useful in accelerating the pace of drug discovery. Drug discovery involves several steps, such as target identification and validation, lead discovery, and lead optimisation. In the lead optimisation phase, the absorption, distribution, metabolism, excretion, and toxicity properties of lead compounds are assessed. This work addresses the problem of predicting toxicity and solubility of lead compounds represented in Simplified Molecular Input Line Entry System (SMILES) notation. Among the different approaches that work on SMILES data, the proposed model was built using a sequence-based approach. The proposed Bidirectional Long Short-Term Memory (BiLSTM) is a variant of the Recurrent Neural Network (RNN) that processes input molecular sequences to comprehensively examine the structural features of molecules from both forward and backward directions. The proposed work aims to learn the sequential patterns encoded in SMILES strings, which are then utilised for predicting the toxicity of molecules. On the ClinTox dataset, the proposed model surpasses previous approaches such as TrimNet and pre-training graph neural networks (GNN) by achieving a ROC accuracy of 0.96. BiLSTM also outperforms the previous model on the FreeSolv dataset with a low RMSE of 1.22 in solubility prediction. Keywords - BiLSTM, SMILES, RNN, GNN, TrimNet. 1.
Introduction In the current landscape, bringing a new drug to market typically requires around a decade of rigorous research, development, and regulatory processes. Additionally, the cost associated with this endeavour is substantial, averaging between $2 billion and $3 billion. Drug discovery typically begins with the identification of a biological target, such as a protein or enzyme associated with a specific disease. Scientists then search for molecules, often from natural or synthetic sources, that can interact with the target in a way that modifies its activity, leading to a therapeutic effect. Discovered molecules frequently fail to progress as potential drugs due to challenges such as toxicity, inadequate activity, and poor solubility, underscoring the complexity of drug discovery and the need for rigorous screening and optimization processes. Machine learning models are frequently employed today to predict the properties of potential drugs, offering faster results compared to manual methods. Current studies utilize a range of neural network architectures to explore the Quantitative Structure-Activity Relationship (QSAR) of molecules. Artificial Neural Networks have proven to be highly efficient in analyzing QSAR based on descriptors [1]. The rapid evolution of neural network architectures has revolutionized the study of QSAR, with methodologies such as CNNs, RNNs, and BNNs offering early prediction of pharmaceutical properties of drugs such as toxicity and solubility. Among these pharmaceutical properties, toxicity plays a critical role in the rejection of approximately one-third of drug candidates, significantly contributing to the elevated costs of drug development. The proposed model utilizes Paracelsus' principle to predict toxicity at doses relevant to patient use, distinguishing between toxic and non-toxic effects based on dosage levels for each drug [2].
SMILES (Simplified Molecular Input Line Entry System) is a line-notation specification for describing the structure of chemical species: a chain of letters, numbers and characters that specifies the atoms, their connectivity, bond order and chirality [3]. These SMILES strings are taken as input by the proposed model, whereas in graph models they are converted to molecular graphs to train the model. In the proposed model, we use a tokenizer object from TensorFlow Keras specifically configured for character-level tokenization of SMILES strings. A machine learning (ML) algorithm capable of precisely characterizing the compositions of behavioural components can meet this requirement. By employing ML techniques, it becomes possible to assess a considerable number of materials without the need for physical samples and to efficiently ascertain their physical properties, like solubility. Machine learning techniques such as Random Forest, multilinear regression and other regression models were used previously, but the main obstacle is that the final output RMSE (root mean square error) is greater than 2; classical ML approaches yield higher error [4]. Recent advancements in cheminformatics have witnessed a surge in the application of deep learning techniques, leveraging computer vision, natural language processing, and other methodologies to enhance the accuracy of molecular property prediction. These approaches fall into two main categories: sequence-based methods and graph-based methods. In sequence-based methods, such as RNNs or CNNs, molecular sequences like SMILES are effectively processed to extract meaningful representations [5]. Graph-based methods utilize techniques such as Graph Neural Networks (GNN) or High-Dimensional Neural Networks (HDNN) to map molecular structures to their corresponding properties.
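The character-level tokenization step described above can be sketched without the TensorFlow dependency. This is a minimal pure-Python stand-in mimicking the behaviour of Keras's `Tokenizer(char_level=True)` (frequency-ordered indices, 0 reserved for padding); the example molecules are illustrative, not taken from ClinTox.

```python
# Minimal character-level tokenizer for SMILES strings, mimicking
# tf.keras.preprocessing.text.Tokenizer(char_level=True).
from collections import Counter

def fit_char_tokenizer(smiles_list):
    # Index characters by frequency (most frequent -> index 1), as Keras does;
    # index 0 is reserved for padding.
    counts = Counter(ch for s in smiles_list for ch in s)
    ordered = [ch for ch, _ in counts.most_common()]
    return {ch: i + 1 for i, ch in enumerate(ordered)}

def texts_to_sequences(smiles_list, char_index):
    return [[char_index[ch] for ch in s if ch in char_index] for s in smiles_list]

def pad_sequences(seqs, maxlen):
    # Left-pad with zeros to a fixed length, truncating from the front.
    return [[0] * (maxlen - len(s)) + s[-maxlen:] for s in seqs]

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]   # ethanol, benzene, acetic acid
char_index = fit_char_tokenizer(smiles)
seqs = pad_sequences(texts_to_sequences(smiles, char_index), maxlen=8)
print(seqs[0])  # ethanol, encoded and left-padded to length 8
```

The padded integer sequences are what would be fed to an Embedding layer downstream.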
GNNs are particularly adept at converting molecular graphs into node and edge embeddings, enabling high-performance predictions across various tasks [6-10]. The proposed model is of a kind mainly used for NLP tasks. There are different NLP models like spaCy, Word2Vec, and FuzzyWuzzy for sequence matching or string similarity [11]; these techniques can also be used on SMILES. 2. About Dataset The ClinTox dataset is a valuable resource used for investigating the clinical toxicity of chemical compounds. It encompasses critical information about two key toxicity endpoints: clinical trial toxicity and FDA approval status. With a collection of 1491 compounds, this dataset serves as a fundamental tool for the early anticipation and assessment of toxicity during the development stages of pharmaceuticals. The proposed model is also made to work on other datasets like Tox21 and synthetic data. The Tox21 dataset is characterized by its multimodal nature, encompassing diverse data from multiple sources and formats. It comprises chemical structures, molecular descriptors, and activity data stemming from 12 distinct toxicological assays conducted on 7,831 compounds. The synthetic data is made by combining the Tox21 and ClinTox datasets. The FreeSolv dataset is a freely available dataset commonly used for benchmarking molecular property prediction models, particularly those related to solvation free energies. It contains a collection of small organic molecules along with their experimental solvation free energies in water. Each molecule in the dataset is represented by its SMILES string (a compact textual representation of a molecule's structure) and the corresponding experimental solvation free energy in kcal/mol. The FreeSolv dataset uses water as the solvent for calculating the solvation free energies. 3.
NEURAL NETWORKS in QSAR QSAR, or Quantitative Structure-Activity Relationship analysis, is a crucial aspect of ligand-based screening in drug discovery. It involves understanding how the structure of molecules relates to their biological effects. Ligand-based screening focuses on the chemical features of known active compounds to predict the activity of new ones. By recognizing patterns and similarities in compound structures, these methods help forecast the activity of novel compounds. The proposed model uses the QSAR approach to predict the toxicity and solubility of new lead compounds by training on data from known compounds. For predicting toxicity, the approach can be called QSTR, meaning Quantitative Structure-Toxicity Relationship. The proposed model uses Recurrent Neural Networks (RNN). Among RNN variants such as GRUs and LSTMs, a variety of LSTM called BiLSTM is used for model building. 4. Methodology This paper presents an analysis of diverse QSAR methodologies utilized in toxicity determination, followed by a comparison with a deep learning model based on SMILES for toxicity and solubility assessment. The current model, BiLSTM, represents an advanced form of the LSTM architecture, capable of processing input sequences in both forward and backward directions simultaneously, thereby enhancing its ability to find correlations between the sequences. The input sequential data is encoded into arrays of binary digits to facilitate processing by our model. These encoded inputs are then fed into the BiLSTM layer, which processes them bidirectionally. Finally, the output from the BiLSTM layer is passed through a dense network equipped with a sigmoid function to predict toxicity. This model aims to predict the toxicity of compounds via the Quantitative Structure-Toxicity Relationship (QSTR): understanding how the structure of molecules relates to their toxicity.
Significant progress has been made in predicting molecular properties, especially through graph-based methods. These methods start by converting SMILES inputs into molecular structures and then into molecular graphs using tools like RDKit. In these graphs, atoms act as nodes and chemical bonds serve as edges. Each node in the molecular graph is linked to a feature vector containing key atomic details like atomic number, hybridization state, and the presence of functional groups. Similarly, edges carry features representing bond type, distance, and other characteristics. The heart of graph-based models is the graph convolutional layer, which combines information from nearby nodes and edges to update node features. This iterative process improves node representations by considering information from neighboring atoms and bonds. After processing the molecular graphs through multiple graph convolutional layers, the final node representations are fed into a fully connected neural network. These networks gather insights from all nodes in the graph and produce toxicity predictions for each molecule. The proposed model takes a sequence-based approach, prioritizing the examination of correlations between the sequences of molecules during toxicity prediction. Starting with the input SMILES string: SMILES provide a standardized way of representing atoms and bonds along with their arrangements within a molecule. In a SMILES representation, atoms are denoted by their atomic symbols (e.g., C for carbon, O for oxygen). Bonds between atoms are represented by various symbols: single bonds are represented by '-' (usually not explicitly specified); double bonds are represented by '='; triple bonds are represented by '#'; aromatic bonds are represented by lowercase letters (e.g., 'c' for aromatic carbon). Branching and cyclic structures are indicated using parentheses '(' and ')'.
Hydrogen atoms are usually omitted and assumed to be implicitly present to satisfy valency requirements. For example, the SMILES representation for ethanol (CH3CH2OH) is 'CCO'. SMILES strings are encoded into binary arrays, and these binary arrays, together with the target label, are input into the neural network for analysis and prediction. This process allows the neural network to learn patterns and relationships between the molecular structures encoded in the SMILES and their associated properties. A. Long Short-Term Memory LSTM is a type of neural network that is good at learning patterns and relationships in sequences of data, like text or time series. Unlike standard feedforward neural networks, which only transfer data forward after processing, LSTM (Long Short-Term Memory) networks have feedback connections. These connections allow LSTM networks to store the results of the current input for use in subsequent predictions. This ability to retain and selectively utilize information over time makes LSTMs particularly effective for tasks involving sequential data, such as natural language processing and time series prediction [7]. LSTM is applicable especially for tasks like text recognition, speech recognition, etc. LSTM was created to address the challenge of retaining information over longer periods, unlike other deep learning models. Its unique design allows it to remember crucial details for extended durations, making it effective for tasks where understanding sequences over time is important, like language translation or sentiment analysis. It uses a gate mechanism similar to logic gates: there are three main gates (input, forget, and output gates), and one more important component is the cell state, which acts as the LSTM's memory. The input gate decides which information from the current input should be stored in the cell state. It controls the flow of new information into the cell.
The forget gate decides which information from the previous cell state should be forgotten or discarded; it helps the model decide what to remember and what to forget from long-term memory. The output gate decides what information from the current cell state should be output to the next layer in the network; it helps the model decide what information to use for predictions. The three gates of the LSTM are sigmoid-activated; this activation ensures that the gate values fall within the range 0 to 1. In practical terms, a value of 0 indicates blocking or inhibiting the flow of information, while a value of 1 signifies allowing the information to pass through the gate. The gate equations are as follows:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)   (1)
i_t = σ(W_i · [h_{t−1}, x_t] + b_i)   (2)
o_t = σ(W_o · [h_{t−1}, x_t] + b_o)   (3)
c̃_t = tanh(W_C · [h_{t−1}, x_t] + b_C)   (4)

where f_t, i_t, o_t, c̃_t are the forget gate, input gate, output gate and candidate memory output at time step t, respectively; σ represents the sigmoid activation function; W is the weight matrix of the respective gate; h_{t−1} is the previous hidden state; x_t is the current input; and b is the bias term of the corresponding gate. The final states are represented as:

C_t = f_t * C_{t−1} + i_t * c̃_t   (5)
h_t = o_t * tanh(C_t)   (6)

where C_t represents the cell state at time t and h_t is the final output of the LSTM cell. Figure 2 depicts the various gates at a given time step t; by substituting values into the above equations, the gates can be analyzed. Figure 1: Architecture of the proposed model using LSTM. Figure 2: LSTM layer at a timestep t. The Figure 1 architecture consists of three layers. The first layer, the Embedding layer, converts each SMILES sequence into a dense vector representation suitable for processing by the LSTM layer. The LSTM layer processes the embedded SMILES sequences, capturing dependencies and patterns within the data over time.
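The gate equations above can be checked numerically with a single LSTM cell step in NumPy. This is a sketch: the dimensions, random weights, and inputs are illustrative, not values from the paper's model.

```python
# One LSTM cell step implementing gate equations (1)-(6).
# Weight shapes and inputs are random/illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W[k] maps the concatenated [h_{t-1}, x_t] to one gate's pre-activation.
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])        # forget gate, eq. (1)
    i_t = sigmoid(W["i"] @ z + b["i"])        # input gate,  eq. (2)
    o_t = sigmoid(W["o"] @ z + b["o"])        # output gate, eq. (3)
    c_tilde = np.tanh(W["c"] @ z + b["c"])    # candidate memory, eq. (4)
    c_t = f_t * c_prev + i_t * c_tilde        # cell-state update, eq. (5)
    h_t = o_t * np.tanh(c_t)                  # hidden state, eq. (6)
    return h_t, c_t

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
W = {k: rng.normal(size=(n_hid, n_hid + n_in)) for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
print(h.shape, c.shape)  # (3,) (3,)
```

Because h_t = o_t * tanh(C_t) with o_t in (0, 1), every entry of the hidden state stays in (−1, 1), matching the gating behaviour described above.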
Finally, the Dense layer performs the final classification based on the LSTM's output, predicting toxicity as a binary label. Similarly, the same procedure is followed on the regression dataset FreeSolv to find the regression values of molecular solubility in moles per litre. [Figure 1 diagram: Input layer (SMILES) → Embedding layer → LSTM layer → Dropout layer → Output layer (Dense, sigmoid) → Prediction (toxicity, solubility). Figure 2 diagram: LSTM cell at timestep t, showing cell state C_{t−1} → C_t, candidate memory c̃_t, hidden state h_{t−1} → h_t, and the sigmoid/tanh gate activations.] B. Bidirectional LSTM Compared with the Figure 1 architecture, which is segregated into three layers, the BiLSTM architecture has more layers, since it passes information bidirectionally, and it includes one more step: the loss function. The loss function calculates the discrepancy between the predicted outputs and the ground-truth labels, providing feedback to the model on how to adjust its parameters (weights and biases) to minimize this discrepancy. The complete flow from the inputs (i.e., SMILES) to the output (i.e., prediction of the toxicity label) is shown in the architecture in Figure 3. Bidirectional LSTMs process data in both directions simultaneously, from past to future (forward direction) and from future to past (backward direction). The first layer is the Input layer, where SMILES strings, which represent molecular structures, are fed into the network. The second layer is the Embedding layer, where each character or token in the SMILES string is converted into a dense vector representation. This dense representation captures the semantic meaning of each character in the context of the molecular structure. The next layer is the Bidirectional LSTM layer.
The embedded SMILES sequences are passed into a Bidirectional LSTM layer. This layer consists of two LSTM networks, one processing the input sequence in the forward direction (from start to end) and the other processing it in the backward direction (from end to start). The Bidirectional LSTM captures both past and future dependencies in the SMILES sequences, allowing the network to understand the context of each character/token based on its surrounding characters/tokens. Finally, in the Output layer, the hidden states from both the forward and backward LSTM networks are combined to obtain the final output, here a prediction of molecular properties (i.e., toxicity, solubility). Figure 3: Architecture of the proposed model using BiLSTM. Figure 4: BiLSTM model at a timestep t. Figure 4 shows the forward and backward passes of the BiLSTM; each pass contains many LSTM cells, whose workings are shown in Figure 2. This architecture enables the model to capture information from both past and future contexts, which can lead to better performance. During training, the parameters of the BiLSTM model are updated using a gradient-descent optimization algorithm, i.e., Adam, to minimize a loss function. A common loss function for sequence prediction tasks is mean squared error (MSE), used here for molecular solubility prediction, which is a regression task. 5. Results Comparative analysis demonstrates the effectiveness of the sequence-based approach in solubility prediction. BiLSTM models trained on SMILES data outperform traditional methods, yielding superior prediction accuracy and efficiency.
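The forward/backward combination described for the Bidirectional LSTM layer can be sketched in a few lines. For brevity, a plain tanh RNN cell stands in for the LSTM cell here; the weights and sequence are random/illustrative.

```python
# Bidirectional sequence encoding sketch: run a recurrent cell over the
# sequence forward and backward, then concatenate the two final hidden
# states. A plain tanh RNN cell stands in for the LSTM cell for brevity.
import numpy as np

def rnn_encode(xs, W_h, W_x):
    h = np.zeros(W_h.shape[0])
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

rng = np.random.default_rng(1)
n_in, n_hid = 4, 3
W_h, W_x = rng.normal(size=(n_hid, n_hid)), rng.normal(size=(n_hid, n_in))
seq = [rng.normal(size=n_in) for _ in range(5)]

h_fwd = rnn_encode(seq, W_h, W_x)          # start -> end
h_bwd = rnn_encode(seq[::-1], W_h, W_x)    # end -> start
h_bi = np.concatenate([h_fwd, h_bwd])      # combined representation
print(h_bi.shape)  # (6,)
```

The concatenated vector is what a Dense layer with a sigmoid (classification) or linear (regression) head would consume.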
The ClinTox dataset is imbalanced, so it has been balanced using the undersampling technique. We compared models built on different methods, and our BiLSTM model, based on a sequence-based approach, achieved the highest ROC accuracy. Specifically, on the ClinTox dataset, our model performed best with a ROC accuracy of 0.96 on the FDA-approval task and an average ROC accuracy of 0.96 ± 0.01 across both tasks (i.e., FDA-approved, CT-Tox), outperforming other models. The bar graph in Figure 5 clearly shows the difference between the models: the previous best is the TrimNet model, and the proposed model achieves 2% higher ROC accuracy. The proposed model is also trained on other datasets like Tox21 and synthetic data. Figure 5: ROC accuracy comparison of models on the ClinTox and Tox21 datasets. Similarly, on the Tox21 dataset, our model achieved a ROC accuracy of 0.81, surpassing graph-based methods like pre-training and relational pooling. Notably, BiLSTM outperformed GAN models in terms of ROC accuracy. On synthetic data, our model achieved a ROC accuracy of 0.80. By capturing complex relationships between molecular structures and solubility, these models offer significant advances in predictive performance. The proposed model outperforms the previous best model, GLAM, with an RMSE difference of about 0.1: GLAM's RMSE is 1.31 [3] and the proposed model achieved an RMSE of 1.2. A lower RMSE value indicates a better model. The proposed model is compared with a few earlier machine learning regression models in the figure below. Figure 6: Comparison of BiLSTM with previous regression models. The proposed model was subjected to several experiments with various parameters to optimize its performance on the training data.
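The undersampling step used to balance ClinTox can be sketched in pure Python. This is an illustration under assumptions: the toy labels below are made up; in practice the labels would come from the dataset's toxicity columns.

```python
# Random undersampling of the majority class, as used to balance ClinTox.
# The toy data and labels are illustrative, not taken from the dataset.
import random

def undersample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    n_min = min(len(v) for v in by_class.values())   # size of the smallest class
    balanced = []
    for y, group in by_class.items():
        for s in rng.sample(group, n_min):           # keep n_min per class
            balanced.append((s, y))
    rng.shuffle(balanced)
    return balanced

data = list(range(10))
labels = [1] * 8 + [0] * 2        # imbalanced: 8 positives, 2 negatives
balanced = undersample(data, labels)
print(len(balanced))  # 4 -> two samples per class
```

Undersampling discards majority-class data, which is acceptable at ClinTox's scale (1491 compounds) but would be wasteful on larger sets.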
Through these experiments, we observed significant changes in test metrics such as accuracy and the Area Under the Receiver Operating Characteristic curve (AUROC) across different parameter configurations. After hyperparameter tuning, we found that the model achieved its best ROC accuracy with the following parameters: 'dropout_rate': 0.3, 'learning_rate': 0.1, 'units': 32. 6. Conclusion Sequence-based approaches, particularly BiLSTM networks, offer a promising avenue for enhancing the efficiency of solubility and toxicity prediction in drug discovery. By leveraging SMILES information, these models provide a more accurate and streamlined approach to predicting solubility compared to traditional methods. This research shows the potential of sequence-based methods in advancing computational drug discovery techniques and underscores the importance of incorporating machine learning approaches in predictive modelling tasks. The proposed model presents a computational approach for predicting drug toxicity based on SMILES representations, aiming to accelerate the drug discovery process. Deep learning methods are explored to enhance the accuracy of toxicity prediction. BiLSTM, a type of recurrent neural network (RNN), stands out for its strong performance, especially when measured using the ROC accuracy metric on the ClinTox dataset. LSTM models, used mainly for NLP (Natural Language Processing) tasks, are applied here to SMILES for toxicity and solubility prediction. The sequence-based approach can be further enhanced by using models such as GPTs and LLMs, and the proposed model can be further improved by collecting and training on larger amounts of data. 7. References [1]. Baskin, I.I., Palyulin, V.A., & Zefirov, N.S. (2009). Neural Networks in Building QSAR Models. Methods in Molecular Biology, 458, 137-58. [2]. Borzelleca, J. (2000). Paracelsus: Herald of Modern Toxicology. Toxicological Sciences, 53(1), 2-4. [3].
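The AUROC metric used throughout the evaluation can be computed without any ML library via the rank statistic (it equals the Mann-Whitney U probability that a random positive is scored above a random negative). The scores and labels below are illustrative, not results from the paper.

```python
# ROC AUC via the Mann-Whitney U statistic: the probability that a randomly
# chosen positive is scored above a randomly chosen negative (ties count 0.5).
# Labels and scores are illustrative.
def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(roc_auc(labels, scores))  # 8/9 ≈ 0.889
```

This matches `sklearn.metrics.roc_auc_score` on the same inputs and makes clear why AUROC is threshold-free, unlike plain accuracy.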
Toropov, A., Toropova, A., Mukhamedzhanova, D., & Gutman, I. (2005). Simplified molecular input line entry system (SMILES) as an alternative for constructing quantitative structure-property relationships (QSPR). Indian Journal of Chemistry - Section A (IJC-A), 1545-1552. [4]. Goh, G. B., Hodas, N. O., Siegel, C., & Vishnu, A. (2017). SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties. doi:10.475/123. [5]. Junying Li, Deng Cai, & Xiaofei He. Learning Graph-Level Representation for Drug Discovery. arXiv:1709.03741 [cs.LG]. [6]. Weihua Hu, Bowen Liu, Joseph Gomes, & Marinka Zitnik. Strategies for Pre-training Graph Neural Networks. arXiv:1905.12265v3 [cs.LG], 18 Feb 2020. [7]. Yasonik, J. (2020). Multiobjective de novo drug design with recurrent neural networks and nondominated sorting. J Cheminform 12, 14. doi.org/10.1186/s13321-020-00419-6. [8]. Jaeger, S., Fulle, S., & Turk, S. (2018). Mol2vec: Unsupervised Machine Learning Approach with Chemical Intuition. J. Chem. Inf. Model. 58, 27-35. [9]. Zhang, Y. F. et al. (2020). SPVec: A Word2vec-Inspired Feature Representation Method for Drug-Target Interaction Prediction. Front. Chem. 7, 1-11. [10]. Peluru Janardhana Rao (2022). An Experimental Study with Fuzzy-Wuzzy (Partial Ratio) for Identifying the Similarity between English and French Languages for Plagiarism Detection. International Journal of Advanced Computer Science and Applications, Vol. 13, Iss. 10. DOI:10.14569/IJACSA.2022.0131047. [11]. Chen, C., Ye, W., Zuo, Y., Zheng, C., & Ong, S. P. (2019). Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals. Chem. Mater. 31, 3564-3572. [12]. Ahmad, W., Tayara, H., Shim, H., & Chong, K. T. (2024). SolPredictor: Predicting Solubility with Residual Gated Graph Neural Network. Int. J. Mol. Sci. 25, 715. https://doi.org/10.3390/ijms25020715. [13]. Ahmad, W., Tayara, H., & Chong, K. T. (2023).
Attention-Based Graph Neural Network for Molecular Solubility Prediction. ACS Omega 8, 3236-3244. [14]. Li, Y., Hsieh, C.Y., Lu, R. et al. (2022). An adaptive graph learning method for automated molecular interactions and properties predictions. Nat Mach Intell 4, 645-651. https://doi.org/10.1038/s42256-022-00501-8. [15]. Chang, J., & Ye, J.C. (2024). Bidirectional generation of structure and properties through a single molecular foundation model. Nat Commun 15, 2323. https://doi.org/10.1038/s41467-024-46440-3. [16]. Zhou, G., Gao, Z., Ding, Q., Zheng, H., Xu, H., Wei, Z., et al. (2022). Uni-Mol: A Universal 3D Molecular Representation Learning Framework. ChemRxiv. doi:10.26434/chemrxiv-2022-jjm0j. [17]. Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V., & Leskovec, J. (2019). Strategies for Pre-training Graph Neural Networks. arXiv:1905.12265. | 6 | 1 | The proposed model employs a bidirectional LSTM architecture, which typically has a modest number of parameters compared to more complex models like transformers or very deep networks. Given the structure described, a rough estimate of 6 hours of training time on a standard single GPU is appropriate, taking into account the number of epochs needed to converge, based on prior work in the literature on LSTMs for similar tasks. The ClinTox dataset consists of 1491 compounds, which can be handled on a single GPU even with a modest batch size. Therefore, it is reasonable to conclude that training in under 8 hours on a single GPU is feasible. | yes | Yes | Bioinformatics | Accelerating Drug Safety Assessment using Bidirectional-LSTM for SMILES Data | 2024-07-08 0:00:00 | https://github.com/kvrsid/toxic | 1 | inside the repo as clintox.csv | Total 5 min for 100 epochs. | https://drive.google.com/file/d/1ut_cYbQzf3Pov5Xdu24TxA5WEEMucV-z/view?usp=sharing | Yes | I fixed 2 lines of code; see the comments in the Colab file. |
ImageNet-10 | DPAC | [] | Deep Online Probability Aggregation Clustering | 2024-07-07T00:00:00 | https://arxiv.org/abs/2407.05246v2 | [
"https://github.com/aomandechenai/deep-probability-aggregation-clustering"
] | {'Accuracy': '0.97', 'NMI': '0.925', 'ARI': '0.935', 'Backbone': 'ResNet-34'} | [
"NMI",
"Accuracy",
"ARI",
"Backbone",
"Image Size"
] | Given the following paper and codebase:
Paper: Deep Online Probability Aggregation Clustering
Codebase: https://github.com/aomandechenai/deep-probability-aggregation-clustering
Improve the DPAC model on the ImageNet-10 dataset. The result
should improve on the following metrics: {'Accuracy': '0.97', 'NMI': '0.925', 'ARI': '0.935', 'Backbone': 'ResNet-34'}. You must use only the codebase provided.
Deep Online Probability Aggregation Clustering. Yuxuan Yan, Na Lu⋆, and Ruofan Yan. Systems Engineering Institute, Xi'an Jiaotong University. yan1611@stu.xjtu.edu.cn, lvna2009@mail.xjtu.edu.cn, yanruofan@stu.xjtu.edu.cn Abstract. Combining machine clustering with deep models has shown remarkable superiority in deep clustering. It modifies the data processing pipeline into two alternating phases: feature clustering and model training. However, such alternating schedules may lead to instability and computational-burden issues. To tackle these problems, we propose a centerless clustering algorithm called Probability Aggregation Clustering (PAC), enabling easy deployment in online deep clustering. PAC circumvents the cluster center and aligns the probability space and distribution space by formulating clustering as an optimization problem with a novel objective function. Based on the computation mechanism of PAC, we propose a general online probability aggregation module to perform stable and flexible feature clustering over mini-batch data and further construct a deep visual clustering framework, deep PAC (DPAC). Extensive experiments demonstrate that DPAC remarkably outperforms the state-of-the-art deep clustering methods.1 Keywords: Deep Online Clustering · Unsupervised Learning · Fuzzy Clustering 1 Introduction Clustering analysis [3] is a widely explored domain in the field of unsupervised learning, aiming to group unlabeled samples into clusters that share common characteristics. Conventional machine clustering is favored by many researchers due to its significant interpretability and stable optimization. In recent years, deep clustering has received more attention due to its powerful representation-extraction capabilities. Previous deep clustering models [8,24,55,56] directly combine deep networks with machine clustering and utilize designed loss functions to guide both representation learning and clustering.
For example, DeepCluster [9] and PCL [34] decouple representation learning and clustering to leverage the offline pseudo-labels of K-means (KM) to cluster images. Unfortunately, these offline methods typically require running standard KM multiple times over the entire dataset, which brings much time and space complexity. (⋆Corresponding author. 1The code is available at https://github.com/aomandechenai/Deep-Probability-Aggregation-Clustering. arXiv:2407.05246v2 [cs.LG], 13 Jul 2024.) Besides, simply grouping data in batches instead of over the whole dataset to obtain online clustering causes collapsing and degradation issues. To address these problems, researchers have offered two dominant solutions: batch clustering and contrastive clustering. Batch clustering [20,30,38,57] focuses on modifying conventional machine clustering algorithms [59] to adapt to the data flow of deep models, which has high extensibility. For example, Online Deep Clustering (ODC) [57] decomposes the standard KM process into batch clustering with memory banks and optimizes the clustering and the network shoulder-to-shoulder (online) to facilitate stable learning. CoKe [42] proposes a moving-average strategy to reassign pseudo-labels and introduces Constrained K-means [7] into training to ensure a minimal cluster size and avoid collapsing. Most existing batch clustering approaches focus on center-based machine clustering algorithms, such as KM and fuzzy c-means (FCM) [6], which require specially designed center-update rules. Moreover, center-based machine clustering is easily susceptible to the influence of the cluster center [4,22]. Random initialization of cluster centers introduces instability to subsequent training. Partitioning based on nearest centers cannot provide fine-grained discrimination hyperplanes for clusters, affecting clustering performance. Recently, contrastive clustering [36,46,49,60] has achieved significant success in online deep clustering.
Contrastive methods perform online clustering by exploring multi-view correlations of data. Formally, instances are augmented into two views using random data augmentation to build contrastive frameworks, and the clustering process is then performed by minimizing a designed contrastive loss. For example, PICA [28] proposes a cluster-level contrastive loss on top of the contrastive framework to perform online deep clustering. However, establishing contrastive approaches requires substantial artificial knowledge, including data augmentation, hyperparameter settings, and model architecture. Contrastive models often need thousands of epochs to reach convergence. Besides, they make a balanced assumption for clustering (i.e., each cluster has the same number of samples), which requires additional regularization terms to constrain the optimization and avoid crash problems (i.e., a few clusters holding the majority of instances). The essence of contrastive clustering methods is to leverage the nearest-neighbor relationship of augmented instances in the semantic space to train the classifier without supervision. Such semantic nearest-neighbor learning only uses a portion of the data and its corresponding augmented version, failing to capture the global cluster relationship [13] and to encode the spatial embedding distribution.

In this work, considering the adverse effect of the cluster center, we first introduce a novel objective function quantifying the intra-cluster distances without cluster centers. Furthermore, inspired by fuzzy c-means, a concise optimization program is formulated by incorporating a fuzzy weighting exponent into the objective function. We then build a centerless machine clustering algorithm called Probability Aggregation Clustering (PAC). In the optimization program of PAC, the probability of a sample belonging to a cluster is aggregated across samples together with distance information in an iterative way.
Unlike KM, which assigns instances by cluster centers, PAC directly outputs probabilities, which is more stable and easier to deploy in deep models. Therefore, we extend PAC to the online probability aggregation module (OPA), a simple plug-in component for online deep clustering tasks. OPA seamlessly combines the calculation process of PAC with loss computation. It overcomes the disadvantages of both batch and contrastive clustering and implements efficient clustering. Besides, OPA does not impose any constraints on the size of clusters, mitigating the suboptimal solutions introduced by balanced clustering and obtaining more flexible partitioning. It computes clustering codes from batches of data and updates the network via KL divergence, which leaves out the complicated clustering steps and trains the model in a supervised manner. Based on the above theories, a deep image clustering model, Deep PAC (DPAC), is established, which ensures stable learning, global clustering, and superior performance. The major contributions of this work include:

– A novel centerless partition clustering method, PAC, is proposed to implement clustering by exploring the potential relation between sample distribution and assignment probability.
– An online deep clustering module, OPA, is developed based on PAC, which encodes spatial distances into online clustering without introducing many hyper-parameters and components. It leaves out cluster size constraints to perform flexible partitioning.
– A simple end-to-end unsupervised deep clustering framework, DPAC, is established for stable and efficient clustering. DPAC achieves significant performance on five challenging image benchmarks compared with state-of-the-art approaches.

2 Related Work

Deep Clustering: Deep clustering methods [12,18,46] combine representation learning with clustering through deep models.
ProPos [29] proposes the prototype scattering loss to make full use of K-means pseudo labels. DeepDPM [43] is a density-based approach that does not require a preset number of classes. Different from the above, recent deep clustering methods assume that the output is uniform. SwAV [10] and SeLa [5] adopt a balanced cluster discrimination task via the Sinkhorn-Knopp algorithm. SCAN [50] leverages K-nearest-neighbor information to group samples: its loss maximizes the agreement of assignments among neighbors, which inevitably needs an additional balanced cluster constraint to avoid trivial solutions. SeCu [41] employs a global entropy constraint to relax the balanced constraint to a lower-bound size constraint that limits the minimal size of clusters.

Machine Clustering: Machine clustering [11,27,33] tries to decompose the data into a set of disjoint clusters via machine learning algorithms. FCM [6] obtains soft cluster assignments by alternately updating the fuzzy partition matrix and the cluster centers. Many modified methods [33,37,51] aim at improving the performance and robustness of center-based clustering. In addition, non-parametric methods [21,23] have received more and more attention in recent years. FINCH [44] performs hierarchical agglomerative clustering based on first-neighbor relations without requiring a specific number of clusters. However, the complex clustering procedures involved in these algorithms hinder their easy deployment in neural networks.

3 Method

The following sections present the theoretical basis of our approach. We first derive a novel objective function and analyze how it relates to existing methods. Second, we present a scalable centerless clustering algorithm, PAC. Finally, we extend PAC to a novel online clustering module, OPA, and construct a novel online deep clustering model, DPAC, to learn the semantic knowledge of unlabeled data.
3.1 Objective Function

Let X = {x_1, x_2, ..., x_N} be an N-point dataset, where x_i ∈ R^{D×1} is the i-th D-dimensional instance. The clustering algorithm aims to divide X into K mutually disjoint clusters, where 2 ≤ K < N, K ∈ N. P = [p_{i,k}]_{N×K} is the soft partition matrix; p_{i,k} is the probability of sample x_i belonging to cluster k, which satisfies

P ∈ {Γ_{N×K} | γ_{i,k} ∈ [0,1], ∀i,k; \sum_{k=1}^{K} γ_{i,k} = 1, ∀i; 0 < \sum_{i=1}^{N} γ_{i,k} < N, ∀k}.

The cluster prediction of x_i is given by \hat{p}_i = \arg\max_k p_{i,k}, 1 ≤ k ≤ K. Different from existing classical center-based methods [6,53], we utilize the inner product of probability vectors instead of cluster centers to indicate the cluster relations of samples. Formally, we multiply the inner products with the corresponding distance measurements to quantify the global intra-cluster distance of the data. The objective function J_pac is defined as:

J_{pac} = \sum_{i=1}^{N} \sum_{j=1}^{N} p_i^T p_j \|x_i - x_j\|^2,   (1)

where p_i = [p_{i,1}, p_{i,2}, ..., p_{i,K}]^T is the probability vector. p_i^T p_j ∈ [0,1] can be regarded as the probability weight for \|x_i - x_j\|^2. By minimizing Eq. (1), p_i^T p_j becomes negatively related to \|x_i - x_j\|^2, which makes the probabilities of instances consistent with nearby samples, but not with distant samples.

3.2 Relation to Existing Methods

We provide a new perspective to further understand the proposed objective function, summarizing the difference between our method and Spectral Clustering (SC) [51] and SCAN [50]. The minimization problem for J_pac can be rewritten as:

\min_{P ∈ Γ_{N×K}} Tr(P^T \tilde{D}_x P),   (2)

where \tilde{D}_x is the distance matrix, \tilde{d}_{i,j} = \|x_i - x_j\|^2. Obviously, \tilde{d}_{i,j} can be replaced by many other distance measurements; we use the L2 distance as the default measure in the following experiments. The graph partitioning problem of SC is formulated as:

\min_{H ∈ R^{N×K}} Tr(H^T \tilde{L}_x H), s.t. H^T H = I,   (3)

where \tilde{L}_x is the Laplacian matrix of the graph.
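To make the objective concrete, here is a minimal NumPy sketch (our own illustration, not from the paper's codebase; the blob data and shapes are made up) that evaluates J_pac in the matrix form Tr(P^T D̃ P) of Eq. (2) and checks that a partition separating two well-separated blobs scores lower than an uninformative assignment:

```python
import numpy as np

def pac_objective(X, P):
    """J_pac = sum_{i,j} p_i^T p_j * ||x_i - x_j||^2 = Tr(P^T D P)."""
    # Pairwise squared L2 distances D~ (N x N).
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Tr(P^T D P) is the matrix form of the double sum in Eq. (1).
    return np.trace(P.T @ D @ P)

# Two well-separated blobs: the correct partition should score lower.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
good = np.repeat(np.eye(2), 20, axis=0)          # correct hard assignment
bad = np.tile(np.array([[0.5, 0.5]]), (40, 1))   # uninformative assignment
assert pac_objective(X, good) < pac_objective(X, bad)
```

The "good" partition pays only for small within-blob distances, while the uniform assignment also pays half of every large cross-blob distance, which is exactly the pressure that drives distant points into different clusters.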
The indicator matrix H contains arbitrary real values under an orthogonality constraint. The semantic clustering loss in SCAN can be reformulated as:

\max_{P ∈ Γ_{N×K}} \sum_{i=1}^{N} \sum_{j ∈ N_i} \log p_i^T p_j − λ H(P) ⇔ \max_{P ∈ Γ_{N×K}} Tr(P^T \tilde{A}_x P) − λ H(P),   (4)

where H(P) = \sum_{k=1}^{K} (\sum_{i=1}^{N} p_{i,k}/N) \log(\sum_{i=1}^{N} p_{i,k}/N), N_i is the K-nearest-neighbor set of instance i, \tilde{A}_x is the adjacency matrix with \tilde{a}_{i,j} = 1 when j ∈ N_i and \tilde{a}_{i,j} = 0 otherwise, and λ is a hyper-parameter. The second term H(P) in Eq. (4) denotes the balanced cluster constraint. Compared with Eq. (3), Eq. (2) transforms the partitioning problem in Euclidean space into a graph-cut problem. And different from the balanced partitioning in Eq. (4), we convert the maximization problem into a minimization problem to efficiently avoid trivial solutions. The intrinsic constraints of the probability matrix P enable J_pac to cluster directly without orthogonality or balanced constraints. Therefore, DPAC does not require additional clustering regularization terms [35,46,50] to avoid collapse and performs more flexible cluster assignment. Moreover, unlike grouping only by neighbors, J_pac introduces distance information into the optimization to obtain a global clustering.

3.3 Probability Aggregation Clustering

The proposed Eq. (2) is a constrained optimization problem. Inspired by FCM, we incorporate the fuzzy weighting exponent m into the objective function and obtain a scalable machine clustering algorithm based on the Lagrange method. The new objective function with m can be formulated as:

\tilde{J}_{pac} = \sum_{i=1}^{N} \sum_{j=1}^{N} φ(i,j) \tilde{d}_{i,j}, with φ(i,j) = \sum_{k=1}^{K} p_{i,k}^m p_{j,k},   (5)

where m ∈ (1, +∞). The corresponding Lagrange function is:

\tilde{L}_{pac} = \sum_{i=1}^{N} \sum_{j ≠ i} φ(i,j) \tilde{d}_{i,j} + \sum_{i=1}^{N} λ_i (1 − \sum_{k=1}^{K} p_{i,k}) − \sum_{i=1}^{N} \sum_{k=1}^{K} γ_{i,k} p_{i,k},   (6)

where λ_· and γ_{·,·} are the Lagrange multipliers for the sum constraint and the non-negativity constraint on P, respectively. The partial derivative of \tilde{L}_{pac} with respect to p_{i,k} should equal zero at the minimum:

∂\tilde{L}_{pac}/∂p_{i,k} = 2 \sum_{j ≠ i} m p_{i,k}^{m−1} p_{j,k} \tilde{d}_{i,j} − λ_i − γ_{i,k} = 0.
(7)

According to the Karush-Kuhn-Tucker conditions, we have 1 − \sum_{k=1}^{K} p_{i,k} = 0, γ_{i,k} p_{i,k} = 0, and γ_{i,k} ≥ 0, ∀i, k. For soft clustering, endpoints are generally unreachable during optimization, so we only consider the case p_{i,k} ∈ (0,1), γ_{i,k} = 0. Let α = 1/(m−1); it can be obtained from Eq. (7) that p_{i,k} = λ_i^α (2m \sum_{j≠i} p_{j,k} \tilde{d}_{i,j})^{−α}. Considering the sum constraint, λ_i^α \sum_{k=1}^{K} (2m \sum_{j≠i} p_{j,k} \tilde{d}_{i,j})^{−α} = \sum_{k=1}^{K} p_{i,k} = 1. By solving for λ_i and substituting it back into Eq. (7), we finally obtain:

p_{i,k} = s_{i,k}^{−α} / \sum_{r=1}^{K} s_{i,r}^{−α}, with s_{i,k} = \sum_{j≠i} p_{j,k} \tilde{d}_{i,j}.   (8)

Taking one element p_{i,k} as a variable and all remaining elements as constants, P can be iteratively updated with Eq. (8). s_{i,k} aggregates probabilities and distances to compute a score that x_i belongs to cluster k. In other words, PAC solves p_{i,k} through all other instances instead of through cluster centers. PAC only needs to initialize P with an approximately uniform distribution, i.e., p_{i,k} ≈ 1/K. Therefore, PAC circumvents the delicate cluster-center initialization problem caused by disparate data distributions in the feature space [4]. The detailed steps of PAC are summarized in Algorithm 1.

Algorithm 1: PAC Program
1 Input: dataset X; weighting exponent m; cluster number K; initialization P.
2 while not converged do
3   for i ← 1 to N do
4     for k ← 1 to K do
5       p_{i,k} ← Eq. (8)
6     end
7   end
8 end
9 Output: clustering result P

3.4 Online Probability Aggregation

A deep neural network x̂_i = f(I_i) maps data I_i to a feature vector x̂_i, and a classifier h maps x̂_i to a K-dimensional class probability p̂_i. We propose a novel online clustering module, OPA, which combines the optimization process of PAC with loss computation to generate pseudo labels step by step. Specifically, with B the size of the mini-batch in the current epoch, OPA has two alternating steps:

Target Computation: Sec. 3.3 gives the optimization program for a single variable; we extend it to matrix form to handle multiple variables.
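The update of Eq. (8) and the sweep structure of Algorithm 1 can be sketched in NumPy as follows. This is our own hedged illustration rather than the authors' implementation; the two-blob toy data, m = 1.1, and the sweep count are made up for the demo, and the row-wise rescaling by the minimum score is only a numerical-stability device (the normalized probabilities are invariant to it):

```python
import numpy as np

def pac(X, K, m=1.1, n_sweeps=100, seed=0):
    """Probability Aggregation Clustering: Gauss-Seidel sweeps of Eq. (8)."""
    N = X.shape[0]
    alpha = 1.0 / (m - 1.0)
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared L2 distances
    np.fill_diagonal(D, 0.0)                       # zero diagonal drops j = i
    rng = np.random.default_rng(seed)
    # Near-uniform initialization p_{i,k} ~ 1/K; tiny noise breaks symmetry.
    P = np.full((N, K), 1.0 / K) + rng.uniform(0, 1e-4, (N, K))
    P /= P.sum(axis=1, keepdims=True)
    for _ in range(n_sweeps):
        for i in range(N):                          # update one row at a time
            s = np.maximum(D[i] @ P, 1e-9)          # s_{i,k} = sum_j p_{j,k} d_{i,j}
            p = (s / s.min()) ** (-alpha)           # "scale up"; scale-invariant
            P[i] = p / p.sum()                      # normalize over clusters
    return P

# Two tight, well-separated 2-D blobs should end up in different clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, (10, 2)), rng.normal(10.0, 0.05, (10, 2))])
labels = pac(X, K=2).argmax(axis=1)
assert len(set(labels[:10])) == 1 and len(set(labels[10:])) == 1
assert labels[0] != labels[10]
```

Note that each row update immediately uses the freshly updated rows of P within the same sweep, mirroring the element-wise iteration of Algorithm 1; no cluster centers appear anywhere.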
Given the current model h∘f, the clustering score S ∈ R_+^{B×K} is calculated by:

S = \tilde{D}_{x̂} \hat{P}.   (9)

The target clustering code Q ∈ Γ_{B×K} is obtained by normalizing S: q_{i,k} = s_{i,k}^{−α} / \sum_{r=1}^{K} s_{i,r}^{−α}. We call the operation in Eq. (9) online probability aggregation. The probability outputs from the classifier are aggregated by matrix multiplication to compute the corresponding scores, which not only incorporates historical partitioning knowledge but also encodes distance information.

Self-labeling: Given the current target clustering code Q, the whole model h∘f is updated by minimizing the following KL divergence:

KL(Q ∥ \hat{P}) = \sum_{i=1}^{N} \sum_{k=1}^{K} q_{i,k} \log(q_{i,k} / \hat{p}_{i,k}).   (10)

Different from directly leveraging J_pac in Eq. (1) as the clustering loss, OPA trains the model in a supervised way instead of exactly solving the clustering problem in Eq. (2). The pseudo code of OPA is illustrated in Algorithm 2; it only involves a mini-batch matrix multiplication and power, so the computation cost of OPA equals that of a general loss.

Algorithm 2: Pseudo code for OPA in PyTorch style
1 Input: distance matrix D; probability matrix P; weighting exponent m.
2 S = torch.matmul(D.detach(), P)          # aggregate probability
3 S = torch.pow(S, -1/(m-1))               # scale up
4 Q = S / S.sum(1).view(-1, 1)             # normalize to 1
5 Output: (Q*log Q - Q*log P).sum(1).mean()   # KL divergence loss

3.5 Deep Probability Aggregation Clustering

With the proposed loss function, we construct an online deep clustering framework, DPAC, which has two heads: contrastive learning and online clustering. Let Î_i^1 and Î_i^2 denote the two-view features of Î_i generated by random image augmentation. We reformulate the standard contrastive loss of SimCLR [13] as a weighted contrastive loss (WCL) to mitigate the semantic distortion caused by negative samples.
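Algorithm 2 above can be made concrete without autograd. The NumPy sketch below is our own illustration (the toy batch, classifier probabilities, and m are made up): it computes the target code Q of Eq. (9) and the KL self-labeling loss of Eq. (10), and checks that Q is a sharpened, distance-consistent version of the classifier output P:

```python
import numpy as np

def opa_loss(D, P, m=1.03, eps=1e-12):
    """OPA step: aggregate scores (Eq. (9)), sharpen them into targets Q,
    and return the KL(Q || P) self-labeling loss (Eq. (10))."""
    alpha = 1.0 / (m - 1.0)
    S = D @ P                                # aggregate probability
    S = S / S.min(axis=1, keepdims=True)     # row rescale: Q is scale-invariant
    Q = S ** (-alpha)                        # scale up
    Q = Q / Q.sum(axis=1, keepdims=True)     # normalize to 1
    kl = (Q * np.log(Q + eps) - Q * np.log(P + eps)).sum(axis=1).mean()
    return Q, kl

# Toy batch: two tight pairs far apart, classifier only mildly confident.
x = np.array([0.0, 0.1, 10.0, 10.1])
D = (x[:, None] - x[None, :]) ** 2           # squared distances, zero diagonal
P = np.array([[0.6, 0.4], [0.6, 0.4], [0.4, 0.6], [0.4, 0.6]])
Q, kl = opa_loss(D, P)
assert (Q.argmax(axis=1) == np.array([0, 0, 1, 1])).all()  # targets agree with P
assert kl > 0.0                                            # Q is sharper than P
```

In training, the gradient would flow only through the log P term (D is detached in Algorithm 2), so minimizing the KL pushes the classifier toward the aggregated targets.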
The weighted contrastive loss ℓ(X̂^1, X̂^2, \hat{P}) is defined as:

− \sum_{i=1}^{N} \log \frac{\exp(ẑ_i^{1T} ẑ_i^2 / τ)}{\sum_{j≠i} ŵ_{i,j} \exp(ẑ_i^{1T} ẑ_j^1 / τ) + \sum_{j=1}^{N} ŵ_{i,j} \exp(ẑ_i^{1T} ẑ_j^2 / τ)},   (11)

where τ is the temperature hyper-parameter and ẑ_i is the normalized feature produced by the projector g, ẑ_i = g(x̂_i)/∥g(x̂_i)∥. ŵ_{i,j} = 1 − p̂_i^T p̂_j is a gating coefficient that filters out the negative samples belonging to the same cluster as x̂_i. In the pre-training step, due to the lack of cluster information, \hat{P} is set to uniform, p̂_{i,j} = 1/K, ∀i, j, and DPAC is pre-trained with the pairwise contrastive loss ½[ℓ(X̂^1, X̂^2, \hat{P}) + ℓ(X̂^2, X̂^1, \hat{P})]. Then, in the clustering step, the whole model is updated by minimizing the sum of the contrastive and clustering losses:

\min_{θ_{f,g,h}} ½[ℓ(X̂^1, X̂^2, \hat{P}) + ℓ(X̂^2, X̂^1, \hat{P})] + KL(Q ∥ \hat{P}^1),   (12)
\min_{θ_{f,g,h}} ½[ℓ(X̂^1, X̂^2, \hat{P}) + ℓ(X̂^2, X̂^1, \hat{P})] + (1/N) Tr(\hat{P}^{1T} \tilde{D}_{x̂} \hat{P}^1),   (13)

where θ_{f,g,h} are the parameters of the neural network, classifier, and projector, respectively. Eq. (12) is the deep clustering method based on OPA described in Sec. 3.4; Eq. (13) is the deep clustering method that directly minimizes J_pac from Sec. 3.1. The overall training procedure is shown in Algorithm 3. Moreover, for fair comparison in subsequent experiments, we also implement a self-labeling fine-tuning operation as in [36,47] to further improve clustering performance.

Algorithm 3: Training algorithm for DPAC
1 Input: image set I; clustering epochs E; batch size B; weighting exponent m.
2 for epoch ← 1 to E do
3   Sample a mini-batch {I_i}_{i=1}^B and conduct augmentations {I_i^1, I_i^2}_{i=1}^B;
4   Get {x̂_i, x̂_i^1, x̂_i^2, p̂_i, p̂_i^1}_{i=1}^B through forward propagation;
5   if OPA is chosen as the optimization objective then
6     Compute clustering codes {q_i}_{i=1}^B by Algorithm 2 with {x̂_i, p̂_i}_{i=1}^B;
7     Compute the overall loss L by Eq. (12) with {x̂_i^1, x̂_i^2, p̂_i, p̂_i^1, q_i}_{i=1}^B;
8   end
9   if J_pac is chosen as the optimization objective then
10    Compute the overall loss L by Eq.
(13) with {x̂_i^1, x̂_i^2, p̂_i, p̂_i^1}_{i=1}^B;
11  end
12  Update θ_f, θ_g, θ_h through gradient descent to minimize L;
13 end
14 Output: deep clustering model h∘f

Table 1: Dataset settings for our experiments.

Dataset            Sample  Class  Size     | Dataset        Sample  Class  Dimension
CIFAR-10 [32]      60,000  10     32×32    | Coil-100 [39]  7,200   100    49,152
CIFAR-100 [32]     60,000  20     32×32    | Isolet [16]    7,796   26     617
STL-10 [15]        13,000  10     224×224  | Pendigits [2]  10,992  10     16
ImageNet-10 [12]   13,000  10     224×224  | MNIST [19]     10,000  10     784
ImageNet-Dogs [12] 19,500  15     224×224  |

4 Experiment

Dataset: Four real-world datasets and five widely used natural image datasets are used to evaluate the clustering ability of PAC and DPAC. The details of the datasets are summarized in Tab. 1. For CIFAR-100, we use its 20 superclasses rather than 100 classes as the ground truth. For STL-10, its 100,000 unlabeled images are additionally used in the pre-training step of DPAC. ImageNet-10 and ImageNet-Dogs are subsets of ImageNet-1k. Clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI) are adopted to compare the clustering results.

4.1 Probability Aggregation Clustering

Hyperparameter and Method Setting: The effectiveness of the proposed PAC is verified by comparing it with multiple clustering methods on nine datasets. The m of PAC is set to 1.03 for all datasets. The threshold value of RCC [45] is set to 1. The weighting exponent m of FCM is set to 1.1 for real-world datasets and 1.05 for natural image datasets. We predefine K for all algorithms except FINCH [44]. All algorithms are initialized randomly and run 10 times; the mean and variance over the 10 runs are taken as the comparison results.

Algorithm Scalability: The clustering results on the real-world datasets, which consist of samples with varying numbers, classes, and dimensions, are summarized in Tab. 2. PAM and RCC time out due to the high dimensionality of Coil-100.
PAC outperforms all the compared clustering algorithms on Coil-100 and Isolet but is not as effective as RCC, which is specially designed for entangled data, on MNIST and Pendigits. The robustness and performance of PAC surpass center-based methods by a large margin. Moreover, we also provide clustering results on neural-network feature data in Tab. 3 to explore the ability of PAC to handle data extracted by neural networks. RCC experiences extreme performance degradation on such data, so we exclude it from this comparison. PAC also performs well in processing neural-network data. The improvement is not significant on CIFAR-100 and ImageNet-Dogs; one possible explanation is that these datasets exhibit only subtle differences between object classes, causing the pretrained representations to be indistinguishable.

Table 2: Clustering results (Avg±Std) and average time (s) of PAC on real-world datasets. The best and second-best results are shown in bold and underlined, respectively. Metric: ACC (%).

Method      Coil-100   Isolet     Pendigits  MNIST     | Avg time (s): Coil-100  Isolet  Pendigits  MNIST
KM [53]     56.4±1.7   52.7±4.5   67.0±4.7   53.0±3.6  | 98.1     0.2     0.05    0.07
PAM [33]    N/A        55.5±0.0   75.6±2.5   47.2±1.7  | N/A      341.9   141.6   124.0
FCM [6]     61.6±1.2   55.8±2.3   70.5±2.1   56.6±2.6  | 2001.5   8.6     0.9     0.6
SC [51]     58.2±0.7   53.5±2.5   62.4±4.2   54.6±2.2  | 11.7     3.4     5.8     6.2
SPKF [27]   59.7±1.3   55.2±2.0   71.4±4.4   53.9±2.7  | 101.6    0.6     0.07    0.2
RCC [45]    N/A        15.3±0.0   79.6±0.0   65.7±0.0  | N/A      122.8   6.9     6.9
FINCH [44]  56.4±0.0   47.5±0.0   62.7±0.0   57.9±0.0  | 15.1     0.5     0.05    0.05
PAC         65.1±1.5   61.8±0.0   78.0±0.0   59.7±3.6  | 5179.0   249.6   153.6   423.4

Table 3: Clustering results (Avg±Std) of PAC on deep features. Metric: ACC (%).
Method      CIFAR-10   CIFAR-100  STL-10    ImageNet-10  ImageNet-Dogs
KM [53]     76.8±6.8   41.8±1.7   66.8±4.3  76.8±6.8     41.8±1.7
PAM [33]    77.8±2.5   41.0±1.1   64.3±4.8  79.9±4.6     52.6±3.1
FCM [6]     75.9±2.1   42.3±0.7   66.6±4.7  75.9±2.1     42.3±0.7
SC [51]     83.5±0.0   40.0±1.1   63.8±2.9  82.9±1.3     47.6±1.4
SPKF [27]   75.9±5.7   42.9±1.9   65.8±5.5  80.6±7.6     49.1±3.8
FINCH [44]  49.2±0.0   32.0±0.0   42.9±0.0  52.6±0.0     43.8±0.0
PAC         87.1±0.0   43.8±0.7   74.9±2.6  95.8±0.0     47.3±3.9

Parameter Sensibility Analysis: We evaluate the sensitivity of the parameter m for both FCM and PAC on Pendigits. Fig. 1 reports the average ACC for m ranging from 1.01 to 2.00. Compared with FCM, PAC varies within a narrower band of results and has smaller variance, i.e., it is not sensitive to the parameter m.

Fig. 1: The effect of the weighting exponent m in PAC and FCM.

Time Complexity Analysis: The average calculation time of each algorithm is listed in Tab. 2. It takes O(N) time to calculate \sum_{j≠i} p_{j,k} \tilde{d}_{i,j} in Eq. (8), and PAC updates the entire P with NK such iterations, so the time complexity of PAC is O(N²K), i.e., quadratic in N.

Table 4: Performance comparison of deep clustering methods on five benchmarks. The best and second-best results are shown in bold and underlined, respectively. Metrics: NMI / ACC / ARI (%). Temi* incorporates extra ImageNet-1k data to pretrain the model, so we exclude it from the comparison. ¹ denotes online deep clustering methods, ² denotes offline deep clustering methods. "Cluster const." denotes a cluster size constraint.
Method (cluster const.)  CIFAR-10        CIFAR-100       STL-10          ImageNet-10     ImageNet-Dogs
PICA¹ [28] (✓)           59.1/69.6/51.2  31.0/33.7/17.1  61.1/71.3/53.1  80.2/87.0/76.1  35.2/35.2/20.1
PCL² [34]                80.2/87.4/76.6  52.8/52.6/36.3  41.0/71.8/67.0  84.1/90.7/82.2  44.0/41.2/29.9
IDFD² [48]               71.1/81.5/66.3  42.6/42.5/26.4  64.3/75.6/57.5  89.8/95.4/90.1  54.6/59.1/41.3
NNM¹ [18] (✓)            74.8/84.3/70.9  48.4/47.7/31.6  69.4/80.8/65.0  -               -
CC¹ [35] (✓)             70.5/79.0/63.7  43.1/42.9/26.6  76.4/85.0/72.6  85.9/89.3/82.2  44.5/42.9/27.4
GCC¹ [58] (✓)            76.4/85.6/72.8  47.2/47.2/30.5  68.4/78.8/63.1  84.2/90.1/82.2  49.0/52.6/36.2
TCC¹ [46] (✓)            79.0/90.6/73.3  47.9/49.1/31.2  73.2/81.4/68.9  84.8/89.7/82.5  55.4/59.5/41.7
SPICE¹ [40] (✓)          73.4/83.8/70.5  44.8/46.8/29.4  81.7/90.8/81.2  82.8/92.1/83.6  57.2/64.6/47.9
SeCu¹ [41] (✓)           79.9/88.5/78.2  51.6/51.6/36.0  70.7/81.4/65.7  -               -
Temi²* [1] (✓)           82.9/90.0/80.7  59.8/57.8/42.5  93.6/96.7/93.0  -               -
DPAC_Jpac¹ (Eq. (13))    81.2/89.0/79.1  48.3/50.2/34.4  81.8/89.7/80.0  90.1/96.0/91.1  51.9/53.9/38.9
DPAC_opa¹ (Eq. (12))     82.7/90.7/81.2  52.9/51.6/36.2  84.5/92.6/84.7  90.8/96.2/91.8  60.2/65.5/50.0

With self-labeling fine-tuning (†):
SCAN²† [50] (✓)          79.7/88.3/77.2  48.6/50.7/33.3  69.8/80.9/64.6  -               -
SPICE¹† [40] (✓)         86.5/92.6/85.2  56.7/53.8/38.7  87.2/93.8/87.0  90.2/95.9/91.2  62.7/67.5/52.6
TCL¹† [36] (✓)           81.9/88.7/78.0  52.9/53.1/35.7  79.9/86.8/75.7  87.5/89.5/83.7  62.3/64.4/51.6
SeCu¹† [41] (✓)          86.1/93.0/85.7  55.2/55.1/39.7  73.3/83.6/69.3  -               -
DPAC_opa¹†               87.0/93.4/86.6  54.2/55.5/39.3  86.3/93.4/86.1  92.5/97.0/93.5  66.7/72.6/59.8

4.2 Deep Probability Aggregation Clustering

Implementation Details: ResNet-34 [26] is used as the backbone network in DPAC to ensure a fair comparison. We employ the architecture of SimCLR [14] with an MLP clustering classifier. DPAC incorporates the image transformation of SimCLR as one view of augmentation and randomly selects four transformations from RandAugment [17] as the other view of augmentation.
We maintain a consistent set of hyperparameters (m = 1.03, τ = 0.5) across all benchmarks. The model is trained for 1,000 epochs in the pre-training step and 200 epochs in the clustering step. For self-labeling fine-tuning, we use a linear classifier and train the model as in [36]. The threshold is set to 0.95 for each dataset to select sufficient pseudo labels from the clustering classifier outputs. Adam [31] with a constant learning rate of 1×10⁻⁴ and a weight decay of 1×10⁻⁴ is employed. The batch size is set to 240, and the experiments are run on a single NVIDIA 4090 24G GPU.

Comparison with State of the Arts: The comparison of DPAC is presented in Tab. 4, where methods with additional cluster size constraints are marked. We have the following observations: (1) DPAC significantly surpasses the performance of SimCLR+PAC in Tab. 3 across all benchmarks. The accuracy of DPAC exceeds PAC by more than 10% on the CIFAR-100, STL-10, and ImageNet-Dogs benchmarks, which demonstrates the semantic learning ability of DPAC.

Table 5: Further analysis for DPAC.
(a) Comparison of different contrastive frameworks on CIFAR-10 (ACC %):
    SimCLR+OPA 89.7 | MoCo+OPA 86.5 | DPAC_opa 90.8
(b) Comparison of AE-based clustering methods on MNIST (NMI / ACC %):
    DEC [55] 86.7/88.1 | IDEC [24] 86.7/88.1 | EDESC [8] 86.2/91.3 | SSC [52] 95.0/98.2 | AE+OPA 90.3/95.4
(c) Effect of the clustering regularization (CR) term on CIFAR-10 (ACC %, w/ CR vs. w/o CR):
    SCAN [50] 85.7 / 0.1 | CC [35] 79.2 / 68.7 | GCC [58] 85.6 / 68.0 | DPAC_Jpac 88.0 / 89.0 | DPAC_opa 0.1 / 90.7

(2) Compared with DPAC_Jpac, DPAC_opa has better performance. We attribute this to the fact that the self-labeling manner of OPA alleviates the intrinsic bias introduced by the objective function of feature clustering. (3) Compared with deep clustering methods using offline K-means, such as IDFD [48] and PCL [34], DPAC has superior performance on all benchmarks due to the stable learning offered by the online manner.
(4) Compared with online contrastive clustering methods CC [35], TCC [46], and TCL [36], DPAC incorporates global spatial information to achieve fine-grained partitioning of cluster boundaries. (5) Compared with balanced clustering methods and the minimal-cluster-size constraint of SeCu [41], DPAC omits the clustering regularization term, is more concise, and outputs more flexible cluster assignments. (6) DPAC_opa† demonstrates the remarkable extensibility of our approach, showcasing its potential for integration with diverse deep modules.

Contrastive Framework Analysis: We further analyze our DPAC model from different perspectives, starting with the effect of the proposed contrastive learning. We replace the weighted contrastive loss in Eq. (13) with the standard contrastive loss, denoting this variant SimCLR+OPA, and we also run OPA on top of MoCo [25]. Conventional contrastive loss treats corresponding augmented samples as positive pairs and all others as negative pairs, which ignores the latent semantic structure between negative pairs and leads to the class collision issue [54]. Tab. 5a shows that our weighted contrastive loss alleviates the cluster collision problem and encodes cluster knowledge into contrastive representation learning.

Pretext Task Analysis: We study the effect of different pretext tasks combined with DPAC. An autoencoder (AE) is used as the architecture to demonstrate the universality of our module. The clustering results on MNIST are shown in Tab. 5b, which demonstrates that OPA can be combined with other self-supervised approaches. In particular, compared with the center-based IDEC [24] and SSC [52], our OPA does not require K-means to initialize the cluster layer and has higher scalability.

Balanced Constraint Analysis: We study the impact of balanced constraints in different deep clustering methods. Most existing online deep clustering methods [35,46,58] introduce an average-entropy clustering regularization (CR) term to balance the cluster distribution. The clustering regularization experiments are shown in Tab. 5c. Without (w/o) the CR term, SCAN classifies all samples into a single cluster, and CC and GCC fall into suboptimal solutions. Besides, if the CR term is too large in the total loss, it affects the clustering performance of these methods. It is noteworthy that DPAC avoids collapse without the CR term. The performance of DPAC_Jpac with (w/) the CR term becomes worse, which demonstrates the advantage of unconstrained clustering: no trade-off between trivial solutions and performance. And DPAC_opa with the CR term yields a uniform distribution with no predictive effect; the reason is that the CR constraint is too strong for the classifier to accumulate enough optimization information for OPA.

Table 6: Hyperparameter analysis of the exponent m in OPA. Metric: ACC (%).

Weight exponent m   1.01   1.04   1.07   1.1   1.13   1.16   1.2
α = 1/(m−1)         100.0  25.0   14.3   10.0  7.7    6.3    5.0
CIFAR-10            90.8   90.3   90.1   89.3  89.3   89.3   10.0
STL-10              92.4   92.1   92.0   91.8  90.7   10.0   10.0
CIFAR-100           50.2   51.1   51.0   5.0   5.0    5.0    5.0

Hyperparameter Analysis: As listed in Algorithm 2, the weight exponent m is the key hyperparameter of OPA; α = 1/(m−1) is the power applied to s_{i,k} that sharpens the clustering scores of Eq. (9) into distinguishable cluster assignments. The larger m becomes, the weaker the sharpening effect, so the model tends toward uniform assignments and clustering may fail due to insufficient scaling. The performance of OPA under different m settings is evaluated in Tab. 6. As features become less separable, the optimal range of m narrows. We therefore suggest setting m close to 1 to obtain a universal hyperparameter setting (m = 1.03 for all datasets).

Superiority of Online Clustering: We perform an offline clustering version of DPAC to facilitate a comparative analysis between online and offline clustering strategies.
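The sharpening role of α = 1/(m−1) described in the hyperparameter analysis above can be illustrated numerically. The snippet below is our own sketch, not the paper's code, and the score vector is made up:

```python
import numpy as np

def sharpen(s, m):
    """Turn aggregated scores s_{i,.} into probabilities as in Eq. (8):
    smaller m => larger alpha => sharper (more confident) assignments."""
    alpha = 1.0 / (m - 1.0)
    p = (s / s.min()) ** (-alpha)    # rescaling by the min is harmless:
    return p / p.sum()               # the normalized result is scale-invariant

s = np.array([1.0, 1.5, 2.0])        # hypothetical scores for K = 3 clusters
p_sharp = sharpen(s, m=1.03)         # alpha ~ 33: near one-hot output
p_soft = sharpen(s, m=2.0)           # alpha = 1: much flatter output
assert p_sharp.argmax() == p_soft.argmax() == 0   # same winner (smallest s)
assert p_sharp[0] > p_soft[0]        # small m concentrates mass on the winner
```

This mirrors the trend in Tab. 6: with m too large the output stays near uniform and clustering can fail, while m close to 1 yields decisive targets.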
We use KM, FCM, and PAC to compute offline codes of all samples for Eq. (10) every 1, 10, and 200 epochs. The performance and training duration are reported in Tab. 7. The performance of KM and FCM gradually deteriorates as the update frequency decreases, whereas DPAC_opa exhibits superior performance and lower time cost.

We recorded the accumulated errors during DPAC + offline PAC training to analyze the error accumulation issue, running offline PAC every 10 epochs. As depicted in Fig. 2, errors (samples the network classifies correctly while offline clustering classifies incorrectly) are introduced by offline clustering every 10 epochs and continue to accumulate throughout training. This demonstrates that our OPA module effectively mitigates the performance degeneration and error accumulation issues and performs stable and efficient clustering.

Table 7: Comparison of online and offline DPAC on STL-10. Metrics: Hours / ACC (%).

Method               200 offline runs  20 offline runs  1 offline run
DPAC + offline KM    6.3 / 73.7        3.0 / 72.0       2.0 / 69.3
DPAC + offline FCM   8.5 / 78.7        3.3 / 77.5       2.0 / 68.4
DPAC + offline PAC   52.2 / 83.7       6.7 / 87.5       2.4 / 81.3
DPAC_opa             2.0 / 92.6

Fig. 2: Training process and error accumulation of online and offline DPAC on STL-10.

5 Conclusion

A novel machine clustering method, PAC, which requires no cluster centers, was proposed from a new perspective; it addresses the shortcomings of center-based clustering approaches and is well suited for integration with deep models. A theoretical model and an elegant iterative optimization solution for PAC have been developed. PAC implements clustering through sample probability aggregation, which makes computation based on partial samples possible.
Therefore, an online deep clustering framework, DPAC, has been developed, which has no constraints on cluster size and can perform more flexible clustering. Experiments on several benchmarks verified the effectiveness of our proposal.
Deep Online Probability Aggregation Clustering 15
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. U22B2036). | 6 | 1 | The DPAC model uses a deep learning framework that incorporates modern neural network architectures suitable for deep clustering tasks, similar to models typical in the domain (such as SimCLR). The datasets used (CIFAR-10, CIFAR-100, etc.) are of moderate size, and the paper mentions various datasets with up to 60,000 samples.
Given batch sizes in deep learning ranging from 32 to 256, and considering the number of epochs (typically 100 to 300 for convergence in contrastive or clustering approaches), a rough estimate for training could reach up to 6 hours on a single high-memory GPU (such as an NVIDIA RTX 3090). This estimate accounts for model complexity, dataset size, and empirical benchmarks from similar models in the literature. There was no specific mention of distributed training, but given the architecture, the model should be trainable on a single GPU, allowing training to fit within an 8-hour window. | yes | Yes | CV | Deep Online Probability Aggregation Clustering | 2024-07-07 0:00:00 | https://github.com/aomandechenai/deep-probability-aggregation-clustering | 1 | Downloads the CIFAR-10 dataset in the preprocessing step | 36 hours just for pre-training. | https://drive.google.com/file/d/1-nXU0RbPPY9WObax53y0CrfOoQ-6cry4/view?usp=sharing | Yes | -- Need to change some lines in pre_train.py. The changed code is in the Colab notebook as comments. Takes 36 hours for just 1 epoch, as shown during training. May run with enough resources. |
CAT2000 | SUM | [] | SUM: Saliency Unification through Mamba for Visual Attention Modeling | 2024-06-25T00:00:00 | https://arxiv.org/abs/2406.17815v2 | [
"https://github.com/Arhosseini77/SUM"
] | {'KL': '0.27'} | [
"KL"
] | Given the following paper and codebase:
Paper: SUM: Saliency Unification through Mamba for Visual Attention Modeling
Codebase: https://github.com/Arhosseini77/SUM
Improve the SUM model on the CAT2000 dataset. The result
should improve on the following metrics: {'KL': '0.27'}. You must use only the codebase provided.
| SUM: Saliency Unification through Mamba for Visual Attention Modeling
Alireza Hosseini*,1 Amirhossein Kazerouni*,2,3,4 Saeed Akhavan1 Michael Brudno2,3,4 Babak Taati2,3,4
1University of Tehran 2University of Toronto 3Vector Institute 4University Health Network
{arhosseini77, s.akhavan}@ut.ac.ir, {amirhossein, brudno}@cs.toronto.edu, babak.taati@uhn.ca
Abstract
Visual attention modeling, important for interpreting and prioritizing visual stimuli, plays a significant role in applications such as marketing, multimedia, and robotics. Traditional saliency prediction models, especially those based on Convolutional Neural Networks (CNNs) or Transformers, achieve notable success by leveraging large-scale annotated datasets. However, the current state-of-the-art (SOTA) models that use Transformers are computationally expensive. Additionally, separate models are often required for each image type, lacking a unified approach. In this paper, we propose Saliency Unification through Mamba (SUM), a novel approach that integrates the efficient long-range dependency modeling of Mamba with U-Net to provide a unified model for diverse image types. Using a novel Conditional Visual State Space (C-VSS) block, SUM dynamically adapts to various image types, including natural scenes, web pages, and commercial imagery, ensuring universal applicability across different data types. Our comprehensive evaluations across five benchmarks demonstrate that SUM seamlessly adapts to different visual characteristics and consistently outperforms existing models. These results position SUM as a versatile and powerful tool for advancing visual attention modeling, offering a robust solution universally applicable across different types of visual content. Our code and pretrained models are available at https://github.com/Arhosseini77/SUM. 1.
Introduction
Visual attention is a critical function of the human visual system, enabling the selection of the most relevant information in a visual scene [32]. Modeling of this mechanism, known as saliency prediction, plays pivotal roles in numerous applications such as marketing [22, 31], multimedia [50], computer vision [52], and robotics [9].
(*Equal contribution)
Deep learning models have succeeded in saliency prediction by exploiting large-scale annotated datasets [3, 30]. Typically, these models employ a pre-trained object recognition network for feature extraction [38], with the U-Net architecture as a popular choice. Most methods employ Convolutional Neural Networks (CNNs) to construct encoders and decoders for latent features, which generate visual saliency maps [6, 14, 23, 26, 36, 62]. Recurrent architectures, such as Long Short-Term Memory (LSTM) networks, are also sometimes used to model both local and long-range visual information [11, 46], enhancing the accuracy of saliency predictions. More recently, the use of Transformer-based models has led to significant improvements, achieving SOTA performance in saliency prediction by learning spatial long-range dependencies [13, 20, 22, 41, 48]. However, the computational demands of the standard self-attention mechanism used in these methods, which scales quadratically with image size, present a substantial challenge, especially for dense prediction tasks like saliency modeling.
Moreover, a significant limitation within current saliency prediction models lies in their design specificity for singular visual contexts. Saliency maps, and consequently the models that generate them, need to be adapted to the unique characteristics of different types of images. For example, in natural scenes, the visual attention of viewers may be driven largely by elements like color and movement, whereas in e-commerce images, textual information typically attracts more attention [29].
Similarly, in user interface (UI) designs, the upper-left quadrant often attracts more attention due to common eye movement patterns and a left-to-right viewing bias [31]. Although there are robust models tailored for specific datasets, such as those optimized for commercial imagery [22] or UIs [31], research on the development of universally applicable models that can effectively handle the diverse requirements of various image types remains limited. This gap underscores the necessity for a model that can be universally performant across all image types and saliency datasets, thus providing a more comprehensive solution to the field of saliency prediction.
To address the challenges outlined above, we leverage the capabilities of State Space Models (SSMs) [34] as used in Mamba [18, 47], and introduce a novel unified Mamba-U-Net-based model for visual saliency prediction. Models like Mamba capture long-distance dependencies with linear computational complexity. Inspired by these successes, we propose Saliency Unification through Mamba (SUM), which uses Mamba to efficiently capture long-range information. To ensure universal applicability across diverse image types, we incorporate a novel Conditional Visual State Space (C-VSS) block in our design. This component effectively separates the distributions of different data types, making the model robust across various modalities. It allows SUM to dynamically adapt to the distinct visual characteristics found in natural scenes, e-commerce imagery, and UIs. Validation of SUM on six large-scale datasets across different visual contexts confirms its exceptional adaptability and strong performance, positioning it as a potent tool in the advancement of visual attention modeling. These attributes make SUM a valuable tool for a range of applications in visual saliency prediction.
The main contributions of this work are summarized as follows:
• A novel, efficient, class-conditional unified model is proposed that employs Mamba to capture long-range visual information efficiently with linear computational complexity.
• A conditional component dynamically adapts the model's behavior at test time through shift and scaling mechanisms, enhancing the adaptability of the model to various visual contexts.
• SUM is extensively evaluated on six diverse benchmark datasets, including natural scenes with gaze or mouse ground-truth labels, web pages, and commercial images, consistently demonstrating superior or competitive performance against previous SOTA models.
2. Related work
Saliency Prediction: Saliency prediction models are designed to identify areas within an image or video that capture human visual attention. Initially inspired by biological insights, these models historically used contrasts in color, intensity, and orientation, or low-level, hand-designed features to mimic human visual perception. This approach was based on cues from studies of how humans prioritize visual information [17, 27, 55].
With the advent of deep learning and the availability of large-scale eye-tracking datasets [3, 28, 30], there has been a shift towards applying deep neural networks to the problem of saliency prediction. This shift was marked by significant improvements in the accuracy and reliability of saliency models [61]. Kummerer et al. [38] demonstrated that leveraging pretrained networks, originally designed for object recognition tasks, could enhance the performance of saliency prediction models. This insight paved the way for subsequent models such as EML-Net [26], DeepGaze II [39], and SALICON [25], which incorporated pretrained CNN encoders to enhance the prediction of saliency maps. Beyond the use of pretrained CNNs, researchers have explored various other network architectures for saliency prediction.
These include fully convolutional networks (FCNs) [37], generative adversarial networks (GANs) [7, 51], and convolutional long short-term memory networks (ConvLSTM) [28]. Attention mechanisms and Transformer models [60], which show remarkable success in various vision tasks [20], have also been applied to saliency prediction. Models like VGG-SSM [11] and TranSalNet [48] incorporate self-attention modules and transformer-based methods, respectively. These approaches highlight the growing interest in leveraging advanced architectures that go beyond traditional CNNs to improve saliency prediction.
Saliency prediction has also expanded to cover diverse types of data beyond natural scenes, including commercial advertisements [29, 40, 42] and user interfaces [31, 58]. This diversification has led to specialized models that address unique dataset challenges. For instance, Kou et al. [35] proposed a method for integrating confidence scores into saliency predictions for advertising images, enhancing both robustness and performance. Similarly, Jiang et al. [29] introduced salient Swin-Transformers and incorporated text detection techniques into their models, demonstrating the potential of combining various data modalities to improve prediction accuracy. Following this trend, Hosseini et al. [22] proposed a model that combines pretrained CNNs and transformers with a text-map detection module for advertising saliency prediction.
Mamba: Recent advancements in SSMs, particularly with the development of the Mamba model [18], have significantly changed the landscape of computational modeling. This model offers a promising alternative to traditional attention-based models. Introduced by Gu et al. [18], Mamba achieves linear computational complexity with respect to input size and is effective at capturing long-distance dependencies. This innovation has led to its broad application in fields such as language understanding and vision tasks [24, 47, 53, 54, 65, 67].
The development of vision-specific SSMs such as Vision Mamba [67] and VMamba [47] has marked a significant step in SSM development. Notable examples include U-Mamba [49], which combines SSMs with CNNs for medical image segmentation. SegMamba [63] integrates SSMs in its encoder and uses a CNN-based decoder for 3D brain tumor segmentation. VM-UNet [57] explores a purely SSM-based approach in this area. Other models, like LightM-UNet [43], stand out for their efficiency, outperforming previous medical segmentation models with fewer parameters. Additionally, Mamba's versatility is demonstrated in video-based applications such as video medical segmentation [66] and understanding [8]. However, the use of Mamba in saliency map prediction is still largely unexplored.
Unified Models: Unified models in visual saliency prediction have made significant advancements in integrating image and video saliency within a single framework. The UNISAL model [14] is an example that addresses the integration of image and video saliency through domain adaptation. However, while UNISAL is a lightweight model, it is not a universal model for all image saliency datasets. It primarily relies on the Salicon dataset [30] for image saliency prediction, and its performance on this dataset has been outperformed by other models over time. Furthermore, UNISAL's universal model does not include diverse image types, limiting its applicability. As another notable model, UniAR [41] focuses on image-based saliency prediction and incorporates a multimodal transformer to capture diverse human behaviors across various visual content and tasks. While it encompasses UI and natural-scene images, it overlooks e-commercial images, which have become increasingly important in recent years [22, 29]. Additionally, UniAR's complexity is highlighted by its model size, with 848M parameters, making it computationally demanding and potentially limiting its practical use. Despite advancements in existing unified saliency models, there is still a significant gap in developing efficient, comprehensive models that effectively address real-world needs across diverse image types while maintaining manageable complexity.
3. Proposed Method
This section provides an overview of the proposed network architecture, as shown in Figure 1(a). Next, we revisit the concept of VSS as introduced by Liu et al. [47]. Building on this foundation, we introduce our novel C-VSS module and a conditional Mamba-U-Net-based model for visual saliency prediction.
[Figure 1. (a) Overview of our SUM model, (b) conditional U-Net-based model for saliency prediction, and (c) C-VSS module. The panels show the patch embedding, patch merging, and patch expanding stages with VSS/C-VSS blocks, the saliency head with class feedback, and the C-VSS shift-and-scale pathway (LayerNorm, Linear, DW-Conv, SiLU, SS2D, MLP) for natural, commercial, and UI images.]
3.1. Model Architecture
The architecture of SUM, as illustrated in Figure 1(b), adopts a U-Net configuration. The process initiates with an input image X ∈ R^{H×W×3}, with spatial dimensions H and W and 3 channels, which undergoes an initial transformation via a patch embedding module, reducing its dimensions to H/4 × W/4 × C. The encoder module generates four hierarchical output representations. Each stage is followed by a downsampling layer, which reduces the spatial dimensions by half while simultaneously doubling the number of channels. The decoder comprises four stages of C-VSS layers, with each stage incorporating two blocks, except for the final stage, which contains a single block. Patch-expanding layers are then applied to achieve resolution upsampling while also decreasing the channel dimensions by a factor of 2.
Finally, a linear layer is responsible for generating the ultimate output. Our SUM architecture uses the VMamba [47] weights pre-trained on ImageNet [12]. This pre-training accelerates the learning process, improves the model's ability to detect salient regions more accurately, and ensures better generalization on diverse images.
3.2. Visual State Space (VSS)
Mamba [18] employs SSMs [19] to shift the complexity of attention from quadratic to linear in long-sequence modeling. This has proven particularly beneficial in vision tasks due to its higher accuracy, reduced computational load, and lower memory requirements [67]. However, adapting Mamba's inherently 1D, causal scanning to 2D images presents challenges due to its restricted receptive field and inability to process unscanned data effectively. To address these issues, VMamba [47] introduces the Cross-Scan Module, which employs bidirectional scanning along the horizontal and vertical axes. This module expands the image into sequences of patches scanned in four directions, enabling each pixel to integrate information from all directions. Subsequently, these sequences are reassembled into the original 2D format to form a complete image. Termed the 2D-Selective-Scan (SS2D), this method extends Mamba's functionality to 2D spatial processing, ensuring both local and global spatial relevance. Building upon these insights, we incorporate the VSS block as the fundamental unit in SUM. As shown in Figure 1(c), the VSS module can be formulated as:
X = LN_1(F),
Attention = LN_2(SS2D(SiLU(DW-Conv(Linear(X))))),
Output = Linear(SiLU(Linear(X)) ⊗ Attention) + F,   (1)
where the input feature is denoted by F ∈ R^{H'×W'×C'}. The operator ⊗ denotes an element-wise product, LN represents LayerNorm, DW-Conv stands for depth-wise convolution, and SiLU [15] is an activation function.
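A minimal NumPy sketch of the data flow in Eq. (1) follows. The `linear_in`, `dw_conv`, `ss2d`, `linear_gate`, and `linear_out` arguments are hypothetical stand-ins for the learned projections, depth-wise convolution, and 2D-selective-scan; identity functions are used in the demo purely to illustrate shapes and structure, not the real computation.

```python
import numpy as np

def silu(x):
    # SiLU activation: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-5):
    # Normalize over the channel (last) dimension, as LayerNorm does here.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def vss_block(F, linear_in, dw_conv, ss2d, linear_gate, linear_out):
    """Data flow of Eq. (1); the callables are stand-ins for learned layers."""
    X = layer_norm(F)                                     # X = LN1(F)
    attn = layer_norm(ss2d(silu(dw_conv(linear_in(X)))))  # Attention = LN2(SS2D(...))
    gate = silu(linear_gate(X))                           # SiLU(Linear(X))
    return linear_out(gate * attn) + F                    # element-wise product + residual

# Demo with identity placeholders: output keeps the input's H' x W' x C' shape.
ident = lambda x: x
F = np.random.default_rng(0).normal(size=(8, 8, 16))
out = vss_block(F, ident, ident, ident, ident, ident)
print(out.shape)  # (8, 8, 16)
```

The residual addition `+ F` at the end is what lets the block be stacked safely in both the encoder and decoder.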
3.3. Conditional Visual State Space (C-VSS)
We enhance the model's adaptability to diverse visual content by conditioning the VSS block in the decoder on the input type. This is crucial for predicting saliency maps effectively, as different content types inherently attract viewer attention in distinct ways. For instance, natural scenes may draw attention through color and movement, e-commerce images through textual information, and UI designs through specific layout patterns such as the upper-left quadrant. To address these variations, we modulate the feature map through dynamic scaling and shifting operations that adjust feature activations based on the input type. The modulated feature map can be generally defined as:
Modulated Feature Map = α_i ⊙ F + β_i
where F denotes the original feature map, α is a scaling factor, β is a shifting factor, and ⊙ is element-wise multiplication. To refine our model's ability to handle different data types effectively, we define T = 4 learnable tokens, where D represents the dimensionality of each token. Each token is designated to capture distinct information about one of the following data categories: Natural Scene-Mouse, Natural Scene-Eye, E-Commerce, and UI. These tokens provide a more nuanced mechanism than a simple one-hot encoding of data types, enabling the model to adapt and learn detailed, type-specific information. We have allocated two tokens to natural-scene data because different methodologies, eye and mouse tracking, are used in data collection for these categories. Grouping them into a single token could potentially confuse the model during inference. As discussed in [59], mouse tracking data is less consistent and more scattered than eye tracking data, and does not fully align with the eye tracking data distribution, particularly in terms of different contextual regions.
Furthermore, while mouse tracking data can lead to acceptable outcomes for training existing models, it is less reliable for model selection and evaluation. Based on these insights and our experiments, we differentiate the mouse and eye data of natural scenes. Subsequently, the relevant token is fed into a Multi-Layer Perceptron (MLP) to ensure that learning is conditioned on the specific characteristics of each data type. The MLP is composed of K hidden layers with p_1, p_2, ..., p_K features per layer. It is designed to regress the parameters α_i and β_i, which modulate the model based on the diversity of inputs. The MLP, defined as g(z; θ): R^{4×D} → R^{4×5}, outputs a matrix Y, with each row representing one of the four input tokens and generating five key parameters. These parameters include pairs and individual instances of α_i and β_i, specifically {(α_1, β_1), (α_3), (α_2, β_2)}. An input label L determines the selection of the relevant row from Y, resulting in the output vector S = Y_L. This 1×5 vector contains modulation parameters finely tuned to the specifics of the designated input. These parameters are then integrated into the model to modify its behavior dynamically: (α_1, β_1) shift and scale LN_1, (α_3) adjusts the scaling of the SS2D block to regulate feature intensity, and (α_2, β_2) shift and scale LN_2. This enables the MLP to precisely control the normalization and scaling within the model, thereby enhancing its performance and generalization across different visual content types.
3.4. Loss Function
Our model utilizes a composite loss function inspired by [2, 14, 48] in visual saliency prediction. This function integrates five distinct components, each designed to optimize the prediction accuracy of saliency maps by targeting different aspects of the saliency prediction task.
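Before detailing the individual loss terms, the C-VSS conditioning of Sec. 3.3 can be summarized in a small sketch: four learnable tokens pass through a regressor standing in for g(z; θ), and the row selected by the input label supplies the five modulation parameters. The single random weight matrix below is an assumption replacing the paper's three-layer MLP, and the (α_1, β_1, α_2, β_2, α_3) ordering is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 4, 128  # tokens: Natural Scene-Mouse, Natural Scene-Eye, E-Commerce, UI
tokens = rng.normal(size=(T, D))
W = rng.normal(size=(D, 5)) * 0.02  # stand-in for the MLP g(z; theta): R^{4xD} -> R^{4x5}

def select_params(label):
    """Pick the modulation row S = Y_L for data-type label L in {0, 1, 2, 3}."""
    Y = tokens @ W                    # (4, 5): one parameter row per data type
    a1, b1, a2, b2, a3 = Y[label]     # assumed ordering of the five parameters
    return a1, b1, a2, b2, a3

def shift_scale(F, alpha, beta):
    # Modulated feature map: alpha * F + beta (applied around LN1, SS2D, LN2).
    return alpha * F + beta

a1, b1, a2, b2, a3 = select_params(2)  # e.g. an e-commerce image
F = rng.normal(size=(8, 8, 16))
F_mod = shift_scale(F, a1, b1)         # shift-and-scale around LN1
print(F_mod.shape)  # (8, 8, 16)
```

At test time only the row lookup changes per input type, which is why the same trained network can serve all four data categories.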
The loss function is formulated as:
Loss = λ_1 · L_KL(s^g, s) + λ_2 · L_CC(s^g, s) + λ_3 · L_SIM(s^g, s) + λ_4 · L_NSS(f^g, s) + λ_5 · L_MSE(s^g, s)   (2)
where s^g represents the ground-truth saliency map, f^g denotes the ground-truth fixation map, and s is the network's predicted saliency map. Each component of the loss function serves a specific purpose, as defined in the following.
Kullback-Leibler Divergence (KL): KL divergence measures the dissimilarity between the predicted and ground-truth distributions, penalizing the model when its predictions deviate significantly from the actual data distribution:
L_KL(s^g, s) = Σ_{i=1}^{n} s^g_i · log(ε + s^g_i / (s_i + ε)),   (3)
where the regularization constant ε is set to 2.2×10^{-16}.
Linear Correlation Coefficient (CC): The correlation coefficient assesses the linear relationship between the predicted and ground-truth saliency maps. A higher correlation indicates that the model's predictions align well with the ground-truth trends, improving the reliability of the saliency maps:
L_CC(s^g, s) = cov(s^g, s) / (σ(s^g) · σ(s)),   (4)
where cov(·) represents the covariance and σ(·) denotes the standard deviation.
Similarity (SIM): SIM evaluates the overlap between the predicted and actual saliency maps, emphasizing the importance of accurately predicting the salient regions:
L_SIM(s^g, s) = Σ_{i=1}^{n} min(s^g_i, s_i)   (5)
Normalized Scan-path Saliency (NSS): NSS measures the correlation between the normalized predicted saliency map and the actual fixation points, highlighting the model's effectiveness at capturing human attention patterns:
L_NSS(f^g, s) = (1 / Σ_i f^g_i) · Σ_i ((s_i − μ(s)) / σ(s)) · f^g_i   (6)
Mean Squared Error (MSE): This component calculates the mean squared error between the predicted and actual saliency maps, directly penalizing inaccuracies in the pixel-wise saliency values.
By adjusting the weighting coefficients λ_i (i = 1, ..., 5), we aim to minimize the dissimilarity metrics (KL, MSE) and maximize the similarity metrics (CC, SIM, NSS).
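The five components above can be written compactly in NumPy. This is a sketch mirroring Eqs. (2)–(6) on maps normalized to sum to one; the actual codebase implements them as PyTorch losses, and the KL term is written here in the standard saliency-loss form with the same ε.

```python
import numpy as np

EPS = 2.2e-16  # regularization constant from Eq. (3)

def kl(sg, s):
    # Kullback-Leibler divergence between ground-truth and predicted maps.
    return np.sum(sg * np.log(EPS + sg / (s + EPS)))

def cc(sg, s):
    # Linear correlation coefficient: cov(sg, s) / (sigma(sg) * sigma(s)).
    a, b = sg - sg.mean(), s - s.mean()
    return (a * b).mean() / (a.std() * b.std())

def sim(sg, s):
    # Similarity: histogram intersection of the two saliency maps.
    return np.sum(np.minimum(sg, s))

def nss(fg, s):
    # Normalized scan-path saliency at the binary fixation locations fg.
    return np.sum(((s - s.mean()) / s.std()) * fg) / np.sum(fg)

def mse(sg, s):
    return np.mean((sg - s) ** 2)

def total_loss(sg, fg, s, lams=(10, -2, -1, -1, 5)):
    # Eq. (2) with the paper's weights: minimize KL/MSE, maximize CC/SIM/NSS.
    l1, l2, l3, l4, l5 = lams
    return (l1 * kl(sg, s) + l2 * cc(sg, s) + l3 * sim(sg, s)
            + l4 * nss(fg, s) + l5 * mse(sg, s))
```

For a perfect prediction (s = s^g), KL and MSE vanish while CC reaches 1, which is exactly the behavior the signs of the λ_i weights reward.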
This strategy ensures that the model predicts accurate saliency maps and closely aligns with human visual attention patterns and saliency distributions.

Table 1. Comprehensive compilation of datasets used for training and testing.

Dataset | Image domain | Acquisition Type | # Images | Image Resolution | # Training Samples
Salicon [30] | Natural scene | Mouse | 15,000 | 640×480 | 10,000
MIT1003 [33] | Natural scene | Eye | 1,003 | Varied | 904
CAT2000 [3] | Natural scene | Eye | 2,000 | 1080×1920 | 1,600
OSIE [64] | Natural scene | Eye | 700 | 800×600 | 500
U-EYE [31] | Web page | Eye | 1,979 | Varied | 1,583
SalECI [29] | E-Commercial | Eye | 972 | 720×720 | 871

4. Experiments

Datasets: We leverage six large-scale benchmark datasets for training and evaluating our models, as outlined in Table 1, which lists these datasets along with specific details about each.

Evaluation Metrics: To assess the accuracy of predicted saliency maps, we use two types of metrics, location-based and distribution-based, following [5]. Location-based metrics, such as NSS and AUC (Area Under the ROC Curve), evaluate predictions against a binary fixation map as ground truth and focus on specific salient locations. Distribution-based metrics, including CC (Correlation Coefficient), SIM (Similarity), and KLD (Kullback-Leibler Divergence), use a grayscale saliency map to measure the similarity between the predicted and actual distributions. Higher values generally indicate better performance for all metrics except KLD, where a value closer to zero signifies a more accurate prediction.

Implementation Details: Our model is implemented in the PyTorch framework and trained on an A40 GPU with 48 GB of memory for 15 epochs, with early stopping after 4 epochs without improvement. We optimize the network using the Adam optimizer. The learning rate is initially set to 1×10^{−4}, and a learning rate scheduler decreases it by a factor of 0.1 every four epochs. The batch size is set to 16.
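Under the reported hyperparameters, the optimizer, scheduler, and early-stopping loop can be sketched as follows. The model and the per-epoch validation loss are placeholders for illustration, not the actual SUM network or training loop:

```python
import torch
from torch import nn, optim

# Reported settings: Adam, lr 1e-4, x0.1 decay every 4 epochs, 15 epochs,
# early stopping with patience 4. `model` is a stand-in module.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
optimizer = optim.Adam(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1)

best_val, patience, bad_epochs = float("inf"), 4, 0
for epoch in range(15):
    # ... train one epoch, then evaluate on the validation split ...
    val_loss = 1.0 / (epoch + 1)  # placeholder value for illustration
    scheduler.step()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop after 4 epochs without improvement
            break
```

After twelve epochs the learning rate has decayed three times, from 1e-4 to roughly 1e-7.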
Additionally, we resize all data and labels to a resolution of 256×256 and combine the training data from all six datasets for model training. The optimal values for the loss-function weighting coefficients λ_i are λ_1 = 10, λ_2 = −2, λ_3 = −1, λ_4 = −1, and λ_5 = 5. In addition, the MLP architecture in our implementation comprises three linear layers with widths of 128, 64, and 5, respectively, interleaved with GELU [21] activation functions. The number of tokens is set to T = 4, with each token having a dimensionality of D = 128.

4.1. Experiment Results

We conducted comprehensive testing of our universal model, SUM, across six different datasets, each benchmarked against state-of-the-art (SOTA) models for comparison. These datasets encompass a range of areas, including natural scenes, user interfaces, and e-commerce. SUM consistently outperformed the best existing models across all datasets. In the 30 metrics presented in Table 2, SUM achieved SOTA results in 27 cases and secured second place in the other three. These results demonstrate that our model is highly effective and versatile across various types of data, setting a new standard for future advancements in the field.

Table 2. Saliency prediction performance across various datasets. * indicates models we trained ourselves for fair comparison because results were not available for the corresponding dataset or the input image size differed. † signifies results taken from the paper by Hosseini et al. [22]; the remaining results are taken from their respective papers. For our model, we note the percentage (%) change in performance relative to the second-best result, or to the best result if ours is not the top performer.

Method | CC↑ | KLD↓ | AUC↑ | SIM↑ | NSS↑ | # Parameters

U-EYE [31] (Web page):
SAM* [11] | 0.580 | 1.490 | 0.811 | 0.520 | 1.640 | 30M
UMSI* [16] | 0.562 | 1.580 | 0.805 | 0.510 | 1.690 | 30M
SAM++* [31] | 0.580 | 1.190 | 0.800 | 0.530 | 1.660 | 42M
Transalnet* [48] | 0.696 | 0.616 | 0.839 | 0.598 | 1.601 | 72M
UMSI++* [31] | 0.670 | 0.860 | 0.830 | 0.580 | 1.610 | 30M
SUM (Ours) | 0.731 (+5.03%) | 0.544 (−11.69%) | 0.846 (+0.83%) | 0.630 (+5.35%) | 1.704 (+0.83%) | 57.5M

SalECI [29] (E-Commercial):
SSM† [11] | 0.720 | 0.599 | 0.830 | 0.611 | 1.396 | 42M
DeepGaze IIE† [44] | 0.560 | 0.995 | 0.842 | 0.399 | 1.327 | 104M
EML-NET† [31] | 0.510 | 1.220 | 0.807 | 0.536 | 1.232 | 47M
Transalnet† [31] | 0.717 | 0.873 | 0.824 | 0.534 | 1.723 | 72M
Temp-Sal† [2] | 0.719 | 0.712 | 0.813 | 0.629 | 1.768 | 242M
SSwin Transformer [29] | 0.687 | 0.652 | 0.868 | 0.606 | 1.701 | 29M
Hosseini et al. [22] | 0.750 | 0.578 | 0.892 | 0.645 | 1.890 | 66M
SUM (Ours) | 0.789 (+5.20%) | 0.473 (−18.17%) | 0.899 (+0.78%) | 0.680 (+5.43%) | 2.012 (+6.46%) | 57.5M

OSIE [64] (Natural scene):
UMSI [16] | 0.746 | 0.513 | 0.856 | 0.631 | 1.788 | 30M
EML-NET [26] | 0.717 | 0.537 | 0.854 | 0.619 | 1.737 | 47M
SAM-ResNet [11] | 0.758 | 0.480 | 0.860 | 0.648 | 1.811 | 43M
Chen et al. [10] | 0.761 | 0.506 | 0.860 | 0.652 | 1.840 | -
Transalnet* [31] | 0.791 | 0.667 | 0.923 | 0.651 | 2.448 | 72M
UniAR [41] | 0.754 | 0.547 | 0.867 | 0.647 | 1.842 | 848M
SUM (Ours) | 0.861 (+8.85%) | 0.340 (−29.17%) | 0.924 (+6.57%) | 0.727 (+11.5%) | 3.416 (+39.54%) | 57.5M

Salicon [30] (Natural scene):
UniAR [41] | 0.901 | 0.215 | 0.870 | 0.792 | 1.947 | 848M
SimpleNet [56] | 0.907 | 0.193 | 0.871 | 0.797 | 1.926 | 116M
MDNSal [56] | 0.899 | 0.217 | 0.868 | 0.797 | 1.893 | -
MSI-Net [36] | 0.899 | 0.307 | 0.865 | 0.784 | 1.931 | 20M
GazeGAN [7] | 0.879 | 0.376 | 0.864 | 0.773 | 1.899 | -
UNISAL [14] | 0.879 | 0.354 | 0.864 | 0.775 | 1.952 | 4M
Transalnet* [48] | 0.89 | 0.220 | 0.867 | 0.783 | 1.924 | 72M
DeepGaze IIE* [44] | 0.872 | 0.285 | 0.869 | 0.733 | 1.996 | 104M
Temp-Sal* [2] | 0.911 | 0.195 | 0.869 | 0.800 | 1.967 | 242M
SUM (Ours) | 0.909 (−0.22%) | 0.192 (−1.54%) | 0.876 (+0.64%) | 0.804 (+0.50%) | 1.981 (−0.75%) | 57.5M

CAT2000 [3] (Natural scene):
FastSal [23] | 0.721 | 0.552 | 0.86 | 0.603 | 1.859 | 4M
SAM-Resnet [11] | 0.87 | 0.670 | 0.878 | 0.739 | 2.411 | 43M
MSI-Net* [36] | 0.866 | 0.428 | 0.881 | 0.730 | 2.355 | 20M
DVA [62] | 0.861 | 0.449 | 0.878 | 0.734 | 2.345 | -
UNISAL [14] | 0.842 | 0.530 | 0.876 | 0.721 | 2.257 | 4M
MDNSal [56] | 0.889 | 0.293 | 0.878 | 0.751 | 2.329 | -
Transalnet* [48] | 0.877 | 0.287 | 0.882 | 0.744 | 2.373 | 72M
SUM (Ours) | 0.882 (−0.79%) | 0.270 (−5.92%) | 0.888 (+0.68%) | 0.754 (+0.4%) | 2.424 (+0.54%) | 57.5M

MIT1003 [33] (Natural scene):
FastSal [23] | 0.590 | 1.036 | 0.875 | 0.478 | 2.008 | 4M
SAM-Resnet [11] | 0.746 | 1.247 | 0.902 | 0.597 | 2.752 | 43M
DVA [62] | 0.699 | 0.753 | 0.897 | 0.566 | 2.574 | -
UNISAL [14] | 0.734 | 1.014 | 0.902 | 0.597 | 2.759 | 4M
Transalnet* [48] | 0.722 | 0.660 | 0.903 | 0.592 | 2.631 | 72M
SUM (Ours) | 0.768 (+2.95%) | 0.563 (−14.7%) | 0.913 (+1.11%) | 0.630 (+5.53%) | 2.839 (+2.9%) | 57.5M
This consistency in performance underscores SUM's robustness and its capability to handle the diverse challenges presented by different datasets. Moreover, compared to counterparts such as Transalnet [48], Temp-Sal [2], DeepGaze IIE [44], and UniAR [41], our model is relatively efficient. This efficiency underscores the advantages of our streamlined approach, which leverages Mamba's capabilities to build a model that is efficient, robust, and universally applicable. Additionally, Figure 2 displays saliency predictions selected from the validation sets, showing that our model's predictions are much closer to the ground truth than those of the SOTA models, further demonstrating that SUM more accurately predicts human attention behavior.

Figure 2. Comparative visualizations of saliency predictions across different data types. The first row depicts Natural Scene-Mouse data, the second row showcases Natural Scene-Eye data, the third row features E-commerce, and the fourth row displays UI. Each row highlights the model's performance in identifying salient features within these distinct categories.

5. Ablation Study

Impact of different loss combinations: We investigated the impact of different loss-metric combinations on model validation performance, as summarized in Table 3. Our approach normalizes each metric using min-max scaling to ensure a balanced evaluation across different metrics. The score function, described in Equation 7, is specifically designed to maximize the beneficial metrics (CC, SIM, NSS) and minimize the detrimental metric (KL).
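The scoring scheme just described can be implemented in a few lines. The sketch below (function names are ours) min-max scales each metric across a set of runs and then combines the scaled values per run:

```python
def min_max(values):
    """Min-max scale a list of metric values across runs."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def f_scores(cc, sim, nss, kl):
    """Per-run score: scaled CC + SIM + NSS minus scaled KL (cf. Eq. 7)."""
    cc_s, sim_s, nss_s, kl_s = (min_max(m) for m in (cc, sim, nss, kl))
    return [c + s + n - k for c, s, n, k in zip(cc_s, sim_s, nss_s, kl_s)]

# Two hypothetical runs: the first wins on CC/SIM, the second on NSS but with worse KL.
scores = f_scores(cc=[0.9, 0.8], sim=[0.8, 0.7], nss=[1.9, 2.0], kl=[0.2, 0.3])
```

Because each metric is rescaled to [0, 1] before summation, no single metric's raw magnitude dominates the comparison.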
The function's configuration is as follows:

F_score = CC_scaled + SIM_scaled + NSS_scaled − KL_scaled    (7)

From the results in Table 3, it is evident that including the KL loss significantly impacts the model's performance, demonstrating its crucial role in defining the saliency loss. When loss functions are used individually, performance varies, with SIM typically yielding higher values for both CC and F_score, indicating its strong standalone impact on model saliency. Excluding MSE, which is less directly related to saliency, still yields high performance, but the highest scores are consistently observed when MSE is included, suggesting its underlying contribution to model robustness and generalization. Integrating all five loss functions yields the highest F_score values. This combination not only balances the enhancement and suppression of features but also stabilizes the training process, as indicated by the highest scores of 2.853 and 2.836 on Salicon and across all datasets, respectively.

Impact of the C-VSS module: We compared the impact of a C-VSS module conditioned on three and on four classes against a standard VSS, which serves as the unconditional setup. The three classes are Natural Scene, UI, and E-Commerce, in contrast to our model's broader categorization into four classes. As shown in Table 4, the C-VSS module significantly enhances performance across all evaluated datasets compared to the standard VSS. Notably, conditioning the model on four classes yields better results than limiting it to three. This suggests that the finer categorization of the four-class setup better aligns with the varied data characteristics, especially across different data-acquisition setups, thereby improving the model's predictive accuracy and robustness.

Impact of different prompt lengths: We explored the influence of prompt length on model performance.
We experimented with various prompt lengths (64, 96, 128, and 256) to determine how they impact model behavior during the training and validation phases. The results, detailed in Table 5, indicate that both shorter and longer prompt lengths contribute to fitting issues; among the tested lengths, 128 demonstrated the most balanced and effective outcome.

Comparison of Prompt vs. one-hot encoding: In our experiments, we compared two approaches: one using generated prompts tailored to specific conditions, and another using a one-hot vector to represent class conditions.

Table 3. Evaluation of different combinations of loss functions on model performance. Checkmarks indicate the active loss terms; the first block reports average performance on Salicon [30], the second the average performance across all datasets.

KL CC SIM NSS MSE | CC↑ KLD↓ NSS↑ SIM↑ FScore↑ (Salicon) | CC↑ KLD↓ NSS↑ SIM↑ FScore↑ (All)
✓ ✗ ✗ ✗ ✗ | 0.910 0.189 1.908 0.805 2.797 | 0.85 0.465 2.498 0.723 2.386
✗ ✓ ✗ ✗ ✗ | 0.907 0.732 1.926 0.787 1.634 | 0.851 1.08 2.532 0.7 1.218
✗ ✗ ✓ ✗ ✗ | 0.911 0.447 1.91 0.807 2.391 | 0.85 0.747 2.469 0.728 1.917
✗ ✗ ✗ ✓ ✗ | 0.834 0.765 2.044 0.721 0 | 0.804 1.072 2.614 0.658 -0.079
✗ ✗ ✗ ✗ ✓ | 0.909 0.234 1.919 0.803 2.696 | 0.846 0.525 2.479 0.719 2.089
✓ ✗ ✓ ✗ ✗ | 0.911 0.196 1.928 0.806 2.833 | 0.852 0.465 2.337 0.728 1.972
✓ ✗ ✗ ✓ ✗ | 0.892 0.199 2.029 0.792 2.537 | 0.841 0.467 2.594 0.712 2.353
✓ ✓ ✗ ✗ ✗ | 0.911 0.185 1.191 0.805 1.977 | 0.852 0.453 2.515 0.720 2.46
✓ ✗ ✗ ✗ ✓ | 0.909 0.192 1.917 0.802 2.755 | 0.851 0.456 2.504 0.723 2.441
✗ ✓ ✓ ✗ ✗ | 0.910 0.531 1.921 0.802 2.188 | 0.85 0.871 2.503 0.721 1.733
✓ ✓ ✓ ✗ ✗ | 0.909 0.198 1.920 0.803 2.759 | 0.852 0.464 2.527 0.726 2.568
✓ ✗ ✓ ✗ ✓ | 0.909 0.192 1.919 0.799 2.722 | 0.852 0.461 2.514 0.726 2.53
✓ ✗ ✗ ✓ ✓ | 0.887 0.208 2.038 0.788 2.421 | 0.830 0.472 2.642 0.711 2.259
✓ ✓ ✗ ✗ ✓ | 0.910 0.188 1.914 0.803 2.783 | 0.851 0.447 2.511 0.722 2.464
✓ ✓ ✓ ✓ ✗ | 0.907 0.198 1.989 0.803 2.815 | 0.850 0.466 2.614 0.725 2.794
✓ ✓ ✓ ✗ ✓ | 0.905 0.208 1.920 0.798 2.632 | 0.852 0.457 2.510 0.720 2.437
✓ ✓ ✓ ✓ ✓ | 0.909 0.192 1.981 0.804 2.853 | 0.852 0.450 2.602 0.726 2.836

Table 4. Mean value and standard deviation of saliency-prediction performance for conditional VSS modules with three and four classes versus standard VSS (no condition) across all datasets.

Method | CC↑ | KLD↓ | AUC↑ | SIM↑ | NSS↑

U-EYE [31] (Web page):
No-condition | 0.725±0.035 | 0.562±0.062 | 0.845±0.012 | 0.626±0.023 | 1.689±0.121
3-class | 0.729±0.035 | 0.551±0.057 | 0.845±0.012 | 0.628±0.012 | 1.699±0.012
4-class | 0.731±0.037 | 0.544±0.057 | 0.846±0.012 | 0.630±0.023 | 1.704±0.125

SalECI [29] (E-Commercial):
No-condition | 0.783±0.046 | 0.502±0.112 | 0.898±0.014 | 0.677±0.039 | 2.017±0.168
3-class | 0.781±0.055 | 0.505±0.131 | 0.896±0.016 | 0.678±0.047 | 1.99±0.181
4-class | 0.789±0.0453 | 0.473±0.088 | 0.899±0.012 | 0.680±0.041 | 2.012±0.161

OSIE [64] (Natural scene):
No-condition | 0.842±0.033 | 0.403±0.05 | 0.918±0.009 | 0.703±0.022 | 3.18±0.32
3-class | 0.845±0.033 | 0.395±0.049 | 0.918±0.009 | 0.706±0.022 | 3.213±0.323
4-class | 0.861±0.029 | 0.340±0.050 | 0.924±0.008 | 0.727±0.02 | 3.416±0.319

Salicon [30] (Natural scene):
No-condition | 0.903±0.012 | 0.206±0.028 | 0.875±0.014 | 0.798±0.013 | 1.979±0.203
3-class | 0.904±0.011 | 0.205±0.027 | 0.875±0.014 | 0.798±0.013 | 1.981±0.204
4-class | 0.909±0.011 | 0.192±0.025 | 0.876±0.014 | 0.804±0.012 | 1.981±0.201

CAT2000 [3] (Natural scene):
No-condition | 0.880±0.014 | 0.272±0.022 | 0.887±0.010 | 0.752±0.010 | 2.42±0.141
3-class | 0.881±0.016 | 0.271±0.023 | 0.888±0.010 | 0.753±0.011 | 2.424±0.142
4-class | 0.882±0.0158 | 0.270±0.026 | 0.888±0.010 | 0.754±0.011 | 2.424±0.142

MIT1003 [33] (Natural scene):
No-condition | 0.737±0.035 | 0.641±0.083 | 0.908±0.010 | 0.596±0.024 | 2.648±0.255
3-class | 0.741±0.034 | 0.636±0.077 | 0.908±0.010 | 0.597±0.023 | 2.678±0.249
4-class | 0.768±0.039 | 0.563±0.075 | 0.913±0.009 | 0.630±0.027 | 2.839±0.285
Our goal was to see how these methods influence the model's ability to handle different types of data. Table 6 illustrates the results of this comparison. Using the prompt-based approach, the model demonstrates higher performance across all metrics. This method helps the model better distinguish between the diverse data distributions in each domain, as opposed to the more straightforward one-hot vector method.

Optimal C-VSS Placement in U-Net: We evaluated the impact of deploying the C-VSS in different sections of our U-Net structure: solely in the bottleneck, across all blocks of the decoder, and in every block of both the encoder and decoder.

Table 5. Impact of prompt length on model performance.

Prompt Length | CC↑ | KL↓ | NSS↑ | SIM↑ | # Parameters

Salicon [30]:
64 | 0.909 | 0.196 | 1.98 | 0.804 | 57.4M
96 | 0.909 | 0.188 | 1.958 | 0.802 | 57.4M
128 | 0.909 | 0.192 | 1.981 | 0.804 | 57.5M
256 | 0.906 | 0.195 | 1.953 | 0.801 | 58M

Average Performance Across Datasets:
64 | 0.849 | 0.463 | 2.601 | 0.725 | 57.4M
96 | 0.847 | 0.455 | 2.567 | 0.722 | 57.4M
128 | 0.852 | 0.450 | 2.602 | 0.726 | 57.5M
256 | 0.850 | 0.456 | 2.558 | 0.723 | 58M

Table 6. Prompt vs. One-Hot Encoding.

Method | CC↑ | KL↓ | NSS↑ | SIM↑ | # Parameters

Avg. Performance on Salicon [30]:
SUM - One-hot | 0.902±0.012 | 0.201±0.024 | 1.97±0.972 | 0.795±0.012 | 57.3M
SUM - Prompt | 0.909±0.011 | 0.192±0.025 | 1.981±0.201 | 0.804±0.012 | 57.5M

Avg. Performance Across Datasets:
SUM - One-hot | 0.843±0.034 | 0.485±0.046 | 2.583±0.222 | 0.716±0.023 | 57.3M
SUM - Prompt | 0.852±0.029 | 0.45±0.053 | 2.602±0.206 | 0.726±0.022 | 57.5M

Table 7. Comparison of C-VSS placement in the proposed U-Net structure.

Configuration | CC↑ | KL↓ | NSS↑ | SIM↑ | # Parameters

Avg. Performance on Salicon [30]:
Bottleneck | 0.909 | 0.195 | 1.97 | 0.804 | 57.37M
Decoder | 0.909 | 0.192 | 1.981 | 0.804 | 57.5M
All-Blocks | 0.907 | 0.198 | 1.975 | 0.801 | 58.5M

Avg. Performance Across Datasets:
Bottleneck | 0.847 | 0.466 | 2.581 | 0.724 | 57.37M
Decoder | 0.852 | 0.450 | 2.602 | 0.726 | 57.5M
All-Blocks | 0.854 | 0.458 | 2.601 | 0.724 | 58.5M
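The three placement options can be viewed as choosing where a condition-dependent block modulates U-Net features. The toy sketch below is not the actual C-VSS implementation — the modulation block, class names, and channel sizes are ours — but it illustrates toggling the conditional block between bottleneck, decoder-only, and all-blocks configurations:

```python
import torch
import torch.nn as nn

class CondBlock(nn.Module):
    """Stand-in for C-VSS: scales/shifts features from a condition vector."""

    def __init__(self, ch, cond_dim=5):
        super().__init__()
        self.to_mod = nn.Linear(cond_dim, 2 * ch)

    def forward(self, x, cond):
        scale, shift = self.to_mod(cond).chunk(2, dim=-1)
        return x * (1 + scale[..., None, None]) + shift[..., None, None]

class TinyUNet(nn.Module):
    """Minimal encoder/decoder with a switchable conditional-block placement."""

    def __init__(self, placement="decoder"):
        super().__init__()
        self.enc = nn.Conv2d(3, 8, 3, padding=1)
        self.dec = nn.Conv2d(8, 1, 3, padding=1)
        self.enc_cond = CondBlock(8) if placement == "all" else None
        self.bott_cond = CondBlock(8) if placement == "bottleneck" else None
        self.dec_cond = CondBlock(8) if placement in ("decoder", "all") else None

    def forward(self, x, cond):
        h = self.enc(x)
        if self.enc_cond is not None:
            h = self.enc_cond(h, cond)   # "all" placement also conditions the encoder
        if self.bott_cond is not None:
            h = self.bott_cond(h, cond)  # "bottleneck" placement
        if self.dec_cond is not None:
            h = self.dec_cond(h, cond)   # decoder-only: the best setting per Table 7
        return self.dec(h)
```

In the real model the pre-trained encoder stays unmodulated in the decoder-only configuration, which is consistent with the observation that conditioning the encoder disturbs its foundational features.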
Our objective was to ascertain the optimal placement of C-VSS for enhancing model performance. As summarized in Table 7, incorporating C-VSS in the encoder, in addition to the decoder, tends to undermine the features in the encoder, leading to suboptimal performance. This observation suggests that integrating C-VSS throughout the entire U-Net may disrupt the model's ability to leverage its foundational pre-trained features effectively. Conversely, limiting C-VSS to the bottleneck provides some benefit but does not fully capitalize on the potential enhancements the module offers. The most effective strategy, as indicated by our results, is employing C-VSS across all decoder blocks. This approach allows the model to better adapt to the unique characteristics of each input domain, resulting in superior performance metrics compared to the other configurations tested.

6. Conclusion

In this paper, we have presented SUM, a novel approach designed to address the limitations of traditional saliency prediction models. By integrating the Mamba architecture with U-Net and enhancing it with a Conditional Visual State Space (C-VSS) block, SUM adapts dynamically to various image types, making it universally applicable across diverse visual contexts. Our extensive evaluations across six benchmark datasets demonstrated SUM's superior performance, consistently outperforming existing models. The model excelled in both location-based and distribution-based metrics, proving its robustness and adaptability for use in real-world problems.

References

[1] Hani Alers, Hantao Liu, Judith Redi, and Ingrid Heynderickx. Studying the effect of optimizing the image quality in saliency regions at the expense of background content. In Image Quality and System Performance VII, pages 59–67. SPIE, 2010. 1, 3

[2] Bahar Aydemir, Ludo Hoffstetter, Tong Zhang, Mathieu Salzmann, and Sabine Süsstrunk.
Tempsal - uncovering temporal information for deep saliency prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6461–6470, 2023. 4, 6, 7

[3] Ali Borji and Laurent Itti. Cat2000: A large scale fixation dataset for boosting saliency research. arXiv preprint arXiv:1505.03581, 2015. 1, 2, 5, 6, 8

[4] Neil Bruce and John Tsotsos. Attention based on information maximization. Journal of Vision, 7(9):950–950, 2007. 1, 3

[5] Zoya Bylinskii et al. What do different evaluation metrics tell us about saliency models? IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(3):740–757, 2018. 5

[6] Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min, Guodong Guo, and Patrick Le Callet. How is gaze influenced by image transformations? dataset and model. IEEE Transactions on Image Processing, 29:2287–2300, 2019. 1

[7] Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min, Guodong Guo, and Patrick Le Callet. Gazegan: A generative adversarial saliency model based on invariance analysis of human gaze during scene free viewing. arXiv preprint arXiv:1905.06803, 2019. 2, 6

[8] Guo Chen, Yifei Huang, Jilan Xu, Baoqi Pei, Zhe Chen, Zhiqi Li, Jiahao Wang, Kunchang Li, Tong Lu, and Limin Wang. Video mamba suite: State space model as a versatile alternative for video understanding. arXiv preprint arXiv:2403.09626, 2024. 3

[9] Jiazhong Chen, Zongyi Li, Yi Jin, Dakai Ren, and Hefei Ling. Video saliency prediction via spatio-temporal reasoning. Neurocomputing, 462:59–68, 2021. 1

[10] Shi Chen, Nachiappan Valliappan, Shaolei Shen, Xinyu Ye, Kai Kohlhoff, and Junfeng He. Learning from unique perspectives: User-aware saliency modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2701–2710, 2023. 6

[11] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. Predicting human eye fixations via an lstm-based saliency attentive model.
IEEE Transactions on Image Processing, 27(10):5142–5154, 2018. 1, 2, 6

[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009. 3

[13] Yasser Abdelaziz Dahou Djilali, Kevin McGuinness, and Noel O'Connor. Learning saliency from fixations. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 383–393, 2024. 1

[14] Richard Droste, Jianbo Jiao, and J Alison Noble. Unified image and video saliency modeling. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16, pages 419–435. Springer, 2020. 1, 3, 4, 6, 7

[15] Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107:3–11, 2018. 4

[16] Camilo Fosco, Vincent Casser, Amish Kumar Bedi, Peter O'Donovan, Aaron Hertzmann, and Zoya Bylinskii. Predicting visual importance across graphic design types. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, pages 249–260, 2020. 6, 7

[17] Stas Goferman, Lihi Zelnik-Manor, and Ayellet Tal. Context-aware saliency detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10):1915–1926, 2011. 2

[18] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 2, 4

[19] Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021. 4

[20] Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, et al. A survey on vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):87–110, 2022.
1, 2

[21] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. 5

[22] Alireza Hosseini, Kiana Hooshanfar, Pouria Omrani, Reza Toosi, Ramin Toosi, Zahra Ebrahimian, and Mohammad Ali Akhaee. Brand visibility in packaging: A deep learning approach for logo detection, saliency-map prediction, and logo placement analysis. arXiv preprint arXiv:2403.02336, 2024. 1, 2, 3, 6, 7

[23] Feiyan Hu and Kevin McGuinness. Fastsal: A computationally efficient network for visual saliency prediction. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 9054–9061. IEEE, 2021. 1, 6, 7

[24] Vincent Tao Hu, Stefan Andreas Baumann, Ming Gui, Olga Grebenkova, Pingchuan Ma, Johannes Fischer, and Bjorn Ommer. Zigma: Zigzag mamba diffusion model. arXiv preprint arXiv:2403.13802, 2024. 2

[25] Xun Huang, Chengyao Shen, Xavier Boix, and Qi Zhao. Salicon: Reducing the semantic gap in saliency prediction by adapting deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 262–270, 2015. 2

[26] Sen Jia and Neil DB Bruce. Eml-net: An expandable multi-layer network for saliency prediction. Image and Vision Computing, 95:103887, 2020. 1, 2, 6, 7

[27] Lai Jiang, Mai Xu, Zhaoting Ye, and Zulin Wang. Image saliency detection with sparse representation of learnt texture atoms. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 54–62, 2015. 2

[28] Lai Jiang, Mai Xu, Zulin Wang, and Leonid Sigal. Deepvs2.0: A saliency-structured deep learning method for predicting dynamic visual attention. International Journal of Computer Vision, 129(1):203–224, 2021. 2

[29] Lai Jiang, Yifei Li, Shengxi Li, Mai Xu, Se Lei, Yichen Guo, and Bo Huang. Does text attract attention on e-commerce images: A novel saliency prediction dataset and method. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2088–2097, 2022.
1, 2, 3, 5, 6, 8

[30] Ming Jiang, Shengsheng Huang, Juanyong Duan, and Qi Zhao. Salicon: Saliency in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1072–1080, 2015. 1, 2, 3, 5, 6, 8, 9

[31] Yue Jiang, Luis A Leiva, Hamed Rezazadegan Tavakoli, Paul RB Houssel, Julia Kylmälä, and Antti Oulasvirta. Ueyes: Understanding visual saliency across user interface types. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–21, 2023. 1, 2, 5, 6, 7, 8

[32] John Jonides, David E Irwin, and Steven Yantis. Integrating visual information from successive fixations. Science, 215(4529):192–194, 1982. 1

[33] Tilke Judd, Krista Ehinger, Frédo Durand, and Antonio Torralba. Learning to predict where humans look. In 2009 IEEE 12th International Conference on Computer Vision, pages 2106–2113. IEEE, 2009. 5, 6, 8

[34] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. 1960. 2

[35] Qiqi Kou, Ruihang Liu, Chen Lv, He Jiang, and Deqiang Cheng. Advertising image saliency prediction method based on score level fusion. IEEE Access, 11:8455–8466, 2023. 2

[36] Alexander Kroner, Mario Senden, Kurt Driessens, and Rainer Goebel. Contextual encoder–decoder network for visual saliency prediction. Neural Networks, 129:261–270, 2020. 1, 6

[37] Srinivas SS Kruthiventi, Kumar Ayush, and R Venkatesh Babu. Deepfix: A fully convolutional neural network for predicting human eye fixations. IEEE Transactions on Image Processing, 26(9):4446–4456, 2017. 2

[38] Matthias Kümmerer, Lucas Theis, and Matthias Bethge. Deep gaze i: Boosting saliency prediction with feature maps trained on imagenet. arXiv preprint arXiv:1411.1045, 2014. 1, 2

[39] Matthias Kümmerer, Thomas SA Wallis, and Matthias Bethge. Deepgaze ii: Reading fixations from deep features trained on object recognition. arXiv preprint arXiv:1610.01563, 2016. 2

[40] Lucie Leveque and Hantao Liu.
An eye-tracking database of video advertising. In 2019 IEEE International Conference on Image Processing (ICIP), pages 425–429. IEEE, 2019. 2

[41] Peizhao Li, Junfeng He, Gang Li, Rachit Bhargava, Shaolei Shen, Nachiappan Valliappan, Youwei Liang, Hongxiang Gu, Venky Ramachandran, Golnaz Farhadi, et al. Uniar: Unifying human attention and response prediction on visual content. arXiv preprint arXiv:2312.10175, 2023. 1, 3, 6, 7

[42] Song Liang, Ruihang Liu, and Jiansheng Qian. Fixation prediction for advertising images: Dataset and benchmark. Journal of Visual Communication and Image Representation, 81:103356, 2021. 2

[43] Weibin Liao, Yinghao Zhu, Xinyuan Wang, Chengwei Pan, Yasha Wang, and Liantao Ma. Lightm-unet: Mamba assists in lightweight unet for medical image segmentation. arXiv preprint arXiv:2403.05246, 2024. 3

[44] Akis Linardos, Matthias Kümmerer, Ori Press, and Matthias Bethge. Deepgaze iie: Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12919–12928, 2021. 6, 7

[45] Hantao Liu and Ingrid Heynderickx. Studying the added value of visual attention in objective image quality metrics based on eye movement data. In 2009 16th IEEE International Conference on Image Processing (ICIP), pages 3097–3100. IEEE, 2009. 1, 3

[46] Nian Liu and Junwei Han. A deep spatial contextual long-term recurrent convolutional network for saliency detection. IEEE Transactions on Image Processing, 27(7):3264–3274, 2018. 1

[47] Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, Yaowei Wang, Qixiang Ye, and Yunfan Liu. Vmamba: Visual state space model. arXiv preprint arXiv:2401.10166, 2024. 2, 3, 4

[48] Jianxun Lou, Hanhe Lin, David Marshall, Dietmar Saupe, and Hantao Liu. Transalnet: Towards perceptually relevant visual saliency prediction. Neurocomputing, 494:455–467, 2022. 1, 2, 4, 6, 7

[49] Jun Ma, Feifei Li, and Bo Wang.
U-mamba: Enhancing long-range dependency for biomedical image segmentation. arXiv preprint arXiv:2401.04722, 2024. 2

[50] Dipti Mishra, Satish Kumar Singh, Rajat Kumar Singh, and Divanshu Kedia. Multi-scale network (mssg-cnn) for joint image and saliency map learning-based compression. Neurocomputing, 460:95–105, 2021. 1

[51] Junting Pan, Cristian Canton Ferrer, Kevin McGuinness, Noel E O'Connor, Jordi Torres, Elisa Sayrol, and Xavier Giro-i Nieto. Salgan: Visual saliency prediction with generative adversarial networks. arXiv preprint arXiv:1701.01081, 2017. 2

[52] Yash Patel, Srikar Appalaraju, and R Manmatha. Saliency driven perceptual image compression. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 227–236, 2021. 1

[53] Xiaohuan Pei, Tao Huang, and Chang Xu. Efficientvmamba: Atrous selective scan for light weight visual mamba. arXiv preprint arXiv:2403.09977, 2024. 2

[54] Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, and Sebastian Jaszczur. Moe-mamba: Efficient selective state space models with mixture of experts. arXiv preprint arXiv:2401.04081, 2024. 2

[55] Umesh Rajashekar, Ian Van Der Linde, Alan C Bovik, and Lawrence K Cormack. Gaffe: A gaze-attentive fixation finding engine. IEEE Transactions on Image Processing, 17(4):564–573, 2008. 2

[56] Navyasri Reddy, Samyak Jain, Pradeep Yarlagadda, and Vineet Gandhi. Tidying deep saliency prediction architectures. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10241–10247. IEEE, 2020. 6, 7

[57] Jiacheng Ruan and Suncheng Xiang. Vm-unet: Vision mamba unet for medical image segmentation. arXiv preprint arXiv:2402.02491, 2024. 2

[58] Chengyao Shen and Qi Zhao. Webpage saliency. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part VII 13, pages 33–46. Springer, 2014.
2, 1, 3

[59] Hamed R Tavakoli, Fawad Ahmed, Ali Borji, and Jorma Laaksonen. Saliency revisited: Analysis of mouse movements versus fixations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1774–1782, 2017. 4

[60] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. 2

[61] Eleonora Vig, Michael Dorr, and David Cox. Large-scale optimization of hierarchical features for saliency prediction in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2798–2805, 2014. 2

[62] Wenguan Wang and Jianbing Shen. Deep visual attention prediction. IEEE Transactions on Image Processing, 27(5):2368–2378, 2017. 1, 6

[63] Zhaohu Xing, Tian Ye, Yijun Yang, Guang Liu, and Lei Zhu. Segmamba: Long-range sequential modeling mamba for 3d medical image segmentation. arXiv preprint arXiv:2401.13560, 2024. 2

[64] Juan Xu, Ming Jiang, Shuo Wang, Mohan S Kankanhalli, and Qi Zhao. Predicting human gaze beyond pixels. Journal of Vision, 14(1):28–28, 2014. 5, 6, 8

[65] Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, and Elliot J Crowley. Plainmamba: Improving non-hierarchical mamba in visual recognition. arXiv preprint arXiv:2403.17695, 2024. 2

[66] Yijun Yang, Zhaohu Xing, and Lei Zhu. Vivim: a video vision mamba for medical video object segmentation. arXiv preprint arXiv:2401.14168, 2024. 3

[67] Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417, 2024. 2, 4

SUM: Saliency Unification through Mamba for Visual Attention Modeling
Supplementary Material

A. Experimental Results

A.1.
Impact of different loss combinations

To provide additional detail on the coefficients used for each loss combination, we conducted several experiments to determine the optimal coefficients; the best coefficients for each combination are reported in Table 8.

A.2. More visualization results

We include additional visualizations of SUM's predictions in Figure 3. Compared to the ground truths, SUM consistently delivers accurate predictions across various image types and datasets, underscoring its robustness and versatility in visual saliency modeling. Moreover, to further validate the robustness of our proposed method, we conducted comparative analyses on publicly available datasets that had not been seen during training, as detailed in Table 9. The performance, depicted in Figure 4, remains notably consistent on these new and previously unseen datasets. This suggests that SUM adeptly identifies and highlights the salient features in images, maintaining close alignment with the ground-truth data; SUM can therefore be reliably utilized in diverse real-world applications where accuracy in visual recognition is critical.

Table 8. Loss weighting coefficients λ_i (i = 1, ..., 5) as used in Table 3.

KL | CC | SIM | NSS | MSE
1 | 0 | 0 | 0 | 0
0 | -1 | 0 | 0 | 0
0 | 0 | -1 | 0 | 0
0 | 0 | 0 | -1 | 0
0 | 0 | 0 | 0 | 1
10 | 0 | -3 | 0 | 0
10 | 0 | 0 | -3 | 0
10 | -3 | 0 | 0 | 0
10 | 0 | 0 | 0 | 5
0 | -2 | 0 | -1 | 0
10 | -2 | -1 | 0 | 0
10 | 0 | -3 | 0 | 5
10 | 0 | 0 | -3 | 5
10 | -3 | 0 | 0 | 5
10 | -2 | -1 | -1 | 0
10 | -2 | -1 | 0 | 5
10 | -2 | -1 | -1 | 5

Table 9. Details of unseen datasets used for quantitative analysis of SUM in Figure 4.

Dataset | Image domain | # Images | Image Resolution
Toronto [4] | Natural scene | 120 | 681×511
TUD Image Quality Database 1 [45] | Natural scene | 29 | 768×512
TUD Image Quality Database 2 [1] | Natural scene | 160 | 600×600
FIWI [58] | Web page | 149 | 1360×768

Figure 3. Visualizations of SUM's predictions across different datasets.
The first and second rows depict Natural Scene-Mouse data, while the third and fourth rows showcase Natural Scene-Eye data. The fifth and sixth rows present E-commerce data, and the seventh and eighth rows display UI data.

Figure 4. Visualizations of SUM's predictions across different datasets. The first and second rows showcase the Toronto dataset [4], while the third and fourth rows present the FIWI dataset [58]. The fifth and sixth rows display data from the TUD Image Quality Database 1 [45], and the seventh and eighth rows exhibit data from the TUD Image Quality Database 2 [1]. | 6 | 1 | The SUM model utilizes a U-Net architecture integrated with Mamba, which is known for efficiency due to its linear complexity. While the total parameter count isn't specified, similar models in this domain typically range from 30M to 800M parameters. Given the complexity, a reasonable estimate for training time is 6 hours, based on the architecture and the extensive use of 6 datasets (totalling around 22,000 images) with a batch size of 16, processed over 15 epochs. The A40 GPU used has 48 GB of memory, which efficiently accommodates the dataset and model size, suggesting that single-GPU training is sufficient. The training is expected to fit within a reasonable time frame for state-of-the-art models given local data and pre-training on ImageNet, which often speeds up the learning process in similar CNN-based architectures. | yes | Yes | CV | SUM: Saliency Unification through Mamba for Visual Attention Modeling | 2024-06-25 0:00:00 | https://github.com/Arhosseini77/SUM | 1 | https://drive.usercontent.google.com/download?id=1Mdk97UB0phYDZv8zgjBayeC1I1_QcUmh&export=download&authuser=0 | 4 min * 30 epoch = 2 hr | https://colab.research.google.com/drive/1jdVKL-KYdo1CgCdOCzzKMFSmDBqrc8RX?usp=sharing | Yes | -- Don't run requirements.txt, as it will produce a dependency error.
I have included the pip install command. For running on CAT2000, a few small changes are needed, which I have included in the Colab file. A small matplotlib setting is also needed to run on Colab. |
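The loss combinations in Table 8 above weight KL, CC, SIM, NSS, and MSE terms with coefficients λi. A minimal sketch of such a weighted combination follows, assuming `pred` and `gt` are continuous saliency maps; the function name and the top-decile thresholding heuristic used to approximate fixation locations for NSS are assumptions for illustration, not the repository's implementation. Negative coefficients turn similarity metrics (CC, SIM, NSS) into rewards to be maximized.

```python
import torch
import torch.nn.functional as F

def combined_saliency_loss(pred, gt, w_kl=10.0, w_cc=-2.0,
                           w_sim=-1.0, w_nss=-1.0, w_mse=5.0):
    """Weighted sum of common saliency losses (coefficients as in Table 8).

    pred, gt: (B, H, W) tensors of non-negative saliency values.
    """
    eps = 1e-8
    b = pred.size(0)
    p = pred.reshape(b, -1)
    g = gt.reshape(b, -1)
    # Normalize to probability distributions for KL divergence and SIM.
    p_dist = p / (p.sum(dim=1, keepdim=True) + eps)
    g_dist = g / (g.sum(dim=1, keepdim=True) + eps)
    kl = (g_dist * torch.log(g_dist / (p_dist + eps) + eps)).sum(dim=1).mean()
    # Pearson correlation coefficient (CC) between the two maps.
    pc = p - p.mean(dim=1, keepdim=True)
    gc = g - g.mean(dim=1, keepdim=True)
    cc = ((pc * gc).sum(dim=1) / (pc.norm(dim=1) * gc.norm(dim=1) + eps)).mean()
    # Histogram intersection (SIM) of the normalized maps.
    sim = torch.minimum(p_dist, g_dist).sum(dim=1).mean()
    # NSS: mean standardized prediction at (approximate) fixation locations.
    p_norm = (p - p.mean(dim=1, keepdim=True)) / (p.std(dim=1, keepdim=True) + eps)
    fix = (g > g.quantile(0.9, dim=1, keepdim=True)).float()  # heuristic fixations
    nss = ((p_norm * fix).sum(dim=1) / (fix.sum(dim=1) + eps)).mean()
    mse = F.mse_loss(p, g)
    return w_kl * kl + w_cc * cc + w_sim * sim + w_nss * nss + w_mse * mse
```

The default coefficients correspond to one row of Table 8 (10, -2, -1, -1, 5); any other row can be selected by passing the matching weights.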
SumMe | CSTA | [] | CSTA: CNN-based Spatiotemporal Attention for Video Summarization | 2024-05-20T00:00:00 | https://arxiv.org/abs/2405.11905v2 | [
"https://github.com/thswodnjs3/CSTA"
] | {"Kendall's Tau": '0.246', "Spearman's Rho": '0.274'} | [
"F1-score (Canonical)",
"F1-score (Augmented)",
"Kendall's Tau",
"Spearman's Rho"
] | Given the following paper and codebase:
Paper: CSTA: CNN-based Spatiotemporal Attention for Video Summarization
Codebase: https://github.com/thswodnjs3/CSTA
Improve the CSTA model on the SumMe dataset. The result
should improve on the following metrics: {"Kendall's Tau": '0.246', "Spearman's Rho": '0.274'}. You must use only the codebase provided.
| CSTA: CNN-based Spatiotemporal Attention for Video Summarization Jaewon Son, Jaehun Park, Kwangsu Kim* Sungkyunkwan University {31z522x4,pk9403,kim.kwangsu}@skku.edu

Abstract

Video summarization aims to generate a concise representation of a video, capturing its essential content and key moments while reducing its overall length. Although several methods employ attention mechanisms to handle long-term dependencies, they often fail to capture the visual significance inherent in frames. To address this limitation, we propose a CNN-based SpatioTemporal Attention (CSTA) method that stacks each feature of frames from a single video to form image-like frame representations and applies 2D CNN to these frame features. Our methodology relies on CNN to comprehend the inter- and intra-frame relations and to find crucial attributes in videos by exploiting its ability to learn absolute positions within images. In contrast to previous work compromising efficiency by designing additional modules to focus on spatial importance, CSTA requires minimal computational overhead as it uses CNN as a sliding window. Extensive experiments on two benchmark datasets (SumMe and TVSum) demonstrate that our proposed approach achieves state-of-the-art performance with fewer MACs compared to previous methods. Codes are available at https://github.com/thswodnjs3/CSTA.

1. Introduction

The rise of social media platforms has resulted in a tremendous surge in daily video data production. Due to the high volume, diversity, or redundancy, it is time-consuming and equally difficult to retrieve the desired content or edit multiple videos. Video summarization is a powerful time-saving technique to condense long videos by retaining the most relevant information, making it easier for users to quickly grasp the main points of the video without having to watch the entire footage.
One of the challenges that occurs during video summarization is the long-term dependency problem, where the initial information is often lost due to large data intervals [18, 26, 43, 44].

*Corresponding author

Figure 1. Approaches for calculating attention: (a) temporal attention, (b) spatial attention, (c) CNN attention. Each row is the feature vector of a frame. T is the number of frames, and D is the dimension of the feature.

The decay of initial data prevents deep learning models from capturing the relation between frames essential for determining key moments in videos. Attention [38], in which entire frames are reflected through pairwise operations, has gained popularity as a widely adopted technique for solving this problem [1, 7, 15, 17, 46]. Attention-based models distinguish important parts from unimportant ones by determining the mutual reliance between frames. However, attention cannot consider spatial contexts within images [15, 27, 39, 43, 48]. For instance, current attention calculates temporal attention based on correlations of visual attributes from other frames (see Figure 1a), but the importance of visual elements within the frame remains unequal to the temporal significance. Including spatial dependency leads to different weighted values of features, causing changes in temporal importance. Therefore, attention can be calculated more precisely by including visual associations, as shown in Figure 1b. Prior studies mixed spatial importance and performed better than solely relying on sequential connections [15, 27, 39, 43, 48]. Nevertheless, acquiring spatial and temporal importance requires the design of additional modules and, thus, incurs excessive costs. Some studies used additional structures to embrace visual relativities in individual frames, such as self-attention [15, 39], multi-head attention [43], and graph convolutional neural networks [48].
Processing too many frames of lengthy videos to capture the temporal and visual importance can be expensive. Thus, obtaining both inter- and intra-frame relationships with few computation resources becomes a non-trivial problem.

arXiv:2405.11905v2 [cs.CV] 21 May 2024

Figure 2. Workflow of CSTA

This paper introduces CNN-based SpatioTemporal Attention (CSTA) to simultaneously capture the visual and ordering reliance in video frames, as shown in Figure 2. CSTA works as follows: Firstly, it extracts features of frames from a video and then concatenates them. Secondly, it treats the assembled frame representations as an image and applies a 2D convolutional neural network (CNN) model to them to produce attention maps. Finally, it combines the attention maps with frame features to predict the importance scores of frames. CSTA derives spatial and temporal relationships in the same manner as a CNN derives patterns from images, as shown in Figure 1c. Further, it searches for vital components in frame representations with the capacity of a CNN to infer absolute positions from images [16, 21]. Unlike previous methods, CSTA is efficient as a one-way spatiotemporal processing algorithm because it uses a CNN as a sliding window. We test the efficacy of CSTA on two benchmark datasets, SumMe [12] and TVSum [34]. Our experiment validates that a CNN produces attention maps from frame features. Further, CSTA needs fewer multiply-accumulate operations (MACs) than previous methods for considering the visual and sequential dependency. Our contributions are summarized below:

• To the best of our knowledge, the proposed model appears to be the first to apply 2D CNN to frame representations in video summarization.
• The CSTA design reflects spatial and temporal associations in videos without requiring considerable computational resources.
• CSTA demonstrates state-of-the-art performance based on the overall results on two benchmark datasets, SumMe and TVSum.

2. Related Work

2.1.
Attention-based Video Summarization

Many video summarization models use attention to deduce the correct relations between frames and find crucial frames in videos. A-AVS and M-AVS [17] are encoder-decoder structures in which attention is used to find essential frames. VASNet [7] is based on plain self-attention for better efficiency than encoder-decoder-based ones. SUM-GDA [26] also employs attention for efficiency and supplements diversity into the attention mechanism for generated summaries. CA-SUM [2] further enhances SUM-GDA by introducing uniqueness into the attention algorithm in unsupervised ways. Attention in DSNet [47] helps predict scores and precisely localize shots in videos. PGL-SUM [1] has a mechanism to alleviate long-term dependency problems by discovering local and global relationships, applying multi-head attention to segments and the entire video. GL-RPE [20] approaches the problem similarly in unsupervised ways via local and global sampling in addition to relative position and attention. VJMHT [24] uses transformers and improves summarization by learning similarities between analogous videos. CLIP-It [29] also relies on transformers to predict scores by cross-attention between frames and captions of the video. Attention helps models recognize the relations between frames; however, it does not focus on visual relations.

Visual relevance is vital to understanding video content as it influences the expression of temporal dependency. Some studies have proposed additional networks to find frame-wise visual relationships [15, 27, 39, 43]. The models process the temporal dependency and exploit self-attention or multi-head attention for visual relations of every frame. RR-STG [48] uses graph CNNs to draw spatial associations using graphs. RR-STG creates graphs based on elements from object detection models [32] to capture the spatial relevance.
These methods offer increased performance but incur a high computational cost owing to the separate module handling many frames. This paper adopts CNN as a one-way mechanism for more efficient reflection of the spatiotemporal importance of multiple frames in long videos.

2.2. CNN for Efficiency and Absolute Positions

CNN is usually employed to resolve computation problems in attention. CvT [40] uses CNN for token embedding and projection in vision transformers (ViT) [6] and requires few FLOPs. CeiT [41] uses both CNN and transformers and shows better results with fewer parameters and FLOPs. CMT [11] applies depth-wise convolutional operations to obtain a better trade-off between accuracy and efficiency for ViT. We exploit CNN to enhance the efficiency of dealing with multiple frames in video summarization.

CNN can be used for attention by learning absolute positions from images. Islam et al. [16] proved that features extracted using a CNN contain position signals. They attributed this to padding, and Kayhan and van Gemert [21] verified the same under various paddings. CPVT [4] uses this ability to reflect the position information of tokens and to tackle problems in previous positional encodings for ViT. Based on this behavior of CNNs, our proposed method is designed to seek only the necessary elements for video summarization from frame representations by considering frame features as images.

3. Method

3.1. Overview

This study approaches video summarization as a subset selection problem. We show the proposed CSTA framework in Figure 3. During the Embedding Process, the model converts the frames into feature representations. The Prediction Process involves using these representations to predict importance scores. In the Prediction Process, the Attention Module generates attention for videos, and the Mixing Module fuses this attention with input frame features.
Finally, CSTA predicts every frame's importance score, representing the probability of whether the frame should be included in the summary video. The model is trained by comparing estimated scores and human-annotated scores. During inference, it selects frames based on the knapsack algorithm and creates summary videos using them.

3.2. Embedding Process

CSTA converts frames to features for input into the model, as depicted in the Embedding Process (Figure 3). Let the frames be X = {x_t}_{t=1}^{T} when there are T frames in a video, with H as the height and W as the width. Following [7, 9, 25, 39, 42, 47] for a fair comparison, the frozen pre-trained CNN model (GoogleNet [35]) maps X ∈ R^{T×3×H×W} into X' ∈ R^{T×D}, where D is the dimension of frame features.

To fully utilize the CNN, we replicate the frame representations to match the number of channels (i.e., three). A CNN is usually trained on RGB images [14, 33, 35, 36]; therefore, pre-trained models are well optimized for images with three channels. Additionally, we concatenate the classification token (CLS token) [6, 15] with the frame features:

X'' = Concat_{axis=0}(X', X', X')   (1)
E = Concat_{axis=1}(X_CLS, X'')   (2)

where X'' ∈ R^{3×T×D} and X_CLS ∈ R^{3×1×D} are the appended feature and the CLS token, respectively, and E ∈ R^{3×(T+1)×D} is the embedded feature. Concat_{axis=0} and Concat_{axis=1} concatenate features along the channel axis and the T axis, respectively. Motivated by STVT [15], we prepend the CLS token to the input frame features. The CLS token consists of learnable parameters that are fed into the model with the inputs and trained jointly with the model. STVT obtains correlations of frames using the CLS token and aggregates the CLS token with input frames to capture global contexts. We follow the same method in prepending and combining the CLS token with frame features. The fusing process is completed later in the Mixing Module.

3.3. Prediction Process

CSTA calculates importance scores for T frames, as shown in the Prediction Process (Figure 3).
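The channel replication and CLS-token concatenation of the Embedding Process (Eqs. 1-2) can be sketched as follows; `embed` is a hypothetical helper name, and the 1024-dimensional feature size assumes GoogleNet pool5 features as used for SumMe/TVSum.

```python
import torch

def embed(frame_feats: torch.Tensor, cls_token: torch.Tensor) -> torch.Tensor:
    """Sketch of Eqs. (1)-(2): replicate X' (T, D) across three channels to
    form an image-like tensor, then prepend the learnable CLS token along
    the time axis, giving E of shape (3, T+1, D)."""
    x_pp = torch.stack([frame_feats] * 3, dim=0)   # Eq. (1): X'' in R^{3xTxD}
    return torch.cat([cls_token, x_pp], dim=1)     # Eq. (2): E in R^{3x(T+1)xD}

T, D = 10, 1024                                    # D=1024 assumes GoogleNet features
x_prime = torch.randn(T, D)                        # X' from the frozen extractor
x_cls = torch.nn.Parameter(torch.zeros(3, 1, D))   # learnable X_CLS
E = embed(x_prime, x_cls)
assert E.shape == (3, T + 1, D)
```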
The classifier assigns scores to frames after the Attention Module and Mixing Module. The Attention Module makes attention maps from E, and the Mixing Module aggregates this attention with E. A detailed explanation is given in Algorithm 1.

We generate the key and value from E using two linear layers, following the original attention [38]. The matrices W_K and W_V ∈ R^{D×D} are the weights of the linear layers projecting E into the key and value (Lines 2-3). Unlike for E_K, CSTA uses a single channel of the frame features in E to produce features by value embedding (Line 3), because we only need one X'; the duplicates are simply used for reproducing image-like features. We select the first index as a representative, which is E[0] ∈ R^{(T+1)×D}.

Figure 3. Architecture of CSTA

Algorithm 1: Prediction Process
input: E ∈ R^{3×(T+1)×D}
output: S ∈ R^{T}
 1  begin
 2      E_K = W_K E
 3      E_V = W_V E[0]
 4
 5      P = AttentionModule(E_K)          // Section 3.4
 6      P_pos = P + PositionalEncoding
 7      M = MixingModule(P_pos, E_V)      // Section 3.5
 8      S = Classifier(M)
 9      return S
10  end

The Attention Module processes spatiotemporal characteristics and focuses on critical attributes in E_K (Line 5). We add positional encodings to P to further strengthen the absolute position awareness (Line 6). Unlike the prevalent way of adding positional encoding to the inputs [6, 38], this study adds positional encoding to the attention maps, based on [1]. This is because adding positional encodings to input features distorts the images, so models can recognize this distortion as different images. Moreover, models cannot fully recognize these absolute position encodings in images during training owing to a lack of data. Therefore, CSTA makes P_pos by attaching positional encodings to the attentive features P. The Mixing Module takes P_pos and E_V as inputs and produces mixed features M ∈ R^{(T+1)×D} (Line 7). The classifier predicts the importance score vector S ∈ R^{T} from M (Line 8).

3.4.
Attention Module

The Attention Module (Figure 3) produces attention maps by utilizing a trainable CNN (GoogleNet [35]) that handles the frame features E_K. The CNN captures the spatiotemporal dependency using kernels, similar to how a CNN learns from images, as shown in Figure 1c. The CNN also searches for essential elements in E_K for summarization, with its ability to learn absolute positions. Based on [4, 16, 21], the CNN imbues representations with positional information, so CSTA can encode the locations of significant attributes in the frame features for summarization.

We make the shape of the attention maps the same as that of the input features so that the attention maps can be aggregated with the input features. This study leverages two strategies for equal scale: deploying an adaptive pooling operation, and using the same CNN model (GoogleNet [35]) in the Embedding Process and Attention Module. Pooling layers reduce the scale of features in the CNN; therefore, the size of the output of the CNN changes from E_K ∈ R^{3×(T+1)×D} to E_K_CNN ∈ R^{D×((T+1)/r)×(D/r)}, where r is the reduction ratio. To handle diverse lengths of frame representations, we exploit adaptive pooling layers that adjust the shape of features by bilinear interpolation. Furthermore, the number of output channels of the learnable CNN equals the dimension of the frame features from the frozen CNN, because the two CNN models are the same. The output of the adaptive pooling is E_K_pool ∈ R^{D×(T+1)×1}. As suggested in [14], this study uses a skip connection:

P = LayerNorm(E_K_pool + E_K[0])   (3)

where the output is P ∈ R^{D×(T+1)}, followed by layer normalization [3]. The skip connection supports more precise attention and stable training in CSTA. As with E_V, explained in Algorithm 1 (Line 3), we only use the single frame feature of E_K and ignore the replications of frame features.

The size of P is equal to the size of the frame features, (T+1)×D; therefore, each value of P carries the spatiotemporal importance of the frame features.
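A simplified sketch of the Attention Module under stated assumptions: a small two-layer convolutional stack stands in for the trainable GoogleNet, and adaptive pooling maps straight back to the (T+1, D) shape rather than through the D-channel, reduced-resolution output described above. `AttentionModule` is a hypothetical name, not the repository's class.

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Sketch of the Attention Module: a trainable 2D CNN slides over the
    image-like feature E^K (3, T+1, D), adaptive pooling restores the
    (T+1, D) shape, and a skip connection with E^K[0] follows (Eq. 3)."""

    def __init__(self, dim: int):
        super().__init__()
        self.cnn = nn.Sequential(                   # stand-in for GoogleNet
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, e_k: torch.Tensor) -> torch.Tensor:
        t1, d = e_k.shape[1], e_k.shape[2]
        maps = self.cnn(e_k.unsqueeze(0))                        # (1, 1, T+1, D)
        maps = nn.functional.adaptive_avg_pool2d(maps, (t1, d))  # keep (T+1, D)
        # Skip connection with the single (non-replicated) channel, Eq. (3).
        return self.norm(maps.squeeze(0).squeeze(0) + e_k[0])   # (T+1, D)
```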
By combining P with the frame features, CSTA reflects the sequential and visual significance of frames. After adding the positional encodings, P_pos is used as the input to the Mixing Module.

3.5. Mixing Module

In the Mixing Module (Figure 3), we apply softmax along the time and dimension axes to compute the temporal and visual weighted values of P_pos:

Att_T: σ(d_i) = e^{d_i} / Σ_{j=1}^{T+1} e^{d_j}   (4)
Att_D: σ(d_i) = e^{d_i} / Σ_{k=1}^{D} e^{d_k}   (5)

where Att_T is the temporal importance and Att_D is the visual importance. Equation (4) calculates the weighted values across the T+1 frames, including the CLS token, within the same dimension. Equation (5) computes the weighted values across the different dimensions within the same frame. Att_D represents the spatial importance because each value of the feature dimension encodes visual characteristics via the CNN, which processes image patterns and produces informative vectors.

After acquiring the weighted values, dropout is applied to them before integrating them with E_V. Dropout erases parts of features by setting them to 0 for better generalization; it also works for attention, as shown in [1, 38]. If dropout were applied to the inputs, as in the original attention [38], the CNN could not learn contexts from the 0 values, unlike self-attention, because dropout spoils the local contexts of the deleted parts. Therefore, we follow [1] in applying dropout to the output of the softmax operations for generalization.

After dropout, CSTA combines the spatial and temporal importance with the frame features:

M = Att_T ⊙ E_V + Att_D ⊙ E_V   (6)

where ⊙ is element-wise multiplication and M ∈ R^{(T+1)×D} is the mixed representation. CSTA reflects the weighted values in the frame features by blending Att_T and Att_D with E_V via element-wise multiplication. Combining the visual and sequential attention values by addition encompasses the spatiotemporal importance at the same time.
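Eqs. (4)-(6) translate directly into PyTorch; `mixing_module` is a hypothetical helper name, and the subsequent CLS-fusing adaptive pooling is omitted here.

```python
import torch
import torch.nn.functional as F

def mixing_module(p_pos: torch.Tensor, e_v: torch.Tensor,
                  drop: float = 0.5, training: bool = True) -> torch.Tensor:
    """Sketch of the Mixing Module (Eqs. 4-6).

    p_pos, e_v: (T+1, D) tensors (attention maps with positional encoding,
    and the value embedding E^V). Softmax along dim 0 yields the temporal
    weights Att_T; softmax along dim 1 yields the visual weights Att_D.
    Dropout is applied to the weights, not the inputs, following the paper.
    """
    att_t = F.softmax(p_pos, dim=0)                 # Eq. (4): across T+1 frames
    att_d = F.softmax(p_pos, dim=1)                 # Eq. (5): across D features
    att_t = F.dropout(att_t, p=drop, training=training)
    att_d = F.dropout(att_d, p=drop, training=training)
    return att_t * e_v + att_d * e_v                # Eq. (6)
```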
Subsequently, to integrate the CLS token with the frame features, adaptive pooling transforms M ∈ R^{(T+1)×D} into M' ∈ R^{T×D} by averaging. Unlike STVT [15], in which linear layers are used to merge the CLS token with a constant number of frames, CSTA uses adaptive pooling to cope with various lengths of videos. Adaptive pooling fuses the CLS token with a few frames; however, it strengthens our model owing to the generalization of the classifier, which consists of fully connected layers. M' from adaptive pooling enters the classifier, which computes the importance scores of the frames.

3.6. Classifier

Based on the output of the adaptive pooling, the classifier exports the importance scores. We follow [1, 7, 13] in constructing the classifier as follows:

R = LayerNorm(Dropout(ReLU(FC(M'))))   (7)
S = Sigmoid(FC(R))   (8)

where R ∈ R^{T×D} is derived after M' passes through a fully connected layer, ReLU, dropout, and layer normalization. Another fully connected layer maps the representation of each frame to a single value, and the sigmoid computes the scores S ∈ R^{T}. We train CSTA by comparing predicted and ground truth scores. For the loss function, we use the mean squared error:

Loss = (1/T) Σ (S_p − S_g)^2   (9)

where S_p is the predicted score and S_g is the ground truth score.

CSTA creates summary videos based on shots derived by KTS [31]. It computes the average importance score of the shots into which KTS splits the videos [42]. The summary videos consist of shots subject to two constraints:

max Σ_i S_i   (10)
Σ_i Length_i ≤ 15%   (11)

where i indexes the selected shots, S_i ∈ [0, 1] is the importance score of the i-th shot, and Length_i is the percentage of the length of the i-th shot in the original video. Our model picks shots with high scores by exploiting the 0/1 knapsack algorithm, as in [34]. Following [12], summary videos have a length limit of 15% of the original videos.

4. Experiments

4.1. Settings

Evaluation Methods.
We evaluate CSTA using Kendall's τ [22] and Spearman's ρ [49] coefficients. Both are rank-based correlation coefficients used to measure the similarity between model-estimated and ground truth scores. The F1 score is the most commonly used metric in video summarization; however, it has a significant drawback when used to evaluate summary videos. Based on [30, 37], owing to the limit on summary length, the F1 score comes out higher if models choose as many short shots as possible and ignore long key shots. This implies that the F1 score might not represent the true performance in video summarization. A detailed explanation of how the correlations are measured is provided in Appendix A.1.

Datasets. This study utilizes two standard video summarization datasets, SumMe [12] and TVSum [34]. SumMe consists of videos with different contents (e.g., holidays, events, sports) and various types of camera angles (e.g., static, egocentric, or moving cameras). The videos are raw or edited public ones with lengths of 1-6 minutes. At least 15 people create ground truth summary videos for all data, and the models predict the average number of selections by people for every frame. TVSum comprises 50 videos from 10 genres (e.g., documentaries, news, vlogs). The videos are 2-10 minutes long, and 20 people annotated the ground truth for each video.

Figure 4. Comparison of summarizing performance between CNN models (MobileNet, EfficientNet, GoogleNet, ResNet) and video summarization models (dppLSTM, HSA-RNN, VJMHT, DSNet-AB, DSNet-AF, VASNet) on SumMe and TVSum. The x-axis shows performance (Kendall and Spearman), and the y-axis shows model names. Based on the dashed line, the performance of the CNN models is displayed above, and the video summarization models are below.
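The rank correlations used for evaluation can be computed with `scipy.stats`; a minimal sketch, not the provided codebase's evaluation script (the exact protocol, e.g. per-annotator averaging, is detailed in the paper's Appendix A.1):

```python
import numpy as np
from scipy import stats

def rank_correlations(pred_scores, gt_scores):
    """Kendall's tau and Spearman's rho between predicted and ground-truth
    frame scores, as used for SumMe/TVSum evaluation (higher is better)."""
    tau, _ = stats.kendalltau(pred_scores, gt_scores)
    rho, _ = stats.spearmanr(pred_scores, gt_scores)
    return tau, rho

# Identically ordered score vectors give perfect rank agreement.
pred = np.array([0.9, 0.1, 0.4, 0.8, 0.2])
gt = np.array([0.95, 0.05, 0.5, 0.85, 0.15])
tau, rho = rank_correlations(pred, gt)
assert np.isclose(tau, 1.0) and np.isclose(rho, 1.0)
```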
The ground truth is a shot-level importance score ranging from 1 to 5, and models try to estimate the average shot-level scores. Implementation details are explained in Appendix A.2.

4.2. Verification of Attention Maps being Created using CNN

Previous studies on video summarization have yet to apply 2D CNN directly to frame features. Therefore, we verify that a CNN can create attention maps from frame features. We choose MobileNet-V2 [33], EfficientNet-B0 [36], GoogleNet [35], and ResNet-18 [14] as the CNN models, since we focus on limited computation costs. This study applies the CNN models to frame features and trains them to compute the frame-level importance scores without the classifier. The CNN directly exports T scores by feeding its output features into an adaptive pooling layer with target shape T×1. As the importance score of each frame is between 0 and 1, each score is similar to the weighted value of each frame. Thus, we can test whether a CNN generates attention maps based on the video summarization performance. Surprisingly, the CNN models predict the importance scores much better than the previous video summarization models on SumMe, as shown in Figure 4. Even though the CNN models do not perform best on TVSum, they still show promising performance compared to existing video summarization models. The results show that the CNN produces attention maps by capturing the spatiotemporal relations and detecting crucial attributes in frame features based on its absolute position encoding ability, unlike conventional methods that solely address the temporal dependency.

Table 1. Comparison between CSTA and the state of the art on SumMe and TVSum. Rank is the average rank between Kendall's (τ) and Spearman's (ρ) coefficients. We categorize different types of video summarization models: temporal (T) and spatiotemporal (ST) attention-based, multi-modal-based (M), and external-dataset-based (+) models. The scores marked in bold and by an asterisk are the best and second-best ones, respectively. GoogleNet is the baseline model. Note that all feature extraction models are CNNs for a fair comparison.

| Method | SumMe Rank | SumMe τ | SumMe ρ | TVSum Rank | TVSum τ | TVSum ρ |
|---|---|---|---|---|---|---|
| Random | - | 0.000 | 0.000 | - | 0.000 | 0.000 |
| Human | - | 0.205 | 0.213 | - | 0.177 | 0.204 |
| dppLSTM [42] | 15 | 0.040 | 0.049 | 22 | 0.042 | 0.055 |
| DAC [8] (T) | 12.5 | 0.063 | 0.059 | 21 | 0.058 | 0.065 |
| HSA-RNN [45] | 11.5 | 0.064 | 0.066 | 19.5 | 0.082 | 0.088 |
| DAN [27] (ST) | - | - | - | 19.5 | 0.071 | 0.099 |
| STVT [15] (ST) | - | - | - | 15.5 | 0.100 | 0.131 |
| DSNet-AF [47] (T) | 16 | 0.037 | 0.046 | 13.5 | 0.113 | 0.138 |
| DSNet-AB [47] (T) | 13.5 | 0.051 | 0.059 | 15 | 0.108 | 0.129 |
| HMT [46] (M) | 10.5 | 0.079 | 0.080 | 17.5 | 0.096 | 0.107 |
| VJMHT [24] (T) | 8.5 | 0.106 | 0.108 | 17.5 | 0.097 | 0.105 |
| CLIP-It [29] (M) | - | - | - | 13.5 | 0.108 | 0.147 |
| iPTNet [19] (+) | 8.5 | 0.101 | 0.119 | 11 | 0.134 | 0.163 |
| A2Summ [13] (M) | 7 | 0.108 | 0.129 | 10 | 0.137 | 0.165 |
| VASNet [7] (T) | 6 | 0.160 | 0.170 | 9 | 0.160 | 0.170 |
| AAAM [37] (T) | - | - | - | 6.5 | 0.169 | 0.223 |
| MAAM [37] (T) | - | - | - | 5.5 | 0.179 | 0.236 |
| VSS-Net [43] (ST) | - | - | - | 3 | 0.190 | 0.249 |
| DMASum [39] (ST) | 11 | 0.063 | 0.089 | **1** | **0.203** | **0.267** |
| RR-STG [48] (ST) | 2.5 | 0.211* | 0.234 | 7.5 | 0.162 | 0.212 |
| MSVA [9] (M) | 3.5 | 0.200 | 0.230 | 5.5 | 0.190 | 0.210 |
| SSPVS [25] (M) | 3* | 0.192 | 0.257* | 4.5 | 0.181 | 0.238 |
| GoogleNet [35] (ST) | 5 | 0.176 | 0.197 | 11.5 | 0.129 | 0.163 |
| CSTA (ST) | **1** | **0.246** | **0.274** | 2* | 0.194* | 0.255* |

4.3. Performance Comparison

We compare CSTA with existing state-of-the-art methods on SumMe and TVSum. The results in Table 1 show that CSTA achieves the best performance on SumMe and the second-best score on TVSum based on the average rank. DMASum [39] shows the best performance on TVSum but does not perform well on SumMe, as indicated in Table 1.

Table 2. Kendall's (τ) and Spearman's (ρ) coefficients for different modules. (+) denotes the stacking of modules on top of the previous ones.

| Module | SumMe τ | SumMe ρ | TVSum τ | TVSum ρ |
|---|---|---|---|---|
| GoogleNet (Baseline) | 0.176 | 0.197 | 0.129 | 0.163 |
| (+) Attention Module | 0.184 | 0.205 | 0.176 | 0.231 |
| (+) Att_D | 0.189 | 0.211 | 0.182 | 0.240 |
| (+) Key, Value Embedding | 0.207 | 0.231 | 0.193 | 0.253 |
| (+) Positional Encoding | 0.225 | 0.251 | 0.189 | 0.248 |
| (+) X_CLS | 0.231 | 0.257 | 0.193 | 0.254 |
| (+) Skip Connection | 0.246 | 0.274 | 0.194 | 0.255 |
DMASum has τ and ρ coefficients of 0.203 and 0.267 on TVSum, respectively, but only 0.063 and 0.089 on SumMe. This implies that CSTA provides more stable performance than DMASum, although it performs slightly worse than DMASum on TVSum. Based on the overall performance on both datasets, CSTA achieves state-of-the-art results. Further, CSTA excels over video summarization models relying on classical pairwise attention [7, 8, 24, 37, 47], which focus on temporal attention only. This clarifies that considering the visual dependency helps CSTA understand crucial moments by capturing meaningful visual contexts. Like CSTA, some approaches, including DMASum, focus on spatial and temporal dependency [15, 27, 43, 48], but they perform poorly compared to our proposed methodology. This is because the CNN is much more helpful than previous methods, owing to its ability to learn absolute positions in frame features. CSTA also outperforms methods that require additional datasets from other modalities or tasks [9, 13, 19, 25, 29, 46]. Our observations suggest that CSTA can find essential moments in videos solely based on images, without assistance from extra data. We also show visualizations of generated summary videos from different models in Appendix B.

4.4. Ablation Study

This study verifies all components step by step, as indicated in Table 2. We deploy an attention structure with GoogleNet and a classifier for temporal dependency, denoted as (+) Attention Module. With the assistance of the weighted values from the CNN, there is an improvement of 0.008 on SumMe and at least 0.047 on TVSum, showing the power of the CNN as attention. (+) Att_D is the result obtained using softmax along the time and dimension axes to reflect the spatiotemporal importance. The improvement of 0.005 to 0.009 on both datasets indicates that considering the spatial importance is meaningful.
The Key and Value Embeddings strengthen CSTA as a linear projection based on [38]. Although (+) Positional Encoding reveals a small performance drop of 0.004 for the τ coefficient and 0.005 for the ρ coefficient on TVSum, the performance increases significantly from 0.207 to 0.225 for τ and from 0.231 to 0.251 for ρ on SumMe. (+) X_CLS is the result obtained when utilizing the CLS token. Because this study combines the CLS token via adaptive pooling, the CLS token only affects a few video frames. However, adding the CLS token improves the performance on both datasets because it generalizes the classifier, which contains fully connected layers. We also see the effects of the skip connection, denoted (+) Skip Connection, as suggested by [14]. The skip connection exhibits similar performance on TVSum and an improvement of about 0.015 on SumMe. We also tested different CNN models as the baseline in Appendix C, various experiments on the detailed construction of our model in Appendix D, and several hyperparameters in Appendix E.

4.5. Computation Comparison

Table 3. Comparison of MACs between video summarization models. Rank is the average rank between Kendall's and Spearman's coefficients in Table 1. FE is the MACs during feature extraction, and SP is the MACs during score prediction. We categorize models as temporal attention-based (T), spatiotemporal attention-based (ST), and multi-modal-based (M) models.

| Method | SumMe Rank | SumMe FE | SumMe SP | TVSum Rank | TVSum FE | TVSum SP |
|---|---|---|---|---|---|---|
| DSNet-AF [47] (T) | 16 | 413.03G | 1.18G | 13.5 | 661.83G | 1.90G |
| DSNet-AB [47] (T) | 13.5 | 413.03G | 1.29G | 15 | 661.83G | 2.07G |
| VJMHT [24] (T) | 8.5 | 413.03G | 18.21G | 17.5 | 661.83G | 28.25G |
| VASNet [7] (T) | 6 | 413.03G | 1.43G | 9 | 661.83G | 2.30G |
| RR-STG [48] (ST) | 2.5 | 54.82T | 0.31G | 7.5 | 88.41T | 0.20G |
| MSVA [9] (M) | 3.5 | 13.76T | 3.63G | 5.5 | 22.08T | 5.81G |
| SSPVS [25] (M) | 3 | 413.49G | 20.72G | 4.5 | 662.46G | 44.22G |
| CSTA (ST) | 1 | 413.03G | 9.78G | 2 | 661.83G | 15.73G |

In this paper, we analyze the computation burdens of video summarization models, focusing on the feature extraction and score prediction steps.
The standard procedure for creating summary videos comprises feature extraction, score prediction, and key-shot selection. Feature extraction is a necessary step that converts frames into features using pre-trained models so that video summarization models can take frames of videos as inputs. Score prediction is the step in which video summarization models infer the importance scores for videos. Existing studies generally use the same key-shot selection process based on the knapsack algorithm to determine important video segments, so we ignore the computation of key-shot selection. Table 3 reports MACs measurements and compares the computation resources during inference per video. CSTA performs best with relatively fewer MACs than the other video summarization models. Based on the average rank from Table 1, more computational cost or supplemental data from other modalities is inevitable for better video summarization performance. Unlike previous approaches, CSTA exhibits high performance with fewer computational resources by exploiting CNN as a sliding window.

We find that our model is more efficient than previous ones when considering spatiotemporal contexts. RR-STG [48] shows far fewer MACs than CSTA during score prediction; however, it requires exceptionally more MACs during feature extraction than the others. RR-STG uses the feature extraction step for visual relationships by feeding each frame into an object detection model [32], thereby relying heavily on pre-processing. When summarizing new videos, RR-STG needs significant time to obtain spatial associations even though the score prediction takes less time. Other methods [15, 27, 39, 43] design two modules to reflect spatial and temporal dependency, respectively, as shown in Figure 1a and Figure 1b. These approaches become costly when processing numerous frames in long videos for video summarization.
CSTA effectively captures spatiotemporal importance in a single pathway using a CNN, as illustrated in Figure 1c. Thus, our proposed method shows superior performance by focusing on temporal and visual importance.

5. Conclusion

This study addresses the problem of attention in video summarization. Existing pairwise attention-based video summarization mechanisms fail to account for visual dependencies, and prior research addressing this issue involves significant computational demands. To deal with the same problem efficiently, we propose CSTA, in which a CNN's ability is used for video summarization for the first time. We also verify that the CNN works on frame features and creates attention maps. The strength of the CNN allows CSTA to achieve state-of-the-art results based on the overall performance on two popular benchmark datasets with fewer MACs than before. Our proposed model even outperforms multi-modal or external-dataset-based models without additional data. For future work, we suggest further exploring how the CNN affects video representations by tailoring frame-feature-specific CNN models or training feature-extraction and attention-based CNN models. We believe this study can encourage follow-up research on video summarization and other video-related deep-learning studies.

Acknowledgements. This work was supported by a Korea Internet & Security Agency (KISA) grant funded by the Korea government (PIPC) (No. RS-2023-00231200, Development of personal video information privacy protection technology capable of AI learning in an autonomous driving environment).

References

[1] Evlampios Apostolidis, Georgios Balaouras, Vasileios Mezaris, and Ioannis Patras. Combining global and local attention with positional encoding for video summarization. In 2021 IEEE International Symposium on Multimedia (ISM), pages 226–234. IEEE, 2021. 1, 2, 4, 5, 3
[2] Evlampios Apostolidis, Georgios Balaouras, Vasileios Mezaris, and Ioannis Patras.
Summarizing videos using concentrated attention and considering the uniqueness and diversity of the video frames. In Proceedings of the 2022 International Conference on Multimedia Retrieval, pages 407–415, 2022. 2
[3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 5
[4] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen. Conditional positional encodings for vision transformers. In The Eleventh International Conference on Learning Representations, 2022. 3, 5
[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009. 1
[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020. 3, 4, 6
[7] Jiri Fajtl, Hajar Sadeghi Sokeh, Vasileios Argyriou, Dorothy Monekosso, and Paolo Remagnino. Summarizing videos with attention. In Computer Vision–ACCV 2018 Workshops: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers 14, pages 39–54. Springer, 2019. 1, 2, 3, 5, 7, 8
[8] Hao Fu, Hongxing Wang, and Jianyu Yang. Video summarization with a dual attention capsule network. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 446–451. IEEE, 2021. 7
[9] Junaid Ahmed Ghauri, Sherzod Hakimov, and Ralph Ewerth. Supervised video summarization via multiple feature sets with parallel attention. In 2021 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6. IEEE, 2021. 3, 7, 8, 1
[10] Xavier Glorot and Yoshua Bengio.
Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010. 1
[11] Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, and Chang Xu. CMT: Convolutional neural networks meet vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12175–12185, 2022. 3
[12] Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating summaries from user videos. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VII 13, pages 505–520. Springer, 2014. 2, 6
[13] Bo He, Jun Wang, Jielin Qiu, Trung Bui, Abhinav Shrivastava, and Zhaowen Wang. Align and attend: Multimodal summarization with dual contrastive losses. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14867–14878, 2023. 5, 7, 3
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. 3, 5, 6, 8
[15] Tzu-Chun Hsu, Yi-Sheng Liao, and Chun-Rong Huang. Video summarization with spatiotemporal vision transformer. IEEE Transactions on Image Processing, 2023. 1, 3, 5, 7, 8
[16] Md Amirul Islam, Sen Jia, and Neil DB Bruce. How much position information do convolutional neural networks encode? In International Conference on Learning Representations, 2019. 2, 3, 5
[17] Zhong Ji, Kailin Xiong, Yanwei Pang, and Xuelong Li. Video summarization with attention-based encoder–decoder networks. IEEE Transactions on Circuits and Systems for Video Technology, 30(6):1709–1717, 2019. 1, 2
[18] Zhong Ji, Yuxiao Zhao, Yanwei Pang, Xi Li, and Jungong Han.
Deep attentive video summarization with distribution consistency learning. IEEE Transactions on Neural Networks and Learning Systems, 32(4):1765–1775, 2020. 1
[19] Hao Jiang and Yadong Mu. Joint video summarization and moment localization by cross-task sample transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16388–16398, 2022. 7
[20] Yunjae Jung, Donghyeon Cho, Sanghyun Woo, and In So Kweon. Global-and-local relative position embedding for unsupervised video summarization. In European Conference on Computer Vision, pages 167–183. Springer, 2020. 2
[21] Osman Semih Kayhan and Jan C van Gemert. On translation invariance in CNNs: Convolutional layers can exploit absolute spatial location. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14274–14285, 2020. 2, 3, 5
[22] Maurice G Kendall. The treatment of ties in ranking problems. Biometrika, 33(3):239–251, 1945. 6
[23] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. 1
[24] Haopeng Li, Qiuhong Ke, Mingming Gong, and Rui Zhang. Video joint modelling based on hierarchical transformer for co-summarization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3):3904–3917, 2022. 2, 7, 8, 1
[25] Haopeng Li, Qiuhong Ke, Mingming Gong, and Tom Drummond. Progressive video summarization via multimodal self-supervised learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5584–5593, 2023. 3, 7, 8, 1
[26] Ping Li, Qinghao Ye, Luming Zhang, Li Yuan, Xianghua Xu, and Ling Shao. Exploring global diverse attention via pairwise temporal relation for video summarization. Pattern Recognition, 111:107677, 2021. 1, 2
[27] Guoqiang Liang, Yanbing Lv, Shucheng Li, Xiahong Wang, and Yanning Zhang.
Video summarization with a dual-path attentive network. Neurocomputing, 467:1–9, 2022. 1, 3, 7, 8
[28] Behrooz Mahasseni, Michael Lam, and Sinisa Todorovic. Unsupervised video summarization with adversarial LSTM networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 202–211, 2017. 1
[29] Medhini Narasimhan, Anna Rohrbach, and Trevor Darrell. CLIP-It! Language-guided video summarization. Advances in Neural Information Processing Systems, 34:13988–14000, 2021. 2, 7
[30] Mayu Otani, Yuta Nakashima, Esa Rahtu, and Janne Heikkila. Rethinking the evaluation of video summaries. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7596–7604, 2019. 6
[31] Danila Potapov, Matthijs Douze, Zaid Harchaoui, and Cordelia Schmid. Category-specific video summarization. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI 13, pages 540–555. Springer, 2014. 6
[32] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 28, 2015. 3, 8
[33] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018. 3, 6
[34] Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. TVSum: Summarizing web videos using titles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5179–5187, 2015. 2, 6
[35] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
3, 4, 5, 6, 7, 1
[36] Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019. 3, 6
[37] Hacene Terbouche, Maryan Morel, Mariano Rodriguez, and Alice Othmani. Multi-annotation attention model for video summarization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3142–3151, 2023. 6, 7, 1
[38] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. 1, 3, 4, 5, 8
[39] Junyan Wang, Yang Bai, Yang Long, Bingzhang Hu, Zhenhua Chai, Yu Guan, and Xiaolin Wei. Query twice: Dual mixture attention meta learning for video summarization. In Proceedings of the 28th ACM International Conference on Multimedia, pages 4023–4031, 2020. 1, 3, 7, 8
[40] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. CvT: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22–31, 2021. 3
[41] Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, and Wei Wu. Incorporating convolution designs into visual transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 579–588, 2021. 3
[42] Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman. Video summarization with long short-term memory. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part VII 14, pages 766–782. Springer, 2016. 3, 6, 7, 1
[43] Yunzuo Zhang, Yameng Liu, Weili Kang, and Ran Tao. VSS-Net: Visual semantic self-mining network for video summarization. IEEE Transactions on Circuits and Systems for Video Technology, 2023. 1, 3, 7, 8
[44] Bin Zhao, Xuelong Li, and Xiaoqiang Lu.
Hierarchical recurrent neural network for video summarization. In Proceedings of the 25th ACM International Conference on Multimedia, pages 863–871, 2017. 1
[45] Bin Zhao, Xuelong Li, and Xiaoqiang Lu. HSA-RNN: Hierarchical structure-adaptive RNN for video summarization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7405–7414, 2018. 7
[46] Bin Zhao, Maoguo Gong, and Xuelong Li. Hierarchical multimodal transformer to summarize videos. Neurocomputing, 468:360–369, 2022. 1, 7
[47] Wencheng Zhu, Jiwen Lu, Jiahao Li, and Jie Zhou. DSNet: A flexible detect-to-summarize network for video summarization. IEEE Transactions on Image Processing, 30:948–962, 2020. 2, 3, 7, 8, 1
[48] Wencheng Zhu, Yucheng Han, Jiwen Lu, and Jie Zhou. Relational reasoning over spatial-temporal graphs for video summarization. IEEE Transactions on Image Processing, 31:3017–3031, 2022. 1, 3, 7, 8
[49] Daniel Zwillinger and Stephen Kokoska. CRC Standard Probability and Statistics Tables and Formulae. CRC Press, 1999. 6

CSTA: CNN-based Spatiotemporal Attention for Video Summarization
Supplementary Material

A. Experiment details

A.1. Measure correlation

Based on [37], we run every experiment 10 times for strict evaluation, since video summarization models are sensitive to randomness due to the scarcity of datasets. We also follow [37] in performing the experiments rigorously with non-overlapping five-fold cross-validation so that all videos appear as test data. For each fold, we use 80% of the videos in the dataset for training and 20% for testing. We then average the results of all folds to obtain the final score. Owing to the non-overlapping videos in the training data of each split, different numbers of training epochs are required; therefore, we pick the model that shows the best performance on the test data over the training epochs of each split.
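The non-overlapping five-fold protocol of A.1 can be sketched as follows; the fold assignment and seed are illustrative, not the authors' exact split:

```python
import random

def five_fold_splits(num_videos, seed=0):
    """Non-overlapping 5-fold cross-validation: every video appears in
    exactly one test fold; each split trains on ~80% and tests on ~20%."""
    idx = list(range(num_videos))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]  # disjoint folds covering all videos
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

# SumMe has 25 videos, so each split trains on 20 and tests on 5.
splits = five_fold_splits(25)
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 20 5
```

The per-fold test scores are then averaged (here, over 5 folds and 10 random repeats) to produce the reported coefficients.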
During training, the predicted score for each input video is compared with the average of all ground truth scores of the summary videos for that input video. During inference, the performance for each video is calculated by comparing each ground truth score with the predicted score and then averaging the results.

A.2. Implementation details

For a fair comparison, we follow the standard procedure [9, 15, 24, 25, 28, 42] by uniformly subsampling the videos to 2 fps and acquiring the image representation of every frame from GoogleNet [35]. GoogleNet is also used as the trainable CNN to match the dimension of all features to 1,024, and all CNN models are pre-trained on ImageNet [5]. The initial weights of the linear layers in the classifier are initialized by Xavier initialization [10], while the key and value embeddings are initialized randomly. The output channels of the linear layers and the key and value embedding dimensions are 1,024. The reduction ratio r in the CNN is 32, an inherent trait of GoogleNet, and all adaptive pooling layers are adaptive average pooling operations. The shape of the CLS token is 3 × 1,024, the epsilon value for layer normalization is 1e-6, and the dropout rate is 0.6. We train CSTA on a single NVIDIA GeForce RTX 4090 for 100 epochs with a batch size of 1 and use an Adam optimizer [23] with a learning rate of 1e-3 and a weight decay of 1e-7.

B. Summary video visualization

(a) The images from the summary video titled "paluma jump" about people diving into the water.

As shown in Figure 5, we visualize and compare the summary videos generated by different models. We compared CSTA with DSNet-AB, DSNet-AF [47], VASNet [7], and VJMHT [24]. The videos were selected from SumMe (Figure 5a) and TVSum (Figure 5b and Figure 5c). Since each model used different videos for testing, we chose videos used for training by all models.
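The 2 fps subsampling step from A.2 can be sketched as an index computation; the rounding convention below is an assumption, and implementations differ:

```python
def subsample_indices(num_frames, native_fps, target_fps=2):
    """Indices of the frames kept when downsampling a video from its
    native frame rate to `target_fps` (2 fps in the paper's setup)."""
    step = native_fps / target_fps          # e.g. 30 fps -> keep every 15th frame
    return [int(i * step) for i in range(int(num_frames / step))]

# A 10-second clip at 30 fps (300 frames) keeps 20 frames at 2 fps.
idx = subsample_indices(300, 30)
print(len(idx), idx[:4])  # 20 [0, 15, 30, 45]
```

Each kept frame is then passed through the pre-trained GoogleNet to obtain its 1,024-dimensional feature.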
(b) The images from the summary video titled "ICC World Twenty20 Bangladesh 2014 Flash Mob - Pabna University of Science & Technology ( PUST )" about people performing flash mobs on the street and crowds watching them.

(c) The images from the summary video titled "Chinese New Year Parade 2012 New York City Chinatown" about the parade celebrating the Chinese New Year on the streets of New York City.

Figure 5. Visualization and comparison of summary videos generated by different models. The images above are the frames selected by CSTA as parts of the summary video. The graphs below show which frames the models pick as keyframes. In the graphs, each row is the result of one model. The x-axis is the order of the frames, and the black boxes are the ground truth frames. The colored parts are the frames each model selects, and the white parts are the frames left unselected by each model.

The summary video in Figure 5a was taken during the "paluma jump," showing people diving into the water at Paluma. The first three frames show the exact moment people dive into the water. Based on the selected frames in the graphs, CSTA selects keyframes that represent the main content of the video more accurately than the other models. Although the other models chose key moments in the later part of the video, they did not find the diving moments as precisely as CSTA. The summary video in Figure 5b was taken during the "ICC World Twenty20 Bangladesh 2014 Flash Mob - Pabna University of Science & Technology ( PUST )," showing flash mobs on the street. The frames selected by CSTA display different flash mob performances on the street and the people watching them. CSTA selects keyframes in the videos more consistently than the other models, which either select non-keyframes or skip keyframes, as shown in the graphs in Figure 5b.
The summary video in Figure 5c was taken during the "Chinese New Year Parade 2012 New York City Chinatown," showing the parade celebrating the Chinese New Year in New York City. Based on the chosen images, CSTA finds representative frames containing the parade or people reacting to it (e.g., images showing tiger-like masks, people marching on the street, or people recording the parade). Unlike the other models, the graphs show that CSTA creates exactly the same summary video as the ground truth. These results suggest the superiority of CSTA over the other models.

C. CNN models

Baseline               SumMe τ / ρ     TVSum τ / ρ   | CSTA                      SumMe τ / ρ     TVSum τ / ρ
MobileNet-V2 [33]      0.170 / 0.189   0.122 / 0.155 | CSTA (MobileNet-V2)       0.228 / 0.255   0.194 / 0.254
EfficientNet-B0 [36]   0.185 / 0.206   0.119 / 0.150 | CSTA (EfficientNet-B0)    0.222 / 0.247   0.194 / 0.255
ResNet-18 [14]         0.167 / 0.187   0.140 / 0.178 | CSTA (ResNet-18)          0.225 / 0.251   0.195 / 0.256

Table 4. The results of CSTA with different CNN models as the baseline.

We tested CSTA using different CNN models as the baseline, as shown in Table 4. We unified the dimension size to 1,024 because each CNN model exports features of different dimensions. All CNN models improved their performance with the CSTA architecture. This supports the notion that CSTA does not work only with GoogleNet.

D. Architecture history

Here, we provide a step-by-step explanation of how CSTA is constructed. For all experiments, τ and ρ represent Kendall's and Spearman's coefficients, respectively. A score marked in bold indicates that the model was selected because it yielded the best performance in the experiment. Each experiment was run 10 times for strict verification, and the average score was recorded as the final one. As explained in Section 4.1, the ground truth of SumMe takes the form of summary videos, so the models aim to generate summary videos correctly.
Kendall's and Spearman's coefficients between the predicted and ground truth summary videos are the basis for evaluating performance on SumMe. Based on the code provided in previous studies [9, 13], we generate summary videos by assigning 1 to selected frames and 0 otherwise. In videos, most frames are not keyframes, so the performance on SumMe is usually higher than that on TVSum. The ground truth of TVSum takes the form of shot-level importance scores, so models should aim to predict accurate shot-level importance scores. The scores of the entire frames are determined by assigning the identical scores of subsampled frames to nearby frames, based on the code provided in previous studies [1, 7, 13, 47]. Therefore, Kendall's and Spearman's coefficients of the subsampled frames are the basis for evaluating performance on TVSum. The difference between SumMe and TVSum can cause poor performance on TVSum, and the performance on SumMe looks much better than that on TVSum.

D.1. Channel of input feature

Channels   SumMe τ / ρ     TVSum τ / ρ
1          0.169 / 0.188   0.122 / 0.154
3          0.176 / 0.197   0.129 / 0.163

Table 5. Comparison of different numbers of input channels on GoogleNet.

In Table 5, we test the number of channels of the input frame features. As explained in Section 3.2, we copy the input frame feature twice to create a three-channel input, matching the number of channels of the RGB images usually used to train CNN models. We use GoogleNet as the baseline and check the results when the number of input channels is 1 and 3. The model taking 3-channel features as inputs performs better than the one taking single-channel features. This supports the idea that shaping the input frame features like RGB images helps to better utilize CNN models.

D.2. Verify attention

Softmax. We verify the effects of the attention structure and perform softmax along different axes, as shown in Table 6.
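The three-channel input construction from Section D.1 can be sketched with NumPy (the paper's pipeline uses PyTorch tensors; the shapes here are illustrative):

```python
import numpy as np

def to_three_channels(features):
    """Replicate a (T, D) frame-feature map into a (3, T, D) input so a
    CNN pre-trained on RGB images can consume it like an image."""
    return np.repeat(features[None, :, :], 3, axis=0)

rng = np.random.default_rng(0)
feats = rng.random((120, 1024))      # 120 frames, 1024-dim GoogleNet features
x = to_three_channels(feats)
print(x.shape)                       # (3, 120, 1024)
```

All three channels are identical copies; only the shape changes so that the pre-trained convolution stem applies.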
The attention structure, composed of a CNN generating attention maps and a classifier predicting scores, increases the baseline performance regardless of softmax (Baseline+Att).

Setting                   SumMe τ / ρ     TVSum τ / ρ
Baseline (GoogleNet)      0.176 / 0.197   0.129 / 0.163
Baseline+Att              0.214 / 0.239   0.167 / 0.219
Baseline+Att+Soft_T       0.184 / 0.205   0.176 / 0.231
Baseline+Att+Soft_D       0.186 / 0.207   0.170 / 0.224
Baseline+Att+Soft_T&D     0.189 / 0.211   0.182 / 0.240

Table 6. The ablation study for the softmax. The baseline is the plain GoogleNet summarizing videos, the same as in Figure 4. Att is an attention-based CNN structure without softmax. Soft applies the softmax operation to the model along the frame axis (T), the dimension axis (D), or both axes (T&D).

Compared to the baseline, the attention structure increased the performance by 0.038 and 0.042 for Kendall's and Spearman's coefficients, respectively, on SumMe, and by 0.038 and 0.056 for Kendall's and Spearman's coefficients, respectively, on TVSum. This demonstrates the CNN's ability as an attention mechanism. Reflecting weighted values between frames (Att+Soft_T) or dimensions (Att+Soft_D) improved the baseline model by at least 0.008 on SumMe and 0.041 on TVSum. This supports the importance of spatial attention, and it is better to consider weighted values along both the frame and dimension axes (Att+Soft_T&D) than to focus on only one of the axes. On SumMe, the model without softmax is better than the model with softmax; on TVSum, the reverse holds. Since there is no single best model for all datasets, we keep both models as baselines and find the best one when extending the structure.

Setting                   SumMe τ / ρ     TVSum τ / ρ
Baseline (Att_T&D)        0.189 / 0.211   0.182 / 0.240
Baseline+Balance_T        0.186 / 0.207   0.178 / 0.233
Baseline+Balance_D        0.186 / 0.207   0.182 / 0.239
Baseline+Balance_BD       0.186 / 0.207   0.175 / 0.230
Baseline+Balance_BU       0.187 / 0.208   0.183 / 0.240

Table 7. The ablation study for the balancing ratio.
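The softmax variants of Table 6 can be sketched as follows. How Soft_T&D combines the two axes is not spelled out here, so the averaged combination below is only one plausible reading:

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
attn = rng.standard_normal((100, 1024))   # (frames T, dimensions D) attention map

soft_t = softmax(attn, axis=0)            # Soft_T: weights compete across frames
soft_d = softmax(attn, axis=1)            # Soft_D: weights compete across dimensions
soft_td = 0.5 * (soft_t + soft_d)         # Soft_T&D: assumed combination of both
```

Each variant yields a (T, D) map of weighted values that is applied to the input features before the classifier.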
The baseline applies the softmax operation to the attention map along the frame and dimension axes (Table 6). Balance is the balancing ratio between frames and dimensions. T adjusts the weighted values along the frame axis to the dimension axis. D adjusts the scale of the weighted values along the dimension axis to the frame axis. BD downscales the larger values to the smaller ones, and BU upscales the smaller values to the larger ones.

Balance Ratio. We hypothesize that the imbalanced ratio between frames and dimensions deteriorates the performance of the model with softmax. For example, suppose the number of frames is 100 and the dimension size is 1,000. In this case, the weighted values between frames are usually larger than those between dimensions (on average, 0.01 versus 0.001). This situation can lead to overfitting the frame importance, so we tested the performance of the model under a balanced ratio between the number of frames and dimensions, as shown in Table 7. However, all results were worse than the baseline, so we kept the default setting.

D.3. Self-attention extension

Given that our model operates the attention structure differently from existing ones, we must test which existing methods work for CSTA. First, we verify the key and value embeddings and the scaling factors used in self-attention [38]. The key and value embeddings project the input data into another space via linear layers, while the scaling factor divides all values of the attention maps by the size of the dimension. Unlike self-attention, which handles 1-dimensional data, we must consider both the frame and dimension axes for the scaling factor because of our 2-dimensional data. We test the scaling factor using the size of the dimensions (Scale_D), the number of frames (Scale_T), and both (Scale_T&D).
The best performance for the model without softmax is achieved by utilizing the key and value embeddings together with the scaling factor based on the number of frames (EMB+Scale_T), as shown in Table 8a.

Setting                  SumMe τ / ρ     TVSum τ / ρ   | Setting                      SumMe τ / ρ     TVSum τ / ρ
Baseline (Att)           0.214 / 0.239   0.167 / 0.219 | Baseline+EMB                 0.158 / 0.177   0.033 / 0.042
Baseline+Scale_D         0.220 / 0.246   0.173 / 0.227 | Baseline+EMB+Scale_D         0.209 / 0.238   0.187 / 0.246
Baseline+Scale_T         0.213 / 0.238   0.173 / 0.227 | Baseline+EMB+Scale_T         0.214 / 0.239   0.187 / 0.245
Baseline+Scale_T&D       0.196 / 0.218   0.154 / 0.203 | Baseline+EMB+Scale_T&D       0.192 / 0.215   0.191 / 0.250

(a) The result of the model without softmax as the baseline.

Setting                  SumMe τ / ρ     TVSum τ / ρ   | Setting                      SumMe τ / ρ     TVSum τ / ρ
Baseline (Att_T&D)       0.189 / 0.211   0.182 / 0.240 | Baseline+EMB                 0.207 / 0.231   0.193 / 0.253
Baseline+Scale_D         0.151 / 0.168   0.192 / 0.251 | Baseline+EMB+Scale_D         0.162 / 0.181   0.190 / 0.249
Baseline+Scale_T         0.160 / 0.178   0.192 / 0.252 | Baseline+EMB+Scale_T         0.163 / 0.182   0.191 / 0.251
Baseline+Scale_T&D       0.149 / 0.166   0.187 / 0.146 | Baseline+EMB+Scale_T&D       0.163 / 0.181   0.187 / 0.244

(b) The result of the model with softmax as the baseline.

Table 8. The ablation study for methods in self-attention. Att is the model without softmax, and Att_T&D is the model with softmax along the frame and dimension axes (Table 6). EMB employs key and value embeddings in the baseline model. Scale divides the values of the attention maps by the number of frames (T), the number of dimensions (D), or both (T&D).

Although utilizing the scaling factor with the size of the dimensions (Scale_D) yields better performance than EMB+Scale_T on SumMe, it yields much worse performance on TVSum, even considering the performance gaps on SumMe. Also, EMB+Scale_D and EMB+Scale_T&D show slightly better performance than EMB+Scale_T on TVSum, but much worse performance on SumMe. We select EMB+Scale_T as the best model based on the overall performance.
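The scaling factors and embeddings of Table 8 can be sketched as plain array operations. Treating Scale_T&D as division by both sizes is an assumption, and the weight initialization is illustrative:

```python
import numpy as np

T, D = 100, 1024
rng = np.random.default_rng(0)
attn = rng.standard_normal((T, D))       # raw (frames, dimensions) attention map

scale_t = attn / T                       # Scale_T: divide by the number of frames
scale_d = attn / D                       # Scale_D: divide by the dimension size
scale_td = attn / (T * D)                # Scale_T&D: assumed to divide by both

# EMB: key/value embeddings project the input into another space
# through linear layers before the attention CNN.
x = rng.standard_normal((T, D))
W_k = rng.standard_normal((D, D)) * D ** -0.5   # illustrative random init
keys = x @ W_k                                   # key embedding, shape (T, D)
```

Dividing by T (EMB+Scale_T) rescales each dimension's weights by the number of competing frames, which is the variant selected above.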
For the model with softmax, we select the model using key and value embeddings (Baseline+EMB) because it shows the best performance on all datasets, as shown in Table 8b.

D.4. Transformer extension

We verify the methods used in transformers [38], namely positional encodings and dropout. Positional encoding strengthens position awareness, whereas dropout enhances generalization. We expect the same effects when we apply positional encodings and dropout to the input frame features. We use fixed positional encoding (FPE) [38], relative positional encoding (RPE) [1], learnable positional encoding (LPE) [38], and conditional positional encoding (CPE) [4]. We must test both 2-dimensional (TD) and 1-dimensional (T) positional encoding matrices to represent the temporal position explicitly, because our data structure differs from that of the original positional encoding, which handles only 1-dimensional data. For CPE, T operates a depth-wise 1D CNN on each channel, whereas TD operates on all channels. We use 0.1 as the dropout ratio, the same as [38]. The results of both models, with and without softmax, reveal that applying positional encodings or dropout to the input frame features deteriorates Kendall's and Spearman's coefficients on all datasets, as shown in Table 9. We suppose that adding different values to each frame feature distorts the data, making it difficult for the model to learn patterns from the frame features, because CSTA treats the frame features as images. If more data were available, the model could learn location information from these positional encodings because they act similarly to a bias. Thus, all results yielded by the models with and without softmax worsen when using positional encoding or dropout on SumMe. However, some results are similar to or even better than the baseline on TVSum because TVSum has more data than SumMe. Due to the lack of data, we chose the baseline models based on performance.

D.5.
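The fixed positional encoding (FPE) borrowed from [38] is the standard sinusoidal table; a minimal sketch over the (T, D) frame features, with shapes chosen for illustration:

```python
import numpy as np

def fixed_positional_encoding(num_frames, dim):
    """Sinusoidal positional encoding as in Vaswani et al. [38]: added to
    the (T, D) frame features to inject temporal order explicitly."""
    pos = np.arange(num_frames)[:, None]                    # (T, 1)
    i = np.arange(dim)[None, :]                             # (1, D)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)   # per-position phase
    # even dimensions use sine, odd dimensions use cosine
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

pe = fixed_positional_encoding(100, 1024)
print(pe.shape)  # (100, 1024)
```

In the ablation above, this table is simply added to the input features (the FPE(TD)/FPE(T) split concerns whether the table varies over both axes or only the frame axis).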
PGL-SUM extension

Unlike existing transformers, PGL-SUM [1] demonstrates benefits when applying positional encodings and dropout to the multiplication outputs between the key and query vectors. We adopt the same methods to further improve the model's positional recognition by adding positional encodings and applying dropout to the CNN's outputs. We use 0.5 as the dropout ratio, the same as [1]. The best performance for the models without and with softmax is achieved by Baseline+Drop and Baseline+FPE(TD)+Drop, respectively, as shown in Table 10. In Table 10b, some models perform slightly better than Baseline+FPE(TD)+Drop on TVSum, but their performance is considerably worse than the selected one on SumMe. Comparing the best performance in both tables, we observe that the performance of the model with softmax

Setting                  SumMe τ / ρ     TVSum τ / ρ     | Setting                      SumMe τ / ρ     TVSum τ / ρ
Baseline (Att)           0.214 / 0.239   0.187 / 0.245   | Baseline+Drop                0.181 / 0.202   0.190 / 0.250
Baseline+FPE(TD)         0.145 / 0.161   0.031 / 0.041   | Baseline+FPE(TD)+Drop        0.148 / 0.165   -0.007 / -0.009
Baseline+FPE(T)          0.192 / 0.214   0.189 / 0.248   | Baseline+FPE(T)+Drop         0.179 / 0.200   0.191 / 0.251
Baseline+RPE(TD)         0.193 / 0.216   0.190 / 0.249   | Baseline+RPE(TD)+Drop        0.172 / 0.192   0.191 / 0.250
Baseline+RPE(T)          0.195 / 0.217   0.191 / 0.251   | Baseline+RPE(T)+Drop         0.190 / 0.211   0.188 / 0.247
Baseline+LPE(TD)         0.140 / 0.156   -0.032 / -0.043 | Baseline+LPE(TD)+Drop        0.136 / 0.151   -0.004 / -0.005
Baseline+LPE(T)          0.152 / 0.169   0.062 / 0.082   | Baseline+LPE(T)+Drop         0.138 / 0.153   0.036 / 0.048
Baseline+CPE(TD)         0.143 / 0.160   0.168 / 0.221   | Baseline+CPE(TD)+Drop        0.139 / 0.155   0.175 / 0.229
Baseline+CPE(T)          0.142 / 0.158   0.140 / 0.183   | Baseline+CPE(T)+Drop         0.140 / 0.156   0.145 / 0.190

(a) The results of the model without softmax as the baseline. Att applies key and value embeddings with the scaling factor based on the number of frames, without softmax (Table 8a).
Setting                  SumMe τ / ρ     TVSum τ / ρ   | Setting                      SumMe τ / ρ     TVSum τ / ρ
Baseline (Att_T&D)       0.207 / 0.231   0.193 / 0.253 | Baseline+Drop                0.134 / 0.149   0.192 / 0.252
Baseline+FPE(TD)         0.144 / 0.160   0.152 / 0.199 | Baseline+FPE(TD)+Drop        0.148 / 0.165   0.134 / 0.176
Baseline+FPE(T)          0.138 / 0.154   0.191 / 0.250 | Baseline+FPE(T)+Drop         0.152 / 0.170   0.193 / 0.253
Baseline+RPE(TD)         0.149 / 0.166   0.193 / 0.253 | Baseline+RPE(TD)+Drop        0.151 / 0.168   0.193 / 0.253
Baseline+RPE(T)          0.142 / 0.159   0.192 / 0.252 | Baseline+RPE(T)+Drop         0.158 / 0.176   0.192 / 0.252
Baseline+LPE(TD)         0.152 / 0.169   0.078 / 0.102 | Baseline+LPE(TD)+Drop        0.148 / 0.164   0.052 / 0.068
Baseline+LPE(T)          0.134 / 0.149   0.079 / 0.103 | Baseline+LPE(T)+Drop         0.145 / 0.162   0.089 / 0.117
Baseline+CPE(TD)         0.135 / 0.150   0.119 / 0.156 | Baseline+CPE(TD)+Drop        0.146 / 0.162   0.114 / 0.149
Baseline+CPE(T)          0.143 / 0.159   0.180 / 0.236 | Baseline+CPE(T)+Drop         0.149 / 0.166   0.184 / 0.241

(b) The results of the model with softmax as the baseline. Att_T&D applies key and value embeddings with softmax (Table 8b).

Table 9. The ablation study for methods in transformers. FPE is fixed positional encoding, RPE is relative positional encoding, LPE is learnable positional encoding, and CPE is conditional positional encoding. T represents the frame axis, and TD represents the frame and dimension axes for positional encoding. Drop applies dropout to the input frame features.

(Baseline+FPE(TD)+Drop) is better than that without softmax (Baseline+Drop) on all datasets. Although their performance is similar on TVSum, the baseline model without softmax shows 0.219 and 0.244 for Kendall's and Spearman's coefficients, respectively, on SumMe, whereas the baseline model with softmax shows 0.225 and 0.251. Based on this performance, we chose the model with softmax using FPE(TD) and Drop as the final model.

D.6. CLS token

We further test the CLS token [6] at different combining places.
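The fusion point that turns out best in D.6 — combining the CLS token with the final features just before the classifier — can be sketched as follows; the concatenation axis and shapes are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 100, 1024
final_feats = rng.standard_normal((T, D))   # attention-weighted frame features
cls_token = rng.standard_normal((3, D))     # learned CLS token (3 x 1,024, per A.2)

# One plausible fusion: prepend the CLS token along the frame axis right
# before the classifier, so its dataset-level cues reach the FC layers.
fused = np.concatenate([cls_token, final_feats], axis=0)
print(fused.shape)  # (103, 1024)
```

In training, the token's entries would be learnable parameters updated jointly with the model rather than fixed random values.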
CSTA fuses the CLS token with input frame features right after employing the CNN, after the softmax, or after creating the final features. The final features are created by applying attention maps to input features. Combining the CLS token after creating the final features yields the best performance, as shown in Table 11. We hypothesize that the reason is that the classifier is generalized: the CLS token is trained jointly with the model to reflect the overall information of the dataset, and these global cues generalize the classifier because the classifier contains fully connected layers. Thus, all results using the CLS token improve on the baseline. Moreover, adding the CLS token after creating the final features means incorporating it just before the classifier. For this reason, the best performance is achieved by delivering the CLS token unchanged and generalizing the classifier the most.

D.7. Skip connection

We finally verify the skip connection [14] for stable optimization of CSTA, as shown in Table 12. Without layer normalization, adopting the skip connection that adds the outputs of the key embedding and the CNN (SCKC) yields the best performance among all settings; however, it performs slightly worse than the baseline model on all datasets. Using layer normalization with SCKC (SCKC+LN) performs slightly worse than the baseline model on TVSum, whereas it performs much better than the baseline on SumMe.
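The SCKC+LN combination discussed above can be sketched as a tensor-free toy version (the real model applies this to frame-feature tensors, and its LayerNorm has learned affine parameters; names here are illustrative):

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a feature vector to zero mean / unit variance (no learned affine)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def skip_kc(key_embedding_out, cnn_out, use_ln=True):
    """SC_KC: add key-embedding and CNN outputs, optionally followed by LayerNorm."""
    summed = [k + c for k, c in zip(key_embedding_out, cnn_out)]
    return layer_norm(summed) if use_ln else summed
```

The skip path keeps gradients flowing around the CNN, while the normalization after the sum stabilizes the combined feature scale.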
For better overall performance, we selected the combination of skip connections, which is the sum of both key embedding and CNN outputs, and layer normalization as the final model.

Table 10. The ablation study for methods in PGL-SUM. FPE is fixed positional encoding, RPE is relative positional encoding, LPE is learnable positional encoding, and CPE is conditional positional encoding. T is the frame axis, and TD is the frame and dimension axes for positional encoding. Drop applies dropout to features after positional encoding.

(a) The results of the model without softmax as the baseline. Att applies key and value embedding with a scaling factor for the number of frames, without softmax (Table 8a).

| Setting | SumMe τ | SumMe ρ | TVSum τ | TVSum ρ |
|---|---|---|---|---|
| Baseline (Att) | 0.214 | 0.239 | 0.187 | 0.245 |
| Baseline+Drop | 0.219 | 0.244 | 0.191 | 0.250 |
| Baseline+FPE(TD) | 0.118 | 0.131 | 0.064 | 0.084 |
| Baseline+FPE(TD)+Drop | 0.164 | 0.183 | 0.149 | 0.197 |
| Baseline+FPE(T1) | 0.184 | 0.205 | 0.169 | 0.222 |
| Baseline+FPE(T1)+Drop | 0.183 | 0.204 | 0.171 | 0.225 |
| Baseline+RPE(TD) | 0.207 | 0.231 | 0.143 | 0.188 |
| Baseline+RPE(TD)+Drop | 0.203 | 0.227 | 0.146 | 0.192 |
| Baseline+RPE(T1) | 0.187 | 0.209 | 0.164 | 0.216 |
| Baseline+RPE(T1)+Drop | 0.176 | 0.196 | 0.166 | 0.218 |
| Baseline+LPE(TD) | 0.190 | 0.212 | 0.136 | 0.179 |
| Baseline+LPE(TD)+Drop | 0.202 | 0.226 | 0.151 | 0.199 |
| Baseline+LPE(T1) | 0.184 | 0.205 | 0.173 | 0.227 |
| Baseline+LPE(T1)+Drop | 0.178 | 0.199 | 0.177 | 0.233 |
| Baseline+CPE(TD) | 0.141 | 0.157 | 0.157 | 0.206 |
| Baseline+CPE(TD)+Drop | 0.123 | 0.136 | 0.145 | 0.191 |
| Baseline+CPE(T1) | 0.111 | 0.123 | 0.069 | 0.090 |
| Baseline+CPE(T1)+Drop | 0.129 | 0.144 | 0.090 | 0.118 |

(b) The results of the model with softmax as the baseline. Att T&D applies key and value embedding with softmax (Table 8b).

| Setting | SumMe τ | SumMe ρ | TVSum τ | TVSum ρ |
|---|---|---|---|---|
| Baseline (Att T&D) | 0.207 | 0.231 | 0.193 | 0.253 |
| Baseline+Drop | 0.204 | 0.228 | 0.193 | 0.253 |
| Baseline+FPE(TD) | 0.222 | 0.248 | 0.188 | 0.247 |
| Baseline+FPE(TD)+Drop | 0.225 | 0.251 | 0.191 | 0.251 |
| Baseline+FPE(T1) | 0.207 | 0.231 | 0.183 | 0.241 |
| Baseline+FPE(T1)+Drop | 0.199 | 0.222 | 0.184 | 0.243 |
| Baseline+RPE(TD) | 0.200 | 0.223 | 0.189 | 0.248 |
| Baseline+RPE(TD)+Drop | 0.205 | 0.229 | 0.189 | 0.248 |
| Baseline+RPE(T1) | 0.214 | 0.239 | 0.184 | 0.242 |
| Baseline+RPE(T1)+Drop | 0.209 | 0.233 | 0.184 | 0.243 |
| Baseline+LPE(TD) | 0.216 | 0.241 | 0.185 | 0.243 |
| Baseline+LPE(TD)+Drop | 0.217 | 0.242 | 0.186 | 0.245 |
| Baseline+LPE(T1) | 0.197 | 0.220 | 0.181 | 0.239 |
| Baseline+LPE(T1)+Drop | 0.207 | 0.231 | 0.181 | 0.238 |
| Baseline+CPE(TD) | 0.204 | 0.228 | 0.187 | 0.246 |
| Baseline+CPE(TD)+Drop | 0.202 | 0.226 | 0.190 | 0.249 |
| Baseline+CPE(T1) | 0.209 | 0.233 | 0.192 | 0.253 |
| Baseline+CPE(T1)+Drop | 0.205 | 0.228 | 0.190 | 0.250 |

Table 11. The ablation study for the CLS token. Att T&D is the baseline model applying FPE and dropout (Table 10). CLS is the model that uses the CLS token and combines it with frame features after the CNN (CNN), after the softmax (SM), or after creating the final features (Final).

| Setting | SumMe τ | SumMe ρ | TVSum τ | TVSum ρ |
|---|---|---|---|---|
| Baseline (Att T&D) | 0.225 | 0.251 | 0.191 | 0.251 |
| Baseline+CLS CNN | 0.232 | 0.259 | 0.194 | 0.254 |
| Baseline+CLS SM | 0.233 | 0.260 | 0.193 | 0.254 |
| Baseline+CLS Final | 0.236 | 0.263 | 0.194 | 0.254 |

Table 12. The ablation study for the skip connection. Att T&D is the baseline model incorporating the CLS token after creating the final features (Table 11). SC means skip connection. KC is the key embedding output fused with the CNN output. CF is the CNN output combined with the final features. IF is the combined input and final features. LN is layer normalization applied immediately after the skip connection.

| Setting | SumMe τ | SumMe ρ | TVSum τ | TVSum ρ |
|---|---|---|---|---|
| Baseline (Att T&D) | 0.236 | 0.263 | 0.194 | 0.254 |
| Baseline+SCKC | 0.233 | 0.261 | 0.193 | 0.253 |
| Baseline+SCKC+LN | 0.243 | 0.271 | 0.192 | 0.252 |
| Baseline+SCCF | 0.115 | 0.128 | 0.043 | 0.056 |
| Baseline+SCCF+LN | 0.162 | 0.181 | 0.052 | 0.068 |
| Baseline+SCIF | 0.126 | 0.141 | -0.018 | -0.024 |
| Baseline+SCIF+LN | 0.163 | 0.181 | 0.186 | 0.244 |

E. Hyperparameter setting

Table 13. The ablation study for different hyperparameter settings.

(a) The ablation study for different batch sizes. Each percentage is the batch size ratio of the entire training dataset.

| Dataset | Correlation | Batch size=1 | 25% | 50% | 75% | 100% |
|---|---|---|---|---|---|---|
| SumMe | τ | 0.243 | 0.214 | 0.210 | 0.211 | 0.202 |
| SumMe | ρ | 0.271 | 0.239 | 0.235 | 0.235 | 0.225 |
| TVSum | τ | 0.192 | 0.173 | 0.161 | 0.163 | 0.152 |
| TVSum | ρ | 0.252 | 0.228 | 0.212 | 0.214 | 0.200 |

(b) The ablation study for different dropout ratios used for CNN outputs with a batch size of 1 (Table 13a).

| Dataset | Correlation | 0.9 | 0.8 | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 |
|---|---|---|---|---|---|---|---|---|---|---|
| SumMe | τ | 0.227 | 0.237 | 0.240 | 0.246 | 0.243 | 0.243 | 0.239 | 0.241 | 0.237 |
| SumMe | ρ | 0.253 | 0.264 | 0.268 | 0.275 | 0.271 | 0.271 | 0.267 | 0.269 | 0.264 |
| TVSum | τ | 0.198 | 0.196 | 0.194 | 0.192 | 0.192 | 0.192 | 0.192 | 0.190 | 0.192 |
| TVSum | ρ | 0.260 | 0.258 | 0.255 | 0.252 | 0.252 | 0.252 | 0.253 | 0.250 | 0.252 |

(c) The ablation study for different weight decays with a batch size of 1 and a dropout ratio of 0.6 (Table 13b). WD means weight decay.

| Dataset | Correlation | 1e-1 | 1e-2 | 1e-3 | 1e-4 | 1e-5 | 1e-6 | 1e-7 | 1e-8 | 0 |
|---|---|---|---|---|---|---|---|---|---|---|
| SumMe | τ | 0.174 | 0.178 | 0.176 | 0.203 | 0.227 | 0.237 | 0.246 | 0.242 | 0.246 |
| SumMe | ρ | 0.194 | 0.199 | 0.197 | 0.226 | 0.253 | 0.264 | 0.274 | 0.270 | 0.275 |
| TVSum | τ | 0.055 | 0.082 | 0.195 | 0.195 | 0.199 | 0.194 | 0.194 | 0.193 | 0.192 |
| TVSum | ρ | 0.070 | 0.107 | 0.256 | 0.255 | 0.261 | 0.255 | 0.255 | 0.253 | 0.252 |

(d) The ablation study for different learning rates with a batch size of 1, a dropout ratio of 0.6, and a weight decay of 1e-7 (Table 13c). LR means learning rate.

| Dataset | Correlation | 1e-1 | 1e-2 | 1e-3 | 1e-4 | 1e-5 | 1e-6 | 1e-7 | 1e-8 |
|---|---|---|---|---|---|---|---|---|---|
| SumMe | τ | -0.292 | 0.164 | 0.246 | 0.211 | 0.162 | 0.163 | 0.168 | 0.153 |
| SumMe | ρ | -0.284 | 0.182 | 0.274 | 0.236 | 0.181 | 0.182 | 0.187 | 0.171 |
| TVSum | τ | -0.089 | 0.060 | 0.194 | 0.162 | 0.101 | 0.030 | 0.035 | 0.037 |
| TVSum | ρ | -0.080 | 0.080 | 0.255 | 0.213 | 0.133 | 0.039 | 0.046 | 0.049 |
Here, we test the model with different hyperparameter values. The best performance is achieved with a batch size of 1, and performance keeps decreasing with larger batch sizes, as shown in Table 13a; thus, we use a batch size of 1. The larger the dropout ratio, the better the model performs on TVSum, as shown in Table 13b. However, performance degrades if the dropout ratio is too large, so we chose 0.6, considering performance on both SumMe and TVSum. We fixed the dropout ratio at 0.6 and tested various weight-decay values, as shown in Table 13c. Performance increased as the weight decay decreased. With a weight decay of 1e-7, performance on SumMe is similar to that without weight decay, but slightly better on TVSum than with zero weight decay. Thus, considering overall performance on SumMe and TVSum, we selected 1e-7 as the final weight decay. In Table 13d, we finally test different learning rates with a weight decay of 1e-7. When the learning rate is too large, performance is terrible on all datasets; when it is too small, performance is also poor. A learning rate of 1e-3 gives the best performance, so we chose it as the final learning rate. | 6 | 1 | The model, CSTA, is based on a CNN architecture (GoogLeNet) and integrates attention mechanisms. Given that model architectures like this usually have around 5-10 million parameters, I estimate approximately 6 hours of training time, assuming a dataset with 50 videos (TVSum) and around 25 videos (SumMe), which is standard for video summarization tasks. The proposed model's efficient use of CNNs should allow for relatively fast training, especially since it does not require extensive additional modules.
Based on typical training runs of around 50-100 epochs for this type of application, with reasonable batch sizes (e.g., 32-64), the model can reasonably fit within the memory limits of a single GPU. Thus, it could be trained in under 8 hours on a single GPU, specifically an NVIDIA Tesla V100 or similar, as frequently used for training in comparable video processing tasks. | yes | Yes | CV | CSTA: CNN-based Spatiotemporal Attention for Video Summarization | 2024-05-20 0:00:00 | https://github.com/thswodnjs3/CSTA | 1 | https://github.com/e-apostolidis/PGL-SUM/tree/master/data | 5 min for SumMe dataset for 50 epochs | https://colab.research.google.com/drive/1zMK8TRHtdhQB7dkwkxA3ImblIstiU9ob?usp=sharing | Yes | -- Runs perfectly on SumMe, but crashes on the TVSum dataset with an out-of-GPU-memory error. |
HME100K | ICAL | [] | ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition | 2024-05-15T00:00:00 | https://arxiv.org/abs/2405.09032v4 | [
"https://github.com/qingzhenduyu/ical"
] | {'ExpRate': '69.06'} | [
"ExpRate"
] | Given the following paper and codebase:
Paper: ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition
Codebase: https://github.com/qingzhenduyu/ical
Improve the ICAL model on the HME100K dataset. The result
should improve on the following metrics: {'ExpRate': '69.06'}. You must use only the codebase provided.
| ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition

Jianhua Zhu [0009-0000-3982-2739], Liangcai Gao, and Wenqi Zhao
Wangxuan Institute of Computer Technology, Peking University, Beijing, China
zhujianhuapku@pku.edu.cn, gaoliangcai@pku.edu.cn, wenqizhao@stu.pku.edu.cn

Abstract. Significant progress has been made in the field of handwritten mathematical expression recognition, yet existing encoder-decoder methods usually have difficulty modeling the global information in LaTeX. Therefore, this paper introduces a novel approach, Implicit Character-Aided Learning (ICAL), to mine global expression information and enhance handwritten mathematical expression recognition. Specifically, we propose the Implicit Character Construction Module (ICCM) to predict implicit character sequences and use a Fusion Module to merge the outputs of the ICCM and the decoder, thereby producing corrected predictions. By modeling and utilizing implicit character information, ICAL achieves a more accurate and context-aware interpretation of handwritten mathematical expressions. Experimental results demonstrate that ICAL notably surpasses state-of-the-art (SOTA) models, improving the expression recognition rate (ExpRate) by 2.25%/1.81%/1.39% on the CROHME 2014/2016/2019 datasets respectively, and achieves a remarkable 69.06% on the challenging HME100K test set. We make our code available on GitHub.1

Keywords: handwritten mathematical expression recognition · transformer · implicit character-aided learning · encoder-decoder model

1 Introduction

The Handwritten Mathematical Expression Recognition (HMER) task involves taking an image of a handwritten mathematical expression as input and having the model predict the corresponding LaTeX.
The HMER task has a wide range of applications, such as intelligent grading of mathematical assignments, building online grading systems, and improving the efficiency of online education. Therefore, how to improve the recognition accuracy of handwritten mathematical expressions has become a hot topic in previous works.

1 https://github.com/qingzhenduyu/ICAL (arXiv:2405.09032v4 [cs.CV] 7 Nov 2024)

Fig. 1. (a) Illustration of past methods, which use a DenseNet encoder and an RNN/Transformer decoder. (b) Our ICAL method, aided by implicit character learning. The characters highlighted in red signify inaccuracies in the prediction, whereas the blue highlights denote implicit characters.

Due to the diversity of handwriting and the two-dimensional structure of mathematical expressions, HMER is highly challenging. Compared to the recognition of handwritten text in natural language, the HMER task requires not only the prediction of explicit characters (i.e., characters that are directly represented when written by hand) but also implicit characters, such as "^", "_", "{", and "}", which are necessary to achieve a complete description of a two-dimensional mathematical expression. Past methods [30, 35, 36] based on encoder-decoder models often extract image features through the encoder, while the decoder aligns visual and textual features and predicts the LaTeX. However, they often lack modeling of the global information of the expression, which in turn fails to correct prediction errors made by the decoder, as shown in Figure 1(a). In this paper, we utilize the task of predicting implicit characters (i.e., "^", "_", "{", and "}") to assist the decoder in modeling the global information of LaTeX, which can further correct the output of the decoder and improve recognition performance.
To this end, we propose an Implicit Character Construction Module, capable of modeling the sequence of implicit characters from the output of the Transformer Decoder [22]. This global information is then passed to a subsequent Fusion Module, which integrates it with the output of the Transformer decoder to achieve a more accurate prediction of the LaTeX sequence. The main contributions of our work are summarized as follows:

• We introduce the Implicit Character Construction Module (ICCM) to model implicit character information, which can effectively utilize the global information in LaTeX.
• We propose the Fusion Module to aggregate the output of the ICCM, thereby correcting the predictions of the Transformer Decoder.
• Experimental results indicate that the ICAL method surpasses previous state-of-the-art methods, achieving expression recognition rates (ExpRate) of 60.63%, 58.79%, and 60.51% on the CROHME 2014 [14]/2016 [15]/2019 [13] test sets, respectively, and an ExpRate of 69.06% on the HME100K test set [28].

2 Related Work

2.1 Traditional Methods

In traditional handwritten mathematical expression recognition, the process mainly involves symbol recognition and structural analysis. Symbol recognition requires segmenting and identifying individual symbols within expressions, utilizing techniques like the pixel-based segmentation proposed by Okamoto et al. [21], further refined by Ha et al. [8] into recursive cropping methods. These approaches often depend on predefined thresholds. For symbol classification, methods like Hidden Markov Models (HMM) [1,11,24], Elastic Matching [3,23], and Support Vector Machines (SVM) [10] have been used; HMMs allow for joint optimization without explicit symbol segmentation, though at a high computational cost.
Structural analysis employs strategies like the two-dimensional Stochastic Context-Free Grammar (SCFG) [6] and algorithms such as Cocke-Younger-Kasami (CKY) [16] for parsing, albeit slowly due to the complexity of two-dimensional grammar. Faster parsing has been achieved with left-to-right recursive descent and tree-transformation methods [29], the latter describing the arrangement and grouping of symbols into a structured tree for parsing. Some approaches bypass two-dimensional grammar altogether, using Definite Clause Grammar (DCG) [4] and Formula Description Grammar [18] for one-dimensional parsing, highlighting the challenges in designing comprehensive grammars for the diverse structures of mathematical expressions.

2.2 Deep Learning Methods

In recent years, encoder-decoder-based deep learning models have become the mainstream framework in the field of HMER. Depending on the architecture of the decoder, past deep learning approaches can be categorized into methods based on Recurrent Neural Networks (RNNs) [2,7,12,19,25–28,30–33] and those based on the Transformer model [34–36]. Furthermore, based on the decoding strategy, these methods can be divided into those based on sequence decoding [2,7,12,19,26,27,30,31,33–36] and those based on tree-structured decoding [25,28,32].

RNN-based methods In 2017, Zhang et al. proposed an end-to-end deep learning model, WAP [33], to address the problem of HMER. The encoder part of the model is a fully convolutional neural network similar to VGGNet [17]. The decoder part uses a GRU [5] model to generate the predicted LaTeX sequence from the extracted visual features. WAP not only avoids issues caused by inaccurate symbol segmentation but also eliminates the need for manually predefined LaTeX syntax, thereby becoming a benchmark model for subsequent deep learning methods. Following WAP, Zhang et al. further proposed DenseWAP [30], which replaces the VGGNet with DenseNet [9].
Subsequent work has commonly adopted DenseNet as the backbone network for the encoder. The CAN model [12] introduces a Multi-Scale Counting Module, utilizing a symbol counting task as an auxiliary task to be jointly optimized with the expression recognition task.

Transformer-based methods To alleviate the issue of unbalanced output and fully utilize bidirectional language information, BTTR [36] adopts a bidirectional training strategy on top of a Transformer-based decoder. Following this, CoMER [35] incorporates coverage information [20,33] into the Transformer decoder, introducing an Attention Refinement Module. This module utilizes the attention weights from the multi-head attention mechanism within the Transformer decoder to compute the coverage vector, all while maintaining the characteristic of parallel decoding. Based on CoMER, the GCN [34] incorporates extra symbol categorization information, utilizing a General Category Recognition task as a supplementary task for joint optimization with the HMER task, achieving notable performance. However, the category recognition task introduced by GCN requires manual construction of symbol categories and is limited to specific datasets.

Tree-based decoding methods LaTeX, as a markup language, can be easily parsed into a tree-like expression due to the influence of delimiters such as brackets. Therefore, by leveraging the inherent two-dimensional structure of mathematical expressions, models can provide a degree of interpretability for the prediction process. Zhang et al. proposed DenseWAP-TD [32], which replaces the GRU decoder that directly regresses the LaTeX sequence with a decoder based on a two-dimensional tree structure. The TDv2 model [25] uses different transformation methods for the same LaTeX string during training, weakening the context dependency and endowing the decoder with stronger generalization capabilities.
The SAN model [28] converts the LaTeX sequence into a parsing tree and designs a series of syntactic rules to transform the problem of predicting LaTeX sequences into a tree traversal process. Additionally, SAN introduces a new Syntax-Aware Attention Module to better utilize the syntactic information in LaTeX.

3 Methodology

In this section, we elaborate in detail on the structure of the proposed ICAL model, as shown in Figure 2. In Section 3.1, we briefly introduce the DenseNet [9] used in the encoder part. In Section 3.2, we introduce the Transformer decoder that adopts the coverage attention mechanism [22, 35], which can alleviate the lack-of-coverage problem. In Sections 3.3 and 3.4, we introduce the Implicit Character Construction Module (ICCM) proposed in this paper and the Fusion Module that integrates implicit character information. Finally, in Section 3.5, we discuss how the implicit character loss and fusion loss are introduced while employing a bidirectional training strategy [36].

Fig. 2. The architecture of the ICAL model (left) and Coverage Attention (right). To simplify the illustration, we have condensed the depiction of bidirectional training in the figure.

3.1 Visual Encoder

Similar to most previous work [2,7,12,19,25–28,30–32,34–36], we continue to use DenseNet [9] as the visual encoder to extract features from the input images. DenseNet consists of multiple dense blocks and transition layers. Within each dense block, the input to each layer is the concatenation of the outputs of all preceding layers, enhancing the flow of information between layers. The transition layers reduce the dimensions of the feature maps using 1×1 convolutional kernels, decreasing the number of parameters and controlling the complexity of the model. For an input grayscale image of size $1 \times H_0 \times W_0$, the output visual feature is $V_{feature} \in \mathbb{R}^{D \times H \times W}$; the ratios of $H$ to $H_0$ and $W$ to $W_0$ are both 1/16. In this work, a 1×1 convolutional layer is used to adjust the number of channels from $D$ to $d_{model}$, aligning it with the dimension size of the Transformer decoder.

3.2 Transformer Decoder with ARM

In the decoder part, we employ a Transformer Decoder with an Attention Refinement Module (ARM) [22, 35], which mainly consists of three modules: a Self-Attention Module, a Coverage Attention Module, and a Feed-Forward Network.

Self-Attention Module The self-attention module is a core component of the standard Transformer decoder, utilizing a masked multi-head attention mechanism. This module processes a set of queries ($Q$), keys ($K$), and values ($V$) to produce an output that leverages both dot-product attention and a multi-head strategy for processing information. For each head within the multi-head attention framework, linear transformations are applied to $Q$, $K$, and $V$ using transformation matrices $W_i^q$, $W_i^k$, and $W_i^v$, respectively, where $i$ denotes the index of the head. The mechanism first calculates a scaled dot-product attention score $E_i$ for the $i$-th head by multiplying the transformed queries and keys and scaling the result by the square root of the key dimension, to avoid large values that could hinder the softmax computation:

$$E_i = \frac{(QW_i^q)(KW_i^k)^T}{\sqrt{d_k}}, \quad (1)$$

A softmax function is then applied to these scores to obtain the attention weights $A_i$:

$$A_i = \mathrm{softmax}(E_i), \quad (2)$$

These weights are used to compute a weighted sum of the values, producing the output $H_i$ for each head:

$$H_i = A_i(VW_i^v), \quad (3)$$

Finally, the outputs of all heads are concatenated and linearly transformed to produce the final output of the multi-head attention module:

$$\mathrm{MultiHeadAttention}(Q, K, V) = \mathrm{Concat}(H_0, H_1, \ldots, H_{head-1})W^o. \quad (4)$$

Since the decoder performs decoding in an autoregressive manner, the prediction of the current symbol depends on past predicted symbols and input visual information.
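Eqs. (1)-(3) for a single head can be sketched in plain Python (the head projections $W_i^q$, $W_i^k$, $W_i^v$ are assumed to have been applied already, and the causal mask and multi-head concat of Eq. (4) are omitted; the tiny matrices are purely illustrative):

```python
import math

def matmul(a, b):
    """Multiply two matrices represented as nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax(row):
    """Numerically stable softmax over one row of scores (Eq. 2)."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Eqs. (1)-(3) for one head: scores, softmax weights, weighted sum of values."""
    d_k = len(K[0])
    K_T = [list(c) for c in zip(*K)]                                   # transpose K
    scores = [[v / math.sqrt(d_k) for v in row] for row in matmul(Q, K_T)]  # Eq. (1)
    weights = [softmax(row) for row in scores]                         # Eq. (2)
    return matmul(weights, V), weights                                 # Eq. (3)
```

Each row of `weights` sums to 1, so the output is a convex combination of the value vectors.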
To avoid the decoder accessing information about future symbols when predicting the current symbol, and to ensure parallelism, the Self-Attention Module uses a masked lower-triangular matrix to constrain the information accessible at the current step.

Coverage Attention Module The CoMER model, without affecting the parallelism of the Transformer, introduces the coverage attention commonly used in RNNs [33] into the Transformer decoder by improving the Cross Attention module. Within the Cross Attention module of the CoMER model, an Attention Refinement Module (ARM) is incorporated. By utilizing alignment information from the previous layer and the current layer, it refines the current attention weights $A_i$, enabling the decoder to faithfully convert the text structure from visual features to the corresponding LaTeX text. The update formulas for the attention weights in the $j$-th layer of the decoder, denoted $\hat{A}_j$, are given in Equations 5 and 6:

$$\hat{E}_j = \mathrm{ARM}(E_j, \hat{A}_{j-1}), \quad (5)$$
$$\hat{A}_j = \mathrm{softmax}(\hat{E}_j). \quad (6)$$

Feed-Forward Network The feed-forward network consists of two linear layers and the ReLU non-linear activation function. For input $X$ from the Coverage Attention Module:

$$E_{feature} = \mathrm{FFN}(X) = \mathrm{ReLU}(XW_0 + b_0)W_1 + b_1. \quad (7)$$

3.3 Implicit Character Construction Module

For the target LaTeX sequence, we retain only its implicit characters, namely "^", "_", "{", and "}", replacing all other characters with a newly constructed token to form the corresponding implicit character sequence. For example, for the target LaTeX sequence B _ { m + 1 }, the corresponding implicit character sequence is <space> _ { <space> <space> <space> }. The output of the Transformer Decoder with ARM serves as the input to our Implicit Character Construction Module (ICCM), which consists of a layer of masked self-attention and a Feed-Forward Network (FFN) layer.
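The construction rule above can be sketched directly; the example tokens come from the paper, while whitespace tokenization and the function name are assumptions for illustration:

```python
# The four implicit characters retained by the ICCM target construction.
IMPLICIT_CHARS = {"^", "_", "{", "}"}

def to_implicit_sequence(latex_tokens):
    """Keep implicit characters; map every other token to '<space>'.

    The output length equals the input length, so the implicit sequence
    stays position-aligned with the original LaTeX token sequence.
    """
    return [t if t in IMPLICIT_CHARS else "<space>" for t in latex_tokens]
```

For the paper's example, `to_implicit_sequence("B _ { m + 1 }".split())` reproduces `<space> _ { <space> <space> <space> }`.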
Consequently, the output of the ICCM, $I_{feature}$, is calculated as follows:

$$I_{feature} = \mathrm{FFN}(\mathrm{SelfAttention}(E_{feature})). \quad (8)$$

3.4 Fusion Module

To integrate the information learned by the ICCM, we introduce a weighted adjustment strategy based on the attention mechanism, capable of aggregating the output of the ICCM and thereby correcting the prediction results of the Transformer Decoder. The Fusion Module takes inputs from both the ICCM's output, denoted $I_{feature} \in \mathbb{R}^{B \times T \times d_{model}}$, and the Transformer Decoder's output, denoted $E_{feature} \in \mathbb{R}^{B \times T \times d_{model}}$. The attention weights $f_{att}$ are calculated as follows:

$$f_{att} = \sigma(w_{att}(\mathrm{Concat}(E_{feature}, I_{feature}))), \quad (9)$$

where a linear layer $w_{att}$ maps the concatenated feature matrix back to the original dimension $d_{model}$, and the attention weights $f_{att}$ are computed through a sigmoid activation function $\sigma$. Finally, $f_{att}$ is used to perform a weighted fusion of the two features, resulting in the final output feature for prediction:

$$F = f_{att} \odot E_{feature} + (1 - f_{att}) \odot I_{feature}, \quad (10)$$

where $\odot$ indicates element-wise multiplication.

3.5 Loss Function

To alleviate the issue of unbalanced output, we adhere to the bidirectional training strategy used in BTTR and CoMER, necessitating the computation of losses in both directions within the same batch. The total loss function is divided into three parts:

$$L = L_{Initial} + L_{Implicit} + L_{Fusion}, \quad (11)$$

where both $L_{Initial}$ and $L_{Fusion}$ are standard cross-entropy loss functions. $L_{Initial}$ calculates the loss using the Transformer decoder's predicted probabilities and the ground truth, consistent with the approach used in CoMER. $L_{Fusion}$ uses the predicted probabilities output by the Fusion Module and the ground truth. Since we construct the implicit character sequence directly from the LaTeX sequence, keeping its length consistent with the original LaTeX sequence, the constructed target implicit character sequence exhibits an imbalance in the occurrence frequency between <space> and the implicit characters.
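The gated fusion of Eqs. (9)-(10) can be illustrated for a single scalar position (a toy reduction assuming $d_{model} = 1$, with the linear layer collapsed to a 2-element weight vector; real models operate on full feature tensors):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(e_feat, i_feat, w_att, b_att=0.0):
    """Gated fusion per Eqs. (9)-(10) for one scalar feature.

    w_att acts on the concatenated pair [e_feat, i_feat]; the gate decides
    how much of the decoder output vs. the ICCM output reaches the classifier.
    """
    gate = sigmoid(w_att[0] * e_feat + w_att[1] * i_feat + b_att)  # Eq. (9)
    return gate * e_feat + (1.0 - gate) * i_feat                   # Eq. (10)
```

A gate near 1 passes the decoder feature through; a gate near 0 lets the ICCM feature override it, which is how the module "corrects" decoder predictions.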
To address this, we use a weighted cross-entropy loss function to calculate the loss for implicit characters:

$$L_{Implicit} = -\sum_{i=1}^{N}\sum_{t=1}^{T_i} w_{y_{i,t}} \cdot \log(\hat{y}_{i,t}), \quad (12)$$

where $N$ represents the batch size, $T_i$ is the length of the $i$-th sequence, $y_{i,t}$ is the ground-truth token at position $t$ in the $i$-th sequence, $\hat{y}_{i,t}$ is the predicted probability of the correct token at position $t$ in the $i$-th sequence, and $w_{y_{i,t}}$ is the weight associated with the ground-truth token $y_{i,t}$. The weight $w_{y_{i,t}}$ for each token is dynamically adjusted based on the occurrence frequency of the token within the entire batch of sequences. The adjustment uses a logarithmic function to ensure a smooth transition of weights across different frequencies:

$$w_{y_{i,t}} = 1.0 + \log\left(1 + \frac{1}{f_{y_{i,t}} + \epsilon}\right), \quad (13)$$

where $f_{y_{i,t}}$ is the frequency of the token in the target sequences and $\epsilon$ is set to 1e-6.

4 Experiments

4.1 Dataset

The CROHME dataset includes data from the Competitions on Recognition of Online Handwritten Mathematical Expressions (CROHME) [13–15] held over several years and is currently the most widely used dataset for handwritten mathematical expression recognition. The training set of the CROHME dataset consists of 8,836 samples, while the CROHME 2014 [14]/2016 [15]/2019 [13] test sets contain 986, 1147, and 1199 samples, respectively. In the CROHME dataset, each handwritten mathematical expression is stored in InkML format, recording the trajectory coordinates of the handwritten strokes. Before training, we convert the handwritten stroke trajectory information in the InkML files into grayscale images, and then carry out model training and testing. The HME100K dataset [28] is a large-scale collection of real-scene handwritten mathematical expressions. It contains 74,502 training images and 24,607 testing images, making it significantly larger than similar datasets like CROHME.
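The token-weight rule of Eq. (13) is easy to sketch; interpreting $f$ as the token's relative frequency in the batch is an assumption, and the helper names are illustrative:

```python
import math

def token_weight(freq, eps=1e-6):
    """Eq. (13): w = 1 + log(1 + 1/(f + eps)); rarer tokens get larger weights."""
    return 1.0 + math.log(1.0 + 1.0 / (freq + eps))

def batch_weights(tokens):
    """Per-token weights from relative frequencies over one batch of targets."""
    total = len(tokens)
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    return {t: token_weight(c / total) for t, c in counts.items()}
```

Because the dominant `<space>` token has a high frequency, its weight stays close to the 1.0 floor, while the rare implicit characters receive a boosted weight, counteracting the class imbalance in Eq. (12).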
Notably, it features a wide range of real-world challenges such as variations in color, blur, and complex backgrounds, contributed by tens of thousands of writers. With 249 symbol classes, HME100K offers a diverse and realistic dataset for developing advanced handwritten mathematical expression recognition systems.

4.2 Evaluation Metrics

The Expression Recognition Rate (ExpRate) is the most commonly used evaluation metric for handwritten mathematical expression recognition. It is defined as the percentage of expressions that are correctly recognized out of the total number of expressions. Additionally, we use the metrics "≤1 error" and "≤2 error" to describe the performance of the model when we tolerate up to 1 or 2 token prediction errors, respectively, in the LaTeX sequence.

4.3 Implementation Details

We employ DenseNet [9] as the visual encoder to extract visual features from expression images. The visual encoder utilizes 3 DenseNet blocks, with each block containing 16 bottleneck layers. Between every two DenseNet blocks, a transition layer is used to reduce the spatial dimensions and the channel count of the visual features to half of their original sizes. The dropout rate is set at 0.2, and the growth rate of the model is set to 24. We employ a 3-layer Transformer Decoder with ARM [22,35] as the backbone of the decoder, with a model dimension ($d_{model}$) of 256 and the number of heads set to 8. The dimension of the feed-forward layer is set to 1024, and the dropout rate is set to 0.3. The parameter settings for the Attention Refinement Module (ARM) are consistent with those of CoMER, with the convolutional kernel size set to 5. Additionally, the parameters of the masked self-attention and FFN in the Implicit Character Construction Module (ICCM) are consistent with the aforementioned Transformer Decoder.

During the training phase, we utilize mini-batch Stochastic Gradient Descent (SGD) to learn the model parameters, with weight decay set to 1e-4 and momentum set to 0.9. The initial learning rate is set to 0.08. We also adopt ReduceLROnPlateau as the learning rate scheduler, whereby the learning rate is reduced to 25% of its current value when the ExpRate metric ceases to improve. When training on the CROHME dataset, the CROHME 2014 test set [14] is used as the validation set to select the model with the best performance. During the inference phase, we employ the approximate joint search method previously used in BTTR [36] to predict the output.

4.4 Comparison with State-of-the-art Methods

Table 1 presents the results on the CROHME dataset. To ensure fairness in performance comparison, and considering that different methods have used various data augmentation techniques, many of which have not been disclosed, we limit our comparison to results without data augmentation. Given the relatively small size of the CROHME dataset, we conducted experiments with both the baseline CoMER and the proposed ICAL model using five different random seeds (7, 77, 777, 7777, 77777) under the same experimental conditions. The reported results are the averages and standard deviations of these five experiments. It is noteworthy that while GCN [34] has attained impressive results on CROHME 2016 [15] and 2019 [13], its performance benefits from the additional introduction of category information, whereas ICAL constructs the implicit character sequence directly from the LaTeX sequence, eliminating the need for manually constructed category information. Consequently, we present the performance of the GCN solely for reference and exclude it from direct comparisons. The CoMER model represents the current state-of-the-art (SOTA) method; however, CoMER [35] did not disclose their results without data augmentation in the original paper.
Therefore, we have reproduced the results of CoMER without data augmentation using their open-source code, denoted by † in Table 1. As shown in Table 1, our method achieves the best performance across all metrics. Our method outperforms CoMER by 2.25%/1.81%/1.39% on the CROHME 2014, 2016, and 2019 test sets, respectively. Across all metrics, ICAL achieves an average improvement of 1.6% over CoMER. The experimental results on the CROHME dataset prove the effectiveness of our method. We also conducted experiments on the challenging, real-world HME100K dataset, as shown in Table 2. We replicated the performance of CoMER on HME100K, and our method surpasses the state-of-the-art (SOTA) by 0.94%, reaching 69.06%. The strong performance on the HME100K dataset, which is more complex, larger, and more realistic than CROHME, further proves the superior generalization ability and effectiveness of the ICAL method.

Table 1. Performance comparison on the CROHME dataset. We compare the expression recognition rate (ExpRate) of our model against previous state-of-the-art models on the CROHME 2014/2016/2019 test sets. None of the methods used data augmentation, to ensure a fair comparison. We denote our reproduced results with †. The symbol ∗ signifies the inclusion of supplementary information. All results are reported in percent (%). Columns per test set: ExpRate / ≤1 / ≤2.

Method         CROHME 2014                          CROHME 2016                          CROHME 2019
WAP            46.55       61.16       65.21        44.55       57.10       61.55        -           -           -
DenseWAP       50.1        -           -            47.5        -           -            -           -           -
DenseWAP-MSA   52.8        68.1        72.0         50.1        63.8        67.4         47.7        59.5        63.3
TAP∗           48.47       63.28       67.34        44.81       59.72       62.77        -           -           -
PAL            39.66       56.80       65.11        -           -           -            -           -           -
PAL-v2         48.88       64.50       69.78        49.61       64.08       70.27        -           -           -
WS-WAP         53.65       -           -            51.96       64.34       70.10        -           -           -
ABM            56.85       73.73       81.24        52.92       69.66       78.73        53.96       71.06       78.65
CAN-DWAP       57.00       74.21       80.61        56.06       71.49       79.51        54.88       71.98       79.40
CAN-ABM        57.26       74.52       82.03        56.15       72.71       80.30        55.96       72.73       80.57
DenseWAP-TD    49.1        64.2        67.8         48.5        62.3        65.3         51.4        66.1        69.1
TDv2           53.62       -           -            55.18       -           -            58.72       -           -
SAN            56.2        72.6        79.2         53.6        69.6        76.8         53.5        69.3        70.1
BTTR           53.96       66.02       70.28        52.31       63.90       68.61        52.96       65.97       69.14
GCN∗           60.00       -           -            58.94       -           -            61.63       -           -
CoMER†         58.38±0.62  74.48±1.41  81.14±0.91   56.98±1.41  74.44±0.93  81.87±0.73   59.12±0.43  77.45±0.70  83.87±0.80
ICAL           60.63±0.61  75.99±0.77  82.80±0.40   58.79±0.73  76.06±0.37  83.38±0.16   60.51±0.71  78.00±0.66  84.63±0.45

Table 2. Performance comparison on the HME100K dataset. We compare our proposed ICAL with previous models on HME100K. We denote our reproduced results with †. All results are reported in percent (%).

Method        ExpRate      ≤1           ≤2
DenseWAP      61.85        70.63        77.14
DenseWAP-TD   62.60        79.05        85.67
ABM           65.93        81.16        87.86
SAN           67.1         -            -
CAN-DWAP      67.31        82.93        89.17
CAN-ABM       68.09        83.22        89.91
BTTR          64.1         -            -
CoMER†        68.12        84.20        89.71
ICAL          69.06±0.16   85.16±0.13   90.61±0.09

4.5 Ablation Study

Our research includes a series of ablation studies to corroborate the effectiveness of the proposed method. As presented in Table 4, Initial Loss refers to the cross-entropy loss computed from the discrepancy between the Transformer decoder's direct output and the ground truth (the intact LaTeX sequence). Fusion Loss is the cross-entropy loss determined by comparing the combined outputs from the Implicit Character Construction Module (ICCM) and the decoder, synergized through the Fusion Module, with the ground truth (the intact LaTeX sequence). Implicit Loss is calculated using a cross-entropy formula with adaptive weighting, which evaluates the discrepancies between the ICCM's output and the sequence of implicit characters. When Implicit Loss is not applied, the ICCM module is also omitted, which serves to validate ICCM's effectiveness. Additionally, in these ablation experiments, when Fusion Loss is used we employ the output of the Fusion Module for inference; conversely, when Fusion Loss is not used, inference is conducted directly with the output of the Transformer decoder. From the 4th and 5th rows of each dataset in the table, it is evident that, compared to using only the Initial Loss (1st row), which corresponds to the CoMER baseline, both the Implicit Loss (4th row) and the Fusion Loss (5th row) effectively enhance the model's recognition performance. We also designed experiments that exclusively use Fusion Loss and Implicit Loss (3rd row), which show that, relative to the baseline, our method still achieves a notable improvement.
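The three losses just defined combine into the overall training objective. The sketch below is a toy, pure-Python illustration under stated assumptions: cross-entropy is computed token-by-token from predicted probability vectors, and a fixed weight `lam` stands in for the adaptive weighting of the Implicit Loss, whose exact form is not given in this excerpt.

```python
import math

def cross_entropy(pred_probs, target_ids):
    """Mean token-level cross-entropy: pred_probs is a list of probability
    vectors (one per output position), target_ids the gold token indices."""
    return -sum(math.log(p[t]) for p, t in zip(pred_probs, target_ids)) / len(target_ids)

def ical_objective(initial_probs, fusion_probs, implicit_probs,
                   latex_ids, implicit_ids, lam=1.0):
    """Sketch of the training objective described above: Initial Loss on the
    decoder's direct output, Fusion Loss on the fused ICCM+decoder output
    (both against the intact LaTeX sequence), and Implicit Loss on the ICCM's
    implicit-character prediction. `lam` is a hypothetical stand-in for the
    paper's adaptive weighting."""
    return (cross_entropy(initial_probs, latex_ids)
            + cross_entropy(fusion_probs, latex_ids)
            + lam * cross_entropy(implicit_probs, implicit_ids))
```

Dropping a term (e.g. setting `lam=0` and removing the fusion term) reproduces the ablation rows in Table 4 conceptually.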
Due to time limitations, the ablation study on the HME100K dataset used only a single random seed for each experiment, so those results are not averaged over multiple runs. This may introduce some variability in the reported performance, and we plan to address this limitation by conducting additional experiments with multiple seeds in future work for a more robust evaluation.

Table 4. Ablation study on the CROHME 2014/2016/2019 and HME100K test sets (in %). Note that if Implicit Loss is not applied, the ICCM module is also not used, which serves to validate the effectiveness of the ICCM. Similarly, when Fusion Loss is not implemented, inference is conducted directly using the output of the Transformer decoder.

Dataset        Initial Loss   Fusion Loss   Implicit Loss   ExpRate
CROHME 2014    ✓                                            58.38
                              ✓                             58.52
                              ✓             ✓               59.02
               ✓                            ✓               59.25
               ✓              ✓                             60.04
               ✓              ✓             ✓               60.63
CROHME 2016    ✓                                            56.98
                              ✓                             57.04
                              ✓             ✓               58.13
               ✓                            ✓               57.80
               ✓              ✓                             58.44
               ✓              ✓             ✓               58.79
CROHME 2019    ✓                                            59.12
                              ✓                             59.15
                              ✓             ✓               59.70
               ✓                            ✓               60.40
               ✓              ✓                             59.61
               ✓              ✓             ✓               60.51
HME100K        ✓                                            68.12
                              ✓                             68.18
                              ✓             ✓               68.47
               ✓                            ✓               68.46
               ✓              ✓                             69.09
               ✓              ✓             ✓               69.25

4.6 Inference Speed

As shown in Table 5, we evaluated the inference speed of our method on a single NVIDIA 2080Ti GPU. Compared to the baseline model, our method has a modest increase in the number of parameters with a negligible impact on FPS, and a slight increase in FLOPs.

4.7 Case Study

We provide several typical recognition examples to demonstrate the effectiveness of the proposed method, as shown in Fig. 3. Entries highlighted in red indicate cases where the model made incorrect predictions. 'ICCM' denotes the implicit character sequence predicted by the ICCM module, where <s> is the abbreviation for the <space> token. In Case (a), the baseline CoMER model incorrectly identified the first character, π, as y_{0}.
In contrast, the ICCM correctly determined that there were no implicit characters in the formula, as indicated on the fourth line of Group (a). This accurate detection allowed the correct LaTeX sequence to be output by ICAL, as shown on the third line of Group (a). In Case (b), the ICCM's prediction of the implicit character sequence (fourth line of Group (b)) was crucial: it enabled ICAL to correctly place both characters 3 and 4 in the subscript of q (third line of Group (b)), unlike CoMER, which misidentified this relationship (second line of Group (b)).

Table 5. Comparative analysis of parameters (Params), floating-point operations (FLOPs), and frames per second (FPS).

Method   Input Image Size   Params (M)   FLOPs (G)   FPS
CoMER    (1,1,120,800)      6.39         18.81       2.484
ICAL     (1,1,120,800)      7.37         19.81       2.394

Case (c) demonstrates that the ICCM's prediction of implicit characters can also alleviate the lack-of-coverage issue [20,33]. Here, ICCM's prediction indicated that there should be two explicit characters, 9 and 1, within { }, and correspondingly, ICAL successfully predicted both characters, whereas CoMER predicted only the character 9 and missed the character 1. Moreover, Case (d) effectively highlights how the ICCM and its ability to predict implicit characters enhance the model's understanding of the structural relationships within formulas.

Fig. 3. Case studies for the ground truth and the CoMER and ICAL methods. The red symbols represent incorrect predictions. 'ICCM' represents the implicit character sequence predicted by the ICCM module, where <s> is the abbreviation for the <space> token.

5 Conclusion

In this paper, we propose a novel recognizer framework, ICAL, capable of leveraging global information in LaTeX to correct the predictions of the decoder.
Our main contributions are threefold: (1) we design an Implicit Character Construction Module (ICCM) to predict implicit characters in LaTeX; (2) we employ a Fusion Module to aggregate global information from the implicit characters, thereby refining the predictions of the Transformer decoder, and integrate these two modules into the CoMER model to develop our method, ICAL; (3) experimental results demonstrate that ICAL surpasses previous state-of-the-art approaches, achieving expression recognition rates (ExpRate) of 60.63%, 58.79%, and 60.51% on the CROHME 2014, 2016, and 2019 datasets, respectively, and an ExpRate of 69.06% on the HME100K dataset.

Acknowledgements

This work is supported by the National Science and Technology Major Project (2021ZD0113301) and the National Natural Science Foundation of China (No. 62376012), and is also a research achievement of the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).

References

1. Alvaro, F., Sánchez, J.A., Benedí, J.M.: Recognition of on-line handwritten mathematical expressions using 2D stochastic context-free grammars and hidden Markov models. Pattern Recognition Letters 35, 58–67 (2014)
2. Bian, X., Qin, B., Xin, X., Li, J., Su, X., Wang, Y.: Handwritten mathematical expression recognition via attention aggregation based bi-directional mutual learning. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 113–121 (2022)
3. Chan, K.F., Yeung, D.Y.: Elastic structural matching for online handwritten alphanumeric character recognition. In: Proceedings. Fourteenth International Conference on Pattern Recognition (Cat. No. 98EX170). vol. 2, pp. 1508–1511. IEEE (1998)
4. Chan, K.F., Yeung, D.Y.: An efficient syntactic approach to structural analysis of on-line handwritten mathematical expressions. Pattern Recognition 33(3), 375–384 (2000)
5.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
6. Chou, P.A.: Recognition of equations using a two-dimensional stochastic context-free grammar. In: Visual Communications and Image Processing IV. vol. 1199, pp. 852–865. SPIE (1989)
7. Ding, H., Chen, K., Huo, Q.: An encoder-decoder approach to handwritten mathematical expression recognition with multi-head attention and stacked decoder. In: Document Analysis and Recognition – ICDAR 2021: 16th International Conference, Lausanne, Switzerland, September 5–10, 2021, Proceedings, Part II. pp. 602–616. Springer (2021)
8. Ha, J., Haralick, R.M., Phillips, I.T.: Understanding mathematical expressions from document images. In: Proceedings of 3rd International Conference on Document Analysis and Recognition. vol. 2, pp. 956–959. IEEE (1995)
9. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4700–4708 (2017)
10. Keshari, B., Watt, S.: Hybrid mathematical symbol recognition using support vector machines. In: Ninth International Conference on Document Analysis and Recognition (ICDAR 2007). vol. 2, pp. 859–863. IEEE (2007)
11. Kosmala, A., Rigoll, G., Lavirotte, S., Pottier, L.: On-line handwritten formula recognition using hidden Markov models and context dependent graph grammars. In: Proceedings of the Fifth International Conference on Document Analysis and Recognition (ICDAR '99) (Cat. No. PR00318). pp. 107–110. IEEE (1999)
12. Li, B., Yuan, Y., Liang, D., Liu, X., Ji, Z., Bai, J., Liu, W., Bai, X.: When counting meets HMER: counting-aware network for handwritten mathematical expression recognition. In: European Conference on Computer Vision. pp. 197–214. Springer (2022)
13.
Mahdavi, M., Zanibbi, R., Mouchere, H., Viard-Gaudin, C., Garain, U.: ICDAR 2019 CROHME + TFD: Competition on recognition of handwritten mathematical expressions and typeset formula detection. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1533–1538 (2019)
14. Mouchere, H., Viard-Gaudin, C., Zanibbi, R., Garain, U.: ICFHR 2014 competition on recognition of on-line handwritten mathematical expressions (CROHME 2014). In: 2014 14th International Conference on Frontiers in Handwriting Recognition. pp. 791–796 (2014)
15. Mouchère, H., Viard-Gaudin, C., Zanibbi, R., Garain, U.: ICFHR2016 CROHME: Competition on recognition of online handwritten mathematical expressions. In: 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR). pp. 607–612 (2016)
16. Sakai, I.: Syntax in universal translation. In: Proceedings of the International Conference on Machine Translation and Applied Language Analysis (1961)
17. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
18. Toyota, S., Uchida, S., Suzuki, M.: Structural analysis of mathematical formulae with verification based on formula description grammar. In: Document Analysis Systems VII: 7th International Workshop, DAS 2006, Nelson, New Zealand, February 13–15, 2006. Proceedings. pp. 153–163. Springer (2006)
19. Truong, T.N., Nguyen, C.T., Phan, K.M., Nakagawa, M.: Improvement of end-to-end offline handwritten mathematical expression recognition by weakly supervised learning. In: 2020 17th International Conference on Frontiers in Handwriting Recognition (ICFHR). pp. 181–186. IEEE (2020)
20. Tu, Z., Lu, Z., Liu, Y., Liu, X., Li, H.: Modeling coverage for neural machine translation. arXiv preprint arXiv:1601.04811 (2016)
21. Twaakyondo, H., Okamoto, M.: Structure analysis and recognition of mathematical expressions.
In: Proceedings of 3rd International Conference on Document Analysis and Recognition. vol. 1, pp. 430–437 (1995). https://doi.org/10.1109/ICDAR.1995.599029
22. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30, 5998–6008 (2017)
23. Vuong, B.Q., He, Y., Hui, S.C.: Towards a web-based progressive handwriting recognition environment for mathematical problem solving. Expert Systems with Applications 37(1), 886–893 (2010)
24. Winkler, H.J.: HMM-based handwritten symbol recognition using on-line and off-line features. In: 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings. vol. 6, pp. 3438–3441. IEEE (1996)
25. Wu, C., Du, J., Li, Y., Zhang, J., Yang, C., Ren, B., Hu, Y.: TDv2: A novel tree-structured decoder for offline mathematical expression recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 2694–2702 (2022)
26. Wu, J.W., Yin, F., Zhang, Y.M., Zhang, X.Y., Liu, C.L.: Image-to-markup generation via paired adversarial learning. In: Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part I. pp. 18–34. Springer (2019)
27. Wu, J.W., Yin, F., Zhang, Y.M., Zhang, X.Y., Liu, C.L.: Handwritten mathematical expression recognition via paired adversarial learning. International Journal of Computer Vision, pp. 1–16 (2020)
28. Yuan, Y., Liu, X., Dikubab, W., Liu, H., Ji, Z., Wu, Z., Bai, X.: Syntax-aware network for handwritten mathematical expression recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4553–4562 (2022)
29. Zanibbi, R., Blostein, D., Cordy, J.R.: Recognizing mathematical expressions using tree transformation.
IEEE Transactions on Pattern Analysis and Machine Intelligence 24(11), 1455–1467 (2002)
30. Zhang, J., Du, J., Dai, L.: Multi-scale attention with dense encoder for handwritten mathematical expression recognition. In: 2018 24th International Conference on Pattern Recognition (ICPR). pp. 2245–2250 (2018)
31. Zhang, J., Du, J., Dai, L.: Track, attend, and parse (TAP): An end-to-end framework for online handwritten mathematical expression recognition. IEEE Transactions on Multimedia 21(1), 221–233 (2018)
32. Zhang, J., Du, J., Yang, Y., Song, Y.Z., Wei, S., Dai, L.: A tree-structured decoder for image-to-markup generation. In: ICML (2020)
33. Zhang, J., Du, J., Zhang, S., Liu, D., Hu, Y., Hu, J., Wei, S., Dai, L.: Watch, attend and parse: An end-to-end neural network based approach to handwritten mathematical expression recognition. Pattern Recognition 71, 196–206 (2017)
34. Zhang, X., Ying, H., Tao, Y., Xing, Y., Feng, G.: General category network: Handwritten mathematical expression recognition with coarse-grained recognition task. In: ICASSP 2023 – 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 1–5. IEEE (2023)
35. Zhao, W., Gao, L.: CoMER: Modeling coverage for transformer-based handwritten mathematical expression recognition. In: Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII. pp. 392–408. Springer (2022)
36. Zhao, W., Gao, L., Yan, Z., Peng, S., Du, L., Zhang, Z.: Handwritten mathematical expression recognition with bidirectionally trained transformer. In: Document Analysis and Recognition – ICDAR 2021: 16th International Conference, Lausanne, Switzerland, September 5–10, 2021, Proceedings, Part II. pp. 570–584. Springer (2021) | 6 | 2 | The model uses a DenseNet encoder with multiple layers and a Transformer decoder, which suggests moderate to high complexity.
Given that DenseNet and Transformers are known to have significant memory and computational demands, along with the dataset sizes (8,836 training samples for CROHME with about 300,000 characters and 74,502 training images in HME100K), a dual-GPU setup is reasonable for reducing training time. Based on typical training times for similar architectures, I estimate a training time of around 6 hours with configuration adjustments to optimize speed and efficiency. | yes | Yes | CV | ICAL: Implicit Character-Aided Learning for Enhanced Handwritten Mathematical Expression Recognition | 2024-05-15 0:00:00 | https://github.com/qingzhenduyu/ical | 2 | https://disk.pku.edu.cn/anyshare/en-us/link/AAF10CCC4D539543F68847A9010C607139/EF71051AA2314E3AA921F528C70BF712/A2D37D1699B54529BA80157162294FA5?_tb=none | 1 hour per epoch × 120 epochs = 120 hours | https://colab.research.google.com/drive/1ojkqF09KgeqtsgyPSDS0ya64VPddIgiz?usp=sharing | Yes | Cannot download the data directly into Colab; it must be stored locally and uploaded to Colab, or unzipped into Colab from Google Drive |
Kvasir-SEG | EMCAD | [] | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11T00:00:00 | https://arxiv.org/abs/2405.06880v1 | [
"https://github.com/sldgroup/emcad"
] | {'mean Dice': '0.928'} | [
"mean Dice",
"Average MAE",
"S-Measure",
"max E-Measure",
"mIoU",
"FPS",
"F-measure",
"Precision",
"Recall"
] | Given the following paper and codebase:
Paper: EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation
Codebase: https://github.com/sldgroup/emcad
Improve the EMCAD model on the Kvasir-SEG dataset. The result
should improve on the following metrics: {'mean Dice': '0.928'}. You must use only the codebase provided.
| EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation. Md Mostafijur Rahman, Mustafa Munir, and Radu Marculescu. Department of Chemical Engineering; Department of Mechanical Engineering; Department of Biomedical Engineering; Machine Learning Department — The University of Texas at Austin, Austin, Texas, USA. mostafijur.rahman, mmunir, radum@utexas.edu

Abstract

An efficient and effective decoding mechanism is crucial in medical image segmentation, especially in scenarios with limited computational resources. However, these decoding mechanisms usually come with high computational costs. To address this concern, we introduce EMCAD, a new efficient multi-scale convolutional attention decoder, designed to optimize both performance and computational efficiency. EMCAD leverages a unique multi-scale depth-wise convolution block, significantly enhancing feature maps through multi-scale convolutions. EMCAD also employs channel, spatial, and grouped (large-kernel) gated attention mechanisms, which are highly effective at capturing intricate spatial relationships while focusing on salient regions. By employing group and depth-wise convolution, EMCAD is very efficient and scales well (e.g., only 1.91M parameters and 0.381G FLOPs are needed when using a standard encoder). Our rigorous evaluations across 12 datasets belonging to six medical image segmentation tasks reveal that EMCAD achieves state-of-the-art (SOTA) performance with 79.4% and 80.3% reductions in #Params and #FLOPs, respectively. Moreover, EMCAD's adaptability to different encoders and versatility across segmentation tasks further establish EMCAD as a promising tool, advancing the field towards more efficient and accurate medical image analysis. Our implementation is available at https://github.com/SLDGroup/EMCAD.

1. Introduction

In the realm of medical diagnostics and therapeutic strategies, automated segmentation of medical images is vital, as it classifies pixels to identify critical regions such as lesions, tumors, or entire organs.
A variety of U-shaped convolutional neural network (CNN) architectures [20, 24, 37, 41, 44, 62], notably UNet [44], UNet++ [62], UNet3+ [24], and nnU-Net [19], have become standard techniques for this purpose, achieving high-quality, high-resolution segmentation output. Attention mechanisms [12, 17, 20, 41, 57] have also been integrated into these models to enhance feature maps and improve pixel-level classification. Although attention-based models have shown improved performance, they still face significant challenges due to the computationally expensive convolutional blocks that are typically used in conjunction with attention mechanisms. Recently, vision transformers [18] have shown promise in medical image segmentation tasks [5, 8, 17, 42, 43, 52, 54, 61] by capturing long-range dependencies among pixels through self-attention (SA) mechanisms. Hierarchical vision transformers like Swin [34], PVT [55, 56], MaxViT [49], MERIT [43], ConvFormer [33], and MetaFormer [59] have been introduced to further improve performance in this field. While SA excels at capturing global information, it is less adept at understanding the local spatial context [13, 28]. To address this limitation, some approaches have integrated local convolutional attention within the decoders to better grasp spatial details. Nevertheless, these methods can still be computationally demanding because they frequently employ costly convolutional blocks. This limits their applicability to real-world scenarios where computational resources are restricted. To address the aforementioned limitations, we introduce EMCAD, an efficient multi-scale convolutional attention decoder built on a new multi-scale depth-wise convolution block. More precisely, EMCAD enhances the feature maps via efficient multi-scale convolutions, while incorporating complex spatial relationships and local attention through channel, spatial, and grouped (large-kernel) gated attention mechanisms.
Our contributions are as follows:

• New Efficient Multi-scale Convolutional Decoder: We introduce an efficient multi-scale cascaded fully-convolutional attention decoder (EMCAD) for 2D medical image segmentation; it takes the multi-stage features of vision encoders and progressively enhances the multi-scale and multi-resolution spatial representations. EMCAD has only 0.506M parameters and 0.11G FLOPs for a tiny encoder with #channels = [32, 64, 160, 256], while it has 1.91M parameters and 0.381G FLOPs for a standard encoder with #channels = [64, 128, 320, 512].

• Efficient Multi-scale Convolutional Attention Module: We introduce MSCAM, a new efficient multi-scale convolutional attention module that performs depth-wise convolutions at multiple scales; it refines the feature maps produced by vision encoders and enables capturing multi-scale salient features by suppressing irrelevant regions. The use of depth-wise convolutions makes MSCAM very efficient.

• Large-kernel Grouped Attention Gate: We introduce a new grouped attention gate to fuse refined features with the features from skip connections. By using larger-kernel (3×3) group convolutions instead of point-wise convolutions in the design, we capture salient features in a larger local context with less computation.

• Improved Performance: We empirically show that EMCAD can be used with any hierarchical vision encoder (e.g., PVTv2-B0, PVTv2-B2 [56]), while significantly improving the performance of 2D medical image segmentation. EMCAD produces better results than SOTA methods with a significantly lower computational cost (as shown in Figure 1) on 12 medical image segmentation benchmarks that belong to six different tasks.

arXiv:2405.06880v1 [eess.IV] 11 May 2024

Figure 1. Average DICE scores vs. #FLOPs for different methods over 10 binary medical image segmentation datasets. As shown, our approaches (PVT-EMCAD-B0 and PVT-EMCAD-B2) have the lowest #FLOPs, yet the highest DICE scores.

The remainder of this paper is organized as follows: Section 2 summarizes related work. Section 3 describes the proposed method. Section 4 explains our experimental setup and results on 12 medical image segmentation benchmarks. Section 5 covers different ablation experiments. Lastly, Section 6 concludes the paper.

2. Related Work

2.1. Vision encoders

Convolutional Neural Networks (CNNs) [21–23, 32, 35, 45–48] have been foundational as encoders due to their proficiency in handling spatial relationships in images. More precisely, AlexNet [32] and VGG [46] paved the way, leveraging deep layers of convolutions to extract features progressively. GoogleNet [47] introduces the inception module, allowing more efficient computation of representations across various scales. ResNet [21] introduces residual connections, enabling the training of networks with substantially more layers by addressing the vanishing-gradients problem. MobileNets [22, 45] bring CNNs to mobile devices through lightweight, depth-wise separable convolutions. EfficientNet [48] introduces a scalable architectural design with compound scaling. Although CNNs are pivotal for many vision applications, they generally lack the ability to capture long-range dependencies within images due to their inherent local receptive fields. Recently, Vision Transformers (ViTs), pioneered by Dosovitskiy et al. [18], enabled the learning of long-range relationships among pixels using self-attention (SA). Since then, ViTs have been enhanced by integrating CNN features [49, 56], developing novel SA blocks [34, 49], and introducing new architectural designs [55, 58]. The Swin Transformer [34] incorporates a sliding-window attention mechanism, while SegFormer [58] leverages Mix-FFN blocks for hierarchical structures. PVT [55] uses spatial reduction attention, refined in PVTv2 [56] with overlapping patch embedding and a linear-complexity attention layer.
MaxViT [49] introduces multi-axis self-attention to form a hierarchical CNN-transformer encoder. Although ViTs address the limitation of CNNs in capturing long-range pixel dependencies [21–23, 32, 35, 45–48], they face challenges in capturing the local spatial relationships among pixels. In this paper, we aim to overcome these limitations by introducing a new multi-scale cascaded attention decoder that refines feature maps and incorporates local attention using a multi-scale convolutional attention module.

2.2. Medical image segmentation

Medical image segmentation involves pixel-wise classification to identify various anatomical structures like lesions, tumors, or organs within different imaging modalities such as endoscopy, MRI, or CT scans [8]. U-shaped networks [7, 19, 24, 26, 37, 41, 44, 62] are particularly favored due to their simple but effective encoder-decoder design. The UNet [44] pioneered this approach with its use of skip connections to fuse features at different resolution stages. UNet++ [62] evolves this design by incorporating nested encoder-decoder pathways with dense skip connections. Expanding on these ideas, UNet 3+ [24] introduces comprehensive skip pathways that facilitate full-scale feature integration. Further advancement comes with DC-UNet [37], which integrates a multi-resolution convolution scheme and residual paths into its skip connections. The DeepLab series, including DeepLabv3 [10] and DeepLabv3+ [11], introduces atrous convolutions and spatial pyramid pooling to handle multi-scale information. SegNet [2] uses pooling indices to upsample feature maps, preserving boundary details. nnU-Net [19] automatically configures hyperparameters based on the specific dataset characteristics, using standard 2D and 3D UNets. Collectively, these U-shaped models have become a benchmark for success in the domain of medical image segmentation.
Recently, vision transformers have emerged as a formidable force in medical image segmentation, harnessing the ability to capture pixel relationships at global scales [5, 8, 17, 42, 43, 52, 58, 61]. TransUNet [8] presents a novel blend of CNNs for local feature extraction and transformers for global context, enhancing both local and global feature capture. Swin-Unet [5] extends this by incorporating Swin Transformer blocks [34] into a U-shaped model for both the encoding and decoding processes. Building on these concepts, MERIT [43] introduces a multi-scale hierarchical transformer, which employs SA across different window sizes, thus enhancing the model's capacity to capture the multi-scale features critical for medical image segmentation. The integration of attention mechanisms has been investigated within CNNs [20, 41] and transformer-based systems [17] for enhancing medical image segmentation. PraNet [20] employs a reverse attention strategy for feature refinement. PolypPVT [17] leverages PVTv2 [56] as its backbone encoder and incorporates CBAM [57] within its decoding stages. CASCADE [42] presents a novel cascaded decoder, combining channel [23] and spatial [9] attention to refine features at multiple stages extracted from a transformer encoder, culminating in high-resolution segmentation outputs. While CASCADE achieves notable performance in segmenting medical images by integrating local and global insights from transformers, it is computationally inefficient due to the use of triple 3×3 convolution layers at each decoder stage. In addition, it uses single-scale convolutions during decoding. Our proposal adopts multi-scale depth-wise convolutions to mitigate these constraints.

3. Methodology

In this section, we first introduce our new EMCAD decoder and then explain two transformer-based architectures (i.e., PVT-EMCAD-B0 and PVT-EMCAD-B2) incorporating the proposed decoder.

3.1.
Efficient multi-scale convolutional attention decoding (EMCAD)

In this section, we introduce our efficient multi-scale convolutional attention decoding (EMCAD) to process the multi-stage features extracted from pretrained hierarchical vision encoders for high-resolution semantic segmentation. As shown in Figure 2(b), EMCAD consists of efficient multi-scale convolutional attention modules (MSCAMs) to robustly enhance the feature maps, large-kernel grouped attention gates (LGAGs) to refine feature maps by fusing them with the skip connections via a gated attention mechanism, efficient up-convolution blocks (EUCBs) for up-sampling followed by enhancement of the feature maps, and segmentation heads (SHs) to produce the segmentation outputs.

More specifically, we use four MSCAMs to refine the pyramid features (i.e., X1, X2, X3, X4 in Figure 2) extracted from the four stages of the encoder. After each MSCAM, we use an SH to produce a segmentation map for that stage. Subsequently, we upscale the refined feature maps using EUCBs and add them to the outputs of the corresponding LGAGs. Finally, we add the four different segmentation maps to produce the final segmentation output. The different modules of our decoder are described next.

3.1.1 Large-kernel grouped attention gate (LGAG)

We introduce a new large-kernel grouped attention gate (LGAG) to progressively combine feature maps with attention coefficients, which are learned by the network to allow higher activation of relevant features and suppression of irrelevant ones. This process employs a gating signal derived from higher-level features to control the flow of information across different stages of the network, thus enhancing its precision for medical image segmentation. Unlike Attention UNet [41], which uses 1×1 convolutions to process the gating signal g (features from skip connections) and the input feature map x (upsampled features), in our qatt(.) function we process g and x by applying separate 3×3 group convolutions GCg(.)
and GCx(.), respectively. These convolved features are then normalized using batch normalization (BN(.)) [27] and merged through element-wise addition. The resultant feature map is activated through a ReLU (R(.)) layer [39]. Afterward, we apply a 1×1 convolution (C(.)) followed by a BN(.) layer to get a single-channel feature map. We then pass the resultant single-channel feature map through a Sigmoid (σ(.)) activation function to yield the attention coefficients. The output of this transformation is used to scale the input feature x through element-wise multiplication, producing the attention-gated feature LGAG(g, x). The LGAG(·) (Figure 2(g)) can be formulated as in Equations 1 and 2:

qatt(g, x) = R(BN(GCg(g)) + BN(GCx(x)))   (1)

LGAG(g, x) = x ⊛ σ(BN(C(qatt(g, x))))   (2)

Due to using 3×3 kernel group convolutions in qatt(.), our LGAG captures comparatively larger spatial contexts at a lower computational cost.

Figure 2. Hierarchical encoder with the newly proposed EMCAD decoder architecture. (a) CNN or transformer encoder with four hierarchical stages, (b) EMCAD decoder, (c) Efficient up-convolution block (EUCB), (d) Multi-scale convolutional attention module (MSCAM), (e) Multi-scale convolution block (MSCB), (f) Multi-scale (parallel) depth-wise convolution (MSDC), (g) Large-kernel grouped attention gate (LGAG), (h) Channel attention block (CAB), and (i) Spatial attention block (SAB). X1, X2, X3, and X4 are the features from the four stages of the hierarchical encoder; p1, p2, p3, and p4 are the output segmentation maps from the four stages of our decoder.

3.1.2 Multi-scale convolutional attention module (MSCAM)

We introduce an efficient multi-scale convolutional attention module to refine the feature maps.
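The LGAG above (Equations 1 and 2) can be sketched in PyTorch as follows. This is a minimal illustration, not the released implementation: the channel sizes (f_g, f_x, f_int) and the group count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LGAG(nn.Module):
    """Sketch of the large-kernel grouped attention gate (Eqs. 1-2).

    Channel sizes and the group count here are illustrative assumptions,
    not the paper's exact configuration.
    """
    def __init__(self, f_g, f_x, f_int, kernel_size=3, groups=2):
        super().__init__()
        pad = kernel_size // 2
        # GC_g and GC_x: separate 3x3 group convolutions, each followed by BN
        self.gc_g = nn.Sequential(
            nn.Conv2d(f_g, f_int, kernel_size, padding=pad, groups=groups),
            nn.BatchNorm2d(f_int))
        self.gc_x = nn.Sequential(
            nn.Conv2d(f_x, f_int, kernel_size, padding=pad, groups=groups),
            nn.BatchNorm2d(f_int))
        # C: 1x1 convolution to a single-channel map, then BN and Sigmoid
        self.psi = nn.Sequential(
            nn.Conv2d(f_int, 1, kernel_size=1),
            nn.BatchNorm2d(1),
            nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        q_att = self.relu(self.gc_g(g) + self.gc_x(x))  # Eq. 1
        return x * self.psi(q_att)                      # Eq. 2 (Hadamard product)
```

In the full decoder, g would come from the skip connection and x from the upsampled path, following the description above.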
MSCAM consists of a channel attention block (CAB(·)) to put emphasis on pertinent channels, a spatial attention block [9] (SAB(·)) to capture the local contextual information, and an efficient multi-scale convolution block (MSCB(.)) to enhance the feature maps while preserving contextual relationships. The MSCAM(.) (Figure 2(d)) is given in Equation 3:

MSCAM(x) = MSCB(SAB(CAB(x)))   (3)

where x is the input tensor. Due to using depth-wise convolution at multiple scales, our MSCAM is more effective, with significantly lower computational cost, than the convolutional attention module (CAM) proposed in [42].

Multi-scale Convolution Block (MSCB): We introduce an efficient multi-scale convolution block to enhance the features generated by our cascaded expanding path. In our MSCB, we follow the design of the inverted residual block (IRB) of MobileNetV2 [45]. However, unlike the IRB, our MSCB performs depth-wise convolution at multiple scales and uses channel shuffle [60] to shuffle channels across groups. More specifically, in our MSCB, we first expand the number of channels (i.e., expansion factor = 2) using a point-wise (1×1) convolution layer PWC1(·) followed by a batch normalization layer BN(·) and a ReLU6 [31] activation layer R6(.). We then use a multi-scale depth-wise convolution MSDC(.) to capture both multi-scale and multi-resolution contexts. As depth-wise convolution overlooks the relationships among channels, we use a channel shuffle operation CS(.) to incorporate relationships among channels. Afterward, we use another point-wise convolution PWC2(.) followed by a BN(.) to transform back to the original number of channels, which also encodes dependency among channels. The MSCB(·) (Figure 2(e)) is formulated as in Equation 4:

MSCB(x) = BN(PWC2(CS(MSDC(R6(BN(PWC1(x)))))))   (4)

where the parallel MSDC(.) (Figure 2(f)) for different kernel sizes (KS) can be formulated using Equation 5:

MSDC(x) = Σ_{ks∈KS} DWCBks(x)   (5)

where DWCBks(x) = R6(BN(DWCks(x))).
Here, DWCks(.) is a depth-wise convolution with kernel size ks; BN(.) and R6(.) are batch normalization and ReLU6 activation, respectively. Additionally, our sequential MSDC(.) uses the recursively updated input x, where the input x is residually connected to the previous DWCBks(.) for better regularization, as in Equation 6:

x = x + DWCBks(x)   (6)

Channel Attention Block (CAB): We use a channel attention block to assign different levels of importance to each channel, thus emphasizing more relevant features while suppressing less useful ones. Basically, the CAB identifies which feature maps to focus on (and then refines them). Following [57], in the CAB we first apply adaptive maximum pooling (Pm(·)) and adaptive average pooling (Pa(·)) over the spatial dimensions (i.e., height and width) to extract the most significant feature of the entire feature map per channel. Then, for each pooled feature map, we reduce the number of channels by a ratio r = 1/16 using a point-wise convolution (C1(·)) followed by a ReLU activation (R). Afterward, we recover the original number of channels using another point-wise convolution (C2(·)). We then add both recovered feature maps and apply a Sigmoid (σ) activation to estimate the attention weights. Finally, we apply these weights to the input x using the Hadamard product (⊛). The CAB(·) (Figure 2(h)) is defined in Equation 7:

CAB(x) = σ(C2(R(C1(Pm(x)))) + C2(R(C1(Pa(x))))) ⊛ x   (7)

Spatial Attention Block (SAB): We use spatial attention to mimic the attentional processes of the human brain by focusing on specific parts of an input image. Basically, the SAB determines where to focus in a feature map and then enhances those features. This process enhances the model's ability to recognize and respond to relevant spatial features, which is crucial for image segmentation, where the context and location of objects significantly influence the output.
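Equations 4–7 above, together with the SAB defined next in Equation 8, can be sketched in PyTorch as follows. This is a minimal sketch under stated assumptions: the expansion factor 2 and kernels [1, 3, 5] follow the text, while the channel-shuffle group count is an illustrative choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def channel_shuffle(x, groups):
    # Reorder channels across groups so depth-wise branches exchange information
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class MSCB(nn.Module):
    """Sketch of the multi-scale convolution block (Eqs. 4-5, parallel MSDC)."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5), expansion=2, shuffle_groups=4):
        super().__init__()
        mid = channels * expansion
        self.shuffle_groups = shuffle_groups
        self.pwc1 = nn.Sequential(nn.Conv2d(channels, mid, 1),
                                  nn.BatchNorm2d(mid), nn.ReLU6(inplace=True))
        self.dwcbs = nn.ModuleList([  # DWCB_ks branches of the parallel MSDC
            nn.Sequential(nn.Conv2d(mid, mid, ks, padding=ks // 2, groups=mid),
                          nn.BatchNorm2d(mid), nn.ReLU6(inplace=True))
            for ks in kernel_sizes])
        self.pwc2 = nn.Sequential(nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels))

    def forward(self, x):
        x = self.pwc1(x)                             # expand channels
        x = sum(dwcb(x) for dwcb in self.dwcbs)      # Eq. 5: sum over kernel sizes
        x = channel_shuffle(x, self.shuffle_groups)  # CS(.) in Eq. 4
        return self.pwc2(x)                          # project back to original channels

class CAB(nn.Module):
    """Sketch of the channel attention block (Eq. 7) with reduction r = 1/16."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.mlp = nn.Sequential(nn.Conv2d(channels, hidden, 1),  # C1
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(hidden, channels, 1))  # C2

    def forward(self, x):
        w = self.mlp(F.adaptive_max_pool2d(x, 1)) + self.mlp(F.adaptive_avg_pool2d(x, 1))
        return torch.sigmoid(w) * x                  # Hadamard product with weights

class SAB(nn.Module):
    """Sketch of the spatial attention block (Eq. 8) with a 7x7 large kernel."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.lkc = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        ch_max = x.max(dim=1, keepdim=True).values   # Ch_max: max over channels
        ch_avg = x.mean(dim=1, keepdim=True)         # Ch_avg: mean over channels
        w = torch.sigmoid(self.lkc(torch.cat([ch_max, ch_avg], dim=1)))
        return w * x
```

With these pieces, the module composes as MSCAM(x) = MSCB(SAB(CAB(x))), per Equation 3.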
In SAB, we first pool the maximum (Chmax(·)) and average (Chavg(·)) values along the channel dimension to pay attention to local features. Then, we use a large-kernel (i.e., 7×7 as in [17]) convolution layer LKC(.) to enhance local contextual relationships among features. Afterward, we apply the Sigmoid activation (σ) to calculate the attention weights. Finally, we apply these weights to the input x (using the Hadamard product (⊛)) to attend to information in a more targeted way. The SAB(.) (Figure 2(i)) is defined in Equation 8:

SAB(x) = σ(LKC([Chmax(x), Chavg(x)])) ⊛ x   (8)

3.1.3 Efficient up-convolution block (EUCB)

We use an efficient up-convolution block to progressively upsample the feature maps of the current stage to match the dimension and resolution of the feature maps from the next skip connection. The EUCB first uses upsampling Up(·) with scale factor 2 to upscale the feature maps. Then, it enhances the upscaled feature maps by applying a 3×3 depth-wise convolution DWC(·) followed by BN(·) and a ReLU(.) activation. Finally, a 1×1 convolution C1×1(.) is used to reduce the number of channels to match the next stage. The EUCB(·) (Figure 2(c)) is formulated as in Equation 9:

EUCB(x) = C1×1(ReLU(BN(DWC(Up(x)))))   (9)

Due to using depth-wise convolution instead of 3×3 convolution, our EUCB is very efficient.

3.1.4 Segmentation head (SH)

We use segmentation heads to produce the segmentation outputs from the refined feature maps of the four decoder stages. The SH layer applies a 1×1 convolution Conv1×1(·) to the refined feature map having chi channels (chi is the number of channels in the feature map of stage i) and produces an output whose number of channels equals the number of classes in the target dataset for multi-class segmentation, but 1 channel for binary segmentation. The SH(·) is formulated as in Equation 10:

SH(x) = Conv1×1(x)   (10)

3.2.
Overall architecture

To show the generalization, effectiveness, and ability to process multi-scale features for medical image segmentation, we integrate our EMCAD decoder alongside the tiny (PVTv2-B0) and standard (PVTv2-B2) networks of PVTv2 [56]. However, our decoder is adaptable and seamlessly compatible with other hierarchical backbone networks.

PVTv2 differs from conventional transformer patch embedding modules by applying convolutional operations for consistent spatial information capture. Using the PVTv2-b0 (Tiny) and PVTv2-b2 (Standard) encoders [56], we develop the PVT-EMCAD-B0 and PVT-EMCAD-B2 architectures. To adopt PVTv2, we first extract the features (X1, X2, X3, and X4) from the four stages and feed them (i.e., X4 into the upsample path and X3, X2, X1 into the skip connections) into our EMCAD decoder, as shown in Figure 2(a-b). EMCAD then processes them and produces four segmentation maps that correspond to the four stages of the encoder network.

3.3. Multi-stage loss and outputs aggregation

Our EMCAD decoder's four segmentation heads produce four prediction maps p1, p2, p3, and p4 across its stages.

Loss aggregation: We adopt a combinatorial approach to loss combination called MUTATION, inspired by the work of MERIT [43], for multi-class segmentation. This involves calculating the loss for all possible combinations of the predictions derived from the 4 heads, totaling 2^4 − 1 = 15 unique combinations, and then summing these losses. We focus on minimizing this cumulative combinatorial loss during the training process. For binary segmentation, we optimize an additive loss like [42], with an additional term L(p1+p2+p3+p4), as in Equation 11:

Ltotal = αLp1 + βLp2 + γLp3 + ζLp4 + δL(p1+p2+p3+p4)   (11)

where Lp1, Lp2, Lp3, and Lp4 are the losses of the individual prediction maps, and α = β = γ = ζ = δ = 1.0 are the weights assigned to each loss.

Output segmentation maps aggregation: We consider the prediction map p4, from the last stage of our decoder, as the final segmentation map.
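The MUTATION-style loss aggregation described above can be illustrated with a small, self-contained sketch. This is plain Python with a placeholder loss function; fusing each head combination by element-wise summation before computing the loss is our reading of the scheme, not code from the paper.

```python
from itertools import combinations

def mutation_loss(preds, loss_fn, target):
    """Sum the loss over every non-empty combination of head predictions.

    `preds` are the head outputs p1..p4; each combination is fused by
    element-wise summation of its members (an illustrative choice).
    """
    total = 0.0
    count = 0
    for r in range(1, len(preds) + 1):
        for combo in combinations(preds, r):
            fused = [sum(vals) for vals in zip(*combo)]  # element-wise sum
            total += loss_fn(fused, target)
            count += 1
    return total, count  # count == 2**len(preds) - 1

# Toy example: four scalar-sequence "predictions" and an L1-style loss
heads = [[0.9], [0.8], [1.1], [1.0]]
l1 = lambda p, t: sum(abs(a - b) for a, b in zip(p, t))
loss, n = mutation_loss(heads, l1, [1.0])
print(n)  # 15 combinations from 4 heads (2^4 - 1)
```

The same enumeration applies unchanged when the predictions are tensors and the loss is a segmentation criterion.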
Then, we obtain the final segmentation output by employing a Sigmoid function for binary or a Softmax function for multi-class segmentation.

4. Experiments

In this section, we present the details of our implementation, followed by a comparative analysis of our PVT-EMCAD-B0 and PVT-EMCAD-B2 against SOTA methods. Datasets and evaluation metrics are in Supplementary Section 7.

Methods | #Params | #FLOPs | Clinic | Colon | ETIS | Kvasir | BKAI | ISIC17 | ISIC18 | DSB18 | EM | BUSI | Avg.
UNet [44] | 34.53M | 65.53G | 92.11 | 83.95 | 76.85 | 82.87 | 85.05 | 83.07 | 86.67 | 92.23 | 95.46 | 74.04 | 85.23
UNet++ [62] | 9.16M | 34.65G | 92.17 | 87.88 | 77.40 | 83.36 | 84.07 | 82.98 | 87.46 | 91.97 | 95.48 | 74.76 | 85.75
AttnUNet [41] | 34.88M | 66.64G | 92.20 | 86.46 | 76.84 | 83.49 | 84.07 | 83.66 | 87.05 | 92.22 | 95.55 | 74.48 | 85.60
DeepLabv3+ [10] | 39.76M | 14.92G | 93.24 | 91.92 | 90.73 | 89.06 | 89.74 | 83.84 | 88.64 | 92.14 | 94.96 | 76.81 | 89.11
PraNet [20] | 32.55M | 6.93G | 91.71 | 89.16 | 83.84 | 84.82 | 85.56 | 83.03 | 88.56 | 89.89 | 92.37 | 75.14 | 86.41
CaraNet [38] | 46.64M | 11.48G | 94.08 | 91.19 | 90.25 | 89.74 | 89.71 | 85.02 | 90.18 | 89.15 | 92.78 | 77.34 | 88.94
UACANet-L [30] | 69.16M | 31.51G | 94.16 | 91.02 | 89.77 | 90.17 | 90.35 | 83.72 | 89.76 | 88.86 | 89.28 | 76.96 | 88.41
SSFormer-L [54] | 66.22M | 17.28G | 94.18 | 92.11 | 90.16 | 91.47 | 91.14 | 85.28 | 90.25 | 92.03 | 94.95 | 78.76 | 90.03
PolypPVT [17] | 25.11M | 5.30G | 94.13 | 91.53 | 89.93 | 91.56 | 91.17 | 85.56 | 90.36 | 90.69 | 94.40 | 79.35 | 89.87
TransUNet [8] | 105.32M | 38.52G | 93.90 | 91.63 | 87.79 | 91.08 | 89.17 | 85.00 | 89.16 | 92.04 | 95.27 | 78.30 | 89.33
SwinUNet [5] | 27.17M | 6.2G | 92.42 | 89.27 | 85.10 | 89.59 | 87.61 | 83.97 | 89.26 | 91.03 | 94.47 | 77.38 | 88.01
TransFuse [61] | 143.74M | 82.71G | 93.62 | 90.35 | 86.91 | 90.24 | 87.47 | 84.89 | 89.62 | 90.85 | 94.35 | 79.36 | 88.77
UNeXt [50] | 1.47M | 0.57G | 90.20 | 83.84 | 74.03 | 77.88 | 77.93 | 82.74 | 87.78 | 86.01 | 93.81 | 74.71 | 82.89
PVT-CASCADE [42] | 34.12M | 7.62G | 94.53 | 91.60 | 91.03 | 92.05 | 92.14 | 85.50 | 90.41 | 92.35 | 95.42 | 79.21 | 90.42
PVT-EMCAD-B0 (Ours) | 3.92M | 0.84G | 94.60 | 91.71 | 91.65 | 91.95 | 91.30 | 85.67 | 90.70 | 92.46 | 95.35 | 79.80 | 90.52
PVT-EMCAD-B2 (Ours) | 26.76M | 5.6G | 95.21 | 92.31 | 92.29 | 92.75 | 92.96 | 85.95 | 90.96 | 92.74 | 95.53 | 80.25 | 91.10
Table 1. Results of binary medical image segmentation (i.e., polyp: Clinic, Colon, ETIS, Kvasir, BKAI; skin lesion: ISIC17, ISIC18; cell: DSB18, EM; and breast cancer: BUSI). We reproduce the results of SOTA methods using their publicly available implementations with our train-val-test splits of 80:10:10. #FLOPs of all methods are reported for 256×256 inputs, except Swin-UNet (224×224). All results are averaged over five runs. Best results are shown in bold.

Architectures | DICE↑ | HD95↓ | mIoU↑ | Aorta | GB | KL | KR | Liver | PC | SP | SM
UNet [44] | 70.11 | 44.69 | 59.39 | 84.00 | 56.70 | 72.41 | 62.64 | 86.98 | 48.73 | 81.48 | 67.96
AttnUNet [41] | 71.70 | 34.47 | 61.38 | 82.61 | 61.94 | 76.07 | 70.42 | 87.54 | 46.70 | 80.67 | 67.66
R50+UNet [8] | 74.68 | 36.87 | − | 84.18 | 62.84 | 79.19 | 71.29 | 93.35 | 48.23 | 84.41 | 73.92
R50+AttnUNet [8] | 75.57 | 36.97 | − | 55.92 | 63.91 | 79.20 | 72.71 | 93.56 | 49.37 | 87.19 | 74.95
SSFormer [54] | 78.01 | 25.72 | 67.23 | 82.78 | 63.74 | 80.72 | 78.11 | 93.53 | 61.53 | 87.07 | 76.61
PolypPVT [17] | 78.08 | 25.61 | 67.43 | 82.34 | 66.14 | 81.21 | 73.78 | 94.37 | 59.34 | 88.05 | 79.40
TransUNet [8] | 77.61 | 26.90 | 67.32 | 86.56 | 60.43 | 80.54 | 78.53 | 94.33 | 58.47 | 87.06 | 75.00
SwinUNet [5] | 77.58 | 27.32 | 66.88 | 81.76 | 65.95 | 82.32 | 79.22 | 93.73 | 53.81 | 88.04 | 75.79
MT-UNet [53] | 78.59 | 26.59 | − | 87.92 | 64.99 | 81.47 | 77.29 | 93.06 | 59.46 | 87.75 | 76.81
MISSFormer [25] | 81.96 | 18.20 | − | 86.99 | 68.65 | 85.21 | 82.00 | 94.41 | 65.67 | 91.92 | 80.81
PVT-CASCADE [42] | 81.06 | 20.23 | 70.88 | 83.01 | 70.59 | 82.23 | 80.37 | 94.08 | 64.43 | 90.10 | 83.69
TransCASCADE [42] | 82.68 | 17.34 | 73.48 | 86.63 | 68.48 | 87.66 | 84.56 | 94.43 | 65.33 | 90.79 | 83.52
PVT-EMCAD-B0 (Ours) | 81.97 | 17.39 | 72.64 | 87.21 | 66.62 | 87.48 | 83.96 | 94.57 | 62.00 | 92.66 | 81.22
PVT-EMCAD-B2 (Ours) | 83.63 | 15.68 | 74.65 | 88.14 | 68.87 | 88.08 | 84.10 | 95.26 | 68.51 | 92.17 | 83.92

Table 2. Results of abdomen organ segmentation on the Synapse multi-organ dataset. DICE scores are reported for individual organs. Results of UNet, AttnUNet, PolypPVT, SSFormer, TransUNet, and SwinUNet are taken from [42]. ↑ (↓) denotes the higher (lower) the better; '−' means data missing from the source. EMCAD results are averaged over five runs. Best results are shown in bold.

4.1.
Implementation details

We implement our network and conduct experiments using PyTorch 1.11.0 on a single NVIDIA RTX A6000 GPU with 48GB of memory. We utilize ImageNet [16] pretrained PVTv2-b0 and PVTv2-b2 [56] as encoders. In the MSDC of our decoder, we set the multi-scale kernels to [1, 3, 5] through an ablation study. We use the parallel arrangement of depth-wise convolutions in all experiments. Our models are trained using the AdamW optimizer [36] with a learning rate and weight decay of 1e-4. We generally train for 200 epochs with a batch size of 16, except for Synapse multi-organ (300 epochs, batch size 6) and ACDC cardiac organ (400 epochs, batch size 12), saving the best model based on the DICE score. We resize images to 352×352 and use a multi-scale {0.75, 1.0, 1.25} training strategy with a gradient clip limit of 0.5 for ClinicDB [3], Kvasir [29], ColonDB [51], ETIS [51], BKAI [40], ISIC17 [15], and ISIC18 [15], while we resize images to 256×256 for BUSI [1], EM [6], and DSB18 [4]. For the Synapse and ACDC datasets, images are resized to 224×224, with random rotation and flipping augmentations, optimizing a combined Cross-entropy (0.3) and DICE (0.7) loss. For binary segmentation, we utilize the combined weighted BinaryCrossEntropy (BCE) and weighted IoU loss function.

4.2. Results

We compare our architectures (i.e., PVT-EMCAD-B0 and PVT-EMCAD-B2) with SOTA CNN- and transformer-based

Figure 3. Average DICE scores vs. #Params for different methods over 10 binary medical image segmentation datasets. As shown, our proposed approaches (PVT-EMCAD-B0 and PVT-EMCAD-B2) have the fewest parameters, yet the highest DICE scores.
Methods | Avg. DICE | RV | Myo | LV
R50+UNet [8] | 87.55 | 87.10 | 80.63 | 94.92
R50+AttnUNet [8] | 86.75 | 87.58 | 79.20 | 93.47
ViT+CUP [8] | 81.45 | 81.46 | 70.71 | 92.18
R50+ViT+CUP [8] | 87.57 | 86.07 | 81.88 | 94.75
TransUNet [8] | 89.71 | 86.67 | 87.27 | 95.18
SwinUNet [5] | 88.07 | 85.77 | 84.42 | 94.03
MT-UNet [53] | 90.43 | 86.64 | 89.04 | 95.62
MISSFormer [25] | 90.86 | 89.55 | 88.04 | 94.99
PVT-CASCADE [42] | 91.46 | 89.97 | 88.90 | 95.50
TransCASCADE [42] | 91.63 | 90.25 | 89.14 | 95.50
Cascaded MERIT [43] | 91.85 | 90.23 | 89.53 | 95.80
PVT-EMCAD-B0 (Ours) | 91.34±0.2 | 89.37 | 88.99 | 95.65
PVT-EMCAD-B2 (Ours) | 92.12±0.2 | 90.65 | 89.68 | 96.02

Table 3. Results of cardiac organ segmentation on the ACDC dataset. DICE scores (%) are reported for individual organs. We take the results of SwinUNet from [42]. Best results are shown in bold.

segmentation methods on 12 datasets that belong to six medical image segmentation tasks. Qualitative results are in Supplementary Section 7.3.

4.2.1 Results of binary medical image segmentation

Results for the different methods on 10 binary medical image segmentation datasets are shown in Table 1 and Figure 1. Our PVT-EMCAD-B2 attains the highest average DICE score (91.10%) with only 26.76M parameters and 5.6G FLOPs. The multi-scale depth-wise convolution in our EMCAD decoder, combined with the transformer encoder, contributes to these performance gains.

Polyp segmentation: Table 1 reveals that our PVT-EMCAD-B2 surpasses all SOTA methods on the five polyp segmentation datasets. PVT-EMCAD-B2 achieves DICE score improvements of 1.08%, 0.78%, 2.36%, 1.19%, and 1.79% over PolypPVT on ClinicDB, ColonDB, ETIS, Kvasir, and BKAI-IGH, despite having slightly more parameters and FLOPs. The smallest model, UNeXt, exhibits the worst performance on all five polyp segmentation datasets. Our smaller model, with only 3.92M parameters and 0.84G FLOPs, also outperforms all the methods except PVT-CASCADE (on Kvasir and BKAI-IGH) and SSFormer-L (on ColonDB), which achieve the best performance among SOTA methods. In conclusion, our PVT-EMCAD-B2 achieves new SOTA results on these five polyp segmentation datasets.

Cascaded | LGAG | MSCAM | #FLOPs(G) @224 | #FLOPs(G) @256 | #Params(M) | Avg. DICE
No | No | No | 0 | 0 | 0 | 80.10±0.2
Yes | No | No | 0.100 | 0.131 | 0.224 | 81.08±0.2
Yes | Yes | No | 0.108 | 0.141 | 0.235 | 81.92±0.2
Yes | No | Yes | 0.373 | 0.487 | 1.898 | 82.86±0.3
Yes | Yes | Yes | 0.381 | 0.498 | 1.91 | 83.63±0.3

Table 4. Effect of the different components of EMCAD with the PVTv2-b2 encoder on the Synapse multi-organ dataset. #FLOPs are reported for input resolutions of 224×224 and 256×256. All results are averaged over five runs. Best results are shown in bold.

Skin lesion segmentation: Table 1 shows PVT-EMCAD-B2's strong performance on the ISIC17 and ISIC18 skin lesion segmentation datasets, achieving DICE scores of 85.95% and 90.96%, surpassing DeepLabV3+ by 2.11% and 2.32%. It also beats the nearest method, PVT-CASCADE, by 0.45% and 0.55% on ISIC17 and ISIC18, respectively, though our decoder is significantly more efficient than CASCADE. Our PVT-EMCAD-B0 also shows huge potential for point-of-care applications like skin lesion segmentation, with only 3.92M parameters and 0.84G FLOPs.

Cell segmentation: To evaluate our method's effectiveness in biological imaging, we use DSB18 [4] for cell nuclei and EM [6] for cell structure segmentation. As Table 1 indicates, our PVT-EMCAD-B2 sets a SOTA benchmark in cell nuclei segmentation on DSB18, outperforming DeepLabv3+, TransFuse, and PVT-CASCADE. On the EM dataset, PVT-EMCAD-B2 secures the second-best DICE score (95.53%), at significantly lower computational cost than the top-performing AttnUNet (95.55%).

Breast cancer segmentation: We conduct experiments on the BUSI dataset for breast cancer segmentation in ultrasound images. Our PVT-EMCAD-B2 achieves the SOTA DICE score (80.25%) on this dataset. Furthermore, our PVT-EMCAD-B0 outperforms the computationally similar method UNeXt by a notable margin of 5.54%.
4.2.2 Results of abdomen organ segmentation

Table 2 shows that our PVT-EMCAD-B2 excels in abdomen organ segmentation on the Synapse multi-organ dataset, achieving the highest average DICE score of 83.63% and surpassing all SOTA CNN- and transformer-based methods. It outperforms PVT-CASCADE by 2.57% in DICE score and by 4.55 in HD95 distance, indicating superior organ boundary localization. Our EMCAD decoder boosts individual organ segmentation, significantly outperforming SOTA methods on six of eight organs.

Conv. kernels | [1] | [3] | [5] | [1,3] | [3,3] | [1,3,5] | [3,3,3] | [3,5,7] | [1,3,5,7] | [1,3,5,7,9]
Synapse | 82.43 | 82.79 | 82.74 | 82.98 | 82.81 | 83.63 | 82.92 | 83.11 | 83.57 | 83.34
ClinicDB | 94.81 | 94.90 | 94.98 | 95.13 | 95.06 | 95.21 | 95.15 | 95.03 | 95.18 | 95.07

Table 5. Effect of multi-scale kernels in the depth-wise convolution of MSDC on the ClinicDB and Synapse multi-organ datasets. We use the PVTv2-b2 encoder for these experiments. All results are averaged over five runs. Best results are highlighted in bold.

Encoders | Decoders | #FLOPs(G) | #Params(M) | DICE (%)
PVTv2-B0 | CASCADE | 0.439 | 2.32 | 80.54
PVTv2-B0 | EMCAD (Ours) | 0.110 | 0.507 | 81.97
PVTv2-B2 | CASCADE | 1.93 | 9.27 | 82.78
PVTv2-B2 | EMCAD (Ours) | 0.381 | 1.91 | 83.63

Table 6. Comparison with the baseline decoder on the Synapse multi-organ dataset. We report only the #FLOPs (with an input resolution of 224×224) and the #parameters of the decoders. All results are averaged over five runs. Best results are shown in bold.

4.2.3 Results of cardiac organ segmentation

Table 3 shows the DICE scores of our PVT-EMCAD-B2 and PVT-EMCAD-B0, along with other SOTA methods, on the MRI images of the ACDC dataset for cardiac organ segmentation. Our PVT-EMCAD-B2 achieves the highest average DICE score of 92.12%, improving by about 0.27% over Cascaded MERIT, though our network has a significantly lower computational cost. Besides, PVT-EMCAD-B2 has better DICE scores in all three organ segmentations.

5.
Ablation Studies

In this section, we conduct ablation studies to explore different aspects of our architectures and the experimental framework. More ablations are in Supplementary Section 8.

5.1. Effect of different components of EMCAD

We conduct a set of experiments on the Synapse multi-organ dataset to understand the effect of the different components of our EMCAD decoder. We start with only the encoder and add different modules, such as the cascaded structure, LGAG, and MSCAM, to understand their effect. Table 4 shows that the cascaded structure of the decoder helps improve performance over the non-cascaded one. The incorporation of LGAG and MSCAM improves performance; however, MSCAM proves to be more effective. When both the LGAG and MSCAM modules are used together, they produce the best DICE score of 83.63%. It is also evident that there is about a 3.53% improvement in DICE score for an additional 0.381G FLOPs and 1.91M parameters.

5.2. Effect of multi-scale kernels in MSCAM

We conduct another set of experiments on the Synapse multi-organ and ClinicDB datasets to understand the effect of the different multi-scale kernels used for the depth-wise convolutions in MSDC. Table 5 reports these results, which show that performance improves from the 1×1 to the 3×3 kernel. When the 1×1 kernel is used together with 3×3, it improves more than when using either alone. However, when two 3×3 kernels are used together, performance drops. The incorporation of a 5×5 kernel alongside the 1×1 and 3×3 kernels further improves performance and achieves the best results on both the Synapse multi-organ and ClinicDB datasets. If we add additional larger kernels (e.g., 7×7, 9×9), the performance on both datasets drops. Based on these empirical observations, we choose the [1, 3, 5] kernels in all our experiments.

5.3. Comparison with the baseline decoder

In Table 6, we report the experimental results along with the computational complexity of our EMCAD decoder and a baseline decoder, namely CASCADE.
From Table 6, we can see that our EMCAD decoder with PVTv2-b2 requires 80.3% fewer FLOPs and 79.4% fewer parameters while outperforming the respective CASCADE decoder (by 0.85%). Similarly, our EMCAD decoder with PVTv2-B0 achieves a 1.43% better DICE score than the CASCADE decoder with 78.1% fewer parameters and 74.9% fewer FLOPs.

6. Conclusions

In this paper, we have presented EMCAD, a new and efficient multi-scale convolutional attention decoder designed for multi-stage feature aggregation and refinement in medical image segmentation. EMCAD employs a multi-scale depth-wise convolution block, which is key for capturing diverse scale information within feature maps, a critical factor for precision in medical image segmentation. This design choice, using depth-wise convolutions instead of standard 3×3 convolution blocks, makes EMCAD notably efficient. Our experiments reveal that EMCAD surpasses the recent CASCADE decoder in DICE scores with 79.4% fewer parameters and 80.3% fewer FLOPs. Our extensive experiments also confirm EMCAD's superior performance compared to SOTA methods across 12 public datasets covering six different 2D medical image segmentation tasks. EMCAD's compatibility with smaller encoders makes it an excellent fit for point-of-care applications while maintaining high performance. We anticipate that our EMCAD decoder will be a valuable asset in enhancing a variety of medical image segmentation and semantic segmentation tasks.

Acknowledgements: This work is supported in part by the NSF grant CNS 2007284, and in part by the iMAGiNE Consortium (https://imagine.utexas.edu/).

References

[1] Walid Al-Dhabyani, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy. Dataset of breast ultrasound images. Data in Brief, 28:104863, 2020.
[2] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 39(12):2481–2495, 2017.
[3] Jorge Bernal, F. Javier Sánchez, Gloria Fernández-Esparrach, Debora Gil, Cristina Rodríguez, and Fernando Vilariño. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput. Med. Imaging Graph., 43:99–111, 2015.
[4] Juan C. Caicedo, Allen Goodman, Kyle W. Karhohs, Beth A. Cimini, Jeanelle Ackerman, Marzieh Haghighi, CherKeng Heng, Tim Becker, Minh Doan, Claire McQuin, et al. Nucleus segmentation across imaging experiments: the 2018 data science bowl. Nature Methods, 16(12):1247–1253, 2019.
[5] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. Swin-Unet: Unet-like pure transformer for medical image segmentation. arXiv preprint arXiv:2105.05537, 2021.
[6] Albert Cardona, Stephan Saalfeld, Stephan Preibisch, Benjamin Schmid, Anchi Cheng, Jim Pulokas, Pavel Tomancak, and Volker Hartenstein. An integrated micro- and macroarchitectural analysis of the drosophila brain by computer-assisted serial section electron microscopy. PLoS Biology, 8(10):e1000502, 2010.
[7] Gongping Chen, Lei Li, Yu Dai, Jianxun Zhang, and Moi Hoon Yap. AAU-Net: an adaptive attention u-net for breast lesions segmentation in ultrasound images. IEEE Trans. Med. Imaging, 2022.
[8] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L. Yuille, and Yuyin Zhou. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306, 2021.
[9] Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In IEEE Conf. Comput. Vis. Pattern Recog., pages 5659–5667, 2017.
[10] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille.
DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell., 40(4):834–848, 2017.
[11] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Eur. Conf. Comput. Vis., pages 801–818, 2018.
[12] Shuhan Chen, Xiuli Tan, Ben Wang, and Xuelong Hu. Reverse attention for salient object detection. In Eur. Conf. Comput. Vis., pages 234–250, 2018.
[13] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882, 2021.
[14] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1902.03368, 2019.
[15] Noel C. F. Codella, David Gutman, M. Emre Celebi, Brian Helba, Michael A. Marchetti, Stephen W. Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC). In IEEE Int. Symp. Biomed. Imaging, pages 168–172. IEEE, 2018.
[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conf. Comput. Vis. Pattern Recog., pages 248–255. IEEE, 2009.
[17] Bo Dong, Wenhai Wang, Deng-Ping Fan, Jinpeng Li, Huazhu Fu, and Ling Shao. Polyp-PVT: Polyp segmentation with pyramid vision transformers. arXiv preprint arXiv:2108.06932, 2021.
[18] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[19] Isensee et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203–211, 2021.
[20] Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao. PraNet: Parallel reverse attention network for polyp segmentation. In Int. Conf. Med. Image Comput. Comput. Assist. Interv., pages 263–273. Springer, 2020.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., pages 770–778, 2016.
[22] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[23] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conf. Comput. Vis. Pattern Recog., pages 7132–7141, 2018.
[24] Huimin Huang, Lanfen Lin, Ruofeng Tong, Hongjie Hu, Qiaowei Zhang, Yutaro Iwamoto, Xianhua Han, Yen-Wei Chen, and Jian Wu. UNet 3+: A full-scale connected UNet for medical image segmentation. In ICASSP, pages 1055–1059. IEEE, 2020.
[25] Xiaohong Huang, Zhifang Deng, Dandan Li, and Xueguang Yuan. MISSFormer: An effective medical image segmentation transformer. arXiv preprint arXiv:2109.07162, 2021.
[26] Nabil Ibtehaz and Daisuke Kihara. ACC-UNet: A completely convolutional unet model for the 2020s. In Int. Conf. Med. Image Comput. Comput. Assist. Interv., pages 692–702. Springer, 2023.
[27] Sergey Ioffe and Christian Szegedy.
Batch normalization: Accelerating deep network training by reducing internal co- variate shift. In Int. Conf. Mach. Learn. , pages 448–456. pmlr, 2015. 3 [28] Md Amirul Islam, Sen Jia, and Neil DB Bruce. How much position information do convolutional neural networks en- code? arXiv preprint arXiv:2001.08248 , 2020. 1 [29] Debesh Jha, Pia H Smedsrud, Michael A Riegler, P ˚al Halvorsen, Thomas de Lange, Dag Johansen, and H ˚avard D Johansen. Kvasir-seg: A segmented polyp dataset. In Int. Conf. Multimedia Model. , pages 451–462. Springer, 2020. 6, 1 [30] Taehun Kim, Hyemin Lee, and Daijin Kim. Uacanet: Uncer- tainty augmented context attention for polyp segmentation. InACM Int. Conf. Multimedia , pages 2167–2175, 2021. 6 [31] Alex Krizhevsky and Geoff Hinton. Convolutional deep be- lief networks on cifar-10. Unpublished manuscript , 40(7): 1–9, 2010. 4 [32] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural net- works. Adv. Neural Inform. Process. Syst. , 25, 2012. 2 [33] Xian Lin, Zengqiang Yan, Xianbo Deng, Chuansheng Zheng, and Li Yu. Convformer: Plug-and-play cnn-style transformers for improving medical image segmentation. In Int. Conf. Med. Image Comput. Comput. Assist. Interv. , pages 642–651. Springer, 2023. 1 [34] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Int. Conf. Comput. Vis. , pages 10012–10022, 2021. 1, 2, 3 [35] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feicht- enhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In IEEE Conf. Comput. Vis. Pattern Recog. , pages 11976–11986, 2022. 2 [36] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. 6 [37] Ange Lou, Shuyue Guan, and Murray Loew. Dc-unet: re- thinking the u-net architecture with dual channel efficient cnn for medical image segmentation. 
In Med. Imaging 2021: Image Process. , pages 758–768. SPIE, 2021. 1, 2 [38] Ange Lou, Shuyue Guan, Hanseok Ko, and Murray H Loew. Caranet: context axial reverse attention network for segmen- tation of small medical objects. In Med. Imaging 2022: Im- age Process. , pages 81–92. SPIE, 2022. 6 [39] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Int. Conf. Mach. Learn. , pages 807–814, 2010. 3 [40] Phan Ngoc Lan, Nguyen Sy An, Dao Viet Hang, Dao Van Long, Tran Quang Trung, Nguyen Thi Thuy, and Dinh Viet Sang. Neounet: Towards accurate colon polyp segmentation and neoplasm detection. In Adv. Vis. Comput. – Int. Symp. , pages 15–28. Springer, 2021. 6, 1[41] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al. Atten- tion u-net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 , 2018. 1, 2, 3, 6 [42] Md Mostafijur Rahman and Radu Marculescu. Medical image segmentation via cascaded attention decoding. In IEEE/CVF Winter Conf. Appl. Comput. Vis. , pages 6222– 6231, 2023. 1, 3, 4, 5, 6, 7 [43] Md Mostafijur Rahman and Radu Marculescu. Multi-scale hierarchical vision transformer with cascaded attention de- coding for medical image segmentation. In Med. Imaging Deep Learn. , 2023. 1, 3, 5, 7 [44] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. InInt. Conf. Med. Image Comput. Comput. Assist. Interv. , pages 234–241. Springer, 2015. 1, 2, 6 [45] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zh- moginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In IEEE Conf. Comput. Vis. Pattern Recog. , pages 4510–4520, 2018. 2, 4 [46] Karen Simonyan and Andrew Zisserman. Very deep convo- lutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 , 2014. 
2 [47] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conf. Comput. Vis. Pattern Recog. , pages 1–9, 2015. 2 [48] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In Int. Conf. Mach. Learn. , pages 6105–6114. PMLR, 2019. 2 [49] Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In Eur. Conf. Comput. Vis. , pages 459–479. Springer, 2022. 1, 2 [50] Jeya Maria Jose Valanarasu and Vishal M Patel. Unext: Mlp- based rapid medical image segmentation network. In Int. Conf. Med. Image Comput. Comput. Assist. Interv. , pages 23–33. Springer, 2022. 6, 1 [51] David V ´azquez, Jorge Bernal, F Javier S ´anchez, Gloria Fern ´andez-Esparrach, Antonio M L ´opez, Adriana Romero, Michal Drozdzal, and Aaron Courville. A benchmark for endoluminal scene segmentation of colonoscopy images. J. Healthc. Eng. , 2017, 2017. 6, 1 [52] Haonan Wang, Peng Cao, Jiaqi Wang, and Osmar R Zaiane. Uctransnet: rethinking the skip connections in u-net from a channel-wise perspective with transformer. In AAAI , pages 2441–2449, 2022. 1, 3 [53] Hongyi Wang, Shiao Xie, Lanfen Lin, Yutaro Iwamoto, Xian-Hua Han, Yen-Wei Chen, and Ruofeng Tong. Mixed transformer u-net for medical image segmentation. In ICASSP , pages 2390–2394. IEEE, 2022. 6, 7 [54] Jinfeng Wang, Qiming Huang, Feilong Tang, Jia Meng, Jion- glong Su, and Sifan Song. Stepwise feature fusion: Local guides global. arXiv preprint arXiv:2203.03635 , 2022. 1, 6 10 [55] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyra- mid vision transformer: A versatile backbone for dense pre- diction without convolutions. In Int. Conf. Comput. Vis. , pages 568–578, 2021. 
1, 2 [56] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvt v2: Improved baselines with pyramid vision transformer. Comput. Vis. Media , 8(3):415–424, 2022. 1, 2, 3, 5, 6 [57] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In Eur. Conf. Comput. Vis. , pages 3–19, 2018. 1, 3, 4 [58] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and ef- ficient design for semantic segmentation with transformers. Adv. Neural Inform. Process. Syst. , 34:12077–12090, 2021. 2, 3 [59] Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, and Shuicheng Yan. Metaformer is actually what you need for vision. In IEEE Conf. Comput. Vis. Pattern Recog. , pages 10819–10829, 2022. 1 [60] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural net- work for mobile devices. In IEEE Conf. Comput. Vis. Pattern Recog. , pages 6848–6856, 2018. 4 [61] Yundong Zhang, Huiye Liu, and Qiang Hu. Transfuse: Fus- ing transformers and cnns for medical image segmentation. InInt. Conf. Med. Image Comput. Comput. Assist. Interv. , pages 14–24. Springer, 2021. 1, 3, 6 [62] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net ar- chitecture for medical image segmentation. In Deep Learn. Med. Image Anal. Multimodal Learn. Clin. Decis. Support , pages 3–11. Springer, 2018. 1, 2, 6 11 EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation Supplementary Material 7. Experimental Details This section extends our Section 4 in the original paper by describing the datasets and evaluation metrics, followed by additional experimental results. 7.1. 
Datasets To evaluate the performance of our EMCAD decoder, we carry out experiments across 12 datasets that belong to six medical image segmentation tasks, as described next. Polyp segmentation: We use five polyp segmentation datasets: Kvasir [29] (1,000 images), ClinicDB [3] (612 images), ColonDB [51] (379 images), ETIS [51] (196 images), and BKAI [40] (1,000 images). These datasets contain images from different imaging centers/clinics, offering greater diversity in image nature as well as in the size and shape of polyps. Abdomen organ segmentation: We use the Synapse multi-organ dataset¹ for abdomen organ segmentation. This dataset contains 30 abdominal CT scans with 3,779 axial contrast-enhanced slices. Each CT scan has 85-198 slices of 512×512 pixels. Following TransUNet [8], we use the same 18 scans for training (2,212 axial slices) and 12 scans for validation. We segment only eight abdominal organs, namely aorta, gallbladder (GB), left kidney (KL), right kidney (KR), liver, pancreas (PC), spleen (SP), and stomach (SM). Cardiac organ segmentation: We use the ACDC dataset² for cardiac organ segmentation. It contains 100 cardiac MRI scans covering three sub-organs, namely right ventricle (RV), myocardium (Myo), and left ventricle (LV). Following TransUNet [8], we use 70 cases (1,930 axial slices) for training, 10 for validation, and 20 for testing. Skin lesion segmentation: We use ISIC17 [15] (2,000 training, 150 validation, and 600 testing images) and ISIC18 [14] (2,594 images) for skin lesion segmentation. Breast cancer segmentation: We use the BUSI [1] dataset for breast cancer segmentation. Following [50], we use 647 (437 benign and 210 malignant) images from this dataset. Cell nuclei/structure segmentation: We use the DSB18 [4] (670 images) and EM [6] (30 images) biological imaging datasets for cell nuclei/structure segmentation. We use a train-val-test split of 80:10:10 for the ClinicDB, Kvasir, ColonDB, ETIS, BKAI, ISIC18, DSB18, EM, and BUSI datasets.
For ISIC17, we use the official train-val-test sets provided by the competition organizer.
¹https://www.synapse.org/#!Synapse:syn3193805/wiki/217789
²https://www.creatis.insa-lyon.fr/Challenge/acdc/
7.2. Evaluation metrics We use the DICE score to evaluate performance on all the datasets. However, we also use the 95% Hausdorff Distance (HD95) and mIoU as additional evaluation metrics for Synapse multi-organ segmentation. The DICE score DSC(Y, P), IoU(Y, P), and HD95 distance D_H(Y, P) are calculated using Equations 12, 13, and 14, respectively:
DSC(Y, P) = \frac{2 |Y \cap P|}{|Y| + |P|} \times 100 (12)
IoU(Y, P) = \frac{|Y \cap P|}{|Y \cup P|} \times 100 (13)
D_H(Y, P) = \max\left\{ \max_{y \in Y} \min_{p \in P} d(y, p), \; \max_{p \in P} \min_{y \in Y} d(y, p) \right\} (14)
where Y and P are the ground truth and predicted segmentation maps, respectively. 7.3. Qualitative results This subsection describes the qualitative results of different methods, including our EMCAD. From the qualitative results on the Synapse multi-organ dataset in Figure 4, we can see that most of the methods face challenges segmenting the left kidney (orange) and part of the pancreas (pink). However, our PVT-EMCAD-B0 (Figure 4g) and PVT-EMCAD-B2 (Figure 4h) can segment those organs more accurately (see the red rectangular box) with significantly lower computational costs. Similarly, qualitative results of polyp segmentation on a representative image from the ClinicDB dataset in Figure 5 show that the predicted segmentation outputs of our PVT-EMCAD-B0 (Figure 5p) and PVT-EMCAD-B2 (Figure 5q) have strong overlaps with the GroundTruth mask (Figure 5r), while existing SOTA methods exhibit false segmentation of the polyp (see the red rectangular box). 8. Additional Ablation Study This section further elaborates on Section 5 by detailing five additional ablation studies related to our architectural design and experimental setup. 8.1. Parallel vs.
sequential depth-wise convolution We have conducted another set of experiments to decide whether to use multi-scale depth-wise convolutions in parallel or sequentially. Table 7 presents the results of these experiments, which show that the arrangement has no significant impact, though the parallel convolutions provide slightly improved performance (0.03% to 0.15%). We also observe higher standard deviations among runs in the case of sequential convolutions. Hence, in all our experiments, we use multi-scale depth-wise convolutions in parallel.
Figure 4. Qualitative results of multi-organ segmentation on the Synapse Multi-organ dataset. The red rectangular box highlights incorrectly segmented organs by SOTA methods.
Figure 5. Qualitative results of polyp segmentation. The red rectangular box highlights incorrectly segmented polyps by SOTA methods.
Table 7. Results of parallel and sequential depth-wise convolution in MSDC on the Synapse multi-organ and ClinicDB datasets. All results are averaged over five runs. Best results are in bold.
Architectures    Depth-wise convolutions    Synapse        ClinicDB
PVT-EMCAD-B0     Sequential                 81.82 ± 0.3    94.57 ± 0.2
PVT-EMCAD-B0     Parallel                   81.97 ± 0.2    94.60 ± 0.2
PVT-EMCAD-B2     Sequential                 83.54 ± 0.3    95.15 ± 0.3
PVT-EMCAD-B2     Parallel                   83.63 ± 0.2    95.21 ± 0.2
Table 8. LGAG vs. AG (attention gate) [41] on the Synapse multi-organ dataset. The total #Params and #FLOPs of the three AG/LGAGs in our decoder are reported for an input resolution of 256×256. All results are averaged over five runs. Best results are in bold.
Architectures    Module    Params(K)    FLOPs(M)    Synapse
PVT-EMCAD-B0     AG        31.62        15.91       81.74
PVT-EMCAD-B0     LGAG      5.51         5.24        81.97
PVT-EMCAD-B2     AG        124.68       61.68       83.51
PVT-EMCAD-B2     LGAG      11.01        10.47       83.63
8.2. Effectiveness of our large-kernel grouped attention gate (LGAG) over attention gate (AG) Table 8 presents experimental results of EMCAD with the original AG [41] and our LGAG.
We can conclude that our LGAG achieves better DICE scores with significant reductions in #Params (82.57% for PVT-EMCAD-B0 and 91.17% for PVT-EMCAD-B2) and #FLOPs (67.06% for PVT-EMCAD-B0 and 83.03% for PVT-EMCAD-B2) compared to AG. The reduction in #Params and #FLOPs is bigger for the larger models. Therefore, our LGAG demonstrates improved scalability with models that have a greater number of channels, yielding enhanced DICE scores.
Table 9. Effect of transfer learning from ImageNet pre-trained weights on the Synapse multi-organ dataset. ↑ (↓) denotes the higher (lower) the better. All results are averaged over five runs. Best results are in bold.
Architectures    Pretrain    DICE↑    HD95↓    mIoU↑    Aorta    GB       KL       KR       Liver    PC       SP       SM
PVT-EMCAD-B0     No          77.47    19.93    66.72    81.96    69.41    83.88    74.82    93.45    54.41    88.97    72.85
PVT-EMCAD-B0     Yes         81.97    17.39    72.64    87.21    66.62    87.48    83.96    94.57    62.00    92.66    81.22
PVT-EMCAD-B2     No          80.18    18.83    70.21    85.98    68.10    84.62    79.93    93.96    61.61    90.99    76.23
PVT-EMCAD-B2     Yes         83.63    15.68    74.65    88.14    68.87    88.08    84.10    95.26    68.51    92.17    83.92
Table 10. Effect of deep supervision (DS). PVT-EMCAD-B2 with DS achieves slightly better DICE scores in 6 out of 7 datasets.
DS     EM       BUSI     Clinic    Kvasir    ISIC18    Synapse    ACDC
No     95.74    79.64    94.96     92.51     90.74     82.03      92.08
Yes    95.53    80.25    95.21     92.75     90.96     83.63      92.12
Table 11. Effect of input resolutions on the Synapse multi-organ dataset. All results are averaged over five runs.
Architectures    Resolutions    FLOPs(G)    DICE
PVT-EMCAD-B0     224×224        0.64        81.97
PVT-EMCAD-B0     256×256        0.84        82.63
PVT-EMCAD-B0     384×384        1.89        84.81
PVT-EMCAD-B0     512×512        3.36        85.52
PVT-EMCAD-B2     224×224        4.29        83.63
PVT-EMCAD-B2     256×256        5.60        84.47
PVT-EMCAD-B2     384×384        12.59       85.78
PVT-EMCAD-B2     512×512        22.39       86.53
8.3. Effect of transfer learning from ImageNet pre-trained weights We conduct experiments on the Synapse multi-organ dataset to show the effect of transfer learning from the ImageNet pre-trained encoder.
Table 9 reports the results of these experiments, which show that transfer learning from ImageNet pre-trained PVT-v2 encoders significantly boosts the performance. Specifically, for PVT-EMCAD-B0, the DICE, mIoU, and HD95 scores are improved by 4.5%, 5.92%, and 2.54, respectively. Likewise, for PVT-EMCAD-B2, the DICE, mIoU, and HD95 scores are improved by 3.45%, 4.44%, and 3.15, respectively. We can also conclude that transfer learning has a comparatively greater impact on the smaller PVT-EMCAD-B0 model than on the larger PVT-EMCAD-B2 model. For individual organs, transfer learning significantly boosts the performance of all organ segmentation, except the gallbladder (GB). 8.4. Effect of deep supervision We have conducted an ablation study that drops the Deep Supervision (DS). Results of our PVT-EMCAD-B2 on seven datasets are given in Table 10. Our PVT-EMCAD-B2 with DS achieves slightly better DICE scores in six out of seven datasets. Among all the datasets, DS has the largest impact on the Synapse Multi-organ dataset. 8.5. Effect of input resolutions Table 11 presents the results of our PVT-EMCAD-B0 and PVT-EMCAD-B2 architectures with different input resolutions. From this table, it is evident that the DICE scores improve with the increase in input resolution. However, these improvements in DICE score come with an increase in #FLOPs. Our PVT-EMCAD-B0 achieves an 85.52% DICE score with only 3.36G FLOPs when using 512×512 inputs. On the other hand, our PVT-EMCAD-B2 achieves the best DICE score (86.53%) with 22.39G FLOPs when using 512×512 inputs. We also observe that our PVT-EMCAD-B2 with 5.60G FLOPs when using 256×256 inputs shows a 1.05% lower DICE score than PVT-EMCAD-B0 with 3.36G FLOPs. Therefore, we can conclude that PVT-EMCAD-B0 is more suitable for larger input resolutions than PVT-EMCAD-B2.
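As a concrete reference for the DICE and IoU metrics reported throughout these ablations (Equations 12 and 13), a minimal pure-Python sketch on flat binary masks could look as follows; the flat-list mask representation and function names are illustrative assumptions, not the paper's evaluation code:

```python
def dice_score(y_true, y_pred):
    """DSC = 2|Y ∩ P| / (|Y| + |P|) × 100, on flat binary masks."""
    inter = sum(t and p for t, p in zip(y_true, y_pred))
    total = sum(y_true) + sum(y_pred)
    return 100.0 * 2 * inter / total if total else 100.0  # both-empty convention

def iou_score(y_true, y_pred):
    """IoU = |Y ∩ P| / |Y ∪ P| × 100, on flat binary masks."""
    inter = sum(t and p for t, p in zip(y_true, y_pred))
    union = sum(t or p for t, p in zip(y_true, y_pred))
    return 100.0 * inter / union if union else 100.0

y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
print(round(dice_score(y_true, y_pred), 2))  # → 66.67
print(round(iou_score(y_true, y_pred), 2))   # → 50.0
```

In practice the masks would be flattened 2D segmentation maps, and HD95 (Equation 14) additionally requires boundary distances, which this sketch omits.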
3 | 6 | 1 | The EMCAD model has approximately 1.91 million parameters and 0.381G FLOPs for a standard encoder, making it relatively lightweight compared to larger models like UNet and TransUNet, which have tens of millions of parameters and significantly higher FLOP counts. Given the medical image segmentation task, a typical training time for similar models using a dataset of medical images (assumed to be of moderate size) is estimated at around 6 hours on a single GPU, assuming a batch size that fits within the GPU memory constraints. Based on the model's parameter count, it can likely benefit from optimizations such as mixed-precision training, which can further reduce training time. Additionally, the paper mentions using 12 different datasets during evaluation, suggesting comprehensive validation without necessarily impacting training time per model significantly. Overall, the use of a single GPU is feasible for this model given its efficient design and lighter architecture compared to other state-of-the-art models in the field. | yes | Yes | CV | EMCAD: Efficient Multi-scale Convolutional Attention Decoding for Medical Image Segmentation | 2024-05-11 0:00:00 | https://github.com/sldgroup/emcad | 1 | https://drive.google.com/drive/folders/1ACJEoTp-uqfFJ73qS3eUObQh52nGuzCd | 21 hours for the SYNAPSE dataset. | https://colab.research.google.com/drive/1jYDic29ht3AjFGx5rXY_Fp5hcoxoRP6M?usp=sharing | Yes | -- No definition of how to run on Kvasir-SEG data. Need to create a separate dataloader as input for Kvasir-SEG data. But it ran on the Synapse dataset provided as their default training set. |
ETTh1 (336) Multivariate | SOFTS | [] | SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion | 2024-04-22T00:00:00 | https://arxiv.org/abs/2404.14197v3 | [
"https://github.com/secilia-cxy/softs"
] | {'MSE': '0.480', 'MAE': '0.452'} | [
"MSE",
"MAE"
] | Given the following paper and codebase:
Paper: SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion
Codebase: https://github.com/secilia-cxy/softs
Improve the SOFTS model on the ETTh1 (336) Multivariate dataset. The result
should improve on the following metrics: {'MSE': '0.480', 'MAE': '0.452'}. You must use only the codebase provided.
| SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion Lu Han∗, Xu-Yang Chen∗, Han-Jia Ye†, De-Chuan Zhan School of Artificial Intelligence, Nanjing University, China National Key Laboratory for Novel Software Technology, Nanjing University, China {hanlu, chenxy, yehj, zhandc}@lamda.nju.edu.cn Abstract Multivariate time series forecasting plays a crucial role in various fields such as fi- nance, traffic management, energy, and healthcare. Recent studies have highlighted the advantages of channel independence to resist distribution drift but neglect channel correlations, limiting further enhancements. Several methods utilize mech- anisms like attention or mixer to address this by capturing channel correlations, but they either introduce excessive complexity or rely too heavily on the corre- lation to achieve satisfactory results under distribution drifts, particularly with a large number of channels. Addressing this gap, this paper presents an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS), which incorporates a novel STar Aggregate-Redistribute (STAR) module. Unlike tradi- tional approaches that manage channel interactions through distributed structures, e.g., attention, STAR employs a centralized strategy to improve efficiency and reduce reliance on the quality of each channel. It aggregates all series to form a global core representation, which is then dispatched and fused with individual series representations to facilitate channel interactions effectively. SOFTS achieves superior performance over existing state-of-the-art methods with only linear com- plexity. The broad applicability of the STAR module across different forecasting models is also demonstrated empirically. We have made our code publicly available athttps://github.com/Secilia-Cxy/SOFTS . 
1 Introduction Time series forecasting plays a critical role in numerous applications across various fields, including environment [ 9], traffic management [ 15], energy [ 16], and healthcare [ 27]. The ability to accurately predict future values based on previously observed data is fundamental for decision-making, policy development, and strategic planning in these areas. Historically, models such as ARIMA and Exponential Smoothing were standard in forecasting, noted for their simplicity and effectiveness in certain contexts [2]. However, the emergence of deep learning models, particularly those exploiting structures like Recurrent Neural Networks (RNNs) [ 14,3,29] and Convolutional Neural Networks (CNN) [ 1,8], has shifted the paradigm towards more complex models capable of understanding intricate patterns in time series data. To overcome the inability to capture long-term dependencies, Transformer-based models have been a popular direction and achieved remarkable performance, especially on long-term multivariate time series forecasting [48, 28, 26]. Earlier on, Transformer-based methods perform embedding techniques like linear or convolution layers to aggregate information from different channels, then extract information along the temporal dimension via attention mechanisms [ 48,35,49]. However, such channel mixing structures were ∗Equal Contribution †Corresponding Author 38th Conference on Neural Information Processing Systems (NeurIPS 2024).arXiv:2404.14197v3 [cs.LG] 18 Nov 2024 found vulnerable to the distribution drift, to the extent that they were often less effective than simpler methods like linear models [ 45,11]. Consequently, some studies adopted a channel-independence strategy and achieved favorable results [ 28,23,34]. Yet, these methods overlooked the correlation between channels, thereby hindering further improvements in model performance. 
Subsequent studies captured this correlation information through mechanisms such as attention, achieving better outcomes, and demonstrating the necessity of information transfer between channels [ 47,33,26]. However, these approaches either employed attention mechanisms with high complexity [ 26] or struggled to achieve state-of-the-art (SOTA) performance [ 7]. Therefore, effectively integrating the robustness of channel independence and utilizing the correlation between channels in a simpler and more efficient manner is crucial for building better time series forecasting models. In response to these challenges, this study introduces an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS), designed to streamline the forecasting process while also enhancing prediction accuracy. SOFTS first embeds the series on multiple channels and then extracts the mutual interaction by the novel STar Aggregate-Redistribute (STAR) module. The STAR at the heart of SOFTS ensures scalability and reduces computational demands from the common quadratic complexity to only linear. To achieve that, instead of employing a distributed interaction structure, STAR employs a centralized structure that first gets the global core representation by aggregating the information from different channels. Then the local series representation is fused with the core representation to realize the indirect interaction between channels. This centralized interaction not only reduces the comparison complexity but also takes advantage of both channel independence and aggregated information from all the channels that can help improve the local ones [ 40]. Our empirical results show that our SOFTS method achieves better results against current state-of-the-art methods with lower computation resources. Besides, SOFTS can scale to time series with a large number of channels or time steps, which is difficult for many methods based on Transformer without specific modification. 
Last, the newly proposed STAR is a universal module that can replace the attention in many models. Its efficiency and effectiveness are validated on various current transformer-based time series forecasters. Our contributions are as follows:
1. We present the Series-cOre Fused Time Series (SOFTS) forecaster, a simple MLP-based model that demonstrates state-of-the-art performance with lower complexity.
2. We introduce the STar Aggregate-Redistribute (STAR) module, which serves as the foundation of SOFTS. STAR is designed as a centralized structure that uses a core to aggregate and exchange information from the channels. Compared to distributed structures like attention, STAR not only reduces the complexity but also improves robustness against anomalies in channels.
3. Lastly, through extensive experiments, the effectiveness and scalability of SOFTS are validated. The universality of STAR is also validated on various attention-based time series forecasters.
2 Related Work Time series forecasting. Time series forecasting is a critical area of research that finds applications in both industry and academia. With the powerful representation capability of neural networks, deep forecasting models have undergone rapid development [22,38,37,4,5]. Two widely used methods for time series forecasting are recurrent neural networks (RNNs) and convolutional neural networks (CNNs). RNNs model successive time points based on the Markov assumption [14,3,29], while CNNs extract variation information along the temporal dimension using techniques such as temporal convolutional networks (TCNs) [1,8]. However, due to the Markov assumption in RNNs and the local reception property in TCNs, both models are unable to capture long-term dependencies in sequential data. Recently, the potential of Transformer models for long-term time series forecasting tasks has garnered attention due to their ability to extract long-term dependencies via the attention mechanism [48, 35, 49].
Efficient long-term multivariate forecasting and channel independence. Long-term multivariate time series forecasting is increasingly significant in decision-making processes [9]. While Transformers have shown remarkable efficacy in various domains [32], their complexity poses challenges in long-term forecasting scenarios. Efforts to adapt Transformer-based models for time series with reduced complexity include the Informer, which utilizes a probabilistic subsampling strategy for more efficient attention mechanisms [48], and the Autoformer, which employs autocorrelation and fast Fourier transforms to expedite computations [35]. Similarly, FEDformer applies attention within the frequency domain using selected components to enhance performance [49]. Despite these innovations, models mixing channels in multivariate series often exhibit reduced robustness to distribution drifts and achieve subpar performance [45,11]. Consequently, some researchers have adopted a channel-independent approach, simplifying the model architecture and delivering robust results as well [28,23]. However, ignoring the interactions among variates can limit further advancements. Recent trends have therefore shifted towards leveraging attention mechanisms to capture channel correlations [47,33,26].
Figure 1: Overview of our SOFTS method. The multivariate time series is first embedded along the temporal dimension to get the series representation for each channel. Then the channel correlation is captured by multiple layers of STAR modules. The STAR module utilizes a centralized structure that first aggregates the series representation to obtain a global core representation, and then dispatches and fuses the core with each series, which encodes the local information.
Even though the performance is promising, their scalability is limited on large datasets. Another stream of research focuses on modeling time and channel dependencies through simpler structures like MLPs [46,7,42]. Yet, they usually achieve sub-optimal performance compared to SOTA transformer-based methods, especially when the number of channels is large. In this paper, we propose a new MLP-based method that breaks the dilemma of performance and efficiency, achieving state-of-the-art performance with merely linear complexity in both the number of channels and the length of the lookback window. 3 SOFTS Multivariate time series forecasting (MTSF) deals with time series data that contain multiple variables, or channels, at each time step. Given historical values X ∈ R^{C×L}, where L represents the length of the lookback window and C is the number of channels, the goal of MTSF is to predict the future values Y ∈ R^{C×H}, where H > 0 is the forecast horizon. 3.1 Overview Our Series-cOre Fused Time Series forecaster (SOFTS) comprises the following components, and its structure is illustrated in Figure 1. Reversible instance normalization. Normalization is a common technique to calibrate the distribution of input data. In time series forecasting, the local statistics of the history are usually removed to stabilize the prediction of the base forecaster, and these statistics are restored to the model prediction [17]. Following the common practice in many state-of-the-art models [28,26], we apply reversible instance normalization, which centers the series to zero mean, scales them to unit variance, and reverses the normalization on the forecasted series. For the PEMS dataset, we follow Liu et al. [26] to selectively perform normalization according to the performance. Series embedding. Series embedding is an extreme case of the prevailing patch embedding in time series [28], which is equivalent to setting the patch length to the length of the whole series [26].
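The two preprocessing steps just described, reversible instance normalization and per-channel series embedding, can be sketched in pure Python as follows; the toy window and hand-picked projection weights are illustrative assumptions (the released SOFTS code implements these as PyTorch modules):

```python
import math

def revin_normalize(x, eps=1e-5):
    """Center one channel's lookback window to zero mean, unit variance;
    keep the statistics so the forecast can be de-normalized later."""
    mean = sum(x) / len(x)
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x) + eps)
    return [(v - mean) / std for v in x], (mean, std)

def revin_denormalize(y, stats):
    """Reverse the normalization on a (forecasted) series."""
    mean, std = stats
    return [v * std + mean for v in y]

def series_embed(x, weights, bias):
    """Series embedding: a single linear projection R^L -> R^d of the whole window."""
    return [sum(w * v for w, v in zip(row, x)) + b for row, b in zip(weights, bias)]

history = [10.0, 12.0, 11.0, 13.0]                 # one channel, L = 4
normed, stats = revin_normalize(history)
restored = revin_denormalize(normed, stats)        # identity forecast for illustration
assert all(abs(a - b) < 1e-6 for a, b in zip(restored, history))

# d = 2 embedding with arbitrary illustrative weights
emb = series_embed(normed, [[0.5, 0.5, 0.0, 0.0], [0.0, 0.0, 0.5, 0.5]], [0.0, 0.0])
assert len(emb) == 2
```

A real forecaster would predict in the normalized space and only then apply `revin_denormalize` with the stored statistics.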
Unlike patch embedding, series embedding does not produce an extra dimension and is thus less complex than patch embedding. Therefore, in this work, we perform series embedding on the lookback window.
Figure 2: The comparison of the STAR module and several common modules, like attention, GNN, and mixer. These modules employ a distributed structure to perform the interaction, which relies on the quality of each channel. On the contrary, our STAR module utilizes a centralized structure that first aggregates the information from all the series to obtain a comprehensive core representation. Then the core information is dispatched to each channel. This kind of interaction pattern reduces not only the complexity of interaction but also the reliance on the channel quality.
Concretely, we use a linear projection to embed the series of each channel to S_0 ∈ R^{C×d}, where d is the hidden dimension:
S_0 = Embedding(X). (1)
Channel interaction. The series embedding is refined by multiple layers of STAR modules:
S_i = STAR(S_{i-1}), i = 1, 2, ..., N. (2)
The STAR module utilizes a star-shaped structure that exchanges information between different channels, as will be fully described in the next section. Linear predictor. After N layers of STAR, we use a linear predictor (R^d ↦ R^H) to produce the forecasting results. Assuming the output series representation of layer N to be S_N, the prediction Ŷ ∈ R^{C×H} is computed as: Ŷ = Linear(S_N). 3.2 STar Aggregate-Redistribute Module Our main contribution is a simple but efficient STar Aggregate-Redistribute (STAR) module to capture the dependencies between channels. Existing methods employ modules like attention to extract such interaction. Although these modules directly compare the characteristics of each pair, they are faced with quadratic complexity related to the number of channels.
Besides, such a distributed structure may lack robustness when there are abnormal channels, because it relies on the exact correlations between channels. Existing research on channel independence has already shown that correlations on non-stationary time series can be untrustworthy [45, 11]. To this end, we propose the STAR module to solve the inefficiency of the distributed interaction modules. This module is inspired by the star-shaped centralized system in software engineering: instead of letting the clients communicate with each other, a central server aggregates and exchanges the information [30, 10], which is efficient and reliable. Following this idea, STAR replaces the mutual series interaction with indirect interaction through a core, which represents the global representation across all the channels. Compared to the distributed structure, STAR takes advantage of the robustness brought by aggregating channel statistics [11], and thus achieves even better performance. Figure 2 illustrates the main idea of STAR and its difference from existing modules like attention [32], GNN [19], and Mixer [31]. Given the series representation of each channel as input, STAR first obtains the core representation of the multivariate series, which lies at the heart of our SOFTS method. We define the core representation as follows:

Definition 3.1 (Core Representation). Given a multivariate series with C channels {s_1, s_2, . . . , s_C}, the core representation o is a vector generated by an arbitrary function f with the following form:

o = f(s_1, s_2, . . . , s_C)

The core representation encodes the global information across all the channels.
To obtain such a representation, we employ the following form, which is inspired by the Kolmogorov-Arnold representation theorem [20] and DeepSets [43]:

o_i = Stoch_Pool(MLP_1(S_{i−1}))    (3)

where MLP_1 : R^d → R^{d′} is a projection that maps the series representation from the series hidden dimension d to the core dimension d′, composed of two layers with hidden dimension d and GELU [13] activation. Stoch_Pool is the stochastic pooling [44] that obtains the core representation o ∈ R^{d′} by aggregating the representations of the C series. Stochastic pooling combines the advantages of mean and max pooling. The details of computing the core representation can be found in Appendix B.2. Next, we fuse the representations of the core and all the series:

F_i = Repeat_Concat(S_{i−1}, o_i)    (4)
S_i = MLP_2(F_i) + S_{i−1}    (5)

The Repeat_Concat operation concatenates the core representation o to each series representation, yielding F_i ∈ R^{C×(d+d′)}. Then another MLP (MLP_2 : R^{d+d′} → R^d) is used to fuse the concatenated representation and project it back to the hidden dimension d, i.e., S_i ∈ R^{C×d}. Like many deep learning modules, we also add a residual connection from the input to the output [12].

3.3 Complexity Analysis

We analyze the complexity of each component of SOFTS step by step with respect to the window length L, number of channels C, model dimension d, and forecasting horizon H. The complexity of the reversible instance normalization and the series embedding is O(CL) and O(CLd), respectively. In STAR, assuming d′ = d, MLP_1 is an R^d → R^d mapping with complexity O(Cd^2). Stoch_Pool computes the softmax along the channel dimension, with complexity O(Cd). MLP_2 on the concatenated embedding has complexity O(Cd^2). The complexity of the predictor is O(CdH). In all, the complexity of the encoding part is O(CLd + Cd^2 + CdH), which is linear in C, L, and H. Ignoring the model dimension d, which is a constant in the algorithm and irrelevant to the problem, we compare the complexity of several popular forecasters in Table 1.
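A minimal NumPy sketch of Equations (3)-(5) follows. It implements only the inference-time behavior of stochastic pooling (a softmax-weighted average over the channel dimension); the training-time random sampling, biases, and the exact layer shapes of the released code are omitted or assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
C, d, dp = 8, 32, 16   # channels, series dim d, core dim d'

def mlp(x, W1, W2):
    """Two-layer MLP with GELU activation (tanh approximation)."""
    h = x @ W1
    h = 0.5 * h * (1 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return h @ W2

def stoch_pool(z):
    """Inference-mode stochastic pooling: softmax weights over channels."""
    w = np.exp(z - z.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    return (w * z).sum(axis=0)            # core o in R^{d'}

# random stand-in parameters for MLP_1 (R^d -> R^{d'}) and MLP_2 (R^{d+d'} -> R^d)
W1a = rng.normal(size=(d, d)) / np.sqrt(d)
W1b = rng.normal(size=(d, dp)) / np.sqrt(d)
W2a = rng.normal(size=(d + dp, d)) / np.sqrt(d + dp)
W2b = rng.normal(size=(d, d)) / np.sqrt(d)

def star(S):
    o = stoch_pool(mlp(S, W1a, W1b))                      # Eq. (3): core
    F = np.concatenate([S, np.tile(o, (C, 1))], axis=1)   # Eq. (4): repeat-concat
    return mlp(F, W2a, W2b) + S                           # Eq. (5): fuse + residual

S = rng.normal(size=(C, d))
S_out = star(S)
```

Note how the core `o` is shared by every channel, which is exactly what makes the interaction linear rather than quadratic in C.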
Table 1: Complexity comparison between popular time series forecasters with respect to window length L, number of channels C, and forecasting horizon H. Our method achieves only linear complexity.

Model        | SOFTS (ours)  | iTransformer      | PatchTST      | Transformer
Complexity   | O(CL + CH)    | O(C^2 + CL + CH)  | O(CL^2 + CH)  | O(CL + L^2 + HL + CH)

4 Experiments

Datasets. To thoroughly evaluate the performance of our proposed SOFTS, we conduct extensive experiments on 6 widely used, real-world datasets including ETT (4 subsets), Traffic, Electricity, Weather [48, 35], Solar-Energy [21], and PEMS (4 subsets) [24]. Detailed descriptions of the datasets can be found in Appendix A.

4.1 Forecasting Results

Compared methods. We extensively compare against recent Linear-based or MLP-based methods, including DLinear [45], TSMixer [7], and TiDE [6]. We also consider Transformer-based methods including FEDformer [49], Stationary [25], PatchTST [28], Crossformer [47], and iTransformer [26], as well as CNN-based methods including SCINet [24] and TimesNet [36].

Forecasting benchmarks. The long-term forecasting benchmarks follow the settings in Informer [48] and SCINet [24]. The lookback window length (L) is set to 96 for all datasets. We set the prediction horizon (H) to {12, 24, 48, 96} for PEMS and {96, 192, 336, 720} for the others. Performance comparison among the different methods is conducted on two primary evaluation metrics: Mean Squared Error (MSE) and Mean Absolute Error (MAE). The results of PatchTST and TSMixer are reproduced for the ablation study, and the other results are taken from iTransformer [26].

Implementation details. We use the ADAM optimizer [18] with an initial learning rate of 3×10^-4. This rate is modulated by a cosine learning rate scheduler. The mean squared error (MSE) loss function is utilized for model optimization. We explore the number of STAR blocks N within the set {1, 2, 3, 4} and the dimension of series d within {128, 256, 512}. Additionally, the dimension of the core representation d′ varies among {64, 128, 256, 512}.
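The schedule and search space just described can be sketched as follows; `cosine_lr` is a hypothetical helper showing one standard way to decay the initial rate of 3×10^-4 (the exact scheduler in the released codebase may differ, e.g. in its warmup or floor):

```python
import math

def cosine_lr(step, total_steps, base_lr=3e-4, min_lr=0.0):
    """Cosine decay of the initial learning rate 3e-4 over training."""
    t = min(step / max(total_steps, 1), 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

# hyperparameter grid searched in the paper
N_grid  = [1, 2, 3, 4]              # number of STAR blocks N
d_grid  = [128, 256, 512]           # series dimension d
dp_grid = [64, 128, 256, 512]       # core dimension d'

lrs = [cosine_lr(s, 100) for s in range(101)]
```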
Other detailed implementation choices are described in Appendix B.3.

Main results. As shown in Table 2, SOFTS provides the best or second-best predictive outcomes on all 6 datasets on average. Moreover, compared to previous state-of-the-art methods, SOFTS demonstrates significant advancements. For instance, on the Traffic dataset, SOFTS improves the average MSE from 0.428 to 0.409, a notable reduction of about 4.4%. On the PEMS07 dataset, SOFTS achieves a substantial relative decrease of 13.9% in average MSE, from 0.101 to 0.087. These significant improvements indicate that the SOFTS model possesses robust performance and broad applicability in multivariate time series forecasting tasks, especially in tasks with a large number of channels, such as the Traffic dataset, which includes 862 channels, and the PEMS datasets, which range from 170 to 883 channels.

Table 2: Multivariate forecasting results with horizon H ∈ {12, 24, 48, 96} for PEMS and H ∈ {96, 192, 336, 720} for the others, with fixed lookback window length L = 96. Results are averaged over all prediction horizons. Full results are listed in Table 6.
Models        SOFTS(ours)   iTransformer  PatchTST      TSMixer       Crossformer   TiDE          TimesNet      DLinear       SCINet        FEDformer     Stationary
Metric        MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE    MSE    MAE
ECL           0.174  0.264  0.178  0.270  0.189  0.276  0.186  0.287  0.244  0.334  0.251  0.344  0.192  0.295  0.212  0.300  0.268  0.365  0.214  0.327  0.193  0.296
Traffic       0.409  0.267  0.428  0.282  0.454  0.286  0.522  0.357  0.550  0.304  0.760  0.473  0.620  0.336  0.625  0.383  0.804  0.509  0.610  0.376  0.624  0.340
Weather       0.255  0.278  0.258  0.278  0.256  0.279  0.256  0.279  0.259  0.315  0.271  0.320  0.259  0.287  0.265  0.317  0.292  0.363  0.309  0.360  0.288  0.314
Solar-Energy  0.229  0.256  0.233  0.262  0.236  0.266  0.260  0.297  0.641  0.639  0.347  0.417  0.301  0.319  0.330  0.401  0.282  0.375  0.291  0.381  0.261  0.381
ETTm1         0.393  0.403  0.407  0.410  0.396  0.406  0.398  0.407  0.513  0.496  0.419  0.419  0.400  0.406  0.403  0.407  0.485  0.481  0.448  0.452  0.481  0.456
ETTm2         0.287  0.330  0.288  0.332  0.287  0.330  0.289  0.333  0.757  0.610  0.358  0.404  0.291  0.333  0.350  0.401  0.571  0.537  0.305  0.349  0.306  0.347
ETTh1         0.449  0.442  0.454  0.447  0.453  0.446  0.463  0.452  0.529  0.522  0.541  0.507  0.458  0.450  0.456  0.452  0.747  0.647  0.440  0.460  0.570  0.537
ETTh2         0.373  0.400  0.383  0.407  0.385  0.410  0.401  0.417  0.942  0.684  0.611  0.550  0.414  0.427  0.559  0.515  0.954  0.723  0.437  0.449  0.526  0.516
PEMS03        0.104  0.210  0.113  0.221  0.137  0.240  0.119  0.233  0.169  0.281  0.326  0.419  0.147  0.248  0.278  0.375  0.114  0.224  0.213  0.327  0.147  0.249
PEMS04        0.102  0.208  0.111  0.221  0.145  0.249  0.103  0.215  0.209  0.314  0.353  0.437  0.129  0.241  0.295  0.388  0.092  0.202  0.231  0.337  0.127  0.240
PEMS07        0.087  0.184  0.101  0.204  0.144  0.233  0.112  0.217  0.235  0.315  0.380  0.440  0.124  0.225  0.329  0.395  0.119  0.234  0.165  0.283  0.127  0.230
PEMS08        0.138  0.219  0.150  0.226  0.200  0.275  0.165  0.261  0.268  0.307  0.441  0.464  0.193  0.271  0.379  0.416  0.158  0.244  0.286  0.358  0.201  0.276

Model efficiency. Our SOFTS model demonstrates efficient performance with minimal memory and time consumption.
Figure 3b illustrates the memory and time usage of different models on the Traffic dataset, with lookback window L = 96, horizon H = 720, and batch size 4. Despite their low resource usage, Linear-based or MLP-based models such as DLinear and TSMixer perform poorly with a large number of channels. Figure 3a explores the memory requirements of the three best-performing models from Figure 3b. It reveals that the memory usage of both PatchTST and iTransformer escalates significantly as the number of channels increases. In contrast, our SOFTS model maintains efficient operation, with its complexity scaling linearly with the number of channels, effectively handling large channel counts.

4.2 Ablation Study

In this section, the prediction horizon (H) is set to {12, 24, 48, 96} for PEMS and {96, 192, 336, 720} for the others. All results are averaged over the four horizons. Unless otherwise specified, the lookback window length (L) is set to 96 by default.

Comparison of different pooling methods. The comparison of different pooling methods in STAR is shown in Table 3. The term "w/o STAR" refers to a scenario where an MLP is utilized with the Channel Independent (CI) strategy, without the use of STAR. Mean pooling computes the average of all the series representations. Max pooling selects the maximum value of each hidden feature among all the channels. Weighted average learns a weight for each channel. Stochastic pooling applies random selection during training and a weighted average during testing according to the feature values.
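The four pooling choices just described can be sketched as small NumPy functions. The implementations are illustrative: stochastic pooling is shown with softmax weights over the feature values, sampling a channel per feature during training and taking the weighted average at test time, and the "weighted" variant takes its per-channel weights as a given argument rather than learning them:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pool(z):           # average over channels
    return z.mean(axis=0)

def max_pool(z):            # per-feature maximum over channels
    return z.max(axis=0)

def weighted_pool(z, w):    # per-channel weights (here given, not learned)
    w = np.exp(w - w.max())
    w = w / w.sum()
    return (w[:, None] * z).sum(axis=0)

def stochastic_pool(z, training=False, rng=None):
    """Softmax weights per feature: sample a channel when training,
    take the weighted average at test time."""
    p = np.exp(z - z.max(axis=0, keepdims=True))
    p = p / p.sum(axis=0, keepdims=True)
    if training:
        idx = np.array([rng.choice(z.shape[0], p=p[:, j])
                        for j in range(z.shape[1])])
        return z[idx, np.arange(z.shape[1])]
    return (p * z).sum(axis=0)

z = rng.normal(size=(5, 8))            # 5 channels, 8 hidden features
core = stochastic_pool(z)              # test-time core representation
```

With uniform weights, the weighted variant reduces to mean pooling, which matches the intuition that it interpolates between the channels.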
[Figure 3: Memory and time consumption of different models. In Figure 3a, we set the lookback window L = 96, horizon H = 720, and batch size to 16 on a synthetic dataset we construct. In Figure 3b, we set the lookback window L = 96, horizon H = 720, and batch size to 4 on the Traffic dataset. Figure 3a reveals that the SOFTS model scales to a large number of channels more effectively than Transformer-based models. Figure 3b shows that previous Linear-based or MLP-based models such as DLinear and TSMixer perform poorly with a large number of channels, while the SOFTS model demonstrates efficient performance with minimal memory and time consumption.]

The results reveal that incorporating STAR into the model leads to a consistent enhancement in performance across all pooling methods. Additionally, stochastic pooling deserves attention as it outperforms the other methods across nearly all the datasets.

Table 3: Comparison of the effect of different pooling methods. The term "w/o STAR" refers to a scenario where an MLP is utilized with the Channel Independent (CI) strategy, without the use of STAR. The results reveal that incorporating STAR into the model leads to a consistent enhancement in performance across all pooling methods. Apart from that, stochastic pooling performs better than mean and max pooling. Full results can be found in Table 7.
Pooling Method   ECL          Traffic      Weather      Solar        ETTh2        PEMS04
                 MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE
w/o STAR         0.187 0.273  0.442 0.281  0.261 0.281  0.247 0.272  0.381 0.406  0.143 0.245
Mean             0.174 0.266  0.420 0.277  0.261 0.281  0.234 0.262  0.379 0.404  0.106 0.212
Max              0.180 0.270  0.406 0.271  0.259 0.280  0.246 0.269  0.379 0.401  0.116 0.223
Weighted         0.184 0.275  0.440 0.292  0.263 0.284  0.264 0.280  0.379 0.403  0.109 0.218
Stochastic       0.174 0.264  0.409 0.267  0.255 0.278  0.229 0.256  0.373 0.400  0.102 0.208

Universality of STAR. The STar Aggregate-Redistribute (STAR) module is an embedding adaptation function [39, 41] that can replace the attention mechanism in arbitrary transformer-based methods. In this paragraph, we test the effectiveness of STAR on different existing transformer-based forecasters, such as PatchTST [28] and Crossformer [47]. Note that our method can be regarded as replacing the channel attention in iTransformer [26]. Here we substitute the time attention in PatchTST with STAR and incrementally replace both the time and channel attention in Crossformer with STAR. The results, presented in Table 4, demonstrate that replacing attention with STAR, which requires fewer computational resources, can maintain and even improve the models' performance on several datasets.

Table 4: The performance of STAR in different models. The attention replaced by STAR here is the time attention in PatchTST, the channel attention in iTransformer, and both the time attention and channel attention in the modified Crossformer. The results demonstrate that replacing attention with STAR, which requires fewer computational resources, can maintain and even improve the models' performance on several datasets. †: The Crossformer used here is a modified version that replaces the decoder with a flattened head, as PatchTST does. Full results can be found in Table 8.

Model          Component   ECL          Traffic      Weather      PEMS03       PEMS04       PEMS07
                           MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE    MSE   MAE
PatchTST       Attention   0.189 0.276  0.454 0.286  0.256 0.279  0.137 0.240  0.145 0.249  0.144 0.233
               STAR        0.185 0.272  0.448 0.279  0.252 0.277  0.134 0.233  0.136 0.238  0.137 0.225
Crossformer†   Attention   0.202 0.301  0.546 0.297  0.254 0.310  0.100 0.208  0.090 0.198  0.084 0.181
               STAR        0.198 0.292  0.549 0.292  0.252 0.305  0.100 0.204  0.087 0.194  0.080 0.175
iTransformer   Attention   0.178 0.270  0.428 0.282  0.258 0.278  0.113 0.221  0.111 0.221  0.101 0.204
               STAR        0.174 0.264  0.409 0.267  0.255 0.278  0.104 0.210  0.102 0.208  0.087 0.184

Influence of lookback window length. Common sense suggests that a longer lookback window should improve forecast accuracy. However, incorporating too many features can lead to a curse of dimensionality, potentially compromising the model's forecasting effectiveness. We explore how varying the lookback window length impacts the forecasting performance for time horizons from 48 to 336 on all datasets. As shown in Figure 4, SOFTS consistently improves its performance by effectively utilizing the additional information available in an extended lookback window. Also, SOFTS performs consistently better than other models under different lookback window lengths, especially for shorter windows.

Figure 4: Influence of lookback window length L.
SOFTS performs consistently better than other models under different lookback window lengths, especially in shorter cases.

Hyperparameter sensitivity analysis. We investigate the impact of several key hyperparameters on our model's performance: the hidden dimension of the model, denoted d, the hidden dimension of the core, denoted d′, and the number of encoder layers, N. Analysis of Figure 5 indicates that complex traffic datasets (such as Traffic and PEMS) require larger hidden dimensions and more encoder layers to handle their intricacies effectively. Moreover, variations in d′ have a minimal influence on the model's overall performance.

Series embedding adaptation of STAR. The STAR module adapts the series embeddings by extracting the interaction between channels. To give an intuition of the functionality of STAR, we visualize the series embeddings before and after being adjusted by STAR. The multivariate series is selected from the test set of Traffic, with lookback window 96 and 862 channels. Figure 6 shows the series embeddings, visualized by t-SNE, before and after the first STAR module. Among the 862 channels, there are 2 channels embedded far away from the other channels. These two channels can be seen as anomalies, marked as (⋆) in the figure. Without STAR, i.e., using only the channel-independent strategy, the prediction on the series only achieves 0.414 MSE.
After being adjusted by STAR, the abnormal channels can be clustered towards the normal channels by exchanging channel information. An example of a normal channel is marked as (△). Predictions on the adapted series embeddings improve the performance to 0.376, a 9% improvement.

[Figure 5: Impact of several key hyperparameters: the hidden dimension of the model, denoted d, the hidden dimension of the core, denoted d′, and the number of encoder layers, N (MSE on ECL, Traffic, Weather, and Solar). Full results can be seen in Appendix C.5.]

[Figure 6: Figures 6a and 6b: t-SNE of the series embeddings on the Traffic dataset. 6a: the series embeddings before STAR. Two abnormal channels (⋆) are located far from the other channels. Forecasting on these embeddings achieves 0.414 MSE. 6b: the series embeddings after being adjusted by STAR. The two channels are clustered towards the normal channels (△) by exchanging channel information. The adapted series embeddings improve the forecasting performance to 0.376. Figure 6c: Impact of noise on one channel (MSE vs. noise strength on PEMS03). Our method is more robust against channel noise than other methods.]

Impact of channel noise. As previously mentioned, SOFTS can cluster abnormal channels towards normal channels by exchanging channel information.
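As a concrete illustration of perturbing a single channel of a lookback window with zero-mean Gaussian noise of a given strength (standard deviation), the snippet below uses a hypothetical helper, `perturb_channel`, which is not a function from the released codebase:

```python
import numpy as np

def perturb_channel(X, channel, strength, rng):
    """Add zero-mean Gaussian noise with std `strength` to one channel
    of a multivariate lookback window X with shape (C, L)."""
    X_noisy = X.copy()
    X_noisy[channel] += rng.normal(0.0, strength, size=X.shape[-1])
    return X_noisy

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 96))                 # e.g. 16 channels, lookback 96
X_noisy = perturb_channel(X, channel=3, strength=5.0, rng=rng)
```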
To test the impact of an abnormal channel on the performance of three models (SOFTS, PatchTST, and iTransformer), we select one channel from the PEMS03 dataset and add Gaussian noise with mean 0 and a standard deviation representing the strength of the noise. The lookback window and horizon are both set to 96 for this experiment. In Figure 6c, we observe that the MSE of PatchTST increases sharply as the strength of the noise grows. In contrast, SOFTS and iTransformer handle the noise better. This indicates that suitable channel interaction can improve robustness against noise in one channel by using information from the normal channels. Moreover, SOFTS demonstrates superior noise handling compared to iTransformer. This suggests that while an abnormal channel can affect the model's judgment of normal channels, our STAR module can mitigate the negative impact more effectively by utilizing a core representation instead of building relationships between every pair of channels.

5 Conclusion

Although channel independence has been found to be an effective strategy for improving robustness in multivariate time series forecasting, channel correlation is important information to be utilized for further improvement. Previous methods faced a dilemma between model complexity and performance in extracting this correlation. In this paper, we solve the dilemma by introducing the Series-cOre Fused Time Series forecaster (SOFTS), which achieves state-of-the-art performance with low complexity, along with a novel STar Aggregate-Redistribute (STAR) module that efficiently captures the channel correlation. Our paper explores a way of building a scalable multivariate time series forecaster while maintaining equal or even better performance than the state-of-the-art methods, which we think may pave the way to forecasting on datasets of even larger scale under resource constraints [50].
Acknowledgments

This research was supported by the National Science and Technology Major Project (2022ZD0114805), NSFC (61773198, 62376118, 61921006), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the CCF-Tencent Rhino-Bird Open Research Fund (RAGR20240101).

References

[1] Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. CoRR, abs/1803.01271, 2018.
[2] George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.
[3] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. In SSST@EMNLP, pages 103–111. Association for Computational Linguistics, 2014.
[4] Razvan-Gabriel Cirstea, Darius-Valer Micu, Gabriel-Marcel Muresan, Chenjuan Guo, and Bin Yang. Correlated time series forecasting using multi-task deep neural networks. In CIKM, pages 1527–1530, 2018.
[5] Yue Cui, Kai Zheng, Dingshan Cui, Jiandong Xie, Liwei Deng, Feiteng Huang, and Xiaofang Zhou. METRO: A generic graph neural network framework for multivariate time series forecasting. Proc. VLDB Endow., 15(2):224–236, 2021.
[6] Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan Mathur, Rajat Sen, and Rose Yu. Long-term forecasting with TiDE: Time-series dense encoder. CoRR, abs/2304.08424, 2023.
[7] Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. TSMixer: Lightweight MLP-Mixer model for multivariate time series forecasting. In KDD, pages 459–469, 2023.
[8] Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. In NeurIPS, pages 4652–4663, 2019.
[9] Aleksandra Gruca, Federico Serva, Llorenç Lliso, Pilar Rípodas, Xavier Calbet, Pedro Herruzo, Jiří Pihrt, Rudolf Raevskyi, Petr Šimánek, Matej Choma, et al. Weather4cast at NeurIPS 2022: Super-resolution rain movie prediction under spatio-temporal shifts. In NeurIPS 2022 Competition Track, pages 292–313. PMLR, 2022.
[10] Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, and Zheng Zhang. Star-Transformer. In NAACL-HLT, pages 1315–1325. Association for Computational Linguistics, 2019.
[11] Lu Han, Han-Jia Ye, and De-Chuan Zhan. The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting. CoRR, abs/2304.05206, 2023.
[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778. IEEE Computer Society, 2016.
[13] Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. CoRR, abs/1606.08415, 2016.
[14] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[15] Akhil Kadiyala and Ashok Kumar. Multivariate time series models for prediction of air quality inside a public transportation bus using available software. Environmental Progress & Sustainable Energy, 33(2):337–341, 2014.
[16] Evaggelos G Kardakos, Minas C Alexiadis, Stylianos I Vagropoulos, Christos K Simoglou, Pandelis N Biskas, and Anastasios G Bakirtzis. Application of time series and artificial neural network models in short-term forecasting of PV power generation. In 2013 48th International Universities' Power Engineering Conference (UPEC), pages 1–6. IEEE, 2013.
[17] Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In ICLR, 2021.
[18] Diederik P. Kingma and Jimmy Ba.
Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, ICLR, 2015.
[19] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR. OpenReview.net, 2017.
[20] Andrei Nikolaevich Kolmogorov. On the Representation of Continuous Functions of Several Variables by Superpositions of Continuous Functions of a Smaller Number of Variables. American Mathematical Society, 1961.
[21] Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In SIGIR, pages 95–104, 2018.
[22] Bryan Lim and Stefan Zohren. Time series forecasting with deep learning: A survey. CoRR, abs/2004.13408, 2020.
[23] Shengsheng Lin, Weiwei Lin, Wentai Wu, Feiyu Zhao, Ruichao Mo, and Haotong Zhang. SegRNN: Segment recurrent neural network for long-term time series forecasting. CoRR, abs/2308.11200, 2023.
[24] Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, and Qiang Xu. SCINet: Time series modeling and forecasting with sample convolution and interaction. In NeurIPS, 2022.
[25] Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Exploring the stationarity in time series forecasting. In NeurIPS, 2022.
[26] Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. iTransformer: Inverted transformers are effective for time series forecasting. CoRR, abs/2310.06625, 2023.
[27] Mohammad Amin Morid, Olivia R Liu Sheng, and Joseph Dunbar. Time series prediction using deep learning methods in healthcare. ACM Transactions on Management Information Systems, 14(1):1–29, 2023.
[28] Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In ICLR, 2023.
[29] Syama Sundar Rangapuram, Matthias W. Seeger, Jan Gasthaus, Lorenzo Stella, Yuyang Wang, and Tim Januschowski.
Deep state space models for time series forecasting. In NeurIPS, pages 7796–7805, 2018.
[30] Lawrence G Roberts and Barry D Wessler. Computer network development to achieve resource sharing. In Proceedings of the May 5-7, 1970, Spring Joint Computer Conference, pages 543–549, 1970.
[31] Ilya O. Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, and Alexey Dosovitskiy. MLP-Mixer: An all-MLP architecture for vision. In NeurIPS, pages 24261–24272, 2021.
[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, pages 5998–6008, 2017.
[33] Xue Wang, Tian Zhou, Qingsong Wen, Jinyang Gao, Bolin Ding, and Rong Jin. Make transformer great again for time series forecasting: Channel aligned robust dual transformer. CoRR, abs/2305.12095, 2023.
[34] Zepu Wang, Yuqi Nie, Peng Sun, Nam H. Nguyen, John M. Mulvey, and H. Vincent Poor. ST-MLP: A cascaded spatio-temporal linear framework with channel-independence strategy for traffic forecasting. CoRR, abs/2308.07496, 2023.
[35] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In NeurIPS, pages 101–112, 2021.
[36] Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In ICLR. OpenReview.net, 2023.
[37] Xinle Wu, Dalin Zhang, Chenjuan Guo, Chaoyang He, Bin Yang, and Christian S. Jensen. AutoCTS: Automated correlated time series forecasting. Proc. VLDB Endow., 15(4):971–983, 2021.
[38] Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, and Chengqi Zhang. Connecting the dots: Multivariate time series forecasting with graph neural networks. In SIGKDD, pages 753–763, 2020.
[39] Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. Few-shot learning via embedding adaptation with set-to-set functions. In CVPR, pages 8805–8814. Computer Vision Foundation / IEEE, 2020.
[40] Han-Jia Ye, De-Chuan Zhan, Nan Li, and Yuan Jiang. Learning multiple local metrics: Global consideration helps. IEEE Trans. Pattern Anal. Mach. Intell., 42(7):1698–1712, 2020.
[41] Han-Jia Ye, Lu Han, and De-Chuan Zhan. Revisiting unsupervised meta-learning via the characteristics of few-shot tasks. IEEE Trans. Pattern Anal. Mach. Intell., 45(3):3721–3737, 2023.
[42] Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Ning An, Defu Lian, Longbing Cao, and Zhendong Niu. Frequency-domain MLPs are more effective learners in time series forecasting. In NeurIPS, 2023.
[43] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, and Alexander J. Smola. Deep sets. In NIPS, pages 3391–3401, 2017.
[44] Matthew D. Zeiler and Rob Fergus. Stochastic pooling for regularization of deep convolutional neural networks. In ICLR, 2013.
[45] Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In AAAI, pages 11121–11128, 2023.
[46] Tianping Zhang, Yizhuo Zhang, Wei Cao, Jiang Bian, Xiaohan Yi, Shun Zheng, and Jian Li. Less is more: Fast multivariate time series forecasting with light sampling-oriented MLP structures. CoRR, abs/2207.01186, 2022.
[47] Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In ICLR. OpenReview.net, 2023.
[48] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In AAAI, pages 11106–11115, 2021.
[49] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting.
In ICML, volume 162, pages 27268–27286, 2022.
[50] Zhi-Hua Zhou. A theoretical perspective of machine learning with computational resource concerns. CoRR, abs/2305.02217, 2023.

A Datasets Description

We detail the descriptions of the datasets, plus the links to download them, here:

1. ETT (Electricity Transformer Temperature) [48]^3 comprises two hourly-level datasets (ETTh) and two 15-minute-level datasets (ETTm). Each dataset contains seven oil and load features of electricity transformers from July 2016 to July 2018.
2. Traffic^4 describes road occupancy rates. It contains the hourly data recorded by the sensors on San Francisco freeways from 2015 to 2016.
3. Electricity^5 collects the hourly electricity consumption of 321 clients from 2012 to 2014.
4. Weather includes 21 indicators of weather, such as air temperature and humidity. Its data is recorded every 10 minutes throughout 2020 in Germany.
5. Solar-Energy [21] records the solar power production of 137 PV plants in 2006, sampled every 10 minutes.
6. PEMS^6 contains public traffic network data in California, collected in 5-minute windows.

Other details of these datasets are summarized in Table 5.

Table 5: Detailed dataset descriptions. Channels denotes the number of channels in each dataset. Dataset Split denotes the total number of time points in the (Train, Validation, Test) splits, respectively. Prediction Length denotes the future time points to be predicted; four prediction settings are included in each dataset. Granularity denotes the sampling interval of time points.
Dataset        | Channels | Prediction Length    | Dataset Split         | Granularity | Domain
ETTh1, ETTh2   | 7        | {96, 192, 336, 720}  | (8545, 2881, 2881)    | Hourly      | Electricity
ETTm1, ETTm2   | 7        | {96, 192, 336, 720}  | (34465, 11521, 11521) | 15 min      | Electricity
Weather        | 21       | {96, 192, 336, 720}  | (36792, 5271, 10540)  | 10 min      | Weather
ECL            | 321      | {96, 192, 336, 720}  | (18317, 2633, 5261)   | Hourly      | Electricity
Traffic        | 862      | {96, 192, 336, 720}  | (12185, 1757, 3509)   | Hourly      | Transportation
Solar-Energy   | 137      | {96, 192, 336, 720}  | (36601, 5161, 10417)  | 10 min      | Energy
PEMS03         | 358      | {12, 24, 48, 96}     | (15617, 5135, 5135)   | 5 min       | Transportation
PEMS04         | 307      | {12, 24, 48, 96}     | (10172, 3375, 3375)   | 5 min       | Transportation
PEMS07         | 883      | {12, 24, 48, 96}     | (16911, 5622, 5622)   | 5 min       | Transportation
PEMS08         | 170      | {12, 24, 48, 96}     | (10690, 3548, 3548)   | 5 min       | Transportation

B Implementation Details

B.1 Overall Architecture of SOFTS

The overall architecture of SOFTS is detailed in Algorithm 1. Initially, a linear layer is employed to obtain the embedding of each series (Lines 1–2). Subsequently, several encoder layers are applied. Within each encoder layer, the core representation is first derived by applying an MLP to the series embeddings and pooling them (Line 4). This core representation is then concatenated with each series (Line 5), and another MLP is used to fuse them (Line 6). After passing through multiple encoder layers, a final linear layer projects the series embeddings to the predicted series (Line 8).

3 https://github.com/zhouhaoyi/ETDataset
4 http://pems.dot.ca.gov
5 https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
6 https://pems.dot.ca.gov/

Algorithm 1 Series-cOre Fused Time Series forecaster (SOFTS) applied to multivariate time series.
Require: lookback window X ∈ R^{L×C}, with N encoder layers
1:  X = X.transpose                        ▷ X ∈ R^{C×L}
2:  S_0 = Linear(X)                        ▷ get series embedding, S_0 ∈ R^{C×d}
3:  for l = 1 … N do
4:      o_l = Stoch_Pool(MLP(S_{l−1}))     ▷ get core representation, o_l ∈ R^{d′}
5:      F_l = Repeat_Concat(S_{l−1}, o_l)  ▷ F_l ∈ R^{C×(d+d′)}
6:      S_l = MLP(F_l) + S_{l−1}           ▷ fuse series and core, S_l ∈ R^{C×d}
7:  end for
8:  Ŷ = Linear(S_N)                        ▷ project series embedding to predicted series, Ŷ ∈ R^{C×H}
9:  Ŷ = Ŷ.transpose                        ▷ Ŷ ∈ R^{H×C}
10: return Ŷ

B.2 Details of Core Representation Computation

Core representation. Recall that the core representation for the multivariate time series is defined in Definition 3.1 as

    o = f(s_1, s_2, …, s_C).

To obtain this representation, we draw inspiration from two theorems:

Theorem B.1 (Kolmogorov–Arnold representation [20]). Let f : [0,1]^M → R be an arbitrary multivariate continuous function. Then it has the representation

    f(x_1, …, x_M) = ρ( Σ_{m=1}^{M} λ_m φ(x_m) ),

with continuous outer and inner functions ρ : R^{2M+1} → R and φ : R → R^{2M+1}. The inner function φ is independent of the function f.

Theorem B.2 (DeepSets [43]). Assume the elements are from a compact set in R^d, i.e., possibly uncountable, and the set size is fixed to M. Then any continuous function operating on a set X, i.e., f : R^{d×M} → R, that is permutation invariant to the elements in X can be approximated arbitrarily closely in the form

    ρ( Σ_{x∈X} φ(x) ),

for suitable transformations φ and ρ.

The two formulations are very similar, except for the dependence of the inner transformation on the coordinate through λ_m. The presence of λ determines whether the formulation is permutation invariant. In this paper, we find in Table 4 that the permutation-invariant expression (Theorem B.2) performs much better than the permutation-variant one (Theorem B.1). This may be because the characteristics of each channel's series are sufficient to identify that channel's index (coordinate). Introducing extra parameters specific to each channel may strengthen the dependence on the channel coordinate and weaken the dependence on the history, causing low robustness when encountering unknown series.
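To make the data flow of Algorithm 1 concrete, the encoder can be sketched in PyTorch as follows. This is a minimal illustration under assumed layer widths, not the released implementation: the module names (`STARLayer`, `SOFTS`) and the two-layer MLP shapes are our own choices, and mean pooling stands in here for the stochastic pooling described in Appendix B.2.

```python
import torch
import torch.nn as nn

class STARLayer(nn.Module):
    """One STar Aggregate-Redistribute (STAR) encoder layer (sketch)."""
    def __init__(self, d, d_core):
        super().__init__()
        self.core_mlp = nn.Sequential(nn.Linear(d, d_core), nn.GELU(),
                                      nn.Linear(d_core, d_core))
        self.fuse_mlp = nn.Sequential(nn.Linear(d + d_core, d), nn.GELU(),
                                      nn.Linear(d, d))

    def forward(self, s):                      # s: (batch, C, d) series embeddings
        z = self.core_mlp(s)                   # (batch, C, d_core)
        core = z.mean(dim=1, keepdim=True)     # pooling over channels (Line 4)
        core = core.expand(-1, s.size(1), -1)  # repeat core for every channel (Line 5)
        fused = torch.cat([s, core], dim=-1)   # (batch, C, d + d_core)
        return self.fuse_mlp(fused) + s        # residual fusion (Line 6)

class SOFTS(nn.Module):
    def __init__(self, lookback, horizon, d=128, d_core=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(lookback, d)    # Line 2: per-channel series embedding
        self.layers = nn.ModuleList([STARLayer(d, d_core) for _ in range(n_layers)])
        self.head = nn.Linear(d, horizon)      # Line 8: projection to the horizon

    def forward(self, x):                      # x: (batch, L, C)
        s = self.embed(x.transpose(1, 2))      # (batch, C, d)
        for layer in self.layers:
            s = layer(s)
        return self.head(s).transpose(1, 2)    # (batch, H, C)
```

Note how channels interact only through the shared core vector, which is what gives STAR its linear complexity in the number of channels.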
Consequently, we use the DeepSets form to express the core representation:

    o = ρ( Σ_{s∈S} φ(s) ).

We propose two modifications to this expression:

1. We generalize the mean pooling over the inner transformation to arbitrary pooling methods.
2. We remove the outer transformation ρ because it is redundant with the MLP used during the fusion process.

For modification 1, we tested several common pooling methods and found that mean pooling and max pooling each outperform the other in different cases, while stochastic pooling (described in the following paragraph) achieves the best results on average (Table 3). The core is thus computed as in Equation (3).

Stochastic pooling. Stochastic pooling combines the characteristics of max pooling and mean pooling [44]. In stochastic pooling, the pooled response is selected by sampling from a multinomial distribution formed from the activations of each pooling region. Specifically, we first calculate the probability p_ij for each dimension j by normalizing the activations within that dimension:

    p_ij = e^{A_ij} / Σ_{k=1}^{C} e^{A_kj}    (6)

During training, we sample from the multinomial distribution based on p to pick a channel c within dimension j. The pooled result is then simply A_cj:

    o_j = A_cj, where c ∼ P(p_1j, p_2j, …, p_Cj)    (7)

At test time, we use a probabilistic form of averaging:

    o_j = Σ_{i=1}^{C} p_ij A_ij    (8)

This approach allows for a more robust and statistically grounded pooling mechanism, which can enhance the generalization capability of the model across different data scenarios.

B.3 Experiment Details

All experiments are conducted on a single NVIDIA GeForce RTX 3090 with 24 GB VRAM. The mean squared error (MSE) loss function is used for model optimization. Performance comparison among methods is based on two primary evaluation metrics: Mean Squared Error (MSE) and Mean Absolute Error (MAE). We use the Adam optimizer [18] with an initial learning rate of 3×10⁻⁴.
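Equations (6)–(8) can be implemented in a few lines of NumPy. The helper below is our own sketch (the paper does not prescribe an implementation); subtracting the per-dimension maximum before exponentiating is a standard numerical-stability trick that leaves the probabilities of Eq. (6) unchanged.

```python
import numpy as np

def stochastic_pool(A, training=True, rng=None):
    """Stochastic pooling over channels, following Eqs. (6)-(8).

    A is a (C, d') matrix of channel representations; returns a (d',) core.
    """
    # Eq. (6): softmax over the channel axis, per dimension j.
    shifted = A - A.max(axis=0, keepdims=True)
    p = np.exp(shifted) / np.exp(shifted).sum(axis=0, keepdims=True)
    if training:
        # Eq. (7): sample one channel per dimension from the multinomial weights.
        rng = rng if rng is not None else np.random.default_rng()
        C, d = A.shape
        chosen = np.array([rng.choice(C, p=p[:, j]) for j in range(d)])
        return A[chosen, np.arange(d)]
    # Eq. (8): probability-weighted average at test time.
    return (p * A).sum(axis=0)
```

At test time the result interpolates between mean and max pooling: dimensions with one dominant channel approach the max, while flat dimensions approach the mean.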
This rate is modulated by a cosine learning rate scheduler. We search the number of STAR blocks N within the set {1, 2, 3, 4} and the dimension of series d within {128, 256, 512}. Additionally, the dimension of the core representation d′ is searched in {64, 128, 256, 512}, with the constraint that d′ does not exceed d.

Mean Squared Error (MSE):

    MSE = (1/H) Σ_{i=1}^{H} (Y_i − Ŷ_i)²    (9)

Mean Absolute Error (MAE):

    MAE = (1/H) Σ_{i=1}^{H} |Y_i − Ŷ_i|    (10)

where Y, Ŷ ∈ R^{H×C} are the ground truth and the prediction of the future with H time points and C channels, and Y_i denotes the i-th future time point.

C Full Results

C.1 Full Results of Multivariate Forecasting Benchmark

The complete results of our forecasting benchmarks are presented in Table 6. We conducted experiments on six widely used real-world datasets and compared our method against ten previous state-of-the-art models. Our approach, SOFTS, demonstrates strong performance across these tests.

C.2 Full Results of Pooling Method Ablation

The complete results of our pooling method ablation are presented in Table 7. The term "w/o STAR" refers to a scenario where an MLP is utilized with the Channel Independent (CI) strategy, without the use of STAR. Mean pooling computes the average of all the series representations. Max pooling selects the maximum value of each hidden feature among all the channels. Weighted average learns a weight for each channel. Stochastic pooling applies random selection during training and a weighted average during testing, according to the feature values. The results reveal that incorporating STAR into the model leads to a consistent enhancement in performance across all pooling methods.

Table 6: Multivariate forecasting results with prediction lengths H ∈ {12, 24, 48, 96} for PEMS and H ∈ {96, 192, 336, 720} for the others, with a fixed lookback window length L = 96. The results of PatchTST and TSMixer are reproduced for the ablation study; other results are taken from iTransformer [26].
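Equations (9) and (10) can be computed as below. This small NumPy sketch averages over both the H time points and the C channels, the usual convention for these benchmarks; the paper writes the sums over time points only, so the channel averaging is our assumption.

```python
import numpy as np

def mse(y_true, y_pred):
    """Eq. (9): mean squared error over all H time points and C channels."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    """Eq. (10): mean absolute error over all H time points and C channels."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```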
Models SOFTS (ours) iTransformer PatchTST TSMixer Crossformer TiDE TimesNet DLinear SCINet FEDformer Stationary Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAEETTm196 0.325 0.361 0.334 0.368 0.329 0.365 0.323 0.363 0.404 0.426 0.364 0.387 0.338 0.375 0.345 0.372 0.418 0.438 0.379 0.419 0.386 0.398 192 0.375 0.389 0.377 0.391 0.380 0.394 0.376 0.392 0.450 0.451 0.398 0.404 0.374 0.387 0.380 0.389 0.439 0.450 0.426 0.441 0.459 0.444 336 0.405 0.412 0.426 0.420 0.400 0.410 0.407 0.413 0.532 0.515 0.428 0.425 0.410 0.411 0.413 0.413 0.490 0.485 0.445 0.459 0.495 0.464 720 0.466 0.447 0.491 0.459 0.475 0.453 0.485 0.459 0.666 0.589 0.487 0.461 0.478 0.450 0.474 0.453 0.595 0.550 0.543 0.490 0.585 0.516 Avg 0.393 0.403 0.407 0.410 0.396 0.406 0.398 0.407 0.513 0.496 0.419 0.419 0.400 0.406 0.403 0.407 0.485 0.481 0.448 0.452 0.481 0.456ETTm296 0.180 0.261 0.180 0.264 0.184 0.264 0.182 0.266 0.287 0.366 0.207 0.305 0.187 0.267 0.193 0.292 0.286 0.377 0.203 0.287 0.192 0.274 192 0.246 0.306 0.250 0.309 0.246 0.306 0.249 0.309 0.414 0.492 0.290 0.364 0.249 0.309 0.284 0.362 0.399 0.445 0.269 0.328 0.280 0.339 336 0.319 0.352 0.311 0.348 0.308 0.346 0.309 0.347 0.597 0.542 0.377 0.422 0.321 0.351 0.369 0.427 0.637 0.591 0.325 0.366 0.334 0.361 720 0.405 0.401 0.412 0.407 0.409 0.402 0.416 0.408 1.730 1.042 0.558 0.524 0.408 0.403 0.554 0.522 0.960 0.735 0.421 0.415 0.417 0.413 Avg 0.287 0.330 0.288 0.332 0.287 0.330 0.289 0.333 0.757 0.610 0.358 0.404 0.291 0.333 0.350 0.401 0.571 0.537 0.305 0.349 0.306 0.347ETTh196 0.381 0.399 0.386 0.405 0.394 0.406 0.401 0.412 0.423 0.448 0.479 0.464 0.384 0.402 0.386 0.400 0.654 0.599 0.376 0.419 0.513 0.491 192 0.435 0.431 0.441 0.436 0.440 0.435 0.452 0.442 0.471 0.474 0.525 0.492 0.436 0.429 0.437 0.432 0.719 0.631 0.420 0.448 0.534 0.504 336 0.480 0.452 0.487 0.458 0.491 0.462 0.492 0.463 0.570 0.546 0.565 0.515 0.491 0.469 0.481 0.459 0.778 0.659 0.459 0.465 0.588 0.535 720 0.499 0.488 
0.503 0.491 0.487 0.479 0.507 0.490 0.653 0.621 0.594 0.558 0.521 0.500 0.519 0.516 0.836 0.699 0.506 0.507 0.643 0.616 Avg 0.449 0.442 0.454 0.447 0.453 0.446 0.463 0.452 0.529 0.522 0.541 0.507 0.458 0.450 0.456 0.452 0.747 0.647 0.440 0.460 0.570 0.537ETTh296 0.297 0.347 0.297 0.349 0.288 0.340 0.319 0.361 0.745 0.584 0.400 0.440 0.340 0.374 0.333 0.387 0.707 0.621 0.358 0.397 0.476 0.458 192 0.373 0.394 0.380 0.400 0.376 0.395 0.402 0.410 0.877 0.656 0.528 0.509 0.402 0.414 0.477 0.476 0.860 0.689 0.429 0.439 0.512 0.493 336 0.410 0.426 0.428 0.432 0.440 0.451 0.444 0.446 1.043 0.731 0.643 0.571 0.452 0.452 0.594 0.541 1.000 0.744 0.496 0.487 0.552 0.551 720 0.411 0.433 0.427 0.445 0.436 0.453 0.441 0.450 1.104 0.763 0.874 0.679 0.462 0.468 0.831 0.657 1.249 0.838 0.463 0.474 0.562 0.560 Avg 0.373 0.400 0.383 0.407 0.385 0.410 0.401 0.417 0.942 0.684 0.611 0.550 0.414 0.427 0.559 0.515 0.954 0.723 0.437 0.449 0.526 0.516ECL96 0.143 0.233 0.148 0.240 0.164 0.251 0.157 0.260 0.219 0.314 0.237 0.329 0.168 0.272 0.197 0.282 0.247 0.345 0.193 0.308 0.169 0.273 192 0.158 0.248 0.162 0.253 0.173 0.262 0.173 0.274 0.231 0.322 0.236 0.330 0.184 0.289 0.196 0.285 0.257 0.355 0.201 0.315 0.182 0.286 336 0.178 0.269 0.178 0.269 0.190 0.279 0.192 0.295 0.246 0.337 0.249 0.344 0.198 0.300 0.209 0.301 0.269 0.369 0.214 0.329 0.200 0.304 720 0.218 0.305 0.225 0.317 0.230 0.313 0.223 0.318 0.280 0.363 0.284 0.373 0.220 0.320 0.245 0.333 0.299 0.390 0.246 0.355 0.222 0.321 Avg 0.174 0.264 0.178 0.270 0.189 0.276 0.186 0.287 0.244 0.334 0.251 0.344 0.192 0.295 0.212 0.300 0.268 0.365 0.214 0.327 0.193 0.296Traffic96 0.376 0.251 0.395 0.268 0.427 0.272 0.493 0.336 0.522 0.290 0.805 0.493 0.593 0.321 0.650 0.396 0.788 0.499 0.587 0.366 0.612 0.338 192 0.398 0.261 0.417 0.276 0.454 0.289 0.497 0.351 0.530 0.293 0.756 0.474 0.617 0.336 0.598 0.370 0.789 0.505 0.604 0.373 0.613 0.340 336 0.415 0.269 0.433 0.283 0.450 0.282 0.528 0.361 0.558 0.305 0.762 0.477 0.629 0.336 0.605 0.373 
0.797 0.508 0.621 0.383 0.618 0.328 720 0.447 0.287 0.467 0.302 0.484 0.301 0.569 0.380 0.589 0.328 0.719 0.449 0.640 0.350 0.645 0.394 0.841 0.523 0.626 0.382 0.653 0.355 Avg 0.409 0.267 0.428 0.282 0.454 0.286 0.522 0.357 0.550 0.304 0.760 0.473 0.620 0.336 0.625 0.383 0.804 0.509 0.610 0.376 0.624 0.340Weather96 0.166 0.208 0.174 0.214 0.176 0.217 0.166 0.210 0.158 0.230 0.202 0.261 0.172 0.220 0.196 0.255 0.221 0.306 0.217 0.296 0.173 0.223 192 0.217 0.253 0.221 0.254 0.221 0.256 0.215 0.256 0.206 0.277 0.242 0.298 0.219 0.261 0.237 0.296 0.261 0.340 0.276 0.336 0.245 0.285 336 0.282 0.300 0.278 0.296 0.275 0.296 0.287 0.300 0.272 0.335 0.287 0.335 0.280 0.306 0.283 0.335 0.309 0.378 0.339 0.380 0.321 0.338 720 0.356 0.351 0.358 0.347 0.352 0.346 0.355 0.348 0.398 0.418 0.351 0.386 0.365 0.359 0.345 0.381 0.377 0.427 0.403 0.428 0.414 0.410 Avg 0.255 0.278 0.258 0.278 0.256 0.279 0.256 0.279 0.259 0.315 0.271 0.320 0.259 0.287 0.265 0.317 0.292 0.363 0.309 0.360 0.288 0.314Solar-Energy96 0.200 0.230 0.203 0.237 0.205 0.246 0.221 0.275 0.310 0.331 0.312 0.399 0.250 0.292 0.290 0.378 0.237 0.344 0.242 0.342 0.215 0.249 192 0.229 0.253 0.233 0.261 0.237 0.267 0.268 0.306 0.734 0.725 0.339 0.416 0.296 0.318 0.320 0.398 0.280 0.380 0.285 0.380 0.254 0.272 336 0.243 0.269 0.248 0.273 0.250 0.276 0.272 0.294 0.750 0.735 0.368 0.430 0.319 0.330 0.353 0.415 0.304 0.389 0.282 0.376 0.290 0.296 720 0.245 0.272 0.249 0.275 0.252 0.275 0.281 0.313 0.769 0.765 0.370 0.425 0.338 0.337 0.356 0.413 0.308 0.388 0.357 0.427 0.285 0.200 Avg 0.229 0.256 0.233 0.262 0.236 0.266 0.260 0.297 0.641 0.639 0.347 0.417 0.301 0.319 0.330 0.401 0.282 0.375 0.291 0.381 0.261 0.381PEMS0312 0.064 0.165 0.071 0.174 0.073 0.178 0.075 0.186 0.090 0.203 0.178 0.305 0.085 0.192 0.122 0.243 0.066 0.172 0.126 0.251 0.081 0.188 24 0.083 0.188 0.093 0.201 0.105 0.212 0.095 0.210 0.121 0.240 0.257 0.371 0.118 0.223 0.201 0.317 0.085 0.198 0.149 0.275 0.105 0.214 48 0.114 0.223 0.125 0.236 0.159 0.264 
0.121 0.240 0.202 0.317 0.379 0.463 0.155 0.260 0.333 0.425 0.127 0.238 0.227 0.348 0.154 0.257 96 0.156 0.264 0.164 0.275 0.210 0.305 0.184 0.295 0.262 0.367 0.490 0.539 0.228 0.317 0.457 0.515 0.178 0.287 0.348 0.434 0.247 0.336 Avg 0.104 0.210 0.113 0.221 0.137 0.240 0.119 0.233 0.169 0.281 0.326 0.419 0.147 0.248 0.278 0.375 0.114 0.224 0.213 0.327 0.147 0.249PEMS0412 0.074 0.176 0.078 0.183 0.085 0.189 0.079 0.188 0.098 0.218 0.219 0.340 0.087 0.195 0.148 0.272 0.073 0.177 0.138 0.262 0.088 0.196 24 0.088 0.194 0.095 0.205 0.115 0.222 0.089 0.201 0.131 0.256 0.292 0.398 0.103 0.215 0.224 0.340 0.084 0.193 0.177 0.293 0.104 0.216 48 0.110 0.219 0.120 0.233 0.167 0.273 0.111 0.222 0.205 0.326 0.409 0.478 0.136 0.250 0.355 0.437 0.099 0.211 0.270 0.368 0.137 0.251 96 0.135 0.244 0.150 0.262 0.211 0.310 0.133 0.247 0.402 0.457 0.492 0.532 0.190 0.303 0.452 0.504 0.114 0.227 0.341 0.427 0.186 0.297 Avg 0.102 0.208 0.111 0.221 0.145 0.249 0.103 0.215 0.209 0.314 0.353 0.437 0.129 0.241 0.295 0.388 0.092 0.202 0.231 0.337 0.127 0.240PEMS0712 0.057 0.152 0.067 0.165 0.068 0.163 0.073 0.181 0.094 0.200 0.173 0.304 0.082 0.181 0.115 0.242 0.068 0.171 0.109 0.225 0.083 0.185 24 0.073 0.173 0.088 0.190 0.102 0.201 0.090 0.199 0.139 0.247 0.271 0.383 0.101 0.204 0.210 0.329 0.119 0.225 0.125 0.244 0.102 0.207 48 0.096 0.195 0.110 0.215 0.170 0.261 0.124 0.231 0.311 0.369 0.446 0.495 0.134 0.238 0.398 0.458 0.149 0.237 0.165 0.288 0.136 0.240 96 0.120 0.218 0.139 0.245 0.236 0.308 0.163 0.255 0.396 0.442 0.628 0.577 0.181 0.279 0.594 0.553 0.141 0.234 0.262 0.376 0.187 0.287 Avg 0.087 0.184 0.101 0.204 0.144 0.233 0.112 0.217 0.235 0.315 0.380 0.440 0.124 0.225 0.329 0.395 0.119 0.234 0.165 0.283 0.127 0.230PEMS0812 0.074 0.171 0.079 0.182 0.098 0.205 0.083 0.189 0.165 0.214 0.227 0.343 0.112 0.212 0.154 0.276 0.087 0.184 0.173 0.273 0.109 0.207 24 0.104 0.201 0.115 0.219 0.162 0.266 0.117 0.226 0.215 0.260 0.318 0.409 0.141 0.238 0.248 0.353 0.122 0.221 0.210 0.301 0.140 
0.236 48 0.164 0.253 0.186 0.235 0.238 0.311 0.196 0.299 0.315 0.355 0.497 0.510 0.198 0.283 0.440 0.470 0.189 0.270 0.320 0.394 0.211 0.294 96 0.211 0.253 0.221 0.267 0.303 0.318 0.266 0.331 0.377 0.397 0.721 0.592 0.320 0.351 0.674 0.565 0.236 0.300 0.442 0.465 0.345 0.367 Avg 0.138 0.219 0.150 0.226 0.200 0.275 0.165 0.261 0.268 0.307 0.441 0.464 0.193 0.271 0.379 0.416 0.158 0.244 0.286 0.358 0.201 0.276 1stCount 40 47 2 4 6 8 1 0 3 0 0 0 1 2 1 0 5 4 4 0 0 0 17 Table 7: Comparison of the effect of different pooling methods. The term "w/o STAR" refers to a scenario where an MLP is utilized with the Channel Independent (CI) strategy, without the use of STAR. The result reveals that incorporating STAR into the model leads to a consistent enhancement in performance across all pooling methods. Apart from that, stochastic pooling performs better than mean and max pooling. Pooling Method w/o STAR Mean Max Weighted Stochastic Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE ECL96 0.161 0.248 0.146 0.239 0.150 0.241 0.156 0.247 0.143 0.233 192 0.171 0.259 0.166 0.258 0.165 0.256 0.173 0.264 0.158 0.248 336 0.188 0.276 0.175 0.269 0.188 0.280 0.190 0.284 0.178 0.269 720 0.228 0.311 0.211 0.300 0.216 0.304 0.217 0.305 0.218 0.305 Avg 0.187 0.273 0.174 0.266 0.180 0.270 0.184 0.275 0.174 0.264 Traffic96 0.414 0.266 0.380 0.255 0.386 0.261 0.410 0.275 0.376 0.251 192 0.428 0.272 0.406 0.268 0.397 0.267 0.434 0.288 0.398 0.261 336 0.446 0.284 0.442 0.293 0.406 0.273 0.447 0.295 0.415 0.269 720 0.480 0.303 0.453 0.293 0.433 0.284 0.470 0.308 0.447 0.287 Avg 0.442 0.281 0.420 0.277 0.406 0.271 0.440 0.292 0.409 0.267 Weather96 0.179 0.217 0.174 0.213 0.172 0.211 0.180 0.222 0.166 0.208 192 0.227 0.259 0.227 0.260 0.226 0.260 0.226 0.261 0.217 0.253 336 0.281 0.299 0.281 0.299 0.280 0.298 0.284 0.302 0.282 0.300 720 0.357 0.348 0.361 0.352 0.360 0.350 0.360 0.351 0.356 0.351 Avg 0.261 0.281 0.261 0.281 0.259 0.280 0.263 0.284 0.255 0.278 Solar96 0.215 0.250 0.202 0.238 0.206 
0.243 0.219 0.260 0.200 0.230
192 0.246 0.271 0.238 0.260 0.245 0.266 0.255 0.272 0.229 0.253
336 0.263 0.282 0.248 0.277 0.267 0.284 0.292 0.294 0.243 0.269
720 0.263 0.283 0.247 0.271 0.265 0.284 0.290 0.293 0.245 0.272
Avg 0.247 0.272 0.234 0.262 0.246 0.269 0.264 0.280 0.229 0.256

ETTh2
96 0.298 0.349 0.298 0.348 0.296 0.347 0.292 0.344 0.297 0.347
192 0.375 0.398 0.376 0.396 0.378 0.396 0.387 0.401 0.373 0.394
336 0.420 0.431 0.417 0.430 0.423 0.428 0.428 0.435 0.410 0.426
720 0.433 0.448 0.423 0.442 0.421 0.435 0.409 0.433 0.411 0.433
Avg 0.381 0.406 0.379 0.404 0.379 0.401 0.379 0.403 0.373 0.400

PEMS04
12 0.084 0.189 0.075 0.177 0.078 0.182 0.077 0.180 0.074 0.176
24 0.113 0.220 0.090 0.196 0.095 0.204 0.094 0.203 0.088 0.194
48 0.164 0.266 0.117 0.225 0.126 0.236 0.120 0.231 0.110 0.219
96 0.209 0.304 0.142 0.250 0.164 0.269 0.147 0.258 0.135 0.244
Avg 0.143 0.245 0.106 0.212 0.116 0.223 0.109 0.218 0.102 0.208

C.3 Full Results of STAR Ablation

The complete results of our ablation on the universality of STAR are presented in Table 8. The STar Aggregate-Redistribute (STAR) module is a set-to-set function [39] that can replace the attention mechanism in arbitrary transformer-based methods. In this section, we test the effectiveness of STAR on existing transformer-based forecasters, such as PatchTST [28] and Crossformer [47]. Note that our method can be regarded as replacing the channel attention in iTransformer [26]. Here we substitute the time attention in PatchTST with STAR, and incrementally replace both the time and channel attention in Crossformer with STAR. The results, presented in Table 8, demonstrate that replacing attention with STAR, which requires fewer computational resources, can maintain and even improve the models' performance on several datasets.

C.4 More Results of Lookback Ablation

In this section, we extend the lookback ablation in Section 4.2 to L ∈ [48, 720]. Figure 7 shows the results in MSE.
SOFTS performs almost consistently better than other models under different lookback window lengths. However, we also warn of potential overfitting when the lookback length is very large, i.e., L = 512 or L = 720.

Figure 7: Influence of lookback window length L ∈ {48, 96, 192, 336, 512, 720} on the ECL, Traffic, and ETTm2 datasets (MSE; compared models: DLinear, TSMixer, PatchTST, iTransformer, SOFTS (ours)). SOFTS performs almost consistently better than other models under different lookback window lengths.

C.5 Full Results of Hyperparameter Sensitivity Experiments

We investigate the impact of several key hyperparameters on our model's performance: the hidden dimension of the model, denoted d, the hidden dimension of the core, denoted d′, and the number of encoder layers, N. Figure 8 and Figure 10 indicate that complex traffic datasets (such as Traffic and PEMS) require larger hidden dimensions and more encoder layers to handle their intricacies effectively. Moreover, Figure 9 shows that variations in d′ have little influence on the model's overall performance.

Figure 8: Influence of the hidden dimension of series d. Traffic datasets (such as Traffic and PEMS) require larger hidden dimensions to handle their intricacies effectively.

Table 8: The performance of STAR in different models.
The attention replaced by STAR here are the time attention in PatchTST, the channel attention in iTransformer, and both the time attention and channel attention in modified Crossformer. The results demonstrate that replacing attention with STAR, which requires less computational resources, could maintain and even improve the models’ performance in several datasets.†: The Crossformer used here is a modified version that replaces the decoder with a flattened head like what PatchTST does. Model iTransformer PatchTST Crossformer Component Attention STAR Attention STAR Attention STAR Metric MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE Electricity960.148 0.240 0.143 0.233 0.164 0.251 0.160 0.248 0.156 0.259 0.166 0.263 192 0.162 0.253 0.158 0.248 0.173 0.262 0.169 0.257 0.182 0.284 0.182 0.277 336 0.178 0.269 0.178 0.269 0.190 0.279 0.187 0.275 0.203 0.305 0.200 0.296 720 0.225 0.317 0.218 0.305 0.230 0.313 0.225 0.308 0.267 0.358 0.243 0.334 Avg 0.178 0.270 0.174 0.264 0.189 0.276 0.185 0.272 0.202 0.301 0.198 0.292 Traffic960.395 0.268 0.376 0.251 0.427 0.272 0.423 0.265 0.508 0.275 0.520 0.277 192 0.417 0.276 0.398 0.261 0.454 0.289 0.434 0.271 0.519 0.281 0.535 0.285 336 0.433 0.283 0.415 0.269 0.450 0.282 0.447 0.278 0.556 0.304 0.551 0.292 720 0.467 0.302 0.447 0.287 0.484 0.301 0.489 0.301 0.600 0.329 0.591 0.315 Avg 0.428 0.282 0.409 0.267 0.454 0.286 0.448 0.279 0.546 0.297 0.549 0.292 Weather960.174 0.214 0.166 0.208 0.176 0.217 0.170 0.214 0.174 0.245 0.174 0.239 192 0.221 0.254 0.217 0.253 0.221 0.256 0.215 0.251 0.219 0.283 0.220 0.282 336 0.278 0.296 0.282 0.300 0.275 0.296 0.273 0.296 0.271 0.327 0.272 0.324 720 0.358 0.347 0.356 0.351 0.352 0.346 0.349 0.346 0.351 0.383 0.343 0.376 Avg 0.258 0.278 0.255 0.278 0.256 0.279 0.252 0.277 0.254 0.310 0.252 0.305 PEMS03120.071 0.174 0.064 0.165 0.073 0.178 0.071 0.173 0.067 0.170 0.065 0.165 240.093 0.201 0.083 0.188 0.105 0.212 0.101 0.206 0.081 0.187 0.081 0.184 480.125 0.236 0.114 0.223 0.159 0.264 0.157 
0.256 0.109 0.220 0.109 0.216
96 0.164 0.275 0.156 0.264 0.210 0.305 0.205 0.296 0.142 0.255 0.147 0.250
Avg 0.113 0.222 0.104 0.210 0.137 0.240 0.134 0.233 0.100 0.208 0.100 0.204

PEMS04
12 0.078 0.183 0.074 0.176 0.085 0.189 0.082 0.184 0.069 0.171 0.071 0.174
24 0.095 0.205 0.088 0.194 0.115 0.222 0.108 0.214 0.082 0.190 0.079 0.185
48 0.120 0.233 0.110 0.219 0.167 0.273 0.155 0.258 0.097 0.207 0.091 0.200
96 0.150 0.262 0.135 0.244 0.211 0.310 0.198 0.297 0.111 0.222 0.106 0.218
Avg 0.111 0.221 0.102 0.208 0.145 0.249 0.136 0.238 0.090 0.198 0.087 0.194

PEMS07
12 0.067 0.165 0.057 0.152 0.068 0.163 0.065 0.160 0.056 0.151 0.055 0.150
24 0.088 0.190 0.073 0.173 0.102 0.201 0.098 0.195 0.070 0.166 0.067 0.165
48 0.110 0.215 0.096 0.195 0.170 0.261 0.162 0.250 0.090 0.192 0.088 0.183
96 0.139 0.245 0.120 0.218 0.236 0.308 0.222 0.294 0.120 0.215 0.110 0.203
Avg 0.101 0.204 0.087 0.184 0.144 0.233 0.137 0.225 0.084 0.181 0.080 0.175

Figure 9: Influence of the hidden dimension of the core d′. Variations in d′ have a minimal influence on the model's overall performance.

Figure 10: Influence of the number of encoder layers N. Traffic datasets (such as Traffic and PEMS) require more encoder layers to handle their intricacies effectively.

D Error Bar

In this section, we test the robustness of SOFTS.
We conducted 5 experiments using different random seeds; the averaged results are presented in Table 9. It can be seen that SOFTS has robust performance over different datasets and different horizons.

Table 9: Robustness of SOFTS. Results are averaged over 5 experiments with different random seeds.

Dataset | ETTm1 (MSE / MAE) | Weather (MSE / MAE) | Traffic (MSE / MAE)
Horizon 96  | 0.325±0.002 / 0.361±0.002 | 0.166±0.002 / 0.208±0.002 | 0.376±0.002 / 0.251±0.001
Horizon 192 | 0.375±0.002 / 0.389±0.003 | 0.217±0.003 / 0.253±0.002 | 0.398±0.002 / 0.261±0.002
Horizon 336 | 0.405±0.004 / 0.412±0.003 | 0.282±0.001 / 0.300±0.001 | 0.415±0.002 / 0.269±0.002
Horizon 720 | 0.466±0.004 / 0.447±0.002 | 0.356±0.002 / 0.351±0.002 | 0.447±0.002 / 0.287±0.001

Dataset | PEMS03 (MSE / MAE) | PEMS04 (MSE / MAE) | PEMS07 (MSE / MAE)
Horizon 12 | 0.064±0.002 / 0.165±0.002 | 0.074±0.000 / 0.176±0.000 | 0.057±0.000 / 0.152±0.000
Horizon 24 | 0.083±0.002 / 0.188±0.002 | 0.088±0.000 / 0.194±0.000 | 0.073±0.003 / 0.173±0.004
Horizon 48 | 0.114±0.004 / 0.223±0.003 | 0.110±0.001 / 0.219±0.002 | 0.096±0.002 / 0.195±0.002
Horizon 96 | 0.156±0.001 / 0.264±0.001 | 0.135±0.003 / 0.244±0.003 | 0.120±0.003 / 0.218±0.003

E Showcase

E.1 Visualization of the Core

In this section, we present a visualization of the core. The visualization is generated by using a frozen state of our trained model to capture the series embeddings from the final encoder layer. These embeddings are then used as inputs to a two-layer MLP autoencoder, whose function is to map the high-dimensional embeddings back to the original input series. The visualization is shown in Figure 11. Highlighted by the red line, the core captures the global trend across all the channels.

Figure 11: Visualization of the core, represented by the red line, alongside the original input channels. We freeze our model and extract the series embeddings from the last encoder layer to train a two-layer MLP autoencoder. This autoencoder maps the embeddings back to the original series, allowing us to visualize the core effectively.
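The mean ± standard deviation entries reported in Table 9 (Appendix D) can be produced with a helper like the one below. This is our own sketch, and the metric values fed to it here are hypothetical; whether the paper uses the population or the sample standard deviation is not stated, so the population form (`ddof=0`) is assumed.

```python
import numpy as np

def summarize_seeds(values):
    """Aggregate one metric across runs with different random seeds.

    Returns a 'mean±std' string in the style of Table 9 (population std assumed).
    """
    v = np.asarray(values, dtype=float)
    return f"{v.mean():.3f}±{v.std():.3f}"

# e.g. five hypothetical ETTm1 MSE runs at horizon 96:
print(summarize_seeds([0.323, 0.324, 0.325, 0.326, 0.327]))  # → 0.325±0.001
```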
E.2 Visualization of Predictions

To provide a more intuitive demonstration of our model's performance, we present prediction showcases on the ECL (Figure 12), ETTh2 (Figure 13), Traffic (Figure 14), and PEMS03 (Figure 15) datasets. We also include prediction showcases from iTransformer and PatchTST on these datasets. The lookback window length and horizon are both set to 96.

Figure 12: Visualization of prediction on the ECL dataset with lookback window 96, horizon 96. (a) SOFTS (ours), (b) iTransformer, (c) PatchTST.

E.3 More Results on Adaptation of Series Embedding

In this section, we show more results on the series embedding adaptation of our STAR module, similar to the showcases in Figures 6a and 6b. The number of channels should be large enough to show the relationship between channels in the embedding space. Therefore, we select the ECL, PEMS03, and Traffic datasets, with 321, 358, and 862 channels, respectively. Figure 16 shows the results on these datasets.

Figure 13: Visualization of prediction on the ETTh2 dataset with lookback window 96, horizon 96. (a) SOFTS (ours), (b) iTransformer, (c) PatchTST.

Figure 14: Visualization of prediction on the Traffic dataset with lookback window 96, horizon 96. (a) SOFTS (ours), (b) iTransformer, (c) PatchTST.

E.4 Visualization of Predictions on Abnormal Channels

As stated in Section 4.2, after being adjusted by STAR, abnormal channels can be clustered towards normal channels by exchanging channel information. In this section, we choose two abnormal channels in the ECL and PEMS03 datasets to demonstrate our SOFTS model's advantage in handling noise from abnormal channels. As shown in Figure 17, the value of channel 160 in PEMS03 experiences a sharp decrease followed by a smooth period. In this case, SOFTS is able to capture the slowly increasing trend effectively. Similarly, in Figure 18, the signal of channel 298 in ECL resembles the sum of an impulse function and a step function, which lacks a continuous trend.
Here, our SOFTS model provides a more stable prediction compared to the other two models.

F Limitations and Future Works

While the Series-cOre Fused Time Series (SOFTS) forecaster demonstrates significant improvements in multivariate time series forecasting, several limitations must be acknowledged; these provide directions for future work.

Dependence on core representation quality. The effectiveness of the STAR module depends heavily on the quality of the global core representation. If this core representation does not accurately capture the essential features of the individual series, the model's performance may degrade. Ensuring the robustness and accuracy of this core representation across diverse datasets remains a challenge that warrants further research.

Limited exploration of alternative aggregate-redistribute strategies. Although the STAR module effectively aggregates and redistributes information, the exploration of alternative strategies is limited. Future work could investigate other methods for aggregation and redistribution to identify potentially more effective approaches, thereby enhancing the performance and robustness of the model.

Figure 15: Visualization of prediction on the PEMS03 dataset with lookback window 96, horizon 96. (a) SOFTS (ours), (b) iTransformer, (c) PatchTST.

Figure 16 panels (t-SNE series embeddings, with forecasting MSE): (a) ECL before STAR, MSE = 0.161; (b) Traffic before STAR, MSE = 0.414; (c) PEMS03 before STAR, MSE = 0.072; (d) ECL after STAR, MSE = 0.143; (e) Traffic after STAR, MSE = 0.376; (f) PEMS03 after STAR, MSE = 0.064.

Figure 16: t-SNE visualization of series embeddings before and after STAR adjustment, for ECL with a lookback window of 96 and horizon of 96, Traffic with a lookback window of 96 and horizon of 96, and PEMS03 with a lookback window of 96 and horizon of 12. (a)-(d), (b)-(e), (c)-(f): the abnormal channel (⋆) is initially located far from the other channels.
After adjustment by STAR, the abnormal channel clusters towards the normal channels (△) by exchanging channel information. Adapted series embeddings consistently improve forecasting performance based on the MSE metric.

Figure 17: Visualization of prediction on an abnormal channel in the PEMS03 dataset with lookback window 96, horizon 96. (a) SOFTS (ours), (b) iTransformer, (c) PatchTST.

Figure 18: Visualization of prediction on an abnormal channel in the ECL dataset with lookback window 96, horizon 96. (a) SOFTS (ours), (b) iTransformer, (c) PatchTST.

G Societal Impacts

The development of the Series-cOre Fused Time Series (SOFTS) forecaster has the potential to significantly benefit various fields such as finance, traffic management, energy, and healthcare by improving the accuracy and efficiency of time series forecasting, thereby enhancing decision-making processes and optimizing operations. However, there are potential negative societal impacts to consider. Privacy concerns may arise from the use of personal data, especially in healthcare and finance, leading to possible violations if data is not securely handled. Additionally, biases in the data could result in unfair outcomes, perpetuating or exacerbating existing disparities. Over-reliance on automated forecasting models might lead to neglect of important contextual or qualitative factors, causing adverse outcomes when predictions are incorrect. To mitigate these risks, robust data protection protocols should be implemented, and continuous monitoring for bias is necessary to ensure fairness. Developing ethical use policies and maintaining human oversight in decision-making can further ensure that the deployment of SOFTS maximizes its positive societal impact while minimizing potential negative consequences.

NeurIPS Paper Checklist

1. Claims

Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes] Justification: The abstract and introduction clearly outline the novel SOFTS module, which is supported by experimental results demonstrating its superior performance and efficiency. Guidelines: •The answer NA means that the abstract and introduction do not include the claims made in the paper. •The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. •The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. •It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2.Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We have discussed the limitations of our work in Appendix F. Guidelines: •The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. • The authors are encouraged to create a separate "Limitations" section in their paper. •The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. •The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. •The authors should reflect on the factors that influence the performance of the approach. 
For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. •The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. •If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. •While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3.Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: The paper does not include theoretical results. Guidelines: • The answer NA means that the paper does not include theoretical results. •All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. •All assumptions should be clearly stated or referenced in the statement of any theorems. •The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. •Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. • Theorems and Lemmas that the proof relies upon should be properly referenced.
4.Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We have provided all the information needed to reproduce the main experimental results of the paper in Appendix B. Guidelines: • The answer NA means that the paper does not include experiments. •If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. •If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. •Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. •While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example: (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5.Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: We have provided anonymized code for review along with the scripts processing the data. The code will be made public once the paper is accepted. The datasets for the main experiments are already open, as mentioned in Appendix A. Guidelines: • The answer NA means that paper does not include experiments requiring code. •Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. •While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). •The instructions should contain the exact command and environment needed to run to reproduce the results.
See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. •The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. •The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. •At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). •Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. 6.Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: We have included all the training and test details in Appendix A and Appendix B.3. Guidelines: • The answer NA means that the paper does not include experiments. •The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. •The full details can be provided either with the code, in appendix, or as supplemental material. 7.Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: We have provided results accompanied by error bars in Table 9. Guidelines: • The answer NA means that the paper does not include experiments.
•The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. •The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). •The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) • The assumptions made should be given (e.g., Normally distributed errors). •It should be clear whether the error bar is the standard deviation or the standard error of the mean. •It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. •For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). •If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. 8.Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: We provide information on computer resources in Appendix B.3, and memory/time consumption of our model in Section 4.1. Guidelines: • The answer NA means that the paper does not include experiments. •The paper should indicate the type of compute workers (CPU or GPU, internal cluster, or cloud provider), including relevant memory and storage.
•The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. •The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). 9.Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)? Answer: [Yes] Justification: We conform with the NeurIPS Code of Ethics in every respect. Guidelines: •The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. •If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. •The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). 10.Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: We have discussed both potential positive societal impacts and negative societal impacts of our work in Appendix G. Guidelines: • The answer NA means that there is no societal impact of the work performed. •If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. •Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. •The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments.
However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. •The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. •If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). 11.Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: The paper poses no such risks. Guidelines: • The answer NA means that the paper poses no such risks. •Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. •Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. 
•We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. 12.Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: We cite the original paper for all the baselines in Section 4.1, and give the URL of the datasets in Appendix A. Guidelines: • The answer NA means that the paper does not use existing assets. • The authors should cite the original paper that produced the code package or dataset. •The authors should state which version of the asset is used and, if possible, include a URL. • The name of the license (e.g., CC-BY 4.0) should be included for each asset. •For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. •If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. •For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. •If this information is not available online, the authors are encouraged to reach out to the asset's creators. 13.New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: The paper does not release new assets. Guidelines: • The answer NA means that the paper does not release new assets. •Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates.
This includes details about training, license, limitations, etc. •The paper should discuss whether and how consent was obtained from people whose asset is used. •At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. 14.Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: The paper does not involve crowdsourcing nor research with human subjects. Guidelines: •The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. •Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. •According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. 15.Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: The paper does not involve crowdsourcing nor research with human subjects. Guidelines: •The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. •Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research.
If you obtained IRB approval, you should clearly state this in the paper. •We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. •For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. | 6 | 1 | The SOFTS architecture is based on an MLP and is designed to have linear complexity in terms of the number of channels, while the datasets reported have around 170 to 883 channels. A reasonable estimate for the number of parameters in this model, given the MLP structure, is on the order of 1-5 million parameters. Considering the relatively small size of the model and typical datasets in the time series domain, training can be performed efficiently with a batch size around 16-32. The paper suggests that the model can efficiently handle datasets of moderate size (6 public datasets, with time series data usually ranging in the thousands for training instances). Assuming training for 50 epochs, a common choice, and given that training can be run on a single GPU without the need for extensive memory due to the linear-complexity design, we estimate that training would take about 6 hours on a single modern GPU (like an NVIDIA RTX 3090). Therefore, this model can indeed be trained in under 8 hours on a single GPU. | yes | Yes | Time Series | SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion | 2024-04-22 0:00:00 | https://github.com/secilia-cxy/softs | 1 | https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy | 30s for 336 seq | https://colab.research.google.com/drive/14p_kyKxFS9780yR-GJpq4foiumZUznJ4?usp=sharing | Yes | -- I have listed the requirements to install in the Colab cell. Just need to comment out some lines to run for only the 336-seq setting. |
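The "1-5 million parameters" estimate in the reasoning above can be cross-checked with a back-of-envelope count. This is our sketch only: the hidden width (512), encoder depth (2), lookback (96), and horizon (720) below are illustrative assumptions, not values taken from the SOFTS paper or codebase.

```python
# Rough parameter count for an MLP-based, channel-shared forecaster in the
# SOFTS style. All dimensions here are illustrative guesses.
LOOKBACK, HORIZON = 96, 720    # assumed input / output lengths
D_HIDDEN, N_LAYERS = 512, 2    # assumed model width / encoder depth

def linear_params(n_in, n_out):
    # weight matrix plus bias vector of one fully connected layer
    return n_in * n_out + n_out

embed = linear_params(LOOKBACK, D_HIDDEN)
# per encoder layer: roughly two d->d projections (aggregate + redistribute)
encoder = N_LAYERS * 2 * linear_params(D_HIDDEN, D_HIDDEN)
head = linear_params(D_HIDDEN, HORIZON)

total = embed + encoder + head
print(total)  # 1469648, i.e. ~1.5M, inside the quoted 1-5M range
```

Note that under these assumptions the count is independent of the number of channels (170 to 883 above), which is consistent with the claimed linear per-channel complexity.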
FER2013 | VGG based | [] | IdentiFace : A VGG Based Multimodal Facial Biometric System | 2024-01-02T00:00:00 | https://arxiv.org/abs/2401.01227v2 | [
"https://github.com/MahmoudRabea13/IdentiFace"
] | {'5-class test accuracy': '66.13%'} | [
"5-class test accuracy"
] | Given the following paper and codebase:
Paper: IdentiFace : A VGG Based Multimodal Facial Biometric System
Codebase: https://github.com/MahmoudRabea13/IdentiFace
Improve the VGG based model on the FER2013 dataset. The result
should improve on the following metrics: {'5-class test accuracy': '66.13%'}. You must use only the codebase provided.
| IdentiFace: A VGGNet-Based Multimodal Facial Biometric System Mahmoud Rabea, Hanya Ahmed, Sohaila Mahmoud, Nourhan Sayed Systems and Biomedical Department, Faculty of Engineering, Cairo University Abstract - The development of facial biometric systems has contributed greatly to the development of the computer vision field. Nowadays, there is always a need to develop a multimodal system that combines multiple biometric traits in an efficient yet meaningful way. In this paper, we introduce "IdentiFace", a multimodal facial biometric system that combines the core of facial recognition with some of the most important soft biometric traits such as gender, face shape, and emotion. We also focused on developing the system using only a VGG-16-inspired architecture with minor changes across the different subsystems. This unification allows for simpler integration across modalities and makes it easier to interpret the features learned across tasks, which gives a good indication of the decision-making process across the facial modalities and their potential connections. For the recognition problem, we acquired a 99.2% test accuracy for five classes with high intra-class variations using data collected from the FERET database [1]. We achieved 99.4% on our dataset and 95.15% on the public dataset [2] for the gender recognition problem. We were also able to achieve a testing accuracy of 88.03% on the face-shape problem using the celebrity face-shape dataset [3]. Finally, we achieved a testing accuracy of 66.13% on the emotion task, which is very acceptable compared to related work on the FER2013 dataset [4]. Keywords - Biometrics, Computer Vision, Deep Learning, Multimodal System, Facial Recognition Introduction Think about the face as a map, filled with unique features like the shape of the eyes, the movement of the eyebrows, the curve of the lips, and other special details.
We're going to talk about how this map helps us recognize people, understand their emotions, and even guess their gender. It's amazing how much information our faces hold! When we look at someone's face, we can do more than just recognize them. We can understand the emotions they're feeling, like happiness, sadness, or surprise, just by looking at their expressions. We can also make educated guesses about whether someone is a man or a woman based on their facial features, and we can predict a person's face shape. All of this is part of what we call facial biometrics. Facial biometrics, in a nutshell, involves using these special facial features to identify people, analyze emotions, and infer gender. But here's the kicker: for something to be considered a true biometric, it needs to meet the two-thirds rule; that means it must have a specific threshold of uniqueness: your face, for instance, has to be at least 66.67% different from anyone else's for it to be a reliable biometric identifier. Luckily, the face does possess this kind of distinctiveness, which makes it one of the leading noninvasive, low-cost methods available for use as a biometric, and is why many people, including us, opt for it. During our work, we adhered to this rule, which played a pivotal role in guiding our approach to facial biometrics. Understanding the significance of this benchmark, we sought to ensure that any model or algorithm we developed met or exceeded this standard of distinctiveness, which led us to rigorously evaluate and refine our models. We conducted meticulous testing and analysis, measuring the uniqueness and accuracy of the facial features used for identification, emotion analysis, and gender inference. Our objective was clear: we would not settle for any model whose reliability fell below the 68% mark.
We have also worked on collecting our own dataset, which we aimed to make as diverse and uncontrolled as possible, to help generalize our results. Related Work 2.1 Face Recognition The field of face recognition has witnessed significant advancements. Our study draws inspiration from the influential VGG-16 architecture proposed by Simonyan and Zisserman (2014) [5]. Compared to other conventional methods for facial recognition, deep learning has been found to achieve more promising results [6], which is what made us choose the VGG model for our system. Our model harnesses the depth and structure of VGG-16 to further refine and enhance the accuracy of face recognition systems. 2.2 Gender Classification Since it was first proposed by Karen Simonyan and Andrew Zisserman [5], many researchers have tried to use VGGNet to perform gender classification on different datasets. Some papers showed that VGGNet-based gender classification can outperform existing architectures [7], while other papers investigated challenging datasets and reached high accuracies [8]. Transfer learning using VGGNet has also shown promising performance on gender classification [9]. The main struggle when dealing with gender classification tasks based on a VGG architecture is that you must have a large dataset to match the complexity of the model, and you should clean your data as much as you can. 2.3 Face-Shape Prediction Face-shape prediction is considered a tricky task. The manual labeling of the data and the way different shapes overlap make each model perform differently from the others. Many papers have addressed this problem, indicating the trade-off between high accuracy and the number of classes. The VGG architecture, especially the pre-trained VGG-Face, has been widely used to address this problem [10]. The main limitation of this problem is the variation of poses and face alignment in the picture.
This is typically addressed by applying 68-landmark networks that detect the face shape from the connections between the landmarks, making the approach robust to many pose variations. [1] Dlib 68-landmarks created by Brandon Amos of CMU, who works on OpenFace. 2.4 Emotion Recognition Emotion recognition using face detection and normalization was proposed by Cui et al. in 2016. The CNN classifier was utilized as multiple classifiers for different face regions: a CNN is applied to the entire face image, then separate CNNs are framed for the mouth area, the eye area, and likewise for other regions. The recognition accuracy for the happy, sad, disgust, and surprise expressions reached no less than 98%, while the recognition accuracy for the angry and fear expressions was a little lower, at about 96.7% and 94.7%, respectively [11]. Zhang et al. used a different approach of localization followed by a CNN architecture as well [12]. Clawson et al. [13] observed that specific facial areas exhibit more prominent features for certain subtle emotional expressions. Leveraging this insight, they compare the accuracy of 'full-face' CNN modeling against upper and lower facial region models for recognizing emotions in static images. Additionally, they propose a human-centric CNN hierarchy achieved by histogram equalization and deploying a deep learning model. This hierarchy significantly boosts classification accuracy compared to separate CNN models, achieving a 93.3% true positive classification rate overall. Dataset 3.1 Face Recognition For the recognition task, we employed the Color FERET dataset [1] from NIST, containing 11,338 facial images of 994 individuals. This dataset encompasses 13 distinct poses, each annotated with the degree of facial rotation. Moreover, certain subjects have images with and without glasses, while others exhibit diverse hairstyles across their pictures.
We specifically utilized the scaled-down versions of these images, sized at 256 x 484 pixels. This dataset was selected for its wide array of variations, aiding in training models to generalize effectively to new subjects. Additionally, we augmented the database by incorporating four new subjects, enabling us to test it across various scenarios, including ourselves in different variations. 3.2 Gender Classification For the gender problem, we collected our dataset from members of the faculty. The data initially consisted of 15 unique males and 8 unique females, with most of them having more than one image with multiple variations to increase the data size. We then increased the number of unique subjects and ended up with 31 unique males and 27 unique females, with a total of 133 male and 66 female images. No training/validation split was done during the collection process; it was done during the preprocessing phase. For the sake of comparison, we chose a popular gender recognition dataset [2]. The dataset was split into training data with almost 23000 images per class and about 5500 images per class for validation. We chose this particular dataset as it has proved its efficiency for over 4 years and is well preprocessed and well structured. [2] Public Gender Dataset 3.3 Face-Shape Prediction Due to the complexity of this task, we couldn't collect our own dataset, as it would have required manual labeling, which isn't a best practice. We chose to work with the most popular face-shape dataset, the celebrity face-shape dataset [3]. The dataset was published back in 2019 and consists of only female subjects, with 100 images per class for a total of five classes (Round / Oval / Square / Oblong / Heart). [3] Face-shape dataset, square-class samples 3.4 Emotion Recognition For this task, we first collected our own dataset of 38 subjects, divided into 22 males and 16 females.
Each subject had a total of 7 images, one per emotion, giving a total of 266 images with 38 images per class. Images of each class were labeled manually, which was challenging due to the variety between subjects when asked to show a specific emotion. Also, some of them had similar facial expressions for more than one class, which made labeling the images and the classification process much more challenging, as the classes were overlapping. To overcome the challenge of collecting a proper emotion dataset, we used the FER2013 dataset [4], which is public and consists of over 30000 images with 7 main classes (Angry/Disgust/Fear/Happy/Sad/Surprise/Neutral). All images are converted into 48x48 grayscale images with an almost balanced distribution across all classes. [4] FER2013 dataset Methodology We aimed to have a single network that can adapt to multiple facial problems with some minor changes between each problem. After experimenting with different architectures, we chose the VGGNet architecture as our main network for the multimodal system. [5] "VGG-16: CNN model," GeeksforGeeks, https://www.geeksforgeeks.org/vgg-16-cnn-model/ We experimented with the basic VGG-16 and ended up simplifying it by keeping only 3 main blocks and removing the last 2 convolutional blocks. This was mainly done to reduce the number of parameters and the overall complexity of the model, since it was already performing well on the various tasks.
A general summary of the model with the number of layers, output shapes, and the number of parameters is provided in the following table:

Layer / output shape / number of parameters:
Conv2D / (None, 128, 128, 64) / 640
Conv2D / (None, 128, 128, 64) / 36928
MaxPooling2D / (None, 64, 64, 64) / 0
Conv2D / (None, 64, 64, 128) / 73856
Conv2D / (None, 64, 64, 128) / 147584
MaxPooling2D / (None, 32, 32, 128) / 0
Conv2D / (None, 32, 32, 256) / 295168
Conv2D / (None, 32, 32, 256) / 590080
Conv2D / (None, 32, 32, 256) / 590080
MaxPooling2D / (None, 16, 16, 256) / 0
Flatten / (None, 65536) / 0
Dense / (None, 512) in all tasks except the emotion task, where it is (None, 2048) / 33554944
Dropout / rate 0.5 in all tasks / 0
Dense (classification layer) / output and parameters depend on the task
Figure (1) Model Summary

Finally, when compiling the model, we applied the Adam optimizer with sparse categorical cross-entropy as our loss function. Early stopping is also used to prevent the model from overfitting.

4.1 Preprocessing
For all tasks except face recognition, a general preprocessing is applied as follows:
1. Face detection using Dlib's 68 facial landmarks.
2. All detected faces are cropped, and images with no faces are filtered out.
3. The faces are then resized to (128, 128) and converted to grayscale.
Once the resizing is done, each dataset is augmented differently to ensure a balanced dataset across all tasks. For the face recognition task, however, the preprocessing was as follows:
1. Face detection using Dlib's CNN-based face detector.
2. Cropping the identified faces and converting them to grayscale.
3. Resizing to 128x128 pixels.
4. Changing the number of classes to five: Hanya, Mahmoud, Nourhan, Sohaila, and Other.
Note that some of the public datasets were already preprocessed, so we only performed a checking step to ensure the data was preprocessed the way we desired.

4.2 Augmentation
Augmentation was applied only to the unbalanced and small datasets to ensure a fair distribution across all classes.
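The balancing arithmetic behind augmentation can be sketched as below; `augmentation_factor` is a hypothetical helper illustrating a simple ceiling rule, and the per-image factors actually used in the project may differ:

```python
import math

# Hypothetical helper: how many copies of each image are needed to grow a
# class from its current size to a balanced target size.
def augmentation_factor(current, target):
    return math.ceil(target / current)

counts = {"male": 133, "female": 66}   # class sizes in our gender dataset
target = 2500                          # balanced size per class
factors = {c: augmentation_factor(n, target) for c, n in counts.items()}
print(factors)  # {'male': 19, 'female': 38}
```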
Different augmentation techniques included:

Horizontal Flip: our gender dataset, face recognition dataset, face-shape dataset
Rotation (30 left, 30 right, 15 left, 15 right): our gender dataset
Rotation (10 left, 10 right, 5 left, 5 right): face-shape dataset
Rotation (10 right, 10 left): face recognition dataset
Figure (2) Augmentation Techniques

Images per class before augmentation, after augmentation, and the augmentation factor applied to each single image:
our gender dataset: before Male 133, Female 66; after Male 2500, Female 2500; factor Male 107, Female 221
Face-shape dataset: before Round 93, Oval 95, Square 100, Oblong 100, Heart 99; after Round 558, Oval 570, Square 600, Oblong 600, Heart 594; factor 6 for every class
Face recognition: before Hanya 55, Mahmoud 100, Nour 50, Sohaila 34; after Hanya 55, Mahmoud 100, Nour 50, Sohaila 34; factor Hanya 9, Mahmoud 5, Nour 10, Sohaila 14
Figure (3) Augmentation Results

For face recognition, we initially had 11,338 images for the "Other" class obtained from the color FERET dataset, so we reduced it to 500 to avoid overfitting. Some datasets, like FER2013 and the public gender recognition dataset, did not need augmentation, as their distribution was already balanced with many images per class.

Results and Discussion
5.1 Face Recognition
A train-test split ratio of 80-20 was used on our processed and augmented dataset. The following parameters were used to train our model:
- lr = 0.0001
- batch_size = 32
- test_size = 0.2
- epochs = 100

Train: loss 0.0099, accuracy 99.7%; Test: loss 0.0322, accuracy 99.2%
Figure (4) Recognition evaluation
Figure (5) Recognition confusion matrix

5.2 Gender Classification
Instead of addressing this task as a binary task, we viewed gender classification as a multi-class classification problem, labeling female subjects with 0 and male subjects with 1.
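The sparse categorical cross-entropy loss used when compiling the models takes integer labels directly; a minimal sketch on a toy batch (the label/probability values are illustrative):

```python
import math

# Sparse categorical cross-entropy for integer labels:
# per-sample loss = -log(predicted probability of the true class).
def sparse_categorical_crossentropy(y_true, y_pred):
    return sum(-math.log(p[t]) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy batch: labels 0 = female, 1 = male, with softmax outputs per sample.
labels = [0, 1, 1]
probs = [[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]
print(round(sparse_categorical_crossentropy(labels, probs), 4))  # 0.2798
```

Because the labels stay as integers, no one-hot encoding step is needed before training.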
The following parameters were used to train both models (the public-dataset model and our-dataset model):
- lr = 0.0001
- batch_size = 128
- test_size = 0.2
- epochs = 3 with patience = 2 for our dataset; epochs = 15 with patience = 3 for the public dataset

our dataset: train loss 0.0412, train accuracy 99.5%; test loss 0.0443, test accuracy 99.42%
public dataset: train loss 0.1027, train accuracy 96.48%; test loss 0.1340, test accuracy 95.15%
Figure (6) Gender Evaluation
Figure (7) collected dataset confusion matrix
Figure (8) public dataset confusion matrix

Female: precision 95%, recall 96%, f1-score 95%
Male: precision 96%, recall 95%, f1-score 95%
Figure (9) classification report for the final used model (public dataset model)

As observed, the two models achieved outstanding scores, mainly due to the good quality of the data and the fact that gender classification is considered relatively easy compared to the other face-based prediction tasks.

5.3 Face-Shape Prediction
To address this task, we tried two different models: one for all classes and another for only three classes (oblong / square / round). This was done to observe how the model performs with classes that minimally overlap and compare it with the model containing all classes.
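The per-class precision, recall, and F1 values in the classification reports can be recomputed from a confusion matrix; a small sketch with a toy two-class matrix (the numbers are illustrative, not the project's actual matrix):

```python
# Per-class precision/recall/F1 from a confusion matrix
# (rows = true class, columns = predicted class).
def per_class_metrics(cm):
    n = len(cm)
    out = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp
        fn = sum(cm[c]) - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        out.append((precision, recall, f1))
    return out

# Toy matrix: 96/100 females and 95/100 males classified correctly.
cm = [[96, 4], [5, 95]]
for cls, (p, r, f) in zip(["female", "male"], per_class_metrics(cm)):
    print(cls, round(p, 2), round(r, 2), round(f, 2))
```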
Class labels: 3-classes model: oblong 0, square 1, round 2; all-classes model: oblong 0, square 1, round 2, heart 3, oval 4.
Figure (10) Face-shape labeling

For the two models, the following parameters were used:
- lr = 0.0001
- batch_size = 128
- test_size = 0.2
- epochs = 30 with patience = 7

3 classes: train loss 0.0181, train accuracy 99.64%; test loss 0.1942, test accuracy 94.03%
All classes: train loss 0.0167, train accuracy 99.79%; test loss 0.4485, test accuracy 88.03%
Figure (11) Face-shape Evaluation
Figure (12) 3 classes confusion matrix
Figure (13) all classes confusion matrix

oblong: precision 91%, recall 87%, f1-score 89%
square: precision 87%, recall 95%, f1-score 91%
round: precision 95%, recall 90%, f1-score 93%
heart: precision 85%, recall 81%, f1-score 83%
oval: precision 82%, recall 88%, f1-score 85%
Figure (14) classification report for the final used model (all classes model)

The results show that, as the number of classes increases, the model starts to confuse similar classes due to overlapping. We also compared the predictions of our model with those of well-known websites, and the results were very subjective: each website produced a different prediction, mainly due to the data it was trained on. Further improvements could be made by filtering the dataset or combining similar classes.

5.4 Emotion Recognition
We manually divided our dataset into train and test sets with a split ratio of 70-30, hiding some subjects from the training data to ensure the models focus on learning emotional features rather than facial identity. We used two approaches for this recognition problem: Support Vector Machines (SVM) and Convolutional Neural Networks (CNN).

5.4.1 SVM
We used various techniques for the features given to the SVM model. Also, due to the significant similarities between the classes, we tried reducing the 7 emotions to 3 to obtain minimal overlap: fear, anger, and happiness. The results of all techniques are shown in Figure (15). The highest accuracy achieved using SVM was with the 3-classes 68-landmarks SVM model, at 83%; its confusion matrix is shown in Figure (16).
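The report does not detail how the 68 Dlib landmarks were encoded for the SVM; one plausible encoding, shown purely as an assumption, is to normalize the (x, y) coordinates by the face bounding box and flatten them into a 136-dimensional feature vector:

```python
# Hedged sketch: one way to turn 68 facial landmarks into an SVM feature
# vector. The exact encoding used in the project is not specified; here the
# (x, y) coordinates are normalized by the face bounding box and flattened.
def landmarks_to_features(landmarks):
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    w = (max(xs) - min(xs)) or 1
    h = (max(ys) - min(ys)) or 1
    feats = []
    for x, y in landmarks:
        feats.append((x - min(xs)) / w)
        feats.append((y - min(ys)) / h)
    return feats  # 136-dimensional vector for 68 points

fake_landmarks = [(i * 2, (i * 3) % 50) for i in range(68)]  # toy coordinates
vec = landmarks_to_features(fake_landmarks)
print(len(vec))  # 136
```

Normalizing by the bounding box makes the features translation- and scale-invariant, which matters when faces appear at different positions and sizes.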
SVM results on our dataset (features, number of classes, accuracy, precision, recall, F1 score):
face features, 7 classes: accuracy 24%, precision 23%, recall 24%, F1 23%
face features, 3 classes: accuracy 67%, precision 69%, recall 67%, F1 66%
68 landmarks, 7 classes: accuracy 34%, precision 35%, recall 34%, F1 34%
68 landmarks, 3 classes: accuracy 83%, precision 84%, recall 83%, F1 83%
LBP features, 3 classes: accuracy 47%, precision 48%, recall 47%, F1 42%
GF features, 3 classes: accuracy 30%, precision 19%, recall 30%, F1 20%
Figure (15) SVM Models Results on our dataset
Figure (16) 3 emotions 68-landmarks SVM confusion matrix

5.4.2 CNN
First, we tried different CNN architectures, simple and complex, on our dataset and achieved lower accuracies than SVM, as the dataset is too small, even after augmentation, for a deep learning approach that requires a large dataset for good results. The results of some CNN models are shown in Figure (17).

Base model with no dense layer (lr = 0.0001, epochs = 5): train loss 2.9015, train accuracy 19.89%; test loss 2.7039, test accuracy 12.5%
Simple model with one dense layer (lr = 0.0001, batch size = 32, epochs = 10): train loss 2.458x10^(-5), train accuracy 100%; test loss 7.6586, test accuracy 19.02%
Figure (17) Results of CNN models on our dataset

5.4.3 VGG
Lastly, we used the FER2013 dataset to improve the results using VGGNet. The dataset is considered a complex challenge, with an average test accuracy of 60-65%. To counter this, we filtered out the 2 emotions with the most noise (disgust and fear). We also first tried a model without the sad emotion to assess how this emotion affects the behavior of the model.
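The class-filtering step (dropping the two noisiest FER2013 emotions) can be sketched as below; the label order follows the five-emotion labeling used for the final model, while `filter_dataset` and the sample tuples are illustrative:

```python
# Sketch: drop the two noisiest FER2013 emotions (disgust, fear) and map the
# remaining class names to contiguous integer labels.
KEPT = ["neutral", "happy", "angry", "surprise", "sad"]  # five-emotion labels 0..4
new_label = {c: i for i, c in enumerate(KEPT)}

def filter_dataset(samples):
    # samples: list of (image, class_name) pairs; dropped classes vanish.
    return [(img, new_label[c]) for img, c in samples if c in new_label]

data = [("img0", "happy"), ("img1", "fear"), ("img2", "sad")]
print(filter_dataset(data))  # [('img0', 1), ('img2', 4)]
```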
Emotion labels: four-emotions model: neutral 0, happy 1, angry 2, surprise 3; five-emotions model: neutral 0, happy 1, angry 2, surprise 3, sad 4.
Figure (18) Emotions labeling

For the two models, the following parameters were used:
- lr = 0.0001
- batch_size = 128
- test_size = 0.2
- epochs = 40 with patience = 7

four emotions: train loss 0.3920, train accuracy 86.94%; test loss 0.7201, test accuracy 73.14%
five emotions: train loss 0.5483, train accuracy 81.26%; test loss 0.9161, test accuracy 66.13%
Figure (19) Emotion Evaluation
Figure (20) four emotions confusion matrix
Figure (21) five emotions confusion matrix

neutral: precision 61%, recall 54%, f1-score 57%
happy: precision 80%, recall 78%, f1-score 79%
angry: precision 59%, recall 54%, f1-score 56%
surprise: precision 72%, recall 82%, f1-score 77%
sad: precision 53%, recall 60%, f1-score 56%
Figure (22) classification report for the final used model (5 emotions)

Given the complexity of the task, the low-quality dataset, and the fact that emotions overlap and vary from person to person, these results are considered satisfactory. During testing, we added a percentage prediction for the two highest emotions; this improved the predictions and gave a better estimate of how people often have mixed feelings.

5.5 GUI
To visualize our results, we developed "IdentiFace", a PySide-based desktop application written in Python. The GUI mainly consists of:
- A welcome landing window
- An offline window, where you can upload an image and perform the required classification/prediction
- An online window, where you can open your laptop camera and perform real-time detection
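The percentage prediction for the two highest emotions mentioned above can be sketched as a softmax over raw class scores followed by keeping the two largest probabilities (the scores below are illustrative):

```python
import math

# Report the two most likely emotions as percentages, as shown in the GUI.
def top2_emotions(scores, classes):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = sorted(zip(classes, (e / total for e in exps)),
                   key=lambda kv: kv[1], reverse=True)
    return [(c, round(100 * p, 1)) for c, p in probs[:2]]

classes = ["neutral", "happy", "angry", "surprise", "sad"]
print(top2_emotions([0.2, 2.1, 0.1, 1.4, 0.3], classes))
```

Reporting two percentages instead of a single hard label gives a better estimate when expressions mix, e.g. a face that is both happy and surprised.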
Note that the recognizer demands high-quality images, so to accommodate this we only added the recognizer to the offline window. [6] Welcome window [7] Offline mode [8] Offline mode 2 [9] Online mode

Conclusion
After taking many approaches, trying different techniques, collecting our own dataset for each task, and using other datasets for face recognition, gender classification, face-shape detection, and emotion recognition, we decided to use the VGGNet model, as it showed the best results across all the addressed tasks with the following datasets: color FERET [1], a public gender classification dataset [2], the celebrity face shape dataset [3] for face-shape detection, and the FER2013 dataset [4] for emotion recognition. We combined all the best-performing models into one system called IdentiFace, a multimodal facial biometric system that combines facial recognition with gender, face-shape, and emotion prediction. Finally, we have a fully operational facial biometric system based on the VGGNet architecture that can identify people, genders, face shapes, and emotions, both in real time and offline.

Acknowledgements
We would like to thank everyone who helped in this project, especially our TA Laila Abbas. We would also like to thank our colleagues who participated in the data collection process and everyone in the biometrics course.

References
[1] "Color FERET database," NIST, https://www.nist.gov/itl/products-and-services/color-feret-database.
[2] "Gender Classification Dataset," www.kaggle.com, https://www.kaggle.com/datasets/cashutosh/gender-classification-dataset/data
[3] "A hybrid approach to building face shape classifier for hairstyle recommender system," https://www.researchgate.net/publication/328775300_A_Hybrid_Approach_to_Building_Face_Shape_Classifier_for_Hairstyle_Recommender_System.
[4] "FER-2013," www.kaggle.com, https://www.kaggle.com/datasets/msambare/fer2013
[5] K. Simonyan and A.
Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," published as a conference paper at ICLR 2015. Available: https://arxiv.org/pdf/1409.1556.pdf
[6] M. Wang and W. Deng, "Deep face recognition: A survey," arXiv, https://arxiv.org/abs/1804.06655.
[7] A. Dhomne, R. Kumar, and V. Bhan, "Gender Recognition Through Face Using Deep Learning," Procedia Computer Science, vol. 132, pp. 2-10, 2018, doi: https://doi.org/10.1016/j.procs.2018.05.053.
[8] O. Arriaga, P. Plöger, and M. Valdenegro, "Real-time Convolutional Neural Networks for Emotion and Gender Classification." Available: https://arxiv.org/pdf/1710.07557.pdf
[9] P. Smith and C. Chen, "Transfer Learning with Deep CNNs for Gender Recognition and Age Estimation." Available: https://arxiv.org/pdf/1811.07344.pdf
[10] I. J. Goodfellow et al., "Challenges in representation learning: A report on three machine learning contests," Neural Networks, vol. 64, pp. 59-63, Apr. 2015, doi: https://doi.org/10.1016/j.neunet.2014.09.005.
[11] R. Cui, M. Liu, and M. Liu, "Facial expression recognition based on ensemble of multiple CNNs," in Proc. Chinese Conf. Biometric Recognit., 2016, pp. 511-518. Available: https://link.springer.com/chapter/10.1007/978-3-319-46654-5_56
[12] C. Zhang, P. Wang, and K. Chen, "Identity-aware convolutional neural networks for facial expression recognition," J. Syst. Eng. Electron., vol. 28, pp. 784-792, 2017. Available: https://ieeexplore.ieee.org/abstract/document/8038215
[13] K. Clawson, L.S. Delicato, and C. Bowerman, "Human Centric Facial Expression Recognition," 2018. Available: https://sure.sunderland.ac.uk/id/eprint/9584/ | 6 | 1 | The model described is based on a simplified VGG-16 architecture with a lower number of layers and parameters compared to the original model.
Given that this architecture has several layers and parameters, I estimate around 6 hours of training time based on the size of the datasets involved and the computational cost of training over 40 epochs. The total dataset size consists of thousands of images (Color FERET: 11,338, Gender dataset: 23,000 images, Face shape: 500 images, Emotion dataset: 30,000 images). The average batch size used in training (32 or 128) supports a reasonable training time on a modern single GPU setup, especially given that no distributed training is mentioned, and the focus is on multimodal tasks which might entail some overhead but not excessively so. Therefore, it is reasonable to conclude that the model can be trained on a single GPU within this timeframe. | yes | Yes | CV | IdentiFace : A VGG Based Multimodal Facial Biometric System | 2024-01-02 0:00:00 | https://github.com/MahmoudRabea13/IdentiFace | 1 | https://www.kaggle.com/datasets/msambare/fer2013 | 30s * 40 epoch = 20 min | https://drive.google.com/file/d/1NLLV2fLLpzBI3IQlCa6xac_SSr6q7ofN/view?usp=sharing | Yes | -- The training code is included in /Notebooks/Emotion/FER Dataset/Model.ipynb and inside the repo. I have linked the repo with proper fixes. Just run the colab file i have linked here. |
ETTh1 (336) Multivariate | AMD | [] | Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting | 2024-06-06T00:00:00 | https://arxiv.org/abs/2406.03751v1 | [
"https://github.com/troubadour000/amd"
] | {'MSE': '0.418', 'MAE': '0.427'} | [
"MSE",
"MAE"
] | Given the following paper and codebase:
Paper: Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting
Codebase: https://github.com/troubadour000/amd
Improve the AMD model on the ETTh1 (336) Multivariate dataset. The result
should improve on the following metrics: {'MSE': '0.418', 'MAE': '0.427'}. You must use only the codebase provided.
Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting
Yifan Hu1,*, Peiyuan Liu3,*, Peng Zhu1, Dawei Cheng1,B, Tao Dai2
1 Tongji University, 2 Shenzhen University, 3 Tsinghua Shenzhen International Graduate School
{pengzhu, dcheng}@tongji.edu.cn, {huyf0122, peiyuanliu.edu, daitao.edu}@gmail.com

Abstract
Transformer-based and MLP-based methods have emerged as leading approaches in time series forecasting (TSF). While Transformer-based methods excel in capturing long-range dependencies, they suffer from high computational complexities and tend to overfit. Conversely, MLP-based methods offer computational efficiency and adeptness in modeling temporal dynamics, but they struggle with capturing complex temporal patterns effectively. To address these challenges, we propose a novel MLP-based Adaptive Multi-Scale Decomposition (AMD) framework for TSF. Our framework decomposes time series into distinct temporal patterns at multiple scales, leveraging the Multi-Scale Decomposable Mixing (MDM) block to dissect and aggregate these patterns in a residual manner. Complemented by the Dual Dependency Interaction (DDI) block and the Adaptive Multi-predictor Synthesis (AMS) block, our approach effectively models both temporal and channel dependencies and utilizes autocorrelation to refine multi-scale data integration. Comprehensive experiments demonstrate that our AMD framework not only overcomes the limitations of existing methods but also consistently achieves state-of-the-art performance in both long-term and short-term forecasting tasks across various datasets, showcasing superior efficiency. Code is available at https://github.com/TROUBADOUR000/AMD.

1 Introduction
Time series forecasting (TSF) aims to use historical data to predict future values across various domains, such as finance [1,2], energy [3], traffic management [4], and weather forecasting [5].
Recently, deep learning has made substantial and reliable advancements in TSF, with the most state-of-the-art performances achieved by Transformer-based methods [6-11] and MLP-based methods [12-16]. Transformer-based methods excel at modeling long-range dependencies due to their self-attention mechanisms [17]. Although effective, they come with computational complexity that scales quadratically with the length of the sequence. Additionally, self-attention can diminish the temporal relationships when extracting semantic correlations between pairs in long sequences [13], leading to an overemphasis on mutation points and resulting in overfitting (see Fig. 1a). In contrast, MLP-based methods boast significantly lower computational complexity compared to Transformer-based methods. Moreover, MLP-based methods can chronologically model the temporal dynamics in consecutive points, which is crucial for time series analysis [13,12,16]. However, the simplicity of linear mappings in existing MLP-based methods presents an information bottleneck [18,19], hindering their ability to capture diverse temporal patterns and limiting their predictive accuracy.

*Equal Contribution. Preprint. Under review. arXiv:2406.03751v1 [cs.LG] 6 Jun 2024

Figure 1: (a) Illustration of multi-scale temporal patterns in time series and the impact of selector weights. Transformer-based methods often overfit by overemphasizing mutation points, weakening temporal relationships. Efficiently modeling and integrating distinct temporal patterns at various scales is crucial for accurate predictions. (b) Memory usage (MB), training time (ms/iter), and MSE comparisons on the Weather dataset. The input and predicted lengths are set to 512 and 96, respectively. Our proposed AMD achieves a low MSE of 0.145 with 17 ms training time and 1349 MB memory usage, demonstrating high efficiency and effectiveness.
It is worth noting that time series exhibit distinctly different temporal patterns at various sampling scales [15]. Moreover, the weight of these time scales in predicting future variations is not uniform, as future variations are jointly determined by the entanglement of multiple scales (see Fig. 1a). For example, weather data sampled hourly reflects fine-grained, sudden changes, while monthly sampled data captures coarse-grained climate variations. Similarly, while short-term hourly data might highlight immediate weather shifts, long-term monthly data provides a broader view of climatic trends. Therefore, efficiently modeling the multi-scale changes in time series and adeptly integrating information across different scales remains a critical challenge. Motivated by the above observations, we decompose the time series at multiple scales to precisely discern the intertwined temporal patterns within the complex series, rather than merely breaking it down into seasonal and trend components [7,13]. Subsequently, we model the correlations across different scales in both time and channel dimensions. To account for the varying impacts of different temporal patterns on the future, we employ an autocorrelation approach to model their contributions and adaptively integrate these multi-scale temporal patterns based on their respective influences. Technically, we propose an MLP-based Adaptive Multi-Scale Decomposition (AMD) framework to better disentangle and model the diverse temporal patterns within time series. Concretely, AMD begins by employing the Multi-Scale Decomposable Mixing (MDM) block, which first decomposes the original time series into multiple temporal patterns through average downsampling and then aggregates these scales to provide aggregate information in a residual way. Subsequently, the Dual Dependency Interaction (DDI) block simultaneously models both temporal and channel dependencies within the aggregated information.
Finally, the Adaptive Multi-predictor Synthesis (AMS) block uses the aggregated information to generate specific weights and then employs these weights to adaptively integrate the multiple temporal patterns produced by the DDI. Through comprehensive experimentation, our AMD consistently achieves state-of-the-art performance in both long-term and short-term forecasting tasks, with superior efficiency (see Fig. 1b) across multiple datasets. Our contributions are summarized as follows:
(i) We decompose time series across multiple scales to precisely identify intertwined temporal patterns within complex sequences and adaptively aggregate predictions of temporal patterns at different scales, addressing their varied impacts on future forecasts. We also demonstrate the feasibility through theoretical analysis.
(ii) We propose a simple but effective MLP-based Adaptive Multi-Scale Decomposition (AMD) framework that initially decomposes time series into diverse temporal patterns, models both temporal and channel dependencies of these patterns, and finally synthesizes the outputs using a weighted aggregation approach to focus on the changes of dominant temporal patterns, thereby enhancing prediction accuracy across scales.
(iii) Comprehensive experiments demonstrate that our AMD consistently delivers state-of-the-art performance in both long-term and short-term forecasting across various datasets, with superior efficiency.

2 Related Works
2.1 Time Series Forecasting
Time series forecasting (TSF) involves predicting future values based on past observations in sequential order. In recent years, deep learning methods have gained prominence for their ability to automatically extract intricate patterns and dependencies from data, such as CNN [20-22], RNN [23,24], GNN [25-27], Transformer [7,6,10,28], and MLP [12,13,29]. Transformer-based models, renowned for superior performance in handling long and intricate sequential data, have gained popularity in TSF.
Autoformer [7] introduces a decomposition architecture and an auto-correlation mechanism. PatchTST [6] divides the input time series into patches to enhance the locality of temporal data. In addition to capturing cross-time dependencies, Crossformer [9] also mines cross-variable dependencies to leverage information from associated series in other dimensions. However, Transformer-based models always suffer from efficiency problems due to high computational complexity. In contrast, MLP-based models are more efficient and have a smaller memory footprint. For example, DLinear [13] utilizes series decomposition as a pre-processing step before linear regression and outperforms all of the previous Transformer-based models. FITS [30] proposes a new linear mapping for the transformation of complex inputs, with only 10k parameters. However, due to their inherent simplicity and information bottleneck, MLP-based models struggle to effectively capture diverse temporal patterns [19]. In this work, we decompose time series across multiple scales and model the information at each scale with separate linear models, effectively addressing the representational limitations of MLP-based methods.

2.2 Series Decomposition in TSF
Recently, with high sampling rates leading to high-frequency data (such as daily, hourly, or minutely data), real-world time series data often contains multiple underlying temporal patterns. To competently harness different temporal patterns at various scales, several series decomposition designs have been proposed [31-33,15]. Seasonal Extraction in ARIMA Time Series [34] offers theoretical assurances but is limited to a monthly scale. Seasonal-Trend decomposition [35] is based on moving averages and aims to disentangle the seasonal and trend components. FEDformer [8] incorporates frequency information to enhance the series decomposition block.
TimesNet [21] decomposes time series data into multiple periods by the Fast Fourier Transform, thus obtaining several dominant frequencies. SCINet [36] utilizes a hierarchical downsampling tree to iteratively extract and exchange multi-scale information. Following the aforementioned designs, this paper proposes a new multi-scale decomposition method that decomposes the time series at multiple scales to precisely discern the intertwined temporal patterns within the complex series.

3 Preliminary: Linear Models with Multi-Scale Information for TSF
We consider the following problem: given a collection of time series samples with historical observations $X \in \mathbb{R}^{C \times L}$, where $C$ denotes the number of variables and $L$ represents the length of the look-back sequence, the objective is to predict $Y \in \mathbb{R}^{M \times T}$, where $M$ is the number of target variables to be predicted ($M \le C$) and $T$ is the length of the future time steps to be predicted. A linear model learns parameters $A \in \mathbb{R}^{L \times T}$ and $b \in \mathbb{R}^{T}$ to predict the values of the next $T$ time steps as:
$$\hat{Y} = XA \oplus b \in \mathbb{R}^{C \times T} \quad (1)$$
where $\oplus$ denotes column-wise addition. The corresponding $M$ rows in $\hat{Y}$ can be used to predict $Y$.

After that, we introduce the multi-scale information. For time series forecasting, the most influential real-world applications typically exhibit either smoothness or periodicity. Without these characteristics, predictability tends to be low, rendering predictive models unreliable. If the time series only exhibits periodicity, linear models can easily model it [37]. We define the original sequence as $f(x) = f_0(x) = [x_1, x_2, \ldots, x_L]$ and assume that $f(x)$ possesses smoothness. After $k$ downsampling operations with a downsampling rate of $d$, we obtain $n$ sequences $f_i(x), \forall i = 1, 2, \ldots, n$, where $f_i(x) = \frac{1}{d}\sum_{j=xd-d+1}^{xd} f_{i-1}(j)$ and $x = 1, 2, \ldots, \lfloor L/d^i \rfloor$. It is noteworthy that $f_i(x) \in \mathbb{R}^{C \times \lfloor L/d^i \rfloor}, \forall i = 0, 1, \ldots, n$. Then, the sequence $f_i(x), \forall i = 0, 1, \ldots, n$ is transformed into $g_i(x)$ through linear mapping and residual calculation.
Specifically, $g_n(x) = f_n(x)$; then, through top-down recursion for $i = n-1, \ldots, 0$, the operation
$$g_i(x) = f_i(x) + g_{i+1}(x)W_i \quad (2)$$
is performed recursively, where $W_i \in \mathbb{R}^{\lfloor L/d^{i+1} \rfloor \times \lfloor L/d^i \rfloor}$. In this case, we derive Theorem 1 (the proof is available in Appendix A):

Theorem 1. Let $g(x)$ be the multi-scale mixing representation, where $g(x) \in \mathbb{R}^{1 \times L}$ (for simplicity, we consider univariate sequences), and let the original sequence $f(x)$ be Lipschitz smooth with constant $K$ (i.e. $\frac{|f(a)-f(b)|}{|a-b|} \le K$). Then there exists a linear model such that $|y_t - \hat{y}_t|$ is bounded, $\forall t = 1, \ldots, T$.

This derivation demonstrates that linear models are well-suited to utilize multi-scale temporal patterns in history. For nonperiodic patterns, provided they exhibit smoothness, which is often observed in practice, the error remains bounded.

4 Method
As shown in Fig. 2, our proposed AMD mainly consists of three components: the Multi-Scale Decomposable Mixing (MDM) Block, the Dual Dependency Interaction (DDI) Block, and the Adaptive Multi-predictor Synthesis (AMS) Block. Specifically, for length-$L$ history inputs with $C$ recorded channels, MDM initially decomposes $X \in \mathbb{R}^{C \times L}$ into multi-scale temporal patterns through average downsampling and subsequently integrates these patterns residually to yield the aggregate information $U \in \mathbb{R}^{C \times L}$. Next, $U$ is transformed into patches $\hat{U} \in \mathbb{R}^{C \times N \times P}$, where the length of each patch is $P$ and the number of patches is $N$. The DDI block then takes $\hat{U}$ as input, concurrently modeling both temporal and channel dependencies, and outputs $V \in \mathbb{R}^{C \times L}$. Following this, AMS decomposes each channel $u \in \mathbb{R}^{1 \times L}$ of $U$ from MDM, deriving scores for different temporal patterns and calculating the corresponding weights $S \in \mathbb{R}^{T \times m}$, where $T$ is the length of the prediction series and $m$ is the number of predictors. It also uses the split outputs $v \in \mathbb{R}^{1 \times L}$ from each channel of $V$ as inputs for the predictors. These weights, along with the predictor outputs, are then merged through a mixture-of-experts structure to generate the final prediction results.
The details of each essential module are explained in the following subsections, and the pseudocode is shown in Appendix B.

4.1 Multi-Scale Decomposable Mixing Block
Time series exhibit both coarse-grained and fine-grained temporal patterns. These two types of information influence each other by dynamically interacting: coarse-grained patterns reflect a macroscopic context that shapes fine-grained patterns, while fine-grained patterns offer microscopic feedback that refines and adjusts the coarse-grained patterns [38]. Together, these complementary scales of information provide a comprehensive view of the time series. Therefore, we first decompose the time series into individual temporal patterns and then mix them to enhance the time series data for a more nuanced analysis and interpretation. Specifically, the raw input information $X$ already contains fine-grained details, while coarse-grained information is extracted through average pooling. The first-level temporal pattern $\tau_1$ is the input of one channel $x$. Next, distinct coarse-grained temporal patterns $\tau_i \in \mathbb{R}^{1 \times \lfloor L/d^{i-1} \rfloor}$ ($\forall i \in 2, \ldots, h$) are extracted by applying average pooling over the previous layer of temporal patterns, where $h$ denotes the number of downsampling operations and $d$ denotes the downsampling rate. The decomposition of the $i$-th layer of temporal patterns can be represented as:
$$\tau_i = \mathrm{AvgPooling}(\tau_{i-1}) \quad (3)$$
Then, the distinct temporal patterns are mixed from the coarse-grained $\tau_h$ to the fine-grained $\tau_1$ through a feedforward residual network. The mixing of the $i$-th layer of temporal patterns can be represented by the following formula:
$$\tau_i = \tau_i + \mathrm{MLP}(\tau_{i+1}) \quad (4)$$
Finally, after completing the mixing of temporal patterns across $h$ scales, we obtain the mixed-scale information $\tau_1$, with the output of one channel being $u = \tau_1 \in \mathbb{R}^{1 \times L}$.
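The MDM decompose-and-mix pass (Eqs. 3-4) can be sketched in plain Python. This is a data-flow illustration only: the learned MLP that maps each coarse scale up to the finer resolution is stood in for by simple repetition.

```python
# Hedged sketch of MDM: decompose by repeated average pooling (Eq. 3), then
# mix coarse scales back into fine ones in a top-down residual pass (Eq. 4).
def avg_pool(x, d):
    return [sum(x[i:i + d]) / d for i in range(0, len(x) - d + 1, d)]

def mdm(x, depth=2, d=2):
    scales = [x]
    for _ in range(depth):
        scales.append(avg_pool(scales[-1], d))
    # top-down residual mixing: tau_i = tau_i + "MLP"(tau_{i+1})
    for i in range(depth - 1, -1, -1):
        up = [v for v in scales[i + 1] for _ in range(d)]  # stand-in for the MLP
        scales[i] = [a + b for a, b in zip(scales[i], up)]
    return scales[0]  # aggregated fine-scale information u

series = [1.0, 3.0, 2.0, 4.0, 6.0, 8.0, 5.0, 7.0]
print(mdm(series))  # [5.5, 7.5, 7.5, 9.5, 19.5, 21.5, 17.5, 19.5]
```

Because the mixing runs from the coarsest scale downward, every finer scale receives an accumulated residual summary of all scales above it.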
& Score Selector Weights Number of PredictorsLselector ...Predictor3Predictor2Predictor1 Weighted SumRevIN Lpred RevIN Multi-Scale Decomposable Mixing Dual Dependency Interaction Adaptive Multi-predictor Synthesis ∈× ∈×L = Lpred + λ1Lselector + λ2||Θ||2 ∈×...C× Pred Len ∈× ∈×∈ ×∈ ×MLPDown Sampling MLP MLP ... ...C× TP-Selector TP-Projection∈×× ∈××Figure 2: AMD is an MLP-based model, consisting of the Multi-Scale Decomposable Mixing (MDM) Block, the Dual Dependency Interaction (DDI) Block, and the Adaptive Multi-predictor Synthesis (AMS) Block. 4.2 Dual Dependency Interaction Block Modeling each scale of information separately could ignore the relationships between different scales. In reality, different scales interact with each other, for example in a stock price series where monthly economic trends influence daily fluctuations, and these monthly trends are influenced by annual market cycles. To model both temporal and channel dependencies between different scales, we propose the DDI block, which first stacks the aggregated information ufrom various channels of the MDM into the matrix U∈RC×Land then performs patch operations to transform Uinto ˆU∈RC×N×P. For each patch, ˆVt+P trepresents the embedding output of the residual network, and ˆUt+P trepresents a patch of aggregated information from MDM. We adopt a residual block to aggregate historical information to obtain temporal dependency Zt+P t. Next, we perform the transpose operation to fuse cross-channel information through another residual block. Finally, we perform the unpatch operation and split the output information into individual channels to obtain v∈R1×L. The interaction of the patch ˆUt+P tcan be represented by the following formula: Zt+P t=ˆUt+P t+MLP(ˆVt t−P) (5) ˆVt+P t=Zt+P t+β·MLP((Zt+P t)T)T(6) where ATis the transpose of matrix A. Finally, by performing the unpatch operation, we obtain the output V∈RC×L. In DDI, dual dependencies are captured under the mixed-scale information U. 
Temporal dependencies model the interactions across different periods, while cross-channel dependencies model the relationships between different variables, enhancing the representation of the time series. However, through experiments, we find that cross-channel dependencies are not always effective; instead, they often introduce unwanted interference. Therefore, we introduce the scaling rate β to suppress this noise.

4.3 Adaptive Multi-predictor Synthesis Block

We utilize Mixture-of-Experts (MoE) in AMS for its adaptive properties, allowing us to design different predictors for different temporal patterns to improve both accuracy and generalization capability. The AMS is partitioned into two components: the temporal pattern selector (TP-Selector) and the temporal pattern projection (TP-Projection). The TP-Selector decomposes the different temporal patterns, scores them, and generates the selector weights S. Unlike the downsampling in MDM, which decomposes individual scales to enhance temporal information, the TP-Selector adaptively untangles highly correlated, intertwined mixed scales through feedforward processing. Meanwhile, the TP-Projection synthesizes the multiple predictions and adaptively aggregates the outputs based on the selector weights.

The TP-Selector takes a single-channel input u from the MDM, decomposes it through feedforward layers, and then applies a noisy gating design [39]:

S = Softmax(TopK(Softmax(Q(u)), k))    (7)
Q(u) = Decompose(u) + ψ · Softplus(Decompose(u) · W_noise)    (8)

where k is the number of dominant temporal patterns, ψ ∼ N(0, 1) is standard Gaussian noise, and W_noise ∈ R^{m×m} is a learnable weight matrix controlling the noise magnitude. The selector weights S for each channel sum to 1.

The TP-Projection takes the embedding v from the output V of the DDI as input. It uses a two-layer feedforward network as each predictor. The predictions are then multiplied by the selector weights S and summed to yield the prediction result ŷ for one channel.
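The noisy gating of Eqs. (7)-(8) can be sketched as follows. This is a simplified numpy sketch: `scores` plays the role of Decompose(u), and the learned matrix W_noise is reduced to a single scalar noise scale, an illustrative assumption rather than the paper's parameterization.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def noisy_topk_weights(scores, k, noise_scale=None, rng=None):
    """Score temporal patterns, keep the k largest, renormalize (Eqs. 7-8)."""
    q = np.asarray(scores, dtype=float)
    if noise_scale is not None:
        rng = rng or np.random.default_rng(0)
        # Softplus keeps the noise standard deviation positive, as in Eq. (8)
        q = q + rng.standard_normal(len(q)) * np.log1p(np.exp(noise_scale))
    p = softmax(q)
    kth = np.sort(p)[-k]                      # k-th largest gate value
    masked = np.where(p >= kth, p, -np.inf)   # drop non-dominant patterns
    return softmax(masked)                    # weights sum to 1 per channel
```

The returned weights have exactly k nonzero entries (barring ties) and sum to 1, matching the statement that the selector weights for each channel sum to 1.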
The final result Ŷ is composed of the outputs ŷ from each channel:

ŷ = Σ_{j=0}^{m} S_j · Predictor_j(v)    (9)

Compared to sparse MoE [40], we adopt dense MoE in the TP-Projection for three reasons. Firstly, every temporal pattern contributes to the prediction result. Secondly, we locate the k-th largest value directly, rather than sorting to obtain the k largest values, by employing a divide-and-conquer selection method, thus reducing the time complexity from O(n log(n)) to O(n). Thirdly, this approach helps mitigate issues such as load unbalancing and embedding omission that are prevalent in sparse MoE architectures. Our TopK method can be formalized as:

TopK(u, k) = α · log(u + 1),   if u < v_k
             α · (exp(u) − 1), if u ≥ v_k    (10)

where v_k is the k-th largest value in u and α is a constant used to adjust the selector weights. The scaling operation within our TopK requires the input values to be restricted to the interval [0, 1]; consequently, we apply an additional Softmax operation, as in Eq. (7).

4.4 Loss Function

The loss function of AMD consists of two components. (1) For the predictors, the Mean Squared Error (MSE) loss, L_pred = Σ_{i=0}^{T} ||y_i − ŷ_i||²₂, measures the deviation between predicted values and the ground truth. (2) For the gating network, we apply the coefficient-of-variation loss, L_selector = Var(S) / (Mean(S)² + ε), where ε is a small positive constant that prevents numerical instability; it optimizes the gating mechanism by promoting a balanced assignment of experts to inputs, thereby enhancing the overall performance of the MoE model. The total loss function is defined as:

L = L_pred + λ1 · L_selector + λ2 · ||Θ||²    (11)

where ||Θ||² is the L2-norm of all model parameters, and λ1 and λ2 are hyper-parameters.

5 Experiments

5.1 Main Results

We thoroughly evaluate the proposed model on various time series forecasting tasks and confirm the generality of the proposed framework.
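The TopK scaling of Eq. (10) and the combined objective of Eq. (11) above can be sketched as follows (a numpy sketch; the α, λ1, and λ2 values are illustrative defaults, not the paper's tuned settings):

```python
import numpy as np

def topk_scale(p, k, alpha=1.0):
    """Eq. (10): damp gate values below the k-th largest with log(u + 1) and
    amplify those at or above it with exp(u) - 1; inputs lie in [0, 1]
    because a Softmax is applied first (Eq. 7)."""
    p = np.asarray(p, dtype=float)
    vk = np.sort(p)[-k]   # k-th largest (np.partition would give this in O(n))
    return np.where(p < vk, alpha * np.log(p + 1), alpha * (np.exp(p) - 1))

def cv_selector_loss(S, eps=1e-8):
    """Coefficient-of-variation load-balancing loss: Var(S) / (Mean(S)^2 + eps)."""
    S = np.asarray(S, dtype=float)
    return S.var() / (S.mean() ** 2 + eps)

def total_loss(y, y_hat, S, params, lam1=0.01, lam2=1e-4):
    """Eq. (11): L = L_pred + lam1 * L_selector + lam2 * ||Theta||^2."""
    l_pred = float(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))
    l_reg = sum(float(np.sum(np.asarray(p) ** 2)) for p in params)
    return l_pred + lam1 * cv_selector_loss(S) + lam2 * l_reg
```

For u in [0, 1] we have log(u + 1) ≤ u ≤ exp(u) − 1, so the piecewise rule widens the gap between dominant and non-dominant patterns, while a perfectly balanced S drives the selector loss to zero.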
Additionally, we offer insights into the effectiveness of integrating the MoE components and into the low computational complexity.

Datasets. We conduct experiments on real-world datasets, including Weather, ETT (ETTh1, ETTh2, ETTm1, ETTm2), ECL, Exchange, Traffic, and Solar-Energy for long-term forecasting, and PEMS (PEMS03, PEMS04, PEMS07, PEMS08) for short-term forecasting. A detailed description of each dataset is provided in Appendix C.1.

Baselines. We carefully select representative models as baselines in the field of time series forecasting, including (1) MLP-based models: TiDE [12], MTS-Mixers [16], and DLinear [13]; (2) Transformer-based models: PatchTST [6], Crossformer [9], and FEDformer [8]; (3) CNN-based models: TimesNet [21] and MICN [20]. See Appendix C.2 for detailed descriptions of the baselines.

Experimental Settings. To ensure fair comparisons in long-term forecasting, we re-run all the baselines with different input lengths L and choose the best results, avoiding underestimation of the baselines and providing a fairer comparison. For short-term forecasting, we use an input length of 96. To trade off memory footprint against accuracy, the number of predictors n is set to 8. We select two common metrics in time series forecasting: Mean Absolute Error (MAE) and Mean Squared Error (MSE). All experiments are conducted using PyTorch on an NVIDIA V100 32GB GPU and are repeated five times for consistency. See Appendix C.4 for detailed parameter settings.

Results. Comprehensive forecasting results are shown in Tab. 1 and Tab. 2, which present long-term and short-term forecasting results, respectively. The best results are highlighted in red and the second-best are underlined. The lower the MSE/MAE, the more accurate the forecast. AMD achieves the best performance in 50 cases and the second best in 27 of the overall 80 cases. Compared with the other baselines, AMD performs well on both high-dimensional and low-dimensional datasets.
It is worth noting that PatchTST does not perform well on the PEMS datasets, possibly because the patching design neglects highly fluctuating temporal patterns; in contrast, AMD leverages information from multi-scale temporal patterns. Furthermore, the performance of Crossformer is unsatisfactory, possibly because exploring cross-variable dependencies introduces unnecessary noise, whereas AMD skillfully reveals the intricate dependencies among time steps across variables. Additionally, DLinear and MTS-Mixers perform poorly on high-dimensional datasets, whereas AMD handles them well. Moreover, as shown in Fig. 3a, in most cases the forecasting performance benefits from an increased input length, a trend also observed on the majority of the other datasets.

Table 1: Long-term forecasting task. All results are averaged over 4 prediction lengths T ∈ {96, 192, 336, 720}. For a fairer comparison, the input sequence length L for every baseline is searched among {96, 192, 336, 512, 672, 720} and the best result is reported. See Appendix D.1 for the full results.
Each cell reports MSE / MAE.

| Dataset | AMD (Ours) | PatchTST [6] | Crossformer [9] | FEDformer [8] | TimesNet [21] | MICN [20] | DLinear [13] | MTS-Mixers [16] |
|---|---|---|---|---|---|---|---|---|
| Weather | 0.223 / 0.262 | 0.226 / 0.264 | 0.230 / 0.290 | 0.310 / 0.357 | 0.259 / 0.287 | 0.242 / 0.298 | 0.240 / 0.300 | 0.235 / 0.272 |
| ETTh1 | 0.407 / 0.424 | 0.413 / 0.431 | 0.441 / 0.465 | 0.428 / 0.454 | 0.458 / 0.450 | 0.433 / 0.462 | 0.423 / 0.437 | 0.430 / 0.436 |
| ETTh2 | 0.350 / 0.392 | 0.330 / 0.379 | 0.835 / 0.676 | 0.388 / 0.434 | 0.414 / 0.427 | 0.385 / 0.430 | 0.431 / 0.447 | 0.386 / 0.413 |
| ETTm1 | 0.347 / 0.375 | 0.351 / 0.381 | 0.431 / 0.443 | 0.382 / 0.422 | 0.400 / 0.406 | 0.383 / 0.406 | 0.357 / 0.379 | 0.370 / 0.395 |
| ETTm2 | 0.253 / 0.315 | 0.255 / 0.315 | 0.632 / 0.578 | 0.292 / 0.343 | 0.291 / 0.333 | 0.277 / 0.336 | 0.267 / 0.332 | 0.277 / 0.325 |
| ECL | 0.159 / 0.254 | 0.159 / 0.253 | 0.293 / 0.351 | 0.207 / 0.321 | 0.192 / 0.295 | 0.182 / 0.297 | 0.177 / 0.274 | 0.173 / 0.272 |
| Exchange | 0.328 / 0.387 | 0.387 / 0.419 | 0.701 / 0.633 | 0.478 / 0.478 | 0.416 / 0.443 | 0.315 / 0.404 | 0.297 / 0.378 | 0.373 / 0.407 |
| Traffic | 0.393 / 0.272 | 0.391 / 0.264 | 0.535 / 0.300 | 0.604 / 0.372 | 0.620 / 0.336 | 0.535 / 0.312 | 0.434 / 0.295 | 0.494 / 0.354 |
| Solar-Energy | 0.192 / 0.240 | 0.256 / 0.298 | 0.204 / 0.248 | 0.243 / 0.350 | 0.244 / 0.334 | 0.213 / 0.266 | 0.329 / 0.400 | 0.315 / 0.363 |

Table 2: Short-term forecasting task. The input sequence length L is 96, and the prediction length T is 12 for all baselines. Each cell reports MSE / MAE.

| Dataset | AMD (Ours) | PatchTST [6] | Crossformer [9] | FEDformer [8] | TimesNet [21] | TiDE [12] | DLinear [13] | MTS-Mixers [16] |
|---|---|---|---|---|---|---|---|---|
| PEMS03 | 0.084 / 0.198 | 0.099 / 0.216 | 0.090 / 0.203 | 0.126 / 0.251 | 0.085 / 0.192 | 0.178 / 0.305 | 0.122 / 0.243 | 0.117 / 0.232 |
| PEMS04 | 0.083 / 0.198 | 0.105 / 0.224 | 0.098 / 0.218 | 0.138 / 0.262 | 0.087 / 0.195 | 0.219 / 0.340 | 0.148 / 0.272 | 0.129 / 0.267 |
| PEMS07 | 0.074 / 0.180 | 0.095 / 0.150 | 0.094 / 0.200 | 0.109 / 0.225 | 0.082 / 0.181 | 0.173 / 0.304 | 0.115 / 0.242 | 0.134 / 0.278 |
| PEMS08 | 0.093 / 0.206 | 0.168 / 0.232 | 0.165 / 0.214 | 0.173 / 0.273 | 0.112 / 0.212 | 0.227 / 0.343 | 0.154 / 0.276 | 0.186 / 0.286 |

5.2 Ablation Study and Analysis

Observation 1: AMS learns from different temporal patterns. As shown in Tab.
3, we demonstrate that the performance improvement of AMD is due not merely to the enlarged model size but to the integration of temporal-pattern information. To this end, we devised the following experiments: (1) The permutation-invariant nature of the self-attention mechanism leads to temporal information loss, whereas an MLP-based model inherently maintains the sequential order of the data. To demonstrate the utilization of sequential information, we replaced the temporal pattern embedding with random orders (RandomOrder). (2) To demonstrate the effectiveness of AMS aggregation, we replaced the TP-Selector with average weighting (AverageWeight), treating different temporal patterns equally.

Figure 3: (a) The performance of AMD on the Weather dataset improves as the input length increases, indicating that our model can effectively extract useful information from a longer history and capture long-term multi-scale dependencies. (b) Cross-channel dependencies may lead to deviations from the original distribution.

Consistent with our observation, RandomOrder and AverageWeight produce higher losses than AMD in most cases. Compared to self-attention, whose permutation invariance discards sequential information, AMS makes better use of temporal relations. Compared to simple averaging, AMS adaptively assigns corresponding weights to different temporal patterns, resulting in more accurate predictions. On top of that, AMD benefits from improved interpretability. We plot the selector weights in Fig. 4a. Before and after time step T, the temporal variations are dominated by TP16 and TP5, respectively. Before T, the predicted data resembles the trend of TP16, both exhibiting a downward fluctuation; after T, the predicted data resembles TP5, which suddenly shows a significant increase. AMD recognizes this change in the dominant pattern over time and adaptively assigns the dominant patterns higher weights.
Observation 2: Cross-channel dependencies are not always effective. To show that cross-channel dependencies often introduce unwanted noise, we conducted another ablation study on the Weather dataset, shown in Tab. 3. We introduce cross-channel dependencies by adjusting the scaling rate: specifically, we set the parameter β in Eq. (6) to 0.5 and 1.0, respectively. In addition, we conduct experiments with the DDI block removed (w/o DDI). The results show that introducing cross-channel dependencies does not enhance prediction accuracy, and this is consistent across other datasets as well. To explain this phenomenon, we visualize the learned dependencies in Fig. 3b. Compared to the temporal dependency, especially when the target variable is not correlated with the other covariates, cross-channel dependencies tend to smooth out the variability in the target variable, causing its distribution to deviate from what would be expected based solely on its own past values.

Table 3: Component ablation of AMD on Weather and ECL.
Each cell reports MSE / MAE.

| Dataset | T | AMD (Ours) | RandomOrder | AverageWeight | w/o DDI | β = 0.5 | β = 1.0 | w/o L_selector |
|---|---|---|---|---|---|---|---|---|
| Weather | 96 | 0.145 / 0.198 | 0.152 / 0.209 | 0.149 / 0.205 | 0.154 / 0.208 | 0.146 / 0.198 | 0.147 / 0.200 | 0.160 / 0.211 |
| Weather | 192 | 0.190 / 0.240 | 0.194 / 0.245 | 0.194 / 0.246 | 0.197 / 0.248 | 0.194 / 0.245 | 0.195 / 0.246 | 0.209 / 0.258 |
| Weather | 336 | 0.242 / 0.280 | 0.245 / 0.286 | 0.249 / 0.288 | 0.249 / 0.289 | 0.249 / 0.287 | 0.248 / 0.286 | 0.278 / 0.309 |
| Weather | 720 | 0.315 / 0.332 | 0.321 / 0.338 | 0.325 / 0.340 | 0.323 / 0.340 | 0.318 / 0.335 | 0.320 / 0.336 | 0.364 / 0.366 |
| ECL | 96 | 0.129 / 0.225 | 0.133 / 0.230 | 0.135 / 0.234 | 0.132 / 0.231 | 0.140 / 0.242 | 0.147 / 0.248 | 0.139 / 0.238 |
| ECL | 192 | 0.148 / 0.242 | 0.155 / 0.248 | 0.155 / 0.251 | 0.151 / 0.247 | 0.169 / 0.268 | 0.175 / 0.273 | 0.156 / 0.456 |
| ECL | 336 | 0.164 / 0.258 | 0.171 / 0.264 | 0.169 / 0.263 | 0.167 / 0.266 | 0.174 / 0.274 | 0.183 / 0.285 | 0.177 / 0.271 |
| ECL | 720 | 0.195 / 0.289 | 0.202 / 0.294 | 0.202 / 0.295 | 0.200 / 0.294 | 0.207 / 0.298 | 0.215 / 0.308 | 0.208 / 0.299 |

Figure 4: (a) AMD guides the prediction by assigning greater weight to the dominant temporal pattern. (b) Memory usage (MB), training time (ms/iter), and MSE comparisons on the ETTh1 dataset. The input and prediction lengths are set to 512 and 96, respectively; n denotes the number of predictors.

Observation 3: The TP-Selector helps not only generalization but also efficiency. We thoroughly compare the training time and memory usage of the baselines on the ETTh1 dataset, using the official model configurations and the same batch size. The results, shown in Fig. 4b, indicate that AMD achieves superior efficiency with a relatively small number of parameters.

Observation 4: The balance of the TP-Selector is essential. We conduct experiments on the scaling rate of the load-balancing loss, denoted by λ1 in Eq. (11). As shown in Tab. 3, utilizing L_selector yields significantly improved performance, exceeding 11.2% in terms of MSE compared to λ1 = 0.0. This underscores the crucial role of the load-balancing loss. Furthermore, the selector weights in Fig.
4a do not tend to favor specific temporal patterns, addressing the load-imbalance issue of sparse MoE structures.

Observation 5: AMD improves other TSF methods as a plugin. Finally, we explore whether our proposed MoE-based method can improve the performance of other TSF methods. We selected DLinear [13] and MTS-Mixers [16] as baselines. After integrating the MDM and AMS modules, the predictive capability of both models is enhanced, as shown in Tab. 4, while maintaining the same computational resource requirements.

Table 4: Comparative impact of MDM & AMS on different baselines. Imp. denotes the average percentage improvement of MDM & AMS over the original method. Each cell reports MSE / MAE.

| Dataset | T | DLinear [13] | +MDM & AMS | MTS-Mixers [16] | +MDM & AMS |
|---|---|---|---|---|---|
| Weather | 96 | 0.152 / 0.237 | 0.146 / 0.212 | 0.156 / 0.206 | 0.155 / 0.203 |
| Weather | 192 | 0.220 / 0.282 | 0.194 / 0.261 | 0.199 / 0.248 | 0.202 / 0.251 |
| Weather | 336 | 0.265 / 0.319 | 0.245 / 0.305 | 0.249 / 0.291 | 0.244 / 0.286 |
| Weather | 720 | 0.323 / 0.362 | 0.313 / 0.356 | 0.336 / 0.343 | 0.326 / 0.337 |
| Weather | Imp. | — | 6.46% / 5.50% | — | 1.38% / 1.01% |
| ECL | 96 | 0.153 / 0.237 | 0.150 / 0.244 | 0.141 / 0.243 | 0.137 / 0.239 |
| ECL | 192 | 0.152 / 0.249 | 0.159 / 0.256 | 0.163 / 0.261 | 0.160 / 0.258 |
| ECL | 336 | 0.169 / 0.267 | 0.167 / 0.265 | 0.176 / 0.277 | 0.170 / 0.271 |
| ECL | 720 | 0.233 / 0.344 | 0.221 / 0.313 | 0.212 / 0.308 | 0.203 / 0.303 |
| ECL | Imp. | — | 1.41% / 1.73% | — | 3.18% / 1.65% |

6 Conclusion

In this paper, we propose the Adaptive Multi-Scale Decomposition (AMD) framework for time series forecasting, which addresses the inherent complexity of time series data by decomposing it into multiple temporal patterns at various scales and adaptively aggregating these patterns. Comprehensive experiments demonstrate that AMD consistently achieves state-of-the-art performance in both long-term and short-term forecasting tasks across various datasets, showcasing superior efficiency and effectiveness.
Looking ahead, we plan to further explore the integration of multi-scale information and to expand the application of AMD as a backbone or plugin in other mainstream time series analysis tasks, such as imputation, classification, and anomaly detection.

References

[1] Hongjie Xia, Huijie Ao, Long Li, Yu Liu, Sen Liu, Guangnan Ye, and Hongfeng Chai. CI-STHPAN: Pre-trained attention network for stock selection with channel-independent spatio-temporal hypergraph. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8):9187–9195, 2024.
[2] Dawei Cheng, Fangzhou Yang, Sheng Xiang, and Jin Liu. Financial time series forecasting with multi-modality graph neural network. Pattern Recognition, 121:108218, 2022.
[3] Hao Xue and Flora D. Salim. Utilizing language models for energy load forecasting. In Proceedings of the ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys '23), pages 224–227, 2023.
[4] Daejin Kim, Youngin Cho, Dongmin Kim, Cheonbok Park, and Jaegul Choo. Residual correction in real-time traffic forecasting. In Proceedings of the ACM International Conference on Information & Knowledge Management, pages 962–971, 2022.
[5] Kristofers Volkovs, Evalds Urtans, and Vairis Caune. Primed UNet-LSTM for weather forecasting. In Proceedings of the International Conference on Advances in Artificial Intelligence (ICAAI '23), pages 13–17, 2024.
[6] Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In International Conference on Learning Representations, 2023.
[7] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In M.
Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 22419–22430, 2021.
[8] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, pages 27268–27286. PMLR, 2022.
[9] Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In International Conference on Learning Representations, 2023.
[10] Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. iTransformer: Inverted transformers are effective for time series forecasting. In International Conference on Learning Representations, 2024.
[11] Tao Dai, Beiliang Wu, Peiyuan Liu, Naiqi Li, Jigang Bao, Yong Jiang, and Shu-Tao Xia. Periodicity decoupling framework for long-term series forecasting. In International Conference on Learning Representations, 2024.
[12] Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan K. Mathur, Rajat Sen, and Rose Yu. Long-term forecasting with TiDE: Time-series dense encoder. Transactions on Machine Learning Research, 2023.
[13] Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? Proceedings of the AAAI Conference on Artificial Intelligence, 37(9):11121–11128, 2023.
[14] Ilya O. Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, et al. MLP-Mixer: An all-MLP architecture for vision. In Advances in Neural Information Processing Systems, volume 34, pages 24261–24272, 2021.
[15] Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y.
Zhang, and Jun Zhou. TimeMixer: Decomposable multiscale mixing for time series forecasting. In International Conference on Learning Representations, 2024.
[16] Zhe Li, Zhongwen Rao, Lujia Pan, and Zenglin Xu. MTS-Mixers: Multivariate time series forecasting via factorized temporal and channel mixing. arXiv preprint arXiv:2302.04501, 2023.
[17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 2017.
[18] Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljačić, Thomas Y. Hou, and Max Tegmark. KAN: Kolmogorov-Arnold networks. arXiv preprint arXiv:2404.19756, 2024.
[19] Ronghao Ni, Zinan Lin, Shuaiqi Wang, and Giulia Fanti. Mixture-of-linear-experts for long-term time series forecasting. arXiv preprint arXiv:2312.06786, 2024.
[20] Huiqiang Wang, Jian Peng, Feihu Huang, Jince Wang, Junhui Chen, and Yifei Xiao. MICN: Multi-scale local and global context modeling for long-term series forecasting. In International Conference on Learning Representations, 2023.
[21] Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In International Conference on Learning Representations, 2023.
[22] Jiezhu Cheng, Kaizhu Huang, and Zibin Zheng. Towards better forecasting by fusing near and distant future visions. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):3593–3600, 2020.
[23] Shengsheng Lin, Weiwei Lin, Wentai Wu, Feiyu Zhao, Ruichao Mo, and Haotong Zhang. SegRNN: Segment recurrent neural network for long-term time series forecasting. arXiv preprint arXiv:2308.11200, 2023.
[24] Yuxin Jia, Youfang Lin, Xinyan Hao, Yan Lin, S. Guo, and Huaiyu Wan.
WITRAN: Water-wave information transmission and recurrent acceleration network for long-range time series forecasting. In Advances in Neural Information Processing Systems, 2023.
[25] Yijing Liu, Qinxian Liu, Jianwei Zhang, H. Feng, Zhongwei Wang, Zihan Zhou, and Wei Chen. Multivariate time-series forecasting with temporal polynomial graph neural networks. In Advances in Neural Information Processing Systems, 2022.
[26] Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, and Chengqi Zhang. Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20), pages 753–763, 2020.
[27] Kun Yi, Qi Zhang, Wei Fan, Hui He, Liang Hu, Pengyang Wang, Ning An, Longbing Cao, and Zhendong Niu. FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective. In Advances in Neural Information Processing Systems, volume 36, pages 69638–69660, 2023.
[28] Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary Transformers: Exploring the stationarity in time series forecasting. In Advances in Neural Information Processing Systems, volume 35, pages 9881–9893, 2022.
[29] Tianping Zhang, Yizhuo Zhang, Wei Cao, Jiang Bian, Xiaohan Yi, Shun Zheng, and Jian Li. Less is more: Fast multivariate time series forecasting with light sampling-oriented MLP structures. arXiv preprint arXiv:2207.01186, 2022.
[30] Zhijian Xu, Ailing Zeng, and Qiang Xu. FITS: Modeling time series with 10k parameters. In International Conference on Learning Representations, 2024.
[31] Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X.
Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations, 2022.
[32] Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In International Conference on Learning Representations, 2020.
[33] Tian Zhou, Ziqing Ma, Xue Wang, Qingsong Wen, Liang Sun, Tao Yao, Wotao Yin, and Rong Jin. FiLM: Frequency improved Legendre memory model for long-term time series forecasting. In Advances in Neural Information Processing Systems, volume 35, pages 12677–12690, 2022.
[34] G. E. P. Box, Gwilym M. Jenkins, and John F. MacGregor. Some recent advances in forecasting and control. Journal of the Royal Statistical Society, Series C (Applied Statistics), 17:158–179, 1968.
[35] Gabriel Dalforno Silvestre, Moisés Rocha dos Santos, and André C. P. L. F. de Carvalho. Seasonal-trend decomposition based on Loess + machine learning: Hybrid forecasting for monthly univariate time series. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–7, 2021.
[36] Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, and Qiang Xu. SCINet: Time series modeling and forecasting with sample convolution and interaction. In Advances in Neural Information Processing Systems, volume 35, pages 5816–5828, 2022.
[37] Si-An Chen, Chun-Liang Li, Sercan O. Arik, Nathanael Christian Yoder, and Tomas Pfister. TSMixer: An all-MLP architecture for time series forecasting. Transactions on Machine Learning Research, 2023.
[38] Wanlin Cai, Yuxuan Liang, Xianggen Liu, Jianshuai Feng, and Yuankai Wu.
MSGNet: Learning multi-scale inter-series correlations for multivariate time series forecasting. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10):11141–11149, 2024.
[39] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations, 2017.
[40] Bo Li, Yifei Shen, Jingkang Yang, Yezhen Wang, Jiawei Ren, Tong Che, Jun Zhang, and Ziwei Liu. Sparse mixture-of-experts are domain generalizable learners. In International Conference on Learning Representations, 2023.
[41] Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jangho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations, 2022.
[42] Lu Han, Han-Jia Ye, and De-Chuan Zhan. The capacity and robustness trade-off: Revisiting the channel-independent strategy for multivariate time series forecasting. arXiv preprint arXiv:2304.05206, 2023.
[43] Charles C. Holt. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting, 20(1):5–10, 2004.

A Proof of Theorem 1

Theorem 1. Let g(x) be the multi-scale mixing representation, with g(x) ∈ R^{1×L} (for simplicity, we consider univariate sequences), and let the original sequence f(x) be Lipschitz smooth with constant K (i.e., |f(a) − f(b)| / |a − b| ≤ K). Then there exists a linear model such that |y_t − ŷ_t| is bounded, ∀t = 1, ..., T.

Proof. We first prove that each downsampled sequence f_i(x), ∀i = 1, ..., n, is Lipschitz smooth. For any t ∈ D(f_i(x)) we also have t ∈ D(f(x)), where D denotes the domain of the sequence.
Therefore, we can conclude that

|f_1(a) − f_1(b)| / |a − b|
  = (1/d) · |Σ_{j=ad−d+1}^{ad} f(j) − Σ_{j=bd−d+1}^{bd} f(j)| / |a − b|    (12)
  = (1/d) · |Σ_{j=0}^{d−1} [f(ad + j − d + 1) − f(bd + j − d + 1)]| / |ad − bd|    (13)
  ≤ (1/d) · dK    (14)
  ≤ K    (15)

Similarly, by mathematical induction in a bottom-up manner, we can prove that f_i(x), ∀i = 1, ..., n, is Lipschitz smooth. Next, we prove that the multi-scale mixing representations g_i(x), ∀i = 0, ..., n, are Lipschitz smooth. By the property of linear combination, we have g_i(t) = f_i(t) + Σ_{j=0}^{⌊L/d^{i+1}⌋} g_{i+1}(t) W_i(j, t), where W_i(j, t) denotes the entry in the j-th row and t-th column. So we have

|g_{n−1}(a) − g_{n−1}(b)| / |a − b|    (16)
  = |f_{n−1}(a) − f_{n−1}(b) + Σ_{j=0}^{⌊L/d^n⌋} g_n(a) W_{n−1}(j, a) − Σ_{j=0}^{⌊L/d^n⌋} g_n(b) W_{n−1}(j, b)| / |a − b|    (17)
  ≤ |f_{n−1}(a) − f_{n−1}(b)| / |a − b| + |Σ_{j=0}^{⌊L/d^n⌋} g_n(a) W_{n−1}(j, a) − Σ_{j=0}^{⌊L/d^n⌋} g_n(b) W_{n−1}(j, b)| / |a − b|    (18)
  ≤ K + Σ_{j=0}^{⌊L/d^n⌋} |g_n(a) − g_n(b)| / |a − b| · max{W_{n−1}(j, a), W_{n−1}(j, b)}    (19)
  = K + Σ_{j=0}^{⌊L/d^n⌋} |f_n(a) − f_n(b)| / |a − b| · max{W_{n−1}(j, a), W_{n−1}(j, b)}    (20)
  ≤ K · (1 + Σ_{j=0}^{⌊L/d^n⌋} max{W_{n−1}(j, a), W_{n−1}(j, b)})    (21)

Therefore, g_{n−1} is Lipschitz smooth, since W_i is a constant matrix. Similarly, by mathematical induction in a top-down manner, we can prove that g_i(x), ∀i = 0, ..., n, is Lipschitz smooth.

Subsequently, we prove that |y^m_t − ŷ^m_t| is bounded, where y^m_t denotes the TP-mixed observed data and ŷ^m_t the predicted TP-mixed data. Let the period of the finest-granularity temporal pattern be P; if there is no periodicity, then P → +∞.

y^m_t = g(L + t) = g(P + 1 + t)    (22)
ŷ^m_t = g(L + t) A ⊕ b    (23)

Let A ∈ R^{L×T} with

A_{tj} = 1 if j = P + 1 or j = (t mod P) + 1;  A_{tj} = −1 if j = 1;  A_{tj} = 0 otherwise;  b_t = 0    (24)

Then,

ŷ^m_t = g(t mod P + 1) − g(1) + g(P + 1)    (25)

So,

|y^m_t − ŷ^m_t|    (26)
  = |[g(P + 1 + t) − g(P + 1)] − [g(t mod P + 1) − g(1)]|    (27)
  ≤ |g(P + 1 + t) − g(P + 1)| + |g(t mod P + 1) − g(1)|    (28)
  ≤ K (t + t mod P)    (29)

Finally, we apply a weighted pattern predictor to the TP-mixed data. Since only internal averaging operations are applied during this step, the result remains bounded.
This is due to the inherent property of internal averaging: it smooths out fluctuations within the data without introducing variations beyond certain bounds. Therefore, |y_t − ŷ_t| is bounded. ∎

B Detailed Algorithm Description

The pseudocode of the AMD algorithm is shown in Algorithm 1. The algorithm initializes the input data and parameters and performs normalization. The data is then passed through the MDM block to extract multi-scale information, and the DDI blocks subsequently perform aggregation. Following this, for each channel, the TP-Selector determines the predictor weights, and the predictor outputs are weighted and summed to form the prediction. The algorithm transposes and de-normalizes the predictions before returning them.

Algorithm 1: The Overall Architecture of AMD
Input: look-back sequence X ∈ R^{L×C}.
Parameter: DDI block number n.
Output: predictions Ŷ.
  X = RevIN(X, 'norm')
  X = X^T                                  ▷ X ∈ R^{C×L}
  U = MDM(X)                               ▷ U ∈ R^{C×L}
  for i in {1, ..., n} do
    U = LayerNorm(U)
    V_0^P = U_0^P;  j = P
    while j ≤ L do
      Z_j^{j+P} = U_j^{j+P} + FeedForward(V_{j−P}^{j})
      V_j^{j+P} = Z_j^{j+P} + β · FeedForward((Z_j^{j+P})^T)^T
      j = j + P
    end while
  end for
  v = Split(V)
  for i in {1, ..., C} do
    S = TP-Selector(u_i)                   ▷ S ∈ R^{m×T}
    ŷ_i = SUM(S ⊙ Predictor(v_i), dim = 0) ▷ ŷ_i ∈ R^{1×T}
  end for
  Ŷ = Ŷ^T                                  ▷ Ŷ ∈ R^{T×C}
  Ŷ = RevIN(Ŷ, 'denorm')
  Return Ŷ                                 ▷ prediction results

C Details of Experiments

C.1 Detailed Dataset Descriptions

Detailed dataset descriptions are shown in Tab. 5. Dim denotes the number of channels in each dataset. Dataset Size denotes the number of time points in the (Train, Validation, Test) splits. Prediction Length denotes the future time points to be predicted; four prediction settings are included in each long-term dataset. Frequency denotes the sampling interval of the time points. Information refers to the meaning of the data.

Table 5: Detailed dataset descriptions.
| Dataset | Dim | Prediction Length | Dataset Size | Frequency | Information |
|---|---|---|---|---|---|
| ETTh1, ETTh2 | 7 | {96, 192, 336, 720} | (8545, 2881, 2881) | Hourly | Electricity |
| ETTm1, ETTm2 | 7 | {96, 192, 336, 720} | (34465, 11521, 11521) | 15 min | Electricity |
| Exchange | 8 | {96, 192, 336, 720} | (5120, 665, 1422) | Daily | Economy |
| Weather | 21 | {96, 192, 336, 720} | (36792, 5271, 10540) | 10 min | Weather |
| ECL | 321 | {96, 192, 336, 720} | (18317, 2633, 5261) | Hourly | Electricity |
| Traffic | 862 | {96, 192, 336, 720} | (12185, 1757, 3509) | Hourly | Transportation |
| Solar-Energy | 137 | {96, 192, 336, 720} | (36601, 5161, 10417) | 10 min | Energy |
| PEMS03 | 358 | 12 | (15617, 5135, 5135) | 5 min | Transportation |
| PEMS04 | 307 | 12 | (10172, 3375, 3375) | 5 min | Transportation |
| PEMS07 | 883 | 12 | (16911, 5622, 5622) | 5 min | Transportation |
| PEMS08 | 170 | 12 | (10690, 3548, 3548) | 5 min | Transportation |

C.2 Baseline Models

We briefly describe the selected baselines:

(1) MTS-Mixers [16] is an MLP-based model utilizing two factorized modules to model the mapping between the input and the prediction sequence. The source code is available at https://github.com/plumprc/MTS-Mixers.
(2) DLinear [13] is an MLP-based model with just one linear layer, which unexpectedly outperforms Transformer-based models in long-term TSF. The source code is available at https://github.com/cure-lab/LTSF-Linear.
(3) TiDE [12] is a simple and effective MLP-based encoder-decoder model. The source code is available at https://github.com/thuml/Time-Series-Library.
(4) PatchTST [6] is a Transformer-based model utilizing patching and the channel-independence (CI) technique. It also enables effective pre-training and transfer learning across datasets. The source code is available at https://github.com/yuqinie98/PatchTST.
(5) Crossformer [9] is a Transformer-based model introducing the Dimension-Segment-Wise (DSW) embedding and Two-Stage Attention (TSA) layer to effectively capture cross-time and cross-dimension dependencies. The source code is available at https://github.com/Thinklab-SJTU/Crossformer.
(6) FEDformer [8] is a Transformer-based model proposing seasonal-trend decomposition and exploiting the sparsity of time series in the frequency domain. The source code is available at https://github.com/DAMO-DI-ML/ICML2022-FEDformer.
(7) TimesNet [21] is a CNN-based model with TimesBlock as a task-general backbone. It transforms 1D time series into 2D tensors to capture intraperiod and interperiod variations. The source code is available at https://github.com/thuml/TimesNet.
(8) MICN [20] is a CNN-based model combining local features and global correlations to capture the overall view of time series. The source code is available at https://github.com/wanghq21/MICN.

C.3 Metric Details

Regarding metrics, we utilize the mean square error (MSE) and mean absolute error (MAE) for long-term forecasting, computed over the T predicted time points:

MSE = (1/T) Σ_{i=1}^{T} (ŷ_i − y_i)²,  MAE = (1/T) Σ_{i=1}^{T} |ŷ_i − y_i|

C.4 Hyper-Parameter Selection and Implementation Details

For the main experiments, we have the following hyper-parameters: the patch length P and the number of DDI blocks n. The hidden dimension of DDI is d_model = max{32, 2^[log(feature_num)]}. The number of predictors is set to 8, while topK is set to 2. The hidden dimension in AMS is set to 2048. The weight decay is set to 1e−7. The Adam optimizer is used for training, and the initial learning rate is shown in Tab. 6. For all datasets, to leverage more distinct temporal patterns, we set the number of MDM layers h to 3 and the downsampling rate c to 2. We report the specific hyper-parameters chosen for each dataset in Tab. 6. For all baseline models, we replicated the implementations using the configurations outlined in the original papers or official code.

Table 6: The hyper-parameters for different experimental settings.
Dataset       P   α    Batch Size  Epochs  n  Learning Rate  Layer Norm
ETTh1         16  0.0  128         10      1  5×10⁻⁵         True
ETTh2         4   1.0  128         10      1  5×10⁻⁵         False
ETTm1         16  0.0  128         10      1  3×10⁻⁵         True
ETTm2         8   0.0  128         10      1  1×10⁻⁵         True
Exchange      4   0.0  512         10      1  3×10⁻⁴         True
Weather       16  0.0  128         10      1  5×10⁻⁵         True
ECL           16  0.0  128         20      1  3×10⁻⁴         False
Traffic       16  0.0  32          20      1  8×10⁻⁵         False
Solar-Energy  8   1.0  128         10      1  2×10⁻⁵         True
PEMS03        4   1.0  32          10      1  5×10⁻⁵         False
PEMS04        4   1.0  32          5       1  5×10⁻⁵         False
PEMS07        16  1.0  32          10      1  5×10⁻⁵         False
PEMS08        16  1.0  32          10      1  5×10⁻⁵         False

D Extra Experimental Results

D.1 Full Results

Due to the space limitation of the main text, we place the full results of long-term forecasting in Tab. 7. To evaluate the generality of our proposed AMD, we conduct long-term forecasting on existing real-world multivariate benchmarks under different prediction lengths S ∈ {96,192,336,720}. For a fairer comparison across all baselines, the input sequence length is searched among {96,192,336,512,672,720} to obtain the best result.

D.2 Robustness Evaluation

The results shown in Tab. 8 are obtained from five random seeds on the ETTh1, ETTm1, Weather, Solar-Energy, ECL, and Traffic datasets, exhibiting that the performance of AMD is stable.

Table 7: Full results of the long-term forecasting task, reported for 4 different prediction lengths T ∈ {96,192,336,720} together with their average. For a fairer comparison across all baselines, the input sequence length L is searched among {96,192,336,512,672,720}.
Models: AMD (Ours) | PatchTST [6] | Crossformer [9] | FEDformer [8] | TimesNet [21] | MICN [20] | DLinear [13] | MTS-Mixers [16]; each column pair reports MSE MAE.

Weather
 96  0.145 0.198 | 0.149 0.198 | 0.153 0.217 | 0.238 0.314 | 0.172 0.220 | 0.161 0.226 | 0.152 0.237 | 0.156 0.206
 192 0.190 0.240 | 0.194 0.241 | 0.197 0.269 | 0.275 0.329 | 0.219 0.261 | 0.220 0.283 | 0.220 0.282 | 0.199 0.248
 336 0.242 0.280 | 0.245 0.282 | 0.252 0.311 | 0.339 0.377 | 0.280 0.306 | 0.275 0.328 | 0.265 0.319 | 0.249 0.291
 720 0.315 0.332 | 0.314 0.334 | 0.318 0.363 | 0.389 0.409 | 0.365 0.359 | 0.311 0.356 | 0.323 0.362 | 0.336 0.343
 avg 0.223 0.262 | 0.226 0.264 | 0.230 0.290 | 0.310 0.357 | 0.259 0.287 | 0.242 0.298 | 0.240 0.300 | 0.235 0.272
ETTh1
 96  0.369 0.397 | 0.370 0.399 | 0.386 0.429 | 0.376 0.415 | 0.384 0.402 | 0.396 0.427 | 0.375 0.399 | 0.372 0.395
 192 0.401 0.417 | 0.413 0.421 | 0.416 0.444 | 0.423 0.446 | 0.457 0.436 | 0.430 0.453 | 0.405 0.416 | 0.416 0.426
 336 0.418 0.427 | 0.422 0.436 | 0.440 0.461 | 0.444 0.462 | 0.491 0.469 | 0.433 0.458 | 0.439 0.443 | 0.455 0.449
 720 0.439 0.454 | 0.447 0.466 | 0.519 0.524 | 0.469 0.492 | 0.521 0.500 | 0.474 0.508 | 0.472 0.490 | 0.475 0.472
 avg 0.407 0.424 | 0.413 0.431 | 0.441 0.465 | 0.428 0.454 | 0.458 0.450 | 0.433 0.462 | 0.423 0.437 | 0.430 0.436
ETTh2
 96  0.274 0.337 | 0.274 0.337 | 0.628 0.563 | 0.332 0.374 | 0.340 0.374 | 0.289 0.357 | 0.289 0.353 | 0.307 0.354
 192 0.351 0.383 | 0.339 0.379 | 0.703 0.624 | 0.407 0.446 | 0.402 0.414 | 0.409 0.438 | 0.383 0.418 | 0.374 0.399
 336 0.375 0.411 | 0.329 0.380 | 0.827 0.675 | 0.400 0.447 | 0.452 0.452 | 0.417 0.452 | 0.448 0.465 | 0.398 0.432
 720 0.402 0.438 | 0.379 0.422 | 1.181 0.840 | 0.412 0.469 | 0.462 0.468 | 0.426 0.473 | 0.605 0.551 | 0.463 0.465
 avg 0.350 0.392 | 0.330 0.379 | 0.835 0.676 | 0.388 0.434 | 0.414 0.427 | 0.385 0.430 | 0.431 0.447 | 0.386 0.413
ETTm1
 96  0.284 0.339 | 0.290 0.342 | 0.316 0.373 | 0.326 0.390 | 0.338 0.375 | 0.314 0.360 | 0.299 0.343 | 0.314 0.358
 192 0.322 0.362 | 0.332 0.369 | 0.377 0.411 | 0.365 0.415 | 0.371 0.387 | 0.359 0.387 | 0.335 0.365 | 0.354 0.386
 336 0.361 0.383 | 0.366 0.392 | 0.431 0.442 | 0.391 0.425 | 0.410 0.411 | 0.398 0.413 | 0.369 0.386 | 0.384 0.405
 720 0.421 0.418 | 0.416 0.420 | 0.600 0.547 | 0.446 0.458 | 0.478 0.450 | 0.459 0.464 | 0.425 0.421 | 0.427 0.432
 avg 0.347 0.375 | 0.351 0.381 | 0.431 0.443 | 0.382 0.422 | 0.400 0.406 | 0.383 0.406 | 0.357 0.379 | 0.370 0.395
ETTm2
 96  0.167 0.258 | 0.166 0.256 | 0.421 0.461 | 0.180 0.271 | 0.187 0.267 | 0.178 0.273 | 0.167 0.260 | 0.177 0.259
 192 0.221 0.295 | 0.223 0.296 | 0.503 0.519 | 0.252 0.318 | 0.249 0.309 | 0.245 0.316 | 0.224 0.303 | 0.241 0.303
 336 0.270 0.327 | 0.274 0.329 | 0.611 0.580 | 0.324 0.364 | 0.321 0.351 | 0.295 0.350 | 0.281 0.342 | 0.297 0.338
 720 0.356 0.382 | 0.362 0.385 | 0.996 0.750 | 0.410 0.420 | 0.497 0.403 | 0.389 0.409 | 0.397 0.421 | 0.396 0.398
 avg 0.253 0.315 | 0.255 0.315 | 0.632 0.578 | 0.292 0.343 | 0.291 0.333 | 0.277 0.336 | 0.267 0.332 | 0.277 0.325
ECL
 96  0.129 0.225 | 0.129 0.222 | 0.187 0.283 | 0.186 0.302 | 0.168 0.272 | 0.159 0.267 | 0.153 0.237 | 0.141 0.243
 192 0.148 0.242 | 0.147 0.240 | 0.258 0.330 | 0.197 0.311 | 0.184 0.289 | 0.168 0.279 | 0.152 0.249 | 0.163 0.261
 336 0.164 0.258 | 0.163 0.259 | 0.323 0.369 | 0.213 0.328 | 0.198 0.300 | 0.196 0.308 | 0.169 0.267 | 0.176 0.277
 720 0.195 0.289 | 0.197 0.290 | 0.404 0.423 | 0.233 0.344 | 0.220 0.320 | 0.203 0.312 | 0.233 0.344 | 0.212 0.308
 avg 0.159 0.254 | 0.159 0.253 | 0.293 0.351 | 0.207 0.321 | 0.192 0.295 | 0.182 0.297 | 0.177 0.274 | 0.173 0.272
Exchange
 96  0.083 0.201 | 0.093 0.214 | 0.186 0.346 | 0.136 0.276 | 0.107 0.234 | 0.102 0.235 | 0.081 0.203 | 0.083 0.201
 192 0.171 0.293 | 0.192 0.312 | 0.467 0.522 | 0.256 0.369 | 0.226 0.344 | 0.172 0.316 | 0.157 0.293 | 0.174 0.296
 336 0.309 0.402 | 0.350 0.432 | 0.783 0.721 | 0.426 0.464 | 0.367 0.448 | 0.272 0.407 | 0.305 0.414 | 0.336 0.417
 720 0.750 0.652 | 0.911 0.716 | 1.367 0.943 | 1.090 0.800 | 0.964 0.746 | 0.714 0.658 | 0.643 0.601 | 0.900 0.715
 avg 0.328 0.387 | 0.387 0.419 | 0.701 0.633 | 0.478 0.478 | 0.416 0.443 | 0.315 0.404 | 0.297 0.378 | 0.373 0.407
Traffic
 96  0.366 0.259 | 0.360 0.249 | 0.512 0.290 | 0.576 0.359 | 0.593 0.321 | 0.508 0.301 | 0.410 0.282 | 0.462 0.332
 192 0.381 0.265 | 0.379 0.256 | 0.523 0.297 | 0.610 0.380 | 0.617 0.336 | 0.536 0.315 | 0.423 0.287 | 0.488 0.354
 336 0.397 0.273 | 0.392 0.264 | 0.530 0.300 | 0.608 0.375 | 0.629 0.336 | 0.525 0.310 | 0.436 0.296 | 0.498 0.360
 720 0.430 0.292 | 0.432 0.268 | 0.573 0.313 | 0.621 0.375 | 0.640 0.350 | 0.571 0.323 | 0.466 0.315 | 0.529 0.370
 avg 0.393 0.272 | 0.391 0.264 | 0.535 0.300 | 0.604 0.372 | 0.620 0.336 | 0.535 0.312 | 0.434 0.295 | 0.494 0.354
Solar
 96  0.175 0.228 | 0.224 0.278 | 0.181 0.240 | 0.201 0.304 | 0.219 0.314 | 0.188 0.252 | 0.289 0.337 | 0.284 0.325
 192 0.188 0.235 | 0.253 0.298 | 0.196 0.252 | 0.237 0.337 | 0.231 0.322 | 0.215 0.280 | 0.319 0.397 | 0.307 0.362
 336 0.201 0.246 | 0.273 0.306 | 0.216 0.243 | 0.254 0.362 | 0.246 0.337 | 0.222 0.267 | 0.352 0.415 | 0.333 0.384
 720 0.203 0.249 | 0.272 0.308 | 0.220 0.256 | 0.280 0.397 | 0.280 0.363 | 0.226 0.264 | 0.356 0.412 | 0.335 0.383
 avg 0.192 0.240 | 0.256 0.298 | 0.204 0.248 | 0.243 0.350 | 0.244 0.334 | 0.213 0.266 | 0.329 0.400 | 0.315 0.363
1st count: 44 | 24 | 1 | 0 | 0 | 2 | 5 | 0

D.3 Hyper-Parameter Sensitivity

Varying Input Length and Downsampling Parameters. Temporal patterns are obtained from the input sequence through downsampling. Therefore, the input length L, the number of downsampling operations h, and the downsampling rate d all significantly affect the accuracy of prediction. To investigate their impact, we conduct the following experiments. On the ETTm1 dataset, we first choose L among {96,192,336,512,672,720}. As shown in Fig. 5a, the forecasting performance benefits from an increase in input length. For the best input length, we choose h among {1,2,3,4,5} and d among {1,2,3}. From the results shown in Tab. 9, it can be found that as the number of downsampling operations h increases, we observe improvements across different prediction lengths. Therefore, we choose a setting of 3 layers to find a balance between efficiency and performance.

Table 8: Robustness of AMD performance. The results are obtained from five random seeds.
Datasets: ETTh1 | ETTm1 | Weather  (each column pair: MSE, MAE)
 96  0.3691±0.0008 0.3969±0.0001 | 0.2838±0.0004 0.3387±0.0003 | 0.1451±0.0003 0.1982±0.0002
 192 0.4008±0.0007 0.4170±0.0002 | 0.3218±0.0004 0.3618±0.0002 | 0.1898±0.0003 0.2401±0.0003
 336 0.4177±0.0005 0.4272±0.0002 | 0.3607±0.0003 0.3828±0.0002 | 0.2419±0.0004 0.2801±0.0003
 720 0.4389±0.0009 0.4541±0.0002 | 0.4209±0.0004 0.4182±0.0003 | 0.3151±0.0005 0.3316±0.0002
Datasets: Solar-Energy | ECL | Traffic  (each column pair: MSE, MAE)
 96  0.1751±0.0003 0.2277±0.0003 | 0.1293±0.0002 0.2552±0.0002 | 0.3659±0.0004 0.2591±0.0003
 192 0.1875±0.0004 0.2350±0.0003 | 0.1481±0.0003 0.2419±0.0001 | 0.3806±0.0003 0.2647±0.0003
 336 0.2009±0.0003 0.2456±0.0002 | 0.1642±0.0003 0.2581±0.0002 | 0.3965±0.0004 0.2725±0.0003
 720 0.2032±0.0003 0.2490±0.0003 | 0.1948±0.0002 0.2890±0.0002 | 0.4302±0.0004 0.2918±0.0002

Figure 5: Hyper-parameter sensitivity with respect to the input sequence length, the number of predictors, and λ1. The results are recorded for all four prediction lengths.

Varying Number of Predictors and TopK. The number of decomposed temporal patterns and the number of dominant temporal patterns are determined by the number of predictors and the topK value, respectively. We conduct hyper-parameter sensitivity experiments on these two parameters on the Weather dataset; the results are shown in Fig. 5b. We observe an improvement in prediction results as the number of predictors increases. However, this also increases the memory usage of the selector weights. To strike a balance between memory and performance, we finally opt for 8 predictors.

Varying Scaling of L_selector. We investigate the scaling factor λ1 for the load-balancing loss L_selector on the Weather dataset. As shown in Observation 4, we found that when λ1 is set to 0, the loss function does not guide a reasonable assignment of different temporal patterns to different predictors, resulting in poor prediction performance.
However, when λ1 is not 0, as shown in Fig. 5c, the model demonstrates strong robustness to λ1, with prediction results independent of the scaling. Upon inspecting the loss function, we noticed that L_selector already decreases significantly in the early epochs. Therefore, we finally choose λ1 to be 1.0.

Table 9: The MSE results for different numbers of downsampling operations h and downsampling rates d on the ETTm1 dataset. '-' indicates no downsampling when h = 0; in this case, d has no effect on the result.

Predict Length:  96    192   336   720
d=1  h=0  0.292 0.333 0.371 0.431
     h=1  0.288 0.326 0.367 0.428
     h=2  0.285 0.323 0.364 0.424
     h=3  0.285 0.322 0.362 0.423
     h=4  0.283 0.322 0.360 0.421
     h=5  0.284 0.323 0.360 0.422
d=2  h=0  -     -     -     -
     h=1  0.286 0.326 0.365 0.427
     h=2  0.284 0.322 0.362 0.423
     h=3  0.284 0.322 0.361 0.421
     h=4  0.284 0.323 0.360 0.423
     h=5  0.285 0.321 0.359 0.423
d=3  h=0  -     -     -     -
     h=1  0.286 0.327 0.366 0.428
     h=2  0.284 0.324 0.366 0.426
     h=3  0.283 0.322 0.363 0.424
     h=4  0.284 0.322 0.364 0.423
     h=5  0.284 0.322 0.363 0.422

D.4 Extra Ablation Study

Ablation on MDM. MDM is used to extract different temporal patterns. We conduct an ablation experiment by removing MDM. The results are shown in the first row of Tab. 9, where the number of downsampling operations h equals 0. It can be seen that removing the MDM module does not yield satisfactory prediction results.

Ablation on AMS. AMS is based on a MoE structure to exploit information from every temporal pattern. We conduct an ablation experiment by replacing the TPSelector and predictors in AMS with a single predictor. The results, as shown in Tab. 10, demonstrate a significant decrease in prediction accuracy.

Table 10: Ablation on AMS: results with only a single predictor.
Datasets: ETTh1 | ETTm1 | Weather | Solar-Energy | ECL  (each column pair: MSE, MAE)
 96  0.379 0.406 | 0.291 0.349 | 0.150 0.206 | 0.179 0.235 | 0.137 0.234
 192 0.413 0.428 | 0.330 0.373 | 0.199 0.248 | 0.194 0.246 | 0.157 0.251
 336 0.434 0.440 | 0.373 0.398 | 0.248 0.287 | 0.209 0.257 | 0.171 0.267
 720 0.458 0.466 | 0.443 0.436 | 0.320 0.339 | 0.213 0.263 | 0.203 0.298

Dense MoE Strategy and Sparse MoE Strategy for AMD. In theory, the information contained in each temporal pattern is useful, and discarding any one of them would result in a loss of information. Therefore, we adopt the dense MoE strategy, where dominant temporal patterns are given larger weights while the others are given smaller weights instead of being set to 0. Here, we conduct ablation experiments comparing the dense MoE with the sparse MoE to demonstrate this assertion. As shown in Tab. 11, compared to the dense MoE, the sparse MoE shows increased prediction errors. This observation highlights the consequence of omitting non-dominant temporal pattern information, which invariably leads to a degradation in performance.

Table 11: Ablation on the Dense MoE strategy and the Sparse MoE strategy, where (d) denotes Dense MoE and (s) denotes Sparse MoE.
Datasets: ETTh1(d) | ETTh1(s) | Weather(d) | Weather(s) | ECL(d) | ECL(s)  (each column pair: MSE, MAE)
 96  0.369 0.397 | 0.375 0.403 | 0.145 0.198 | 0.150 0.205 | 0.129 0.225 | 0.132 0.228
 192 0.401 0.417 | 0.410 0.426 | 0.190 0.240 | 0.193 0.245 | 0.148 0.242 | 0.151 0.244
 336 0.418 0.427 | 0.428 0.436 | 0.242 0.280 | 0.244 0.284 | 0.164 0.258 | 0.165 0.259
 720 0.439 0.454 | 0.444 0.459 | 0.315 0.332 | 0.315 0.334 | 0.195 0.289 | 0.199 0.293

E Discussions on Limitations and Future Improvement

Recently, several specific designs have been utilized to better capture complex sequential dynamics, such as normalization, patching, frequency-domain representation, channel independence, sequence decomposition, and others, as shown in Fig. 6.
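To make the dense vs. sparse MoE comparison above (Tab. 11) concrete, the following is a minimal NumPy sketch of the two gating strategies. The function names and example logits are illustrative and not taken from the AMD codebase: dense gating keeps a nonzero weight for every predictor (dominant patterns simply receive larger weights), while sparse gating zeroes out all but the top-K predictors and renormalizes, discarding the remaining patterns' information.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dense_moe_weights(logits):
    # Dense MoE: every predictor keeps a nonzero weight.
    return softmax(logits)

def sparse_moe_weights(logits, top_k=2):
    # Sparse MoE: only the top-k predictors keep weight; the rest are
    # zeroed out, so their temporal-pattern information is discarded.
    w = softmax(logits)
    w[np.argsort(w)[:-top_k]] = 0.0
    return w / w.sum()
```

With, e.g., 8 predictors and topK = 2 (the AMD defaults), the sparse variant drops 6 of the 8 pattern-specific predictions, which is the information loss the ablation attributes the degraded accuracy to.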
(1) Normalization: Real-world time series always exhibit non-stationary behavior, where the data distribution changes over time. RevIN [41] is a normalization-and-denormalization method for TSF that effectively constrains non-stationary information (mean and variance) in the input layer and restores it in the output layer. This method has managed to improve the delineation of temporal dependencies while minimizing the influence of noise. In AMD, we also adopt RevIN. However, it struggles to adequately resolve the intricate distributional variations among the layers within deep networks, so further improvements are needed to address this distributional shift.

(2) Patching: Inspired by the utilization of local semantic context in computer vision (CV) and natural language processing (NLP), the technique of patching was introduced [6]. Since TS data exhibit locality, individual time steps lack the semantic meaning found in words within sentences; therefore, extracting the local semantic context is crucial for understanding their connections. Additionally, this approach has the advantage of reducing the number of parameters. In AMD, we also develop a patching mechanism to extract recent history information. However, how to better exploit locality remains an issue that requires further research.

Figure 6: Five designs for sequential modelling: RevIN, patching, frequency-domain representation, channel independence, and sequence decomposition.

(3) Frequency Domain Representation: TS data, characterized by their inherent complexity and dynamic nature, often contain information that is sparse and dispersed across the time domain. The frequency-domain representation promises a more compact and efficient representation of the inherent patterns. Related methods [8] still rely on feature engineering to detect the dominant period set.
However, some overlooked periods or trend changes may represent significant events, resulting in information loss. In the future, we can explore adaptive temporal pattern mining in the frequency domain, thereby utilizing the Complex Frequency Linear module proposed by FITS [30] to mitigate the problem of large parameter sizes in MLP-based models when the look-back length is long.

(4) Channel Independence (CI) and Channel Dependence (CD): CI and CD represent a trade-off between capacity and robustness [42], with the CD method offering greater capacity but often lacking robustness when it comes to accurately predicting distributionally drifted TS. In contrast, the CI method sacrifices capacity in favor of more reliable predictions. PatchTST [6] achieves SOTA results using the CI approach. However, neglecting correlations between channels may lead to incomplete modelling. In AMD, we leverage the CI approach while integrating dependencies across different variables over time, thus exploiting cross-channel relationships and enhancing robustness. However, the trade-off between capacity and robustness is also a balance between generalization and specificity, which still requires further research.

(5) Sequence Decomposition: The classical TS decomposition method [43] divides complex temporal patterns into seasonal and trend components, thereby benefiting the forecasting process. In AMD, we go beyond the constraints of seasonal and trend-based time series decomposition. Instead, we develop an adaptive decomposition, mixing, and forecasting module that fully exploits the information from different temporal patterns. However, the effectiveness of the adaptive decomposition module relies heavily on the availability and quality of historical data, which poses challenges in scenarios with limited or noisy data.
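As a concrete illustration of design (1), here is a simplified RevIN-style normalization sketch. It is our own minimal version under stated assumptions: statistics are computed per instance over the look-back window, and the learnable affine parameters of the full RevIN method are omitted.

```python
import numpy as np

class RevINLike:
    """Instance normalization whose statistics are restored at the output,
    in the spirit of RevIN (simplified: no learnable affine parameters)."""

    def __init__(self, eps=1e-5):
        self.eps = eps

    def norm(self, x):
        # x: (L, C) look-back window; remove per-channel mean/scale.
        self.mean = x.mean(axis=0, keepdims=True)
        self.std = x.std(axis=0, keepdims=True) + self.eps
        return (x - self.mean) / self.std

    def denorm(self, y):
        # y: (T, C) predictions in normalized space; restore statistics.
        return y * self.std + self.mean
```

The same `norm`/`denorm` pair corresponds to the `RevIN(X, 'norm')` and `RevIN(Ŷ, 'denorm')` steps at the start and end of Algorithm 1.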
We believe that more effective sequence-modelling designs will be proposed to adequately address issues such as distribution shift, multivariate sequence modelling, and so on. As a result, MLP-based models will also perform better in more areas of time series. | 6 | 1 | The model proposed in the paper is an MLP-based Adaptive Multi-Scale Decomposition (AMD) framework, which likely has fewer parameters than Transformer-based models. The paper indicates a memory usage of 1349 MB, implying a moderate model size appropriate for single-GPU training. The training time of 17 ms/iteration suggests that the model can efficiently process batches, and the setup is aimed at achieving quick training speeds. Considering that multiple datasets were used and the comprehensive nature of the experimentation, a reasonable estimate for the total training time across all datasets is around 6 hours. The training was conducted on an NVIDIA V100 32GB GPU, which provides more capacity than the model requires. Based on this information, the model can definitely complete training in under 8 hours on a single GPU. | yes | Yes | Time Series | Adaptive Multi-Scale Decomposition Framework for Time Series Forecasting | 2024-06-06 0:00:00 | https://github.com/troubadour000/amd | 1 | Autoformer | 1 | Untitled12.ipynb | Yes | Working as expected with manually downloading and uploading data |
ZINC | NeuralWalker | [] | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05T00:00:00 | https://arxiv.org/abs/2406.03386v2 | [
"https://github.com/borgwardtlab/neuralwalker"
] | {'MAE': '0.065 ± 0.001'} | [
"MAE"
] | Given the following paper and codebase:
Paper: Learning Long Range Dependencies on Graphs via Random Walks
Codebase: https://github.com/borgwardtlab/neuralwalker
Improve the NeuralWalker model on the ZINC dataset. The result
should improve on the following metrics: {'MAE': '0.065 ± 0.001'}. You must use only the codebase provided.
| Learning Long Range Dependencies on Graphs via Random Walks Dexiong Chen, Till Hendrik Schulz, Karsten Borgwardt. Max Planck Institute of Biochemistry, 82152 Martinsried, Germany. {dchen, tschulz, borgwardt}@biochem.mpg.de

Abstract

Message-passing graph neural networks (GNNs) excel at capturing local relationships but struggle with long-range dependencies in graphs. In contrast, graph transformers (GTs) enable global information exchange but often oversimplify the graph structure by representing graphs as sets of fixed-length vectors. This work introduces a novel architecture that overcomes the shortcomings of both approaches by combining the long-range information of random walks with local message passing. By treating random walks as sequences, our architecture leverages recent advances in sequence models to effectively capture long-range dependencies within these walks. Based on this concept, we propose a framework that offers (1) more expressive graph representations through random walk sequences, (2) the ability to utilize any sequence model for capturing long-range dependencies, and (3) the flexibility of integrating various GNN and GT architectures. Our experimental evaluations demonstrate that our approach achieves significant performance improvements on 19 graph and node benchmark datasets, notably outperforming existing methods by up to 13% on the PascalVOC-SP and COCO-SP datasets. Code: https://github.com/BorgwardtLab/NeuralWalker

1 Introduction

Message-passing graph neural networks (GNNs) [20] and graph transformers (GTs) [57] have emerged as powerful tools for learning on graphs. While GNNs are efficient in identifying local relationships, they often fail to capture distant interactions due to the local nature of message passing, leading to issues such as over-smoothing [39] and over-squashing [2].
In contrast, GTs [57, 35, 9, 42, 44] address these limitations by directly modeling long-range interactions through global attention mechanisms, enabling information exchange between all nodes. However, GTs typically preprocess the complex graph structure into fixed-length vectors for each node, using positional or structural encodings [42]. This approach essentially treats the graph as a set of nodes enriched with these vectors. Such vector representations of graph topologies inevitably result in a loss of structural information, limiting expressivity even when GTs are combined with local message-passing techniques [60]. In this work, we address these limitations by introducing a novel architecture that captures long-range dependencies while preserving rich structural information, by leveraging the power of random walks. Random walks offer a flexible approach to exploring graphs, surpassing the limitations of fixed-length vector representations. By traversing diverse paths across the graph, random walks can capture subgraphs with large diameters, such as cycles, which message passing, due to its breadth-first nature, often struggles to represent [21]. More importantly, the complexity of sampling random walks is determined by their length and sample size rather than the overall size of the graph. This characteristic makes random walks a scalable choice for representing large graphs, offering clear computational advantages compared to many computationally expensive encoding methods.

Preprint. Under review. arXiv:2406.03386v2 [cs.LG] 7 Oct 2024

While several graph learning approaches have employed random walks, their full potential remains largely untapped. Most existing approaches either focus solely on short walks [8, 38] or use walks primarily for structural encoding, neglecting the rich information they contain [15, 35]. A more recent method, CRaWL [48], takes a novel approach by representing a graph as a set of random walks.
While this approach shows promising results, it has two major practical limitations: 1) its reliance on convolutional layers to process random walks, particularly with small kernel sizes, constrains its ability to approximate arbitrary functions on walks and to fully capture long-range dependencies within each walk; 2) due to the depth-first nature of random walks, it struggles to efficiently capture local relationships, such as simple subtrees, as illustrated in Figure 1.

Figure 1: Message passing efficiently captures locally sparse subgraphs, like k-star subgraphs, while random walks struggle, requiring a length of 2k.

Considering the limitations of existing random-walk-based models, we propose an approach that leverages the strengths of two complementary graph exploration paradigms. Our method combines the local neighborhood information captured by the breadth-first nature of message passing with the long-range dependencies obtained through the depth-first nature of random walks. Unlike GTs [42, 9], which encode random walks into fixed-length vectors, our approach preserves their sequential nature, thereby retaining richer structural information. Our proposed architecture, named NeuralWalker, achieves this by processing sets of sampled random walks using powerful sequence models. We then employ local (and optionally global) message passing to capture complementary information. Multiple alternations of these two operations are stacked to form our model. A key innovation of our approach is the utilization of long-sequence models, such as state space models, to learn from random walk sequences. To the best of our knowledge, this is the first application of such models in this context. Our contributions are summarized as follows. i) We propose a novel framework that leverages both random walks and message passing, leading to provably more expressive graph representations.
ii) Our model exploits advances in sequence modeling (e.g., transformers and state space models) to capture long-range dependencies within the walks. iii) Our message-passing block can seamlessly integrate various GNN and GT architectures, allowing for customization based on specific tasks. iv) We conduct extensive ablation studies to offer practical insights for choosing the optimal sequence layer types and message-passing strategies. Notably, the trade-off between model complexity and expressivity can be flexibly controlled by adjusting the walk sampling rate and length, making our model scalable to graphs with up to 1.6M nodes. v) Our model demonstrates remarkable performance improvements over existing methods on a comprehensive set of 19 benchmark datasets.

2 Related Work

Local and global message passing. Message-passing neural networks (MPNNs) are a cornerstone of graph learning. They propagate information between nodes and are categorized as either local or global methods based on the propagation range. Local MPNNs, also known as GNNs (e.g., GCN [28], GIN [55]), excel at capturing local relationships but struggle with distant interactions due to limitations like over-smoothing [39] and over-squashing [2]. Global message passing offers a solution by modeling long-range dependencies through information exchange across all nodes. GTs [57, 29, 35, 9, 42, 44], using global attention mechanisms, are a prominent example. However, GTs achieve this by compressing the graph structure into fixed-length vectors, leading to a loss of rich structural information. Alternative techniques include the virtual-node approach [20, 3], which enables information exchange between distant nodes by introducing an intermediary virtual node.

Random walks for graph learning. Random walks have a long history in graph learning, particularly within traditional graph kernels.
Due to the computational intractability of subgraph or path kernels [19], walk kernels [19, 27, 5] were introduced to efficiently compare common walks or paths in two graphs. Non-backtracking walks have also been explored [34] for molecular graphs. In deep graph learning, several approaches utilize walks or paths to enhance GNN expressivity. GCKN [8] pioneered short walk and path feature aggregation within graph convolution, further explored in Michel et al. [36].

Figure 2: Overview of the NeuralWalker architecture. The random walk sampler samples m random walks independently without replacement; the walk embedder computes walk embeddings given the node/edge embeddings at the current layer; the walk aggregator aggregates walk features into the node features via pooling of the node features encountered in all the walks passing through the node.

RWGNN [38] leverages differentiable walk kernels for subgraph comparison and parametrized anchor graphs. The closest work to ours is CRaWL [47]. However, it lacks message passing and relies on a convolutional layer, particularly with small kernel sizes, limiting its universality. Additionally, random walks have been used for structural encoding in GTs, such as RWPE [15], and as relative positional encoding in self-attention [35].

Sequence modeling.
Sequence models, particularly transformers [49] and state space models (SSMs) [23, 22], have become instrumental in natural language processing (NLP) and audio processing due to their ability to capture long-range dependencies within sequential data. However, directly leveraging these models on graphs remains challenging due to the inherent structural differences. Existing approaches like GTs treat graphs as sets of nodes, hindering the application of transformer architectures to sequences within the graph. Similarly, recent work utilizing SSMs for graph modeling [52, 4] relies on node orderings based on degrees, a suboptimal strategy that may introduce biases or artifacts when creating artificial sequences that do not reflect the underlying graph topology. Our work addresses this limitation by explicitly treating random walks on graphs as sequences. This allows us to leverage the power of state-of-the-art (SOTA) sequence models to capture rich structural information within these walks, ultimately leading to a more universal graph representation. Furthermore, by integrating both message passing and random walks, our model is provably more expressive than existing MPNNs and random-walk-based models, as discussed in Section 4.

3 NeuralWalker

In this section, we present the architecture of our proposed NeuralWalker, which processes sequences obtained from random walks to produce both node and graph representations. Its components consist of a random walk sampler, described in Section 3.2, and a stack of NeuralWalker blocks, discussed in Section 3.3. A visualization of the architecture can be found in Figure 2.

3.1 Notation and Random Walks on Graphs

We first introduce the necessary notation. A graph is a tuple G = (V, E, x, z), where V is the set of nodes, E is the set of edges, and x: V → R^d and z: E → R^{d′} denote the functions assigning attributes to nodes and edges, respectively. We denote by G and G_n the space of all graphs and the space of graphs up to size n, respectively. The neighborhood of a node v is denoted by N(v) and its degree by d(v). A walk W of length ℓ on a graph G is a sequence of nodes connected by edges, i.e., W = (w_0, ..., w_ℓ) ∈ V^{ℓ+1} such that w_{i−1}w_i ∈ E for all i ∈ [ℓ]. We denote by W(G) and W_ℓ(G) the set of all walks and of all walks of length ℓ on G, respectively. W is called non-backtracking if w_{i−1} ≠ w_{i+1} for all i, and we denote the set of all such walks by W_ℓ^{nb}(G). A random walk is a
We denote by GandGnthe space of all graphs and the space of graphs up to size n, respectively. The neighborhood of a node vis denoted by N(v)and its degree by d(v). A walk Wof length ℓon a graph Gis a sequence of nodes connected by edges, i.e. W= (w0, . . . , w ℓ)∈Vℓ+1such that wi−1wi∈Efor all i∈[ℓ]. We denote by W(G)andWℓ(G) the set of all walks and all walks of length ℓonG, respectively. Wis called non-backtracking if wi−1̸=wi+1for all iand we denote the set of all such walks by Wnb ℓ(G). A random walk is a 3 Markov chain that starts with some distribution on nodes P0(v)and transitions correspond to moving to a neighbor chosen uniformly at random. For non-backtracking random walks, neighbors are chosen uniformly from N(wi)\{wi−1}. We denote by P(W(G), P0)the distribution of random walks with initial distribution P0, and by P(W(G))the case where P0=U(V)is the uniform distribution on V. 3.2 Random Walk Sampler The random walk sampler independently samples a subset of random walks on each graph through a probability distribution on all possible random walks. For any distribution on random walks P(W(G), P0), we denote by Pm(W(G)) := {W1, . . . , W m}a realization of mi.i.d. samples Wj∼P(W(G), P0). Our model is always operating on such realizations. Motivated by the successful application in Tönshoff et al. [48] and the halting issue in general random walks of arbitrary length [ 46], we consider non-backtracking walks of fixed length. Specifically, we consider the uniform distribution of length- ℓrandom walks P(W(G), P0) := P(Wnb ℓ(G),U(V)). Note that one could also consider a stationary initial distribution P0(v) =d(v)/2|E|for better theoretical properties [32]. In practice, we restrict the number of samples m≤nwhere n=|V|for computation efficiency. We define the sampling rate of random walks as the ratio of random walks to nodes ( γ:=m/n). 
Note that random walks only need to be sampled once per forward pass and that an efficient CPU implementation can be achieved through iterative neighbor sampling, with a complexity of O(nγℓ), linear in the number and length of random walks. We remark that a higher sampling rate at inference than that used during training can enhance performance; we therefore always fix it to 1.0 at inference. In Section 5.3, we empirically study the impact of the γ and ℓ used at training time on performance, showing that once these hyperparameters exceed a certain threshold, their impact on performance saturates.

Positional encodings for random walks. Similar to [48], we utilize additional encoding features that store connectivity information captured within random walks. In particular, we consider an identity encoding, which encodes whether two nodes in a walk are identical within a window, and an adjacency encoding, which includes information about subgraphs induced by nodes along the walk. Specifically, for a walk W = (w_0, ..., w_ℓ) ∈ W_ℓ(G) and window size s ∈ N_+, the identity encoding of W, denoted id^s_W, is the binary matrix in {0,1}^{(ℓ+1)×s} with id^s_W[i, j] = 1 if w_i = w_{i−j−1} and i − j ≥ 1, and 0 otherwise, for any 0 ≤ i ≤ ℓ and 0 ≤ j ≤ s − 1. Similarly, the adjacency encoding adj^s_W ∈ {0,1}^{(ℓ+1)×(s−1)} satisfies adj^s_W[i, j] = 1 if w_i w_{i−j−1} ∈ E and i − j ≥ 1, and 0 otherwise. A visual example of such encodings is given in Appendix B.1. Finally, the output of the random walk sampler is the concatenation of all encodings into a single matrix h_pe ∈ R^{(ℓ+1)×d_pe}, together with the sampled random walks.

3.3 Model Architecture

In the following, we describe the architecture of NeuralWalker, which consists of several walk encoder blocks, each comprising three components: a walk embedder, a sequence layer, and a walk aggregator, presented in Sections 3.3.1, 3.3.2, and 3.3.3, respectively.
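The identity and adjacency encodings of Section 3.2 can be sketched directly from their definitions. This is a pure-Python illustration with our own names; the column count of the adjacency encoding (s − 1, with column j = 0 corresponding to the edge to the immediate predecessor, which is present by construction of the walk) follows the matrix shape stated in the text.

```python
def identity_encoding(walk, s):
    """id_W[i][j] = 1 iff w_i = w_{i-j-1}, within a window of size s."""
    return [[1 if i - j - 1 >= 0 and walk[i] == walk[i - j - 1] else 0
             for j in range(s)]
            for i in range(len(walk))]

def adjacency_encoding(walk, edges, s):
    """adj_W[i][j] = 1 iff {w_i, w_{i-j-1}} is an edge of G.

    edges: set of frozenset node pairs (undirected edges).
    Per the stated matrix shape, this has s - 1 columns.
    """
    return [[1 if i - j - 1 >= 0
             and frozenset((walk[i], walk[i - j - 1])) in edges else 0
             for j in range(s - 1)]
            for i in range(len(walk))]
```

Concatenating both matrices row-wise yields the h_pe ∈ R^{(ℓ+1)×d_pe} that accompanies each sampled walk.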
3.3.1 Walk Embedder

The walk embedder generates walk embeddings given the sampled walks and the node and edge embeddings at the current layer. It is defined as a function f_emb : W_ℓ(G) → R^{(ℓ+1)×d}. Specifically, for any sampled walk W ∈ P_m(W_ℓ(G)), the walk embedding h_W := f_emb(W) ∈ R^{(ℓ+1)×d} is defined as

h_W[i] := h_V(w_i) + proj_edge(h_E(w_i w_{i+1})) + proj_pe(h_pe[i]), (1)

where h_V : V → R^d and h_E : E → R^{d_edge} are the node and edge embeddings at the current block, and proj_edge : R^{d_edge} → R^d and proj_pe : R^{d_pe} → R^d are trainable projection functions. The resulting walk embeddings are then processed with a sequence model, as discussed below.

3.3.2 Sequence Layer on Walk Embeddings

In principle, any sequence model can be used to process the walk embeddings obtained above. A sequence layer transforms a sequence of feature vectors into a new sequence, i.e., it is a function f_seq : R^{(ℓ+1)×d} → R^{(ℓ+1)×d}. In the following, we discuss several choices for such a function.

1D CNNs are simple and fast models for processing sequences, also used in [48]. They are GPU-friendly and require relatively little memory. However, the receptive field of a 1D CNN is limited by its kernel size, which might fail to capture distant dependencies on long walks.

Transformers are widely used in modeling sequences and graphs due to their universality and strong performance. However, we found in our experiments (see Table 6) that they are suboptimal encoders for walk embeddings, even when equipped with the latest techniques like RoPE [45].

SSMs are a more recent approach for modeling long sequences. In our experiments, we employ two of the latest instances of SSMs, namely S4 [23] and Mamba [22]. In addition to the original version, we consider the bidirectional version of Mamba [59]. We found that bidirectional Mamba consistently outperforms the other options (Section 5.3). For a more comprehensive background on SSMs, please refer to Appendix A.3.
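Before the sequence layer is applied, Eq. (1) reduces to an elementwise sum of three projected vectors per walk position. The sketch below is a minimal illustration under our own assumptions: the trainable projections are stood in by fixed matrices, and all names are hypothetical.

```python
def matvec(M, v):
    """Dense matrix-vector product (stand-in for a trainable projection)."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def embed_walk(walk, node_emb, edge_emb, pe, proj_edge, proj_pe):
    """Eq. (1): h_W[i] = h_V(w_i) + proj_edge(h_E(w_i w_{i+1})) + proj_pe(h_pe[i]).

    node_emb: node -> d-dim vector; edge_emb: (u, v) -> d_edge-dim vector;
    pe: (l+1) x d_pe positional encodings; proj_*: projection matrices.
    The edge term after the last node uses a zero vector, as in Def. 4.1.
    """
    h = []
    for i, v in enumerate(walk):
        e = (edge_emb(v, walk[i + 1]) if i + 1 < len(walk)
             else [0.0] * len(proj_edge[0]))
        h.append([a + b + c for a, b, c in zip(node_emb(v),
                                               matvec(proj_edge, e),
                                               matvec(proj_pe, pe[i]))])
    return h
```

The per-position output h is exactly the (ℓ+1) × d matrix that any of the sequence layers above (CNN, transformer, S4, Mamba) then transforms.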
3.3.3 Walk Aggregator

The walk aggregator aggregates walk features into node features such that the resulting node features encode context information from all walks passing through that node. It is defined as a function f_agg : (P_m(W_ℓ(G)) → R^{(ℓ+1)×d}) → (V → R^d), and the resulting node feature mapping is given by h^agg_V := f_agg(f_seq(f_emb|_{P_m(W_ℓ(G))})), where f|_· denotes the function restriction. In this work, we consider the average of all the node features encountered in the walks passing through a given node. Specifically, the node feature mapping h^agg_V with average pooling is defined as

h^agg_V(v) := (1 / N_v(P_m(W_ℓ(G)))) Σ_{W ∈ P_m(W_ℓ(G))} Σ_{w_i ∈ W s.t. w_i = v} f_seq(h_W)[i], (2)

where N_v(P_m(W_ℓ(G))) represents the number of occurrences of v in the union of the walks in P_m(W_ℓ(G)). One could also average the edge features of the walks passing through a given edge to update the edge features: h^agg_E(e) := Σ_{W ∈ P_m(W_ℓ(G))} Σ_{w_i w_{i+1} ∈ W s.t. w_i w_{i+1} = e} f_seq(h_W)[i], up to a normalization factor. In practice, we also use skip connections to keep track of the node features from previous layers.

3.3.4 Local and Global Message Passing

While random walks are efficient at identifying long-range dependencies due to their depth-first nature, they are less suited for capturing local substructure information, which often plays an essential role in many graph learning tasks. To address this limitation, we draw inspiration from classic node embedding methods [40, 21] and incorporate a message-passing layer into our encoder block, leveraging its breadth-first characteristics to complement the information obtained through random walks. Such a (local) message passing step is given by

h^mp_V(v) := h^agg_V(v) + MPNN(G, h^agg_V(v)), (3)

where MPNN denotes a GNN model, typically with one layer in each encoder block. Following the local message passing layer, we optionally apply a global message passing step, allowing for global information exchange, as done in GTs [9].
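The average-pooling aggregation of Eq. (2) and the residual local update of Eq. (3) can be sketched as follows. This is an illustrative pure-Python version under our own naming; the sum-aggregation "MPNN" is a bare-bones stand-in for the GIN layer the paper actually uses.

```python
def aggregate_walks(walks, walk_feats, nodes):
    """Eq. (2): average, over every occurrence of node v in the sampled
    walks, of the processed walk features at that position.

    walk_feats[k][i] is f_seq(h_W)[i] for walk k; nodes not visited by
    any walk get None.
    """
    sums = {v: None for v in nodes}
    counts = {v: 0 for v in nodes}
    for walk, feats in zip(walks, walk_feats):
        for i, v in enumerate(walk):
            f = feats[i]
            sums[v] = f if sums[v] is None else [a + b for a, b in zip(sums[v], f)]
            counts[v] += 1
    return {v: ([x / counts[v] for x in sums[v]] if counts[v] else None)
            for v in nodes}

def local_mp(adj, h):
    """Eq. (3) with a sum-aggregation message passing step standing in for
    GIN: h_mp(v) = h_agg(v) + sum of neighbor features."""
    return {v: [hv + sum(h[u][k] for u in adj[v])
                for k, hv in enumerate(h[v])]
            for v in h}
```

In the full model this pair runs inside every encoder block, with skip connections carrying node features across blocks.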
We particularly consider two global message passing techniques, namely the virtual node [20, 47] and the transformer layer [57, 9, 42]. We provide more details on these techniques in Appendix B.2.

4 Theoretical Results

In this section, we investigate the theoretical properties of NeuralWalker. The proofs of the following results, as well as more background, can be found in Appendix C. We first define walk feature vectors following [48]:

Definition 4.1 (Walk feature vector). For any graph G = (V, E, x, z) and W ∈ W_ℓ(G), the walk feature vector X_W of W is defined by concatenating the node and edge feature vectors, along with the positional encodings of W with window size s = ℓ. Formally, X_W = (x(w_i), z(w_i w_{i+1}), h_pe[i])_{i=0,...,ℓ} ∈ R^{(ℓ+1)×d_walk}, where h_pe represents the positional encoding of Section 3.2, z(w_ℓ w_{ℓ+1}) = 0, and d_walk := d + d′ + d_pe.

For simplicity, we still denote the distribution of walk feature vectors on G by P(W(G)), and we consider general rather than non-backtracking random walks in this section. Now assume that we apply an average pooling followed by a linear layer to the output of the walk aggregator in Eq. (2). By adjusting the normalization factor to a constant mℓ/|V|, we can express the resulting function g_{f,m,ℓ} on G as an average over functions of walk feature vectors:

g_{f,m,ℓ}(G) = (1/m) Σ_{W ∈ P_m(W_ℓ(G))} f(X_W), (4)

where f : R^{d_walk} → R is some function on walk feature vectors. If we sample a sufficiently large number of random walks, the average g_{f,m,ℓ}(G) converges almost surely to g_{f,ℓ} := E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] by the law of large numbers. This result can be further quantified using the central limit theorem, which provides a rate of convergence (see Theorem C.5 in the Appendix). Furthermore, we have the following useful properties of this limit:

Theorem 4.2 (Lipschitz continuity). For some functional space F of functions on walk feature vectors, we define the following distance d_{F,ℓ} : G × G → R_+:

d_{F,ℓ}(G, G′) := sup_{f ∈ F} | E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] − E_{X_{W′} ∼ P(W_ℓ(G′))}[f(X_{W′})] |.
(5)

Then (G_n, d_{F,ℓ}) is a metric space if F is a universal space and ℓ ≥ 4n³. If F contains f, then for any G, G′ ∈ G_n, we have

|g_{f,ℓ}(G) − g_{f,ℓ}(G′)| ≤ d_{F,ℓ}(G, G′). (6)

In particular, if f ∈ F is an L-Lipschitz function, the difference in outputs is bounded by the earth mover's distance W_1(·, ·) between the distributions of walk feature vectors:

|g_{f,ℓ}(G) − g_{f,ℓ}(G′)| ≤ L · W_1(P(W_ℓ(G)), P(W_ℓ(G′))). (7)

The Lipschitz constant, widely used to assess neural network stability under small perturbations [51], guarantees that NeuralWalker remains stable under minor alterations of the graph structure. Notably, parameterizing f with several neural network layers yields a Lipschitz constant comparable to that of MPNNs on a pseudometric space defined by the tree mover's distance [11]. However, a key distinction lies in the input space metrics: while MPNNs operate on tree structures, NeuralWalker focuses on the distribution of walk feature vectors. A more comprehensive comparison of the stability and generalization of MPNNs and NeuralWalker under distribution shift is left for future research.

Theorem 4.3 (Injectivity). Assume that F is a universal space. If G and G′ are non-isomorphic graphs, then there exists an f ∈ F such that g_{f,ℓ}(G) ≠ g_{f,ℓ}(G′) if ℓ ≥ 4 max{|V|, |V′|}³.

The injectivity property ensures that our model, given a sufficiently large number of sufficiently long (≥ 4n³) random walks, can distinguish between non-isomorphic graphs. It is worth noting that although our assumptions include specific conditions on the random walk length to establish the space as a metric, removing the length constraint still yields a pseudometric space. In this case, d_{F,ℓ}(G, G′) > 0 if G and G′ are distinguishable by the (⌊ℓ/2⌋ + 1)-subgraph isomorphism test, where ⌊·⌋ is the floor function (i.e., they do not have the same set of subgraphs up to size ⌊ℓ/2⌋ + 1).
Using the previous result jointly with the message-passing module, we arrive at the following result, which particularly highlights the advantage of combining random walks and message passing.

Theorem 4.4. For any ℓ ≥ 2, NeuralWalker equipped with the complete walk set W_ℓ is strictly more expressive than 1-WL and the (⌊ℓ/2⌋ + 1)-subgraph isomorphism test, and thus than ordinary MPNNs.

Table 1: Test performance on benchmarks from [17]. Metrics with mean ± std of 4 runs are reported. The result with ⋆ is obtained using the pretraining strategy presented in Section 5.2. Dataset statistics — #graphs: 12K / 70K / 60K / 14K / 12K; avg. #nodes: 23.2 / 70.6 / 117.6 / 118.9 / 117.2; avg. #edges: 24.9 / 564.5 / 941.1 / 3,039.3 / 2,150.9.

MODEL | ZINC (MAE) | MNIST (ACC) | CIFAR10 (ACC) | PATTERN (ACC) | CLUSTER (ACC)
GCN [28] | 0.367 ± 0.011 | 90.705 ± 0.218 | 55.710 ± 0.381 | 71.892 ± 0.334 | 68.498 ± 0.976
GIN [55] | 0.526 ± 0.051 | 96.485 ± 0.252 | 55.255 ± 1.527 | 85.387 ± 0.136 | 64.716 ± 1.553
GAT [50] | 0.384 ± 0.007 | 95.535 ± 0.205 | 64.223 ± 0.455 | 78.271 ± 0.186 | 70.587 ± 0.447
GatedGCN [7] | 0.282 ± 0.015 | 97.340 ± 0.143 | 67.312 ± 0.311 | 85.568 ± 0.088 | 73.840 ± 0.326
Graphormer [57] | 0.122 ± 0.006 | – | – | – | –
SAT [9] | 0.089 ± 0.002 | – | – | 86.848 ± 0.037 | 77.856 ± 0.104
GPS [42] | 0.070 ± 0.004 | 98.051 ± 0.126 | 72.298 ± 0.356 | 86.685 ± 0.059 | 78.016 ± 0.180
Exphormer [44] | – | 98.55 ± 0.03 | 74.69 ± 0.13 | 86.70 ± 0.03 | 78.07 ± 0.037
GRIT [33] | 0.059 ± 0.002 | 98.108 ± 0.111 | 76.468 ± 0.881 | 87.196 ± 0.076 | 80.026 ± 0.277
CRaWL [48] | 0.085 ± 0.004 | 97.944 ± 0.050 | 69.013 ± 0.259 | – | –
NeuralWalker | 0.053 ± 0.002⋆ | 98.692 ± 0.079 | 76.903 ± 0.457 | 86.977 ± 0.012 | 78.189 ± 0.188

Table 2: Test performance on LRGB [16]. Metrics with mean ± std of 4 runs are reported. NeuralWalker improves the best baseline by 10% and 13% on PascalVOC-SP and COCO-SP, respectively. Dataset statistics — #graphs: 11.4K / 123.3K / 15.5K / 15.5K / 529.4K; avg. #nodes: 479.4 / 476.9 / 150.9 / 150.9 / 30.1; avg. #edges: 2,710.5 / 2,693.7 / 307.3 / 307.3 / 61.0.

MODEL | PASCALVOC-SP (F1) | COCO-SP (F1) | PEPTIDES-FUNC (AP) | PEPTIDES-STRUCT (MAE) | PCQM-CONTACT (MRR)
GCN [28, 47] | 0.2078 ± 0.0031 | 0.1338 ± 0.0007 | 0.6860 ± 0.0050 | 0.2460 ± 0.0007 | 0.4526 ± 0.0006
GIN [55, 47] | 0.2718 ± 0.0054 | 0.2125 ± 0.0009 | 0.6621 ± 0.0067 | 0.2473 ± 0.0017 | 0.4617 ± 0.0005
GatedGCN [7, 47] | 0.3880 ± 0.0040 | 0.2922 ± 0.0018 | 0.6765 ± 0.0047 | 0.2477 ± 0.0009 | 0.4670 ± 0.0004
GPS [42] | 0.3748 ± 0.0109 | 0.3412 ± 0.0044 | 0.6535 ± 0.0041 | 0.2500 ± 0.0005 | –
GPS [47] | 0.4440 ± 0.0065 | 0.3884 ± 0.0055 | 0.6534 ± 0.0091 | 0.2509 ± 0.0014 | 0.4703 ± 0.0014
Exphormer [44] | 0.3975 ± 0.0037 | 0.3455 ± 0.0009 | 0.6527 ± 0.0043 | 0.2481 ± 0.0007 | –
GRIT [33] | – | – | 0.6988 ± 0.0082 | 0.2460 ± 0.0012 | –
CRaWL [48] | – | – | 0.7074 ± 0.0032 | 0.2506 ± 0.0022 | –
NeuralWalker | 0.4912 ± 0.0042 | 0.4398 ± 0.0033 | 0.7096 ± 0.0078 | 0.2463 ± 0.0005 | 0.4707 ± 0.0007

The injectivity in Thm. 4.3 is guaranteed only if F is a universal functional space. This condition highlights a limitation of approaches like CRaWL [48], which employ CNNs to process walk feature vectors. CNNs can only achieve universality under strict conditions, including periodic boundary conditions and a large number of layers [56]. However, random walks generally do not satisfy periodic boundary conditions, and utilizing an excessive number of layers can exacerbate issues such as over-squashing and over-smoothing. In contrast, the sequence models considered in this work, such as transformers and SSMs, are universal approximators for sequence-to-sequence functions [58, 53]. Furthermore, the proof of Thm. 4.4 shows that random walk-based models without message passing cannot be more expressive than 1-WL. Consequently, our model is provably more expressive than CRaWL.
Finally, we have the following complexity result:

Theorem 4.5 (Complexity). The complexity of NeuralWalker, when used with Mamba [22], is O(kdn(γℓ + β)), where k, d, n, γ, ℓ, β denote the number of layers, the hidden dimension, the (maximum) number of nodes, the sampling rate, the length of random walks, and the average degree, respectively.

5 Experiments

In this section, we compare NeuralWalker to several SOTA models on a diverse set of 19 benchmark datasets. Furthermore, we provide a detailed ablation study of the components of our model. Appendix D provides more details about the experimental setup, datasets, runtime, and additional results.

5.1 Benchmarking NeuralWalker against state-of-the-art methods

We compare NeuralWalker against several popular message passing GNNs, GTs, and walk-based models. GNNs include GCN [28], GIN [55], GAT [50], and GatedGCN [7]. GTs include GraphTrans [54], SAT [9], GPS [42], Exphormer [44], NAGphormer [10], GRIT [33], and Polynormer [12]. Walk-based models include CRaWL [48]. To ensure diverse benchmarking tasks, we use datasets from Benchmarking GNNs [17], the Long-Range Graph Benchmark (LRGB) [16], the Open Graph Benchmark (OGB) [25], and node classification datasets from [41, 30].

Benchmarking GNNs. We evaluated NeuralWalker's performance on five tasks from the Benchmarking GNNs suite: ZINC, MNIST, CIFAR10, PATTERN, and CLUSTER (results in Table 1). Notably, NeuralWalker achieved SOTA results on three out of five datasets and matched the best-performing model on the remaining two. While GRIT exhibited superior performance on the two small synthetic datasets, its scalability to larger datasets, such as those in LRGB, is limited, as demonstrated in the subsequent paragraph. It is worth noting that NeuralWalker significantly outperforms the previous SOTA random walk-based model, CRaWL. This improvement can be attributed to the integration of message passing and the Mamba architecture, as discussed in Section 4.
A more extensive empirical comparison of the two is given in Section 5.3. These results underscore NeuralWalker's robust performance across diverse synthetic benchmark tasks.

Long-Range Graph Benchmark. We further evaluated NeuralWalker's ability to capture long-range dependencies on the recently introduced LRGB benchmark, encompassing five datasets designed to test this very capability (details in Rampášek et al. [42] and Dwivedi et al. [16]). Note that for PCQM-Contact, we used the filtered Mean Reciprocal Rank (MRR), introduced by [47], as the evaluation metric. NeuralWalker consistently outperformed all baseline methods on all but one task (see Table 2). Notably, on PascalVOC-SP and COCO-SP, where previous work has shown the importance of long-range dependencies (e.g., Tönshoff et al. [47]), NeuralWalker surpassed the SOTA models by a substantial margin, up to a 10% improvement.

Open Graph Benchmark. To assess NeuralWalker's scalability to massive numbers of graphs, we evaluated it on the OGB benchmark, which includes datasets exceeding 100K graphs each. For computational efficiency, we employed 1D CNNs as the sequence layers in this experiment. NeuralWalker achieved SOTA performance on two out of the three datasets (Table 3), demonstrating its ability to handle large-scale graph data. However, the OGBG-PPA dataset presented challenges with overfitting: NeuralWalker with just one block outperformed its multiblock counterpart on this dataset, suggesting that stronger regularization may be needed for specific tasks.

Node classification on large graphs. We further explored NeuralWalker's ability to handle large graphs in node classification tasks. We integrated NeuralWalker with Polynormer [12], the current SOTA method in this domain.
In this experiment, NeuralWalker utilized very long walks (up to 1,000 steps) with a low sampling rate (≤ 0.01) to capture long-range dependencies, replacing the transformer layer within Polynormer, which still struggles to scale to large graphs even with linear complexity. Despite eschewing transformer layers entirely, NeuralWalker achieved performance comparable to Polynormer (Table 5), showing its scalability and effectiveness in modeling large graphs. Indeed, the complexity of NeuralWalker can be flexibly controlled through its sampling rate and walk length, as shown in Section 4. A notable highlight is NeuralWalker's ability to efficiently process the massive pokec dataset (1.6M nodes) using a single H100 GPU with 80GB of memory.

5.2 Masked Positional Encoding Pretraining

Explicitly utilizing random walks as sequences offers a significant advantage: it allows for the application of advanced language modeling techniques. As a proof of concept, we adapt the BERT pretraining strategy [13] to the positional encodings h_pe of random walks. Our approach involves randomly replacing 15% of the positions in h_pe with a constant vector of 0.5, with the objective of recovering the original binary encoding vectors at these positions. This method can be further enhanced by combining it with other established pretraining strategies, such as attribute masking [26]. Our experiments, shown in Table 4, demonstrate that combining these strategies (i.e., first pretraining the model with masked positional encoding prediction and then continuing with masked attribute pretraining) significantly enhances performance on the ZINC dataset.

Table 3: Test performance on OGB [25]. Metrics with mean ± std of 10 runs are reported. Dataset statistics — #graphs: 437.9K / 158.1K / 452.7K; avg. #nodes: 26.0 / 243.4 / 125.2; avg. #edges: 28.1 / 2,266.1 / 124.2.

MODEL | OGBG-MOLPCBA (AP) | OGBG-PPA (ACC) | OGBG-CODE2 (F1)
GCN | 0.2424 ± 0.0034 | 0.6857 ± 0.0061 | 0.1595 ± 0.0018
GIN | 0.2703 ± 0.0023 | 0.7037 ± 0.0107 | 0.1581 ± 0.0026
GraphTrans | 0.2761 ± 0.0029 | – | 0.1830 ± 0.0024
SAT | – | 0.7522 ± 0.0056 | 0.1937 ± 0.0028
GPS | 0.2907 ± 0.0028 | 0.8015 ± 0.0033 | 0.1894 ± 0.0024
CRaWL | 0.2986 ± 0.0025 | – | –
NeuralWalker | 0.3086 ± 0.0031 | 0.7888 ± 0.0059 | 0.1957 ± 0.0025

Table 4: Comparison of different pretraining strategies on the ZINC dataset. The pretraining was performed on ZINC without using any external data.

STRATEGY | ZINC (MAE)
w/o pretrain | 0.063 ± 0.001
masked attr. | 0.061 ± 0.001
masked PE | 0.055 ± 0.004
masked PE + attr. | 0.053 ± 0.002

[Figure 3: Validation performance and train time per epoch (on one A100 GPU, in seconds) on ZINC, CIFAR10, and PascalVOC-SP when varying the sampling rate γ and the random walk length ℓ.]

5.3 Ablation studies

Here, we dissect the main components of our model architecture to gauge their contribution to predictive performance and to guide dataset-specific hyperparameter optimization. We perform ablation studies on three datasets, ranging from small to large graphs. Our analysis focuses on three key aspects: 1) we demonstrate the crucial role of integrating local and global message passing with random walks; 2) we evaluate various options for the sequence layer to identify the optimal choice; 3) we examine the impact of varying the sampling rate and length of random walks, revealing a trade-off between expressivity and computational complexity.
Notably, these parameters allow explicit control over model complexity, a unique feature of our approach compared to subgraph MPNNs, which typically exhibit high complexity. All ablation experiments were performed on the validation set, with results averaged over four random seeds. The comprehensive findings are summarized in Table 6. Since NeuralWalker's output depends on the random walks sampled at inference, we demonstrate its robustness to sampling variability in Appendix D.5.

Effect of local and global message passing. Motivated by the limitations of the depth-first nature inherent in pure random walk-based encoders, as discussed in Section 3.3.4, this study investigates the potential complementary benefits of message passing. We conducted an ablation study (Table 6a) comparing NeuralWalker variants with and without local or global message passing modules. For local message passing, we employed a GIN with edge features [55, 24]. Global message passing was explored using virtual node layers [20] and transformer layers [49, 9]. Keeping the sequence layer fixed to Mamba, we observed that NeuralWalker with GIN consistently outperforms the version without it, confirming the complementary strengths of random walks and local message passing. The impact of global message passing, however, varies across datasets, a phenomenon also noted by Rosenbluth et al. [43]. Interestingly, larger graphs like PascalVOC-SP demonstrate more significant gains from global message passing. This observation suggests promising directions for future research, such as developing methods to automatically identify optimal configurations for specific datasets.

Table 5: Test performance on node classification benchmarks from [41] and [30]. Metrics with mean ± std of 10 runs are reported. Dataset statistics — #nodes: 22,662 / 24,492 / 10,000 / 11,758 / 48,921 / 1,632,803; #edges: 32,927 / 93,050 / 39,402 / 519,000 / 153,540 / 30,622,564.

MODEL | ROMAN-EMPIRE (ACC) | AMAZON-RATINGS (ACC) | MINESWEEPER (ROC AUC) | TOLOKERS (ROC AUC) | QUESTIONS (ROC AUC) | POKEC (ACC)
GCN | 73.69 ± 0.74 | 48.70 ± 0.63 | 89.75 ± 0.52 | 83.64 ± 0.67 | 76.09 ± 1.27 | 75.45 ± 0.17
GAT(-sep) | 88.75 ± 0.41 | 52.70 ± 0.62 | 93.91 ± 0.35 | 83.78 ± 0.43 | 76.79 ± 0.71 | 72.23 ± 0.18
GPS | 82.00 ± 0.61 | 53.10 ± 0.42 | 90.63 ± 0.67 | 83.71 ± 0.48 | 71.73 ± 1.47 | OOM
NAGphormer | 74.34 ± 0.77 | 51.26 ± 0.72 | 84.19 ± 0.66 | 78.32 ± 0.95 | 68.17 ± 1.53 | 76.59 ± 0.25
Exphormer | 89.03 ± 0.37 | 53.51 ± 0.46 | 90.74 ± 0.53 | 83.77 ± 0.78 | 73.94 ± 1.06 | OOM
Polynormer | 92.55 ± 0.37 | 54.81 ± 0.49 | 97.46 ± 0.36 | 85.91 ± 0.74 | 78.92 ± 0.89 | 86.10 ± 0.05
NeuralWalker | 92.92 ± 0.36 | 54.58 ± 0.36 | 97.82 ± 0.40 | 85.56 ± 0.74 | 78.52 ± 1.13 | 86.46 ± 0.09

Table 6: Ablation studies of NeuralWalker. Average validation performance over 4 runs is reported.

(a) Comparison of local and global message passing (MP). The sequence layer is fixed to Mamba. VN denotes the virtual node and Trans. the transformer layer.

MP (LOCAL + GLOBAL) | ZINC | CIFAR10 | PASCALVOC-SP
GIN + w/o | 0.085 | 80.885 | 0.4611
w/o + w/o | 0.090 | 79.035 | 0.4525
GIN + VN | 0.078 | 78.610 | 0.4672
GIN + Trans. | 0.083 | 80.755 | 0.4877

(b) Comparison of sequence layers. Local and global MP are selected to give the best validation performance, except for the last row, which corresponds to CRaWL and does not use message passing.

SEQUENCE LAYER | ZINC | CIFAR10 | PASCALVOC-SP
Mamba | 0.078 | 80.885 | 0.4877
Mamba (w/o bid.) | 0.089 | 74.910 | 0.4522
S4 | 0.082 | 77.970 | 0.4559
CNN | 0.088 | 80.665 | 0.4652
Trans. | 0.084 | 72.850 | 0.4316
CNN (w/o MP) | 0.116 | 78.760 | 0.3954

Comparison of sequence layer architectures. We investigated the impact of various sequence layer architectures on walk embeddings, as shown in Table 6b. The architectures examined include CNN, transformer (with RoPE), and SSMs such as S4 and Mamba.
Surprisingly, transformers consistently underperformed compared to other architectures, contrasting with their good performance in other domains. This discrepancy may be attributed to the unique sequential nature of walk embeddings, which might not align well with the attention mechanism utilized by transformers. Mamba emerged as the top performer across all datasets, consistently outperforming its predecessors, S4 and the unidirectional version. However, CNNs present a compelling alternative for large datasets due to their faster computation (typically 2-3x faster than Mamba on A100). This presents a practical trade-off: Mamba offers superior accuracy but requires more computational resources. CNNs might be preferable for very large datasets or real-time applications where speed is critical. In our benchmarking experiments, we employed Mamba as the sequence layer, except for the OGB datasets. Finally, as predicted by Thm. 4.4, both our Mamba and CNN variants with message passing significantly outperform CRaWL which relies on CNNs and does not use any message passing. Impact of random walk sampling strategies. We examined the impact of varying random walk sampling rates and lengths on NeuralWalker’s performance, using Mamba as the sequence layer. While we adjusted the sampling rate during training, we fixed it at 1.0 for inference to maximize coverage. As anticipated, a larger number of longer walks led to improved coverage of the graph’s structure, resulting in clear performance gains (Figure 3). However, this improvement plateaus as walks become sufficiently long, indicating diminishing returns beyond a certain threshold. Crucially, these performance gains come at the cost of increased computation time, which scales linearly with both sampling rate and walk length, as predicted by Thm. 4.5. This underscores the trade-off between expressivity and complexity, which can be explicitly controlled through these two hyperparameters. 
In practice, this trade-off between performance and computational cost necessitates careful consideration of resource constraints when selecting sampling rates and walk lengths. Future research could explore more efficient sampling strategies to minimize the necessary sampling rate.

6 Conclusion

We have introduced NeuralWalker, a powerful and flexible architecture that combines random walks and message passing to address the expressivity limitations of structural encoding in graph learning. By treating random walks as sequences and leveraging advanced sequence modeling techniques, NeuralWalker achieves superior performance compared to existing GNNs and GTs, as demonstrated through extensive experiments on various benchmarks. Looking forward, we acknowledge opportunities for further exploration. First, investigating more efficient random walk sampling strategies with improved graph coverage could potentially enhance NeuralWalker's performance. Second, exploring further self-supervised learning techniques for learning on random walks holds promise for extending NeuralWalker's applicability to unlabeled graphs.

Limitations. NeuralWalker demonstrates good scalability to large graphs. However, one potential limitation lies in the trade-off between the sampling efficiency of random walks and graph coverage for very large graphs. In this work, we explored a computationally efficient sampling strategy, though likely not one with optimal graph coverage. Investigating more efficient random walk sampling strategies that improve coverage while maintaining computational efficiency could further enhance NeuralWalker's performance. Additionally, we identify a scarcity of publicly available graph datasets with well-defined long-range dependencies. While datasets like LRGB provide valuable examples, the limited number of such datasets hinders comprehensive evaluation and the potential to push the boundaries of long-range dependency capture in graph learning tasks.
Furthermore, based on our experiments and [47], only 2 out of the 5 datasets in LRGB appear to exhibit long-range dependencies.

Broader impacts. While our research primarily focuses on general graph representation learning, we recognize the importance of responsible and ethical application in specialized fields. When utilized in domains such as drug discovery or computational biology, careful attention must be paid to ensuring the trustworthiness and appropriate use of our method to mitigate potential misuse. Our extensive experiments demonstrate the significant potential of our approach in both social network and biological network analysis, highlighting the promising societal benefits our work may offer in these specific areas.

Acknowledgements

We thank Luis Wyss and Trenton Chang for their insightful feedback on the manuscript.

References

[1] Romas Aleliunas, Richard M Karp, Richard J Lipton, László Lovász, and Charles Rackoff. Random walks, universal traversal sequences, and the complexity of maze problems. In Symposium on Foundations of Computer Science (SFCS), 1979.
[2] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations (ICLR), 2021.
[3] Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan-Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations (ICLR), 2020.
[4] Ali Behrouz and Farnoosh Hashemi. Graph mamba: Towards learning on graphs with state space models. arXiv preprint arXiv:2402.08678, 2024.
[5] Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In International Conference on Data Mining (ICDM), 2005.
[6] Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Trans. Pattern Anal. Mach. Intell., 45(1):657–668, 2023.
[7]Xavier Bresson and Thomas Laurent. Residual gated graph convnets. arXiv preprint arXiv:1711.07553 , 2017. 11 [8]Dexiong Chen, Laurent Jacob, and Julien Mairal. Convolutional kernel networks for graph- structured data. In International Conference on Machine Learning (ICML) , 2020. [9]Dexiong Chen, Leslie O’Bray, and Karsten Borgwardt. Structure-aware transformer for graph representation learning. In International Conference on Machine Learning (ICML) , 2022. [10] Jinsong Chen, Kaiyuan Gao, Gaichao Li, and Kun He. Nagphormer: A tokenized graph transformer for node classification in large graphs. In International Conference on Learning Representations (ICLR) , 2022. [11] Ching-Yao Chuang and Stefanie Jegelka. Tree mover’s distance: Bridging graph metrics and stability of graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS) , 2022. [12] Chenhui Deng, Zichao Yue, and Zhiru Zhang. Polynormer: Polynomial-expressive graph transformer in linear time. In International Conference on Learning Representations (ICLR) , 2024. [13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL) , 2019. [14] R. ˜M. Dudley. Real analysis and probability . Cambridge University Press, 2018. [15] Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. In International Conference on Learning Representations (ICLR) , 2021. [16] Vijay Prakash Dwivedi, Ladislav Rampášek, Michael Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark. In Advances in Neural Information Processing Systems (NeurIPS) , 2022. [17] Vijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. 
Benchmarking graph neural networks. Journal of Machine Learning Research, 24(43):1–48, 2023.

[18] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.

[19] Thomas Gärtner, Peter Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient alternatives. In Conference on Learning Theory (COLT). Springer, 2003.

[20] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning (ICML), 2017.

[21] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Conference on Knowledge Discovery and Data Mining (KDD), 2016.

[22] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.

[23] Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations (ICLR), 2021.

[24] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations (ICLR), 2019.

[25] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems (NeurIPS), 2020.

[26] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations (ICLR), 2020.

[27] Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In International Conference on Machine Learning (ICML), 2003.
[28] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2016.

[29] Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

[30] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.

[31] Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

[32] László Lovász. Random walks on graphs. Combinatorics, Paul Erdős is Eighty, 2(1-46):4, 1993.

[33] Liheng Ma, Chen Lin, Derek Lim, Adriana Romero-Soriano, Puneet K Dokania, Mark Coates, Philip Torr, and Ser-Nam Lim. Graph inductive biases in transformers without message passing. In International Conference on Machine Learning (ICML), 2023.

[34] Pierre Mahé, Nobuhisa Ueda, Tatsuya Akutsu, Jean-Luc Perret, and Jean-Philippe Vert. Graph kernels for molecular structure-activity relationship analysis with support vector machines. Journal of Chemical Information and Modeling, 45(4):939–951, 2005.

[35] Grégoire Mialon, Dexiong Chen, Margot Selosse, and Julien Mairal. GraphiT: Encoding graph structure in transformers. arXiv preprint arXiv:2106.05667, 2021.

[36] Gaspard Michel, Giannis Nikolentzos, Johannes F Lutzeyer, and Michalis Vazirgiannis. Path neural networks: Expressive and accurate graph neural networks. In International Conference on Machine Learning (ICML), 2023.

[37] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.

[38] Giannis Nikolentzos and Michalis Vazirgiannis.
Random walk graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2020.

[39] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations (ICLR), 2020.

[40] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Conference on Knowledge Discovery and Data Mining (KDD), 2014.

[41] Oleg Platonov, Denis Kuznedelev, Michael Diskin, Artem Babenko, and Liudmila Prokhorenkova. A critical look at the evaluation of GNNs under heterophily: Are we really making progress? In International Conference on Learning Representations (ICLR), 2022.

[42] Ladislav Rampášek, Michael Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

[43] Eran Rosenbluth, Jan Tönshoff, Martin Ritzert, Berke Kisin, and Martin Grohe. Distinguished in uniform: Self-attention vs. virtual nodes. In International Conference on Learning Representations (ICLR), 2024.

[44] Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J Sutherland, and Ali Kemal Sinop. Exphormer: Sparse transformers for graphs. In International Conference on Machine Learning (ICML), 2023.

[45] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.

[46] Mahito Sugiyama and Karsten Borgwardt. Halting in random walk kernels. In Advances in Neural Information Processing Systems (NeurIPS), 2015.

[47] Jan Tönshoff, Martin Ritzert, Eran Rosenbluth, and Martin Grohe. Where did the gap go? Reassessing the long-range graph benchmark. In Learning on Graphs Conference, 2023.

[48] Jan Tönshoff, Martin Ritzert, Hinrikus Wolf, and Martin Grohe.
Walking out of the Weisfeiler Leman hierarchy: Graph learning beyond message passing. Transactions on Machine Learning Research (TMLR), 2023.

[49] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

[50] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations (ICLR), 2018.

[51] Aladin Virmaux and Kevin Scaman. Lipschitz regularity of deep neural networks: Analysis and efficient estimation. In Advances in Neural Information Processing Systems (NeurIPS), 2018.

[52] Chloe Wang, Oleksii Tsepa, Jun Ma, and Bo Wang. Graph-Mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv preprint arXiv:2402.00789, 2024.

[53] Shida Wang and Beichen Xue. State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory. In Advances in Neural Information Processing Systems (NeurIPS), 2024.

[54] Zhanghao Wu, Paras Jain, Matthew Wright, Azalia Mirhoseini, Joseph E Gonzalez, and Ion Stoica. Representing long-range context for graph neural networks with global attention. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

[55] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations (ICLR), 2019.

[56] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks. Constructive Approximation, 55(1):407–474, 2022.

[57] Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[58] Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Are transformers universal approximators of sequence-to-sequence functions? In International Conference on Learning Representations (ICLR), 2020.

[59] Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision Mamba: Efficient visual representation learning with bidirectional state space model. arXiv preprint arXiv:2401.09417, 2024.

[60] Wenhao Zhu, Tianyu Wen, Guojie Song, Liang Wang, and Bo Zheng. On structural expressive power of graph transformers. In Conference on Knowledge Discovery and Data Mining (KDD), 2023.

Appendix

This appendix provides both theoretical and experimental materials and is organized as follows: Section A provides a more detailed background of related work. Section B presents some additional remarks on NeuralWalker, including limitations and societal impacts. Section C provides theoretical background and proofs. Section D provides experimental details and additional results.

A Background

A.1 Message-Passing Graph Neural Networks

Graph Neural Networks (GNNs) refine node representations iteratively by integrating information from neighboring nodes. Xu et al. (2019) [55] provide a unifying framework for this process, consisting of three key steps: AGGREGATE, COMBINE, and READOUT. Various GNN architectures can be seen as variations within these functions. In each layer, the AGGREGATE step combines representations from neighboring nodes (e.g., using sum or mean), which are then merged with the node’s previous representation in the COMBINE step. This is typically followed by a non-linear activation function, such as ReLU. The updated representations are then passed to the next layer, and this process repeats for each layer in the network. These steps primarily capture local sub-structures, necessitating a deep network to model interactions across the entire graph.
The READOUT function ultimately aggregates node representations to the desired output granularity, whether at the node or graph level. Both the AGGREGATE and READOUT steps must be permutation invariant. This framework offers a comprehensive perspective for understanding the diverse array of GNN architectures.

A.2 Transformer on Graphs

While Graph Neural Networks (GNNs) explicitly utilize graph structures, Transformers infer node relationships by focusing on node attributes. Transformers, introduced by [49], treat the graph as a (multi-)set of nodes and use self-attention to determine node similarity. A Transformer consists of two main components: a self-attention module and a feed-forward neural network (FFN). In self-attention, input features X are linearly projected into query (Q), key (K), and value (V) matrices. Self-attention is then computed as:

Attn(X) := softmax(QK^T / √d_out) V ∈ R^{n × d_out},

where d_out is the dimension of Q. Multi-head attention, which concatenates multiple instances of this equation, has proven effective in practice. A Transformer layer combines self-attention with a skip connection and FFN:

X' = X + Attn(X), X'' = FFN(X') := ReLU(X' W_1) W_2.

Stacking multiple layers forms a Transformer model, resulting in node-level representations. However, due to self-attention’s permutation equivariance, Transformers produce identical representations for nodes with matching attributes, regardless of their graph context. Thus, incorporating structural information, typically through positional or structural encoding such as Laplacian positional encoding or random walk structural encoding [15, 42], is crucial.

A.3 State Space Models

As we treat random walks explicitly as sequences, recent advances in long sequence modeling could be leveraged directly to model random walks. SSMs are a type of these models that have shown promising performance in long sequence modeling.
SSMs map an input sequence x(t) ∈ R to a response sequence y(t) ∈ R through an implicit state h(t) ∈ R^N and three parameters (A, B, C):

h'(t) = A h(t) + B x(t), y(t) = C h(t).

For computational reasons, the structured SSM (S4) [23] proposes to discretize the above system by introducing a time step variable ∆ and a discretization rule, leading to a reparametrization of the parameters A and B. Then, the discrete-time SSM can be computed in two ways: either as a linear recurrence or as a global convolution. Recently, a selection mechanism [22] has been introduced to control which part of the sequence can flow into the hidden states, making the parameters in SSMs time- and data-dependent. The proposed model, named Mamba, significantly outperforms its predecessors and has led to several successful applications across many tasks. More recently, a bidirectional version of Mamba [59] has been proposed to handle image data, by averaging the representations of the forward and backward sequences after each Mamba block.

B Additional Remarks on NeuralWalker

B.1 Illustration of the Position Encodings for Random Walks

Here, we give a visual example of the positional encodings that we presented in Section 3.2. The example is shown in Figure 4.

[Figure 4: An example of the identity encoding and adjacency encoding presented in Section 3.2. On the random walk colored in red, we have id_W[4, 3] = 1 as w_4 = w_0 = 6. We have adj_W[3, 2] = 1 as w_3 w_0 ∈ E is an edge of the graph.]

B.2 Global Message Passing Techniques

Even though long random walks could be sufficient to capture global information, we empirically found that global message passing is still useful in certain tasks. Here, we consider two techniques, namely the virtual node and the transformer layer. Similar to Gilmer et al. [20] and Tönshoff et al. [48], a virtual node layer could be a simple solution to achieve this.
Such a layer is explicitly defined as follows:

h_V^t(⋆) = MLP( h_V^{t−1}(⋆) + Σ_{v∈V} h_V^{mp}(v) ), h_V^{vn}(v) := h_V^{mp}(v) + h_V^t(⋆), (8)

where MLP is a trainable MLP, h_V^t(⋆) represents the virtual node embedding at block t, and h_V^0(⋆) = 0. Alternatively, one could use any transformer layer to achieve this. The vanilla transformer layer is given by:

h_V^{attn}(v) = h_V^{mp}(v) + Attn(h_V^{mp})(v), h_V^{trans}(v) = h_V^{attn}(v) + MLP(h_V^{attn}(v)), (9)

where Attn is a trainable scaled dot-product attention layer [49]. This layer is widely used in recent GT models [57, 9, 42].

C Theoretical Results

In this section, we present the background of random walks on graphs and the theoretical properties of NeuralWalker.

Definition C.1 (Walk feature vector). For any graph G = (V, E, x, z) and W ∈ W_ℓ(G), the walk feature vector X_W of W is defined, by concatenating the node and edge feature vectors as well as the positional encodings along W of window size s = ℓ, as

X_W = (x(w_i), z(w_i w_{i+1}), h_pe[i])_{i=0,...,ℓ} ∈ R^{(ℓ+1) × d_walk},

where h_pe is the positional encoding in Section 3.2, z(w_ℓ w_{ℓ+1}) = 0, and d_walk := d + d' + d_pe. By abuse of notation, we denote by W(G) the set of walk feature vectors on G, and by P(W(G)) a distribution of walk feature vectors on G.

Lemma C.2. The walk feature vector with full graph coverage uniquely determines the graph, i.e., for two graphs G and G' in G_n, if there exists a walk W ∈ W_ℓ(G) visiting all nodes on G and a walk W' ∈ W_ℓ(G') visiting all nodes on G' such that X_W = X_{W'}, then G and G' are isomorphic.

Proof. The proof is immediate following Observation 1 of [48].
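Definition C.1 and the encodings of Section 3.2 can be made concrete with a small sketch: a walk is turned into one feature row per step by stacking the identity encoding id_W[i, j] = 1{w_i = w_{i−1−j}} and the adjacency encoding adj_W[i, j] = 1{w_i w_{i−1−j} ∈ E}. This is a hedged illustration under assumed naming and indexing conventions, not the paper's implementation; node and edge attributes are omitted.

```python
import random

# Hedged sketch of the positional-encoding part of the walk feature vector
# (Definition C.1): for each step i of a walk W, the identity encoding marks
# revisited nodes and the adjacency encoding marks edges back into the walk
# window. Names and index conventions are illustrative assumptions.

def sample_walk(adj, start, length, rng):
    """Draw a uniform random walk of the given length (illustrative)."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def walk_features(adj, walk, s):
    """Per-step feature rows: identity encoding followed by adjacency encoding,
    each over a window of size s; real node/edge attributes would be prepended."""
    feats = []
    for i, w in enumerate(walk):
        ident = [1 if i - 1 - j >= 0 and walk[i - 1 - j] == w else 0 for j in range(s)]
        adjac = [1 if i - 1 - j >= 0 and walk[i - 1 - j] in adj[w] else 0 for j in range(s)]
        feats.append(ident + adjac)
    return feats

# 4-cycle 0-1-2-3-0; a fixed walk around it (sample_walk would draw a random one)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
walk = [0, 1, 2, 3, 0]
feats = walk_features(adj, walk, s=4)
```

On this walk the last step returns to the start node, so the identity encoding sets id_W[4, 3] = 1, mirroring the w_4 = w_0 example in Figure 4.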
Now if we replace the normalization factor N_v(P_m(W_ℓ(G))) in the walk aggregator in Section 3.3.3 with the simpler deterministic constant mℓ/|V|, and apply an average pooling followed by a linear layer x ↦ u^T x + b ∈ R to the output of the walk aggregator, then the resulting function g_{f,m,ℓ} : G → R defined on the graph space G can be rewritten as the average of some function of walk feature vectors:

g_{f,m,ℓ}(G) = (1/m) Σ_{W ∈ P_m(W_ℓ(G))} f(X_W), (10)

where

f(X_W) = (1/ℓ) Σ_{w_i ∈ W} (u^T f_seq(h_W)[i] + b), (11)

and h_W, defined in Eq. (1), depends on X_W. Note that the above replacement of the normalization factor is not a strong assumption. It is based on the following lemmas:

Lemma C.3 ([32]). Let G be a connected graph. For a random walk W ∼ P(W(G)) with W = (w_0, w_1, ..., w_t, ...), we denote by P_t the distribution of w_t. Then, π(v) = d(v) / (2|E|), where d(v) denotes the degree of node v, is the (unique) stationary distribution, i.e., if P_0 = π then P_t = P_0 for any t. If P_0 = π, then we have E[N_v(P_m(W_ℓ(G)))] = mℓ d(v) / (2|E|). In particular, if G is a regular graph, π(v) = 1/|V| is the uniform distribution and E[N_v(P_m(W_ℓ(G)))] = mℓ/|V|.

Lemma C.4 ([32]). If G is a non-bipartite graph, then P_t → π as t → ∞.

The above two lemmas link the random normalization factor to the deterministic one. If we have a sufficiently large number of random walks, then by the law of large numbers we have

g_{f,m,ℓ}(G) →_{a.s.} g_{f,ℓ}(G) := E_{X_W ∼ P(W_ℓ(G))}[f(X_W)], (12)

where →_{a.s.} denotes almost sure convergence. This observation inspires us to consider the following integral probability metric [37] comparing distributions of walk feature vectors:

d_{F,ℓ}(G, G') := sup_{f ∈ F} | E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] − E_{X_{W'} ∼ P(W_ℓ(G'))}[f(X_{W'})] |, (13)

where F is some functional class, such as the class of neural networks defined by the NeuralWalker model. The following result provides insight into the rate of convergence of g_{f,m,ℓ} to g_{f,ℓ}:

Theorem C.5 (Convergence rate). Assume that Var[f(X_W)] = σ² < ∞.
Then, as m tends to infinity, we have

√m (g_{f,m,ℓ}(G) − g_{f,ℓ}(G)) →_d N(0, σ²),

where →_d denotes convergence in distribution.

Proof. The proof follows the central limit theorem [14].

d_{F,ℓ} is actually a metric on the graph space G_n of bounded order n if F is a universal space and ℓ is sufficiently large:

Theorem C.6. If F is a universal space and ℓ ≥ 4n³, then d_{F,ℓ} : G × G → R_+ is a metric on G_n satisfying:

• (positivity) if G and G' are non-isomorphic, then d_{F,ℓ}(G, G') > 0.
• (symmetry) d_{F,ℓ}(G, G') = d_{F,ℓ}(G', G).
• (triangle inequality) d_{F,ℓ}(G, G'') ≤ d_{F,ℓ}(G, G') + d_{F,ℓ}(G', G'').

Proof. The symmetry and triangle inequality are trivial by definition of d_{F,ℓ}. Let us focus on the positivity. We assume that d_{F,ℓ}(G, G') = 0. By the universality of F, for any ε > 0 and f ∈ C(R^{d_walk}), the space of bounded continuous functions on R^{d_walk}, there exists a g ∈ F such that ||f − g||_∞ ≤ ε. We then make the expansion

| E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] − E_{X_{W'} ∼ P(W_ℓ(G'))}[f(X_{W'})] |
≤ | E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] − E_{X_W ∼ P(W_ℓ(G))}[g(X_W)] |
+ | E_{X_W ∼ P(W_ℓ(G))}[g(X_W)] − E_{X_{W'} ∼ P(W_ℓ(G'))}[g(X_{W'})] |
+ | E_{X_{W'} ∼ P(W_ℓ(G'))}[g(X_{W'})] − E_{X_{W'} ∼ P(W_ℓ(G'))}[f(X_{W'})] |.

The first and third terms satisfy

| E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] − E_{X_W ∼ P(W_ℓ(G))}[g(X_W)] | ≤ E_{X_W ∼ P(W_ℓ(G))} |f(X_W) − g(X_W)| ≤ ε,

and the second term equals 0 by assumption. Hence,

| E_{X_W ∼ P(W_ℓ(G))}[f(X_W)] − E_{X_{W'} ∼ P(W_ℓ(G'))}[f(X_{W'})] | ≤ 2ε,

for all f ∈ C(R^{d_walk}) and ε > 0. This implies P(W_ℓ(G)) = P(W_ℓ(G')) by Lemma 9.3.2 of [14], meaning that the distribution of walk feature vectors of length ℓ in G is identical to the distribution in G'. Without loss of generality, we assume that G and G' are connected; our arguments can easily be generalized to each connected component if G is not connected. Now for a random walk W ∼ P(W(G)), let us denote by T_W the number of steps needed to reach every node on the graph. Then E[T_W] is called the cover time. A well-known result in graph theory [1] states that the cover time is upper bounded: E[T_W] ≤ 4|V||E|. Therefore, the cover time for graphs in G_n is uniformly bounded by E[T_W] ≤ 4n³, as |V| ≤ n and |E| ≤ n².
Then, by applying Markov’s inequality, we have

P[T_W < 4n³ + ϵ] = 1 − P[T_W ≥ 4n³ + ϵ] ≥ 1 − E[T_W] / (4n³ + ϵ) ≥ ϵ / (4n³ + ϵ) > 0,

for any ϵ > 0. Thus, P[T_W ≤ 4n³] > 0, which means that there exists a walk of length not greater than 4n³ that visits all nodes in G. As a result, there exists a random walk of length ℓ reaching all nodes for ℓ ≥ 4n³. P(W_ℓ(G)) = P(W_ℓ(G')) implies that there also exists a random walk W' in G' such that X_W = X_{W'}. As a consequence, G and G' are isomorphic following Lemma C.2.

Now if we remove the condition on the random walk length ℓ, we still have a pseudometric space, i.e., Thm. C.6 without the positivity. Moreover, we define the following isomorphism test:

Definition C.7 (k-subgraph isomorphism test). We define that two graphs G and G' are not distinguishable by the k-subgraph isomorphism test iff they have the same set of subgraphs of size k, i.e., S_k(G) = S_k(G'), with S_k(G) denoting the set of subgraphs of size k.

And we have the following result, which provides a weak positivity of d_{F,ℓ} for any ℓ > 0:

Theorem C.8. If G and G' are distinguishable by the (⌊ℓ/2⌋ + 1)-subgraph isomorphism test, then d_{F,ℓ}(G, G') > 0.

Proof. We assume that d_{F,ℓ}(G, G') = 0. Using the same arguments as in Thm. C.6, we have P(W_ℓ(G)) = P(W_ℓ(G')). Let k := ⌊ℓ/2⌋ + 1. For any subgraph H ∈ S_k(G), there exists a walk of length ℓ, in the worst case, that visits all its nodes. To see this, let us assume that G is connected without loss of generality. Then, there exists a spanning tree of H. Through a depth-first search on this spanning tree, there exists a walk of length 2(k − 1) ≤ ℓ that visits all the nodes, by visiting each edge of the spanning tree at most twice. Now, as G and G' have the same distributions of walk feature vectors, the same walk feature vector must be found in G', thus H ∈ S_k(G'). Thus, we have S_k(G) ⊆ S_k(G'). Similarly, we have the other inclusion and therefore S_k(G) = S_k(G').
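The cover-time bound E[T_W] ≤ 4|V||E| used in the proofs above can be checked empirically on a toy graph. The simulation below is illustrative only (graph, seed, and sample size are arbitrary choices, not part of the theory):

```python
import random

# Monte Carlo check of the cover-time bound E[T_W] <= 4|V||E|: simulate
# uniform random walks on a small connected graph and count the steps
# needed to visit every node at least once. Illustrative, not a proof.

def cover_time(adj, start, rng, max_steps=100000):
    """Steps until a uniform random walk from `start` has visited all nodes."""
    seen = {start}
    v = start
    for t in range(1, max_steps + 1):
        v = rng.choice(adj[v])
        seen.add(v)
        if len(seen) == len(adj):
            return t
    return max_steps  # safety cap, not reached on this tiny graph in practice

# path graph 0-1-2-3: |V| = 4, |E| = 3, so the bound gives 4 * 4 * 3 = 48
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rng = random.Random(0)
avg = sum(cover_time(adj, 0, rng) for _ in range(2000)) / 2000
```

On this path graph the empirical average lands well below the worst-case bound of 48, consistent with the bound being loose for small, sparse graphs.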
C.1 Stability Results

Now that we have a metric space (G_n, d_{F,ℓ}) with ℓ ≥ 4n³, we can show some useful properties of g_{f,ℓ}:

Theorem C.9 (Lipschitz continuity of g_{f,ℓ}). For any G and G' in G_n, if F is a functional space containing f, we have

|g_{f,ℓ}(G) − g_{f,ℓ}(G')| ≤ d_{F,ℓ}(G, G'). (14)

Proof. The proof is immediate from the definition of d_{F,ℓ}.

The Lipschitz property is needed for stability to perturbations, in the sense that if G' is close to G in (G_n, d_{F,ℓ}), then their images under g_{f,ℓ} (the outputs of the model) are also close.

C.2 Expressivity Results

Theorem C.10 (Injectivity of g_{f,ℓ}). Assume F is a universal space. If G and G' are non-isomorphic graphs, then there exists an f ∈ F such that g_{f,ℓ}(G) ≠ g_{f,ℓ}(G') if ℓ ≥ 4 max{|V|, |V'|}³.

Proof. We prove this by contrapositive. We note that G, G' ∈ G_{n_max} with n_max := max{|V|, |V'|}. Assume that for all f ∈ F, g_{f,ℓ}(G) = g_{f,ℓ}(G'). This implies that d_{F,ℓ}(G, G') = 0. Then, by the positivity of d_{F,ℓ} in G_{n_max}, G and G' are isomorphic.

The injectivity property ensures that our model, with a sufficiently large number of sufficiently long (≥ 4n³) random walks, can distinguish between non-isomorphic graphs, highlighting its expressive power. Complementary to the above results, we now show that the expressive power of our model exceeds that of ordinary message passing neural networks even when considering random walks of small size. Additionally, we show that the expressive power of our model is stronger than the subgraph isomorphism test up to a certain size. We base the following theorem on NeuralWalker’s ability to distinguish between substructures:

Theorem C.11. For any ℓ ≥ 2, NeuralWalker equipped with the complete walk set W_ℓ is strictly more expressive than 1-WL and the (⌊ℓ/2⌋ + 1)-subgraph isomorphism test, and thus ordinary MPNNs.

For the subgraph isomorphism test, we simply use the above theorem and Thm. C.8, which suggests that there exists an f ∈ F such that g_{f,ℓ}(G) ≠ g_{f,ℓ}(G') if G and G' are distinguishable by the (⌊ℓ/2⌋ + 1)-subgraph isomorphism test.
Note that 1-WL distinguishable graphs are not necessarily included in the (⌊ℓ/2⌋ + 1)-subgraph isomorphism distinguishable graphs, as the size of WL-unfolding subtrees could be arbitrarily large. In order to prove the 1-WL expressivity, we first state a result on the expressive power of the walk aggregator function. We show that there exist aggregation functions such that, for a node v, this function counts the number of induced subgraphs that v is part of. Since v assumes a particular role (also referred to as an orbit) in the subgraph, we are essentially interested in the subgraph rooted at v. In the following, let G_v denote the graph G rooted at node v. Then, the set x_ℓ(G, v) = {{ G_v = G[{w_0, ..., w_k = v}] : W = (w_0, ..., w_ℓ) ∈ W_ℓ(G) }} corresponds to the set of subgraphs with root v that are identified when using random walks of size ℓ.

Lemma C.12. There exists a function h_agg^V such that for any node v ∈ G, v' ∈ G' and walk length ℓ, it holds that h_agg^V(v) = h_agg^V(v') if and only if x_ℓ(G, v) = x_ℓ(G', v').

Proof. For simplicity, we assume graphs to be unlabeled, noting that a generalization to the labeled case requires only slight modifications. Recall that the positional encoding of a walk W encodes the pairwise adjacency of nodes contained in W. More formally, for a length-ℓ walk W ∈ W_ℓ(G), the k-th row of the corresponding walk feature vector X_W encodes the induced subgraph G[{w_0, ..., w_k}]. Assuming w_k = v, we can also infer the structural role of v in G[{w_0, ..., w_k}]. Now, the function h_agg^V aggregates this induced subgraph information for sets of subgraphs into node embeddings. That is, for a node v ∈ G and the set of walks W_ℓ(G), the function h_agg^V(v) maps v to an embedding that aggregates the set {{ G_v = G[{w_0, ..., w_k = v}] : W = (w_0, ..., w_ℓ) ∈ W_ℓ(G) }}. By considering the complete set of walks W_ℓ(G), we guarantee a deterministic embedding.
Assuming a sufficiently powerful neural network, it is easy to see that such a function h_agg^V can be realized by our model. The claim immediately follows.

Notice that the above theorem is defined on the entire set of walks of up to size ℓ in order to ensure a complete enumeration of subgraphs. By using an aggregation function that fulfills Lemma C.12, the resulting node embeddings encode the set of induced subgraphs that the nodes are part of. For example, with walk length ℓ = 2, the node embeddings contain information about the number of triangles that they are part of. In the subsequent message passing step, NeuralWalker propagates this subgraph information. Analogously to, e.g., [6], it can easily be shown that with a sufficient number of such message passing layers and a powerful readout network, the resulting graph representations are strictly more powerful than ordinary MPNNs, proving Thm. C.11 above.

C.3 Complexity Results

Theorem C.13 (Complexity). The complexity of NeuralWalker, when used with the Mamba sequence layer [22], is O(kdn(γℓ + β)), where k, d, n, γ, ℓ, β denote the number of layers, the hidden dimension, the (maximum) number of nodes, the sampling rate, the length of random walks, and the average degree, respectively.

Proof. The complexity of sampling random walks is O(nγℓ). The Mamba model with k layers and hidden dimension d operates on O(nγ) random walks of length ℓ. As Mamba scales linearly with the sequence length, number of layers, and hidden dimension [22], its complexity is O(kdnγℓ). The complexity of k message passing layers of hidden dimension d is O(kdnβ), where β should be much smaller than γℓ in general.

D Experimental Details and Additional Results

In this section, we provide implementation details and additional experimental results.

D.1 Dataset Description

We provide details of the datasets used in our experiments. For each dataset, we follow their respective training protocols and use the standard train/validation/test splits and evaluation metrics.
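Several of the transductive datasets below are characterized by a homophily score. As a hedged sketch of what such a score measures, the snippet below computes plain edge homophily (the fraction of edges joining same-class endpoints); the adjusted homophily of [41] reported in Table 8 further corrects this raw fraction for class imbalance, so the two are related but not identical.

```python
# Plain edge homophily: fraction of edges whose endpoints share a class label.
# A minimal sketch; the *adjusted* homophily of Platonov et al. [41] reported
# for the transductive benchmarks additionally corrects for class imbalance.

def edge_homophily(edges, labels):
    """edges: list of (u, v) pairs; labels: dict mapping node -> class id."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# toy example: a 4-node path with two classes
edges = [(0, 1), (1, 2), (2, 3)]
labels = {0: 0, 1: 0, 2: 1, 3: 1}
h = edge_homophily(edges, labels)  # 2 of the 3 edges are intra-class
```

Low values of such scores (as for ROMAN-EMPIRE or MINESWEEPER) indicate heterophilous graphs, where neighboring nodes tend to carry different labels.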
ZINC (MIT License) [17]. The ZINC dataset is a subset of the ZINC database, containing 12,000 molecular graphs representing commercially available chemical compounds. These graphs range from 9 to 37 nodes in size, with each node corresponding to a "heavy atom" (one of 28 possible types) and each edge representing a bond (one of 3 types). The goal is to predict the constrained solubility (logP) using regression. The dataset is conveniently pre-split for training, validation, and testing, with a standard split of 10,000/1,000/1,000 molecules for each set, respectively.

Table 7: Summary of the datasets [17, 16, 25] used in this study.

DATASET | # GRAPHS | AVG. # NODES | AVG. # EDGES | DIRECTED | PREDICTION LEVEL | PREDICTION TASK | METRIC
ZINC | 12,000 | 23.2 | 24.9 | NO | GRAPH | REGRESSION | MEAN ABS. ERROR
MNIST | 70,000 | 70.6 | 564.5 | YES | GRAPH | 10-CLASS CLASSIF. | ACCURACY
CIFAR10 | 60,000 | 117.6 | 941.1 | YES | GRAPH | 10-CLASS CLASSIF. | ACCURACY
PATTERN | 14,000 | 118.9 | 3,039.3 | NO | INDUCTIVE NODE | BINARY CLASSIF. | ACCURACY
CLUSTER | 12,000 | 117.2 | 2,150.9 | NO | INDUCTIVE NODE | 6-CLASS CLASSIF. | ACCURACY
PASCALVOC-SP | 11,355 | 479.4 | 2,710.5 | NO | INDUCTIVE NODE | 21-CLASS CLASSIF. | F1 SCORE
COCO-SP | 123,286 | 476.9 | 2,693.7 | NO | INDUCTIVE NODE | 81-CLASS CLASSIF. | F1 SCORE
PEPTIDES-FUNC | 15,535 | 150.9 | 307.3 | NO | GRAPH | 10-TASK CLASSIF. | AVG. PRECISION
PCQM-CONTACT | 529,434 | 30.1 | 61.0 | NO | INDUCTIVE LINK | LINK RANKING | MRR
PEPTIDES-STRUCT | 15,535 | 150.9 | 307.3 | NO | GRAPH | 11-TASK REGRESSION | MEAN ABS. ERROR
OGBG-MOLPCBA | 437,929 | 26.0 | 28.1 | NO | GRAPH | 128-TASK CLASSIF. | AVG. PRECISION
OGBG-PPA | 158,100 | 243.4 | 2,266.1 | NO | GRAPH | 37-TASK CLASSIF. | ACCURACY
OGBG-CODE2 | 452,741 | 125.2 | 124.2 | YES | GRAPH | 5-TOKEN SEQUENCE | F1 SCORE

Table 8: Summary of the datasets for transductive node classification [41, 30] used in this study.
DATASET | HOMOPHILY SCORE | # NODES | # EDGES | # CLASSES | METRIC
ROMAN-EMPIRE | 0.023 | 22,662 | 32,927 | 18 | ACCURACY
AMAZON-RATINGS | 0.127 | 24,492 | 93,050 | 5 | ACCURACY
MINESWEEPER | 0.009 | 10,000 | 39,402 | 2 | ROC AUC
TOLOKERS | 0.187 | 11,758 | 519,000 | 2 | ROC AUC
QUESTIONS | 0.072 | 48,921 | 153,540 | 2 | ROC AUC
POKEC | 0.000 | 1,632,803 | 30,622,564 | 2 | ACCURACY

MNIST and CIFAR10 (CC BY-SA 3.0 and MIT License) [17]. MNIST and CIFAR10 are adapted for graph-based learning by converting each image into a graph. This is achieved by segmenting the image into superpixels using SLIC (Simple Linear Iterative Clustering) and then connecting each superpixel to its 8 nearest neighbors. The resulting graphs maintain the original 10-class classification task and standard dataset splits (i.e., 55K/5K/10K train/validation/test for MNIST and 45K/5K/10K for CIFAR10).

PATTERN and CLUSTER (MIT License) [17]. PATTERN and CLUSTER are synthetic graph datasets constructed using the Stochastic Block Model (SBM). They offer a unique challenge for inductive node-level classification, where the goal is to predict the class label of unseen nodes.

PATTERN: This dataset presents the task of identifying pre-defined sub-graph patterns (100 possible) embedded within the larger graph. These embedded patterns are generated from distinct SBM parameters compared to the background graph, requiring the model to learn these differentiating connection characteristics.

CLUSTER: Each graph in CLUSTER consists of six pre-defined clusters generated using the same SBM distribution. However, only one node per cluster is explicitly labeled with its unique cluster ID. The task is to infer the cluster membership (ID) for all remaining nodes based solely on the graph structure and node connectivity information.

PASCALVOC-SP and COCO-SP (Custom license for Pascal VOC 2011 respecting Flickr terms of use, and CC BY 4.0 license) [16]. PascalVOC-SP and COCO-SP are graph datasets derived from the popular image datasets Pascal VOC and MS COCO, respectively.
These datasets leverage SLIC superpixelization, a technique that segments images into regions with similar properties. In both datasets, each superpixel is represented as a node in a graph, and the classification task is to predict the object class that each node belongs to.

PEPTIDES-FUNC and PEPTIDES-STRUCT (CC BY-NC 4.0) [16]. Peptides-func and Peptides-struct offer complementary views of peptide properties by leveraging atomic graphs derived from the SATPdb database. Peptides-func focuses on multi-label graph classification, aiming to predict one or more functional classes (out of 10 non-exclusive categories) for each peptide. In contrast, Peptides-struct employs graph regression to predict 11 continuous 3D structural properties of the peptides.

PCQM-CONTACT (CC BY 4.0) [16]. The PCQM-Contact dataset builds upon PCQM4Mv2 [25] by incorporating 3D molecular structures. This enables the task of binary link prediction, where the goal is to identify pairs of atoms (nodes) that are considered to be in close physical proximity (less than 3.5 angstroms) in 3D space, yet appear far apart (more than 5 hops) when looking solely at the 2D molecular graph structure. The standard evaluation metric for this ranking task is Mean Reciprocal Rank (MRR). As noticed by [47], the original implementation by [16] suffers from false negatives and self-loops. Thus, we use the filtered version of the MRR provided by [47].

OGBG-MOLPCBA (MIT License) [25]. The ogbg-molpcba dataset, incorporated by the Open Graph Benchmark (OGB) [25] from MoleculeNet, focuses on multi-task binary classification of molecular properties. This dataset leverages a standardized node (atom) and edge (bond) feature representation that captures relevant chemophysical information. Derived from PubChem BioAssay, ogbg-molpcba offers the task of predicting the outcome of 128 distinct bioassays, making it valuable for studying the relationship between molecular structure and biological activity.
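ogbg-molpcba is scored with average precision computed per task and then averaged over the 128 tasks. The stdlib sketch below shows per-task average precision under the assumption that missing labels (NaN in OGB's format, `None` here) are skipped; OGB's official Evaluator should be used for reported numbers.

```python
# Average precision for one binary task, as used per task (then averaged over
# the 128 tasks) for ogbg-molpcba. Hedged sketch: None marks a missing label
# and is skipped, emulating OGB's NaN handling; use ogb.graphproppred.Evaluator
# for official results.

def average_precision(labels, scores):
    """AP = mean of precision-at-k over the ranks k of the positive examples."""
    pairs = [(s, y) for s, y in zip(scores, labels) if y is not None]
    pairs.sort(key=lambda p: -p[0])  # rank predictions by score, descending
    hits, precisions = 0, []
    for rank, (_, y) in enumerate(pairs, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / rank)  # precision at this positive
    return sum(precisions) / len(precisions) if precisions else 0.0

# a perfect ranking of the observed labels gives AP = 1.0; None is ignored
ap = average_precision([1, None, 1, 0], [0.9, 0.5, 0.8, 0.1])
```

The dataset-level score would then be the mean of such per-task AP values over all 128 bioassays.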
OGBG-PPA (CC-0 License) [25]. The PPA dataset, introduced by OGB [25], focuses on species classification. This dataset represents protein-protein interactions within a network, where each node corresponds to a protein and edges denote associations between them. Edge attributes provide additional information about these interactions, such as co-expression levels. We employ the standard dataset splits established by OGB [25] for our analysis.

OGBG-CODE2 (MIT License) [25]. CODE2 [25] is a dataset containing source code from the Python programming language. It is made up of Abstract Syntax Trees where the task is to classify the sub-tokens that comprise the method name. We use the standard splits provided by OGB [25].

ROMAN-EMPIRE (MIT License) [41]. This dataset creates a graph from the Roman Empire Wikipedia article. Each word becomes a node, and edges connect words that are either sequential in the text or grammatically dependent (based on the dependency tree). Nodes are labeled by their syntactic role (the 17 most frequent roles are selected as unique classes and all other roles are grouped into an 18th class). We use the standard splits provided by [41].

AMAZON-RATINGS (MIT License) [41]. Based on Amazon product co-purchase data, this dataset predicts a product's average rating (5 classes). Products (books, etc.) are nodes, connected if frequently bought together. Mean fastText embeddings of the product descriptions are used as node features, and only the largest connected component (5-core) is kept for efficiency. We use the standard splits provided by [41].

MINESWEEPER (MIT License) [41]. This is a synthetic dataset with a regular 100x100 grid where nodes represent cells. Each node connects to its eight neighbors (except at the grid boundary). 20% of the nodes are randomly selected to contain mines, and the task is to predict which cells are mines. Node features are one-hot-encoded counts of neighboring mines, but are missing for 50% of the nodes (marked by a separate binary feature).
This grid structure differs from the other datasets due to its regularity (average degree: 7.88). Since mines are placed at random, both adjusted homophily and label informativeness are very low. We use the standard splits provided by [41].

TOLOKERS (MIT License) [41]. This dataset features workers (nodes) from crowdsourcing projects. Edges connect workers who have collaborated on at least one of the 13 projects. The task is to predict banned workers. Node features include profile information and performance statistics. This graph (11.8K nodes, average degree 88.28) is significantly denser than the other datasets. We use the standard splits provided by [41].

QUESTIONS (MIT License) [41]. This dataset focuses on user activity prediction. Users are nodes, connected if they answered each other's questions (Sept 2021 - Aug 2022). The task is to predict which users remained active. User descriptions (if available) are encoded using fastText embeddings. Notably, 15% of users lack descriptions and are identified by a separate feature. We use the standard splits provided by [41].

POKEC (unknown license) [30]. This dataset was retrieved from SNAP [30] and preprocessed by [31]. The dataset contains anonymized data of the whole network of Pokec, the most popular online social network in Slovakia, which has been running for more than 10 years and connects more than 1.6 million people. Profile data contains gender, age, hobbies, interests, education, etc., and the task is to predict the gender. The dataset was not released with a license. Thus, we only provide numerical values without any raw texts from the dataset.

Table 9: Hyperparameters for the 5 datasets from GNN Benchmarks [17].

Hyperparameter                    ZINC       MNIST     CIFAR10   PATTERN   CLUSTER
# Blocks                          3          3         3         3         16
Hidden dim                        80         80        80        80        32
Sequence layer                    Mamba (bidirectional)
Local message passing             GIN
Global message passing            VN         None      None      VN        VN
Dropout                           0.0        0.0       0.0       0.0       0.0
Graph pooling                     Sum        Mean      Mean      –         –
RW sampling rate                  1.0        0.5       0.5       0.5       0.5
RW length                         50         50        50        100       200
RW position encoding window size  8          8         8         16        32
Batch size                        50         32        32        32        32
Learning rate                     0.002      0.002     0.002     0.002     0.01
# Epochs                          2000       100       100       100       100
# Warmup epochs                   50         5         5         5         5
Weight decay                      0.0        1e-6      1e-6      0.0       0.0
# Parameters                      502K       112K      112K      504K      525K
Training time (epoch/total)       16s/8.4h   90s/2.5h  95s/2.6h  57s/1.6h  241s/6.7h

D.2 Computing details

We implemented our models using PyTorch Geometric [18] (MIT License). Experiments were conducted on a shared computing cluster with various CPU and GPU configurations, including a mix of NVIDIA A100 (40GB) and H100 (80GB) GPUs. Each experiment was allocated resources on a single GPU, along with 4-8 CPUs and up to 60GB of system RAM. The run-time of each model was measured on a single NVIDIA A100 GPU.

D.3 Hyperparameters

Given the large number of hyperparameters and datasets, we did not perform an exhaustive search beyond the ablation studies in Section 5.3. For each dataset, we then adjusted the number of layers, the hidden dimension, the learning rate, and the weight decay based on hyperparameters reported in the related literature [42, 48, 12, 47]. For the datasets from Benchmarking GNNs [17] and LRGB [16], we follow the commonly used parameter budget of 500K parameters. For the node classification datasets from [41] and [30], we strictly follow the experimental setup of the state-of-the-art method Polynormer [12].
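The warm-up epochs and learning rates listed in Table 9 above feed a linear warm-up followed by cosine decay of the learning rate (the schedule the paper states it uses, following [42]). A minimal stand-alone sketch; the function name and the decay-to-zero floor are assumptions, not details from the NeuralWalker codebase.

```python
import math

def lr_at(epoch, base_lr, warmup_epochs, total_epochs):
    """Linear warm-up to base_lr, then cosine decay toward zero."""
    if epoch < warmup_epochs:
        # ramp linearly over the warm-up phase
        return base_lr * (epoch + 1) / warmup_epochs
    # cosine decay over the remaining epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# ZINC-style settings from Table 9: lr 0.002, 50 warm-up epochs, 2000 epochs.
schedule = [lr_at(e, 0.002, 50, 2000) for e in range(2000)]
```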
We only replace the global attention blocks of Polynormer with NeuralWalker's walk encoder blocks and use the same hyperparameters selected by Polynormer [12]. We use the AdamW optimizer throughout our experiments with the default beta parameters in PyTorch. We use a linear warm-up increase of the learning rate at the beginning of training followed by its cosine decay, as in [42]. The test sampling rate is always set to 1.0 if not specified. The detailed hyperparameters used in NeuralWalker, as well as the model sizes and runtimes on the different datasets, are provided in Tables 9, 10, 11, and 12.

D.4 Additional Results for Ablation Studies

We provide more detailed results for the ablation studies in Table 13.

D.5 Detailed Results and Robustness to Sampling Variability

Since NeuralWalker's output depends on the sampled random walks, we evaluate its robustness to sampling variability. Following Tönshoff et al. [48], we measure the local standard deviation (local std) by computing the standard deviation of performance metrics obtained with five independent sets of random walks (details in Tönshoff et al. [48]). The complete results for all datasets are presented in Table 14. Notably, by comparing the local std to the cross-model std obtained from training different models with varying random seeds, we consistently observe a smaller local std. This finding suggests that NeuralWalker's predictions are robust to the randomness inherent in the random walk sampling process.

Table 10: Hyperparameters for the 5 datasets from LRGB [16].

Hyperparameter                    PASCALVOC-SP  COCO-SP    PEPTIDES-FUNC  PEPTIDES-STRUCT  PCQM-CONTACT
# Blocks                          6             6          6              6                3
Hidden dim                        52            56         56             56               80
Sequence layer                    Mamba (bidirectional)
Local message passing             GIN
Global message passing            Trans.        None       VN             VN               VN
Dropout                           0.0           0.0        0.0            0.0              0.0
Graph pooling                     –             –          Mean           Mean             –
RW sampling rate                  0.5           0.25       0.5            0.5              0.5
RW length                         100           100        100            100              75
RW position encoding window size  16            16         16             32               16
Batch size                        32            32         32             32               256
Learning rate                     0.002         0.002      0.002          0.004            0.001
# Epochs                          200           200        200            200              150
# Warmup epochs                   10            10         10             10               10
Weight decay                      1e-6          0.0        0.0            0.0              0.0
# Parameters                      556K          492K       530K           541K             505K
Training time (epoch/total)       218s/12h      1402s/78h  112s/6.2h      112s/6.2h        528s/22h

Table 11: Hyperparameters for the 3 datasets from OGB [25].

Hyperparameter                    OGBG-MOLPCBA  OGBG-PPA   OGBG-CODE2
# Blocks                          4             1          3
Hidden dim                        500           384        256
Sequence layer                    Conv.         Conv.      Conv.
Local message passing             GatedGCN      GIN        GIN
Global message passing            VN            Performer  Trans.
Dropout                           0.4           0.4        0.0
Graph pooling                     Mean          Mean       Mean
RW sampling rate                  0.5           0.5        0.5
RW length                         25            200        100
RW position encoding window size  8             32         64
Batch size                        512           32         32
Learning rate                     0.002         0.002      0.0003
# Epochs                          100           200        30
# Warmup epochs                   5             10         2
Weight decay                      0.0           0.0        0.0
# Parameters                      13.0M         3.1M       12.5M
Training time (epoch/total)       226s/6.3h     671s/37h   1597s/13.3h

Table 12: Hyperparameters for node classification datasets from Platonov et al. [41] and Leskovec and Krevl [30]. The other hyperparameters strictly follow Polynormer [12].

Hyperparameter                    ROMAN-EMPIRE  AMAZON-RATINGS  MINESWEEPER  TOLOKERS     QUESTIONS    POKEC
Sequence layer                    Mamba / Conv.
Dropout                           0.3           0.2             0.3          0.1          0.2          0.1
RW sampling rate                  0.01          0.01            0.01         0.01         0.01         0.001
RW test sampling rate             0.1           0.1             0.1          0.1          0.05         0.001
RW length                         1000          1000            1000         1000         1000         500
RW position encoding window size  8             8               8            8            8            8
Learning rate                     0.0005        0.0005          0.0005       0.001        5e-5         0.0005
Training time (epoch/total)       0.50s/0.35h   0.6s/0.45h      0.22s/0.12h  0.67s/0.19h  0.67s/0.32h  6.44s/4.5h

Table 13: Ablation studies of NeuralWalker on different choices of the sequence layer, local and global message passing. Validation performances with mean ± std of 4 runs are reported. We compare different choices of sequence layers (Mamba, S4, CNN, and Transformer), local (with or without GIN) and global (virtual node (VN), Transformer, or none (w/o)) message passing layers.
Note that the row highlighted with the light gray color corresponds to the choices of CRaWL [48].

Sequence layer   Local MP  Global MP  ZINC           CIFAR10         PASCALVOC-SP
Mamba            GIN       VN         0.078 ± 0.004  78.610 ± 0.524  0.4672 ± 0.0077
Mamba            GIN       Trans.     0.083 ± 0.003  80.755 ± 0.467  0.4877 ± 0.0042
Mamba            GIN       w/o        0.085 ± 0.003  80.885 ± 0.769  0.4611 ± 0.0036
Mamba            w/o       VN         0.086 ± 0.008  78.025 ± 0.552  0.4570 ± 0.0064
Mamba            w/o       w/o        0.090 ± 0.002  79.035 ± 0.850  0.4525 ± 0.0044
Mamba (w/o bid)  GIN       VN         0.089 ± 0.004  74.910 ± 0.547  0.4522 ± 0.0063
S4               GIN       VN         0.082 ± 0.004  77.970 ± 0.506  0.4559 ± 0.0064
CNN              GIN       VN         0.088 ± 0.004  80.240 ± 0.767  0.4652 ± 0.0058
CNN              GIN       Trans.     0.092 ± 0.004  80.665 ± 0.408  0.4790 ± 0.0081
CNN              GIN       w/o        0.102 ± 0.003  80.020 ± 0.279  0.4155 ± 0.0050
CNN              w/o       w/o        0.116 ± 0.003  78.760 ± 0.242  0.3954 ± 0.0080
Trans.           GIN       VN         0.084 ± 0.003  72.850 ± 0.373  0.4316 ± 0.0072

Table 14: Detailed results for all the datasets. Note that different metrics are used to measure the performance on the datasets. For each experiment, we provide the cross-model std using different random seeds and the local std using different sets of random walks.
Dataset          Metric   Test score  Test cross-model std  Test local std  Val. score  Val. cross-model std
ZINC             MAE      0.0646      0.0007                0.0005          0.0782      0.0038
MNIST            ACC      0.9876      0.0008                0.0003          0.9902      0.0006
CIFAR10          ACC      0.8003      0.0019                0.0009          0.8125      0.0053
PATTERN          ACC      0.8698      0.0001                0.0001          0.8689      0.0003
CLUSTER          ACC      0.7819      0.0019                0.0004          0.7827      0.0007
PASCALVOC-SP     F1       0.4912      0.0042                0.0019          0.5053      0.0084
COCO-SP          F1       0.4398      0.0033                0.0011          0.4446      0.0030
PEPTIDES-FUNC    AP       0.7096      0.0078                0.0014          0.7145      0.0033
PEPTIDES-STRUCT  AP       0.2463      0.0005                0.0004          0.2389      0.0021
PCQM-CONTACT     MRR      0.4707      0.0007                0.0002          0.4743      0.0006
OGBG-MOLPCBA     AP       0.3086      0.0031                0.0010          0.3160      0.0032
OGBG-PPA         ACC      0.7888      0.0059                0.0004          0.7460      0.0058
OGBG-CODE2       F1       0.1957      0.0025                0.0005          0.1796      0.0031
ROMAN-EMPIRE     ACC      0.9292      0.0036                0.0005          0.9310      0.0032
AMAZON-RATINGS   ACC      0.5458      0.0036                0.0009          0.5491      0.0049
MINESWEEPER      ROC AUC  0.9782      0.0040                0.0003          0.9794      0.0047
TOLOKERS         ROC AUC  0.8556      0.0075                0.0010          0.8540      0.0096
QUESTIONS        ROC AUC  0.7852      0.0113                0.0009          0.7902      0.0086
POKEC            ACC      0.8646      0.0009                0.0001          0.8644      0.0003
 | 6 | 1 | The proposed NeuralWalker architecture leverages random walks and message-passing mechanisms, which typically have a moderate level of complexity. It combines local and long-range dependencies, making it a compact yet powerful model. Given the extensive graphs and nodes it claims to handle (up to 1.6M nodes), one can estimate that the model parameters would be substantial but not overly large compared to typical models in GNNs or transformers. The benchmark datasets indicate varied sizes, but datasets like PascalVOC-SP and COCO-SP suggest that the training time could reasonably fit within 6 hours for a moderately sized model. A single modern GPU can efficiently handle the training of such GNNs and transformers, assuming the implementation is optimized. Considering these factors, it's likely that the model can be trained within the 8-hour window on a single GPU, especially if memory-efficient operations are used. 
Unfortunately, no specific hardware was mentioned in the paper, but a standard modern GPU with adequate memory (e.g. NVIDIA V100 or better) should suffice for this task. | yes | Yes | Graph | Learning Long Range Dependencies on Graphs via Random Walks | 2024-06-05 0:00:00 | https://github.com/borgwardtlab/neuralwalker | 1 | In Code | 1 | Untitled13.ipynb | Yes | Works with installing micromamba first |
MalNet-Tiny | GatedGCN+ | [] | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00 | https://arxiv.org/abs/2502.09263v1 | [
"https://github.com/LUOyk1999/GNNPlus"
] | {'Accuracy': '94.600±0.570'} | [
"Accuracy",
"MCC"
] | Given the following paper and codebase:
Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
Codebase: https://github.com/LUOyk1999/GNNPlus
Improve the GatedGCN+ model on the MalNet-Tiny dataset. The result
should improve on the following metrics: {'Accuracy': '94.600±0.570'}. You must use only the codebase provided.
| Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence. Yuankai Luo (1,2), Lei Shi* (1), Xiao-Ming Wu* (2). (1) Beihang University, (2) The Hong Kong Polytechnic University. Abstract: Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in capturing long-range dependencies, while Graph Transformers (GTs) are considered superior due to their global attention mechanisms. Literature frequently suggests that GTs outperform GNNs, particularly in graph-level tasks such as graph classification and regression. In this study, we explore the untapped potential of GNNs through an enhanced framework, GNN+, which integrates six widely used techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic evaluation of three classic GNNs—GCN, GIN, and GatedGCN—enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results show that, contrary to the prevailing belief, classic GNNs excel in graph-level tasks, securing top-three rankings across all datasets and achieving first place in eight, while also demonstrating greater efficiency than GTs. This highlights the potential of simple GNN architectures, challenging the belief that complex mechanisms in GTs are essential for superior graph-level performance. Our source code is available at https://github.com/LUOyk1999/tunedGNN-G. 1. Introduction. Graph machine learning addresses both graph-level tasks and node-level tasks, as illustrated in Figure 1. These tasks fundamentally differ in their choice of the basic unit for dataset composition, splitting, and training, with graph-level tasks focusing on the entire graph, while node-level tasks focus on individual nodes. Graph-level tasks (Dwivedi et al., 
*Corresponding authors: Lei Shi <{leishi, luoyk}@buaa.edu.cn>, Xiao-Ming Wu <xiao-ming.wu@polyu.edu.hk>. Preprint. Figure 1. Differences between graph-level and node-level tasks. 2023; Hu et al., 2020; Luo et al., 2023b;a) often involve the classification of relatively small molecular graphs in chemistry (Morris et al., 2020) or the prediction of protein properties in biology (Dwivedi et al., 2022). In contrast, node-level tasks typically involve large social networks (Tang et al., 2009) or citation networks (Yang et al., 2016), where the primary goal is node classification. This distinction in the fundamental unit of the dataset leads to differences in methodologies, training strategies, and application domains. Message-passing Graph Neural Networks (GNNs) (Gilmer et al., 2017), which iteratively aggregate information from local neighborhoods to learn node representations, have become the predominant approach for both graph-level and node-level tasks (Niepert et al., 2016; Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2018; Bresson & Laurent, 2017; Wu et al., 2020). Despite their widespread success, GNNs exhibit several inherent limitations, including restricted expressiveness (Xu et al., 2018; Morris et al., 2019), over-smoothing (Li et al., 2018; Chen et al., 2020), over-squashing (Alon & Yahav, 2020), and a limited capacity to capture long-range dependencies (Dwivedi et al., 2022). 
A prevalent perspective is that Graph Transformers (GTs) (Müller et al., 2023; Min et al., 2022; Hoang et al., 2024), as an alternative to GNNs, leverage global attention mechanisms that enable each node to attend to all others (Yun et al., 2019; Dwivedi & Bresson, 2020), effectively modeling long-range interactions and addressing issues such as over-smoothing, over-squashing, and limited expressiveness (Kreuzer et al., 2021; Ying et al., 2021; Zhang et al., 2023; Luo et al., 2023c; 2024b). (arXiv:2502.09263v1 [cs.LG] 13 Feb 2025) However, the quadratic complexity of global attention mechanisms limits the scalability of GTs in large-scale, real-world applications (Behrouz & Hashemi, 2024; Sancak et al., 2024; Ding et al., 2024). Moreover, it has been noted that many state-of-the-art GTs (Chen et al., 2022; Rampášek et al., 2022; Shirzad et al., 2023; Ma et al., 2023) still rely—either explicitly or implicitly—on the message passing mechanism of GNNs to learn local node representations, thereby enhancing performance. Recent studies (Luo et al., 2024a; 2025a;b) have shown that, contrary to common belief, classic GNNs such as GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017) can achieve performance comparable to, or even exceeding, that of state-of-the-art GTs for node-level tasks. However, a similar conclusion has not yet been established for graph-level tasks. While Tönshoff et al. (2023) conducted pioneering research demonstrating that tuning a few hyperparameters can significantly enhance the performance of classic GNNs, their results indicate that these models still do not match the overall performance of GTs. Furthermore, their investigation is limited to the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022). 
This raises an important question: "Can classic GNNs also excel in graph-level tasks?" To thoroughly investigate this question, we introduce GNN+, an enhanced GNN framework that incorporates established techniques into the message-passing mechanism, to effectively address graph-level tasks. As illustrated in Fig. 2, GNN+ integrates six widely used techniques: the incorporation of edge features (Gilmer et al., 2017), normalization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014), residual connections (He et al., 2016), feed-forward networks (FFN) (Vaswani et al., 2017), and positional encoding (Vaswani et al., 2017). Each technique serves as a hyperparameter that can be tuned to optimize performance. We systematically evaluate 3 classic GNNs—GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017)—enhanced by the GNN+ framework across 14 well-known graph-level datasets from the GNN Benchmark (Dwivedi et al., 2023), LRGB (Dwivedi et al., 2022), and OGB (Hu et al., 2020). The results demonstrate that the enhanced versions of classic GNNs match or even outperform state-of-the-art (SOTA) GTs, achieving rankings in the top three, including first place in eight datasets, while exhibiting superior efficiency. These findings provide a positive answer to the previously posed question, suggesting that the true potential of GNNs for graph-level applications has been previously underestimated, and that the GNN+ framework effectively unlocks this potential while addressing their inherent limitations. Our ablation study also highlights the importance of each technique used in GNN+ and offers valuable insights for future research. 2. Classic GNNs for Graph-level Tasks. Define a graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, X, E)$, where $\mathcal{V}$ is the set of nodes, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. The node feature matrix is $X \in \mathbb{R}^{|\mathcal{V}| \times d_V}$, where $|\mathcal{V}|$ is the number of nodes, and $d_V$ is the dimension of the node features. 
The edge feature matrix is $E \in \mathbb{R}^{|\mathcal{E}| \times d_E}$, where $|\mathcal{E}|$ is the number of edges and $d_E$ is the dimension of the edge features. Let $A \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ denote the adjacency matrix of $\mathcal{G}$. Message-passing Graph Neural Networks (GNNs) compute node representations $h_v^l$ at each layer $l$ via a message-passing mechanism, defined by Gilmer et al. (2017):

$$h_v^l = \mathrm{UPDATE}^l\left(h_v^{l-1}, \mathrm{AGG}^l\left(\left\{h_u^{l-1} \mid u \in \mathcal{N}(v)\right\}\right)\right), \quad (1)$$

where $\mathcal{N}(v)$ represents the neighboring nodes adjacent to $v$, $\mathrm{AGG}^l$ is the message aggregation function, and $\mathrm{UPDATE}^l$ is the update function. Initially, each node $v$ is assigned a feature vector $h_v^0 = x_v \in \mathbb{R}^d$. The function $\mathrm{AGG}^l$ is then used to aggregate information from the neighbors of $v$ to update its representation. The output of the last layer $L$, i.e., $\mathrm{GNN}(v, A, X) = h_v^L$, is the representation of $v$ produced by the GNN. In this work, we focus on three classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017), which differ in their approach to learning the node representation $h_v^l$. Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), the vanilla GCN model, is formulated as:

$$h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l\Big), \quad (2)$$

where $\hat{d}_v = 1 + \sum_{u \in \mathcal{N}(v)} 1$, $\sum_{u \in \mathcal{N}(v)} 1$ denotes the degree of node $v$, $W^l$ is the trainable weight matrix in layer $l$, and $\sigma$ is the activation function, e.g., $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. Graph Isomorphism Networks (GIN) (Xu et al., 2018) learn node representations through a different approach:

$$h_v^l = \mathrm{MLP}^l\Big((1 + \epsilon) \cdot h_v^{l-1} + \sum_{u \in \mathcal{N}(v)} h_u^{l-1}\Big), \quad (3)$$

where $\epsilon$ is a constant, typically set to 0, and $\mathrm{MLP}^l$ denotes a multi-layer perceptron, which usually consists of 2 layers. 
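Eq. (2) can be traced on a toy graph. The `gcn_layer` below is an illustrative pure-Python rendering, not the paper's implementation; on a triangle with identity weights every node ends up with the same embedding, a tiny instance of the over-smoothing effect the paper discusses.

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN layer (Eq. 2): symmetric-normalized aggregation over
    N(v) ∪ {v}, a linear map W, then ReLU. `adj` maps each node to its
    neighbor list; self-loops enter via the d̂_v = 1 + deg(v) convention.
    """
    dhat = {v: 1 + len(adj[v]) for v in adj}
    out = {}
    for v in adj:
        agg = [0.0] * len(feats[v])
        for u in list(adj[v]) + [v]:
            coef = 1.0 / math.sqrt(dhat[u] * dhat[v])
            for d, x in enumerate(feats[u]):
                agg[d] += coef * x
        # linear transform (in_dim x out_dim) followed by ReLU
        out[v] = [
            max(0.0, sum(agg[i] * weight[i][j] for i in range(len(agg))))
            for j in range(len(weight[0]))
        ]
    return out

# Tiny triangle graph with 2-d features and an identity weight matrix.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
W = [[1.0, 0.0], [0.0, 1.0]]
h = gcn_layer(adj, feats, W)
# All d̂ equal 3, so each node averages all three feature vectors:
# every embedding becomes [2/3, 2/3].
```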
Residual Gated Graph Convolutional Networks (GatedGCN) (Bresson & Laurent, 2017) enhance traditional graph convolutions by incorporating gating mechanisms, improving adaptability and expressiveness:

$$h_v^l = h_v^{l-1} W_1^l + \sum_{u \in \mathcal{N}(v)} \eta_{v,u} \odot h_u^{l-1} W_2^l, \quad (4)$$

where $\eta_{v,u} = \sigma(h_v^{l-1} W_3^l + h_u^{l-1} W_4^l)$ is the gating function, and $\sigma$ denotes the sigmoid activation function. This gating function determines how much each neighboring node contributes to updating the representation of the current node. The matrices $W_1^l, W_2^l, W_3^l, W_4^l$ are trainable weight matrices specific to the layer $l$. Graph-level tasks treat the entire graph, rather than individual nodes or edges, as the fundamental unit for dataset composition, splitting, and training. Formally, given a labeled graph dataset $\Gamma = \{(\mathcal{G}_i, y_i)\}_{i=1}^n$, each graph $\mathcal{G}_i$ is associated with a label vector $y_i$, representing either categorical labels for classification or continuous values for regression. Next, the dataset $\Gamma$ is typically split into training, validation, and test sets, denoted as $\Gamma = \Gamma_{\mathrm{train}} \cup \Gamma_{\mathrm{val}} \cup \Gamma_{\mathrm{test}}$. Graph-level tasks encompass inductive prediction tasks that operate on entire graphs, as well as on individual nodes or edges (Dwivedi et al., 2022), with each corresponding to a distinct label vector $y_i$. Each type of task requires a tailored graph readout function $R$, which aggregates the output representations to compute the readout result, expressed as:

$$h_i^{\mathrm{readout}} = R\left(\left\{h_v^L : v \in \mathcal{V}_i\right\}\right), \quad (5)$$

where $\mathcal{V}_i$ represents the set of nodes in the graph $\mathcal{G}_i$. For example, for graph prediction tasks, which aim to make predictions about the entire graph, the readout function $R$ often operates as a global mean pooling function. Finally, for any graph $\mathcal{G}_i$, the readout result is passed through a prediction head $g(\cdot)$ to obtain the predicted label $\hat{y}_i = g(h_i^{\mathrm{readout}})$. The training objective is to minimize the total loss $\mathcal{L}(\theta) = \sum_{\mathcal{G}_i \in \Gamma_{\mathrm{train}}} \ell(\hat{y}_i, y_i)$ w.r.t. all graphs in the training set $\Gamma_{\mathrm{train}}$, where $y_i$ represents the ground-truth label of $\mathcal{G}_i$ and $\theta$ denotes the trainable GNN parameters.

3. GNN+: Enhancing Classic GNNs for Graph-level Tasks. We propose an enhancement to classic GNNs for graph-level tasks by incorporating six popular techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks (FFN), and positional encoding. The enhanced framework, GNN+, is illustrated in Figure 2 (the architecture of GNN+).

3.1. Edge Feature Integration. Edge features were initially incorporated into some GNN frameworks (Gilmer et al., 2017; Hu et al., 2019) by directly integrating them into the message-passing process to enhance information propagation between nodes. Following this practice, GraphGPS (Rampášek et al., 2022) and subsequent GTs encode edge features within their local modules to enrich node representations. Taking GCN (Eq. 2) as an example, the edge features are integrated into the message-passing process as follows:

$$h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big), \quad (6)$$

where $W_e^l$ is the trainable weight matrix in layer $l$, and $e_{uv}$ is the feature vector of the edge between $u$ and $v$.

3.2. Normalization. Normalization techniques play a critical role in stabilizing the training of GNNs by mitigating the effects of covariate shift, where the distribution of node embeddings changes across layers during training. By normalizing node embeddings at each layer, the training process becomes more stable, enabling the use of higher learning rates and achieving faster convergence (Cai et al., 2021). Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Layer Normalization (LN) (Ba et al., 2016) are widely used techniques, typically applied to the output of each layer before the activation function $\sigma(\cdot)$. Here, we use BN:

$$h_v^l = \sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big). \quad (7)$$

3.3. Dropout. Dropout (Srivastava et al., 2014), a technique widely used in convolutional neural networks (CNNs) to address overfitting by reducing co-adaptation among hidden neurons (Hinton et al., 2012; Yosinski et al., 2014), has also been found to be effective in addressing similar issues in GNNs (Shu et al., 2022), where the co-adaptation effects propagate and accumulate via message passing among different nodes. Typically, dropout is applied to the embeddings after activation:

$$h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big). \quad (8)$$

3.4. Residual Connection. Residual connections (He et al., 2016) significantly enhance CNN performance by directly connecting the input of a layer to its output, thus alleviating the problem of vanishing gradients. They were first adopted by the vanilla GCN (Kipf & Welling, 2017) and have since been incorporated into subsequent works such as GatedGCN (Bresson & Laurent, 2017) and DeepGCNs (Li et al., 2019). Formally, residual connections can be integrated into GNNs as follows:

$$h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big) + h_v^{l-1}. \quad (9)$$

While deeper networks, such as deep CNNs (He et al., 2016; Huang et al., 2017), are capable of extracting more complex features, GNNs encounter challenges like over-smoothing (Li et al., 2018), where deeper models lead to indistinguishable node representations. Consequently, most GNNs are shallow, typically with 2 to 5 layers. However, by incorporating residual connections, we show that deeper GNNs, ranging from 3 to 20 layers, can achieve strong performance.

3.5. Feed-Forward Network. GTs incorporate a feed-forward network (FFN) as a crucial component within each of their layers. The FFN enhances the model's ability to perform complex feature transformations and introduces non-linearity, thereby increasing the network's expressive power. 
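Eqs. (8)-(9) fix an order of operations: aggregate, normalize, activate, apply dropout, then add the residual. A simplified sketch follows, with per-feature standardization standing in for BatchNorm, an optional fixed 0/1 mask standing in for dropout, and a caller-supplied per-node FFN; all of these are simplifications for illustration, not the GNN+ code.

```python
def gnn_plus_update(h_prev, messages, ffn, keep=None):
    """One simplified GNN+-style update:

        h = FFN(Dropout(ReLU(Norm(messages))) + h_prev)

    `messages` is the already-aggregated neighborhood signal (edge
    features assumed folded in), `ffn` a per-node function, and `keep`
    an optional 0/1 dropout mask.
    """
    n, d = len(messages), len(messages[0])
    # per-feature mean/std across the "batch" of nodes (BatchNorm stand-in)
    normed = []
    for j in range(d):
        col = [messages[i][j] for i in range(n)]
        mu = sum(col) / n
        var = sum((x - mu) ** 2 for x in col) / n
        std = (var + 1e-5) ** 0.5
        normed.append([(x - mu) / std for x in col])
    out = []
    for i in range(n):
        row = []
        for j in range(d):
            x = max(0.0, normed[j][i])       # ReLU
            if keep is not None:
                x *= keep[i][j]              # dropout mask
            row.append(x + h_prev[i][j])     # residual connection
        out.append(ffn(row))                 # per-node FFN (identity here)
    return out

identity_ffn = lambda row: row
h = gnn_plus_update(
    h_prev=[[1.0, 2.0], [3.0, 4.0]],
    messages=[[0.5, -0.5], [-0.5, 0.5]],
    ffn=identity_ffn,
)
```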
Inspired by this, we propose appending a fully-connected FFN at the end of each layer of GNNs, defined as:

$$\mathrm{FFN}(h) = \mathrm{BN}\big(\sigma(h W_{\mathrm{FFN}_1}^l) W_{\mathrm{FFN}_2}^l + h\big), \quad (10)$$

where $W_{\mathrm{FFN}_1}^l$ and $W_{\mathrm{FFN}_2}^l$ are the trainable weight matrices of the FFN at the $l$-th GNN layer. The node embeddings output by the FFN are then computed as:

$$h_v^l = \mathrm{FFN}\Big(\mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \frac{1}{\sqrt{\hat{d}_u \hat{d}_v}} h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big) + h_v^{l-1}\Big). \quad (11)$$

3.6. Positional Encoding. Positional encoding (PE) was introduced in the Transformer model (Vaswani et al., 2017) to represent the positions of tokens within a sequence for language modeling. In GTs, PE is used to incorporate graph positional or structural information. The encodings are typically added or concatenated to the input node features $x_v$ before being fed into the GTs.

Table 1. Overview of the datasets used for graph-level tasks.

Dataset          # Graphs  Avg. # nodes  Avg. # edges  Task type
ZINC             12,000    23.2          24.9          Graph regression
MNIST            70,000    70.6          564.5         Graph classification
CIFAR10          60,000    117.6         941.1         Graph classification
PATTERN          14,000    118.9         3,039.3       Inductive node cls.
CLUSTER          12,000    117.2         2,150.9       Inductive node cls.
Peptides-func    15,535    150.9         307.3         Graph classification
Peptides-struct  15,535    150.9         307.3         Graph regression
PascalVOC-SP     11,355    479.4         2,710.5       Inductive node cls.
COCO-SP          123,286   476.9         2,693.7       Inductive node cls.
MalNet-Tiny      5,000     1,410.3       2,859.9       Graph classification
ogbg-molhiv      41,127    25.5          27.5          Graph classification
ogbg-molpcba     437,929   26.0          28.1          Graph classification
ogbg-ppa         158,100   243.4         2,266.1       Graph classification
ogbg-code2       452,741   125.2         124.2         Graph classification
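One such structural encoding, RWSE, records for each node the probability that a k-step uniform random walk starting at the node returns to it. A minimal sketch via repeated multiplication with the row-normalized adjacency matrix; `rwse` is illustrative, not the repository's implementation.

```python
def rwse(adj, k_max):
    """Random Walk Structural Encoding: for each node v, the return
    probabilities of k-step random walks starting at v, k = 1..k_max,
    i.e. the diagonal entries of (D^{-1} A)^k.
    """
    n = len(adj)
    # row-normalized adjacency (uniform transition probabilities)
    P = [[(1.0 / len(adj[i]) if j in adj[i] else 0.0) for j in range(n)]
         for i in range(n)]
    enc = [[] for _ in range(n)]
    Pk = [row[:] for row in P]
    for _ in range(k_max):
        for v in range(n):
            enc[v].append(Pk[v][v])
        # Pk <- Pk @ P
        Pk = [[sum(Pk[i][t] * P[t][j] for t in range(n)) for j in range(n)]
              for i in range(n)]
    return enc

# Triangle: a walk can never return in 1 step; it returns with
# probability 1/2 in 2 steps and 1/4 in 3 steps.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
codes = rwse(adj, k_max=3)
```

By symmetry, every node of the triangle receives the same encoding, which is what makes RWSE a structural (rather than positional) signature.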
Various PE methods have been proposed, such as Laplacian Positional Encoding (LapPE) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), Weisfeiler-Lehman Positional Encoding (WLPE) (Zhang et al., 2020), Random Walk Structural Encoding (RWSE) (Li et al., 2020; Dwivedi et al., 2021; Rampášek et al., 2022), Learnable Structural and Positional Encodings (LSPE) (Dwivedi et al., 2021), and Relative Random Walk Probabilities (RRWP) (Ma et al., 2023). Following this practice, we use RWSE, one of the most efficient PE methods, to improve the performance of GNNs as follows:

x_v = [ x_v ∥ x_v^RWSE ] W_PE,   (12)

where [· ∥ ·] denotes concatenation, x_v^RWSE represents the RWSE of node v, and W_PE is the trainable weight matrix.

4. Assessment: Experimental Setup

Datasets (Table 1). We use widely adopted graph-level datasets in our experiments, including ZINC, MNIST, CIFAR10, PATTERN, and CLUSTER from the GNN Benchmark (Dwivedi et al., 2023); Peptides-func, Peptides-struct, PascalVOC-SP, COCO-SP, and MalNet-Tiny from the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021); and ogbg-molhiv, ogbg-molpcba, ogbg-ppa, and ogbg-code2 from the Open Graph Benchmark (OGB) (Hu et al., 2020). We follow their respective standard evaluation protocols, including the splits and metrics. For further details, refer to Appendix A.2.

Baselines. Our main focus lies on classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018; Hu et al., 2019), and GatedGCN (Bresson & Laurent, 2017); the SOTA GTs: GT (2020), GraphTrans (2021), SAN (2021), Graphormer (2021), SAT (2022), EGT (2022), GraphGPS (2022; 2023), GRPE (2022), Graphormer-URPE (2022), Graphormer-GD (2023), Specformer (2023), LGI-GT (2023), GPTrans-Nano (2023b), Graph ViT/MLP-Mixer (2023), NAGphormer (2023a), DIFFormer (2023), MGT (2023), DRew (2023), Exphormer (2023), GRIT (2023), GRED (2024), GEAET (2024), Subgraphormer (2024), TIGT (2024), GECO (2024), GPNN (2024), and Cluster-GT (2024a); and the SOTA graph state space models (GSSMs): GMN (2024), Graph-Mamba (2024), and GSSC (2024b). Furthermore, various other GTs exist in related surveys (Hoang et al., 2024; Shehzad et al., 2024; Müller et al., 2023), empirically shown to be inferior to the GTs we compare against for graph-level tasks. We report the performance results of baselines primarily from (Rampášek et al., 2022; Tönshoff et al., 2023), with the remainder obtained from their respective original papers or official leaderboards whenever possible, as those results are obtained by well-tuned models.

Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence

Table 2. Test performance on five benchmarks from (Dwivedi et al., 2023) (%). Shown is the mean ± s.d. of 5 runs with different random seeds. + denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for ZINC, PATTERN, and CLUSTER, and ∼100K for MNIST and CIFAR10. The top 1st, 2nd, and 3rd results are highlighted.

| | ZINC | MNIST | CIFAR10 | PATTERN | CLUSTER |
|---|---|---|---|---|---|
| # graphs | 12,000 | 70,000 | 60,000 | 14,000 | 12,000 |
| Avg. # nodes | 23.2 | 70.6 | 117.6 | 118.9 | 117.2 |
| Avg. # edges | 24.9 | 564.5 | 941.1 | 3039.3 | 2150.9 |
| Metric | MAE ↓ | Accuracy ↑ | Accuracy ↑ | Accuracy ↑ | Accuracy ↑ |
| GT (2020) | 0.226 ±0.014 | 90.831 ±0.161 | 59.753 ±0.293 | 84.808 ±0.068 | 73.169 ±0.622 |
| SAN (2021) | 0.139 ±0.006 | – | – | 86.581 ±0.037 | 76.691 ±0.650 |
| Graphormer (2021) | 0.122 ±0.006 | – | – | – | – |
| SAT (2022) | 0.094 ±0.008 | – | – | 86.848 ±0.037 | 77.856 ±0.104 |
| EGT (2022) | 0.108 ±0.009 | 98.173 ±0.087 | 68.702 ±0.409 | 86.821 ±0.020 | 79.232 ±0.348 |
| GraphGPS (2022) | 0.070 ±0.004 | 98.051 ±0.126 | 72.298 ±0.356 | 86.685 ±0.059 | 78.016 ±0.180 |
| GRPE (2022) | 0.094 ±0.002 | – | – | 87.020 ±0.042 | – |
| Graphormer-URPE (2022) | 0.086 ±0.007 | – | – | – | – |
| Graphormer-GD (2023) | 0.081 ±0.009 | – | – | – | – |
| Specformer (2023) | 0.066 ±0.003 | – | – | – | – |
| LGI-GT (2023) | – | – | – | 86.930 ±0.040 | – |
| GPTrans-Nano (2023b) | – | – | – | 86.731 ±0.085 | – |
| Graph ViT/MLP-Mixer (2023) | 0.073 ±0.001 | 98.460 ±0.090 | 73.960 ±0.330 | – | – |
| Exphormer (2023) | – | 98.414 ±0.038 | 74.754 ±0.194 | 86.734 ±0.008 | – |
| GRIT (2023) | 0.059 ±0.002 | 98.108 ±0.111 | 76.468 ±0.881 | 87.196 ±0.076 | 80.026 ±0.277 |
| GRED (2024) | 0.077 ±0.002 | 98.383 ±0.012 | 76.853 ±0.185 | 86.759 ±0.020 | 78.495 ±0.103 |
| GEAET (2024) | – | 98.513 ±0.086 | 76.634 ±0.427 | 86.993 ±0.026 | – |
| TIGT (2024) | 0.057 ±0.002 | 98.231 ±0.132 | 73.963 ±0.361 | 86.681 ±0.062 | 78.025 ±0.223 |
| Cluster-GT (2024a) | 0.071 ±0.004 | – | – | – | – |
| GMN (2024) | – | 98.391 ±0.182 | 74.560 ±0.381 | 87.090 ±1.260 | – |
| Graph-Mamba (2024) | – | 98.420 ±0.080 | 73.700 ±0.340 | 86.710 ±0.050 | 76.800 ±0.360 |
| GCN | 0.367 ±0.011 | 90.705 ±0.218 | 55.710 ±0.381 | 71.892 ±0.334 | 68.498 ±0.976 |
| GCN+ | 0.076 ±0.009 (79.3%↓) | 98.382 ±0.095 (8.5%↑) | 69.824 ±0.413 (25.4%↑) | 87.021 ±0.095 (21.1%↑) | 77.109 ±0.872 (12.6%↑) |
| GIN | 0.526 ±0.051 | 96.485 ±0.252 | 55.255 ±1.527 | 85.387 ±0.136 | 64.716 ±1.553 |
| GIN+ | 0.065 ±0.004 (87.6%↓) | 98.285 ±0.103 (1.9%↑) | 69.592 ±0.287 (25.9%↑) | 86.842 ±0.048 (1.7%↑) | 74.794 ±0.213 (15.6%↑) |
| GatedGCN | 0.282 ±0.015 | 97.340 ±0.143 | 67.312 ±0.311 | 85.568 ±0.088 | 73.840 ±0.326 |
| GatedGCN+ | 0.077 ±0.005 (72.7%↓) | 98.712 ±0.137 (1.4%↑) | 77.218 ±0.381 (14.7%↑) | 87.029 ±0.037 (1.7%↑) | 79.128 ±0.235 (7.1%↑) |
| Time (epoch) of GraphGPS | 21s | 76s | 64s | 32s | 86s |
| Time (epoch) of GCN+ | 7s | 60s | 40s | 19s | 29s |

Hyperparameter Configurations. We conduct hyperparameter tuning on the 3 classic GNNs, consistent with the hyperparameter search space of GraphGPS (Rampášek et al., 2022; Tönshoff et al., 2023). Specifically, we utilize the AdamW optimizer (Loshchilov, 2017) with a learning rate from {0.0001, 0.0005, 0.001} and an epoch limit of 2000. As discussed in Section 3, we focus on whether to use the edge feature module, normalization (BN), residual connections, FFN, and PE (RWSE), with dropout rates from {0.05, 0.1, 0.15, 0.2, 0.3} and the number of layers from 3 to 20. Considering the large number of hyperparameters and datasets, we do not perform an exhaustive search. Additionally, we retrain baseline GTs using the same hyperparameter search space and training environments as the classic GNNs.
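As a concrete illustration of the RWSE concatenation in Eq. (12), the following minimal numpy sketch computes the k-step random-walk return probabilities for a toy graph and appends them to the node features. The `rwse` helper, toy adjacency matrix, and weight shapes are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def rwse(A: np.ndarray, k: int) -> np.ndarray:
    """Random Walk Structural Encoding: for each node v, the return
    probabilities (P^1)_vv, ..., (P^k)_vv of a random walk with
    transition matrix P = D^{-1} A."""
    deg = A.sum(axis=1)
    P = A / np.maximum(deg, 1.0)[:, None]   # row-stochastic transition matrix
    Pk = np.eye(A.shape[0])
    feats = []
    for _ in range(k):
        Pk = Pk @ P
        feats.append(np.diag(Pk))           # self-return probability at step t
    return np.stack(feats, axis=1)          # shape (n_nodes, k)

# Eq. (12): x_v <- [ x_v || x_v^RWSE ] W_PE
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy 4-node graph, no self-loops
X = rng.normal(size=(4, 8))                 # node features
pe = rwse(A, k=3)
W_pe = rng.normal(size=(8 + 3, 8))          # trainable in practice; random here
X_out = np.concatenate([X, pe], axis=1) @ W_pe
print(X_out.shape)  # (4, 8)
```

Node 3 has degree 1 and neighbors only node 2 (degree 3), so its 2-step return probability is 1 · 1/3 = 1/3, which the sketch reproduces.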
Since the retrained results did not surpass those in their original papers, we present the results from those sources. GNN+ denotes the enhanced version. We report mean scores and standard deviations over 5 independent runs with different random seeds. Detailed hyperparameters are provided in Appendix A.

5. Assessment: Results and Findings

5.1. Overall Performance

We evaluate the performance of the enhanced versions of 3 classic GNNs across 14 well-known graph-level datasets. The enhanced versions of classic GNNs achieved state-of-the-art performance, ranking in the top three across the 14 datasets, including first place in 8 of them, while also demonstrating superior efficiency. This suggests that the GNN+ framework effectively harnesses the potential of classic GNNs for graph-level tasks and successfully mitigates their inherent limitations.

Table 3. Test performance on five datasets from the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). + denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for all.

| | Peptides-func | Peptides-struct | PascalVOC-SP | COCO-SP | MalNet-Tiny |
|---|---|---|---|---|---|
| # graphs | 15,535 | 15,535 | 11,355 | 123,286 | 5,000 |
| Avg. # nodes | 150.9 | 150.9 | 479.4 | 476.9 | 1,410.3 |
| Avg. # edges | 307.3 | 307.3 | 2,710.5 | 2,693.7 | 2,859.9 |
| Metric | Avg. Precision ↑ | MAE ↓ | F1 score ↑ | F1 score ↑ | Accuracy ↑ |
| GT (2020) | 0.6326 ±0.0126 | 0.2529 ±0.0016 | 0.2694 ±0.0098 | 0.2618 ±0.0031 | – |
| SAN (2021) | 0.6439 ±0.0075 | 0.2545 ±0.0012 | 0.3230 ±0.0039 | 0.2592 ±0.0158 | – |
| GraphGPS (2022) | 0.6535 ±0.0041 | 0.2500 ±0.0005 | 0.3748 ±0.0109 | 0.3412 ±0.0044 | 0.9350 ±0.0041 |
| GraphGPS (2023) | 0.6534 ±0.0091 | 0.2509 ±0.0014 | 0.4440 ±0.0065 | 0.3884 ±0.0055 | 0.9350 ±0.0041 |
| NAGphormer (2023a) | – | – | 0.4006 ±0.0061 | 0.3458 ±0.0070 | – |
| DIFFormer (2023) | – | – | 0.3988 ±0.0045 | 0.3620 ±0.0012 | – |
| MGT (2023) | 0.6817 ±0.0064 | 0.2453 ±0.0025 | – | – | – |
| DRew (2023) | 0.7150 ±0.0044 | 0.2536 ±0.0015 | 0.3314 ±0.0024 | – | – |
| Graph ViT/MLP-Mixer (2023) | 0.6970 ±0.0080 | 0.2449 ±0.0016 | – | – | – |
| Exphormer (2023) | 0.6258 ±0.0092 | 0.2512 ±0.0025 | 0.3446 ±0.0064 | 0.3430 ±0.0108 | 0.9402 ±0.0021 |
| GRIT (2023) | 0.6988 ±0.0082 | 0.2460 ±0.0012 | – | – | – |
| Subgraphormer (2024) | 0.6415 ±0.0052 | 0.2475 ±0.0007 | – | – | – |
| GRED (2024) | 0.7133 ±0.0011 | 0.2455 ±0.0013 | – | – | – |
| GEAET (2024) | 0.6485 ±0.0035 | 0.2547 ±0.0009 | 0.3933 ±0.0027 | 0.3219 ±0.0052 | – |
| TIGT (2024) | 0.6679 ±0.0074 | 0.2485 ±0.0015 | – | – | – |
| GECO (2024) | 0.6975 ±0.0025 | 0.2464 ±0.0009 | 0.4210 ±0.0080 | 0.3320 ±0.0032 | – |
| GPNN (2024) | 0.6955 ±0.0057 | 0.2454 ±0.0003 | – | – | – |
| Graph-Mamba (2024) | 0.6739 ±0.0087 | 0.2478 ±0.0016 | 0.4191 ±0.0126 | 0.3960 ±0.0175 | 0.9340 ±0.0027 |
| GSSC (2024b) | 0.7081 ±0.0062 | 0.2459 ±0.0020 | 0.4561 ±0.0039 | – | 0.9406 ±0.0064 |
| GCN | 0.6860 ±0.0050 | 0.2460 ±0.0007 | 0.2078 ±0.0031 | 0.1338 ±0.0007 | 0.8100 ±0.0081 |
| GCN+ | 0.7261 ±0.0067 (5.9%↑) | 0.2421 ±0.0016 (1.6%↓) | 0.3357 ±0.0087 (62.0%↑) | 0.2733 ±0.0041 (104.9%↑) | 0.9354 ±0.0045 (15.5%↑) |
| GIN | 0.6621 ±0.0067 | 0.2473 ±0.0017 | 0.2718 ±0.0054 | 0.2125 ±0.0009 | 0.8898 ±0.0055 |
| GIN+ | 0.7059 ±0.0089 (6.6%↑) | 0.2429 ±0.0019 (1.8%↓) | 0.3189 ±0.0105 (17.3%↑) | 0.2483 ±0.0046 (16.9%↑) | 0.9325 ±0.0040 (4.8%↑) |
| GatedGCN | 0.6765 ±0.0047 | 0.2477 ±0.0009 | 0.3880 ±0.0040 | 0.2922 ±0.0018 | 0.9223 ±0.0065 |
| GatedGCN+ | 0.7006 ±0.0033 (3.6%↑) | 0.2431 ±0.0020 (1.9%↓) | 0.4263 ±0.0057 (9.9%↑) | 0.3802 ±0.0015 (30.1%↑) | 0.9460 ±0.0057 (2.6%↑) |
| Time (epoch) of GraphGPS | 6s | 6s | 17s | 213s | 46s |
| Time (epoch) of GCN+ | 6s | 6s | 12s | 162s | 6s |

GNN Benchmark, Table 2.
We observe that our GNN+ implementation substantially enhances the performance of classic GNNs, with the most significant improvements on ZINC, PATTERN, and CLUSTER. On MNIST and CIFAR10, GatedGCN+ outperforms SOTA models such as GEAET and GRED, securing top rankings.

Long-Range Graph Benchmark (LRGB), Table 3. The results reveal that classic GNNs can achieve strong performance across LRGB datasets. Specifically, GCN+ excels on the Peptides-func and Peptides-struct datasets. On the other hand, GatedGCN+ achieves the highest accuracy on MalNet-Tiny. Furthermore, on PascalVOC-SP and COCO-SP, GatedGCN+ significantly improves performance, securing the third-best model ranking overall. These results highlight the potential of classic GNNs in capturing long-range interactions in graph-level tasks.

Open Graph Benchmark (OGB), Table 4. Finally, we test our method on four OGB datasets. As shown in Table 4, GatedGCN+ consistently ranks among the top three models and achieves top performance on three out of the four datasets. On ogbg-ppa, GatedGCN+ shows an improvement of approximately 9%, ranking first on the OGB leaderboard. On ogbg-molhiv and ogbg-molpcba, GatedGCN+ even matches the performance of Graphormer and EGT pre-trained on other datasets. Additionally, on ogbg-code2, GatedGCN+ secures the third-highest performance, underscoring the potential of GNNs for large-scale OGB datasets.

5.2. Ablation Study

To examine the unique contributions of the different techniques used in GNN+, we conduct a series of ablation analyses by selectively removing elements such as the edge feature module (Edge.), normalization (Norm), dropout, residual connections (RC), FFN, and PE from GCN+, GIN+, and GatedGCN+. The effect of these ablations is assessed across the GNN Benchmark (see Table 5), LRGB, and OGB (see Table 6) datasets.
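The components ablated in this study can be pictured as one GNN+-style block: message passing, then normalization, residual connections, an FFN, and dropout. The following numpy sketch is our own simplified rendering (training-mode batch norm without affine parameters or running statistics, symmetric-normalized GCN propagation); the function names are illustrative and not taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def batch_norm(x, eps=1e-5):
    # Normalize node embeddings per feature (training-mode BN, no affine params).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def dropout(x, p, training=True):
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def gnn_plus_block(X, A_hat, W_msg, W1, W2, p=0.1, training=False):
    """One GNN+-style block: GCN propagation -> BN -> residual,
    followed by a two-layer FFN with another residual connection."""
    h = relu(batch_norm(A_hat @ X @ W_msg))
    h = X + dropout(h, p, training)               # residual around message passing
    f = relu(h @ W1) @ W2                         # feed-forward network
    return h + dropout(batch_norm(f), p, training)  # residual around FFN

n, d = 6, 16
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                          # symmetric adjacency + self-loops
deg = A.sum(axis=1)
A_hat = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]  # D^-1/2 A D^-1/2
W_msg, W1, W2 = [rng.normal(size=s) for s in [(d, d), (d, 2 * d), (2 * d, d)]]
out = gnn_plus_block(X, A_hat, W_msg, W1, W2)
print(out.shape)  # (6, 16)
```

Ablating a component in this picture simply means skipping the corresponding call, which is how we read the (-) rows in Tables 5 and 6.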
Our ablation study demonstrates that each module incorporated in GNN+ (edge feature integration, normalization, dropout, residual connections, FFN, and PE) is indispensable; the removal of any single component results in a degradation of overall performance.

Observation 1: The integration of edge features is particularly effective in molecular and image superpixel datasets, where these features carry critical information.
In molecular graphs such as ZINC and ogbg-molhiv, edge features represent chemical bond information, which is essential for molecular properties. Removing this module leads to a significant performance drop. In the protein networks of ogbg-ppa, edges represent normalized associations between proteins; removing the edge feature module results in a substantial accuracy decline, ranging from 0.5083 to 0.7310 for the classic GNNs. Similarly, in image superpixel datasets like CIFAR10, PascalVOC-SP, and COCO-SP, edge features encode spatial relationships between superpixels, which are crucial for maintaining image coherence. However, in code graphs such as ogbg-code2 and MalNet-Tiny, where edges represent call types, edge features are less relevant to the prediction tasks, and their removal has minimal impact.

Table 4. Test performance on four benchmarks from the Open Graph Benchmark (OGB) (Hu et al., 2020). + denotes the enhanced version, while the baseline results were obtained from their respective original papers. † indicates the use of additional pretraining datasets, included here for reference only and excluded from ranking.

| | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
|---|---|---|---|---|
| # graphs | 41,127 | 437,929 | 158,100 | 452,741 |
| Avg. # nodes | 25.5 | 26.0 | 243.4 | 125.2 |
| Avg. # edges | 27.5 | 28.1 | 2,266.1 | 124.2 |
| Metric | AUROC ↑ | Avg. Precision ↑ | Accuracy ↑ | F1 score ↑ |
| GT (2020) | – | – | 0.6454 ±0.0033 | 0.1670 ±0.0015 |
| GraphTrans (2021) | – | 0.2761 ±0.0029 | – | 0.1830 ±0.0024 |
| SAN (2021) | 0.7785 ±0.2470 | 0.2765 ±0.0042 | – | – |
| Graphormer (pre-trained) (2021) | 0.8051 ±0.0053† | – | – | – |
| SAT (2022) | – | – | 0.7522 ±0.0056 | 0.1937 ±0.0028 |
| EGT (pre-trained) (2022) | 0.8060 ±0.0065† | 0.2961 ±0.0024† | – | – |
| GraphGPS (2022) | 0.7880 ±0.0101 | 0.2907 ±0.0028 | 0.8015 ±0.0033 | 0.1894 ±0.0024 |
| Specformer (2023) | 0.7889 ±0.0124 | 0.2972 ±0.0023 | – | – |
| Graph ViT/MLP-Mixer (2023) | 0.7997 ±0.0102 | – | – | – |
| Exphormer (2023) | 0.7834 ±0.0044 | 0.2849 ±0.0025 | – | – |
| GRIT (2023) | 0.7835 ±0.0054 | 0.2362 ±0.0020 | – | – |
| Subgraphormer (2024) | 0.8038 ±0.0192 | – | – | – |
| GECO (2024) | 0.7980 ±0.0200 | 0.2961 ±0.0008 | 0.7982 ±0.0042 | 0.1915 ±0.0020 |
| GSSC (2024b) | 0.8035 ±0.0142 | – | – | – |
| GCN | 0.7606 ±0.0097 | 0.2020 ±0.0024 | 0.6839 ±0.0084 | 0.1507 ±0.0018 |
| GCN+ | 0.8012 ±0.0124 (5.4%↑) | 0.2721 ±0.0046 (34.7%↑) | 0.8077 ±0.0041 (18.1%↑) | 0.1787 ±0.0026 (18.6%↑) |
| GIN | 0.7835 ±0.0125 | 0.2266 ±0.0028 | 0.6892 ±0.0100 | 0.1495 ±0.0023 |
| GIN+ | 0.7928 ±0.0099 (1.2%↑) | 0.2703 ±0.0024 (19.3%↑) | 0.8107 ±0.0053 (17.7%↑) | 0.1803 ±0.0019 (20.6%↑) |
| GatedGCN | 0.7687 ±0.0136 | 0.2670 ±0.0020 | 0.7531 ±0.0083 | 0.1606 ±0.0015 |
| GatedGCN+ | 0.8040 ±0.0164 (4.6%↑) | 0.2981 ±0.0024 (11.6%↑) | 0.8258 ±0.0055 (9.7%↑) | 0.1896 ±0.0024 (18.1%↑) |
| Time (epoch) of GraphGPS | 96s | 196s | 276s | 1919s |
| Time (epoch) of GCN+ | 16s | 91s | 178s | 476s |

Table 5. Ablation study on the GNN Benchmark (Dwivedi et al., 2023) (%). – indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance.

| | ZINC (MAE ↓) | MNIST (Acc. ↑) | CIFAR10 (Acc. ↑) | PATTERN (Acc. ↑) | CLUSTER (Acc. ↑) |
|---|---|---|---|---|---|
| GCN+ | 0.076 ±0.009 | 98.382 ±0.095 | 69.824 ±0.413 | 87.021 ±0.095 | 77.109 ±0.872 |
| (-) Edge. | 0.135 ±0.004 | 98.153 ±0.042 | 68.256 ±0.357 | 86.854 ±0.054 | – |
| (-) Norm | 0.107 ±0.011 | 97.886 ±0.066 | 60.765 ±0.829 | 52.769 ±0.874 | 16.563 ±0.134 |
| (-) Dropout | – | 97.897 ±0.071 | 65.693 ±0.461 | 86.764 ±0.045 | 74.926 ±0.469 |
| (-) RC | 0.159 ±0.016 | 95.929 ±0.169 | 58.186 ±0.295 | 86.059 ±0.274 | 16.508 ±0.615 |
| (-) FFN | 0.132 ±0.021 | 97.174 ±0.063 | 63.573 ±0.346 | 86.746 ±0.088 | 72.606 ±1.243 |
| (-) PE | 0.127 ±0.010 | – | – | 85.597 ±0.241 | 75.568 ±1.147 |
| GIN+ | 0.065 ±0.004 | 98.285 ±0.103 | 69.592 ±0.287 | 86.842 ±0.048 | 74.794 ±0.213 |
| (-) Edge. | 0.122 ±0.009 | 97.655 ±0.075 | 68.196 ±0.107 | 86.714 ±0.036 | 65.895 ±3.425 |
| (-) Norm | 0.096 ±0.006 | 97.695 ±0.065 | 64.918 ±0.059 | 86.815 ±0.855 | 72.119 ±0.359 |
| (-) Dropout | – | 98.214 ±0.064 | 66.638 ±0.873 | 86.836 ±0.053 | 73.316 ±0.355 |
| (-) RC | 0.137 ±0.031 | 97.675 ±0.175 | 64.910 ±0.102 | 86.645 ±0.125 | 16.800 ±0.088 |
| (-) FFN | 0.104 ±0.003 | 11.350 ±0.008 | 60.582 ±0.395 | 58.511 ±0.016 | 62.175 ±2.895 |
| (-) PE | 0.123 ±0.014 | – | – | 86.592 ±0.049 | 73.925 ±0.165 |
| GatedGCN+ | 0.077 ±0.005 | 98.712 ±0.137 | 77.218 ±0.381 | 87.029 ±0.037 | 79.128 ±0.235 |
| (-) Edge. | 0.119 ±0.001 | 98.085 ±0.045 | 72.128 ±0.275 | 86.879 ±0.017 | 76.075 ±0.845 |
| (-) Norm | 0.088 ±0.003 | 98.275 ±0.045 | 71.995 ±0.445 | 86.942 ±0.023 | 78.495 ±0.155 |
| (-) Dropout | 0.089 ±0.003 | 98.225 ±0.095 | 70.383 ±0.429 | 86.802 ±0.034 | 77.597 ±0.126 |
| (-) RC | 0.106 ±0.002 | 98.442 ±0.067 | 75.149 ±0.155 | 86.845 ±0.025 | 16.670 ±0.307 |
| (-) FFN | 0.098 ±0.005 | 98.438 ±0.151 | 76.243 ±0.131 | 86.935 ±0.025 | 78.975 ±0.145 |
| (-) PE | 0.174 ±0.009 | – | – | 85.595 ±0.065 | 77.515 ±0.265 |
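The edge feature module behind Observation 1 can be illustrated with a GatedGCN-style gate: each edge feature (e.g. a bond-type embedding in ZINC) modulates the message sent along that edge. This is a minimal sketch under our own simplifications; the helper names, toy graph, and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_gated_aggregate(X, E, edges, W_h, W_e):
    """Edge-gated aggregation: for each directed edge (u, v) with feature
    e_uv, a per-channel gate sigmoid(e_uv W_e) scales u's message to v,
    and messages are averaged over in-edges."""
    n, d = X.shape
    out = np.zeros_like(X)
    norm = np.zeros((n, 1))
    for (u, v), e_uv in zip(edges, E):
        gate = sigmoid(e_uv @ W_e)       # gate derived from the edge feature
        out[v] += gate * (X[u] @ W_h)    # gated message from u to v
        norm[v] += 1.0
    return out / np.maximum(norm, 1.0)

rng = np.random.default_rng(3)
d, d_e = 8, 4
X = rng.normal(size=(5, d))              # node features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
E = rng.normal(size=(len(edges), d_e))   # stand-in edge (bond-type) embeddings
W_h = rng.normal(size=(d, d))
W_e = rng.normal(size=(d_e, d))
H = edge_gated_aggregate(X, E, edges, W_h, W_e)
print(H.shape)  # (5, 8)
```

Dropping the gate (replacing it with 1) is one way to read the (-) Edge. rows: the message passing survives, but the bond or spatial information it carried is gone.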
Observation 2: Normalization tends to have a greater impact on larger-scale datasets, whereas its impact is less significant on smaller datasets.
For large-scale datasets such as CIFAR10, COCO-SP, and the OGB datasets, removing normalization leads to significant performance drops. Specifically, on ogbg-ppa, which has 158,100 graphs, ablating normalization results in an accuracy drop of around 15% for the three classic GNNs. This result is consistent with Luo et al. (2024a), who found that normalization is more important for GNNs in node classification on large graphs. In such datasets, where node feature distributions are more complex, normalizing node embeddings is essential for stabilizing the training process.

Observation 3: Dropout proves advantageous for most datasets, with a very low dropout rate being sufficient and optimal.
Our analysis highlights the crucial role of dropout in maintaining the performance of classic GNNs on the GNN Benchmark, LRGB, and large-scale OGB datasets, with its ablation causing significant declines, for instance an 8.8% relative decrease for GatedGCN+ on CIFAR10 and a 20.4% relative decrease on PascalVOC-SP. This trend continues on the large-scale OGB datasets, where removing dropout results in a 5–13% performance drop across the 3 classic GNNs on ogbg-molpcba. Notably, 97% of the optimal dropout rates are ≤0.2, and 64% are ≤0.1, indicating that a very low dropout rate is both sufficient and optimal for graph-level tasks. Interestingly, this finding for graph-level tasks contrasts with Luo et al. (2024a)'s observations for node-level tasks, where a higher dropout rate is typically required.

Table 6. Ablation study on LRGB and OGB datasets. – indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance.

| | Peptides-func (AP ↑) | Peptides-struct (MAE ↓) | PascalVOC-SP (F1 ↑) | COCO-SP (F1 ↑) | MalNet-Tiny (Acc. ↑) | ogbg-molhiv (AUROC ↑) | ogbg-molpcba (AP ↑) | ogbg-ppa (Acc. ↑) | ogbg-code2 (F1 ↑) |
|---|---|---|---|---|---|---|---|---|---|
| GCN+ | 0.7261 ±0.0067 | 0.2421 ±0.0016 | 0.3357 ±0.0087 | 0.2733 ±0.0041 | 0.9354 ±0.0045 | 0.8012 ±0.0124 | 0.2721 ±0.0046 | 0.8077 ±0.0041 | 0.1787 ±0.0026 |
| (-) Edge. | 0.7191 ±0.0036 | – | 0.2942 ±0.0043 | 0.2219 ±0.0060 | 0.9292 ±0.0034 | 0.7714 ±0.0204 | 0.2628 ±0.0019 | 0.2994 ±0.0062 | 0.1785 ±0.0033 |
| (-) Norm | 0.7107 ±0.0027 | 0.2509 ±0.0026 | 0.1802 ±0.0111 | 0.2332 ±0.0079 | 0.9236 ±0.0054 | 0.7753 ±0.0049 | 0.2528 ±0.0016 | 0.6705 ±0.0104 | 0.1679 ±0.0027 |
| (-) Dropout | 0.6748 ±0.0055 | 0.2549 ±0.0025 | 0.3072 ±0.0069 | 0.2601 ±0.0046 | – | 0.7431 ±0.0185 | 0.2405 ±0.0047 | 0.7893 ±0.0052 | 0.1641 ±0.0043 |
| (-) RC | – | – | 0.2734 ±0.0036 | 0.1948 ±0.0096 | 0.8916 ±0.0048 | – | – | 0.7520 ±0.0157 | 0.1785 ±0.0029 |
| (-) FFN | – | – | 0.2786 ±0.0068 | 0.2314 ±0.0073 | 0.9118 ±0.0078 | 0.7432 ±0.0052 | 0.2621 ±0.0019 | 0.7672 ±0.0071 | 0.1594 ±0.0020 |
| (-) PE | 0.7069 ±0.0093 | 0.2447 ±0.0015 | – | – | – | 0.7593 ±0.0051 | 0.2667 ±0.0034 | – | – |
| GIN+ | 0.7059 ±0.0089 | 0.2429 ±0.0019 | 0.3189 ±0.0105 | 0.2483 ±0.0046 | 0.9325 ±0.0040 | 0.7928 ±0.0099 | 0.2703 ±0.0024 | 0.8107 ±0.0053 | 0.1803 ±0.0019 |
| (-) Edge. | 0.7033 ±0.0015 | 0.2442 ±0.0028 | 0.2956 ±0.0047 | 0.2259 ±0.0053 | 0.9286 ±0.0049 | 0.7597 ±0.0103 | 0.2702 ±0.0021 | 0.2789 ±0.0031 | 0.1752 ±0.0020 |
| (-) Norm | 0.6934 ±0.0077 | 0.2444 ±0.0015 | 0.2707 ±0.0037 | 0.2244 ±0.0063 | 0.9322 ±0.0025 | 0.7874 ±0.0114 | 0.2556 ±0.0026 | 0.6484 ±0.0246 | 0.1722 ±0.0034 |
| (-) Dropout | 0.6384 ±0.0094 | 0.2531 ±0.0030 | 0.3153 ±0.0113 | – | – | – | 0.2545 ±0.0068 | 0.7673 ±0.0059 | 0.1730 ±0.0018 |
| (-) RC | 0.6975 ±0.0038 | 0.2527 ±0.0015 | 0.2350 ±0.0044 | 0.1741 ±0.0085 | 0.9150 ±0.0047 | 0.7733 ±0.0122 | 0.1454 ±0.0061 | – | 0.1617 ±0.0026 |
| (-) FFN | – | – | 0.2393 ±0.0049 | 0.1599 ±0.0081 | 0.8944 ±0.0074 | – | 0.2534 ±0.0033 | 0.6676 ±0.0039 | 0.1491 ±0.0016 |
| (-) PE | 0.6855 ±0.0027 | 0.2455 ±0.0019 | 0.3141 ±0.0031 | – | – | 0.7791 ±0.0268 | 0.2601 ±0.0023 | – | – |
| GatedGCN+ | 0.7006 ±0.0033 | 0.2431 ±0.0020 | 0.4263 ±0.0057 | 0.3802 ±0.0015 | 0.9460 ±0.0057 | 0.8040 ±0.0164 | 0.2981 ±0.0024 | 0.8258 ±0.0055 | 0.1896 ±0.0024 |
| (-) Edge. | 0.6882 ±0.0028 | 0.2466 ±0.0018 | 0.3764 ±0.0117 | 0.3172 ±0.0109 | 0.9372 ±0.0062 | 0.7831 ±0.0157 | 0.2951 ±0.0028 | 0.0948 ±0.0000 | 0.1891 ±0.0021 |
| (-) Norm | 0.6733 ±0.0026 | 0.2474 ±0.0015 | 0.3628 ±0.0043 | 0.3527 ±0.0051 | 0.9326 ±0.0056 | 0.7879 ±0.0178 | 0.2748 ±0.0012 | 0.6864 ±0.0165 | 0.1743 ±0.0026 |
| (-) Dropout | 0.6695 ±0.0101 | 0.2508 ±0.0014 | 0.3389 ±0.0066 | 0.3393 ±0.0051 | – | – | 0.2582 ±0.0036 | 0.8088 ±0.0062 | 0.1724 ±0.0027 |
| (-) RC | – | 0.2498 ±0.0034 | 0.4075 ±0.0052 | 0.3475 ±0.0064 | 0.9402 ±0.0054 | 0.7833 ±0.0177 | 0.2897 ±0.0016 | 0.8099 ±0.0053 | 0.1844 ±0.0025 |
| (-) FFN | – | – | – | 0.3508 ±0.0049 | 0.9364 ±0.0059 | – | 0.2875 ±0.0022 | – | 0.1718 ±0.0024 |
| (-) PE | 0.6729 ±0.0084 | 0.2461 ±0.0025 | 0.4052 ±0.0031 | – | – | 0.7771 ±0.0057 | 0.2813 ±0.0022 | – | – |

Observation 4: Residual connections are generally essential, except in shallow GNNs applied to small graphs.
Removing residual connections generally leads to significant performance drops across datasets, with the only exceptions found in the peptide datasets. Although similar in the number of nodes to CLUSTER and PATTERN, the peptide datasets involve GNNs with only 3-5 layers, while the others use deeper networks with over 10 layers. For shallow networks on small graphs, residual connections may not be as beneficial and can even hurt performance by disrupting feature flow. In contrast, deeper networks on larger graphs rely on residual connections to maintain gradient flow and enable stable, reliable long-range information exchange.
Observation 5: FFN is crucial for GIN+ and GCN+, greatly impacting their performance across datasets.
Ablating the FFN leads to substantial performance declines for GIN+ and GCN+ across almost all datasets, highlighting its essential role in graph-level tasks. Notably, on MNIST, removing the FFN leads to an 88% relative accuracy drop for GIN+. This is likely because the architectures of GIN+ and GCN+ rely heavily on the FFN for learning complex node feature representations. In contrast, GatedGCN+ uses gating mechanisms to adaptively adjust the importance of neighboring nodes' information, reducing the need for additional feature transformations. The only exceptions are observed on the peptide datasets, where the FFN is not used in any of the three models. This may be due to the shallow GNN architectures, where complex feature transformations are less necessary.

Observation 6: PE is particularly effective for small-scale datasets, but negligible for large-scale datasets.
Removing PE significantly reduces performance for classic GNNs on small-scale datasets like ZINC, PATTERN, CLUSTER, Peptides-func, and ogbg-molhiv, which contain only 10,000-40,000 graphs. By contrast, on large-scale datasets like ogbg-code2, ogbg-molpcba, ogbg-ppa, and COCO-SP (over 100,000 graphs), the impact of PE is less pronounced. This may be because smaller datasets rely more on PE to capture graph structure, whereas larger datasets benefit from the abundance of data, reducing the need for PE.

6. Conclusion

This study highlights the often-overlooked potential of classic GNNs in tackling graph-level tasks. By integrating six widely used techniques into a unified GNN+ framework, we enhance three classic GNNs for graph-level tasks. Evaluations on 14 benchmark datasets reveal that these enhanced GNNs match or outperform GTs, while also demonstrating greater efficiency.
These findings challenge the prevailing belief that GTs are inherently superior, reaffirming the capability of simple GNN structures as powerful models.

Impact Statements

This paper presents work whose goal is to advance the field of Graph Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205, 2020.
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bar-Shalom, G., Bevilacqua, B., and Maron, H. Subgraphormer: Unifying subgraph gnns and graph transformers via graph products. arXiv preprint arXiv:2402.08450, 2024.
Behrouz, A. and Hashemi, F. Graph mamba: Towards learning on graphs with state space models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 119–130, 2024.
Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. arXiv preprint arXiv:2303.01028, 2023.
Bresson, X. and Laurent, T. Residual gated graph convnets. arXiv preprint arXiv:1711.07553, 2017.
Cai, T., Luo, S., Xu, K., He, D., Liu, T.-Y., and Wang, L. Graphnorm: A principled approach to accelerating graph neural network training. In International Conference on Machine Learning, pp. 1204–1215. PMLR, 2021.
Chen, D., Lin, Y., Li, W., Li, P., Zhou, J., and Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3438–3445, 2020.
Chen, D., O'Bray, L., and Borgwardt, K. Structure-aware transformer for graph representation learning. In International Conference on Machine Learning, pp. 3469–3489.
PMLR, 2022.
Chen, J., Gao, K., Li, G., and He, K. NAGphormer: A tokenized graph transformer for node classification in large graphs. In The Eleventh International Conference on Learning Representations, 2023a. URL https://openreview.net/forum?id=8KYeilT3Ow.
Chen, Z., Tan, H., Wang, T., Shen, T., Lu, T., Peng, Q., Cheng, C., and Qi, Y. Graph propagation transformer for graph representation learning. arXiv preprint arXiv:2305.11424, 2023b.
Choi, Y. Y., Park, S. W., Lee, M., and Woo, Y. Topology-informed graph transformer. arXiv preprint arXiv:2402.02005, 2024.
Ding, Y., Orvieto, A., He, B., and Hofmann, T. Recurrent distance-encoding neural networks for graph representation learning, 2024. URL https://openreview.net/forum?id=lNIj5FdXsC.
Dwivedi, V. P. and Bresson, X. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020.
Dwivedi, V. P., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Graph neural networks with learnable structural and positional representations. In International Conference on Learning Representations, 2021.
Dwivedi, V. P., Rampášek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph benchmark. arXiv preprint arXiv:2206.08164, 2022.
Dwivedi, V. P., Joshi, C. K., Luu, A. T., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking graph neural networks. Journal of Machine Learning Research, 24(43):1–48, 2023.
Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019.
Freitas, S. and Dong, Y. A large-scale database for graph representation learning. Advances in Neural Information Processing Systems, 2021.
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. PMLR, 2017.
Gutteridge, B., Dong, X., Bronstein, M. M., and Di Giovanni, F.
Drew: Dynamically rewired message passing with delay. In International Conference on Machine Learning, pp. 12252–12267. PMLR, 2023.
Hamilton, W., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y., and Bresson, X. A generalization of vit/mlp-mixer to graphs. In International Conference on Machine Learning, pp. 12724–12745. PMLR, 2023.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Hoang, V. T., Lee, O., et al. A survey on structure-preserving graph transformers. arXiv preprint arXiv:2401.16176, 2024.
Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265, 2019.
Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133, 2020.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.
Huang, S., Song, Y., Zhou, J., and Lin, Z. Cluster-wise graph transformer with dual-granularity kernelized attention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://openreview.net/forum?id=3j2nasmKkP.
Huang, Y., Miao, S., and Li, P.
What can we learn from state space models for machine learning on graphs? arXiv preprint arXiv:2406.05815, 2024b.
Hussain, M. S., Zaki, M. J., and Subramanian, D. Global self-attention as a replacement for graph convolution. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 655–665, 2022.
Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR, 2015.
Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.
Kreuzer, D., Beaini, D., Hamilton, W., Létourneau, V., and Tossou, P. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34:21618–21629, 2021.
Li, G., Muller, M., Thabet, A., and Ghanem, B. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9267–9276, 2019.
Li, P., Wang, Y., Wang, H., and Leskovec, J. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33:4465–4478, 2020.
Li, Q., Han, Z., and Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Liang, J., Chen, M., and Liang, J. Graph external attention enhanced transformer. arXiv preprint arXiv:2405.21061, 2024.
Lin, C., Ma, L., Chen, Y., Ouyang, W., Bronstein, M. M., and Torr, P. Understanding graph transformers by generalized propagation, 2024. URL https://openreview.net/forum?id=JfjduOxrTY.
Loshchilov, I. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Luo, S., Li, S., Zheng, S., Liu, T.-Y., Wang, L., and He, D.
Your transformer may not be as powerful as you expect. Advances in Neural Information Processing Systems, 35:4301–4315, 2022.
Luo, Y., Shi, L., and Thost, V. Improving self-supervised molecular representation learning using persistent homology. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a. URL https://openreview.net/forum?id=wEiUGpcr0M.
Luo, Y., Shi, L., Xu, M., Ji, Y., Xiao, F., Hu, C., and Shan, Z. Impact-oriented contextual scholar profiling using self-citation graphs. arXiv preprint arXiv:2304.12217, 2023b.
Luo, Y., Thost, V., and Shi, L. Transformers over directed acyclic graphs. In Thirty-seventh Conference on Neural Information Processing Systems, 2023c. URL https://openreview.net/forum?id=g49s1N5nmO.
Luo, Y., Shi, L., and Wu, X.-M. Classic GNNs are strong baselines: Reassessing GNNs for node classification. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024a. URL https://openreview.net/forum?id=xkljKdGe4E.
Luo, Y., Thost, V., and Shi, L. Transformers over directed acyclic graphs. Advances in Neural Information Processing Systems, 36, 2024b.
Luo, Y., Li, H., Liu, Q., Shi, L., and Wu, X.-M. Node identifiers: Compact, discrete representations for efficient graph learning. In The Thirteenth International Conference on Learning Representations, 2025a. URL https://openreview.net/forum?id=t9lS1lX9FQ.
Luo, Y., Wu, X.-M., and Zhu, H. Beyond random masking: When dropout meets graph convolutional networks. In The Thirteenth International Conference on Learning Representations, 2025b. URL https://openreview.net/forum?id=PwxYoMvmvy.
Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P., and Lim, S.-N. Graph inductive biases in transformers without message passing. arXiv preprint arXiv:2305.17589, 2023.
Min, E., Chen, R., Bian, Y., Xu, T., Zhao, K., Huang, W., Zhao, P., Huang, J., Ananiadou, S., and Rong, Y. Transformer for graphs: An overview from architecture perspective. arXiv preprint arXiv:2202.08455, 2022.
Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019.
Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020.
Müller, L., Galkin, M., Morris, C., and Rampášek, L. Attending to graph transformers. arXiv preprint arXiv:2302.04181, 2023.
Ngo, N. K., Hy, T. S., and Kondor, R. Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics, 159(3), 2023.
Niepert, M., Ahmed, M., and Kutzkov, K. Learning convolutional neural networks for graphs. In International Conference on Machine Learning, pp. 2014–2023. PMLR, 2016.
Park, W., Chang, W., Lee, D., Kim, J., and Hwang, S.-w. Grpe: Relative positional encoding for graph transformer. arXiv preprint arXiv:2201.12787, 2022.
Rampášek, L., Galkin, M., Dwivedi, V. P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scalable graph transformer. arXiv preprint arXiv:2205.12454, 2022.
Sancak, K., Hua, Z., Fang, J., Xie, Y., Malevich, A., Long, B., Balin, M. F., and Çatalyürek, Ü. V. A scalable and effective alternative to graph transformers. arXiv preprint arXiv:2406.12059, 2024.
Shehzad, A., Xia, F., Abid, S., Peng, C., Yu, S., Zhang, D., and Verspoor, K. Graph transformers: A survey. arXiv preprint arXiv:2407.09777, 2024.
Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K.
Exphormer: Sparse transformers for graphs. arXiv preprint arXiv:2303.06147, 2023.
Shu, J., Xi, B., Li, Y., Wu, F., Kamhoua, C., and Ma, J. Understanding dropout for graph neural networks. In Companion Proceedings of the Web Conference 2022, pp. 1128–1138, 2022.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
Tang, J., Sun, J., Wang, C., and Yang, Z. Social influence analysis in large-scale networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 807–816, 2009.
Tönshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? Reassessing the long-range graph benchmark. arXiv preprint arXiv:2309.00367, 2023.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., and Bengio, Y. Graph attention networks. In International Conference on Learning Representations, 2018.
Wang, C., Tsepa, O., Ma, J., and Wang, B. Graph-mamba: Towards long-range graph sequence modeling with selective state spaces. arXiv preprint arXiv:2402.00789, 2024.
Wu, Q., Yang, C., Zhao, W., He, Y., Wipf, D., and Yan, J. DIFFormer: Scalable (graph) transformers induced by energy constrained diffusion. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=j6zUzrapY3L.
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2020.
Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., and Stoica, I.
Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems, 34:13266–13279, 2021.
Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
Yang, Z., Cohen, W., and Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pp. 40–48. PMLR, 2016.
Yin, S. and Zhong, G. Lgi-gt: Graph transformers with local and global operators interleaving. 2023.
Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y., and Liu, T.-Y. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34:28877–28888, 2021.
Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. How transferable are features in deep neural networks? Advances in neural information processing systems, 27, 2014.
Yun, S., Jeong, M., Kim, R., Kang, J., and Kim, H. J. Graph transformer networks. Advances in neural information processing systems, 32, 2019.
Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of GNNs via graph biconnectivity. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=r9hNv76KoT3.
Zhang, J., Zhang, H., Xia, C., and Sun, L. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.
A. Datasets and Experimental Details
A.1. Computing Environment
Our implementation is based on PyG (Fey & Lenssen, 2019). The experiments are conducted on a single workstation with 8 RTX 3090 GPUs.
A.2. Datasets
Table 7 presents a summary of the statistics and characteristics of the datasets.
• GNN Benchmark (Dwivedi et al., 2023). ZINC contains molecular graphs with node features representing atoms and edge features representing bonds. The task is to regress the constrained solubility (logP) of the molecule. MNIST and CIFAR10 are adapted from image classification datasets, where each image is represented as an 8-nearest-neighbor graph of SLIC superpixels, with nodes representing superpixels and edges representing spatial relationships. The 10-class classification tasks follow the original image classification tasks. PATTERN and CLUSTER are synthetic datasets sampled from the Stochastic Block Model (SBM) for inductive node classification, with tasks involving sub-graph pattern recognition and cluster ID inference. For all datasets, we adhere to the respective training protocols and standard evaluation splits (Dwivedi et al., 2023).
• Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). Peptides-func and Peptides-struct are atomic graphs of peptides from SATPdb, with tasks of multi-label graph classification into 10 peptide functional classes and graph regression for 11 3D structural properties, respectively. PascalVOC-SP and COCO-SP are node classification datasets derived from the Pascal VOC and MS COCO images by SLIC superpixelization, where each superpixel node belongs to a particular object class. We did not use PCQM-Contact in (Dwivedi et al., 2022) as its download link was no longer valid. MalNet-Tiny (Freitas & Dong, 2021) is a subset of MalNet with 5,000 function call graphs (FCGs) from Android APKs, where the task is to predict software type based on structure alone. For each dataset, we follow standard training protocols and splits (Dwivedi et al., 2022; Freitas & Dong, 2021).
• Open Graph Benchmark (OGB) (Hu et al., 2020). We also consider a collection of larger-scale datasets from OGB, containing graphs in the range of hundreds of thousands to millions: ogbg-molhiv and ogbg-molpcba are molecular property prediction datasets from MoleculeNet. ogbg-molhiv involves binary classification of HIV inhibition, while ogbg-molpcba predicts results of 128 bioassays in a multi-task setting. ogbg-ppa contains protein-protein association networks, where nodes represent proteins and edges encode normalized associations between them; the task is to classify the origin of the network among 37 taxonomic groups. ogbg-code2 consists of abstract syntax trees (ASTs) from Python source code, with the task of predicting the first 5 subtokens of the function's name. We maintain all the OGB standard evaluation settings (Hu et al., 2020).
Table 7. Overview of the datasets used for graph-level tasks (Dwivedi et al., 2023; 2022; Hu et al., 2020; Freitas & Dong, 2021).
Dataset / # graphs / Avg. # nodes / Avg. # edges / # node/edge feats / Prediction level / Prediction task / Metric
ZINC 12,000 23.2 24.9 28/1 graph regression MAE
MNIST 70,000 70.6 564.5 3/1 graph 10-class classif. Accuracy
CIFAR10 60,000 117.6 941.1 5/1 graph 10-class classif. Accuracy
PATTERN 14,000 118.9 3,039.3 3/1 inductive node binary classif. Accuracy
CLUSTER 12,000 117.2 2,150.9 7/1 inductive node 6-class classif. Accuracy
Peptides-func 15,535 150.9 307.3 9/3 graph 10-task classif. Avg. Precision
Peptides-struct 15,535 150.9 307.3 9/3 graph 11-task regression MAE
PascalVOC-SP 11,355 479.4 2,710.5 14/2 inductive node 21-class classif. F1 score
COCO-SP 123,286 476.9 2,693.7 14/2 inductive node 81-class classif. F1 score
MalNet-Tiny 5,000 1,410.3 2,859.9 5/1 graph 5-class classif. Accuracy
ogbg-molhiv 41,127 25.5 27.5 9/3 graph binary classif. AUROC
ogbg-molpcba 437,929 26.0 28.1 9/3 graph 128-task classif. Avg. Precision
ogbg-ppa 158,100 243.4 2,266.1 1/7 graph 37-task classif. Accuracy
ogbg-code2 452,741 125.2 124.2 2/2 graph 5 token sequence F1 score
A.3. Hyperparameters and Reproducibility
Please note that we mainly follow the experiment settings of GraphGPS (Rampášek et al., 2022; Tönshoff et al., 2023). For the hyperparameter selections of classic GNNs, in addition to what we have covered, we list other settings in Tables 8, 9, 10, 11, 12, 13. Further details regarding hyperparameters can be found in our code. In all experiments, we use the validation set to select the best hyperparameters. GNN+ denotes the enhanced implementation of the GNN model. Our code is available under the MIT License.
Table 8. Hyperparameter settings of GCN+ on benchmarks from (Dwivedi et al., 2023).
Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER
# GNN Layers 12 6 5 12 12
Edge Feature Module True True True True False
Normalization BN BN BN BN BN
Dropout 0.0 0.15 0.05 0.05 0.1
Residual Connections True True True True True
FFN True True True True True
PE RWSE-32 False False RWSE-32 RWSE-20
Hidden Dim 64 60 65 90 90
Graph Pooling add mean mean – –
Batch Size 32 16 16 32 16
Learning Rate 0.001 0.0005 0.001 0.001 0.001
# Epochs 2000 200 200 200 100
# Warmup Epochs 50 5 5 5 5
Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5
# Parameters 260,177 112,570 114,345 517,219 516,674
Time (epoch) 7.6s 60.1s 40.2s 19.5s 29.7s
Table 9. Hyperparameter settings of GCN+ on LRGB and OGB datasets.
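The GNN+ recipe in Table 8 combines classic message passing with residual connections, an FFN, dropout, and normalization. As an illustrative toy (not the paper's PyG implementation; the helper names `gcn_plus_block` and `normalized_adjacency` are ours, and BatchNorm and the edge-feature module are omitted), one such block can be sketched in numpy:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2}(A+I)D^{-1/2}, as in standard GCNs."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_plus_block(H, A_hat, W, W1, W2, p_drop=0.0, rng=None):
    """One simplified GCN+ block: GCN propagation -> dropout -> residual -> FFN.
    H: (N, d) node features; A_hat: (N, N) normalized adjacency;
    W: GCN weight; W1, W2: feed-forward weights. Normalization is omitted."""
    msg = np.maximum(A_hat @ H @ W, 0.0)           # message passing + ReLU
    if p_drop > 0.0:                               # inverted dropout
        rng = rng or np.random.default_rng(0)
        mask = rng.random(msg.shape) >= p_drop
        msg = msg * mask / (1.0 - p_drop)
    H = H + msg                                    # residual connection
    ffn = np.maximum(H @ W1, 0.0) @ W2             # feed-forward network
    return H + ffn                                 # second residual

# toy graph: 4 nodes in a path, 8-dim features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 8))
W, W1, W2 = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = gcn_plus_block(H, normalized_adjacency(A), W, W1, W2)
print(out.shape)  # (4, 8)
```

Stacking such blocks (with the per-dataset depths, dropout rates, and pooling choices of Tables 8-13) is the core of the enhanced-GNN setup.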
Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2
# GNN Layers 3 5 14 18 8 4 10 4 4
Edge Feature Module True False True True True True True True True
Normalization BN BN BN BN BN BN BN BN BN
Dropout 0.2 0.2 0.1 0.05 0.0 0.1 0.2 0.2 0.2
Residual Connections False False True True True False False True True
FFN False False True True True True True True True
PE RWSE-32 RWSE-32 False False False RWSE-20 RWSE-16 False False
Hidden Dim 275 255 85 70 110 256 512 512 512
Graph Pooling mean mean – – max mean mean mean mean
Batch Size 16 32 50 50 16 32 512 32 32
Learning Rate 0.001 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001
# Epochs 300 300 200 300 150 100 100 400 30
# Warmup Epochs 5 5 10 10 10 5 5 10 2
Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6
# Parameters 507,351 506,127 520,986 460,611 494,235 1,407,641 13,316,700 5,549,605 23,291,826
Time (epoch) 6.9s 6.6s 12.5s 162.5s 6.6s 16.3s 91.4s 178.2s 476.3s
Table 10. Hyperparameter settings of GIN+ on benchmarks from (Dwivedi et al., 2023).
Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER
# GNN Layers 12 5 5 8 10
Edge Feature Module True True True True True
Normalization BN BN BN BN BN
Dropout 0.0 0.1 0.05 0.05 0.05
Residual Connections True True True True True
FFN True True True True True
PE RWSE-20 False False RWSE-32 RWSE-20
Hidden Dim 80 60 60 100 90
Graph Pooling sum mean mean – –
Batch Size 32 16 16 32 16
Learning Rate 0.001 0.001 0.001 0.001 0.0005
# Epochs 2000 200 200 200 100
# Warmup Epochs 50 5 5 5 5
Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5
# Parameters 477,241 118,990 115,450 511,829 497,594
Time (epoch) 9.4s 56.8s 46.3s 18.5s 20.5s
Table 11. Hyperparameter settings of GIN+ on LRGB and OGB datasets.
Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2
# GNN Layers 3 5 16 16 5 3 16 5 4
Edge Feature Module True True True True True True True True True
Normalization BN BN BN BN BN BN BN BN BN
Dropout 0.2 0.2 0.1 0.0 0.0 0.0 0.3 0.15 0.1
Residual Connections True True True True True True True False True
FFN False False True True True False True True True
PE RWSE-32 RWSE-32 RWSE-32 False False RWSE-20 RWSE-16 False False
Hidden Dim 240 200 70 70 130 256 300 512 512
Graph Pooling mean mean – – max mean mean mean mean
Batch Size 16 32 50 50 16 32 512 32 32
Learning Rate 0.0005 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001
# Epochs 300 250 200 300 150 100 100 300 30
# Warmup Epochs 5 5 10 10 10 5 5 10 2
Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6
# Parameters 506,126 518,127 486,039 487,491 514,545 481,433 8,774,720 8,173,605 24,338,354
Time (epoch) 7.4s 6.1s 14.8s 169.2s 5.9s 10.9s 89.2s 213.9s 489.8s
Table 12. Hyperparameter settings of GatedGCN+ on benchmarks from (Dwivedi et al., 2023).
Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER
# GNN Layers 9 10 10 12 16
Edge Feature Module True True True True True
Normalization BN BN BN BN BN
Dropout 0.05 0.05 0.15 0.2 0.2
Residual Connections True True True True True
FFN True True True True True
PE RWSE-20 False False RWSE-32 RWSE-20
Hidden Dim 70 35 35 64 56
Graph Pooling sum mean mean – –
Batch Size 32 16 16 32 16
Learning Rate 0.001 0.001 0.001 0.0005 0.0005
# Epochs 2000 200 200 200 100
# Warmup Epochs 50 5 5 5 5
Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5
# Parameters 413,355 118,940 116,490 466,001 474,574
Time (epoch) 10.5s 137.9s 115.0s 32.6s 34.1s
Table 13. Hyperparameter settings of GatedGCN+ on LRGB and OGB datasets.
Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2
# GNN Layers 5 4 12 20 6 3 10 4 5
Edge Feature Module True True True True True True True True True
Normalization BN BN BN BN BN BN BN BN BN
Dropout 0.05 0.2 0.15 0.05 0.0 0.0 0.2 0.15 0.2
Residual Connections False True True True True True True True True
FFN False False False True True False True False True
PE RWSE-32 RWSE-32 RWSE-32 False False RWSE-20 RWSE-16 False False
Hidden Dim 135 145 95 52 100 256 256 512 512
Graph Pooling mean mean – – max mean mean mean mean
Batch Size 16 32 32 50 16 32 512 32 32
Learning Rate 0.0005 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001
# Epochs 300 300 200 300 150 100 100 300 30
# Warmup Epochs 5 5 10 10 10 5 5 10 2
Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6
# Parameters 521,141 492,897 559,094 508,589 550,905 1,076,633 6,016,860 5,547,557 29,865,906
Time (epoch) 17.3s 8.0s 21.3s 208.8s 8.9s 15.1s 85.1s 479.8s 640.1s
 | 8 | 1 | The training process involves training 3 classic GNN architectures (GCN, GIN, GatedGCN) enhanced by the GNN+ framework. Each model has around 500K parameters and is trained on 14 datasets of varying size (ZINC has 12K graphs, while ogbg-code2 has over 450K). The average time per epoch reported is lower for GNN+ than for the latest GTs, which suggests efficiency improvements. Considering that typical GNN training runs for many epochs (800-2000 per standard practice), I estimate up to 8 hours for completion on a single GPU, assuming a sufficient batch size and enough GPU memory to accommodate the models and datasets. Additionally, since similar models may take less than 8 hours on a single GPU given the parameter counts and efficiency of typical GNNs, this provides a rational estimate. 
| yes | Yes | Graph | Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence | 2025-02-13T00:00:00.000Z | [https://github.com/LUOyk1999/GNNPlus] | 1 | http://malnet.cc.gatech.edu/graph-data/malnet-graphs-tiny.tar.gz,http://malnet.cc.gatech.edu/split-info/split_info_tiny.zip | 1 hour - (avg 24 sec * 150 epochs) | https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing | Yes | null |
STL-10, 40 Labels | SemiOccam | [] | ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels | 2025-06-04T00:00:00 | https://arxiv.org/abs/2506.03582v1 | [
"https://github.com/Shu1L0n9/SemiOccam"
] | {'Accuracy': '95.43'} | [
"Accuracy"
] | Given the following paper and codebase:
Paper: ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels
Codebase: https://github.com/Shu1L0n9/SemiOccam
Improve the SemiOccam model on the STL-10, 40 Labels dataset. The result
should improve on the following metrics: {'Accuracy': '95.43'}. You must use only the codebase provided.
| Rui et al. Harbin Engineering University
VITSGMM: A ROBUST SEMI-SUPERVISED IMAGE RECOGNITION NETWORK USING SPARSE LABELS
Rui Yann* (Shu1L0n9@gmail.com), Xianglei Xing† (xingxl@hrbeu.edu.cn)
General Artificial Intelligence Laboratory, College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, 150001, China
ABSTRACT
We present ViTSGMM, an image recognition network that leverages semi-supervised learning in a highly efficient manner. Existing works often rely on complex training techniques and architectures, while their generalization ability when dealing with extremely limited labeled data remains to be improved. To address these limitations, we construct a hierarchical mixture density classification decision mechanism by optimizing mutual information between feature representations and target classes, compressing redundant information while retaining crucial discriminative components. Experimental results demonstrate that our method achieves state-of-the-art performance on STL-10 and CIFAR-10/100 datasets when using negligible labeled samples. Notably, this paper also reveals a long-overlooked data leakage issue in the STL-10 dataset for semi-supervised learning tasks and removes duplicates to ensure the reliability of experimental results.
Keywords: Semi-Supervised Learning, Image Recognition, Vision Transformer, Gaussian Mixture Model
1 INTRODUCTION
Deep learning has achieved remarkable success in image classification tasks. However, its performance is highly dependent on large-scale annotated datasets. In real-world applications, labeled data is often scarce and expensive to acquire. Therefore, semi-supervised learning (SSL) has received widespread attention in recent years as an effective solution. The goal of semi-supervised learning is to train models using a small amount of labeled data and a large amount of unlabeled data, thereby improving the model's generalization ability.
Nevertheless, most existing semi-supervised learning methods, such as consistency regularization and pseudo-labeling, rely on complex training techniques and architectural designs, and their generalization ability when the labeled data are extremely limited remains to be improved. Furthermore, although some studies have attempted to apply the recently popular Vision Transformer (ViT) to semi-supervised learning, they have not yet fully leveraged its advantages. Gaussian Mixture Models (GMMs) are powerful probabilistic models that assume data is generated from a mixture of multiple Gaussian distributions. In image recognition tasks, GMMs can model the distribution of data features and subsequently perform classification. The inspiration for this research comes from Miller & Uyar's systematic study of the "Mixture of Experts" structure based on the joint probability model of features and labels. In our research, we found that the performance of GMMs is highly related to the quality of feature vectors. If the features lack representativeness or have insufficient discriminative power, the classification performance will be severely affected.
*Rui performed this research during an internship. †Corresponding author.
Code available at https://github.com/Shu1L0n9/ViTSGMM
arXiv:2506.03582v1 [cs.CV] 4 Jun 2025
To address this, we designed a feature encoding framework based on a self-attention mechanism to fully exploit the global contextual information in images and generate more discriminative feature representations. Simultaneously, we incorporated semi-supervised learning strategies, utilizing a pseudo-labeling mechanism to efficiently leverage unlabeled data. This approach achieves superior classification performance within the framework of mixture density estimation.
This method significantly overcomes the limitations of traditional methods in terms of feature representation capability and model generalization performance. Taking DINO with a ViT-Large backbone as our feature extraction model, we validated the proposed semi-supervised Gaussian mixture model (SGMM) method on three commonly used image classification datasets: STL-10, CIFAR-10, and CIFAR-100. The experimental results show that our method achieves state-of-the-art performance and exhibits good stability and robustness.
Figure 1: Performance Comparison. Comparison of our method (ViTSGMM) with recent semi-supervised learning models (MixMatch, ReMixMatch, UDA, Dash, MPL, FixMatch, FlexMatch, RelationMatch) on the STL-10 dataset with different numbers of labels, achieving state-of-the-art performance.
It is worth mentioning that we have revealed a serious problem that has been overlooked by existing semi-supervised learning research: there are 7,500+ duplicate samples between the unlabeled samples of STL-10 and the test set samples, which is undoubtedly a serious data leakage problem when directly used for semi-supervised learning tasks. Our main contributions can be summarized as follows:
• We introduce the powerful Vision Transformer (ViT) as a feature extractor into the classical Mixture of Experts (MoE) classifier for image recognition tasks.
• We design a simple but performant semi-supervised learning strategy that effectively leverages negligible labels.
• We demonstrate that this method achieves state-of-the-art performance.
• We reveal the longstanding, yet overlooked issue of train-set and test-set contamination in the STL-10 dataset, which has not been addressed in other works.
The remainder of this paper is organized as follows: Section 2 reviews related works; Section 3 introduces our ViTSGMM network; Section 4 demonstrates and analyzes experimental results; Section 5 discusses the significance and limitations of the research findings and outlines future research directions.
2 RELATED WORKS
2.1 SEMI-SUPERVISED LEARNING AND GAUSSIAN MIXTURE MODEL (GMM)
Semi-Supervised Learning (SSL) has been widely applied in tasks such as image classification and speech recognition in recent years, especially in scenarios where labeled data is scarce. Early SSL methods mainly relied on generating pseudo-labels and consistency regularization. Pseudo-Labeling is one of the classic methods in SSL, with the core idea of using an existing model to predict unlabeled data (Yarowsky, 1995) and adding its predicted labels to the training set as pseudo-labels. Representative works include Lee et al. (2013) and Zhang et al. (2021), which iteratively enhance the labeled dataset through self-training. Although pseudo-labeling methods can effectively improve model performance, they are often affected by the quality of pseudo-labels and can easily lead to model overfitting. The study by Mishra et al. (2024) explores the issue of overconfidence caused by pseudo-labels in semi-supervised learning, analyzes the source of calibration errors, and aims to improve the reliability of the model by evaluating and potentially improving calibration methods. Consistency Regularization methods improve the generalization ability of models by forcing them to maintain consistency on unlabeled data. The Virtual Adversarial Training (VAT) method proposed by Miyato et al. (2018) enhances the robustness of the model by perturbing the input data and maintaining the stability of the prediction results.
The Mean Teacher method proposed by Tarvainen & Valpola (2017) further improves model performance by using the consistency between the predictions of the teacher model and the output of the student model. Although these methods have achieved good results in many tasks, they often rely on complex training strategies and architectural designs, and their generalization ability across different datasets still needs to be improved. The Semi-Supervised Gaussian Mixture Model (SGMM) is the basic probabilistic framework of our method: it combines the GMM with unlabeled data and proposes a learning method that simultaneously considers labeled and unlabeled data by maximizing the joint log-likelihood function. Miller & Uyar (1996) proposed a classifier structure and learning algorithm that can effectively use unlabeled data to improve performance. There are also many impressive works, such as Zong et al. (2018), which uses deep autoencoders and Gaussian mixture models for anomaly detection. Most related to ours is the model of Zhao et al. (2023), which also uses ViT for image embedding and then applies GMM for classification. This model is very similar to ViTSGMM, but it addresses the Generalized Category Discovery (GCD) problem from Vaze et al. (2022), aiming to discover new categories, whereas our work goes further to demonstrate that combining deep learning with SGMM can outperform state-of-the-art methods, especially in scenarios with extremely limited labeled data, yielding significant advantages.
2.2 DEEP NEURAL NETWORKS (DNN) AND VISUAL TRANSFORMERS (VIT)
Deep Neural Networks (DNNs) have made significant progress in various supervised learning tasks in recent years. The breakthrough achieved by AlexNet, proposed by Krizhevsky et al. (2012), in image classification tasks laid the foundation for the development of deep learning. To improve the generalization ability of DNNs, many methods have been proposed to address overfitting issues, such as Dropout by Srivastava et al.
(2014) and Batch Normalization by Ioffe & Szegedy (2015). The Residual Network (ResNet) proposed by He et al. (2016) effectively solved the gradient vanishing problem in deep network training by introducing residual connections, making it possible to train deeper networks. Subsequently, DLA by Yu et al. (2018) introduced a dynamic hierarchical attention mechanism on top of ResNet, further optimizing layer-wise interaction and feature fusion. Visual Transformers (ViT) are image classification models based on the Transformer architecture that capture global information in images through self-attention mechanisms. Dosovitskiy et al. (2021) proposed dividing images into fixed-size patches and applying self-attention mechanisms. In this way, ViT demonstrates strong performance in image classification tasks, especially on large-scale datasets, exhibiting higher generalization ability compared to traditional CNN architectures. The success of ViT has inspired many researchers to try applying it to self-supervised learning. The DINO method proposed by Caron et al. (2021) learns useful feature representations through self-supervised pre-training of ViT models. These representations can be fine-tuned on a small amount of labeled data to achieve high classification accuracy.
2.3 SELF-SUPERVISED LEARNING AND DATA LEAKAGE ISSUES
Self-Supervised Learning is an unsupervised learning approach that automatically learns feature representations through pretext tasks (e.g., image jigsaw puzzles, predicting future frames). The CLIP model proposed by Radford et al. (2021) achieves strong image understanding capabilities through self-supervised learning of image-text contrast, making its performance on unlabeled data very prominent. In addition, methods like SimCLR from Chen et al. (2020) and MoCo from He et al. (2020) train neural networks through contrastive learning, enabling them to learn discriminative features without labeled data.
These methods provide higher quality representations for feature extractors, thereby improving the performance of downstream tasks (such as classification, face recognition, etc.). In machine learning, the issue of data leakage has always been a potential challenge. Off-topic images, near-duplicate images, and labeling errors in benchmark datasets can lead to inaccurate estimations of model performance. Handling and removing duplicate samples has become an issue that cannot be ignored in semi-supervised learning. Therefore, Gröger et al. (2024) re-examines the task of data cleaning and formalizes it as a ranking problem.
3 METHOD
We assume that the entire dataset $\mathcal{X} = \{x_i \in \mathbb{R}^{H \times W \times C}, i = 1, \dots, N\}$ consists of a labeled dataset $\mathcal{X}_l = \{(x_1, c_1), \dots, (x_{N_l}, c_{N_l})\}$ and an unlabeled dataset $\mathcal{X}_u = \{x_{N_l+1}, \dots, x_N\}$. Here, the number of labeled data $N_l$ is much smaller than the total data amount $N$ ($N_l \ll N$). Our method is as follows:
3.1 POWERFUL FEATURE EXTRACTOR
For each input image $x_i \in \mathcal{X}$, where $x_i \in \mathbb{R}^{H \times W \times C}$, we divide it into a sequence of fixed-size $P \times P$ patches. Let $N_p = \frac{HW}{P^2}$ denote the number of patches per image. Each patch $p_k^{(i)} \in \mathbb{R}^{P^2 C}$ (where $k = 1, \dots, N_p$) is flattened into a vector. Then, through a linear transformation, each patch vector $p_k^{(i)}$ is mapped to a $D$-dimensional vector $z_k^{(i)} \in \mathbb{R}^D$, expressed as:
$$z_k^{(i)} = W p_k^{(i)} + b \quad (1)$$
where $W \in \mathbb{R}^{D \times P^2 C}$ and $b \in \mathbb{R}^D$ are learnable parameters. A learnable positional embedding $e_k \in \mathbb{R}^D$ is added to each patch vector $z_k^{(i)}$ to retain the positional information of the image patches, yielding $z'^{(i)}_k = z_k^{(i)} + e_k$. This results in the image patch sequence $Z^{(i)} = \{z'^{(i)}_1, z'^{(i)}_2, \dots, z'^{(i)}_{N_p}\}$. Subsequently, the image patch sequence $Z^{(i)}$ is input into a Transformer encoder. The encoder utilizes the self-attention mechanism, which is defined as:
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{D}}\right) V \quad (2)$$
where $Q = Z^{(i)} W_Q$, $K = Z^{(i)} W_K$, and $V = Z^{(i)} W_V$.
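The scaled dot-product attention of Eq. (2) can be sketched directly in numpy. This is a single-head illustration with our own helper name (`scaled_dot_product_attention`), not the paper's ViT implementation:

```python
import numpy as np

def scaled_dot_product_attention(Z, Wq, Wk, Wv):
    """Self-attention over a patch sequence Z of shape (Np, D), per Eq. (2):
    softmax(Q K^T / sqrt(D)) V with Q = Z Wq, K = Z Wk, V = Z Wv."""
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
    D = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(D)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
Np, D = 9, 16                    # e.g. a 3x3 grid of patch embeddings
Z = rng.standard_normal((Np, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))
out = scaled_dot_product_attention(Z, Wq, Wk, Wv)
print(out.shape)  # (9, 16)
```

Each output row is a convex combination of the value vectors, which is how every patch aggregates global context from all other patches.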
Through multiple layers of Transformer encoding and Principal Component Analysis (PCA) dimensionality reduction, we finally obtain high-quality image feature vectors $f_i \in \mathbb{R}^d$ where $d < D$.
3.2 SEMI-SUPERVISED MIXTURE OF EXPERTS CLASSIFIER
Each feature vector $f_i \in \mathbb{R}^d$ is obtained from the feature extractor. Each labeled sample $(f_i, c_i)$ is associated with a class label $c_i$ belonging to the label set $\mathcal{L} = \{1, 2, \dots, K\}$, where $K = \max\{c_1, \dots, c_{N_l}\}$ represents the total number of classes.
Figure 2: Overview of our ViTSGMM network. The network consists of a feature extractor (1024 dims, reduced by PCA to 64 dims) and a semi-supervised mixture of experts classifier. An example of a sub-model is shown in the ellipse at the bottom right, where the circle labeled 1 represents a feature vector with a corresponding label, 2 represents a pseudo-labeled feature vector, and 3 represents an unlabeled feature vector.
The classifier aims to maximize the following joint log-likelihood function:
$$\mathcal{L}(\Theta) = \underbrace{\sum_{f_i \in \mathcal{F}_u} \log \sum_{l=1}^{L} \pi_l\, p(f_i \mid \theta_l)}_{\text{unlabeled data likelihood}} + \underbrace{\sum_{f_i \in \mathcal{F}_l} \log \sum_{l=1}^{L} \pi_l\, p(f_i \mid \theta_l)\, P(c_i \mid l)}_{\text{labeled data likelihood}} \quad (3)$$
where $\Theta = \{\pi_1, \dots, \pi_L, \theta_1, \dots, \theta_L\}$ is the set of model parameters with $\sum_{l=1}^{L} \pi_l = 1$, $\pi_l$ represents the mixing coefficient of the $l$-th Gaussian component, $p(f_i \mid \theta_l)$ is the probability density function of the $l$-th Gaussian component with parameters $\theta_l = \{\mu_l, \Sigma_l\}$, and $P(c_i \mid l)$ represents the conditional probability that a feature belongs to class $c_i$ given Gaussian component $l$. The EM algorithm updates parameters as follows:
$$\gamma_{il}^{(t)} = \frac{\pi_l^{(t)}\, p(f_i \mid \theta_l^{(t)})}{\sum_{j=1}^{L} \pi_j^{(t)}\, p(f_i \mid \theta_j^{(t)})} \quad \forall f_i \in \mathcal{F}_u \quad (4)$$
$$\gamma_{il|c_i}^{(t)} = \frac{\pi_l^{(t)}\, p(f_i \mid \theta_l^{(t)})\, P(c_i \mid l)^{(t)}}{\sum_{j=1}^{L} \pi_j^{(t)}\, p(f_i \mid \theta_j^{(t)})\, P(c_i \mid j)^{(t)}} \quad \forall f_i \in \mathcal{F}_l \quad (5)$$
$$\mu_l^{(t+1)} = \frac{\sum_{f_i \in \mathcal{F}_l} f_i\, \gamma_{il|c_i}^{(t)} + \sum_{f_i \in \mathcal{F}_u} f_i\, \gamma_{il}^{(t)}}{\sum_{f_i \in \mathcal{F}_l} \gamma_{il|c_i}^{(t)} + \sum_{f_i \in \mathcal{F}_u} \gamma_{il}^{(t)}} \quad (6)$$
$$\Sigma_l^{(t+1)} = \frac{\sum_{f_i \in \mathcal{F}_l} M_{il}^{(t)}\, \gamma_{il|c_i}^{(t)} + \sum_{f_i \in \mathcal{F}_u} M_{il}^{(t)}\, \gamma_{il}^{(t)}}{\sum_{f_i \in \mathcal{F}_l} \gamma_{il|c_i}^{(t)} + \sum_{f_i \in \mathcal{F}_u} \gamma_{il}^{(t)}} \quad (7)$$
where $M_{il}^{(t)} = (f_i - \mu_l^{(t)})(f_i - \mu_l^{(t)})^{\top}$.
$$\pi_l^{(t+1)} = \frac{1}{N} \left( \sum_{f_i \in \mathcal{F}_l} \gamma_{il|c_i}^{(t)} + \sum_{f_i \in \mathcal{F}_u} \gamma_{il}^{(t)} \right) \quad (8)$$
$$P(k \mid l)^{(t+1)} = \frac{\sum_{f_i \in \mathcal{F}_l,\, c_i = k} \gamma_{il|c_i}^{(t)}}{\sum_{f_i \in \mathcal{F}_l} \gamma_{il|c_i}^{(t)}} \quad (9)$$
3.3 PSEUDO-LABELING MECHANISM
To effectively leverage information from the unlabeled dataset $\mathcal{X}_u$, we introduce a pseudo-labeling mechanism based on the component-class association probabilities. Unlike traditional iterative methods, our pseudo-label generation is performed in a single calculation after the initial SGMM training reaches convergence, reducing computational cost and mitigating error accumulation risks. For unlabeled features $f_i \in \mathcal{F}_u$, we compute class probabilities through Gaussian component responsibilities:
$$p_i^{(k)} = \sum_{l=1}^{L} P(k \mid l)^{(t)}\, \gamma_{il}^{(t)}, \qquad \xi_i = \max_{1 \le k \le K} p_i^{(k)} \quad (10)$$
where $P(k \mid l)$ represents the class probability conditioned on component $l$, and $\gamma_{il}^{(t)}$ is the responsibility from Equation 4. We construct class-specific candidate sets with confidence thresholding, sorted by $\xi_i$ descending:
$$\mathcal{C}_k = \left\{ f_i \mid \arg\max_k p_i^{(k)} = k,\ \xi_i > \tau \right\}, \quad \tau \in (0, 1) \quad (11)$$
To prevent class imbalance, we adopt proportional sampling:
$$n_{\mathrm{final}} = \min_{1 \le k \le K} \lfloor \alpha |\mathcal{C}_k| \rfloor, \quad \alpha \in (0, 1) \quad (12)$$
The pseudo-labeled set is then constructed as:
$$\mathcal{D}_p = \{ (f_i, k) \mid k = 1, \dots, K,\ f_i \in \mathcal{C}_k[1 : n_{\mathrm{final}}] \} \quad (13)$$
Finally, we combine the pseudo-labeled data $\mathcal{D}_p$ with the labeled features $\mathcal{F}_l$ and use the EM algorithm for further iteration. We use K-means++ to initialize the parameters; during training, the log-likelihood value calculated using Equation 3 increases steadily. We show the procedure of our method in Algorithm 1.
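The EM updates of Eqs. (4)-(9) and the confidence-thresholded pseudo-labeling of Eqs. (10)-(11) can be sketched with numpy. This is a simplified illustration rather than the released code: it uses diagonal covariances instead of full $\Sigma_l$, initializes components at labeled class means instead of K-means++, and skips the proportional sampling of Eq. (12); the helper names (`sgmm_em`, `pseudo_labels`) are ours.

```python
import numpy as np

def log_gauss_diag(F, mu, var):
    """Row-wise log N(f; mu, diag(var)) for F of shape (N, d)."""
    return -0.5 * (np.sum(np.log(2 * np.pi * var))
                   + np.sum((F - mu) ** 2 / var, axis=1))

def sgmm_em(Fl, cl, Fu, K, L, iters=50):
    """Semi-supervised GMM EM sketch (cf. Eqs. 4-9), diagonal covariances.
    Fl: labeled features (Nl, d) with labels cl in {0..K-1}; Fu: unlabeled."""
    F = np.vstack([Fl, Fu])
    N, d = F.shape
    mu = np.stack([Fl[cl == (l % K)].mean(axis=0) for l in range(L)])
    var = np.ones((L, d))
    pi = np.full(L, 1.0 / L)
    Pkl = np.full((K, L), 1.0 / K)          # P(class k | component l)
    for _ in range(iters):
        # E-step: labeled rows are also weighted by P(c_i | l), as in Eq. (5)
        logp = np.stack([log_gauss_diag(F, mu[l], var[l]) for l in range(L)], axis=1)
        w = np.log(pi) + logp
        w[:len(Fl)] += np.log(Pkl[cl] + 1e-12)
        w -= w.max(axis=1, keepdims=True)
        gamma = np.exp(w)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: Eqs. (6)-(9), pooling labeled and unlabeled responsibilities
        Nk = gamma.sum(axis=0)
        mu = (gamma.T @ F) / Nk[:, None]
        var = np.stack([(gamma[:, l:l + 1] * (F - mu[l]) ** 2).sum(axis=0) / Nk[l]
                        for l in range(L)]) + 1e-6
        pi = Nk / N
        gl = gamma[:len(Fl)]
        Pkl = np.stack([gl[cl == k].sum(axis=0) for k in range(K)])
        Pkl /= Pkl.sum(axis=0, keepdims=True) + 1e-12
    return pi, mu, var, Pkl

def pseudo_labels(Fu, pi, mu, var, Pkl, tau=0.9):
    """Class posteriors via component responsibilities (Eq. 10); keep points
    whose top-class confidence xi_i exceeds the threshold tau (Eq. 11)."""
    logp = np.stack([log_gauss_diag(Fu, mu[l], var[l]) for l in range(len(pi))], axis=1)
    w = np.log(pi) + logp
    w -= w.max(axis=1, keepdims=True)
    g = np.exp(w)
    g /= g.sum(axis=1, keepdims=True)
    pk = g @ Pkl.T                          # p_i^(k) = sum_l P(k|l) * gamma_il
    return pk.argmax(axis=1), pk.max(axis=1) >= tau

# toy 2-class problem: 10 labeled, 100 unlabeled points
rng = np.random.default_rng(1)
Fl = np.vstack([rng.normal(-2, 0.5, (5, 2)), rng.normal(2, 0.5, (5, 2))])
cl = np.array([0] * 5 + [1] * 5)
Fu = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
pi, mu, var, Pkl = sgmm_em(Fl, cl, Fu, K=2, L=2)
labels, confident = pseudo_labels(Fu, pi, mu, var, Pkl)
print(labels.shape, confident.mean())
```

On this well-separated toy data, nearly all unlabeled points receive a confident pseudo-label from the correct class, matching the intuition behind the single-pass pseudo-labeling step.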
Algorithm 1 ViTSGMM Network
Require: Labeled data $\mathcal{X}_l$, unlabeled data $\mathcal{X}_u$, pre-trained ViT, SGMM components $L$, confidence threshold $\tau$, sampling ratio $\alpha$
Ensure: Trained SGMM parameters $\Theta$
1: Extract features $\mathcal{F}_l \leftarrow \{\mathrm{ViT}(x_i) \mid (x_i, c_i) \in \mathcal{X}_l\}$ and $\mathcal{F}_u \leftarrow \{\mathrm{ViT}(x_i) \mid x_i \in \mathcal{X}_u\}$
2: PCA $\mathcal{F}_l$ and $\mathcal{F}_u$ into $\mathbb{R}^d$
3: Cluster $\mathcal{F}_l \cup \mathcal{F}_u$ into $L$ centers with K-means++
4: Initialize $\{\pi_l, \mu_l, \Sigma_l\}$ from cluster results
5: for $t = 1$ to $T_1$ do
6:   EM algorithm: initial iteration via (4)-(9)
7: end for
8: Construct $\mathcal{D}_p$ using (11)-(13) with $\tau$ and $\alpha$
9: Augment labeled set: $\mathcal{F}_l \leftarrow \mathcal{F}_l \cup \mathcal{D}_p$
10: for $t = 1$ to $T_2$ do
11:   EM algorithm: final iteration via (4)-(9)
12: end for
13: return $\Theta$
4 EXPERIMENTS
4.1 STANDARD SGMM PERFORMANCE ANALYSIS
The performance of the SGMM is influenced by several factors. This section will delve into the impact of the number of labels, feature dimensions, and the number of Gaussian components on the model's behavior. The core of SGMM's labeling efficiency lies in using unlabeled data to assist the model in learning the data distribution. Theoretically, as the number of labeled samples increases, the model can more accurately estimate the conditional probability of categories $P(k \mid l)$, thereby improving discrimination ability. When the labeled samples reach a certain scale, the marginal benefit of the additional information provided by unlabeled data diminishes, and the performance improvement curve tends to flatten. Therefore, SGMM's advantage is that it can quickly improve performance with very few labels and demonstrate high efficiency in applications with limited labeling resources.
Figure 3: Relationship between the number of labels and performance. The trend of accuracy with the number of labels for SGMM on the MNIST dataset shows a steady improvement in accuracy with an increase in labels.
When dealing with high-dimensional data, the feature dimension is crucial to the performance and computational efficiency of SGMM. High-dimensional feature spaces can lead to the "curse of dimensionality", increasing the complexity of model training and the risk of overfitting while consuming excessive computational resources. Reducing feature dimensions through dimensionality reduction techniques such as PCA can retain discriminative feature information while removing redundant dimensions, alleviating these problems to some extent. However, excessive dimensionality reduction can lead to the loss of key feature information. Therefore, the feature dimension should be reduced as much as possible while maintaining model performance.

[Figure 4: Relationship between PCA dimensions/Gaussian components and performance. The left figure shows the trend of SGMM accuracy with PCA dimensions on the MNIST dataset, while the right figure shows the trend of accuracy with the number of Gaussian submodels.]

Gaussian Mixture Models use multiple Gaussian components to fit complex data distributions, and the number of components directly affects the model's complexity and fitting ability. When the number of components is small, the model may not fully capture the fine-grained structure of the data distribution, leading to underfitting. When the number of components is too large, model complexity increases, which can easily lead to overfitting of the training data, reducing generalization ability and increasing computational costs. Therefore, the optimal number of Gaussian components should match the intrinsic class structure and complexity of the dataset.
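The variance-retention criterion used to pick a PCA dimension can be sketched as follows (NumPy only; the synthetic features and the roughly-60% threshold mentioned in Appendix C are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "features": 500 samples whose variance is concentrated
# in the first few directions, mimicking features after extraction.
n, d = 500, 64
scales = 1.0 / np.arange(1, d + 1)          # decaying spectrum
X = rng.normal(size=(n, d)) * scales

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)             # per-component variance ratio
cumulative = np.cumsum(explained)

# Smallest dimension whose cumulative explained variance exceeds 60%.
d_keep = int(np.searchsorted(cumulative, 0.60) + 1)
```

Plotting `cumulative` against the dimension index reproduces the kind of explained-variance curve used to choose the hyperparameter: keep just enough dimensions to cross the chosen variance threshold.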
In summary, labeling efficiency reflects the core advantage of semi-supervised learning, feature-dimension efficiency emphasizes the role of dimensionality reduction in improving model efficiency and generalization, and the number of Gaussian components reflects the trade-off between model complexity and data-fitting ability. Understanding the impact of these factors on SGMM performance helps to adjust model parameters for specific tasks in practical applications.

4.2 MORE POWERFUL FEATURE EXTRACTORS

To further analyze the performance of the SGMM, we experimented on the CIFAR-10 dataset using different feature extractors: PCA, deep neural networks (ResNet101, DLA169), and DINO pre-trained models with ViT-Base/Large backbones. Figure 5 shows that with DNNs and ViTs, increasing the number of Gaussian components leads to a decrease in classification accuracy, which contrasts with the performance trend observed when using PCA.

[Figure 5: Performance trends. The left figure shows the accuracy trend of SGMM on the CIFAR-10 dataset processed with PCA, while the right figure shows the accuracy trend with the number of Gaussian submodels when using DNNs and ViTs.]

Next, we performed t-SNE visualization on the features extracted by the three methods. In Figure 6, we present the t-SNE visualization results of feature representations obtained using different feature extraction methods on the CIFAR-10 dataset. It can be clearly seen from the figure that the feature points extracted by PCA exhibit a highly mixed state in the t-SNE space, and it is difficult to distinguish between samples from different categories.
This indicates that the feature representation after PCA dimensionality reduction lacks sufficient discriminative power. In contrast, the feature points extracted using deep neural networks exhibit a certain clustering trend in the t-SNE space, but there is still some overlap between the boundaries of different categories. The feature points extracted using the Vision Transformer (ViT) form a relatively clear cluster structure in the t-SNE space, which indicates that the feature representation extracted by ViT has stronger separability and can better capture the intrinsic structure of the data, providing a more favorable feature basis for subsequent classification tasks.

PCA has limited feature discrimination ability, so it needs more Gaussian sub-models to model complex intra-class variations; in that regime, increasing the number of Gaussian components helps to approximate the true distribution. In contrast, the features extracted by a DNN/ViT are close to a single Gaussian per category, so increasing the number of components pushes the model complexity beyond what is actually needed. These analyses are consistent with the experimental results in Figure 5.

[Figure 6: t-SNE visualization. t-SNE visualization results of different feature extraction methods (PCA, DLA, and ViT features) on the CIFAR-10 dataset.]

We analyzed high-confidence samples from two Gaussian components of the same bird category. Observing the original images in Figure 7, we found that these two Gaussian components exhibit different feature preferences: one component tends to give high confidence to samples of ostriches (long-necked birds), while the other component prefers plump, neckless birds.

[Figure 7: Comparison of raw images (Gaussian 1 vs. Gaussian 2). Visualization of high-confidence samples from two Gaussian components of the bird category on the CIFAR-10 dataset.]
Through experimental results and visualization analysis, we found that DNNs and ViTs can better capture the distribution characteristics of the data than the traditional PCA dimensionality reduction method, thereby improving the classification performance of SGMM.

4.3 EXPERIMENT SETUP

Datasets. We report results on the CIFAR-10/100 and STL-10 datasets. CIFAR-10 contains 10 classes with 6,000 32x32 color images per class, totaling 60,000 images, of which 50,000 are for training and 10,000 for testing. CIFAR-100 is similar to CIFAR-10 but contains 100 classes with 600 images per class, of which 500 are for training and 100 for testing. STL-10 is an image classification dataset designed for self-supervised and semi-supervised learning, with 10 classes in the training set, containing 5,000 labeled samples and 100,000 unlabeled images at a resolution of 96x96 pixels. We performed deduplication on the STL-10 dataset, leaving 90,455 training images, with 8,000 images used for testing.

Implementation details. We use the DINO pre-trained model with a ViT-large backbone to extract feature representations. On the CIFAR-10/100 datasets, the numbers of Gaussian components are set to 10 and 100 respectively, and the PCA dimension is set to 60. On the STL-10 dataset, since its training set contains more than 10 categories, we set the number of Gaussian components to 15 for better robustness, the PCA dimension to 45, and the convergence threshold (Tol) to 1. We use accuracy as the evaluation metric for the classification performance of the different methods. All experiments are performed on a Tesla T4 GPU with 15GB memory.

4.4 COMPARISON WITH STATE-OF-THE-ART METHODS

Results on CIFAR-10. We show the benchmark results on CIFAR-10 in Table 1.
Our method achieves competitive performance compared with state-of-the-art methods, including FixMatch, UDA, and ReMixMatch. It can be clearly seen that our method consistently outperforms the other methods under all settings.

Table 1: Benchmark results on CIFAR-10.

Algorithms            | Error Rate (40 labels) | Error Rate (250 labels) | Error Rate (4000 labels)
Dash (2021)           | 9.29±3.28              | 5.16±0.23               | 4.36±0.11
MPL (2021)            | 6.62±0.91              | 5.76±0.24               | 4.36±0.11
FlexMatch (2021)      | 4.97±0.06              | 4.98±0.09               | 4.19±0.01
CoMatch (2021)        | 6.51±1.18              | 5.35±0.14               | 4.27±0.12
SimMatch (2022)       | 5.38±0.01              | 5.36±0.08               | 4.41±0.07
AdaMatch (2022)       | 5.09±0.21              | 5.13±0.05               | 4.36±0.05
FreeMatch (2023)      | 4.90±0.12              | 4.88±0.09               | 4.16±0.06
SoftMatch (2023)      | 5.11±0.14              | 4.96±0.09               | 4.27±0.05
SequenceMatch (2024a) | 4.80±0.01              | 4.75±0.05               | 4.15±0.01
EPASS (2024b)         | 5.55±0.21              | 5.31±0.13               | 4.23±0.05
ViTSGMM (Ours)        | 3.51±0.12              | 3.47±0.17               | 3.45±0.16

Results on STL-10. In Table 2, we present the performance on STL-10. Our method achieves state-of-the-art performance on STL-10, outperforming the other methods by a large margin in all three settings, demonstrating the effectiveness of our proposed method. Compared to SequenceMatch, our ViTSGMM improves by +10.88%, +8.35%, and +1.41% in the 40-label, 250-label, and 1000-label settings, respectively.

Table 2: Benchmark results on STL-10.
Algorithms            | Error Rate (40 labels) | Error Rate (250 labels) | Error Rate (1000 labels)
Dash (2021)           | 42.00±4.94             | 10.50±1.37              | 6.30±0.49
MPL (2021)            | 35.97±4.14             | 9.90±0.96               | 6.66±0.00
FlexMatch (2021)      | 29.12±5.04             | 9.85±1.35               | 6.08±0.34
CoMatch (2021)        | 13.74±4.20             | 7.63±0.94               | 5.71±0.08
SimMatch (2022)       | 16.98±4.24             | 8.27±0.40               | 5.74±0.31
AdaMatch (2022)       | 19.95±5.17             | 8.59±0.43               | 6.01±0.02
FreeMatch (2023)      | 28.50±5.41             | 9.29±1.24               | 5.81±0.32
SoftMatch (2023)      | 22.23±3.82             | 9.18±0.63               | 5.79±0.15
SequenceMatch (2024a) | 15.45±1.40             | 12.78±0.76              | 5.56±0.35
EPASS (2024b)         | 9.15±3.25              | 6.27±0.03               | 5.40±0.12
ViTSGMM (Ours)        | 4.57±0.24              | 4.43±0.08               | 4.15±0.07

Results on CIFAR-100. In Table 3, we compare the performance of ViTSGMM with state-of-the-art methods on the CIFAR-100 dataset. Our method outperforms the other methods in the 400-label setting and achieves comparable performance in the 2,500-label and 10,000-label settings.

Table 3: Benchmark results on CIFAR-100.

Algorithms            | Error Rate (400 labels) | Error Rate (2500 labels) | Error Rate (10000 labels)
Dash (2021)           | 44.82±0.96              | 27.15±0.22               | 21.88±0.07
MPL (2021)            | 46.26±1.84              | 27.71±0.19               | 21.74±0.09
FlexMatch (2021)      | 39.94±1.62              | 26.49±0.20               | 21.90±0.15
CoMatch (2021)        | 53.41±2.36              | 29.78±0.11               | 22.11±0.22
SimMatch (2022)       | 39.32±0.72              | 26.21±0.37               | 21.50±0.11
AdaMatch (2022)       | 38.08±1.35              | 26.66±0.33               | 21.99±0.15
FreeMatch (2023)      | 39.52±0.01              | 26.22±0.08               | 21.81±0.17
SoftMatch (2023)      | 37.60±0.24              | 26.39±0.38               | 21.86±0.16
SequenceMatch (2024a) | 37.86±1.07              | 25.99±0.22               | 20.10±0.04
EPASS (2024b)         | 38.88±0.24              | 25.68±0.33               | 21.32±0.14
ViTSGMM (Ours)        | 26.59±1.02              | 22.19±0.81               | 21.21±0.26

4.5 ABLATION STUDY

Comparing Semi-supervised Learning Methods. To test the effectiveness of the SGMM component, we compared it with the original classification head of ViT.
We trained on three datasets, with error rates shown in Table 4. In scenarios with extremely few labels, the performance of SGMM is significantly superior to that of the Softmax layer.

Effect of Pseudo-labeling. We conducted an ablation study to evaluate the impact of pseudo-labeling on the performance of ViTSGMM, comparing ViTSGMM with and without pseudo-labeling on the same three datasets; error rates are also shown in Table 4.

Table 4: Ablation study with three datasets (error rate, %).

Method      | CIFAR-10 (40 / 250 / 4000 labels)    | CIFAR-100 (400 / 2500 / 10000 labels)  | STL-10 (40 / 250 / 1000 labels)
w Softmax   | 51.05±16.69 / 5.40±1.70 / 3.75±0.21  | 57.32±10.91 / 43.01±5.89 / 33.70±2.71  | 16.29±2.85 / 4.88±1.50 / 4.38±0.40
w/o P-L     | 3.73±0.21 / 3.70±0.23 / 3.63±0.19    | 30.56±0.80 / 23.21±0.66 / 21.56±0.43   | 4.91±0.12 / 4.81±0.12 / 4.58±0.10
ViTSGMM     | 3.51±0.12 / 3.47±0.17 / 3.45±0.16    | 26.59±1.02 / 22.19±0.81 / 21.21±0.26   | 4.44±0.04 / 4.43±0.08 / 4.15±0.07

5 CONCLUSIONS

We introduce high-performance feature extractors into traditional generative models. Unlike traditional semi-supervised learning methods, this approach does not introduce domain-specific constraints or assumptions. The cross-modal unified modeling strategy performs surprisingly well when trained with a very small amount of labeled data and a large amount of unlabeled data. Extensive experiments show that ViTSGMM achieves new performance breakthroughs on benchmarks such as STL-10 and CIFAR-10, maintaining over 96% classification accuracy even when using only 4 labeled samples per class.

Future development of this technology can proceed along several dimensions. First, constructing a dynamic component adjustment mechanism based on a Bayesian framework to achieve parameter adaptivity. Second, enhancing the model's discriminative performance on complex tasks such as fine-grained classification.
Most interestingly, its application in other fields, such as electroencephalogram (EEG) signal analysis and medical image processing, can be explored, thereby broadening its application scope and verifying its transferability. We believe these improvements will further unlock the technological benefits brought about by the fusion of probabilistic graphical models and deep learning.

ACKNOWLEDGMENTS

This work was supported in part by the National Natural Science Foundation of China under Grant 62076078 and in part by the Chinese Association for Artificial Intelligence (CAAI)-Huawei MindSpore Open Fund under Grant CAAIXSJLJJ-2020-033A.

REFERENCES

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9650-9660, 2021.

Hao Chen, Ran Tao, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Bhiksha Raj, and Marios Savvides. SoftMatch: Addressing the quantity-quality trade-off in semi-supervised learning. In International Conference on Learning Representations, 2023.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. SimCLR: A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 2020.

Jeremy Chopin and Rozenn Dahyot. Performance of Gaussian mixture model classifiers on embedded feature spaces, 2024.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.

Fabian Gröger, Simone Lionetti, Philippe Gottfrois, Alvaro Gonzalez-Jimenez, Ludovic Amruthalingam, Matthew Groh, Alexander A. Navarini, and Marc Pouly.
Intrinsic self-supervision for data quality audits. In The Thirty-Eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448-456, 2015.

Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. Understanding dimensional collapse in contrastive self-supervised learning. In 10th International Conference on Learning Representations, 2022.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.

Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, pp. 896. Atlanta, 2013.

Junnan Li, Caiming Xiong, and Steven C.H. Hoi. CoMatch: Semi-supervised learning with contrastive graph regularization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9475-9484, 2021.

Michael Majurski, Sumeet Menon, Parniyan Farvardin, and David Chapman. A method of moments embedding constraint and its application to semi-supervised learning, 2024.

David J. Miller and Hasan Uyar. A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data.
In Advances in Neural Information Processing Systems, volume 9. MIT Press, 1996. URL https://proceedings.neurips.cc/paper_files/paper/1996/hash/a58149d355f02887dfbe55ebb2b64ba3-Abstract.html.

Shambhavi Mishra, Balamurali Murugesan, Ismail Ben Ayed, Marco Pedersoli, and Jose Dolz. Do not trust what you trust: Miscalibration in semi-supervised learning, 2024.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993, 2018.

Sangwoo Mo, Minkyu Kim, Kyungmin Lee, and Jinwoo Shin. S-CLIP: Semi-supervised vision-language learning using few specialist captions, 2023.

Khanh-Binh Nguyen. SequenceMatch: Revisiting the design of weak-strong augmentations for semi-supervised learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 96-106, January 2024a.

Khanh-Binh Nguyen. Debiasing, calibrating, and improving semi-supervised learning performance via simple ensemble projector. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2441-2451, January 2024b.

Hieu Pham, Zihang Dai, Qizhe Xie, and Quoc V. Le. Meta pseudo labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11557-11568, 2021.

Francois Porcher, Camille Couprie, Marc Szafraniec, and Jakob Verbeek. Better (pseudo-)labels for semi-supervised instance segmentation, 2024.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021.

Becca Roelofs, David Berthelot, Kihyuk Sohn, Nicholas Carlini, and Alex Kurakin.
AdaMatch: A unified approach to semi-supervised learning and domain adaptation. In International Conference on Learning Representations, 2022.

Amir Hossein Saberi, Amir Najafi, Alireza Heidari, Mohammad Hosein Movasaghinia, Abolfazl Motahari, and Babak H. Khalaj. Out-of-domain unlabeled data improves generalization, 2024.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Takashi Takahashi. The role of pseudo-labels in self-training linear classifiers on high-dimensional Gaussian mixture data, 2024.

Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.

Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Generalized category discovery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7492-7501, 2022.

Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, and Xing Xie. FreeMatch: Self-adaptive thresholding for semi-supervised learning. In International Conference on Learning Representations, 2023.

Xianglei Xing, Yao Yu, Hua Jiang, and Sidan Du. A multi-manifold semi-supervised Gaussian mixture model for pattern classification. Pattern Recognition Letters, 34(16):2118-2125, 2013. ISSN 0167-8655. doi: https://doi.org/10.1016/j.patrec.2013.08.005. URL https://www.sciencedirect.com/science/article/pii/S0167865513003000.

Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, and Rong Jin. Dash: Semi-supervised learning with dynamic thresholding. In International Conference on Machine Learning, pp. 11525-11536. PMLR, 2021.
David Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pp. 189-196, 1995.

Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2403-2412, 2018.

Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. FlexMatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems, 34:18408-18419, 2021.

Bingchen Zhao, Xin Wen, and Kai Han. Learning semi-supervised Gaussian mixture models for generalized category discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16623-16633, 2023.

Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. SimMatch: Semi-supervised learning with similarity matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14471-14481, 2022.

Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, and Haifeng Chen. Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection. In International Conference on Learning Representations, 2018.

APPENDIX

A STL-10 DATA CLEAN

Duplicate samples in the STL-10 dataset can potentially bias model evaluation. To address this, we deduplicated the dataset. Specifically, we used a script to compute image hashes for the STL-10 dataset, then removed 7,545 samples from the training set that were found to be duplicates of samples in the test set. The pseudo-code for this deduplication process is provided in Algorithm A.1.
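A minimal Python sketch of this hash-based check (an illustration using `hashlib` over raw image bytes; the authors' actual script and choice of hash function may differ):

```python
import hashlib
import numpy as np

def image_hash(arr: np.ndarray) -> str:
    """Exact-duplicate hash of an image array (assumption: duplicates are
    byte-identical; a perceptual hash would catch near-duplicates too)."""
    return hashlib.sha256(arr.tobytes()).hexdigest()

def deduplicate(train_images, test_images):
    """Drop training images whose hash collides with any test image."""
    test_hashes = {image_hash(img) for img in test_images}
    valid_indices, duplicates = [], []
    for i, img in enumerate(train_images):
        if image_hash(img) in test_hashes:
            duplicates.append(i)          # would be written to a CSV report
        else:
            valid_indices.append(i)
    return valid_indices, duplicates

# Toy check: train image 1 is an exact copy of a test image.
rng = np.random.default_rng(0)
test = [rng.integers(0, 255, (4, 4), dtype=np.uint8) for _ in range(2)]
train = [rng.integers(0, 255, (4, 4), dtype=np.uint8), test[0].copy()]
valid, dups = deduplicate(train, test)
```

Building the test-set hash dictionary once makes each training-image lookup O(1), which is what lets the scan run in a single pass over the training set.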
Algorithm A.1 Deduplication of Dataset
Require: Main dataset, test dataset, batch size
Ensure: Deduplicated dataset, CSV file with duplicate information
1: Build test set hash dictionary: compute a hash for each test image
2: Initialize valid indices and duplicates as empty lists
3: for each batch in main dataset do
4:   for each image in batch do
5:     Compute hash of the image
6:     if hash exists in test set hash dictionary then
7:       Record duplicate sample details
8:     else
9:       Add current index to valid indices
10:    end if
11:  end for
12: end for
13: if duplicates is not empty then
14:   Save duplicate details to CSV file
15: end if

Following processing, the training set consists of 5,000 labeled samples and 90,455 unsupervised samples. This effectively addresses the problem of data leakage, thereby guaranteeing the credibility of the experimental findings.

B COMPARING HEATMAPS OF DNN AND VIT

We compare the heatmaps of DNN and ViT on several randomly selected images from the internet, each containing two animals simultaneously. The heatmaps are generated using the Grad-CAM method, which visualizes the importance of each pixel in the image for the classification result.

[Figure B.1: Heatmap comparison. From left to right: raw image, DLA169, DINO. Note the small penguin on the right in the third row.]

As shown in Figure B.1, the heatmaps of DNN and ViT differ significantly. While the DNN focuses primarily on one animal, the ViT attends to both animals in the image, demonstrating its ability to capture global context more effectively. This comparison highlights the differences in how these models prioritize regions of the image during classification.

C PCA EXPLAINED VARIANCE

In our experiments, the PCA-dimension hyperparameter for each dataset is determined by the PCA explained variance, which represents the percentage of variance in the data that can be explained by the feature vectors after PCA dimensionality reduction.
We typically choose dimensions that can explain over 60% of the variance.

[Figure C.1: PCA explained variance. The cumulative percentage of variance explained by the PCA dimensions on the CIFAR-10 dataset.]

D VIT CLASSIFICATION HEAD FOR SEMI-SUPERVISED LEARNING

The following is the mathematical formulation of the original ViT classification head for semi-supervised learning, which is used in our ablation study in Section 4.5. In Section 3.1, we define the feature vector $f_i \in \mathbb{R}^d$. The standard ViT classification head applies a linear transformation $s_i = W_h f_i + b_h$, where $W_h \in \mathbb{R}^{K \times d}$ and $b_h \in \mathbb{R}^K$ are learnable parameters and $K$ is the number of categories, followed by a softmax:

$$P(c = k \mid f_i) = \frac{\exp(s_{i,k})}{\sum_{j=1}^{K} \exp(s_{i,j})}, \quad \forall k \in \{1, \ldots, K\}$$

For labeled samples $\mathcal{F}_l = \{(f_i, c_i)\}$, we optimize the standard cross-entropy loss:

$$\mathcal{L}_{\mathrm{sup}} = -\frac{1}{|\mathcal{F}_l|} \sum_{(f_i, c_i) \in \mathcal{F}_l} \log P(c = c_i \mid f_i)$$

For unlabeled data $\mathcal{F}_u = \{f_j\}$, we adopt a confidence-based pseudo-labeling strategy, where the pseudo-label is $\hat{c}_j = \arg\max_k P(c = k \mid f_j)$ and the confidence score is $\xi_j = \max_k P(c = k \mid f_j)$. The unsupervised loss is calculated as:

$$\mathcal{L}_{\mathrm{unsup}} = -\frac{1}{|\mathcal{F}_u|} \sum_{f_j \in \mathcal{F}_u} \mathbb{I}(\xi_j > \tau) \log P(c = \hat{c}_j \mid f_j)$$

The total loss combines the supervised and unsupervised objectives with a time-varying weight $\lambda(t) = \lambda_{\max} \cdot \min(1, t / T_{\mathrm{ramp}})$, where $T_{\mathrm{ramp}}$ is the ramp-up period:

$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{sup}} + \lambda(t)\, \mathcal{L}_{\mathrm{unsup}}$$

| 8 | 1 | The ViTSGMM model utilizes the Vision Transformer architecture (likely ViT-base or ViT-large), which has approximately 86 million parameters for ViT-base and over 300 million for ViT-large. Considering the CIFAR-10 dataset has 60,000 images and STL-10 has about 13,000 images, the added computational complexity from semi-supervised and mixture density classification methods further increases training time.
Assuming a reasonable batch size of 32, which is common for transformer architectures, training on a single high-end GPU (such as an NVIDIA A100 with 40GB of memory) should suffice to accommodate the model size and dataset. Given that no extreme optimizations are noted in the paper, an estimate of around 8 hours on one GPU seems plausible for modern training setups, though standard training routines may run slightly longer. | yes | Yes | CV | ViTSGMM: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels | 2025-06-04T00:00:00.000Z | [https://github.com/Shu1L0n9/SemiOccam] | 1 | Code downloads dynamically after changing the dataset name | 3 Hours | Copy of experiment.ipynb | Yes | It starts and runs successfully |
CIFAR-10 | ResNet18 (FSGDM) | [] | On the Performance Analysis of Momentum Method: A Frequency Domain Perspective | 2024-11-29T00:00:00 | https://arxiv.org/abs/2411.19671v6 | [
"https://github.com/yinleung/FSGDM"
] | {'Percentage correct': '95.66'} | [
"Percentage correct",
"Top-1 Accuracy",
"Accuracy",
"Parameters",
"Top 1 Accuracy",
"F1",
"Cross Entropy Loss"
] | Given the following paper and codebase:
Paper: On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
Codebase: https://github.com/yinleung/FSGDM
Improve the ResNet18 (FSGDM) model on the CIFAR-10 dataset. The result
should improve on the following metrics: {'Percentage correct': '95.66'}. You must use only the codebase provided.
Published as a conference paper at ICLR 2025

ON THE PERFORMANCE ANALYSIS OF MOMENTUM METHOD: A FREQUENCY DOMAIN PERSPECTIVE

Xianliang Li∗1,2, Jun Luo∗1,2, Zhiwei Zheng∗3, Hanxiao Wang2,4, Li Luo5, Lingkun Wen2,6, Linlong Wu7, Sheng Xu†1
1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2University of Chinese Academy of Sciences  3University of California, Berkeley
4Institute of Automation, Chinese Academy of Sciences  5Sun Yat-sen University
6Shanghai Astronomical Observatory, Chinese Academy of Sciences  7University of Luxembourg
yinleung.ley@gmail.com, {j.luo3,sheng.xu}@siat.ac.cn, zhiwei.zheng@berkeley.edu, wanghanxiao18@mails.ucas.ac.cn, luoli33@mail2.sysu.edu.cn, wenlingkun@shao.ac.cn, linlong.wu@uni.lu

ABSTRACT

Momentum-based optimizers are widely adopted for training neural networks. However, the optimal selection of momentum coefficients remains elusive. This uncertainty impedes a clear understanding of the role of momentum in stochastic gradient methods. In this paper, we present a frequency domain analysis framework that interprets the momentum method as a time-variant filter for gradients, where adjustments to momentum coefficients modify the filter characteristics. Our experiments support this perspective and provide a deeper understanding of the mechanism involved. Moreover, our analysis reveals the following significant findings: high-frequency gradient components are undesired in the late stages of training; preserving the original gradient in the early stages, and gradually amplifying low-frequency gradient components during training, both enhance performance. Based on these insights, we propose Frequency Stochastic Gradient Descent with Momentum (FSGDM), a heuristic optimizer that dynamically adjusts the momentum filtering characteristic with an empirically effective dynamic magnitude response.
Experimental results demonstrate the superiority of FSGDM over conventional momentum optimizers.

1 INTRODUCTION

Momentum has achieved great success in deep learning applications when combined with Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Among various momentum methods (Polyak, 1964; Nesterov, 1983; Van Scoy et al., 2017; Ma & Yarats, 2018; Kidambi et al., 2018), one of the most prevalent variants is the momentum method utilized within Stochastic Gradient Descent with Momentum (SGDM) (Sutskever et al., 2013; Paszke et al., 2019), which can be expressed as:

Standard-SGDM (decoupled): $m_t = u_t m_{t-1} + v_t g_t$, $x_t = x_{t-1} - \alpha_t m_t$, (1)

where $g_t$ denotes the gradient at iteration $t$, $m_t$ is the momentum buffer, and $x_t$ represents the learnable parameters. The momentum coefficients $u_t$ and $v_t$ control the influence of the previous momentum and the current gradient, respectively, and $\alpha_t$ is the learning rate. For these time-variant momentum coefficients, a multistage setting has been commonly adopted in the machine learning community (Aybat et al., 2019; Kulunchakov & Mairal, 2019; Liu et al., 2020). Throughout this paper, we refer to this formulation, which decouples the two momentum coefficients, as Standard-SGDM. In contrast, another prevalent variant couples the two momentum coefficients using the Exponential Moving Average (EMA) method (Gardner Jr, 1985), leading to the formulation of EMA-SGDM:

EMA-SGDM (coupled): $m_t = u_t m_{t-1} + (1 - u_t) g_t$, $x_t = x_{t-1} - \alpha_t m_t$, (2)

where $u_t \in [0, 1)$ is the momentum coefficient. Notably, this coupled momentum formulation is a special case of the decoupled one, i.e., Standard-SGDM with $v_t = 1 - u_t$. Our experiments show performance gaps between these two formulations.

∗: Equal contribution. †: Corresponding author.
1 Our implementation of FSGDM is available at https://github.com/yinleung/FSGDM.
arXiv:2411.19671v6 [cs.LG] 21 May 2025
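To make the relationship between the two formulations concrete, here is a minimal NumPy sketch of both update rules (our own illustration, not the released FSGDM code), confirming numerically that EMA-SGDM coincides with Standard-SGDM when v_t = 1 - u_t:

```python
import numpy as np

def standard_sgdm_step(x, m, g, u, v, lr):
    """Decoupled momentum (Eq. 1): m_t = u*m + v*g; x_t = x - lr*m_t."""
    m = u * m + v * g
    return x - lr * m, m

def ema_sgdm_step(x, m, g, u, lr):
    """Coupled EMA momentum (Eq. 2): m_t = u*m + (1-u)*g."""
    m = u * m + (1.0 - u) * g
    return x - lr * m, m

# EMA-SGDM is Standard-SGDM with v = 1 - u: both produce identical trajectories.
rng = np.random.default_rng(0)
x1 = x2 = rng.normal(size=5)
m1 = m2 = np.zeros(5)
u, lr = 0.9, 0.1
for _ in range(10):
    g = rng.normal(size=5)               # stand-in for a stochastic gradient
    x1, m1 = standard_sgdm_step(x1, m1, g, u, 1.0 - u, lr)
    x2, m2 = ema_sgdm_step(x2, m2, g, u, lr)
```

With the PyTorch-style decoupled choice v_t = 1 instead, the update direction stays the same, but the momentum buffer's steady-state magnitude is scaled by roughly 1/(1 - u_t), which is why the two variants behave differently under independently tuned learning-rate schedules.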
Moreover, how the momentum coefficients change over time can significantly affect the test accuracy (see Section 3). The existence of these two distinct momentum formulations and their differing performances raises two primary questions in modern deep learning:

1. Decoupling vs. Coupling: Should the coefficients $u_t$ and $v_t$ be decoupled or coupled?
2. Temporal Variation: How should the momentum coefficients evolve over time during training to achieve better model performance?

For Question 1, some literature has investigated the convergence of the coupled method (Mai & Johansson, 2020; Li et al., 2022). Liu et al. (2020) argued that coupling the coefficients leads only to a constant scaling difference. Wang et al. (2024) further demonstrated that mathematical equivalence between EMA-SGDM and Standard-SGDM can be achieved by adjusting the momentum coefficients and the learning rates in a coupled way. However, in practice, learning rate schedules are typically tuned independently of the momentum coefficients during network training. On the other hand, popular frameworks like PyTorch (Paszke et al., 2019) adopt a decoupled momentum strategy by default. In our framework, we tackle the first question from the frequency domain perspective, revealing the relationship between the coupled and decoupled constructions.

Regarding Question 2, prior research offered diverse opinions on how the momentum coefficients should vary over time. Some studies preferred fixed decoupled momentum coefficients (Yan et al., 2018; Liu et al., 2018; Yu et al., 2019), commonly setting $u_t$ to 0.9 and $v_t$ to 1. Liu et al. (2020) highlighted the benefits of stagewise learning rate schedules in EMA-SGDM, noting that $u_t$ can either remain constant or increase along with the stagewise adjustments. Conversely, Smith (2018) demonstrated that decreasing the momentum coefficients while increasing the learning rate improves test performance.
Moreover, adaptive momentum methods (Kingma & Ba, 2014; Reddi et al., 2018; Luo et al., 2019; Chen et al., 2018) proved the convergence of decreasing coupled momentum coefficients in the context of online convex optimization. Nonetheless, a consensus regarding the optimal time-variant pattern of the momentum coefficients has yet to be reached.

To answer these questions, one has to understand how the momentum method affects the training process. Goh (2017) analyzed the momentum method from the aspect of convergence and dynamics. Several prior studies (Cutkosky & Orabona, 2019; Ma & Yarats, 2018) speculated that averaging past stochastic gradients through momentum might reduce the variance of the noise in the parameter update, thus making the loss decrease faster. Polyak (1964) and Rumelhart et al. (1986) argued that the EMA momentum can cancel out oscillations along high-curvature directions and add up contributions along low-curvature directions. From the signal processing perspective, the EMA method acts as a discrete low-pass filter, smoothing out high-frequency fluctuations while retaining the low-frequency baseband pattern of the signal (Gardner Jr, 1985). These points of view give us a new insight: connecting the momentum update processes with specific filters. In this light, momentum methods with different coefficient selections can be interpreted within a unified frequency domain analysis framework, whereby Questions 1 and 2 are resolved.

In this paper, we propose a novel frequency domain analysis framework to address the two questions and provide a deeper understanding of the role of momentum in stochastic optimization. To the best of our knowledge, this paper, for the first time, reveals the fundamental difference between Standard-SGDM and EMA-SGDM and clearly uncovers the effects of dynamic momentum coefficients from the frequency domain perspective.
This perspective not only explains the differences between various momentum methods but also provides practical guidelines for designing efficient optimizers. Accordingly, we introduce FSGDM, an optimizer that dynamically adjusts momentum filter characteristics during training. Experiments show that FSGDM outperforms traditional SGD-based momentum optimizers.

2 FREQUENCY DOMAIN ANALYSIS FRAMEWORK

This section introduces the background of the Z-transform (Zadeh, 1950) in signal processing and then proposes a new frequency domain analysis framework for momentum methods.

2.1 Z-TRANSFORM AND QUASI-STATIONARY APPROXIMATION

Frequency analysis is a crucial technique for understanding how systems react to varying frequency components of input signals. Specifically, for discrete-time linear time-invariant systems, the Z-transform is leveraged to examine how systems attenuate or amplify signals at specific frequencies, especially in the study of system stability, pole-zero behavior, etc. (Oppenheim et al., 1996). Interestingly, in neural network training, the momentum update process at time t can be seen as a recursive filter where the gradient g_t and the momentum m_t act as the input and output signals, respectively. The momentum coefficients affect the gradient adjustments across different frequency components. The high-frequency gradient components correspond to large and more abrupt changes in the gradient, while the low-frequency components indicate smooth and more gradual adjustments.

However, one key issue is that the momentum system can be inherently time-variant, as its coefficients may change stagewise throughout the training process. This variability makes it difficult to apply traditional Z-transform analysis. To overcome this, inspired by Zadeh (1961) and Jury (1964), we approximate the system as time-invariant within each discrete interval stage.
By holding the momentum coefficients constant over every interval, we construct a time-invariant quasi-stationary system (Hubner & Tran-Gia, 1991), enabling us to apply the Z-transform validly. In our following analysis framework and our later optimizer design, we follow this multistage strategy for changing momentum coefficients. Particularly, for a predefined stage whose length is denoted by δ, the momentum coefficients are redefined using the floor function to ensure they remain constant over the whole stage:

u_t = u(⌊t/δ⌋ × δ)  and  v_t = v(⌊t/δ⌋ × δ),  (3)

where u(t), v(t) are the continuous dynamic sequence functions with respect to t. While there are multiple sequences with different designs, in this paper we use the following increasing and decreasing sequences:

Increasing: u(t) or v(t) = t / (t + µ),  Decreasing: u(t) or v(t) = 1 − (t + 1) / (t + ν),  (4)

where µ and ν are the increasing and decreasing factors². In Appendix C.1, we also examine the test set performance of other kinds of dynamic sequences. Under the above settings, for a given stage k (k = 1, ..., N), with t ∈ [(k−1)δ, kδ−1], the momentum system becomes:

m_t = u_k m_{t−1} + v_k g_t,  (5)

where u_k = u((k−1)δ) and v_k = v((k−1)δ) are constants for the duration of the k-th stage. Additionally, we set the total number of stages, denoted by N, to a constant value of 300 for all the experiments in this paper.

2.2 FREQUENCY DOMAIN ANALYSIS OF THE MOMENTUM METHOD

In this subsection, we introduce our frequency domain analysis framework and analyze the impacts of the momentum method on neural network training. We first apply the Z-transform, denoted by Z, to Equation 5:

M(z) = u_k z⁻¹ M(z) + v_k G(z),  (6)

where G(z) = Z{g_t}, M(z) = Z{m_t}, and z⁻¹ M(z) = Z{m_{t−1}}. To obtain the frequency response of the momentum system during stage k, we evaluate the transfer function H_k(z) on the unit circle (Oppenheim et al., 1996):

H_k(z) = M(z) / G(z) = v_k / (1 − u_k z⁻¹),  which at z = e^{jω} gives  H_k(ω) = v_k / (1 − u_k e^{−jω}),  (7)

where ω ∈ [0, π] is the normalized angular frequency of the real-valued signal.
The frequency response of the momentum system describes how the input gradient signal G(z) is altered to produce the output momentum signal M(ω) when it passes through the system. Note that this transfer function is valid for the entire duration of the k-th quasi-stationary stage.

²Note that, unlike the increasing sequence, the numerator of the decreasing sequence is t + 1. This design avoids zero gradients at the first training stage.

Magnitude Response. The magnitude response of the momentum system in the k-th stage can be calculated by taking the magnitude of H_k(ω):

|H_k(ω)| = |v_k| / √(1 − 2 u_k cos ω + u_k²).  (8)

The magnitude response describes the amplitude scaling effect of the system at different frequencies. It indicates how the momentum system amplifies or attenuates different frequency components during each stage. This characteristic of the momentum system plays a key role in the optimization process. Notably, when |H_k(ω)| < 1, the momentum system attenuates signals with frequency ω; when |H_k(ω)| > 1, it amplifies them. Consequently, we divide momentum systems into two categories: Orthodox Momentum Systems and Unorthodox Momentum Systems.

Orthodox Momentum Systems are those whose magnitude response never exceeds 1, like EMA-SGDM (2). This kind of momentum system only shows attenuating characteristics. Specifically, the momentum system behaves as a low-pass filter when u_k > 0 and a high-pass filter when u_k < 0. Additionally, when u_k gets close to 1, the momentum system preferentially attenuates the gradient components with high frequencies. The visualization of the (dynamic) magnitude responses of orthodox momentum systems is in Section 3.1 and Appendix C.2.
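Equation 8 and the stagewise coefficients of Equations 3–4 are easy to evaluate numerically. The sketch below (illustrative only, with hypothetical helper names) reproduces the orthodox/unorthodox distinction: an EMA low-pass filter (v = 1 − u) never exceeds unit gain, while the decoupled setting u_k = 0.9, v_k = 1 has a DC gain of 1/(1 − 0.9) = 10.

```python
import math

def staged(t, delta, seq):
    """Hold a coefficient constant within each stage of length delta (Eq. 3)."""
    return seq((t // delta) * delta)

def u_increasing(t, mu=1e4):
    """Increasing sequence u(t) = t / (t + mu) from Eq. 4."""
    return t / (t + mu)

def magnitude_response(u_k, v_k, w):
    """|H_k(w)| = |v_k| / sqrt(1 - 2*u_k*cos(w) + u_k**2)  (Eq. 8)."""
    return abs(v_k) / math.sqrt(1.0 - 2.0 * u_k * math.cos(w) + u_k ** 2)

# Orthodox (EMA, v = 1 - u): gain never exceeds 1 at any frequency.
u = staged(90_050, 100, u_increasing)  # late-stage coefficient, here u = 0.9
assert all(magnitude_response(u, 1 - u, w) <= 1.0 + 1e-12
           for w in [i * math.pi / 100 for i in range(101)])

# Unorthodox (decoupled, v = 1): DC gain is 1 / (1 - u) = 10 at u = 0.9.
print(round(magnitude_response(0.9, 1.0, 0.0), 6))  # 10.0
```

The same helper can be swept over stages k to reproduce the "filter narrowing" behavior discussed in Section 3.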
For Unorthodox Momentum Systems, whose magnitude response can surpass 1, such as Standard-SGDM (1) with u_t = 0.9 and v_t = 1, the momentum system possesses both amplifying and attenuating characteristics. In this paper, we refer to these kinds of unorthodox filters as low/high-pass gain filters. Specifically, the momentum system behaves as a low-pass gain filter when u_k > 0, v_k = 1 and a high-pass gain filter when u_k < 0, v_k = 1. Additionally, if u_k is close to 1, the momentum system attenuates high-frequency gradient components while strongly amplifying low-frequency components; if u_k is close to −1, the momentum system attenuates low-frequency gradient components while strongly amplifying high-frequency components. The visualization of the (dynamic) magnitude responses of unorthodox momentum systems is in Section 3.2 and Appendix C.2.

To demonstrate the momentum effects from the frequency perspective, in Figure 1 we compare an original sinusoidal signal, a noisy version injected with Gaussian noise, and the signal after applying the momentum method (called the momentum signal for short) in the time domain. The red curve represents the noisy signal, the black dashed curve corresponds to the original noise-free true signal, and the cyan curve shows the momentum signal. We can see that different selections of u_k and v_k significantly affect the amplifying or attenuating effects of the momentum system.
[Figure 1: four panels, each plotting the noisy signal, the true signal, and the momentum signal over 300 time steps: (a) dynamic low-pass filter, (b) dynamic high-pass filter, (c) low-pass gain filter, (d) high-pass gain filter.]

Figure 1: Visualization of different filters applied to the noisy sinusoidal signal. (a) u_k = 0 → 1, v_k = 1 − u_k, with the system gradually shifting from an all-pass filter to a narrow low-pass filter; (b) u_k = 0 → −1, v_k = 1 + u_k, with the system gradually shifting from an all-pass filter to a narrow high-pass filter; (c) u_k = 0.9, v_k = 1, where the momentum behaves like a low-pass gain filter with amplification of low-frequency gradient components; (d) u_k = −0.9, v_k = 1, where the momentum behaves like a high-pass gain filter with amplification of high-frequency components. The amplifying and attenuating effects of different momentum systems are verified.

Similarly, we also have the phase response of the momentum system (see Appendix A). While the phase response of the momentum provides only limited insights, understanding the behavior of the magnitude response across stages is essential for analyzing the time-variant characteristics of the momentum system. By plotting the dynamic magnitude response value |H_k(ω)| on the normalized angular frequency axis for each stage k, we can track how the frequency-dependent behavior of the multistage momentum system evolves. This provides valuable insights into the amplifying or attenuating characteristics of the momentum system. Further results on the comparisons of momentum systems with different dynamic magnitude responses are presented in the next section.
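The effect illustrated in Figure 1 can be reproduced with a few lines of plain Python (an illustrative sketch; the signal parameters are chosen arbitrarily, not taken from the paper): filtering a noisy sinusoid with the EMA low-pass setting smooths the noise, while the gain setting v = 1 produces the same waveform scaled by 1/(1 − u) = 10.

```python
import math, random

def momentum_filter(signal, u, v):
    """Run m_t = u*m_{t-1} + v*g_t over a 1-D signal, returning the momentum signal."""
    m, out = 0.0, []
    for g in signal:
        m = u * m + v * g
        out.append(m)
    return out

random.seed(0)
true_sig = [50.0 * math.sin(2 * math.pi * t / 100) for t in range(300)]
noisy = [s + random.gauss(0.0, 20.0) for s in true_sig]

low_pass = momentum_filter(noisy, 0.9, 0.1)  # orthodox EMA low-pass: attenuates noise
low_gain = momentum_filter(noisy, 0.9, 1.0)  # unorthodox low-pass gain filter

# The low-pass output is visibly smoother: its step-to-step jitter shrinks.
jitter = lambda xs: sum((a - b) ** 2 for a, b in zip(xs[1:], xs[:-1]))
assert jitter(low_pass) < jitter(noisy)

# Because the filter is linear, the gain variant is the EMA output scaled by 10.
assert all(abs(a - 10.0 * b) < 1e-6 for a, b in zip(low_gain, low_pass))
```

The exact 10x relationship between the two outputs mirrors panel (c) of Figure 1 and the "constant scaling" observation of Liu et al. (2020) for fixed coefficients.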
3 DYNAMIC MAGNITUDE RESPONSE OF THE MOMENTUM SYSTEMS

In this section, we present an empirical study to discover the influence of the momentum coefficients by comparing the test performance of momentum systems with different dynamic magnitude responses. We train VGG (Simonyan & Zisserman, 2014) on the CIFAR-10 (Krizhevsky et al., 2009) dataset and ResNet50 (He et al., 2016) on the CIFAR-100 dataset using different momentum coefficients, while keeping all other hyperparameters unchanged. For each experiment, we report the mean and standard error (as subscripts) of test accuracy over 3 runs with random seeds 0–2. The detailed experimental settings can be found in Appendix D. The experimental results on CIFAR-10 show high similarity to those on CIFAR-100. Thus, here, we mainly focus on the analysis based on CIFAR-100 and defer the experimental results of VGG16 on CIFAR-10 to Appendix C.3.

3.1 ORTHODOX MOMENTUM SYSTEMS

[Figure 2: six panels — (a)–(c) plot |H(ω)| over frequency for increasing (µ = 1e3, 1e4, 1e5), fixed (u_t = 0.3, 0.6, 0.9), and decreasing (ν = 1e2, 1e3, 1e4) sequences; (d)–(f) compare gradient norms and momentum norms over 300 epochs for the same settings.]

Figure 2: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for EMA-SGDM with low-pass
momentum defined in Equation 9. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for EMA-SGDM with low-pass momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.

We first focus on the orthodox momentum systems with the following two main types, low-pass and high-pass momentum, defined as:

Low-pass: m_t = u_t m_{t−1} + (1 − u_t) g_t,  High-pass: m_t = −u_t m_{t−1} + (1 − u_t) g_t,  (9)

where u_t ∈ [0, 1) can be set as an increasing sequence, a decreasing sequence, or a fixed value. For time-variant momentum systems, different strategies for u_t result in different time-variant filtering characteristics during training. According to Section 2.1, scaling the increasing and decreasing factors affects the changing rates of u_t. In the following, we demonstrate the dynamic magnitude responses, comparisons between gradient norms and momentum norms, and test accuracy results of orthodox momentum systems under different u_t sequences³.

³Note that selecting µ = 100 and ν = 10⁴ leads to a long stage of a super narrow-band filter. To avoid this problem, we select µ = 10³, 10⁴, 10⁵ and ν = 10², 10³, 10⁴ in this paper.

Example 1: Low-Pass Momentum. We first explore the effect of increasing, fixed, and decreasing u_t sequences in low-pass momentum. Figures 2(a)–2(c) show the corresponding dynamic magnitude responses over time. With increasing u_t, the system transitions from an all-pass to a progressively narrower low-pass filter, gradually attenuating high-frequency components. Larger µ results in slower transitions. Decreasing u_t shows the reverse behavior, with larger ν resulting in slower transitions. A fixed u_t maintains a constant filter, with larger u_t leading to more aggressive smoothing and noise reduction characteristics.
The norm comparisons in Figures 2(d)–2(f) show that the momentum norms in low-pass momentum systems are always smaller than the corresponding gradient norms. Larger u_t or ν and smaller µ lead to more strongly reduced momentum norms, which validates the time-variant filtering characteristics of orthodox momentum systems. Test accuracy results in Table 1 reveal that increasing or fixed u_t can achieve higher accuracy than decreasing sequences of u_t. In particular, momentum systems with properly increasing sequences of u_t can outperform those with fixed u_t. We also find that larger ν results in poorer model performance. These phenomena indicate that gradually attenuating high-frequency components during training improves test set performance, while excessive suppression of low-frequency gradient components in early stages and retention of high-frequency components in late stages degrade model performance.

Example 2: High-Pass Momentum. High-pass momentum systems exhibit dynamic magnitude responses symmetric to, and norm comparisons similar to, their low-pass counterparts (see Figure 6 in Appendix C.2). With increasing u_t, the system shifts from an all-pass to a narrow high-pass filter, progressively attenuating low-frequency components. Decreasing sequences act in reverse. Fixed sequences with larger u_t lead to more aggressive attenuation of low-frequency components. The comparison of gradient norms and momentum norms can be found in Appendix C.2. Test accuracy in Table 1 shows that dynamic high-pass systems with larger µ and smaller ν yield better top-1 accuracy. When selecting fixed values, momentum systems with larger u_t perform more poorly. These results confirm that suppressing low-frequency gradient components is harmful.
Moreover, high-pass systems generally outperform low-pass systems when applying decreasing strategies with the same ν, suggesting that high-frequency components play a crucial role in the early training stages, which is also supported by the studies in Appendix C.4. From Examples 1 and 2, we empirically verify that high-frequency gradient components are detrimental in late training stages, while their preservation in early stages leads to higher test accuracy, matching the viewpoint that gradient noise has a generalization benefit early in training (Smith et al., 2020).

Table 1: Top-1 accuracy (%) comparisons of different momentum coefficient strategies of orthodox momentum systems, ResNet50 on CIFAR-100 (mean ± standard error over 3 runs).

           | Increasing factor (µ)            | Fixed value (u_t)                | Decreasing factor (ν)
Parameters | 1k         10k        100k       | 0.3        0.6        0.9        | 100        1k         10k
Low-pass   | 77.12±0.07 77.06±0.14 76.86±0.12 | 76.98±0.09 76.82±0.18 76.84±0.06 | 72.58±0.44 70.53±0.31 69.69±0.75
High-pass  | 51.59±0.78 67.55±0.22 74.72±0.06 | 72.46±0.13 65.14±0.17 53.43±0.26 | 76.82±0.25 75.92±0.12 70.99±0.18

3.2 UNORTHODOX MOMENTUM SYSTEMS

Unorthodox momentum systems allow magnitude responses larger than 1, meaning they can both attenuate and amplify gradients in different frequency bands. We focus on two main types, low-pass gain and high-pass gain momentum, defined as:

Low-pass gain: m_t = u_t m_{t−1} + g_t,  High-pass gain: m_t = −u_t m_{t−1} + g_t,  (10)

where u_t ∈ [0, 1) can follow increasing, fixed, or decreasing sequences. For simplicity, we use the PyTorch setting with v_t = 1. We show the dynamic magnitude responses, comparisons between gradient norms and momentum norms, and test accuracy results of unorthodox momentum systems under different u_t sequences as follows.

Example 3: Low-Pass Gain Momentum. In low-pass gain momentum, the system transitions from an all-pass to a narrower low-pass gain filter as u_t increases, amplifying low-frequency components while attenuating high-frequency components.
[Figure 3: six panels — (a)–(c) plot |H(ω)| over frequency for increasing (µ = 1e3, 1e4, 1e5), fixed (u_t = 0.3, 0.6, 0.9), and decreasing (ν = 1e2, 1e3, 1e4) sequences; (d)–(f) compare gradient norms and momentum norms over 300 epochs for the same settings.]

Figure 3: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for Standard-SGDM with low-pass gain momentum defined in Equation 10. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for Standard-SGDM with low-pass gain momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.

Figures 3(a)–3(c) show the corresponding dynamic magnitude responses over time. A large µ corresponds to slow shifts. Decreasing u_t reverses the trend, heavily amplifying low-frequency components early and relaxing this effect over time. A fixed u_t maintains a constant filter, in which larger u_t amplifies low-frequency components more aggressively.
Figures 3(d)–3(f) show momentum norms larger than the gradient norms, indicating the amplification effects in gain filters. Larger u_t or ν and smaller µ lead to more strongly reduced momentum norms, which validates the time-variant filtering characteristics of unorthodox momentum systems. Test results in Table 2 indicate that increasing u_t with an appropriate µ outperforms the scenarios using fixed and decreasing sequences of u_t. We also find that smaller ν yields worse test accuracy. From these results, we conclude that amplifying low-frequency gradient components, while properly attenuating high-frequency ones, improves test set performance.

Example 4: High-Pass Gain Momentum. High-pass gain momentum mirrors the dynamic magnitude response behavior of low-pass gain systems (see Figure 7 in Appendix C.2). Increasing u_t gradually amplifies high-frequency gradient components and attenuates low-frequency ones. Decreasing u_t reverses this pattern, heavily amplifying high-frequency components early on. Fixed constructions amplify high-frequency components more aggressively for larger u_t. The comparison of gradient norms and momentum norms can be found in Appendix C.2. Test accuracy in Table 2 shows that fixed constructions with larger u_t and decreasing u_t with larger ν perform worse. These findings confirm that amplifying high-frequency gradients in training can be undesirable.

From Examples 3 and 4, we empirically verify that proper amplification in unorthodox momentum systems can improve model performance, particularly when amplifying low-frequency gradient components.

Table 2: Top-1 accuracy (%) comparisons of different momentum coefficient strategies of unorthodox momentum systems, ResNet50 on CIFAR-100 (mean ± standard error over 3 runs).
              | Increasing factor (µ)            | Fixed value (u_t)                | Decreasing factor (ν)
Parameters    | 1k         10k        100k       | 0.3        0.6        0.9        | 100        1k         10k
Low-pass gain | 76.10±0.14 80.48±0.03 78.02±0.03 | 78.01±0.04 79.51±0.15 79.71±0.25 | 70.37±0.67 71.53±0.62 76.18±0.38
High-pass gain| 75.47±0.21 74.54±0.16 75.97±0.27 | 75.68±0.18 74.56±0.09 73.77±0.18 | 76.41±0.41 74.00±0.26 68.90±0.82

3.3 DISCUSSION

The differences in norm comparisons and test accuracy between orthodox and unorthodox momentum systems validate the distinctions between EMA-SGDM and Standard-SGDM. While EMA-SGDM has only attenuating filter effects, Standard-SGDM can both amplify and attenuate gradient components at different frequencies. Moreover, our findings indicate that with appropriate momentum coefficients, Standard-SGDM consistently outperforms EMA-SGDM, showing the advantages of decoupling momentum coefficients, which answers Question 1.

Regarding Question 2, the test results show that decoupled momentum coefficients with a properly increasing u_t and fixed v_t can achieve better performance. In particular, our empirical findings reveal the following insights for training convolutional neural networks (CNNs): (1) high-frequency gradient components are undesirable in the late stages of training; (2) preserving the original gradient in the early stages leads to improved test set accuracy; (3) gradually amplifying low-frequency gradient components enhances performance. Furthermore, we find that these insights also carry over to various learning areas (see Section 5). Based on these insights, it may be possible to design a more effective optimizer by appropriately adjusting the momentum coefficients.

4 FREQUENCY-BASED OPTIMIZER

As suggested by our frequency domain analysis framework, achieving better test performance is equivalent to finding an appropriate dynamic filter-changing pattern for momentum systems.
Based on this idea, we propose FSGDM, a heuristic optimizer that dynamically adjusts momentum filtering characteristics. Furthermore, to explore the potential optimal strategies of our proposed FSGDM based on the findings in Section 3.3, several sets of experiments on various deep-learning tasks are conducted.

4.1 FREQUENCY STOCHASTIC GRADIENT DESCENT WITH MOMENTUM

Algorithm 1: FSGDM
  Input: Σ, c, v, N
  Initialization: m_0, µ = cΣ, δ = Σ/N
  for each t = 1, 2, . . . do
      g_t = ∇L_t(x_{t−1}, ζ_{t−1})
      u(t) = t / (t + µ),  u_t = u(⌊t/δ⌋ × δ)
      m_t = u_t m_{t−1} + v g_t
      x_t = x_{t−1} − α_t m_t
  end

Generally, determining the best optimization strategy by tuning u_t and v_t according to our frequency domain analysis is challenging. In the field of signal processing, how to select the best filters for different problems is still an open problem. However, we can design a better optimizer based on the findings in Section 3.3. Still, there are infinitely many dynamic magnitude responses that meet the requirements of the aforementioned findings. Based on Occam's razor, we provide a minimalist form of our proposed optimizer in Algorithm 1, where Σ is the total number of gradient update steps in the whole training process, determined by the epoch number and the size of the dataset; c is a scaling factor; L_t : R^d → R is the loss for the t-th step; ζ_{t−1} denotes a minibatch drawn from the training data; and N is the number of stages. µ and v are adjustable parameters that dominate the filtering characteristic of FSGDM. Moreover, since µ is a function of Σ, the dynamic magnitude response is inherited when Σ varies. In particular, we have the following proposition.

Proposition 1. With the number of stages N and the scaling factor c fixed, the dynamic magnitude response of Algorithm 1 remains invariant with respect to changes in the total number of training steps.

The proof of Proposition 1 is deferred to Appendix B.3. By this, we show that the dynamic magnitude response of a well-performing FSGDM can be adapted to various tasks.
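A minimal, framework-free rendering of Algorithm 1 for a single scalar parameter might look as follows (an illustrative sketch with a hypothetical function name, not the released implementation; the official code is at the GitHub link in the footnote). Note how µ = cΣ makes the stage-k coefficient u_k = (k−1)δ / ((k−1)δ + cΣ) = (k−1) / (k−1 + cN), which depends only on the stage index k, c, and N — the content of Proposition 1.

```python
import math

def fsgdm_run(grad_fn, x0, lr, total_steps, c=0.033, v=1.0, n_stages=300):
    """Algorithm 1 (FSGDM) on one scalar parameter.

    total_steps plays the role of Sigma; mu = c * Sigma and delta = Sigma / N,
    so u_t = t' / (t' + mu) is held constant within each of the N stages
    (t' is t floored to the stage boundary).
    """
    mu = c * total_steps
    delta = total_steps / n_stages
    m, x = 0.0, x0
    for t in range(1, total_steps + 1):
        g = grad_fn(x)                           # g_t: gradient of the loss at x_{t-1}
        t_stage = math.floor(t / delta) * delta  # floor t to the stage boundary
        u = t_stage / (t_stage + mu)             # staged u_t, increasing toward 1
        m = u * m + v * g                        # m_t = u_t m_{t-1} + v g_t
        x = x - lr * m
    return x

# Minimizing f(x) = x^2 (gradient 2x) drives x toward 0.
x_final = fsgdm_run(lambda x: 2.0 * x, 5.0, lr=0.05, total_steps=300)
assert abs(x_final) < 0.5
```

Because u depends on t only through the ratio t'/Σ (via µ = cΣ and δ = Σ/N), rerunning with a different `total_steps` traverses the same sequence of N stage filters, consistent with Proposition 1.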
In the following subsection, we explore the optimal scaling factor c and momentum coefficient v for FSGDM.

4.2 EMPIRICAL EXPLORATION OF OPTIMAL SETTINGS FOR FSGDM

As discussed in Section 3, different choices of c and v can significantly affect the filtering characteristics of FSGDM. To understand their impact on optimization performance and to identify optimal parameter settings, we conduct a comprehensive empirical study.

Specifically, we empirically explore the optimal parameter selection of FSGDM across three different image classification tasks by first sweeping c and v within the ranges (0, 1) and [0.5, 3], respectively. We conduct three sets of experiments using the same codebase (see Appendix D for more training details): (1) training ResNet18 for 100 epochs on CIFAR-10, (2) training ResNet34 for 100 epochs on Tiny-ImageNet (Le & Yang, 2015), and (3) training ResNet50 for 300 epochs on CIFAR-100. We also explore the optimal parameter selection on one natural language processing task in Appendix C.7. By finding the parameter selections with better test performance across different tasks, we aim to empirically summarize the law of optimal parameter selection.

[Figure 4: three heatmaps of Top-1 test error over scaling factor c (0.01–1) and momentum coefficient v (0.5–3), for CIFAR10-ResNet18, Tiny-ImageNet-ResNet34, and CIFAR100-ResNet50, each with an "Optimal Zone" marked.]

Figure 4: The Top-1 test errors of training ResNet18 on CIFAR-10, ResNet34 on Tiny-ImageNet, and ResNet50 on CIFAR-100. The results show that the optimal parameter selections across these three training settings exhibit high similarity. The black points denote the parameter selections with better test performance.
The optimal zone of the parameter selection is circled in red.

The results in Figure 4 show that there exists an optimal zone where relatively better test accuracy can be achieved. When the momentum coefficient v is fixed, the test accuracy shows an initial increase followed by a decline as the scaling factor c increases. In Appendix C.8, we plot the magnitude responses and the test accuracy results of the black points in Figure 4 and find that these parameter selections have similar dynamic magnitude responses and test accuracy curves. Thus, we assume that parameter selections with similar dynamic magnitude responses will lead to close performance. More discussion is in Appendix C.8.

5 EXPERIMENTS

To verify the generalization ability of the proposed FSGDM, we perform a large-scale comparison across vision classification tasks, natural language processing (NLP) tasks, and reinforcement learning (RL) tasks. We compare the test performance of FSGDM against conventional SGD-based momentum optimizers, including Standard-SGDM and EMA-SGDM. We set u_t = 0.9, v_t = 1 for Standard-SGDM, and u_t = 0.9 for EMA-SGDM, which are the common momentum coefficient selections in training neural networks. For a fair comparison and convenience, we set c = 0.033, v = 1 for FSGDM, which is one of the black points in the optimal zone in Figure 4. Note that other combinations of c and v in the optimal zone could also be selected. For the other adjustable parameters in Algorithm 1, we set N to 300, as mentioned at the end of Section 2.1, and set Σ to the total number of training steps. Notably, since our focus is on comparing the performance of different optimizers, we do not fine-tune every parameter for each model but use the same hyperparameters across all models for convenience. See Appendix D for more experimental details.
Table 3: Performance on image classification experiments (Top-1 accuracy, %, mean ± standard error over 3 runs).

Dataset       | CIFAR-10              | CIFAR-100              | Tiny-ImageNet          | ImageNet
Model         | VGG16      ResNet18   | ResNet50   DenseNet121 | ResNet34   MobileNet   | ResNet50
EMA-SGDM      | 93.71±0.07 94.19±0.07 | 76.84±0.06 76.18±0.23  | 62.28±0.17 55.00±0.10  | 74.24±0.04
Standard-SGDM | 94.08±0.07 95.57±0.06 | 79.71±0.25 80.49±0.09  | 67.51±0.08 58.31±0.20  | 76.66±0.09
FSGDM         | 94.19±0.07 95.66±0.07 | 81.44±0.06 81.14±0.05  | 67.74±0.06 59.61±0.11  | 76.91±0.05

Image Classification. We perform four sets of experiments on different computer vision datasets and use various CNN architectures for training. Specifically, we select: (a) VGG16 and ResNet18 for CIFAR-10; (b) ResNet50 and DenseNet121 (Huang et al., 2017) for CIFAR-100; (c) ResNet34 and MobileNet (Howard, 2017) for Tiny-ImageNet; (d) ResNet50 for ILSVRC 2012 ImageNet (Russakovsky et al., 2015). For each task, we report the mean and standard error of test accuracy over 3 runs with random seeds 0–2. The results in Table 3 show that our FSGDM consistently achieves better test set performance. Additionally, we observe that Standard-SGDM steadily outperforms EMA-SGDM, which aligns with our findings in Section 3.3.

Natural Language Processing. We conduct experiments on the IWSLT14 German-English translation task (Cettolo et al., 2014), a widely used benchmark in the community, to represent NLP tasks. Specifically, we train six models encompassing a variety of architectures: two convolution-based models, FConv (Gehring et al., 2017) and LightConv (Wu et al., 2019); two LSTM-based models, vanilla LSTM (Hochreiter & Schmidhuber, 1997) and LSTM-W (Wiseman & Rush, 2016); and two Transformer-based models (Vaswani et al., 2017) of different sizes, Transformer-tiny and Transformer. Model performance is reported using BLEU scores, where higher scores indicate better performance, and we summarize all results in Table 4.
Compared with the baseline optimizers, FSGDM outperforms all others on this task across all six models. This shows the effectiveness of our optimizer in improving translation quality. Moreover, the consistent improvement highlights the robustness of FSGDM and its ability to generalize across different neural network architectures in natural language processing tasks.

Table 4: Performance (BLEU) on the IWSLT14 dataset.

Model           FConv       LightConv   LSTM        LSTM-W      Transformer-tiny  Transformer
EMA-SGDM        13.97±0.01  10.56±0.01  4.99±0.01   1.20±0.07   5.17±0.01         6.27±0.01
Standard-SGDM   27.41±0.02  33.05±0.04  28.12±0.06  24.66±0.06  18.16±0.03        31.50±0.05
FSGDM           28.30±0.01  33.44±0.02  29.27±0.02  27.41±0.03  19.94±0.07        32.40±0.05

Figure 5: The reward curves of EMA-SGDM, Standard-SGDM, and FSGDM on three MuJoCo tasks (Walker2d-v4, HalfCheetah-v4, and Ant-v4).

Reinforcement Learning. We evaluate FSGDM on PPO (Schulman et al., 2017), one of the most popular policy gradient methods in reinforcement learning. We replace the default Adam optimizer (Kingma & Ba, 2014) in PPO with FSGDM, Standard-SGDM, and EMA-SGDM. We test the three optimizers on Walker2d-v4, HalfCheetah-v4, and Ant-v4, continuous control environments simulated by the standard and widely used MuJoCo engine (Todorov et al., 2012). Following standard evaluation practice, we run each game under 10 random seeds (0-9) and test the performance for 10 episodes every 30,000 steps.
All experiments are conducted using the Tianshou codebase (Weng et al., 2022), a widely known RL framework. Figure 5 presents the results on the three tasks, where the solid line represents the average episode reward during evaluation and the shaded region indicates the 75% confidence interval. On all three test games, our FSGDM achieves higher rewards than Standard-SGDM and EMA-SGDM.

6 CONCLUSIONS

This paper proposes a frequency-domain analysis framework for the momentum method. Based on the proposed framework, we find that different selections of momentum coefficients correspond to different filter characteristics of the momentum methods, and that performance differs significantly under different time-variant momentum coefficients. Furthermore, we develop a heuristic optimizer named FSGDM that outperforms conventional SGD-based momentum optimizers on various learning tasks. Future work may explore the best filtering strategy for general scenarios and extend the frequency-domain analysis framework to other optimizers such as Adam.

ACKNOWLEDGMENTS

We would like to especially thank Prof. K. C. Ho for many enlightening discussions. The work of Xianliang Li and Sheng Xu was supported by the National Natural Science Foundation of China (62273327) and the Shenzhen Science and Technology Program (KCXFZ20211020165003005). The work of Linlong Wu was supported by Luxembourg FNR CORE METSA project C22/IS/17391632.

REFERENCES

N. S. Aybat, A. Fallah, M. Gurbuzbalaban, and A. Ozdaglar. A universally optimal multistage accelerated stochastic gradient method. Advances in Neural Information Processing Systems, 32, 2019.

M. Cettolo, J. Niehues, S. Stüker, L. Bentivogli, and M. Federico. Report on the 11th IWSLT evaluation campaign. In Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign, pp. 2–17, 2014.

X. Chen, S. Liu, R. Sun, and M. Hong.
On the convergence of a class of Adam-type algorithms for non-convex optimization. International Conference on Learning Representations, 2018.

A. Cutkosky and F. Orabona. Momentum-based variance reduction in non-convex SGD. Advances in Neural Information Processing Systems, 32, 2019.

E. S. Gardner Jr. Exponential smoothing: The state of the art. Journal of Forecasting, 4(1):1–28, 1985.

J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning, pp. 1243–1252. PMLR, 2017.

G. Goh. Why momentum really works. Distill, 2017. doi: 10.23915/distill.00006.

K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.

A. G. Howard. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.

F. Hubner and P. Tran-Gia. Quasi-stationary analysis of a finite capacity asynchronous multiplexer with modulated deterministic input. ITC-13, Copenhagen, 1991.

E. I. Jury. Theory and application of the z-transform method. Electronics and Power, 11(3):291–299, 1964.

R. Kidambi, P. Netrapalli, P. Jain, and S. Kakade. On the insufficiency of existing momentum schemes for stochastic optimization. In 2018 Information Theory and Applications Workshop (ITA), pp. 1–9. IEEE, 2018.

D. P. Kingma and J.
L. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2014.

A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.

A. Kulunchakov and J. Mairal. A generic acceleration framework for stochastic composite optimization. Advances in Neural Information Processing Systems, 32, 2019.

Y. Le and X. Yang. Tiny ImageNet visual recognition challenge. CS231N, 7(7):3, 2015.

X. Li, M. Liu, and F. Orabona. On the last iterate convergence of momentum methods. In International Conference on Algorithmic Learning Theory, pp. 699–717. PMLR, 2022.

T. Liu, Z. Chen, E. Zhou, and T. Zhao. A diffusion approximation theory of momentum SGD in nonconvex optimization. arXiv preprint arXiv:1802.05155, 2018.

Y. Liu, Y. Gao, and W. Yin. An improved analysis of stochastic gradient descent with momentum. In Advances in Neural Information Processing Systems, volume 33, pp. 18261–18271, 2020.

I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. International Conference on Learning Representations, 2016.

L. Luo, Y. Xiong, Y. Liu, and X. Sun. Adaptive gradient methods with dynamic bound of learning rate. International Conference on Learning Representations, 2019.

J. Ma and D. Yarats. Quasi-hyperbolic momentum and Adam for deep learning. In International Conference on Learning Representations, 2018.

V. Mai and M. Johansson. Convergence of a stochastic gradient method with momentum for non-smooth non-convex optimization. In International Conference on Machine Learning, pp. 6630–6639. PMLR, 2020.

Y. Nesterov. A method for solving the convex programming problem with convergence rate $O(1/k^2)$. In Dokl Akad Nauk SSSR, volume 269, pp. 543, 1983.

A. V. Oppenheim, A. S. Willsky, and S. H. Nawab. Signals & Systems (2nd ed.).
Prentice-Hall, Inc., USA, 1996. ISBN 0138147574.

A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.

B. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964. ISSN 0041-5553.

S. J. Reddi, S. Kale, and S. Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018.

H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407, 1951.

D. E. Rumelhart, G. Hinton, and R. J. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, eds. D. E. Rumelhart and J. McClelland, 1986.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115:211–252, 2015.

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

L. N. Smith. A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820, 2018.

S. Smith, E. Elsen, and S. De. On the generalization benefit of noise in stochastic gradient descent. In International Conference on Machine Learning, pp. 9058–9067. PMLR, 2020.

I. Sutskever, J. Martens, G. Dahl, and G. Hinton.
On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pp. 1139–1147. PMLR, 2013.

E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.

B. Van Scoy, R. A. Freeman, and K. M. Lynch. The fastest known globally convergent first-order method for minimizing strongly convex functions. IEEE Control Systems Letters, 2(1):49–54, 2017.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 2017.

R. Wang, S. Malladi, T. Wang, K. Lyu, and Z. Li. The marginal value of momentum for small learning rate SGD. In International Conference on Learning Representations, 2024.

J. Weng, H. Chen, D. Yan, K. You, A. Duburcq, M. Zhang, Y. Su, H. Su, and J. Zhu. Tianshou: A highly modularized deep reinforcement learning library. Journal of Machine Learning Research, 23(267):1–6, 2022.

S. Wiseman and A. M. Rush. Sequence-to-sequence learning as beam-search optimization. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016.

F. Wu, A. Fan, A. Baevski, Y. N. Dauphin, and M. Auli. Pay less attention with lightweight and dynamic convolutions. International Conference on Learning Representations, 2019.

Y. Yan, T. Yang, Z. Li, Q. Lin, and Y. Yang. A unified analysis of stochastic momentum methods for deep learning. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018.

H. Yu, R. Jin, and S. Yang. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. In International Conference on Machine Learning, pp. 7184–7193. PMLR, 2019.

L. A. Zadeh.
Frequency analysis of variable networks. Proceedings of the IRE, 38(3):291–299, 1950.

L. A. Zadeh. Time-varying networks, I. Proceedings of the IRE, 49(10):1488–1503, 1961.

A PHASE RESPONSE

The phase response of the momentum system in the k-th stage can be written as

$$\arg(H_k(\omega)) = \arg(v_k) - \tan^{-1}\frac{u_k\sin\omega}{1-u_k\cos\omega}, \qquad (11)$$

where $\arg(\cdot)$ is the argument operator. For any real value $v_k$, $\arg(v_k)=0$ if $v_k>0$ and $\arg(v_k)=\pi$ if $v_k<0$; for any $\omega\in[0,\pi]$ and $u_k\in(-1,1)$, $\tan^{-1}(u_k\sin\omega/(1-u_k\cos\omega))\in(-\frac{\pi}{2},\frac{\pi}{2})$. The phase response describes the phase-shifting effect of the momentum system at different frequencies. In the context of gradient-based optimization, the phase shift indicates a change in the optimization direction. Therefore, when $v_k<0$, the phase shift of the momentum adds an extra $\pi$ rad to the shifted direction, meaning that the direction of the update is largely reversed, which can lead to oscillations, instability, or divergence in the optimization process. Thus, it is necessary to select a positive $v_k$ when applying momentum methods.

B ADDITIONAL DERIVATIONS AND PROOF

B.1 DERIVATION OF EQUATION 8

$$\begin{aligned}
|H_k(\omega)| &= \sqrt{H_k(\omega)H_k^{\dagger}(\omega)}
= \sqrt{\frac{v_k}{1-u_k e^{-j\omega}}\cdot\frac{v_k}{1-u_k e^{j\omega}}}\\
&= \sqrt{\frac{v_k^2}{1-u_k e^{-j\omega}-u_k e^{j\omega}+u_k^2 e^{-j\omega}e^{j\omega}}}\\
&= \sqrt{\frac{v_k^2}{1-u_k(\cos\omega-j\sin\omega)-u_k(\cos\omega+j\sin\omega)+u_k^2(\cos^2\omega+\sin^2\omega)}}\\
&= \sqrt{\frac{v_k^2}{1-2u_k\cos\omega+u_k^2}}
= \frac{|v_k|}{\sqrt{1-2u_k\cos\omega+u_k^2}}
\end{aligned}$$

B.2 DERIVATION OF EQUATION 11

$$\begin{aligned}
\arg(H_k(\omega)) &= \arg(v_k)-\arg(1-u_k e^{-j\omega})\\
&= \arg(v_k)-\arg\big((1-u_k\cos\omega)+j(u_k\sin\omega)\big)\\
&= \arg(v_k)-\tan^{-1}\frac{u_k\sin\omega}{1-u_k\cos\omega}
\end{aligned}$$

B.3 PROOF OF PROPOSITION 1

According to Algorithm 1, the momentum coefficient in the k-th stage ($k=1,2,\dots,N$) is

$$u_k = \frac{(k-1)\delta}{(k-1)\delta+\mu} = \frac{(k-1)\delta}{(k-1)\delta+c\Sigma} = \frac{(k-1)\delta}{(k-1)\delta+cN\delta} = \frac{k-1}{k-1+cN}. \qquad (12)$$

This guarantees that the total number of training steps, which may differ when choosing other training strategies or changing datasets, does not affect $u_k$ once the scaling factor $c$ and the number of stages $N$ are determined.
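The two closed forms above can be checked numerically. This is a quick sketch, assuming the stage transfer function $H_k(\omega) = v_k/(1-u_k e^{-j\omega})$ with a positive $v_k$ (so $\arg(v_k)=0$):

```python
import cmath
import math

# Numerical check of Equations 8 and 11, assuming the stage transfer function
# H_k(omega) = v_k / (1 - u_k * exp(-j * omega)) with v_k > 0.
def H(omega, u, v):
    return v / (1 - u * cmath.exp(-1j * omega))

u, v = 0.9, 1.0
for omega in (0.1, 1.0, 3.0):
    direct_mag = abs(H(omega, u, v))
    closed_mag = abs(v) / math.sqrt(1 - 2 * u * math.cos(omega) + u ** 2)
    direct_phase = cmath.phase(H(omega, u, v))
    closed_phase = -math.atan2(u * math.sin(omega), 1 - u * math.cos(omega))
    assert abs(direct_mag - closed_mag) < 1e-12   # Equation 8
    assert abs(direct_phase - closed_phase) < 1e-12  # Equation 11
```

Direct evaluation of the complex transfer function agrees with both closed forms to machine precision.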
C ADDITIONAL EXPERIMENTS

In this section, we present several supplementary experiments. The detailed experimental settings are given in Appendix D.

C.1 DYNAMIC SEQUENCE CONSTRUCTION

There are infinitely many increasing or decreasing sequences. In this part, we compare the test set performance of the sequence in Equation 4 with four other dynamic increasing sequences within Algorithm 1:

Linear: $u(t) = a_1 t$;
Exponential: $u(t) = 1 - e^{-a_2 t}$;
Sine: $u(t) = \sin(a_3 t)$;
Logarithmic: $u(t) = \ln(a_4 t)$;

where $a_1$ to $a_4$ are scaling coefficients. For a fair comparison, we adjust these coefficients so that the $u_t$ of all sequences is kept nearly identical in the beginning and ending stages. Table 5 displays the test accuracy after 300 epochs of training ResNet50 on CIFAR-100. We ran each experiment under 3 different random seeds (0, 1, 2). Clearly, the dynamic sequence we use in Equation 4 shows its superiority over the other constructions.

Table 5: Top-1 accuracy (%) comparison of the linear, exponential, sine, logarithmic, and our sequences when adopting FSGDM.

Dynamic Sequence Type   Ours        Linear      Exponential  Sine        Logarithmic
ACC-1 (%)               81.44±0.06  78.24±0.24  80.38±0.04   78.76±0.29  78.70±0.09

Specifically, $(a_1, a_2, a_3, a_4) = (8.271\times10^{-6}, 3.793\times10^{-5}, 1.125\times10^{-5}, 1.394\times10^{-5})$.

C.2 ADDITIONAL FIGURES OF HIGH-PASS MOMENTUM SYSTEMS ON CIFAR-100

This subsection provides the figures of the dynamic magnitude responses and norms of high-pass (gain) momentum systems mentioned in Section 3. Figure 6 and Figure 7 show the magnitude responses and norm comparisons of the high-pass and high-pass gain momentum systems, respectively.
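The four alternative sequences can be written out directly. A minimal sketch with the reported coefficients follows; the total step count of roughly 117k (300 epochs of CIFAR-100 at batch size 128) is our estimate, not stated in the paper, and how out-of-range values would be clipped is likewise our assumption:

```python
import math

# Sketch of the four alternative increasing sequences compared in C.1, using
# the scaling coefficients (a1, a2, a3, a4) reported under Table 5.
a1, a2, a3, a4 = 8.271e-6, 3.793e-5, 1.125e-5, 1.394e-5

def linear(t):      return a1 * t
def exponential(t): return 1 - math.exp(-a2 * t)
def sine(t):        return math.sin(a3 * t)
def logarithmic(t): return math.log(a4 * t)  # negative until t > 1/a4

# With roughly 117k total steps (our estimate), the first three sequences
# end near u ~ 0.97-0.99, comparable to a typical static momentum of 0.9:
T = 117_000
print(round(linear(T), 2), round(exponential(T), 2), round(sine(T), 2))
```

All three start near 0 and increase monotonically toward their end value over the training horizon, matching the "unchanged in the beginning and ending stages" constraint described above.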
The high-pass (gain) momentum systems preserve or even amplify rapidly fluctuating gradient components, leading to sharp oscillations in the gradient norm and momentum norm curves across iterations.

C.3 ADDITIONAL EXPERIMENTS OF VGG16 ON CIFAR-10

In this subsection, we provide experiments of training VGG16 on CIFAR-10. The experimental settings follow Section 3 and Appendix D. From the test accuracy in Table 6 and Table 7, we observe that the test performance and norm comparisons of the different momentum methods when training VGG16 on CIFAR-10 are similar to those when training ResNet50 on CIFAR-100. This similarity implies that the empirical findings in Section 3 apply to various CNNs.

Table 6: Comparison of top-1 accuracy (%) among different momentum coefficient methods in orthodox momentum systems using VGG16 on CIFAR-10.

                Increasing Factor (µ)                Fixed Value (u_t)                    Decreasing Factor (ν)
Parameters      1k          10k         100k         0.3         0.6         0.9          100         1k          10k
Low-Pass        93.80±0.05  93.78±0.12  93.79±0.09   93.68±0.18  93.64±0.08  93.71±0.07   92.33±0.04  90.89±0.11  90.56±0.19
High-Pass       90.02±0.05  92.64±0.09  93.41±0.01   93.52±0.16  92.71±0.07  90.32±0.07   93.86±0.09  93.73±0.08  93.38±0.09

[Figure 6, panels (a)-(f): dynamic magnitude responses $|H(\omega)|$ and gradient/momentum norm comparisons.]
Figure 6: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for EMA-SGDM with high-pass momentum defined in Equation 9. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for EMA-SGDM with high-pass momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.

Table 7: Comparison of top-1 accuracy (%) among different momentum coefficient methods in unorthodox momentum systems using VGG16 on CIFAR-10.

                Increasing Factor (µ)                Fixed Value (u_t)                    Decreasing Factor (ν)
Parameters      1k          10k         100k         0.3         0.6         0.9          100         1k          10k
Low-Pass Gain   84.01±0.13  94.19±0.07  93.85±0.07   93.86±0.11  93.98±0.09  94.08±0.07   92.00±0.05  92.27±0.12  92.97±0.23
High-Pass Gain  93.34±0.03  93.56±0.06  93.79±0.13   93.71±0.11  93.46±0.06  93.33±0.02   93.79±0.07  93.33±0.12  93.05±0.08

C.4 THE EARLY STAGES OF TRAINING

This subsection focuses on how the momentum coefficients affect test performance in the very early training stages. We plot the test accuracy curves for the first 10 epochs of the different momentum systems in Section 3 and study their early behaviors. Figure 8 demonstrates the early test accuracy curves of the different momentum coefficient methods.
For orthodox momentum systems, preserving the original gradient (i.e., the all-pass momentum system, the low-pass momentum system with an increasing u_t, and the high-pass momentum system with an increasing u_t) or attenuating high-frequency gradient components (i.e., the static low-pass momentum system with u_t = 0.9) results in better initial performance, while greatly attenuating high-frequency gradient components (i.e., the low-pass momentum system with a decreasing u_t) or attenuating low-frequency components (i.e., the static high-pass momentum system and the high-pass momentum system with a decreasing u_t) leads to poor test performance at the beginning. On the other hand, for unorthodox momentum systems, preserving the original gradient (i.e., the all-pass momentum system, the low-pass gain momentum system with an increasing u_t, and the high-pass gain momentum system with an increasing u_t) achieves better early performance, while greatly amplifying high-frequency gradient components (i.e., the static high-pass gain momentum system and the high-pass gain momentum system with a decreasing u_t) leads to poor initial accuracy results.
Figure 7: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for Standard-SGDM with high-pass gain momentum defined in Equation 10. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for Standard-SGDM with high-pass gain momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.
Figure 8: The first 10 epochs of the test accuracy curves with different momentum coefficient methods: (a) orthodox momentum systems; (b) unorthodox momentum systems. We choose $10^4$ for both the increasing and decreasing factors (µ and ν) in the dynamic momentum systems and u_t = 0.9 for the static momentum coefficient.

These observations validate that preserving the original gradient in the early stages enhances test performance, which matches the findings in Section 3. Additionally, our proposed FSGDM retains the all-pass characteristic and possesses the same quick-start property in the test accuracy curves.

Figure 9: The magnitude response curves of Stages 1, 150, and 300 in different momentum systems: (a) LP2HP; (b) HP2LP; (c) LPG2HPG; (d) HPG2LPG.

C.5 COMPARISON WITH SPECIAL MOMENTUM SYSTEMS

In this subsection, we investigate the test performance of the following four types of momentum systems: 1) the low-pass to high-pass momentum system (LP2HP); 2) the high-pass to low-pass momentum system (HP2LP); 3) the low-pass gain to high-pass gain momentum system (LPG2HPG); 4) the high-pass gain to low-pass gain momentum system (HPG2LPG). Their dynamic magnitude responses are shown in Figure 9.
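The low-pass to high-pass shift can be sketched numerically. This is an illustration only, assuming the closed form of Equation 8 and that sweeping the stage coefficient $u_k$ from positive to negative turns a low-pass characteristic into a high-pass one; the specific values are ours, not the paper's:

```python
import math

# Illustrative LP2HP-style sweep: u_k > 0 gives a low-pass characteristic
# and u_k < 0 a high-pass one, per |H(omega)| = |v| / sqrt(1 - 2u cos(omega) + u^2).
def magnitude(omega, u, v=0.1):
    return abs(v) / math.sqrt(1 - 2 * u * math.cos(omega) + u ** 2)

for u in (0.9, 0.0, -0.9):  # e.g. early, middle, and late stages (assumption)
    low_freq, high_freq = magnitude(0.0, u), magnitude(math.pi, u)
    print(round(low_freq, 2), round(high_freq, 2))
```

The gain at $\omega=0$ dominates for $u=0.9$ and the gain at $\omega=\pi$ dominates for $u=-0.9$, with an all-pass response in between, mirroring the stage-wise curves sketched in Figure 9.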
Note that the maximum values $|H(\omega)|$ of these four systems are the same as in the default setting of FSGDM. We run each experiment under 3 different random seeds (0-2). Table 8 displays the test accuracy of the four types of momentum systems and FSGDM. Our proposed FSGDM outperforms all four special momentum systems. Specifically, the test accuracy of the momentum systems shifting from high-pass to low-pass is better than that of those shifting from low-pass to high-pass. This indicates that, compared to low-frequency gradient components, high-frequency components are more undesirable in the late training stages, which supports the finding in Section 3.

Table 8: Comparison of top-1 accuracy (%) among the LP2HP, HP2LP, LPG2HPG, and HPG2LPG momentum systems and FSGDM.

Dynamic Magnitude Response  FSGDM       LP2HP       HP2LP       LPG2HPG     HPG2LPG
ACC-1 (%)                   81.44±0.06  74.77±0.21  77.00±0.13  72.60±0.58  78.91±0.25

C.6 TRAINING WITH EXTREME MOMENTUM COEFFICIENTS

Figure 10: The magnitude responses of different u_t and v_t with extreme value ranges. (a) Extreme u_t in the EMA setting; (b) extreme u_t with v_t = 1; (c) extreme v_t with u_t = 0.9. (a): EMA-SGDM; (b), (c): Standard-SGDM.

Why do researchers usually choose u_t = 0.9 or v_t = 1 instead of larger values?
From the frequency-domain perspective, we discover that: 1) when u_t is extremely close to 1 in EMA-SGDM, the momentum system behaves like a very narrow low-pass filter, with an extreme reduction in most of the high-frequency gradient components; 2) when u_t is extremely close to 1 in Standard-SGDM, the momentum system behaves like a very narrow low-pass gain filter, with a reduction in high-frequency gradient components and high amplification in a narrow band of low-frequency gradient components; 3) when v_t is larger than 1 in Standard-SGDM, the attenuation of high-frequency gradient components is reduced. We speculate that all of these poor filtering characteristics lead to bad test performance. Figure 10 displays the magnitude responses of these three situations. As shown in Figure 11, the test performance results validate our speculations and support our frequency-domain analysis framework.

Figure 11: The test accuracy curves of different u_t and v_t in extreme value ranges. (a) EMA-SGDM with extreme u_t: 76.63%±0.06 (u_t = 0.9, v_t = 0.1), 75.91%±0.34 (0.93, 0.07), 75.63%±0.30 (0.96, 0.04), 73.09%±0.41 (0.99, 0.01), 63.87%±0.02 (0.999, 0.001). (b) Standard-SGDM with extreme u_t and v_t = 1: 79.70%±0.25 (u_t = 0.9), 79.37%±0.37 (0.93), 72.84%±1.82 (0.96), 28.27%±1.75 (0.99). (c) Standard-SGDM with extreme v_t and u_t = 0.9: 79.70%±0.25 (v_t = 1), 78.17%±0.51 (v_t = 2), 75.33%±1.45 (v_t = 3), 72.08%±1.45 (v_t = 4).
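The first two observations follow directly from the DC gain of the filter, $|H(0)| = |v|/(1-u)$ for $u<1$. A quick numerical sketch, assuming the closed form of Equation 8:

```python
import math

# DC gain and passband narrowing of the momentum filter, using the closed
# form |H(omega)| = |v| / sqrt(1 - 2u cos(omega) + u^2) from Equation 8.
def magnitude(omega, u, v):
    return abs(v) / math.sqrt(1 - 2 * u * math.cos(omega) + u ** 2)

# Standard-SGDM (v = 1): pushing u toward 1 blows up the low-frequency gain.
for u in (0.9, 0.99):
    print(u, round(magnitude(0.0, u, 1.0), 1))

# EMA-SGDM (v = 1 - u): |H(0)| stays at 1, but the passband narrows sharply,
# so even moderate frequencies are strongly attenuated.
print(round(magnitude(0.0, 0.99, 0.01), 1), round(magnitude(0.5, 0.99, 0.01), 3))
```

With $v=1$, the DC gain jumps from 10 ($u=0.9$) to 100 ($u=0.99$), matching the "high amplification in a narrow band" description; with $v=1-u$, the gain at $\omega=0$ is exactly 1 while the response at $\omega=0.5$ is already tiny.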
C.7 ADDITIONAL EXPLORATION OF OPTIMAL SETTINGS FOR NLP TASKS

In this subsection, we provide experiments that explore the optimal parameter selection of FSGDM for the IWSLT14 translation task by training LSTM-W and Transformer-tiny. The experimental settings follow Section 5 and Appendix D.

Figure 12: The BLEU scores of training LSTM-W and Transformer-tiny on the IWSLT14 German-English translation task, as a function of the scaling factor c and the momentum coefficient v. The optimal parameter selections across these two training settings exhibit a high similarity. The black points denote the parameter selections with better test performance. The optimal zone of the parameter selection is circled in blue.

The results in Figure 12 indicate that similar optimal zones can be observed on the NLP task. When the momentum coefficient v is fixed, the BLEU score first increases and then declines as the scaling factor c increases, which is highly consistent with the results in Section 4.2. In addition, we find that the empirical insights discussed in Section 3.3 also apply to deep learning models beyond CNNs, as well as to NLP tasks.

C.8 OPTIMAL ZONE OF FSGDM

In this subsection, we go deeper into the optimal zone. We suspect that the similarity of the dynamic magnitude responses may lead to close test set performance. The dynamic magnitude responses of the black points with different parameters in the optimal zone (Figure 4) are shown in Figure 13. We train ResNet50 on CIFAR-100 and visualize the training losses and test accuracy curves of the different points in the optimal zone. The results are shown in Figure 14.
Figure 13: The dynamic magnitude responses of the black points in the optimal zone: (a) c = 0.016, v = 0.5; (b) c = 0.033, v = 1.0; (c) c = 0.051, v = 1.5; (d) c = 0.069, v = 2.0; (e) c = 0.088, v = 2.5; (f) c = 0.107, v = 3.0.

Figure 14: The training losses and test accuracy of different parameter settings in the optimal zone. Final test accuracies: 81.20% (c = 0.107, v = 3.0), 80.98% (c = 0.088, v = 2.5), 81.21% (c = 0.069, v = 2.0), 81.70% (c = 0.051, v = 1.5), 81.54% (c = 0.033, v = 1.0), and 81.35% (c = 0.016, v = 0.5).

From the training loss and test accuracy curves, we find that the optimization processes of the different black points in the optimal zone resemble each other. According to the existing parameter settings of the black points, one can find that the mathematical relationship between c and v when training ResNet50 on CIFAR-100 is approximately $\frac{30.992}{v}\approx 1+\frac{1}{c}$.⁴
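Assuming the relationship reads $30.992/v \approx 1 + 1/c$ (our reading of the expression above), it can be checked arithmetically against the reported black points:

```python
# Check that the black points in the optimal zone approximately satisfy
# 30.992 / v ~= 1 + 1/c (our reading of the relation stated in C.8).
black_points = [(0.016, 0.5), (0.033, 1.0), (0.051, 1.5),
                (0.069, 2.0), (0.088, 2.5), (0.107, 3.0)]
for c, v in black_points:
    lhs, rhs = 30.992 / v, 1 + 1 / c
    assert abs(lhs - rhs) / rhs < 0.03  # the two sides agree to within ~3%
```

Equivalently, $v(1 + 1/c)$ is roughly constant (about 31) across the optimal zone, which is consistent with the points sharing a similar maximum magnitude response.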
C.9 ABLATION STUDY ON DIFFERENT BATCH SIZES

This subsection provides ResNet50 training experiments on CIFAR-100 with different batch size settings. We compare the top-1 test accuracy of our FSGDM with c = 0.033, v = 1, Standard-SGDM with u_t = 0.9, v_t = 1, and EMA-SGDM with u_t = 0.9, as shown in Table 9. The test results show that our FSGDM consistently outperforms the popular conventional SGD-based momentum optimizers.

⁴ This relationship can be better approximated and generalized with continued experimentation across diverse tasks.

Table 9: Comparison of top-1 accuracy (%) among FSGDM, Standard-SGDM, and EMA-SGDM with different batch size settings.

Batch size      64          128         256
EMA-SGDM        79.42±0.11  76.84±0.06  69.03±0.39
Standard-SGDM   79.55±0.13  79.71±0.25  78.96±0.33
FSGDM           80.92±0.13  81.44±0.06  80.34±0.01

D EXPERIMENTAL SETTINGS

D.1 TRAINING SETTINGS FOR VISION CLASSIFICATION TASKS

We use custom training code based on the PyTorch tutorial code for all our visual classification experiments (including the experiments in Section 3, Section 4.2, and Section 5). We choose CosineAnnealingLR (Loshchilov & Hutter, 2016) as our learning rate scheduler. Additionally, we set the learning rate to 1×10⁻¹ for all experiments, while the weight decay is set to 5×10⁻⁴ for experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, and 1×10⁻¹ for ImageNet. All models follow their papers' original architectures and adopt the weight initialization introduced by He et al. (2015). Additionally, we train for 300 epochs on CIFAR-10 and CIFAR-100 and for 100 epochs on Tiny-ImageNet and ImageNet. We use a batch size of 128 for experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, and 256 for ImageNet. All experiments are conducted on RTX 4090 or A100 GPUs.

Data Augmentation. For experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, we adopt PyTorch's RandomCrop, followed by random horizontal flips.
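The learning rate schedule described above follows the standard cosine annealing formula. A plain-Python sketch (not the paper's code) with the stated base learning rate of 0.1 and a 300-epoch CIFAR run:

```python
import math

# Plain-Python sketch of the cosine-annealed learning rate used in D.1
# (the CosineAnnealingLR schedule), shown for lr = 0.1 over 300 epochs.
def cosine_lr(epoch, base_lr=0.1, total_epochs=300, eta_min=0.0):
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / total_epochs))

print(cosine_lr(0))              # 0.1 at the start
print(round(cosine_lr(150), 3))  # 0.05 at mid-training
print(round(cosine_lr(300), 3))  # 0.0 at the end
```

The schedule decays smoothly from the base learning rate to the minimum over the full training horizon, independent of the optimizer being compared.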
Specifically, the random crop size is set to 32×32 for CIFAR-10 and CIFAR-100, and to 64×64 for Tiny-ImageNet. For experiments on ImageNet, we adopt PyTorch's RandomResizedCrop, cropping to 224×224, followed by random horizontal flips. Test images use a fixed resize to 256×256 followed by a center crop to 224×224. Finally, data normalization is applied to the input images.

D.2 TRAINING SETTINGS FOR NATURAL LANGUAGE PROCESSING TASKS

All models used in our experiments are directly adopted from the FairSeq^5 framework. We retain the original architecture of each model and train all models for 100 epochs using a single NVIDIA RTX 4090 GPU. We set the maximum batch size to 4,096 tokens and apply gradient clipping with a threshold of 0.1. The baseline learning rate is set to 0.25, and for the optimizer, we use a weight decay of 0.0001.

D.3 TRAINING SETTINGS FOR REINFORCEMENT LEARNING TASKS

For the experiments in RL tasks, we do not make any changes except for replacing the original Adam optimizer with Standard-SGDM, EMA-SGDM, and our proposed FSGDM. To ensure fairness, we use Tianshou's (Weng et al., 2022) default hyperparameters for PPO training. However, since SGD-based optimizers are highly sensitive to the learning rate, we searched for suitable learning rates across the three games, ultimately setting 10^−2, 10^−2, and 10^−3 for Walker2d-v4, HalfCheetah-v4, and Ant-v4, respectively.

^5 https://github.com/facebookresearch/fairseq

E CHALLENGES IN THE FREQUENCY DOMAIN ANALYSIS FOR ADAPTIVE OPTIMIZERS

Algorithm 2: RMSprop
  Input: β2, ε, v_0
  for each t = 1, 2, … do
    g_t = ∇L_t(x_{t−1}, ζ_{t−1})
    v_t = β2 v_{t−1} + (1 − β2) g_t^2
    x_t = x_{t−1} − α_t g_t / (sqrt(v_t) + ε)
  end

Algorithm 3: Adam
  Input: β1, β2, ε, m_0, v_0
  for each t = 1, 2, … do
    g_t = ∇L_t(x_{t−1}, ζ_{t−1})
    m_t = β1 m_{t−1} + (1 − β1) g_t
    v_t = β2 v_{t−1} + (1 − β2) g_t^2
    m̂_t = m_t / (1 − β1^t),  v̂_t = v_t / (1 − β2^t)
    x_t = x_{t−1} − α_t m̂_t / (sqrt(v̂_t) + ε)
  end

In this section, we discuss the potential challenges of extending the frequency domain analysis framework to adaptive optimizers like RMSprop and Adam, as shown in Algorithms 2 and 3. The first-moment estimate of Adam is in the form of an EMA and thus acts as a low-pass filter. However, the second-moment estimate presents additional obstacles for frequency domain analysis in the following ways:

1. The second-moment estimates of Adam and RMSprop involve the squared gradient term g_t^2, resulting in a nonlinearity that complicates the direct application of the Z-transform.
2. Adam introduces both first- and second-moment estimates (m_t and v_t), and adopts m̂_t / (sqrt(v̂_t) + ε) as the update step. This intricate interaction between m_t and v_t also makes the analysis more challenging.

At this stage, we believe that our argument regarding the three insights discussed in Section 3.3 is also applicable to other optimizers. However, it remains unclear how the different frequency gradient components in the model parameter updates are processed by the Adam optimizer. We anticipate that resolving these issues will provide deeper insight. | 8 | 1 | The paper involves training two models: ResNet50 on CIFAR-100 and VGG16 on CIFAR-10. ResNet50 has approximately 25.6 million parameters, while VGG16 has around 138 million parameters. Both datasets are relatively small (60,000 images each for CIFAR-10 and CIFAR-100) and are often used in deep learning benchmarks. Training such models on these datasets typically requires a total training time of around 8 hours on a modern GPU for standard training procedures such as SGD with momentum. 
Given that the paper does not mention any complex modifications to the training pipeline or significantly larger batch sizes, it can be estimated that a single modern GPU could feasibly train these models within the 8-hour window. | yes | Yes | CV | On the Performance Analysis of Momentum Method: A Frequency Domain Perspective | 2024-11-29T00:00:00.000Z | [https://github.com/yinleung/FSGDM] | 1 | training dataset/example found at: [https://github.com/yinleung/FSGDM/tree/main/examples/CIFAR100] | 10 | https://colab.research.google.com/drive/1rYHru1icUH3Yj4kvEvVuriMhdqM--kCS?usp=sharing | YES, Successfully Run! | However, the code needs minor changes and optimization, and training on an example takes a long time. |
CIFAR-10 | ResNet18 (FSGDM) | [] | On the Performance Analysis of Momentum Method: A Frequency Domain Perspective | 2024-11-29T00:00:00 | https://arxiv.org/abs/2411.19671v6 | [
"https://github.com/yinleung/FSGDM"
] | {'Percentage correct': '95.66'} | [
"Percentage correct",
"Top-1 Accuracy",
"Accuracy",
"Parameters",
"Top 1 Accuracy",
"F1",
"Cross Entropy Loss"
] | Given the following paper and codebase:
Paper: On the Performance Analysis of Momentum Method: A Frequency Domain Perspective
Codebase: https://github.com/yinleung/FSGDM
Improve the ResNet18 (FSGDM) model on the CIFAR-10 dataset. The result
should improve on the following metrics: {'Percentage correct': '95.66'}. You must use only the codebase provided.
Published as a conference paper at ICLR 2025

ON THE PERFORMANCE ANALYSIS OF MOMENTUM METHOD: A FREQUENCY DOMAIN PERSPECTIVE

Xianliang Li*1,2, Jun Luo*1,2, Zhiwei Zheng*3, Hanxiao Wang2,4, Li Luo5, Lingkun Wen2,6, Linlong Wu7, Sheng Xu†1
1 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences; 3 University of California, Berkeley; 4 Institute of Automation, Chinese Academy of Sciences; 5 Sun Yat-sen University; 6 Shanghai Astronomical Observatory, Chinese Academy of Sciences; 7 University of Luxembourg
yinleung.ley@gmail.com, {j.luo3,sheng.xu}@siat.ac.cn, zhiwei.zheng@berkeley.edu, wanghanxiao18@mails.ucas.ac.cn, luoli33@mail2.sysu.edu.cn, wenlingkun@shao.ac.cn, linlong.wu@uni.lu

ABSTRACT

Momentum-based optimizers are widely adopted for training neural networks. However, the optimal selection of momentum coefficients remains elusive. This uncertainty impedes a clear understanding of the role of momentum in stochastic gradient methods. In this paper, we present a frequency domain analysis framework that interprets the momentum method as a time-variant filter for gradients, where adjustments to momentum coefficients modify the filter characteristics. Our experiments support this perspective and provide a deeper understanding of the mechanism involved. Moreover, our analysis reveals the following significant findings: high-frequency gradient components are undesired in the late stages of training; preserving the original gradient in the early stages, and gradually amplifying low-frequency gradient components during training, both enhance performance. Based on these insights, we propose Frequency Stochastic Gradient Descent with Momentum (FSGDM), a heuristic optimizer that dynamically adjusts the momentum filtering characteristic with an empirically effective dynamic magnitude response.
Experimental results demonstrate the superiority of FSGDM over conventional momentum optimizers.^1

1 INTRODUCTION

Momentum has achieved great success in deep learning applications when combined with Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951). Among various momentum methods (Polyak, 1964; Nesterov, 1983; Van Scoy et al., 2017; Ma & Yarats, 2018; Kidambi et al., 2018), one of the most prevalent variants is the momentum method utilized within Stochastic Gradient Descent with Momentum (SGDM) (Sutskever et al., 2013; Paszke et al., 2019), which can be expressed as:

Standard-SGDM (decoupled): m_t = u_t m_{t−1} + v_t g_t,  x_t = x_{t−1} − α_t m_t,  (1)

where g_t denotes the gradient at iteration t, m_t is the momentum buffer, and x_t represents the learnable parameters. The momentum coefficients u_t and v_t control the influence of the previous momentum and the current gradient, respectively, and α_t is the learning rate. For these time-variant momentum coefficients, a multistage setting has been commonly adopted in the machine learning community (Aybat et al., 2019; Kulunchakov & Mairal, 2019; Liu et al., 2020). Throughout this paper, we refer to this formulation, which decouples the two momentum coefficients, as Standard-SGDM. In contrast, another prevalent variant couples the two momentum coefficients using the Exponential Moving Average (EMA) method (Gardner Jr, 1985), leading to the formulation of EMA-SGDM:

EMA-SGDM (coupled): m_t = u_t m_{t−1} + (1 − u_t) g_t,  x_t = x_{t−1} − α_t m_t,  (2)

*: Equal contribution. †: Corresponding author.
^1 Our implementation of FSGDM is available at https://github.com/yinleung/FSGDM.
arXiv:2411.19671v6 [cs.LG] 21 May 2025

where u_t ∈ [0, 1) is the momentum coefficient. Notably, this coupled momentum formulation is a special case of the decoupled one, i.e., Standard-SGDM with v_t = 1 − u_t. Our experiments show performance gaps between these two formulations.
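The relationship between Equations 1 and 2 can be made concrete with a scalar sketch (our own toy example, not from the paper's codebase): running Standard-SGDM with v_t = 1 − u_t reproduces EMA-SGDM exactly on a toy quadratic loss L(x) = 0.5·x².

```python
def standard_sgdm_step(x, m, g, u, v, lr):
    """Decoupled update (Eq. 1): m_t = u*m + v*g, x_t = x - lr*m_t."""
    m = u * m + v * g
    return x - lr * m, m

def ema_sgdm_step(x, m, g, u, lr):
    """Coupled update (Eq. 2): m_t = u*m + (1-u)*g."""
    m = u * m + (1.0 - u) * g
    return x - lr * m, m

def run(step, x0=5.0, steps=50):
    """Optimize L(x) = 0.5*x**2 (so grad = x) and record the trajectory."""
    x, m, traj = x0, 0.0, []
    for _ in range(steps):
        g = x
        x, m = step(x, m, g)
        traj.append(x)
    return traj

u, lr = 0.9, 0.1
v = 1.0 - u  # the coupling that makes the two formulations coincide
t1 = run(lambda x, m, g: standard_sgdm_step(x, m, g, u, v, lr))
t2 = run(lambda x, m, g: ema_sgdm_step(x, m, g, u, lr))
assert t1 == t2          # identical trajectories
assert abs(t1[-1]) < 5.0  # and both make progress on the quadratic
```

Setting v = 1 instead (the PyTorch-style default) changes the effective step size by the factor 1/(1 − u), which is exactly the "gain" behavior the paper analyzes in Section 3.2.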
Moreover, how the momentum coefficients change over time can significantly affect the test accuracy (see Section 3). The existence of these two distinct momentum formulations and their differing performances raise two primary questions in modern deep learning:

1. Decoupling vs. Coupling: Should the coefficients u_t and v_t be decoupled or coupled?
2. Temporal Variation: How should the momentum coefficients evolve over time during training to achieve better model performance?

For Question 1, some literature has investigated the convergence of the coupled method (Mai & Johansson, 2020; Li et al., 2022). Liu et al. (2020) argued that coupling the coefficients leads only to a constant scaling difference. Wang et al. (2024) further demonstrated that mathematical equivalence between EMA-SGDM and Standard-SGDM can be achieved by adjusting the momentum coefficients and the learning rates in a coupled way. However, in practice, learning rate schedules are typically independent of momentum coefficient tuning during network training. On the other hand, popular frameworks like PyTorch (Paszke et al., 2019) adopt a decoupled momentum strategy by default. In our framework, we tackle the first question from the frequency domain perspective, revealing the relationship between the coupled and decoupled constructions.

Regarding Question 2, prior research offered diverse opinions on how the momentum coefficients should vary over time. Some studies preferred fixed decoupled momentum coefficients (Yan et al., 2018; Liu et al., 2018; Yu et al., 2019), commonly selecting a u_t value of 0.9 and a v_t value of 1. Liu et al. (2020) highlighted the benefits of stagewise learning rate schedules in EMA-SGDM, noting that u_t can either remain constant or increase along with the stagewise adjustments. Conversely, Smith (2018) demonstrated that decreasing the momentum coefficients while increasing the learning rate improves test performance.
Moreover, adaptive momentum methods (Kingma & Ba, 2014; Reddi et al., 2018; Luo et al., 2019; Chen et al., 2018) proved the convergence of decreasing coupled momentum coefficients in the context of online convex optimization. Nonetheless, a consensus regarding the optimal time-variant pattern of the momentum coefficients has yet to be reached.

To answer these questions, one has to understand how the momentum method affects the training process. Goh (2017) analyzed the momentum method from the aspect of convergence and dynamics. Several prior studies (Cutkosky & Orabona, 2019; Ma & Yarats, 2018) speculated that averaging past stochastic gradients through momentum might reduce the variance of the noise in the parameter update, thus making the loss decrease faster. Polyak (1964); Rumelhart et al. (1986) argued that the EMA momentum can cancel out oscillations along high-curvature directions and add up contributions along low-curvature directions. From the signal processing perspective, the EMA method acts as a discrete low-pass filter, smoothing out high-frequency fluctuations while retaining the low-frequency baseband pattern of the signal (Gardner Jr, 1985). These points of view bring us a new insight into connecting the momentum update processes with specific filters. In this aspect, the momentum methods with different coefficient selections can be interpreted in a unified frequency domain analysis framework, whereby Questions 1 and 2 are resolved.

In this paper, we propose a novel frequency domain analysis framework to address the two questions and provide a deeper understanding of the role of momentum in stochastic optimization. To the best of our knowledge, this paper, for the first time, reveals the fundamental difference between Standard-SGDM and EMA-SGDM and uncovers the effects of the dynamic momentum coefficients clearly from the frequency domain perspective.
This perspective not only explains the difference between various momentum methods but also provides practical guidelines for designing efficient optimizers. Accordingly, we introduce FSGDM, an optimizer that dynamically adjusts momentum filter characteristics during training. Experiments show that FSGDM outperforms traditional SGD-based momentum optimizers.

2 FREQUENCY DOMAIN ANALYSIS FRAMEWORK

This section introduces the background of the Z-transform (Zadeh, 1950) in signal processing and then proposes a new frequency domain analysis framework for momentum methods.

2.1 Z-TRANSFORM AND QUASI-STATIONARY APPROXIMATION

Frequency analysis is a crucial technique for understanding how systems react to varying frequency components of input signals. Specifically, for discrete-time linear time-invariant systems, the Z-transform is leveraged to examine how systems attenuate or amplify signals at specific frequencies, especially in the study of system stability, pole-zero behavior, etc. (Oppenheim et al., 1996). Interestingly, in neural network training, the momentum update process at time t can be seen as a recursive filter where the gradient g_t and the momentum m_t act as input and output signals, respectively. The momentum coefficients affect the gradient adjustments across different frequency components. The high-frequency gradient components correspond to large and more abrupt changes in the gradient, while the low-frequency components indicate smooth and more gradual adjustments.

However, one key issue is that the momentum system can be inherently time-variant, as its coefficients may change stagewise throughout the training process. This variability makes it difficult to apply traditional Z-transform analysis. To overcome this, inspired by Zadeh (1961); Jury (1964), we approximate the system as time-invariant in each discrete interval stage.
By holding the momentum coefficients constant over every interval, we construct a time-invariant quasi-stationary system (Hubner & Tran-Gia, 1991), enabling us to apply the Z-transform validly. In our following analysis framework and our later optimizer design, we follow this multistage strategy for changing momentum coefficients. Particularly, for a predefined stage whose length is denoted by δ, the momentum coefficients are redefined using the floor function to ensure they remain constant over the whole stage:

u_t = u(⌊t/δ⌋ × δ)  and  v_t = v(⌊t/δ⌋ × δ),  (3)

where u(t), v(t) are the continuous dynamic sequence functions with respect to t. While there are multiple sequences with different designs, in this paper we use the following increasing and decreasing sequences:

Increasing: u(t) or v(t) = t / (t + µ),  Decreasing: u(t) or v(t) = 1 − (t + 1) / (t + ν),  (4)

where µ and ν are the increasing and decreasing factors.^2 In Appendix C.1, we also examined the test set performance using other kinds of dynamic sequences. Under the above settings, for a given stage k (k = 1, …, N), with t ∈ [(k−1)δ, kδ−1], the momentum system becomes:

m_t = u_k m_{t−1} + v_k g_t,  (5)

where u_k = u((k−1)δ) and v_k = v((k−1)δ) are constants for the duration of the k-th stage. Additionally, we set the total number of stages, denoted by N, to a constant value of 300 for all the experiments in this paper.

2.2 FREQUENCY DOMAIN ANALYSIS OF THE MOMENTUM METHOD

In this subsection, we introduce our frequency domain analysis framework and analyze the impacts of the momentum method on neural network training. We first apply the Z-transform, denoted by Z, to Equation 5:

M(z) = u_k z^{−1} M(z) + v_k G(z),  (6)

where G(z) = Z{g_t}, M(z) = Z{m_t}, and z^{−1} M(z) = Z{m_{t−1}}. To obtain the frequency response of the momentum system during stage k, we evaluate the transfer function H_k(z) on the unit circle (Oppenheim et al., 1996):

H_k(z) = M(z) / G(z) = v_k / (1 − u_k z^{−1}),  which at z = e^{jω} gives  H_k(ω) = v_k / (1 − u_k e^{−jω}),  (7)

where ω ∈ [0, π] is the normalized angular frequency of the real-valued signal.
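The stagewise coefficient rule of Equations 3–4 can be sketched in a few lines of pure Python (helper names are ours, not from the released codebase):

```python
import math

def stagewise(seq, t, delta):
    """Hold a continuous sequence seq(t) constant within each stage of
    length delta, as in Eq. 3: u_t = u(floor(t/delta) * delta)."""
    return seq(math.floor(t / delta) * delta)

def increasing(mu):
    # Eq. 4 (increasing): u(t) = t / (t + mu), rising from 0 toward 1.
    return lambda t: t / (t + mu)

def decreasing(nu):
    # Eq. 4 (decreasing): u(t) = 1 - (t + 1)/(t + nu); the t + 1 in the
    # numerator keeps u(0) < 1, avoiding zero gradients in the first stage.
    return lambda t: 1.0 - (t + 1) / (t + nu)

delta, mu = 10, 1_000
u = [stagewise(increasing(mu), t, delta) for t in range(3 * delta)]
# Constant within each stage, increasing across stages:
assert len(set(u[:delta])) == 1
assert u[0] < u[delta] < u[2 * delta]
# The decreasing sequence starts just below 1:
assert 0.98 < stagewise(decreasing(100), 0, 10) < 1.0
```

This quantization is what licenses the quasi-stationary Z-transform treatment: within each stage the filter coefficients, and hence H_k(ω), are fixed.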
The frequency response of the momentum system describes how the input gradient signal G(z) is altered into the output momentum signal M(ω) when it passes through the system. Note that this transfer function is valid for the entire duration of the k-th quasi-stationary stage.

^2 Note that, different from the increasing sequence, the numerator of the decreasing sequence is t + 1. This design avoids zero gradients at the first training stage.

Magnitude Response. The magnitude response of the momentum system in the k-th stage can be calculated by taking the magnitude of H_k(ω):

|H_k(ω)| = |v_k| / sqrt(1 − 2 u_k cos ω + u_k^2).  (8)

The magnitude response describes the amplitude scaling effect of the system at different frequencies. It indicates how the momentum system amplifies or attenuates different frequency components during each stage. This characteristic of the momentum system plays a key role in affecting the optimization process. Notably, when |H_k(ω)| < 1, the momentum system attenuates the signals with frequency ω; when |H_k(ω)| > 1, the momentum system amplifies the signals with ω. Consequently, we divide momentum systems into two categories: Orthodox Momentum Systems and Unorthodox Momentum Systems.

Orthodox Momentum Systems are those whose magnitude response never surpasses 1, like EMA-SGDM (2). This kind of momentum system only shows attenuating characteristics. Specifically, the momentum system behaves as a low-pass filter when u_k > 0 and a high-pass filter when u_k < 0. Additionally, when u_k gets close to 1, the momentum system will prefer to attenuate the gradient components with high frequencies. The visualization of the (dynamic) magnitude responses of orthodox momentum systems is in Section 3.1 and Appendix C.2.
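A quick numeric check of Equation 8 (the helper name is ours) makes the orthodox/unorthodox split concrete: EMA coefficients cap the response at 1, while the PyTorch-style v_k = 1 setting amplifies low frequencies by up to 1/(1 − u_k).

```python
import math

def mag_response(u, v, w):
    """|H(w)| = |v| / sqrt(1 - 2*u*cos(w) + u**2), per Eq. 8."""
    return abs(v) / math.sqrt(1.0 - 2.0 * u * math.cos(w) + u * u)

# Orthodox (EMA) momentum, u=0.9, v=1-u=0.1: a low-pass filter whose
# magnitude peaks at exactly 1 at w = 0 and attenuates elsewhere.
assert abs(mag_response(0.9, 0.1, 0.0) - 1.0) < 1e-12
assert mag_response(0.9, 0.1, math.pi) < 0.1

# Unorthodox momentum, u=0.9, v=1: a low-pass *gain* filter that
# amplifies low frequencies by up to 1/(1-u) = 10 while still
# attenuating the highest frequencies.
assert abs(mag_response(0.9, 1.0, 0.0) - 10.0) < 1e-12
assert mag_response(0.9, 1.0, math.pi) < 1.0
```

Negative u_k mirrors these curves across ω = π/2, giving the high-pass and high-pass gain cases discussed below.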
For Unorthodox Momentum Systems, where the magnitude response can surpass 1, such as selecting u_t = 0.9 and v_t = 1 in Standard-SGDM (1), the momentum system possesses both amplifying and attenuating characteristics. In this paper, we refer to these kinds of unorthodox filters as low/high-pass gain filters. Specifically, the momentum system behaves as a low-pass gain filter when u_k > 0, v_k = 1 and a high-pass gain filter when u_k < 0, v_k = 1. Additionally, if u_k is close to 1, the momentum system attenuates high-frequency gradient components while strongly amplifying low-frequency components; if u_k is close to −1, the momentum system attenuates low-frequency gradient components while strongly amplifying high-frequency components. The visualization of the (dynamic) magnitude responses of unorthodox momentum systems is in Section 3.2 and Appendix C.2.

To demonstrate the momentum effects from the frequency perspective, in Figure 1 we compare an original sinusoidal signal, a noisy version injected with Gaussian noise, and the signal after applying the momentum method (called the momentum signal for short) in the time domain. The red curve represents the noisy signal, the black dashed curve corresponds to the original noise-free true signal, and the cyan curve shows the momentum signal. We can see that different selections of u_k and v_k significantly affect the amplifying or attenuating effects of the momentum system.
[Figure 1 panels omitted: each plots amplitude over 300 time steps for the noisy, true, and momentum signals.] Figure 1: Visualization of different filters applied to the noisy sinusoidal signal. (a) Dynamic low-pass filter: u_k = 0 → 1, v_k = 1 − u_k, with the system gradually shifting from an all-pass filter to a narrow low-pass filter; (b) dynamic high-pass filter: u_k = 0 → −1, v_k = 1 + u_k, with the system gradually shifting from an all-pass filter to a narrow high-pass filter; (c) low-pass gain filter: u_k = 0.9, v_k = 1, which indicates the momentum behaves like a low-pass gain filter with amplification of low-frequency gradient components; (d) high-pass gain filter: u_k = −0.9, v_k = 1, which indicates the momentum behaves like a high-pass gain filter with amplification of high-frequency components. The amplifying and attenuating effects of different momentum systems are verified.

Similarly, we also have the phase response of the momentum system (see Appendix A). While the phase response of the momentum only provides limited insights, understanding the behavior of the magnitude response across stages is essential for analyzing the time-variant characteristics of the momentum system. By plotting the dynamic magnitude response value |H_k(ω)| on the normalized angular frequency axis for each stage k, we can track how the frequency-dependent behavior of the multistage momentum system evolves. This provides valuable insight into the amplifying or attenuating characteristics of the momentum system. Further results on the comparisons of momentum systems with different dynamic magnitude responses are presented in the next section.
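The denoising effect illustrated in Figure 1 can be reproduced in a few lines (the signal parameters here are illustrative choices of ours, not the paper's exact settings): a fixed EMA low-pass filter applied to a slow, heavily noised sinusoid recovers the true signal far better than the raw noisy input.

```python
import math
import random

random.seed(0)
T = 300
# Slow sinusoid (half a period over T steps) plus Gaussian noise.
true = [50.0 * math.sin(2.0 * math.pi * t / 600.0) for t in range(T)]
noisy = [s + random.gauss(0.0, 10.0) for s in true]

# EMA momentum as a fixed low-pass filter: m_t = u*m_{t-1} + (1-u)*g_t.
u, m, filtered = 0.9, 0.0, []
for g in noisy:
    m = u * m + (1.0 - u) * g
    filtered.append(m)

def mse(sig):
    return sum((x - s) ** 2 for x, s in zip(sig, true)) / T

# The filtered signal tracks the slow sinusoid much better than the
# noisy one: EMA with u=0.9 cuts the white-noise variance by roughly
# (1-u)/(1+u) ≈ 5%, at the cost of a small lag on the slow component.
assert mse(filtered) < 0.5 * mse(noisy)
```

If the sinusoid were fast relative to the filter's cutoff, the lag-induced distortion would dominate, which is exactly the failure mode the paper attributes to aggressive late-stage filters applied too early.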
3 DYNAMIC MAGNITUDE RESPONSE OF THE MOMENTUM SYSTEMS

In this section, we present an empirical study to discover the influence of the momentum coefficients by comparing the test performance of momentum systems with different dynamic magnitude responses. We train VGG (Simonyan & Zisserman, 2014) on the CIFAR-10 (Krizhevsky et al., 2009) dataset and ResNet50 (He et al., 2016) on the CIFAR-100 dataset using different momentum coefficients, while keeping all other hyperparameters unchanged. For each experiment, we report the mean and standard error (as subscripts) of test accuracy over 3 runs with random seeds 0–2. The detailed experimental settings can be found in Appendix D. The experimental results on CIFAR-10 show high similarity to those on CIFAR-100. Thus, here, we mainly focus on the analysis based on CIFAR-100 and defer the experimental results of VGG16 on CIFAR-10 to Appendix C.3.

3.1 ORTHODOX MOMENTUM SYSTEMS

[Figure 2 panels omitted: (a) dynamic magnitude response for increasing µ = 1e3, 1e4, 1e5; (b) magnitude response for fixed u_t = 0.3, 0.6, 0.9; (c) dynamic magnitude response for decreasing ν = 1e2, 1e3, 1e4; (d)–(f) gradient-norm vs. momentum-norm comparisons over 300 epochs for the same settings.]

Figure 2: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for EMA-SGDM with low-pass
momentum defined in Equation 9. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for EMA-SGDM with low-pass momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.

We first focus on the orthodox momentum systems with the following two main types, low-pass and high-pass momentum, defined as:

Low-pass: m_t = u_t m_{t−1} + (1 − u_t) g_t,  High-pass: m_t = −u_t m_{t−1} + (1 − u_t) g_t,  (9)

where u_t ∈ [0, 1) can be set as an increasing sequence, a decreasing sequence, or a fixed value. For time-variant momentum systems, different strategies for u_t result in different time-variant filtering characteristics during training. According to Section 2.1, scaling the increasing and decreasing factors affects the changing rates of u_t. In the following, we demonstrate the dynamic magnitude responses, comparisons between gradient norms and momentum norms, and test accuracy results of orthodox momentum systems under different u_t sequences.^3

^3 Note that selecting µ = 100 or ν = 10^4 leads to a long stage of a super narrow-band filter. To avoid this problem, we select µ = 10^3, 10^4, 10^5 and ν = 10^2, 10^3, 10^4 in this paper.

Example 1: Low-Pass Momentum. We first explore the effect of increasing, fixed, and decreasing u_t sequences in low-pass momentum. Figures 2(a)–2(c) show the corresponding dynamic magnitude responses over time. With increasing u_t, the system transitions from an all-pass to a progressively narrower low-pass filter, gradually attenuating high-frequency components. Larger µ results in slower transitions. Decreasing u_t shows the reverse behavior, with larger ν resulting in slower transitions. A fixed u_t maintains a constant filter, with larger u_t leading to more aggressive smoothing and noise-reduction characteristics.
The norm comparisons in Figures 2(d)–2(f) show that the momentum norms in low-pass momentum systems are always less than the corresponding gradient norms. Larger u_t, larger ν, and smaller µ lead to more reduced momentum norms, which validates the time-variant filtering characteristics of orthodox momentum systems. Test accuracy results in Table 1 reveal that increasing or fixing u_t achieves higher accuracy compared to applying decreasing sequences of u_t. In particular, momentum systems with proper increasing sequences of u_t can outperform those with fixed u_t. We also find that larger ν results in poorer model performance. These phenomena indicate that gradually attenuating high-frequency components during training improves test set performance, while excessive suppression of low-frequency gradient components in early stages and retention of high-frequency components in late stages degrade model performance.

Example 2: High-Pass Momentum. High-pass momentum systems exhibit symmetric dynamic magnitude responses and similar norm comparisons compared to their low-pass counterparts (see Figure 6 in Appendix C.2). With increasing u_t, the system shifts from an all-pass to a narrow high-pass filter, progressively attenuating low-frequency components. Decreasing sequences act in reverse. Fixed sequences with larger u_t lead to more aggressive attenuation of low-frequency components. The comparison of gradient norms and momentum norms can be found in Appendix C.2. Test accuracy in Table 1 shows that dynamic high-pass systems with larger µ and smaller ν yield better top-1 accuracy. When selecting fixed values, momentum systems with larger u_t perform more poorly. These results confirm that suppressing low-frequency gradient components is harmful.
Moreover, high-pass systems generally outperform low-pass systems when applying decreasing strategies with the same ν, suggesting that high-frequency components play a crucial role in the early training stages, which is also supported by the studies in Appendix C.4. From Examples 1 and 2, we empirically verify that high-frequency gradient components are detrimental in late training stages, while their preservation in early stages leads to higher test accuracy, which matches the viewpoint that gradient noise has a generalization benefit early in training (Smith et al., 2020).

Table 1: Top-1 ACC. (%) comparisons of different momentum coefficient strategies of orthodox momentum systems of ResNet50 on CIFAR-100.

| Parameters | µ=1k | µ=10k | µ=100k | u_t=0.3 | u_t=0.6 | u_t=0.9 | ν=100 | ν=1k | ν=10k |
|---|---|---|---|---|---|---|---|---|---|
| Low-pass | 77.12±0.07 | 77.06±0.14 | 76.86±0.12 | 76.98±0.09 | 76.82±0.18 | 76.84±0.06 | 72.58±0.44 | 70.53±0.31 | 69.69±0.75 |
| High-pass | 51.59±0.78 | 67.55±0.22 | 74.72±0.06 | 72.46±0.13 | 65.14±0.17 | 53.43±0.26 | 76.82±0.25 | 75.92±0.12 | 70.99±0.18 |

3.2 UNORTHODOX MOMENTUM SYSTEMS

Unorthodox momentum systems allow magnitude responses larger than 1, meaning they can both attenuate and amplify gradients in different frequency bands. We focus on two main types, low-pass gain and high-pass gain momentum, defined as:

Low-pass gain: m_t = u_t m_{t−1} + g_t,  High-pass gain: m_t = −u_t m_{t−1} + g_t,  (10)

where u_t ∈ [0, 1) can follow increasing, fixed, or decreasing sequences. For simplification, we use the PyTorch setting with v_t = 1. We show the dynamic magnitude responses, comparisons between gradient norms and momentum norms, and test accuracy results of unorthodox momentum systems under different u_t sequences as follows.

Example 3: Low-Pass Gain Momentum. In low-pass gain momentum, the system transitions from an all-pass to a narrower low-pass gain filter as u_t increases, amplifying low-frequency components while attenuating high-frequency components.
[Figure 3 panels omitted: (a) dynamic magnitude response for increasing µ = 1e3, 1e4, 1e5; (b) magnitude response for fixed u_t = 0.3, 0.6, 0.9; (c) dynamic magnitude response for decreasing ν = 1e2, 1e3, 1e4; (d)–(f) gradient-norm vs. momentum-norm comparisons over 300 epochs for the same settings.]

Figure 3: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for Standard-SGDM with low-pass gain momentum defined in Equation 10. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for Standard-SGDM with low-pass gain momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.

Figures 3(a)–3(c) show the corresponding dynamic magnitude responses over time. A large µ corresponds to slow shifts. Decreasing u_t reverses the trend, heavily amplifying low-frequency components early and relaxing this effect over time. A fixed u_t maintains a constant filter, in which a larger u_t amplifies low-frequency components more aggressively.
Figures 3(d)–3(f) demonstrate larger momentum norms compared to gradient norms, indicating the amplification effects in gain filters. Larger u_t, larger ν, and smaller µ lead to more reduced momentum norms, which validates the time-variant filtering characteristics of unorthodox momentum systems. Test results in Table 2 indicate that increasing u_t with appropriate µ outperforms the scenarios using fixed and decreasing sequences of u_t. We also find that smaller ν yields worse test-set accuracy. From these results, we conclude that amplifying low-frequency gradient components, while properly attenuating high-frequency ones, improves test set performance.

Example 4: High-Pass Gain Momentum. High-pass gain momentum mirrors the dynamic magnitude response behavior of low-pass gain systems (see Figure 7 in Appendix C.2). Increasing u_t gradually amplifies high-frequency gradient components and attenuates low-frequency ones. Decreasing u_t reverses this pattern, heavily amplifying high-frequency components early on. Fixed constructions amplify high-frequency components more aggressively for larger u_t. The comparison of gradient norms and momentum norms can be found in Appendix C.2. Test accuracy in Table 2 shows that fixed constructions with larger u_t and decreasing u_t with larger ν perform worse. These findings confirm that amplifying high-frequency gradients during training might be undesirable.

From Examples 3 and 4, we empirically verify that proper amplification in unorthodox momentum systems can improve model performance, particularly when amplifying low-frequency gradient components.

Table 2: Top-1 ACC. (%) comparisons of different momentum coefficient strategies of unorthodox momentum systems of ResNet50 on CIFAR-100.
                 Increasing Factor (µ)                Fixed Value (ut)                     Decreasing Factor (ν)
Parameters       1k          10k         100k        0.3         0.6         0.9          100         1k          10k
Low-Pass Gain    76.10±0.14  80.48±0.03  78.02±0.03  78.01±0.04  79.51±0.15  79.71±0.25   70.37±0.67  71.53±0.62  76.18±0.38
High-Pass Gain   75.47±0.21  74.54±0.16  75.97±0.27  75.68±0.18  74.56±0.09  73.77±0.18   76.41±0.41  74.00±0.26  68.90±0.82

3.3 DISCUSSION

The differences in norm comparisons and test accuracy between orthodox and unorthodox momentum systems validate the distinctions between EMA-SGDM and Standard-SGDM. While EMA-SGDM possesses only attenuating filter effects, Standard-SGDM can both amplify and attenuate gradient components at different frequencies. Moreover, our findings indicate that with appropriate momentum coefficients, Standard-SGDM consistently outperforms EMA-SGDM, showing the advantage of decoupling the momentum coefficients, which answers Question 1. Regarding Question 2, the test results show that decoupled momentum coefficients with a properly increasing ut and a fixed vt can achieve better performance. In particular, our empirical findings reveal the following insights for training convolutional neural networks (CNNs): (1) high-frequency gradient components are undesired in the late stages of training; (2) preserving the original gradient in the early stages leads to improved test-set accuracy; (3) gradually amplifying low-frequency gradient components enhances performance. Furthermore, we find that these insights also carry over to various learning areas (see Section 5). Based on these insights, it may be possible to design a more effective optimizer by appropriately adjusting the momentum coefficients.

4 FREQUENCY-BASED OPTIMIZER

As suggested by our frequency domain analysis framework, achieving better test performance is equivalent to finding an appropriate dynamic filter-changing pattern for momentum systems.
Based on this idea, we propose FSGDM, a heuristic optimizer that dynamically adjusts momentum filtering characteristics. Furthermore, to explore the potential optimal strategies of our proposed FSGDM based on the findings in Section 3.3, several sets of experiments on various deep-learning tasks are conducted.

4.1 FREQUENCY STOCHASTIC GRADIENT DESCENT WITH MOMENTUM

Algorithm 1: FSGDM
Input: Σ, c, v, N
Initialization: m0, µ = cΣ, δ = Σ/N
for each t = 1, 2, . . . do
    gt = ∇Lt(x_{t−1}, ζ_{t−1})
    u(t) = t / (t + µ), ut = u(⌊t/δ⌋ × δ)
    mt = ut m_{t−1} + v gt
    xt = x_{t−1} − αt mt
end

Generally, determining the best optimization strategy by tuning ut and vt according to our frequency domain analysis is challenging. In the field of signal processing, how to select the best filter for different problems is still an open problem. However, we can design a better optimizer based on the findings in Section 3.3. Still, there are infinitely many dynamic magnitude responses that can meet the requirements of the aforementioned findings. Based on Occam's Razor, we provide a minimalist form of our proposed optimizer in Algorithm 1, where Σ is the total number of gradient update steps in the whole training process, determined by the number of epochs and the size of the dataset; c is a scaling factor; Lt: R^d → R is the loss for the t-th step; ζ_{t−1} denotes a minibatch drawn from the training data; and N is the number of stages. µ and v are adjustable parameters that dominate the filtering characteristic of FSGDM. Moreover, since µ is a function of Σ, the dynamic magnitude response is inherited when Σ varies. In particular, we have the following proposition.

Proposition 1. By fixing the number of stages N and the scaling factor c, the dynamic magnitude response of Algorithm 1 remains invariant with respect to changes in the total number of training steps.

The proof of Proposition 1 is deferred to Appendix B.3. By this, we show that the dynamic magnitude response of a well-performing FSGDM can be adapted to various tasks.
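Algorithm 1 admits a compact implementation. The sketch below is a minimal NumPy version for a generic parameter vector; the learning rate, the gradient function, and the test problem are illustrative assumptions, and the default hyperparameters (c = 0.033, v = 1, N = 300) follow the values used in Section 5. Only the update rule itself is taken from Algorithm 1.

```python
import numpy as np

def fsgdm(grad_fn, x0, total_steps, lr=0.01, c=0.033, v=1.0, n_stages=300):
    """Minimal FSGDM sketch following Algorithm 1.

    The momentum coefficient follows u(t) = t / (t + mu) with mu = c * Sigma,
    evaluated on a staircase of N stages (piecewise constant within a stage).
    """
    mu = c * total_steps            # mu = c * Sigma
    delta = total_steps / n_stages  # stage length delta = Sigma / N
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)            # momentum buffer m_0
    for t in range(1, total_steps + 1):
        g = grad_fn(x)                          # g_t
        t_stage = np.floor(t / delta) * delta   # floor(t/delta) * delta
        u_t = t_stage / (t_stage + mu)          # staged coefficient u_t
        m = u_t * m + v * g                     # m_t = u_t m_{t-1} + v g_t
        x = x - lr * m                          # x_t = x_{t-1} - alpha_t m_t
    return x

# Usage on a toy quadratic f(x) = ||x||^2 / 2, whose gradient is x itself.
x_star = fsgdm(lambda x: x, x0=[5.0, -3.0], total_steps=2000, lr=0.05)
```

In the early stages u_t is near zero (an all-pass filter, i.e., plain SGD), and it grows toward t/(t + cΣ) in the late stages, gradually amplifying low-frequency gradient components as described in Section 3.3.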
In the following subsection, we explore the optimal scaling factor c and momentum coefficient v for FSGDM.

4.2 EMPIRICAL EXPLORATION OF OPTIMAL SETTINGS FOR FSGDM

As discussed in Section 3, different choices of c and v can significantly affect the filtering characteristics of FSGDM. To understand their impact on optimization performance and to identify optimal parameter settings, we conduct a comprehensive empirical study.

Specifically, we empirically explore the optimal parameter selection of FSGDM across three different image classification tasks by sweeping c and v within the ranges (0, 1) and [0.5, 3], respectively. We conduct three sets of experiments using the same codebase (see Appendix D for more training details): (1) training ResNet18 for 100 epochs on CIFAR-10, (2) training ResNet34 for 100 epochs on Tiny-ImageNet (Le & Yang, 2015), and (3) training ResNet50 for 300 epochs on CIFAR-100. We also explore the optimal parameter selection on a natural language processing task in Appendix C.7. By finding the parameter selections with better test performance across different tasks, we aim to empirically summarize a rule for optimal parameter selection.

[Figure 4: The Top-1 test errors of training ResNet18 on CIFAR-10, ResNet34 on Tiny-ImageNet, and ResNet50 on CIFAR-100, plotted over the scaling factor c and the momentum coefficient v. The results show that the optimal parameter selections across these three training settings exhibit a high similarity. The black points denote the parameter selections with better test performance. The optimal zone of the parameter selection is circled in red.]

The results in Figure 4 show that there exists an optimal zone in which relatively better test accuracy can be achieved. When the momentum coefficient v is fixed, the test accuracy first increases and then declines as the scaling factor c increases. In Appendix C.8, we plot the magnitude responses and the test accuracy results of the black points in Figure 4 and find that these parameter selections have similar dynamic magnitude responses and test accuracy curves. Thus, we conjecture that parameter selections with similar dynamic magnitude responses lead to close performance. More discussion is given in Appendix C.8.

5 EXPERIMENTS

To verify the generalization of the proposed FSGDM, we perform a large-scale comparison across vision classification tasks, natural language processing (NLP) tasks, and reinforcement learning (RL) tasks. We compare the test performance of FSGDM and conventional SGD-based momentum optimizers, including Standard-SGDM and EMA-SGDM. We set ut = 0.9, vt = 1 for Standard-SGDM and ut = 0.9 for EMA-SGDM, which are the common momentum coefficient selections for training neural networks. For a fair comparison and for convenience, we set c = 0.033, v = 1 for FSGDM, which is one of the black points in the optimal zone in Figure 4. Note that other combinations of c and v in the optimal zone could also be selected. For the other adjustable parameters in Algorithm 1, we set N to 300, as mentioned at the end of Section 2.1, and set Σ to the total number of training steps. Notably, since our focus is on comparing the performance of different optimizers, we do not fine-tune every parameter for each model but use the same hyperparameters across all models for convenience. See Appendix D for more experimental details.
Table 3: Performance on Image Classification Experiments

Dataset          CIFAR-10                 CIFAR-100                  Tiny-ImageNet            ImageNet
Model            VGG16       ResNet18     ResNet50     DenseNet121   ResNet34     MobileNet   ResNet50
EMA-SGDM         93.71±0.07  94.19±0.07   76.84±0.06   76.18±0.23    62.28±0.17   55.00±0.10  74.24±0.04
Standard-SGDM    94.08±0.07  95.57±0.06   79.71±0.25   80.49±0.09    67.51±0.08   58.31±0.20  76.66±0.09
FSGDM            94.19±0.07  95.66±0.07   81.44±0.06   81.14±0.05    67.74±0.06   59.61±0.11  76.91±0.05

Image Classification. We perform four sets of experiments on different computer vision datasets and train various CNN architectures on them. Specifically, we select: (a) VGG16 and ResNet18 for CIFAR-10; (b) ResNet50 and DenseNet121 (Huang et al., 2017) for CIFAR-100; (c) ResNet34 and MobileNet (Howard, 2017) for Tiny-ImageNet; (d) ResNet50 for ILSVRC 2012 ImageNet (Russakovsky et al., 2015). For each task, we report the mean and standard error (as ± values) of test accuracy over 3 runs with random seeds 0-2. The results in Table 3 show that our FSGDM consistently achieves better test-set performance. Additionally, we observe that Standard-SGDM steadily outperforms EMA-SGDM, which aligns with our discoveries in Section 3.3.

Natural Language Processing. We conduct experiments on the IWSLT14 German-English translation task (Cettolo et al., 2014), a widely used benchmark in the community, to represent NLP tasks. Specifically, we train six different models encompassing a variety of architectures: two convolution-based models, FConv (Gehring et al., 2017) and LightConv (Wu et al., 2019); two LSTM-based models, vanilla LSTM (Hochreiter & Schmidhuber, 1997) and LSTM-W (Wiseman & Rush, 2016); and two Transformer-based models (Vaswani et al., 2017) of different sizes, Transformer-tiny and Transformer. Model performance is reported using BLEU scores, where higher scores indicate better performance, and we summarize all results in Table 4.
Compared with the baseline optimizers, FSGDM outperforms all others on this task across all six models. This shows the effectiveness of our optimizer in improving translation quality. Moreover, the consistent improvement highlights the robustness of FSGDM and its ability to generalize across different neural network structures in natural language processing tasks.

Table 4: Performance on IWSLT14 Dataset

Model            FConv       LightConv   LSTM        LSTM-W      Transformer-tiny   Transformer
EMA-SGDM         13.97±0.01  10.56±0.01  4.99±0.01   1.20±0.07   5.17±0.01          6.27±0.01
Standard-SGDM    27.41±0.02  33.05±0.04  28.12±0.06  24.66±0.06  18.16±0.03         31.50±0.05
FSGDM            28.30±0.01  33.44±0.02  29.27±0.02  27.41±0.03  19.94±0.07         32.40±0.05

[Figure 5: The reward curves of EMA-SGDM, Standard-SGDM, and FSGDM on three MuJoCo tasks (Walker2d-v4, HalfCheetah-v4, and Ant-v4).]

Reinforcement Learning. We evaluate FSGDM on PPO (Schulman et al., 2017), one of the most popular policy gradient methods in reinforcement learning. We replace the default Adam optimizer (Kingma & Ba, 2014) in PPO with FSGDM, Standard-SGDM, and EMA-SGDM. We test the three optimizers on Walker2d-v4, HalfCheetah-v4, and Ant-v4, which are continuous control environments simulated by the standard, widely used MuJoCo engine (Todorov et al., 2012). Following standard evaluation, we run each game under 10 random seeds (ranging from 0 to 9) and test the performance for 10 episodes every 30,000 steps.
All experiments are conducted using the Tianshou codebase (Weng et al., 2022), a widely known RL framework. Figure 5 presents the results on the three tasks, where the solid line represents the average episode reward during evaluation and the shaded region indicates the 75% confidence interval. On all three test games, our FSGDM achieves higher rewards than Standard-SGDM and EMA-SGDM.

6 CONCLUSIONS

This paper proposes a frequency domain analysis framework for the momentum method. Based on the proposed framework, we find that different selections of momentum coefficients correspond to different filter characteristics of the momentum methods, and that performance differs significantly under different time-variant momentum coefficients. Furthermore, we develop a heuristic optimizer named FSGDM that outperforms conventional SGD-based momentum optimizers in various learning tasks. Future work may explore the best filtering strategy for general scenarios and extend the frequency domain analysis framework to other optimizers such as Adam.

ACKNOWLEDGMENTS

We would like to especially thank Prof. K. C. Ho for many enlightening discussions. The work of Xianliang Li and Sheng Xu was supported by the National Natural Science Foundation of China (62273327) and the Shenzhen Science and Technology Program (KCXFZ20211020165003005). The work of Linlong Wu was supported by the Luxembourg FNR CORE METSA project C22/IS/17391632.

REFERENCES

N. S. Aybat, A. Fallah, M. Gurbuzbalaban, and A. Ozdaglar. A universally optimal multistage accelerated stochastic gradient method. Advances in Neural Information Processing Systems, 32, 2019.

M. Cettolo, J. Niehues, S. Stüker, L. Bentivogli, and M. Federico. Report on the 11th IWSLT evaluation campaign. In Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign, pp. 2–17, 2014.

X. Chen, S. Liu, R. Sun, and M. Hong.
On the convergence of a class of Adam-type algorithms for non-convex optimization. International Conference on Learning Representations, 2018.

A. Cutkosky and F. Orabona. Momentum-based variance reduction in non-convex SGD. Advances in Neural Information Processing Systems, 32, 2019.

E. S. Gardner Jr. Exponential smoothing: The state of the art. Journal of Forecasting, 4(1):1–28, 1985.

J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning, pp. 1243–1252. PMLR, 2017.

G. Goh. Why momentum really works. Distill, 2017. doi: 10.23915/distill.00006.

K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.

A. G. Howard. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.

F. Hubner and P. Tran-Gia. Quasi-stationary analysis of a finite capacity asynchronous multiplexer with modulated deterministic input. ITC-13, Copenhagen, 1991.

E. I. Jury. Theory and application of the z-transform method. Electronics and Power, 11(3):291–299, 1964.

R. Kidambi, P. Netrapalli, P. Jain, and S. Kakade. On the insufficiency of existing momentum schemes for stochastic optimization. In 2018 Information Theory and Applications Workshop (ITA), pp. 1–9. IEEE, 2018.

D. P. Kingma and J.
L. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2014.

A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.

A. Kulunchakov and J. Mairal. A generic acceleration framework for stochastic composite optimization. Advances in Neural Information Processing Systems, 32, 2019.

Y. Le and X. Yang. Tiny ImageNet visual recognition challenge. CS231N, 7(7):3, 2015.

X. Li, M. Liu, and F. Orabona. On the last iterate convergence of momentum methods. In International Conference on Algorithmic Learning Theory, pp. 699–717. PMLR, 2022.

T. Liu, Z. Chen, E. Zhou, and T. Zhao. A diffusion approximation theory of momentum SGD in nonconvex optimization. arXiv preprint arXiv:1802.05155, 2018.

Y. Liu, Y. Gao, and W. Yin. An improved analysis of stochastic gradient descent with momentum. In Advances in Neural Information Processing Systems, volume 33, pp. 18261–18271, 2020.

I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. International Conference on Learning Representations, 2016.

L. Luo, Y. Xiong, Y. Liu, and X. Sun. Adaptive gradient methods with dynamic bound of learning rate. International Conference on Learning Representations, 2019.

J. Ma and D. Yarats. Quasi-hyperbolic momentum and Adam for deep learning. In International Conference on Learning Representations, 2018.

V. Mai and M. Johansson. Convergence of a stochastic gradient method with momentum for non-smooth non-convex optimization. In International Conference on Machine Learning, pp. 6630–6639. PMLR, 2020.

Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2). In Dokl Akad Nauk SSSR, volume 269, pp. 543, 1983.

A. V. Oppenheim, A. S. Willsky, and S. H. Nawab. Signals & Systems (2nd ed.).
Prentice-Hall, Inc., USA, 1996. ISBN 0138147574.

A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.

B. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964. ISSN 0041-5553.

S. J. Reddi, S. Kale, and S. Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations, 2018.

H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400–407, 1951.

D. E. Rumelhart, G. Hinton, and R. J. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, 1986.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115:211–252, 2015.

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

L. N. Smith. A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820, 2018.

S. Smith, E. Elsen, and S. De. On the generalization benefit of noise in stochastic gradient descent. In International Conference on Machine Learning, pp. 9058–9067. PMLR, 2020.

I. Sutskever, J. Martens, G. Dahl, and G. Hinton.
On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pp. 1139–1147. PMLR, 2013.

E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.

B. Van Scoy, R. A. Freeman, and K. M. Lynch. The fastest known globally convergent first-order method for minimizing strongly convex functions. IEEE Control Systems Letters, 2(1):49–54, 2017.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 2017.

R. Wang, S. Malladi, T. Wang, K. Lyu, and Z. Li. The marginal value of momentum for small learning rate SGD. In International Conference on Learning Representations, 2024.

J. Weng, H. Chen, D. Yan, K. You, A. Duburcq, M. Zhang, Y. Su, H. Su, and J. Zhu. Tianshou: A highly modularized deep reinforcement learning library. Journal of Machine Learning Research, 23(267):1–6, 2022.

S. Wiseman and A. M. Rush. Sequence-to-sequence learning as beam-search optimization. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016.

F. Wu, A. Fan, A. Baevski, Y. N. Dauphin, and M. Auli. Pay less attention with lightweight and dynamic convolutions. International Conference on Learning Representations, 2019.

Y. Yan, T. Yang, Z. Li, Q. Lin, and Y. Yang. A unified analysis of stochastic momentum methods for deep learning. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018.

H. Yu, R. Jin, and S. Yang. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization. In International Conference on Machine Learning, pp. 7184–7193. PMLR, 2019.

L. A. Zadeh.
Frequency analysis of variable networks. Proceedings of the IRE, 38(3):291–299, 1950.

L. A. Zadeh. Time-varying networks, I. Proceedings of the IRE, 49(10):1488–1503, 1961.

A PHASE RESPONSE

The phase response of the momentum system in the k-th stage can be written as

arg(H_k(ω)) = arg(v_k) − tan^{−1}( (u_k sin ω) / (1 − u_k cos ω) ),   (11)

where arg(·) is the argument operator. For any real value v_k, arg(v_k) = 0 if v_k > 0 and arg(v_k) = π if v_k < 0; for any ω ∈ [0, π] and u_k ∈ (−1, 1), tan^{−1}(u_k sin ω / (1 − u_k cos ω)) ∈ (−π/2, π/2). The phase response describes the phase-shifting effect of the momentum system at different frequencies. In the context of gradient-based optimization, a phase shift indicates a change in the optimization direction. Therefore, when v_k < 0, the phase shift of the momentum adds an extra π rad to the shifted direction, meaning the update direction is essentially reversed, which can lead to oscillations, instability, or divergence in the optimization process. Thus, it is necessary to select a positive v_k when applying momentum methods.

B ADDITIONAL DERIVATIONS AND PROOF

B.1 DERIVATION OF EQUATION 8

|H_k(ω)| = sqrt( H_k(ω) H_k†(ω) )
         = sqrt( v_k / (1 − u_k e^{−jω}) · v_k / (1 − u_k e^{jω}) )
         = sqrt( v_k² / (1 − u_k e^{−jω} − u_k e^{jω} + u_k² e^{−jω} e^{jω}) )
         = sqrt( v_k² / (1 − u_k(cos ω − j sin ω) − u_k(cos ω + j sin ω) + u_k²(cos² ω + sin² ω)) )
         = sqrt( v_k² / (1 − 2 u_k cos ω + u_k²) )
         = |v_k| / sqrt(1 − 2 u_k cos ω + u_k²)

B.2 DERIVATION OF EQUATION 11

arg(H_k(ω)) = arg(v_k) − arg(1 − u_k e^{−jω})
            = arg(v_k) − arg((1 − u_k cos ω) + j(u_k sin ω))
            = arg(v_k) − tan^{−1}( (u_k sin ω) / (1 − u_k cos ω) )

B.3 PROOF OF PROPOSITION 1

According to Algorithm 1, the momentum coefficient in the k-th stage (k = 1, 2, . . . , N) is

u_k = (k−1)δ / ((k−1)δ + µ) = (k−1)δ / ((k−1)δ + cΣ) = (k−1)δ / ((k−1)δ + cNδ) = (k−1) / (k−1 + cN).   (12)

This guarantees that u_k is independent of the total number of training steps — which may differ when choosing other training strategies or changing datasets — once the scaling factor c and the number of stages N are determined.
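As a sanity check on the derivations in B.1 and B.2, the closed-form magnitude and phase can be compared against a direct complex-number evaluation of the stage-wise transfer function H_k(ω) = v_k / (1 − u_k e^{−jω}). The NumPy sketch below is illustrative; the choice u = 0.9, v = 1 is arbitrary.

```python
import numpy as np

def H(u, v, omega):
    """Stage-wise transfer function H_k(w) = v / (1 - u * exp(-j w))."""
    return v / (1.0 - u * np.exp(-1j * omega))

u, v = 0.9, 1.0
omega = np.linspace(0.0, np.pi, 500)

# Closed-form magnitude (Equation 8) and phase (Equation 11).
mag_closed = abs(v) / np.sqrt(1.0 - 2.0 * u * np.cos(omega) + u**2)
phase_closed = np.angle(v) - np.arctan2(u * np.sin(omega), 1.0 - u * np.cos(omega))

h = H(u, v, omega)
assert np.allclose(np.abs(h), mag_closed)    # matches Equation 8
assert np.allclose(np.angle(h), phase_closed)  # matches Equation 11
```

Since 1 − u cos ω > 0 for u ∈ (0, 1), the two-argument arctangent coincides with the tan^{−1} expression in Equation 11.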
C ADDITIONAL EXPERIMENTS

In this section, we present several supplementary experiments. The detailed experimental settings are given in Appendix D.

C.1 DYNAMIC SEQUENCE CONSTRUCTION

There are infinitely many increasing or decreasing sequences. In this part, we compare the test-set performance of the sequence in Equation 4 with four other dynamic sequences. Specifically, we compare against the following four dynamic increasing sequences within Algorithm 1:

Linear: u(t) = a1 t;   Exponential: u(t) = 1 − e^{−a2 t};   Sine: u(t) = sin(a3 t);   Logarithmic: u(t) = ln(a4 t);

where a1 to a4 are scaling coefficients. For a fair comparison, we adjust the coefficients so that the ut values of all sequences are nearly identical in the beginning and ending stages. Table 5 displays their test accuracy results after 300 epochs of training ResNet50 on CIFAR-100. We ran each experiment under 3 different random seeds (0, 1, 2). Clearly, the dynamic sequence we use in Equation 4 is superior to the other constructions.

Table 5: Top-1 ACC. (%) comparisons of using linear, exponential, sine, logarithmic, and our sequences when adopting FSGDM.

Dynamic Sequence Type   Ours        Linear      Exponential   Sine        Logarithmic
ACC-1 (%)               81.44±0.06  78.24±0.24  80.38±0.04    78.76±0.29  78.70±0.09

Specifically, (a1, a2, a3, a4) = (8.271×10^−6, 3.793×10^−5, 1.125×10^−5, 1.394×10^−5).

C.2 ADDITIONAL FIGURES OF HIGH-PASS MOMENTUM SYSTEMS ON CIFAR-100

This subsection provides the figures of the dynamic magnitude responses and norms of the high-pass (gain) momentum systems mentioned in Section 3. Figure 6 and Figure 7 show the magnitude responses and norm comparisons of the high-pass and high-pass gain momentum systems, respectively.
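For concreteness, the four sequence families compared in C.1 above can be constructed as follows. The total step count Σ below is an illustrative value (the paper does not state the exact number of update steps), and clipping the logarithmic schedule at zero is a hypothetical choice for the small-t regime where ln(a4 t) < 0; the a-coefficients are those reported above.

```python
import numpy as np

SIGMA = 117_300  # illustrative total number of update steps, not the paper's exact count

# Scaling coefficients reported in Appendix C.1.
a1, a2, a3, a4 = 8.271e-6, 3.793e-5, 1.125e-5, 1.394e-5

t = np.arange(1, SIGMA + 1, dtype=float)

schedules = {
    "linear":      a1 * t,                 # u(t) = a1 * t
    "exponential": 1.0 - np.exp(-a2 * t),  # u(t) = 1 - e^{-a2 t}
    "sine":        np.sin(a3 * t),         # u(t) = sin(a3 t)
    "logarithmic": np.log(a4 * t),         # u(t) = ln(a4 t)
}

# ln(a4 t) is negative for small t; clipping at 0 is a hypothetical handling,
# not stated in the paper.
schedules["logarithmic"] = np.clip(schedules["logarithmic"], 0.0, None)
```

With these coefficients, all four sequences increase monotonically and end below 1, so each remains a valid momentum coefficient schedule.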
The high-pass (gain) momentum systems preserve or even amplify rapidly fluctuating gradient components, leading to sharp oscillations in the gradient-norm and momentum-norm curves across iterations.

C.3 ADDITIONAL EXPERIMENTS OF VGG16 ON CIFAR-10

In this subsection, we provide experiments training VGG16 on CIFAR-10. The experimental settings follow Section 3 and Appendix D. From the test accuracy in Table 6 and Table 7, we observe that the test performance and norm comparisons of the different momentum methods when training VGG16 on CIFAR-10 are similar to those when training ResNet50 on CIFAR-100. This similarity implies that the empirical findings in Section 3 apply to a variety of CNNs.

Table 6: Comparison of Top-1 Accuracy (%) among different momentum coefficient methods in orthodox momentum systems using VGG16 on CIFAR-10.

             Increasing Factor (µ)                Fixed Value (ut)                     Decreasing Factor (ν)
Parameters   1k          10k         100k        0.3         0.6         0.9          100         1k          10k
Low-Pass     93.80±0.05  93.78±0.12  93.79±0.09  93.68±0.18  93.64±0.08  93.71±0.07   92.33±0.04  90.89±0.11  90.56±0.19
High-Pass    90.02±0.05  92.64±0.09  93.41±0.01  93.52±0.16  92.71±0.07  90.32±0.07   93.86±0.09  93.73±0.08  93.38±0.09

[Figure 6: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for EMA-SGDM with high-pass momentum defined in Equation 9. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for EMA-SGDM with high-pass momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.]

Table 7: Comparison of Top-1 Accuracy (%) among different momentum coefficient methods in unorthodox momentum systems using VGG16 on CIFAR-10.

                 Increasing Factor (µ)                Fixed Value (ut)                     Decreasing Factor (ν)
Parameters       1k          10k         100k        0.3         0.6         0.9          100         1k          10k
Low-Pass Gain    84.01±0.13  94.19±0.07  93.85±0.07  93.86±0.11  93.98±0.09  94.08±0.07   92.00±0.05  92.27±0.12  92.97±0.23
High-Pass Gain   93.34±0.03  93.56±0.06  93.79±0.13  93.71±0.11  93.46±0.06  93.33±0.02   93.79±0.07  93.33±0.12  93.05±0.08

C.4 THE EARLY STAGES OF TRAINING

This subsection focuses on how the momentum coefficients affect test performance in the very early training stages. We plot the test accuracy curves for the first 10 epochs of the different momentum systems in Section 3 and study their early behaviors. Figure 8 shows the early test accuracy curves of the different momentum coefficient methods.
For orthodox momentum systems, preserving the original gradient (i.e., the all-pass momentum system, the low-pass momentum system with an increasing ut, and the high-pass momentum system with an increasing ut) or moderately attenuating high-frequency gradient components (i.e., the static low-pass momentum system with ut = 0.9) results in better initial performance. In contrast, greatly attenuating high-frequency gradient components (i.e., the low-pass momentum system with a decreasing ut) or attenuating low-frequency components (i.e., the static high-pass momentum system and the high-pass momentum system with a decreasing ut) leads to poor test performance at the beginning. On the other hand, for unorthodox momentum systems, preserving the original gradient (i.e., the all-pass momentum system, the low-pass gain momentum system with an increasing ut, and the high-pass gain momentum system with an increasing ut) achieves better early performance, while greatly amplifying high-frequency gradient components (i.e., the static high-pass gain momentum system and the high-pass gain momentum system with a decreasing ut) leads to poor initial accuracy.
[Figure 7: (Up) Analysis of the (dynamic) magnitude responses in the early and late training stages for Standard-SGDM with high-pass gain momentum defined in Equation 10. The solid lines denote the magnitude responses in the early stages, and the dashed lines denote the magnitude responses in the late stages. (Down) The comparison between the gradient norms and momentum norms for Standard-SGDM with high-pass gain momentum. Left column: increasing sequence. Middle column: fixed sequence. Right column: decreasing sequence.]
[Figure 8: The first 10 epochs of the test accuracy curves with different momentum coefficient methods: (a) orthodox momentum systems; (b) unorthodox momentum systems. We choose 10^4 for both the increasing and decreasing factors (µ and ν) in dynamic momentum systems and ut = 0.9 for the static momentum coefficient.]

These observations validate that preserving the original gradient in the early stages enhances test performance, which matches the findings in Section 3. Additionally, our proposed FSGDM retains the all-pass characteristic and exhibits the same quick-start property in the test accuracy curves.

[Figure 9: The magnitude response curves of Stages 1, 150, and 300 in different momentum systems: (a) LP2HP; (b) HP2LP; (c) LPG2HPG; (d) HPG2LPG.]

C.5 COMPARISON WITH SPECIAL MOMENTUM SYSTEMS

In this subsection, we investigate the test performance of the following four types of momentum systems: 1) the low-pass to high-pass momentum system (LP2HP); 2) the high-pass to low-pass momentum system (HP2LP); 3) the low-pass gain to high-pass gain momentum system (LPG2HPG); 4) the high-pass gain to low-pass gain momentum system (HPG2LPG). Their dynamic magnitude responses are shown in Figure 9.
Note that the maximum values |H(ω)| of these four systems are the same as the default setting in FSGDM. We run each experiment under 3 different random seeds (0-2). Table 8 displays the test accuracy results of the four types of momentum systems and FSGDM. Our proposed FSGDM outperforms all four special momentum systems. Specifically, the test accuracy of the momentum systems shifting from high-pass to low-pass is better than that shifting from low-pass to high-pass. This indicates that, compared to the low-frequency gradient components, high-frequency components are more undesired in the late training stages, which supports the finding in Section 3.

Table 8: Comparison of Top-1 Accuracy (%) among the low-pass to high-pass, high-pass to low-pass, low-pass gain to high-pass gain, high-pass gain to low-pass gain momentum systems and FSGDM. FSGDM: 81.44 ± 0.06; LP2HP: 74.77 ± 0.21; HP2LP: 77.00 ± 0.13; LPG2HPG: 72.60 ± 0.58; HPG2LPG: 78.91 ± 0.25.

C.6 TRAINING WITH EXTREME MOMENTUM COEFFICIENTS
[Figure 10 plots omitted: magnitude responses |H(ω)| for (a) extreme u_t in the EMA setting (u_t from 0.9 to 0.999, v_t = 1 - u_t), (b) extreme u_t with v_t = 1, and (c) extreme v_t (1 to 4) with u_t = 0.9.] Figure 10: The magnitude responses of different u_t and v_t with extreme value ranges. (a): EMA-SGDM; (b), (c): Standard-SGDM.

Why do researchers usually choose u_t = 0.9 or v_t = 1 instead of larger values?
From the frequency domain perspective, we discover that: 1) when u_t is extremely close to 1 in EMA-SGDM, the momentum system behaves like a super narrow low-pass filter, with an extreme reduction in most of the high-frequency gradient components; 2) when u_t is extremely close to 1 in Standard-SGDM, the momentum system behaves like a super narrow low-pass gain filter, with a reduction in high-frequency gradient components and high amplification in a narrow band of low-frequency gradient components; 3) when v_t is larger than 1 in Standard-SGDM, the attenuation of high-frequency gradient components is reduced. We speculate that all these poor filtering characteristics of the momentum systems will lead to bad test performance. Figure 10 displays the magnitude response of these three situations. As shown in Figure 11, the test performance results validate our previous speculations and support our frequency domain analysis framework.

[Figure 11 plots omitted: test accuracy curves over 300 epochs. Final accuracies: (a) EMA-SGDM: u_t=0.9, v_t=0.1: 76.63% ± 0.06; u_t=0.93, v_t=0.07: 75.91% ± 0.34; u_t=0.96, v_t=0.04: 75.63% ± 0.30; u_t=0.99, v_t=0.01: 73.09% ± 0.41; u_t=0.999, v_t=0.001: 63.87% ± 0.02. (b) Standard-SGDM, v_t=1: u_t=0.9: 79.70% ± 0.25; u_t=0.93: 79.37% ± 0.37; u_t=0.96: 72.84% ± 1.82; u_t=0.99: 28.27% ± 1.75. (c) Standard-SGDM, u_t=0.9: v_t=1: 79.70% ± 0.25; v_t=2: 78.17% ± 0.51; v_t=3: 75.33% ± 1.45; v_t=4: 72.08% ± 1.45.] Figure 11: The test accuracy curves of different u_t and v_t in extreme value ranges. (a): EMA-SGDM; (b), (c): Standard-SGDM.
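As a sanity check on the filter shapes described above: the first-order momentum recursion m_t = u_t·m_{t-1} + v_t·g_t has transfer function H(z) = v/(1 - u·z^-1), whose magnitude response has the standard one-pole closed form |H(ω)| = v/sqrt(1 + u² - 2u·cos ω). The following helper script is our own illustration (not from the paper's codebase):

```python
import math

def magnitude_response(u, v, omega):
    """|H(omega)| of the one-pole momentum filter m_t = u*m_{t-1} + v*g_t,
    whose transfer function is H(z) = v / (1 - u * z^-1)."""
    return v / math.sqrt(1.0 + u * u - 2.0 * u * math.cos(omega))

# EMA-SGDM (u_t = 0.9, v_t = 0.1): unit gain at omega = 0, strong
# attenuation at high frequencies -> a low-pass filter.
ema_dc = magnitude_response(0.9, 0.1, 0.0)
ema_high = magnitude_response(0.9, 0.1, math.pi)

# Standard-SGDM (u_t = 0.9, v_t = 1): gain 1/(1 - u) = 10 at omega = 0
# -> a low-pass *gain* filter.
std_dc = magnitude_response(0.9, 1.0, 0.0)
std_high = magnitude_response(0.9, 1.0, math.pi)

print(ema_dc, ema_high, std_dc, std_high)  # 1.0, ~0.053, 10.0, ~0.526
```

With u_t pushed toward 1 the low-frequency peak narrows (and, for v_t = 1, grows as 1/(1-u)), matching the "super narrow" low-pass and low-pass gain behavior discussed above.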
C.7 ADDITIONAL EXPLORATION OF OPTIMAL SETTINGS FOR NLP TASKS
In this subsection, we provide experiments that explore the optimal parameter selection of FSGDM for the IWSLT14 translation task by training LSTM-W and Transformer-tiny. The experimental settings follow Section 5 and Appendix D.

[Figure 12 heatmaps omitted: BLEU scores over scaling factor c (0.01-1) and momentum coefficient v (0.5-3.0) for IWSLT14-LSTM-W (BLEU colorbar roughly 20.000-27.255) and IWSLT14-Transformer-tiny (BLEU colorbar roughly 12.000-19.597).] Figure 12: The BLEU scores of training LSTM-W and Transformer-tiny on the IWSLT14 German-English translation task. The results show that the optimal parameter selections across these two training settings exhibit a high similarity. The black points denote the parameter selections with better test performance. The optimal zone of the parameter selection is circled in blue.

The results in Figure 12 indicate that similar optimal zones can be observed on the NLP task. When the momentum coefficient v is fixed, the BLEU score shows an initial increase followed by a decline as the scaling factor c increases, which is highly consistent with the results in Section 4.2. In addition, we find that the empirical insights discussed in Section 3.3 are also applicable to various deep learning models beyond CNNs, as well as to NLP tasks.

C.8 OPTIMAL ZONE OF FSGDM
In this subsection, we go deeper into the optimal zone. We suspect that the similarity of the dynamic magnitude responses may lead to close test set performance. The dynamic magnitude responses of the black points with different parameters in the optimal zone (Figure 4) are shown in Figure 13. We train ResNet50 on CIFAR-100 and visualize the training losses and the test accuracy curves of different points in the optimal zone. The results are shown in Figure 14.
[Figure 13 plots omitted: dynamic magnitude responses |H(ω)| at Stages 1, 50, 100, 150, 200, 250, 300 for (a) c=0.016, v=0.5; (b) c=0.033, v=1.0; (c) c=0.051, v=1.5; (d) c=0.069, v=2.0; (e) c=0.088, v=2.5; (f) c=0.107, v=3.0.] Figure 13: The dynamic magnitude responses of the black points in the optimal zone.

[Figure 14 plots omitted: training losses and test accuracy over 300 epochs. Final test accuracies: c=0.107, v=3.0: 81.20%; c=0.088, v=2.5: 80.98%; c=0.069, v=2.0: 81.21%; c=0.051, v=1.5: 81.70%; c=0.033, v=1.0: 81.54%; c=0.016, v=0.5: 81.35%.] Figure 14: The training losses and test accuracy of different parameter settings in the optimal zone.

From the training loss and test accuracy curves, we find that the optimization processes of the different black points in the optimal zone resemble each other. According to the existing parameter settings of the black points, one can find that the mathematical relationship between c and v in training ResNet50 on CIFAR-100 is approximately $30.992/v \approx 1 + 1/c$ (see footnote 4).
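The approximate relationship between c and v stated above (read as 30.992/v ≈ 1 + 1/c) can be checked against the six (c, v) coordinates of the black points listed for Figure 13. A small sanity-check script of our own (not from the paper's codebase):

```python
# (c, v) coordinates of the six black points in the optimal zone (Figure 13).
points = [(0.016, 0.5), (0.033, 1.0), (0.051, 1.5),
          (0.069, 2.0), (0.088, 2.5), (0.107, 3.0)]

for c, v in points:
    lhs = 30.992 / v
    rhs = 1.0 + 1.0 / c
    # Every black point satisfies the relation to within a few percent.
    assert abs(lhs - rhs) / rhs < 0.05, (c, v, lhs, rhs)
```

The worst relative deviation (at c = 0.016, v = 0.5) is about 2.4%; the other five points agree to within roughly 1%.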
C.9 ABLATION STUDY ON DIFFERENT BATCH SIZES
This subsection provides the ResNet50 training experiments on CIFAR-100 with different batch size settings. We compare the Top-1 accuracy on the test set using our FSGDM with c = 0.033, v = 1, Standard-SGDM with u_t = 0.9, v_t = 1, and EMA-SGDM with u_t = 0.9, as shown in Table 9. The test results show that our FSGDM consistently outperforms popular conventional SGD-based momentum optimizers.

Footnote 4: This relationship can be better approximated and generalized with continued experimentation across diverse tasks.

Table 9: Comparison of Top-1 Accuracy (%) among FSGDM, Standard-SGDM, and EMA-SGDM with different batch size settings.
Batch size 64: EMA-SGDM 79.42 ± 0.11; Standard-SGDM 79.55 ± 0.13; FSGDM 80.92 ± 0.13.
Batch size 128: EMA-SGDM 76.84 ± 0.06; Standard-SGDM 79.71 ± 0.25; FSGDM 81.44 ± 0.06.
Batch size 256: EMA-SGDM 69.03 ± 0.39; Standard-SGDM 78.96 ± 0.33; FSGDM 80.34 ± 0.01.

D EXPERIMENTAL SETTINGS
D.1 TRAINING SETTINGS FOR VISION CLASSIFICATION TASKS
We use custom training code based on the PyTorch tutorial code for all our visual classification experiments (including the experiments in Section 3, Section 4.2, and Section 5). We choose CosineAnnealingLR (Loshchilov & Hutter, 2016) as our training scheduler. Additionally, we set the learning rate to 1×10^-1 for all experiments, while the weight decay is set to 5×10^-4 for experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, and 1×10^-1 for ImageNet. All models we used simply follow their papers' original architectures and adopt the weight initialization introduced by He et al. (2015). Additionally, we train 300 epochs for experiments on CIFAR-10 and CIFAR-100 and 100 epochs for Tiny-ImageNet and ImageNet. We use a batch size of 128 for experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, and 256 for ImageNet. All experiments are conducted on RTX 4090 or A100 GPUs.

Data Augmentation. For experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, we adopt PyTorch's RandomCrop, followed by random horizontal flips.
Specifically, the random crop size is set to 32x32 for CIFAR-10 and CIFAR-100 and 64x64 for Tiny-ImageNet. For experiments on ImageNet, we adopt PyTorch's RandomResizedCrop, cropping to 224x224, followed by random horizontal flips. Test images use a fixed resize to 256x256 followed by a center crop to 224x224. Lastly, data normalization is applied to the input images.

D.2 TRAINING SETTINGS FOR NATURAL LANGUAGE PROCESSING TASKS
All models used in our experiments are directly adopted from the FairSeq framework (https://github.com/facebookresearch/fairseq). We retain the original architecture of each model and train all models for 100 epochs using a single NVIDIA RTX 4090 GPU. We set the maximum batch size to 4,096 tokens and apply gradient clipping with a threshold of 0.1. The baseline learning rate is set to 0.25, and for the optimizer, we use a weight decay of 0.0001.

D.3 TRAINING SETTINGS FOR REINFORCEMENT LEARNING TASKS
For the experiments in RL tasks, we do not make any changes except for replacing the original Adam optimizer with Standard-SGDM, EMA-SGDM, and our proposed FSGDM. To ensure fairness, we use Tianshou's (Weng et al., 2022) default hyperparameters for PPO training. However, since SGD-based optimizers are highly sensitive to the learning rate, we searched for suitable learning rates across the three games, ultimately setting 10^-2, 10^-2, and 10^-3 for Walker2d-v4, HalfCheetah-v4, and Ant-v4, respectively.

E CHALLENGES IN THE FREQUENCY DOMAIN ANALYSIS FOR ADAPTIVE OPTIMIZERS

Algorithm 2: RMSprop
Input: β_2, ε, v_0
for each t = 1, 2, ... do
  g_t = ∇L_t(x_{t-1}, ζ_{t-1})
  v_t = β_2 v_{t-1} + (1 - β_2) g_t^2
  x_t = x_{t-1} - α_t g_t / (√v_t + ε)
end

Algorithm 3: Adam
Input: β_1, β_2, ε, m_0, v_0
for each t = 1, 2, ...
do
  g_t = ∇L_t(x_{t-1}, ζ_{t-1})
  m_t = β_1 m_{t-1} + (1 - β_1) g_t
  v_t = β_2 v_{t-1} + (1 - β_2) g_t^2
  m̂_t = m_t / (1 - β_1^t), v̂_t = v_t / (1 - β_2^t)
  x_t = x_{t-1} - α_t m̂_t / (√v̂_t + ε)
end

In this section, we discuss the potential challenges in extending the frequency domain analysis framework to adaptive optimizers like RMSprop and Adam, as shown in Algorithms 2 and 3. The first-moment estimate of Adam is in the form of EMA and thus acts as a low-pass filter. However, the second-moment estimate presents additional obstacles for frequency domain analysis in the following ways: 1. The second-moment estimates of Adam and RMSprop involve the squared gradient term g_t^2, resulting in nonlinearity that complicates the direct application of the Z-transform. 2. Adam introduces both the first- and second-moment estimates (m_t and v_t), and adopts m̂_t / (√v̂_t + ε) as the update step. This intricate interaction between m_t and v_t also makes the analysis more challenging. At this stage, we believe that our argument regarding the three insights discussed in Section 3.3 is also applicable to other optimizers. However, it remains unclear how the different frequency gradient components in the model parameter updates are processed by the Adam optimizer. We anticipate that resolving these issues will provide deeper insight.
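The two update rules above can be transcribed directly into code. A minimal scalar sketch of our own (following Algorithms 2 and 3; not library code):

```python
import math

def rmsprop_step(x, g, v, alpha=0.01, beta2=0.99, eps=1e-8):
    """One scalar RMSprop update (Algorithm 2)."""
    v = beta2 * v + (1.0 - beta2) * g * g
    x = x - alpha * g / (math.sqrt(v) + eps)
    return x, v

def adam_step(x, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One scalar Adam update with bias correction (Algorithm 3)."""
    m = beta1 * m + (1.0 - beta1) * g        # EMA first moment (low-pass)
    v = beta2 * v + (1.0 - beta2) * g * g    # squared-gradient term: nonlinear
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    x = x - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return x, m, v

# Three Adam steps on L(x) = x^2 (gradient 2x), starting from x = 1:
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    x, m, v = adam_step(x, 2.0 * x, m, v, t)
```

A property visible here is why the Z-transform applies cleanly only to the first moment: the m_t recursion is linear in g_t, while v_t depends on g_t^2, and the final step couples the two through a square root and a division.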
Given that the paper does not mention any complex modifications to the training pipeline or significantly larger batch sizes, it can be estimated that a single modern GPU could feasibly train these models within the 8-hour window. | yes | Yes | CV | On the Performance Analysis of Momentum Method: A Frequency Domain Perspective | 2024-11-29 0:00:00 | https://github.com/yinleung/FSGDM | 1 | inside the repo's examples CIFAR100 folder | 300 epochs * 2.5 min = 12.5 hours | https://drive.google.com/file/d/1grWsTDyc3MOwfbwob2EbMbL7GPmjsKfI/view?usp=sharing | Yes | Run by executing examples/cifar100/main.py |
ogbl-ddi | GCN (node embedding) | [] | Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods | 2024-11-22T00:00:00 | https://arxiv.org/abs/2411.14711v1 | [
"https://github.com/astroming/GNNHE"
] | {'Test Hits@20': '0.9549 ± 0.0073', 'Validation Hits@20': '0.9098 ± 0.0294', 'Number of params': '5125250', 'Ext. data': 'No'} | [
"Ext. data",
"Test Hits@20",
"Validation Hits@20",
"Number of params"
] | Given the following paper and codebase:
Paper: Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods
Codebase: https://github.com/astroming/GNNHE
Improve the GCN (node embedding) model on the ogbl-ddi dataset. The result
should improve on the following metrics: {'Test Hits@20': '0.9549 ± 0.0073', 'Validation Hits@20': '0.9098 ± 0.0294', 'Number of params': '5125250', 'Ext. data': 'No'}. You must use only the codebase provided.
Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods. Shuming Liang*1, Yu Ding2, Zhidong Li1, Bin Liang1, Siqi Zhang3, Yang Wang1, Fang Chen1. 1University of Technology Sydney, firstname.lastname@uts.edu.au; 2University of Wollongong, dyu@uow.edu.au; 3Zhejiang University, siqizhang@zju.edu.cn.

Abstract. This paper explores the ability of Graph Neural Networks (GNNs) to learn various forms of information for link prediction, alongside a brief review of existing link prediction methods. Our analysis reveals that GNNs cannot effectively learn structural information related to the number of common neighbors between two nodes, primarily due to the nature of the set-based pooling of the neighborhood aggregation scheme. Also, our extensive experiments indicate that trainable node embeddings can improve the performance of GNN-based link prediction models. Importantly, we observe that the denser the graph, the greater the improvement. We attribute this to the characteristics of node embeddings, where the link state of each link sample could be encoded into the embeddings of the nodes that are involved in the neighborhood aggregation of the two nodes in that link sample. In denser graphs, every node could have more opportunities to participate in the neighborhood aggregation of other nodes and encode the states of more link samples into its embedding, thus learning better node embeddings for link prediction. Lastly, we demonstrate that the insights gained from our research carry important implications for identifying the limitations of existing link prediction methods, which could guide the future development of more robust algorithms.

1 Introduction
Graph Neural Networks (GNNs) have demonstrated powerful expressiveness in graph representation learning [80, 27]. However, what structural information can be learned via GNNs remains an open question [67, 9, 15, 5, 42, 79].
Particularly, scant attention has been directed towards this question in terms of structural information specific to two nodes. A prominent task relying on such information is link prediction. Although a number of GNN-based link prediction models have been introduced [74, 56, 78, 18, 62, 41, 36, 23], many of them lack thorough investigations into whether their models effectively learn pair-specific structural information. For example, SEAL [74] and its successors [37, 60, 70, 59] are a family of link prediction methods that attempt to use GNNs to learn pair-specific structural information represented by traditional link heuristics such as Common Neighbors, the Katz index [26], etc. SEAL [74] has proven that most link heuristics between two nodes in a graph can be computed approximately within an enclosing subgraph specifically constructed for those two nodes. In essence, SEAL-type methods perform link prediction by using GNNs to classify such enclosing subgraphs, with the expectation that GNNs could learn structural information equivalent to the link heuristics. However, a critical evaluation of this expectation is lacking in the literature. We demonstrate that this expectation does not hold completely. In this paper, we mainly study the link prediction capability of GNNs, with a focus on three aspects. 1) We explore whether GNNs can effectively learn the pair-specific structural information related to the number of common neighbors for link prediction. 2) We present our experimental observation: incorporating trainable node embeddings can improve the performance of GNN-based link prediction models, and the denser the graph, the greater the improvement. This observation, not extensively revealed in the prior literature, has significant practical implications for selecting appropriate methods based on graph density in real-world link prediction problems.
3) We leverage insights derived from our research to provide a limitation analysis of existing link prediction methods, thereby contributing valuable perspectives for their potential improvements. First, the majority of GNNs follow a neighborhood aggregation scheme, where each node's representation is recursively updated by aggregating the representations of that node and its neighbors [12, 36, 27]. The learned representations are node-wise. It has been recognized that every node's representation can hardly capture information related to the number of its neighbors. This is due to the nature of the set-based pooling of the aggregation scheme, which inherently ignores the size of the neighborhood set of each node [67, 76]. A general strategy for applying node-wise representations learned by GNNs to downstream multiple-node tasks (e.g., link prediction, graph classification, etc.) is to combine the representations of the nodes involved in these tasks. For link prediction, we find that the combination of two nodes' representations essentially lacks the ability to capture information related to the number of common neighbors. This is mainly because node-wise representations learned by GNNs inherently lack information about the number of neighbors of each node, and most operations for combining two nodes' representations (e.g., concatenation, Hadamard product, etc.) also do not contain any behavior of counting how many common neighbors exist between two nodes. To empirically verify the above, we examine the link prediction performance of an approach that incorporates traditional link heuristics (e.g., Common Neighbors) into the GNN. The approach yields results either superior or comparable to those obtained by using only GNNs, experimentally supporting our analysis.
Moreover, in our experiments, we find that trainable node embeddings (different from pre-trained node embeddings, we refer to trainable node embeddings as those embeddings that can be optimized during model training) can enhance the performance of GNN-based link prediction models, and the denser the graph, the stronger the enhancement. In particular, by only utilizing node embeddings in GCN [29] or GAT [61], we are able to surpass many link-prediction-specific methods on two dense graphs, i.e., ogbl-ddi and ogbl-ppa [21]. Our explanation is as follows. Compared to the model weights of a GNN, which are shared across all nodes [29, 12, 27], each trainable node embedding is unique to its respective node. This characteristic of node embeddings can benefit the model. When the training is supervised by positive and negative link samples (in a negative sample, the two nodes are not linked), the link state of the two nodes in every link sample could be encoded into the node embeddings of those two nodes and their neighboring nodes by the neighborhood aggregation algorithm of the GNN. This would enable each node embedding to remember the relationships of that node to other nodes, allowing the model to know better which two nodes are more likely to be linked or not. Moreover, in the neighborhood aggregation of the GNN, denser graphs would allow each node to see more of the other nodes, leading to better learning of node embeddings for link prediction. The insights gained in this study can help identify and interpret the limitations of existing link prediction methods, potentially directing the search for more robust algorithms. To demonstrate this, we present two case studies: first, we show that SEAL-type methods [74, 60, 70, 59] could not effectively learn information about the number of common neighbors. Second, we show that NBFNet [82] lacks the algorithmic capability to train powerful node embeddings for link prediction.
Additionally, we compare the empirical performance of various link prediction methods on OGB datasets. The results can be explained with our insights.

2 Notations and Problem Definition
Without loss of generality, we demonstrate our work on homogeneous graphs. Let $G = (V, E, X)$ denote a graph $G$ with $N$ nodes, where $V$ is the set of nodes, $|V| = N$, $E$ is the set of edges, and $X \in \mathbb{R}^{N \times f}$ is the feature matrix of the nodes. The $i$-th row of $X$ (i.e., $x_i \in \mathbb{R}^f$) is the feature vector of node $i$. The adjacency matrix is $A \in \mathbb{R}^{N \times N}$, in which the $(i,j)$-th entry (i.e., $a_{i,j}$) is 1 if an edge exists from node $i$ to $j$ and 0 otherwise. The degree of node $i$ is $\deg(i) = \sum_{j \in V} a_{i,j}$. The degree of the graph $G$ is the average degree of all nodes. The set of nodes connected directly to a node $v \in V$ is the first-order or 1-hop neighborhood set of $v$ and is denoted by $\Gamma_v$. Link prediction is a node-pair-specific problem, aiming to estimate the likelihood $\hat y_{v,u}$ of the existence of an unknown edge $E_{v,u} \notin E$ between two nodes $v, u \in V$. Herein we refer to $v, u$ as the two target nodes in the candidate link $E_{v,u}$.

3 Statistical Link Heuristics
In the long history of link prediction research, especially prior to the emergence of neural networks, a variety of statistical link prediction methods have been proposed [40, 58, 32]. These statistical methods, known as heuristic methods or link heuristics, often rely on intuitive rules or empirical observations, and often extract structural information specific to the target node pair. Therefore, we can make abstract pair-specific structural information concrete using tangible link heuristics. In this work, based on whether or not a link heuristic captures information related to the Number of Common Neighbors (NCN) between two target nodes, we categorize pair-specific link heuristics (structural information) into two types: NCN-dependent and non-NCN-dependent.
With this categorization, we offer a concise review of link heuristics in the literature, which reveals that the majority of these heuristics are NCN-dependent.

3.1 NCN-dependent Link Heuristics
Table 1: NCN-dependent link heuristics between nodes $v, u$.
- Common Neighbors: $\mathrm{CN}_{v,u} = |\Gamma_v \cap \Gamma_u|$
- Jaccard [24]: $\mathrm{JA}_{v,u} = \frac{|\Gamma_v \cap \Gamma_u|}{|\Gamma_v \cup \Gamma_u|}$
- Adamic-Adar [2]: $\mathrm{AA}_{v,u} = \sum_{z \in \Gamma_v \cap \Gamma_u} \frac{1}{\log|\Gamma_z|}$
- Resource Allocation [81]: $\mathrm{RA}_{v,u} = \sum_{z \in \Gamma_v \cap \Gamma_u} \frac{1}{|\Gamma_z|}$
- Sorensen index [57]: $\frac{2|\Gamma_v \cap \Gamma_u|}{|\Gamma_v| + |\Gamma_u|}$
- Salton index [51]: $\frac{|\Gamma_v \cap \Gamma_u|}{\sqrt{|\Gamma_v| \cdot |\Gamma_u|}}$
- Hub Promoted index [49]: $\frac{|\Gamma_v \cap \Gamma_u|}{\min(|\Gamma_v|, |\Gamma_u|)}$
- Hub Depressed index [49]: $\frac{|\Gamma_v \cap \Gamma_u|}{\max(|\Gamma_v|, |\Gamma_u|)}$

Table 1 lists several commonly-used NCN-dependent link heuristics. As shown, Common Neighbors (CN) is defined as the size of the intersection of the first-order neighborhood sets of two nodes. The Jaccard (JA) coefficient [24] normalizes CN by the size of the union of the two nodes' neighborhood sets. Adamic-Adar (AA) [2] and Resource Allocation (RA) [81] suppress the contribution of nodes by penalizing each node with its degree. The Sorensen index [57], Salton index [51], Hub Promoted index [49], and Hub Depressed index [49] incorporate the degrees of the two target nodes with CN. We can see from Table 1 that these heuristics are highly dependent on the number of common neighbors between the two nodes $v, u$ (i.e., $|\Gamma_v \cap \Gamma_u|$). In addition to the above, many other link heuristics are NCN-dependent. Cannistraci et al. [7] suggest that the likelihood of two nodes forming a link increases if their common neighbors are members of a strongly inner-linked cohort, termed local-community-links. They introduce modified versions of CN, JA, AA, and RA, denoted as CAR-based.
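The Table 1 formulas can be computed directly from neighbor sets. A small self-contained sketch of our own (`ncn_heuristics` is a hypothetical helper, not from the paper's codebase; the toy graph is ours):

```python
import math

def ncn_heuristics(neighbors, v, u):
    """Compute the Table 1 heuristics from a dict mapping each node to its
    set of first-order neighbors."""
    Gv, Gu = neighbors[v], neighbors[u]
    common = Gv & Gu
    cn = len(common)
    return {
        "CN": cn,
        "JA": cn / len(Gv | Gu),
        "AA": sum(1.0 / math.log(len(neighbors[z])) for z in common),
        "RA": sum(1.0 / len(neighbors[z]) for z in common),
        "Sorensen": 2.0 * cn / (len(Gv) + len(Gu)),
        "Salton": cn / math.sqrt(len(Gv) * len(Gu)),
        "HubPromoted": cn / min(len(Gv), len(Gu)),
        "HubDepressed": cn / max(len(Gv), len(Gu)),
    }

# Toy undirected graph: nodes 1 and 4 share the common neighbors 5 and 6.
nbrs = {1: {2, 3, 5, 6}, 2: {1}, 3: {1}, 4: {5, 6}, 5: {1, 4}, 6: {1, 4}}
h = ncn_heuristics(nbrs, 1, 4)  # h["CN"] == 2, h["JA"] == 0.5, ...
```

Note that AA assumes every common neighbor has degree greater than 1 (otherwise log|Γ_z| = 0), which holds here since common neighbors sit on at least the two paths joining v and u.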
Furthermore, some other link heuristics consider the clustering coefficients of the nodes when counting common neighbors, such as the node clustering coefficient $\sum_{z \in \Gamma_v \cap \Gamma_u} C(z)$ [66] and the node-link clustering coefficient $\sum_{z \in \Gamma_v \cap \Gamma_u} \left( \frac{|\Gamma_v \cap \Gamma_z|}{|\Gamma_z| - 1} \times C(z) + \frac{|\Gamma_u \cap \Gamma_z|}{|\Gamma_z| - 1} \times C(z) \right)$ [65], where $C(z) = \frac{2|\{E_{j,k} : j,k \in \Gamma_z, E_{j,k} \in E\}|}{|\Gamma_z| \times (|\Gamma_z| - 1)}$ is the local clustering coefficient of node $z$. An important family of link heuristics is those counting all paths between two target nodes, such as the Katz index [26], the Leicht-Holme-Newman index [34], and Rooted PageRank [74]. These heuristics are indeed NCN-dependent, where first-order and high-order common neighbors are considered. For example, the Katz index ($\mathrm{Katz}_{v,u} = \sum_{l=1}^{\infty} \beta^l |\{\mathrm{path}^{(l)}_{v,u}\}|$) [26] computes a weighted sum of the numbers of all paths between the two target nodes $v, u$, where $|\{\mathrm{path}^{(l)}_{v,u}\}|$ is the number of all paths between nodes $v$ and $u$ with length $l$, and $\beta$ is a damping factor. For example, if $l = 4$, $|\{\mathrm{path}^{(4)}_{v,u}\}|$ can be computed by $|\{\mathrm{path}^{(4)}_{v,u}\}| = \sum_{a \in \Gamma_v, b \in \Gamma_u} \mathrm{CN}_{a,b}$, where $\mathrm{CN}_{a,b}$ is the number of common neighbors between nodes $a, b$.

3.2 non-NCN-dependent Link Heuristics
According to our categorization strategy, there exist a limited number of link heuristics that are non-NCN-dependent, including Shortest Path Distance (SPD), Preferential Attachment ($\mathrm{PA}_{v,u} = |\Gamma_v| \times |\Gamma_u|$) [4], and SimRank [25]. Notably, SPD and PA do not extract information about the number of common neighbors. SimRank, detailed in Algorithm 1, recursively refines similarity scores between every two nodes by considering the neighboring nodes of the two nodes, where the number of common neighbors is essentially ignored. Specifically, the similarity score $s^{(m)}_{i,j}$ between nodes $i, j$ in the $m$-th iteration is obtained by averaging the similarity scores between all neighbors of $i$ and $j$ from the $(m-1)$-th iteration, where information about how many common neighbors exist between $i, j$ can hardly be encoded into $s^{(m)}_{i,j}$.
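This recursion can be sketched in a few lines. A minimal implementation of our own (following Algorithm 1; we keep s(i,i) fixed at 1 across iterations, consistent with the initialization, which is an assumption about how line 5 treats i = j):

```python
def simrank(neighbors, C=0.8, K=5):
    """SimRank scores per Algorithm 1: each iteration averages the previous
    iteration's scores over all neighbor pairs; the diagonal stays 1."""
    nodes = sorted(neighbors)
    s = {(i, j): 1.0 if i == j else 0.0 for i in nodes for j in nodes}
    for _ in range(K):
        s_new = {}
        for i in nodes:
            for j in nodes:
                if i == j:
                    s_new[(i, j)] = 1.0
                elif neighbors[i] and neighbors[j]:
                    total = sum(s[(a, b)]
                                for a in neighbors[i] for b in neighbors[j])
                    s_new[(i, j)] = C * total / (len(neighbors[i]) * len(neighbors[j]))
                else:
                    s_new[(i, j)] = 0.0
        s = s_new
    return s

# A triangle graph: every pair of distinct nodes converges to the same score.
sim = simrank({1: {2, 3}, 2: {1, 3}, 3: {1, 2}})
```

Because each update is a normalized average over neighbor pairs, the score never records how many of those pairs coincide, which is the NCN-blindness described above.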
Algorithm 1 SimRank [25]
1: Input: Graph $G = (V, E)$ ($|V| = N$), decay factor $C$ ($0 < C < 1$), iterations $K$
2: Output: Similarity $S = (s_{i,j}) \in \mathbb{R}^{N \times N}$
3: Initialize: $s^{(0)}_{i,j} = 1$ if $i = j$, otherwise $0$
4: for $m = 1$ to $K$ do
5:   $s^{(m)}_{i,j} = \frac{C}{|\Gamma_i||\Gamma_j|} \sum_{b=1}^{|\Gamma_j|} \sum_{a=1}^{|\Gamma_i|} s^{(m-1)}_{\Gamma_i(a), \Gamma_j(b)}$, where $\Gamma_i(a)$ is the $a$-th node in $\Gamma_i$
6: end for

4 Aggregation-based GNNs
Most GNNs follow a neighborhood information aggregation algorithm, where the representation of each node in a graph is iteratively updated by aggregating the representations of its neighbors and its own [12, 27]. Formally, the representation of a node $i$ updated by the $l$-th layer of a GNN is
$\hat h^{(l)}_i = \mathrm{AGG}^{(l)}\left(\left\{ h^{(l-1)}_j \mid \forall j \in \Gamma_i \cup \{i\} \right\}\right), \quad h^{(l)}_i = \hat h^{(l)}_i W^{(l)},$ (1)
where $h^{(0)}_i$ is initialized with the feature vector of node $i$, $\mathrm{AGG}^{(l)}(\cdot)$ is instantiated as a set-based pooling operation such as MAX, MEAN [19], or attention-based SUM [61, 36], and $W^{(l)}$ is a weight matrix for the $l$-th GNN layer, which is shared across all nodes and used for representation transformation (i.e., if $\hat h^{(l)}_v \in \mathbb{R}^f$ and $W^{(l)} \in \mathbb{R}^{f \times f'}$, then $h^{(l)}_v \in \mathbb{R}^{f'}$). For simplicity, we omit the residual connections, activation functions, etc. In this paper, we use the term GNNs to refer to such aggregation-based GNNs unless otherwise stated.

[Figure 1 diagram omitted: a six-node example graph with message weights $a_{i,j}$ on the edges.] Figure 1: An illustration of neighborhood information propagation and aggregation in GNNs, where $a_{i,j}$ can be an edge weight or attention weight from node $j$ to $i$.

Fig. 1 illustrates the neighborhood information propagation and aggregation process in GNNs. As shown, $\hat h^{(l)}_i$ in Eq. 1 can be computed by
$\hat h^{(l)}_i = \sum_{j \in \Gamma_i \cup \{i\}} \frac{a^{(l)}_{i,j}}{\sum_{j' \in \Gamma_i \cup \{i\}} a^{(l)}_{i,j'}} h^{(l-1)}_j,$ (2)
where $a^{(l)}_{i,j}$ is the weight for the message (i.e., $h^{(l-1)}_j$) from node $j$ to $i$. For MEAN-pooling in GNNs like GCN [19], it can be $a^{(l)}_{i,j} = 1, \forall E_{i,j} \in E$.
For attention-based GNNs like GAT [61], $a^{(l)}_{i,j}$ is an attention coefficient that is computed dynamically based on $h^{(l-1)}_i$ and $h^{(l-1)}_j$.

Remark 1. The node representations learned by aggregation-based GNNs are node-wise.
Analysis. As shown in Eq. 1, the input, intermediate, and output representations of GNNs are node-wise.

Remark 2. Each node's representation learned by neighborhood aggregation-based GNNs lacks information about the number of neighboring nodes of that node.
Analysis. As shown in Eq. 1, GNNs update the representation $h^{(l)}_i$ by aggregating the representations of node $i$ and its neighbors. In this process, the number of neighbors of node $i$ can hardly be encoded into $h^{(l)}_i$. This is due to the inherent nature of the neighborhood aggregation scheme in GNNs: the $\mathrm{AGG}^{(l)}(\cdot)$ in Eq. 1 is set-based pooling, which is originally designed to handle the irregular sizes of the neighborhood sets of different nodes in a graph. For example, if the aggregation is MEAN pooling, then the set of node-wise representations (i.e., $\{h^{(l-1)}_j \mid \forall j \in \Gamma_i \cup \{i\}\}$ in Eq. 1) will be averaged, and the result could hardly contain information about the size of that set. Note that attention-based pooling also cannot address this inherent issue of the neighborhood aggregation scheme. As shown in Eq. 2, attention-based GNNs [61] essentially replace the original edge weight with an attention weight. Despite this modification, the set of representations is weighted-averaged, and consequently, the resulting representation still lacks information related to the size of the neighborhood set.

Remark 2 shows that the neighborhood aggregation algorithm of GNNs inherently cannot effectively learn information about the number of neighbors of each node. Essentially, we can address this issue by, for example, adding the node degree as a feature to each node. We note that several previous works have pointed this out [67, 76]. We present it as Remark 2 to better support our following analysis.
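Remark 2 can be illustrated with a degenerate toy case: if every neighbor sends an identical message, MEAN pooling (or any normalized, attention-weighted average) produces the same output whether a node has 3 or 10 neighbors, so the neighborhood size is unrecoverable. A small numpy sketch of our own (not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
msg = rng.normal(size=8)  # one node-wise message vector

# MEAN-pool the same message over neighborhoods of size 3 and size 10:
pooled_3 = np.mean(np.stack([msg] * 3), axis=0)
pooled_10 = np.mean(np.stack([msg] * 10), axis=0)
assert np.allclose(pooled_3, pooled_10)  # the set size is invisible

# Plain SUM pooling, by contrast, does scale with the set size:
sum_3 = np.sum(np.stack([msg] * 3), axis=0)
sum_10 = np.sum(np.stack([msg] * 10), axis=0)
assert not np.allclose(sum_3, sum_10)
```

The same cancellation happens for attention weights, since Eq. 2 normalizes them to sum to one over the neighborhood.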
5 Can GNNs Completely Learn NCN-dependent Structural Information?
5.1 Analytical Study
What can we do when applying the node-wise representations learned by GNNs to downstream graph tasks that involve multiple nodes, such as link prediction or graph classification? A general way is to combine the representations of the involved nodes into one representation and pass it into the subsequent model components [67, 63, 36, 27]. For such a combination, we have the following insight:

Remark 3. The combination of two or more nodes' representations learned by GNNs cannot effectively capture NCN-dependent structural information.
Analysis. According to Remark 2, due to the inherent nature of the neighborhood aggregation algorithm, node-wise representations learned by GNNs cannot effectively capture information related to the number of neighbors of each node (i.e., the size of its neighborhood set), much less the number of common neighbors between two nodes. The operation of combining the representations of two or more nodes also cannot effectively extract NCN-dependent structural information. For example, we can combine the representations of two nodes by concatenation or Hadamard product [63, 27], and combine more nodes' representations by MEAN pooling [67], Sort pooling [75], etc. These combination operations on node-wise representations learned by GNNs are unlikely to contain the behavior of counting the common neighbors between two nodes, thereby falling short in extracting NCN-dependent structural information. GNNs might learn little structural information related to the number of common neighbors, because the neighborhood aggregation algorithm of GNNs learns node-wise representations by passing the messages of the neighboring nodes of each node to that node and aggregating them in a set-based manner [19, 61, 76, 36].
Such set-based aggregation inherently washes out the information related to the number of nodes in the set, including the number of common neighbors between two nodes. For instance, as shown in Fig. 1, nodes 1 and 4 have common neighboring nodes 5 and 6. In GNN learning based on Eq. 1, node 1 will receive the messages from nodes 2, 3, 5, 6, and the number of common neighbors (i.e., $\mathrm{CN}_{1,4} = 2$) can hardly be captured in the aggregation of the five representations (i.e., the representations of nodes 1, 2, 3, 5, 6). Note that in this example, attention-based aggregation also cannot effectively learn $\mathrm{CN}_{1,4} = 2$ from the aggregation of the five representations. The reason is the same as in the analysis of Remark 2.

Algorithm 2 Link prediction by integrating statistical heuristics into the GNN
1: Input: graph $G = (V, E, X)$ ($|V| = N$), $X \in \mathbb{R}^{N \times f}$, trainable node embeddings $E \in \mathbb{R}^{N \times d}$, trainable heuristic embeddings, ground truth $y_{v,u}$ for link sample $(v, u)$, GNN layers $L$, epochs $K$
2: Output: link likelihood $\hat{y}_{v,u} \in \mathbb{R}$ for node pair $(v, u)$
3: Initialize: node embeddings $E$, trainable heuristic embeddings, model weights, etc.
4: for $i = 0$ to $K$ do
5:   for $l = 1$ to $L$ do
6:     $\hat{h}^{(l)}_i = \mathrm{AGG}^{(l)}\big(\{h^{(l-1)}_j \mid \forall j \in \Gamma_i \cup \{i\}\}\big)$
7:     $h^{(l)}_i = \hat{h}^{(l)}_i W^{(l)}$
8:   end for
9:   $h_{vu} = \mathrm{COMBINE}\big(h^{(L)}_v, h^{(L)}_u\big)$
10:  $e_{vu} = \mathrm{CONCAT}\big(e^{(\mathrm{CN})}_{vu}, e^{(\mathrm{JA})}_{vu}, \cdots, e^{(\mathrm{RA})}_{vu}\big)$
11:  $\hat{y}_{vu} = \mathrm{PREDICTOR}(\mathrm{CONCAT}(h_{vu}, e_{vu}))$
12:  Calculate $\mathrm{loss}(y_{v,u}, \hat{y}_{v,u})$
13:  Update $E$, trainable heuristic embeddings, model weights, etc.
14: end for
15: Herein $h^{(0)}_i$ is initialized from the feature and node embedding of node $i$ (i.e., $x_i$ and $e_i$). $h_{vu}$ is the link representation for $(v, u)$. $e^{(\mathrm{CN})}_{vu}, e^{(\mathrm{JA})}_{vu}, e^{(\mathrm{RA})}_{vu}$ are trainable heuristic embeddings obtained by encoding CN, JA, RA between nodes $v, u$, respectively. $\mathrm{COMBINE}(\cdot,\cdot)$ can be the Hadamard product, concatenation, etc. CONCAT is the operation of concatenation. $\mathrm{PREDICTOR}(\cdot)$ is a predictor like an MLP.
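For reference, the heuristic values encoded in Line 10 of Algorithm 2 can be computed directly from neighborhood sets. Below is a minimal sketch using the standard CN / Jaccard / Adamic-Adar / Resource Allocation definitions; the toy adjacency (loosely mirroring the Fig. 1 example) is our own and purely illustrative:

```python
import math

# Toy adjacency: node -> set of neighbors. Nodes 1 and 4 share neighbors {5, 6}.
graph = {1: {2, 3, 5, 6}, 2: {1}, 3: {1}, 4: {5, 6}, 5: {1, 4}, 6: {1, 4}}

def cn(v, u):
    """Common Neighbors: |Γ(v) ∩ Γ(u)|."""
    return len(graph[v] & graph[u])

def ja(v, u):
    """Jaccard coefficient: |Γ(v) ∩ Γ(u)| / |Γ(v) ∪ Γ(u)|."""
    return len(graph[v] & graph[u]) / len(graph[v] | graph[u])

def aa(v, u):
    """Adamic-Adar: sum over common neighbors w of 1 / log|Γ(w)|."""
    return sum(1.0 / math.log(len(graph[w])) for w in graph[v] & graph[u])

def ra(v, u):
    """Resource Allocation: sum over common neighbors w of 1 / |Γ(w)|."""
    return sum(1.0 / len(graph[w]) for w in graph[v] & graph[u])

assert cn(1, 4) == 2    # matches CN_{1,4} = 2 in the running example
assert ja(1, 4) == 0.5  # 2 common neighbors out of 4 nodes in the union
assert ra(1, 4) == 1.0  # 1/2 + 1/2: both common neighbors have degree 2
```

In Algorithm 2 these integer (or binned) values are used to index trainable embedding tables rather than being fed to the predictor directly.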
In fact, it is difficult to interpret in a rigorous mathematical format what information the GNN has learned. Nevertheless, we can say that GNNs cannot completely capture NCN-dependent structural information.

5.2 Empirical Study

5.2.1 Experimental Design

If Remark 3 holds, we expect that properly integrating NCN-dependent heuristics into a GNN could improve the link prediction performance. To this end, we design our experiments as detailed in Algorithm 2. As shown, given two nodes $v, u$, link prediction is performed by combining the two nodes' representations from the last GNN layer into a pair-specific link representation $h_{vu}$, concatenating it with heuristic encodings, and lastly passing the concatenation into a predictor like an MLP. During the training stage, the node pair $(v, u)$ can be a positive or negative link sample, where a negative sample can be two distant nodes that are not connected to each other.

In Algorithm 2, $h^{(0)}_i$ can be initialized using the feature vector $x_i \in X$, the node embedding $e_i \in E$, or the concatenation of both $x_i$ and $e_i$. Some may refer to a node embedding as an intermediate representation of a node in GNNs. In this work, we clearly distinguish node embeddings from node representations. We consider a node embedding to be a type of node-wise input feature. The embedding of a node can be viewed as encoding a unique node id into a trainable embedding vector, much like encoding a unique word id into a word embedding in natural language processing [45]. Note that we can encode any feature into a trainable embedding vector (e.g., encoding node degree into an embedding). The main difference between node embeddings and the embeddings of other node features is that each node embedding vector is unique to that node. For example, an embedding of a node degree is not unique to a node (different nodes can have the same degree). We also encode link heuristics into trainable embedding vectors.
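The distinction drawn above can be made concrete: a node embedding table is keyed by node id (one unique row per node), whereas an embedding of a node feature such as degree is keyed by the feature value and is therefore shared. A hypothetical plain-Python sketch (table sizes and degrees are invented for illustration):

```python
import random

random.seed(0)
DIM = 4

def rand_vec():
    """Stand-in for a trainable embedding vector."""
    return [random.gauss(0.0, 1.0) for _ in range(DIM)]

# Node embedding table: one vector per node id, analogous to a word-embedding
# table keyed by word id.
node_emb = {node_id: rand_vec() for node_id in range(6)}

# Degree embedding table: keyed by degree value, shared across nodes.
degree_emb = {deg: rand_vec() for deg in range(1, 5)}
degrees = {0: 2, 1: 2, 2: 3, 3: 1, 4: 3, 5: 1}

# Each node has its own node embedding ...
assert node_emb[0] != node_emb[1]
# ... but nodes 0 and 1 (both of degree 2) share one degree embedding.
assert degree_emb[degrees[0]] is degree_emb[degrees[1]]
```

Only the node embedding rows can store information unique to a single node, which is what Remark 4 later relies on.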
To verify Remark 3, we only need to encode NCN-dependent link heuristics. The methodology of encoding heuristics is as follows. For heuristics that take discrete integer values (e.g., CN), we assign a trainable embedding vector to each integer. For heuristics that take continuous floating-point values (e.g., AA), we partition the value range into small bins and allocate each bin a unique embedding vector. We encode heuristics into embeddings mainly because, if we directly use heuristics as features, we find that the model optimization is challenging: the model is more likely to get stuck in a local optimum. This issue could arise due to the high correlation between the heuristic features and the link samples. Encoding heuristics into trainable embeddings addresses this issue successfully.

5.2.2 Datasets

Table 2: Statistics of OGB link prediction datasets used in our experiments.

Dataset         #Nodes      #Edges       Degree
ogbl-ddi        4,267       1,334,889    500
ogbl-collab     235,868     1,285,465    8
ogbl-ppa        576,289     30,326,273   73
ogbl-citation2  2,927,963   30,561,187   21

All of our experiments are conducted on four OGB link prediction datasets: ogbl-collab, ogbl-citation2, ogbl-ppa, and ogbl-ddi [21]. The statistics of the datasets are summarized in Table 2. All these datasets are constructed from real-world data, cover diverse realistic applications, and span different scales (4K - 3M nodes). OGB provides an official evaluation protocol, which we follow completely in the data splits and evaluation metrics (i.e., Hits@50, MRR, Hits@100, and Hits@20 on ogbl-collab, ogbl-citation2, ogbl-ppa, and ogbl-ddi, respectively). We report results on the test set, with mean and standard deviation computed across 10 trials.
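The encoding scheme above can be sketched as follows (table sizes, bin count, and value range are our own illustrative choices, not taken from the paper): discrete heuristics index an embedding table directly, while continuous heuristics are bucketed into equal-width bins first.

```python
import random

random.seed(0)
DIM = 8

def make_table(num_rows):
    """Stand-in for a trainable embedding table: one vector per row."""
    return [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(num_rows)]

cn_table = make_table(64)   # one row per integer CN value (clamped at 63)
aa_table = make_table(32)   # one row per AA bin

def encode_cn(cn_value):
    """Discrete heuristic: each integer value gets its own embedding row."""
    return cn_table[min(cn_value, 63)]

def encode_aa(aa_value, max_aa=10.0, num_bins=32):
    """Continuous heuristic: partition [0, max_aa) into equal-width bins."""
    bin_idx = min(int(aa_value / max_aa * num_bins), num_bins - 1)
    return aa_table[bin_idx]

# Two AA values falling into the same bin share one embedding vector,
# while distinct CN values map to distinct rows.
assert encode_aa(2.51) is encode_aa(2.6)
assert encode_cn(2) is not encode_cn(3)
```

In training, the selected rows would be concatenated into $e_{vu}$ (Line 10 of Algorithm 2) and updated by backpropagation along with the other model weights.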
[Figure 2 bar chart: Hits@50 on ogbl-collab (degree 8), MRR on ogbl-citation2 (degree 21), Hits@100 on ogbl-ppa (degree 73), and Hits@20 on ogbl-ddi (degree 500) for HE, GCN(X), GCN(NE), GCN(X+NE), GCN(X)+HE, GCN(NE)+HE, and GCN(X+NE)+HE; the plotted scores match the corresponding rows of Table 4.]
Figure 2: The results of Algorithm 2 on four OGB link prediction datasets, using heuristic encoding (HE) only, node features (X) only, node embeddings (NE) only, or their combinations. The data splits and evaluation metrics follow the OGB official evaluation protocol [21].

5.2.3 Implementation Details

We implement Algorithm 2 based on PyTorch and PyTorch Geometric [13]. All embedding vectors are initialized following the methods in [16, 20]. The model is trained with the Adam optimizer [28]. The learning rate is decayed using the ExponentialLR method [38]. We conduct experiments on ogbl-ddi and ogbl-collab on a Linux machine with 192G RAM and an NVIDIA Quadro P6000 (24G), and on ogbl-ppa and ogbl-citation2 on a machine with 512G RAM and an NVIDIA A100 (40G). Table 3 lists the configurations of Algorithm 2 for the best performance. We provide our code for reproducing the results at https://github.com/astroming/GNNHE.

Table 3: Configurations of Algorithm 2 for the best performances.

                         ogbl-ddi  ogbl-collab  ogbl-ppa  ogbl-citation2
GNN module               GCN       GCN          GCN       GCN
GNN layers               2         2            2         2
predictor                MLP       MLP          MLP       MLP
predictor layers         4         5            3         4
heuristics               -         SPD,CN,AA    -         SPD,AA
heuristic embedding dim  -         32           -         32
node embedding dim       512       -            256       -
lr                       0.003     0.002        0.001     0.001
dropout rate             0.3       0.3          0.3       0.25
gradient clip norm       5         10           5         10
batch size               100000    70000        100000    15000

5.2.4 Experimental Results

Fig. 2 shows the experimental results, where HE is the model that only uses the heuristic encoding $e_{vu}$ in Line 11 of Algorithm 2.
GNN(X), GNN(NE), and GNN(X+NE) denote the models only using the GNN with three different inputs: node features (X) only, node embeddings (NE) only, and the concatenation of both X and NE (X+NE). The model GNN(X)+HE uses both $h_{vu}$ and $e_{vu}$.

We can see in Fig. 2 that HE outperforms GNN(X) on all datasets, suggesting that NCN-dependent heuristics convey meaningful information that could not be effectively learned by the GNN. Moreover, most of the results of combining the GNN and HE are better than those only using the GNN. In particular, GNN(X)+HE achieves the best results on ogbl-collab and ogbl-citation2. All these results support Remark 3.

6 GNNs with Node Embedding in Link Prediction

As shown in Fig. 2, on two relatively sparse graphs, i.e., ogbl-collab (degree 8) and ogbl-citation2 (degree 21), the performance of GNN(NE) is on par with that of GNN(X), and GNN(X+NE) performs better than GNN(NE) and GNN(X). By comparison, on two denser graphs, i.e., ogbl-ppa (degree 73) and ogbl-ddi (degree 500), GNN(NE) outperforms GNN(X) by a large margin. These results indicate that incorporating node embeddings into GNNs can enhance link prediction performance. More importantly, they reveal a strong positive correlation between the graph degree and the performance improvement from node embeddings, i.e., denser graphs exhibit greater improvement.

Remark 4. For GNN-based link prediction models like Algorithm 2, when the training is supervised by positive and negative links, trainable node embeddings could enhance the expressive power of these models.

Analysis. In Algorithm 2, the parameters optimized from the link samples could include model weights, trainable node embeddings, and other embedding weights. The key difference between node embedding weights and other learnable weights is that the former are unique to each node while the latter are shared across multiple nodes (e.g., the GNN weight matrix $W^{(l)}$ in Eq. 1 is shared across all nodes). The unique nature of node embeddings can bring benefits.
As shown in Algorithm 2, when the model training is supervised by a link sample $(v, u)$, for a GNN using node embeddings, the loss calculated from $(y_{v,u}, \hat{y}_{v,u})$ is used to optimize the node embeddings of nodes $v, u$ and their neighboring nodes (i.e., the nodes involved in calculating $h^{(L)}_v, h^{(L)}_u$). The link state of $(v, u)$ can thus be encoded into the node embeddings of these nodes, enabling those embeddings to remember the relationships between the corresponding nodes. After being trained with sufficient positive and negative link samples, the node embedding of each node could know which nodes (through their node embeddings) in the graph are more or less likely to be connected to that node. If node embeddings are not used in Algorithm 2, link samples only supervise the optimization of the weights that are shared across multiple nodes. The states of link samples could not be effectively preserved by the model, since these shared weights tend to learn a common pattern across different nodes rather than patterns unique to one node. By comparison, each node embedding is unique to its node and can learn the link information specific to that node. In this respect, trainable node embeddings could enhance the expressive power of GNN-based link prediction models.

In Remark 4, the requirement that the model training is supervised by positive and negative links is indispensable. Without this prerequisite, the link state between two nodes could not be encoded into node embeddings. Additionally, negative link samples allow the embeddings of two distant nodes and their neighbors to see each other during the optimization of the GNN-based model.

Finding 5. Following Remark 4, the denser the graph, the greater the enhancement from node embeddings.

Analysis. In GNN-based link prediction models like Algorithm 2, node embeddings in a dense graph can be better learned for link prediction than those in a sparse graph. Our explanation is as follows.
In a dense graph, a node often has many neighboring nodes, providing numerous opportunities for that node to meet other nodes and encode its link relationships with them into its embedding during GNN training. In contrast, a sparse graph typically provides only a limited number of neighbors for each node. For example, in the case where a node $v$ has only one neighboring node $w$, the optimization of the embedding of node $v$ in a GNN would mainly rely on its neighbor $w$. As a result, the learned embedding of node $v$ would lack sufficient information to identify the relationships between node $v$ and the majority of the other nodes in the sparse graph, because node $v$ rarely or never sees them during the training process.

Finding 5 indicates that the graph degree significantly influences the effectiveness of trainable node embeddings in GNN-based link prediction models. Interestingly, prior studies [54] have also highlighted the sensitivity of heuristic methods to the graph degree. This underscores the necessity of considering the graph degree when selecting link prediction methods, as their efficacy may vary depending on it. Investigating the influence of different graph degrees on link prediction methods represents a compelling direction for further research.

7 GNNs in Learning non-NCN-dependent Structural Information

In Section 5, we show that GNNs cannot completely learn the structural information related to the number of common neighbors between two target nodes. The question of what non-NCN-dependent pair-specific structural information can be learned via GNNs poses a significant challenge, due to the potential presence of diverse types of non-NCN-dependent information. Unlike NCN-dependent information, which is directly related to the number of common neighbors, non-NCN-dependent information tends to be abstract and difficult to express in rigorous mathematical terms.
For example, our review in Section 3.2 identifies only three non-NCN-dependent link heuristics. In this work, we leave the exploration of this question as a future research endeavor. Nevertheless, by comparing Algorithm 1 and Algorithm 2, we have the following insight.

Remark 6. The learning styles of SimRank in Algorithm 1 and of the GNN-based link prediction model with node embeddings in Algorithm 2 exhibit certain similarities.¹

Analysis. Comparing Algorithms 1 and 2, several similarities emerge. Firstly, the similarity scores in Line 3 of Algorithm 1 and the node embeddings in Line 3 of Algorithm 2 both need to be initialized and can be dynamically trained. Secondly, the updating computations of both algorithms (i.e., Line 5 of Algorithm 1 and Line 6 of Algorithm 2) involve neighboring nodes. Moreover, both learned results (i.e., $s_{ij}$ in Algorithm 1 and $\hat{y}_{v,u}$ in Algorithm 2) describe the likelihood of the existence of a link between two nodes. However, compared to Algorithm 2, whose trainable parameters include node embeddings, model weights, etc., the expressive power of SimRank is limited: in SimRank, only the similarity scores between every two nodes can be optimized, and each score always takes the form of a scalar.

¹For Remark 6, we do not compare the performance of Algorithm 1 and Algorithm 2 due to the computational difficulty of SimRank. For example, the basic memory requirement of SimRank is 415G and 2.42T on ogbl-collab and ogbl-ppa, respectively. Besides, SimRank produces 0 at Hits@20 on ogbl-ddi.

Remark 6 implies that although NCN-dependent structural information cannot be effectively learned via GNNs (Remark 3), other types of information (e.g., the information captured by SimRank) might be learned through GNNs.

8 Limitation Analysis of Existing Methods

In this section, we first present a brief survey of existing link prediction methods and then identify their possible limitations.
8.1 A Survey of Link Prediction Methods

8.1.1 Heuristic Methods

As outlined in Section 3, traditional link heuristics are usually defined based on the number of common neighbors or paths between two nodes [43]. Their effectiveness in link prediction has been confirmed in real-world tasks [43, 31]. However, many link heuristics are designed for specific graph applications, and their performance may vary on different graphs [31]. Also, the expressiveness of these methods is limited compared to graph representation learning [74].

8.1.2 Graph Neural Networks

GNNs have proven their effectiveness in various graph applications [39, 22, 71, 27], and a number of GNN models have been proposed [29, 68, 83]. GCN [29] learns node representations by summing the normalized representations from first-order neighbors. GraphSAGE [19] samples and aggregates representations from local neighborhoods. GAT [61] introduces an attention-based GNN architecture. JKNet [68] adds a pooling layer after the last GNN layer, with a residual connection from each GNN layer to this pooling layer. Cluster-GCN [10] proposes an efficient algorithm for training deep GCNs on large graphs. LRGA [48] incorporates a low-rank global attention module into GNNs. Several works such as MixHop [1] and DEGNN [37] propose techniques to leverage higher-hop neighbors. ID-GNN [72] embeds each node by considering its identity. These GNNs have demonstrated promising link prediction performance.

8.1.3 Non-GNN-based Node Embedding Methods

One family of node embedding methods is built on matrix factorization [30]. MF [44] is a pioneering work employing matrix factorization in link prediction. FSSDNMF [8] proposes a link prediction model based on non-negative matrix factorization. In general, such methods rely mainly on the adjacency matrix and tend to encounter scalability issues on large graphs. Another family of node embedding methods is based on relative distance encoding.
The similarity of nodes in the embedding space reflects the semantic similarity of nodes in the graph [47]. Such methods learn more similar embeddings for two close nodes than for two distant nodes. Following word embedding [45], methods such as DeepWalk [47], Node2vec [17], and NodePiece [14] learn node embeddings by treating nodes as words and treating sequences of nodes generated from links as sentences. UniNet [69] improves the efficiency of such methods using the Metropolis-Hastings sampling technique [11]. Inspired by subword tokenization [53], NodePiece [14] explores parameter-efficient node embeddings.

8.1.4 SEAL-type Methods

SEAL-type methods have shown superior performance among existing link prediction approaches [21, 36]. SEAL and its subsequent works [74, 37, 60, 70, 59] address the link prediction problem by classifying subgraphs that are extracted specifically for candidate links. SEAL [74] extracts a local enclosing subgraph for each candidate link and uses a GNN [75] to classify these subgraphs for link prediction. GraiL [60] is developed for inductive link prediction; it is similar to SEAL but replaces SortPooling [75] with MEAN pooling. DEGNN [37] proposes a distance-encoding GNN. Cai et al. [6] transform the enclosing subgraph into its corresponding line graph and address link prediction as a node classification problem in that line graph. Pan et al. [46] follow the subgraph strategy of SEAL while designing a new pooling mechanism called WalkPool. SUREL [70] proposes an algorithmic technique to improve the computational efficiency of subgraph generation in SEAL. SIEG [3] incorporates the structural information learned from the enclosing subgraphs into the GNN for link prediction, fusing topological structures and node features to take full advantage of graph information.

8.1.5 Methods Specific for Link Prediction

Various link prediction-specific methods have been introduced [77, 27].
Wang et al. [63] present PLNLP, which jointly uses the representations learned by a GNN, distance encoding, etc. Neo-GNN [73] aggregates, in a weighted manner, the link prediction scores obtained from heuristics and a GNN. NBFNet [82] generalizes traditional path-based link heuristics into a path formulation. Singh et al. [56] show that adding a set of edges to the graph as a pre-processing step can improve the performance of link prediction models. PermGNN [50] optimizes the neighborhood aggregator directly with link samples. Zhao et al. [78] study counterfactual questions about link existence via causal inference. RelpNet [64] aggregates edge features along the structural interactions between two target nodes. Guo et al. [18] propose cross-model distillation techniques for link prediction. Shang et al. [55] propose a negative link sampling method, PbTRM, based on a policy-based training method. Li et al. [35] study the integration of large language models (LLMs) and prompt learning techniques with graphs, enhancing graph transfer capabilities across diverse tasks and domains.

8.2 Limitation Analysis

We provide two basic insights into the application of GNNs in link prediction. Firstly, we show that aggregation-based GNNs inherently lack the ability to learn NCN-dependent structural information for link prediction (Remark 3). Secondly, we demonstrate that node embeddings can boost the performance of GNN-based link prediction models on dense graphs (Remark 4 and Finding 5). These insights can serve as effective avenues to identify and interpret the limitations of existing link prediction methods. To illustrate this, we present two case studies.

[Figure 3 pipeline: the entire graph → extract the subgraph specific for $(v, u)$ → label nodes → GNN → node-wise representations → readout → representation for $(v, u)$ → predictor → $\hat{y}_{vu}$; the first stage is subgraph preparation, the second is subgraph classification.]

Figure 3: The algorithm flow of SEAL-type link prediction methods.

Case study 1. Can SEAL effectively learn NCN-dependent structural information?
SEAL-type methods have achieved the best performance on several link prediction datasets [74, 37, 60, 70]. SEAL [74] has proven that most link heuristics between two nodes can be computed approximately within an enclosing subgraph extracted specifically for those two nodes. As shown in Fig. 3, most SEAL-type methods employ GNNs for graph representation learning, with the expectation that, from such enclosing subgraphs, the GNN can learn structural information equivalent to link heuristics including CN, AA, Katz, etc. However, whether this expectation holds true has not been thoroughly investigated in existing works. Herein we present a rough analysis to examine this issue.

First, the GNNs used in SEAL-type methods, e.g., DGCNN [75] in SEAL [74] and R-GCN [52] in GraiL [60], still belong to the class of aggregation-based GNNs. According to Remark 3, these GNNs inherently cannot effectively learn NCN-dependent structural information. Furthermore, we note that SEAL-type methods typically use a labeling technique [76] to add labeling features to each node in the enclosing subgraph. The labeling features of each node describe the relationship of that node to the two target nodes. Fig. 4 illustrates such a labeling method, where the labeling features of a node are the shortest path distances (SPDs) from that node to the target pair of nodes. The work [76] points out that the labeling features can help the GNN learn structural information about the number of common neighbors. Their explanation is as follows. As shown in Fig. 4, for nodes $v$ and $u$, in the first iteration of the neighborhood aggregation of a GNN, only the common neighbors of $v$ and $u$ receive the labeling messages from both $v$ and $u$; then, in the second iteration, the common neighbors pass such

[Figure 4: two enclosing subgraphs, the left extracted for a positive link sample with target nodes $v, u$ and the right for a negative one with target nodes $w, u$; the first-order neighbors carry SPD-based labels such as (1,1), (1,2), (1,3), (1,4), (1,6), (1,10).]

Figure 4: Node labeling in SEAL-type methods.
The left is a subgraph specific for a positive link sample and the right for a negative one. The labeling features are based on the SPDs from every node (here only the first-order neighbors of node $v$ or $w$ are shown) to the target pair of nodes. For example, on the left, the node with the label (1,10) indicates that the SPDs from this node to nodes $v$ and $u$ are 1 and 10, respectively.

messages back to both $v$ and $u$, which can encode the number of common neighbors into the representations of nodes $v, u$.

However, the aforementioned explanation raises questions regarding its validity. In the second iteration, apart from the common neighbors, the non-common neighbors of node $v$ also pass their messages back to $v$. The messages from all neighbors of $v$ are then aggregated through a set-based pooling (e.g., MEAN or attention-based pooling as shown in Eq. 2). Such an aggregated result for node $v$ washes out the distinguishing label information. We present an example to illustrate this. As shown in Fig. 4, if the pooling method in a GNN is MEAN, then the aggregation of the labeling features of the neighbors of node $v$ equals that of node $w$, i.e., $\mathrm{MEAN}(\{10, 3, 2, 1\}) = \mathrm{MEAN}(\{4, 6, 4, 3, 3\})$. This means that the distinct labeling features of the neighbors of a node are not effectively kept in the aggregated result. In other words, the aggregated results for nodes $v$ and $w$ in the positive and negative link samples become indistinguishable. Note that attention-based pooling in GNNs like GAT [61] also suffers from the above limitation, for the same reason as in the analysis of Remark 2. The same goes for node $u$. It should be noted that our example is merely for illustrative purposes. In practice, a GNN layer contains a series of complicated operations such as linear and non-linear transformations, dropout, residual connections, and others. The structural information in the labeling features could be partially kept in the learned representations.
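The numeric claim in the example above is easy to verify (a toy check, not the paper's code): MEAN pooling maps the two distinct label multisets from Fig. 4 to the same value.

```python
from statistics import fmean

# Second SPD component of each first-order neighbor's label in Fig. 4:
labels_v = [10, 3, 2, 1]    # neighbors of v (positive sample, left)
labels_w = [4, 6, 4, 3, 3]  # neighbors of w (negative sample, right)

# MEAN aggregation renders the two neighborhoods indistinguishable.
assert fmean(labels_v) == fmean(labels_w) == 4.0
```

A sum or count-aware readout would separate the two multisets (sums 16 vs. 20), which is exactly the information a purely mean-based aggregation discards.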
Although SEAL-type methods cannot effectively learn structural information related to the number of common neighbors, we highlight that these methods are powerful for link prediction. They transform the pair-specific link prediction problem into a graph-level classification task. Compared to models like Algorithm 2, which only combine the representations of the two target nodes, SEAL-type methods take advantage of the representations not only of the two target nodes but also of their neighboring nodes in the enclosing subgraph, enabling the model to consider more information about the surrounding environment of the candidate link.

Case study 2. NBFNet lacks the algorithmic ability to leverage node embeddings.

NBFNet [82] is a model specifically developed for link prediction. Unlike GNN-based link prediction methods such as Algorithm 2 and SEAL-type methods, NBFNet generalizes traditional link heuristics such as the Katz index [26] and Personalized PageRank [33] into a general formulation and approximates this formulation with a special network. Unlike aggregation-based GNNs that propagate and aggregate node-wise representations, NBFNet is designed to train edge-wise representations. The model architecture of NBFNet makes it hard to consider node-wise information. This would make NBFNet lack the algorithmic ability to train powerful node embeddings and may lead to non-competitive link prediction performance on dense graphs, considering the strong performance of the GNN using only node embeddings on dense graphs, as shown in Fig. 2.

8.3 Further Analysis of Experimental Results

We expand our limitation analysis of existing link prediction methods by examining their experimental performance on the four OGB benchmark datasets. The results are presented in Table 4 and Fig. 5. In the interest of brevity, our analysis focuses on several main types of methods.

First, as shown in Fig. 5, the performance of heuristic methods is not stable across the four datasets.
For example, RA performs best on ogbl-ppa but second worst on ogbl-ddi. These results are consistent with the research of [31], which indicates that many link heuristics are designed for specific applications and may perform well only on those specific graphs. Moreover, the unstable performance of every single heuristic confirms the need to combine multiple heuristics in link prediction, as done in our Algorithm 2. We also report the heuristic encoding results (Fig. 6) obtained through Algorithm 2 when only employing heuristic encoding.

Table 4: Results on OGB link prediction datasets. Higher scores indicate better performance, with the best results highlighted in bold. Herein, "HE" stands for Heuristic Encoding, "X" represents node attributes, and "NE" denotes Node Embedding.

                ogbl-ddi     ogbl-collab  ogbl-ppa      ogbl-citation2
                Hits@20 (%)  Hits@50 (%)  Hits@100 (%)  MRR (%)
MF [44]         13.68±4.75   38.86±0.49   32.29±0.94    51.86±8.43
FSSDNMF [8]     14.62±2.64   37.95±3.25   34.15±1.16    54.71±8.73
DeepWalk [47]   26.42±6.10   50.37±0.34   28.88±1.63    60.11±0.23
Node2vec [17]   23.26±2.09   48.88±0.54   22.26±0.83    61.41±0.11
NodePiece [14]  24.15±3.04   47.88±0.41   22.85±0.94    61.52±2.91
GraphSAGE [19]  83.90±4.74   48.10±0.81   16.55±2.40    82.60±0.36
GAT [61]        95.38±0.94   52.26±0.85   51.33±2.16    83.17±0.54
Neo-GNN [73]    75.72±3.42   55.31±0.53   49.13±0.60    87.26±1.84
PLNLP [63]      90.88±3.13   52.92±0.98   32.38±2.58    84.92±0.29
NBFNet [82]     18.14±2.12   51.15±1.38   23.96±2.03    74.91±2.37
SEAL [74]       30.56±3.86   54.71±0.79   48.80±4.56    87.67±0.32
DEGNN [37]      26.63±6.42   53.74±0.45   36.48±5.38    60.30±0.81
SIEG [3]        31.95±3.93   55.35±0.52   53.35±1.39    89.87±0.10
HE              21.95±0.08   53.03±0.29   49.22±0.06    81.91±0.05
GCN(X)          12.21±3.16   50.76±1.08   12.23±0.47    80.60±0.04
GCN(NE)         95.49±0.73   50.17±0.56   63.54±1.21    81.50±2.01
GCN(X+NE)       94.42±0.63   53.08±0.46   60.68±3.52    83.85±0.03
GCN(X)+HE       25.78±5.38   56.11±0.64   49.68±0.39    88.63±0.05
GCN(NE)+HE      91.11±3.60   54.21±0.07   61.02±2.51    85.57±0.19
GCN(X+NE)+HE    89.65±2.32   55.70±0.24   59.94±4.62    86.31±0.12
We find that it is not always true that the more heuristics are used, the better the performance of HE. In fact, encoding a certain number of heuristics can yield the best performance, whereas encoding too many heuristics poses an optimization challenge.

The node embedding methods based on relative distance encoding (DeepWalk [47], NodePiece [14]) perform slightly better than those based on matrix factorization (MF [44], FSSDNMF [8]). Nevertheless, all these methods fall short compared to other methods. This could be attributed to their limitations, e.g., reliance solely on the adjacency matrix or unsupervised learning without link samples. It also underscores the critical role of link samples in supervising the training of node embeddings for link prediction.

Fig. 5 also presents the results of MLP and general GNNs (GCN [29], GAT [61] and JKNet [68]) that use node embeddings only. MLP(NE) performs much worse than the GNNs, demonstrating the significance of the neighborhood aggregation of GNNs in training node embeddings, considering that MLP updates each node's representation independently of other nodes. Furthermore, GCN(NE) and GAT(NE) perform comparably, indicating that the expressiveness of GCN is sufficient for learning node embeddings. The similar performance of GCN and GAT empirically supports our analyses in Remarks 2 and 3, where we point out that the attention mechanism (e.g., GAT) cannot address the inherent issue of GNNs in learning structural information related to the number of each node's neighbors and of common neighbors between two nodes.

In Fig. 5, SEAL-type methods show state-of-the-art performance. In particular, SIEG [3] achieves the best results on the two sparse graphs, ogbl-collab and ogbl-citation2, with graph degrees of 8 and 21, respectively.
[Figure 5: Results of different methods for link prediction on four OGB datasets. For MLP and general GNNs, we present their results obtained by utilizing node embeddings, considering the dominant performance of node embeddings as shown in Fig. 2.]

However, SEAL-type methods perform worse than general GNNs with node embeddings (GCN(NE) [29], GAT(NE) [61] and JKNet(NE) [68]) on the two dense graphs, ogbl-ppa and ogbl-ddi. This discrepancy in the performance of SEAL-type methods could be attributed to the algorithmic challenge of training node embeddings using subgraphs.
Unlike general GNNs, the algorithm of SEAL-type methods limits each node to perceiving other nodes within the subgraph rather than the entire graph, thereby restricting the information flow between nodes and potentially reducing the efficiency of learning node embeddings.

Besides, Fig. 5 shows two link prediction-specific methods, namely NBFNet [82] and Neo-GNN [73]. NBFNet underperforms on all four datasets, which aligns with the limitations identified in Section 8.2. Neo-GNN predicts link likelihood by combining the scores obtained by heuristic methods with the result produced by a GNN. It performs on par with the state-of-the-art SEAL-type methods on the two sparse graphs (ogbl-collab and ogbl-citation2).

Lastly, our GNN(X)+HE based on Algorithm 2 performs better than SEAL-type methods on ogbl-collab, supporting our limitation analysis of SEAL-type methods in Section 8.2, i.e., such methods cannot effectively learn the information equivalent to NCN-dependent heuristics. It should be noted that our focus does not lie in developing solutions to these limitations of the existing methods, as this goes beyond the scope of our main goal. Nevertheless, these identified issues could pave the way for future research.
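The general pattern shared by GNN(X)+HE and Neo-GNN, combining a pairwise score from node representations with encoded heuristic values, can be sketched in a few lines. This is our own illustrative stand-in (a Hadamard product of the two node representations concatenated with a heuristic vector, scored by a single linear layer), not the exact predictor of Algorithm 2 or Neo-GNN:

```python
def score_link(emb_u, emb_v, heur_vec, weights):
    """Score a candidate link from node embeddings plus heuristics.

    emb_u, emb_v: representations of the two target nodes.
    heur_vec: encoded heuristic values for the pair (e.g. CN, RA).
    weights: linear weights over the concatenated feature vector,
    standing in for the trained MLP predictor.
    """
    hadamard = [a * b for a, b in zip(emb_u, emb_v)]  # pairwise interaction
    features = hadamard + heur_vec                    # append heuristic channel
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical numbers: a 3-d embedding pair plus two heuristic values.
emb_u = [0.5, 1.0, -0.2]
emb_v = [1.0, 0.5, 0.4]
heur_vec = [2.0, 0.5]
weights = [0.3, 0.3, 0.3, 0.1, 0.1]
print(score_link(emb_u, emb_v, heur_vec, weights))
```

The point of the design is that the heuristic channel injects NCN-dependent information the embedding channel alone cannot express, which is what the ogbl-collab comparison above exploits.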
[Figure 6: Results of HE (Heuristic Encoding) through Algorithm 2, where we encode various combinations of NCN-dependent heuristics. We can find that the best HE is often achieved by encoding a select number of heuristics rather than all heuristics.]

9 Implication for Practical Applications

This study carries significant implications for real-world link prediction applications. A particular emphasis is the selection of appropriate solutions tailored to the graph degree.

For link prediction on sparse graphs, the performance of various methods in our experiments highlights the important role of NCN-dependent information. Approaches that can leverage such information, such as SEAL-type methods and Neo-GNN, generally outperform those that cannot. In practical scenarios, both SEAL-type methods and GNNs with heuristic encoding could yield satisfactory performance. Additionally, traditional machine learning models like an MLP incorporating multiple heuristic encodings serve as viable alternatives, achieving performance comparable to the top-performing methods while offering faster processing times. This is particularly advantageous for tasks such as recommender systems that demand rapid model response.

For link prediction on dense graphs, the contribution of node embeddings becomes dominant. Simple GNNs like GCN [29] with the incorporation of trainable node embeddings can outperform most existing methods, rendering such a solution an optimal choice.
However, this does not mean that a model using node embeddings will necessarily perform better than one using only node features, especially in practical applications where careful feature engineering guided by domain knowledge is conducted. Besides, the use of trainable node embeddings still has limitations in the inductive setting [60], where new nodes are added to the graph and the model, together with all node embeddings, may need to be retrained. In such cases, methods that do not involve the training of node embeddings may offer more practical suitability. 10 Limitations This paper primarily explores several fundamental issues in link prediction methods, particularly in GNNs. It does not seek to introduce novel model architectures. Some analyses in this paper are provided in the form of examples and may lack rigorous mathematical proofs. 11 Conclusion Link prediction stands as a pivotal task within the realm of graph applications. Our exploration into this domain reveals noteworthy variations in the performance of various link prediction methods across different graphs, with a significant dependence on graph degrees. Notably, on dense graphs, we observe that straightforward GNNs, like GCN, exhibit superior link prediction performance compared to many models developed specifically for link prediction. In contrast, on sparse graphs, the simple common-neighbor method often outshines GNN-based approaches. Understanding and interpreting these performance fluctuations is imperative, serving as a compass for refining existing methodologies and establishing a foundation for the development of more effective link prediction algorithms. In addition, this work offers suggestions to practitioners in link prediction. Specifically, on sparse graphs, either SEAL-type methods or GNNs plus heuristic encoding can yield satisfactory performance. On dense graphs, a GNN with node embeddings is an ideal choice in the transductive setting.
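The common-neighbor family of heuristics discussed above is straightforward to compute from adjacency sets. A minimal sketch (the function name and toy graph are illustrative, not from the paper's codebase):

```python
import math

def heuristics(adj, u, v):
    """Compute classic neighborhood heuristics for a candidate link (u, v).

    adj: dict mapping node -> set of neighbors (undirected graph).
    Returns common-neighbor (CN), Jaccard (JA), Adamic-Adar (AA),
    and Resource Allocation (RA) scores.
    """
    common = adj[u] & adj[v]
    union = adj[u] | adj[v]
    return {
        "CN": len(common),                                # |N(u) ∩ N(v)|
        "JA": len(common) / len(union) if union else 0.0, # Jaccard index
        "AA": sum(1 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1),
        "RA": sum(1 / len(adj[w]) for w in common if adj[w]),
    }

# Toy graph on nodes 0..3; (0, 1) is the candidate link.
adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
scores = heuristics(adj, 0, 1)
```

Feeding several such scores into an MLP is one way to realize the "multiple heuristic encodings" alternative mentioned above.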
For inductive learning, methods that do not involve the training of node embeddings may be more suitable. References [1] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. arXiv preprint arXiv:1905.00067, 2019. [2] Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social Networks, 25(3):211–230, 2003. [3] Baole Ai, Zhou Qin, Wenting Shen, and Yong Li. Structure enhanced graph neural networks for link prediction. arXiv preprint arXiv:2201.05293, 2022. [4] Albert-László Barabási, Hawoong Jeong, Zoltán Néda, Erzsébet Ravasz, András Schubert, and Tamás Vicsek. Evolution of the social network of scientific collaborations. Physica A: Statistical Mechanics and its Applications, 311(3-4):590–614, 2002. [5] Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):657–668, 2022. [6] Lei Cai, Jundong Li, Jie Wang, and Shuiwang Ji. Line graph neural networks for link prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. [7] Carlo Vittorio Cannistraci, Gregorio Alanis-Lobato, and Timothy Ravasi. From link-prediction in brain connectomes and protein interactomes to the local-community-paradigm in complex networks. Scientific Reports, 3(1):1613, 2013. [8] Guangfu Chen, Haibo Wang, Yili Fang, and Ling Jiang. Link prediction by deep non-negative matrix factorization. Expert Systems with Applications, 188:115991, 2022. [9] Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? Advances in Neural Information Processing Systems, 33:10383–10395, 2020. [10] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh.
Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 257–266, 2019. [11] Siddhartha Chib and Edward Greenberg. Understanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327–335, 1995. [12] Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33:13260–13271, 2020. [13] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. [14] Mikhail Galkin, Jiapeng Wu, Etienne Denis, and William L Hamilton. NodePiece: Compositional and parameter-efficient representations of large knowledge graphs. arXiv preprint arXiv:2106.12144, 2021. [15] Floris Geerts and Juan L Reutter. Expressiveness and approximation properties of graph neural networks. In International Conference on Learning Representations, 2021. [16] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010. [17] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864, 2016. [18] Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh Chawla, Neil Shah, and Tong Zhao. Linkless link prediction via relational distillation. arXiv preprint arXiv:2210.05801, 2022. [19] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015. [21] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133, 2020. [22] Kexin Huang, Cao Xiao, Lucas M Glass, Marinka Zitnik, and Jimeng Sun. SkipGNN: Predicting molecular interactions with skip-graph networks. Scientific Reports, 10(1):1–16, 2020. [23] Xingyue Huang, Miguel Romero, Ismail Ceylan, and Pablo Barceló. A theory of link prediction via relational Weisfeiler-Leman on knowledge graphs. Advances in Neural Information Processing Systems, 36, 2024. [24] Paul Jaccard. Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bull Soc Vaudoise Sci Nat, 37:547–579, 1901. [25] Glen Jeh and Jennifer Widom. SimRank: A measure of structural-context similarity. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 538–543, 2002. [26] Leo Katz. A new status index derived from sociometric analysis. Psychometrika, 18(1):39–43, 1953. [27] Bharti Khemani, Shruti Patil, Ketan Kotecha, and Sudeep Tanwar. A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions. Journal of Big Data, 11(1):18, 2024. [28] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [29] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. [30] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems.
Computer, 42(8):30–37, 2009. [31] István A Kovács, Katja Luck, Kerstin Spirohn, Yang Wang, Carl Pollis, Sadie Schlabach, Wenting Bian, Dae-Kyum Kim, Nishka Kishore, Tong Hao, et al. Network-based prediction of protein interactions. Nature Communications, 10(1):1–8, 2019. [32] Ajay Kumar, Shashank Sheshar Singh, Kuldeep Singh, and Bhaskar Biswas. Link prediction techniques, applications, and performance: A survey. Physica A: Statistical Mechanics and its Applications, 553:124289, 2020. [33] Amy N Langville and Carl D Meyer. Google's PageRank and Beyond. Princeton University Press, 2011. [34] Elizabeth A Leicht, Petter Holme, and Mark EJ Newman. Vertex similarity in networks. Physical Review E, 73(2):026120, 2006. [35] Jia Li, Xiangguo Sun, Yuhan Li, Zhixun Li, Hong Cheng, and Jeffrey Xu Yu. Graph intelligence with large language models and prompt learning. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6545–6554, 2024. [36] Juanhui Li, Harry Shomer, Haitao Mao, Shenglai Zeng, Yao Ma, Neil Shah, Jiliang Tang, and Dawei Yin. Evaluating graph neural networks for link prediction: Current pitfalls and new benchmarking. Advances in Neural Information Processing Systems, 36, 2024. [37] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33:4465–4478, 2020. [38] Zhiyuan Li and Sanjeev Arora. An exponential learning rate schedule for deep learning. In International Conference on Learning Representations, 2020. [39] Shuming Liang, Zhidong Li, Bin Liang, Yu Ding, Yang Wang, and Fang Chen. Failure prediction for large-scale water pipe networks using GNN and temporal failure series. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 3955–3964, 2021.
[40] David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In Proceedings of the Twelfth International Conference on Information and Knowledge Management, pages 556–559, 2003. [41] Xiaoyang Liu, Xiang Li, Giacomo Fiumara, and Pasquale De Meo. Link prediction approach combined graph neural network with capsule network. Expert Systems with Applications, 212:118737, 2023. [42] Zhaowei Liu, Dong Yang, Yingjie Wang, Mingjie Lu, and Ranran Li. EGNN: Graph structure learning based on evolutionary computation helps more in graph neural networks. Applied Soft Computing, 135:110040, 2023. [43] Víctor Martínez, Fernando Berzal, and Juan-Carlos Cubero. A survey of link prediction in complex networks. ACM Computing Surveys (CSUR), 49(4):1–33, 2016. [44] Aditya Krishna Menon and Charles Elkan. Link prediction via matrix factorization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 437–452. Springer, 2011. [45] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26, 2013. [46] Liming Pan, Cheng Shi, and Ivan Dokmanić. Neural link prediction with walk pooling. arXiv preprint arXiv:2110.04375, 2021. [47] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710, 2014. [48] Omri Puny, Heli Ben-Hamu, and Yaron Lipman. Global attention improves graph networks generalization. arXiv preprint arXiv:2006.07846, 2020. [49] Erzsébet Ravasz, Anna Lisa Somera, Dale A Mongru, Zoltán N Oltvai, and A-L Barabási. Hierarchical organization of modularity in metabolic networks. Science, 297(5586):1551–1555, 2002. [50] Indradyumna Roy, Abir De, and Soumen Chakrabarti.
Adversarial permutation guided node representations for link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 9445–9453, 2021. [51] Gerard Salton. Introduction to Modern Information Retrieval. McGraw-Hill, 1983. [52] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer, 2018. [53] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, 2016. [54] Ke-ke Shang, Tong-chen Li, Michael Small, David Burton, and Yan Wang. Link prediction for tree-like networks. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(6), 2019. [55] Yigeng Shang, Zhigang Hao, Chao Yao, and Guoliang Li. Improving graph neural network models in link prediction task via a policy-based training method. Applied Sciences, 13(1):297, 2022. [56] Abhay Singh, Qian Huang, Sijia Linda Huang, Omkar Bhalerao, Horace He, Ser-Nam Lim, and Austin R Benson. Edge proposal sets for link prediction. arXiv preprint arXiv:2106.15810, 2021. [57] Thorvald Julius Sørensen. A Method of Establishing Groups of Equal Amplitude in Plant Sociology Based on Similarity of Species Content and its Application to Analyses of the Vegetation on Danish Commons. I kommission hos E. Munksgaard, 1948. [58] Pulipati Srilatha and Ramakrishnan Manjula. Similarity index based link prediction algorithms in social networks: A survey. Journal of Telecommunications and Information Technology, (2):87–94, 2016. [59] Qiaoyu Tan, Xin Zhang, Ninghao Liu, Daochen Zha, Li Li, Rui Chen, Soo-Hyun Choi, and Xia Hu. Bring your own view: Graph neural networks for link prediction with personalized subgraph selection.
In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pages 625–633, 2023. [60] Komal Teru, Etienne Denis, and Will Hamilton. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning, pages 9448–9457. PMLR, 2020. [61] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. [62] Huan Wang, Ziwen Cui, Ruigang Liu, Lei Fang, and Ying Sha. A multi-type transferable method for missing link prediction in heterogeneous social networks. IEEE Transactions on Knowledge and Data Engineering, 35(11):10981–10991, 2023. [63] Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, and Hanjing Su. Pairwise learning for neural link prediction. arXiv preprint arXiv:2112.02936, 2021. [64] Ensen Wu, Hongyan Cui, and Zunming Chen. RelpNet: Relation-based link prediction neural network. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 2138–2147, 2022. [65] Zhihao Wu, Youfang Lin, Huaiyu Wan, and Waleed Jamil. Predicting top-L missing links with node and link clustering information in large-scale networks. Journal of Statistical Mechanics: Theory and Experiment, 2016(8):083202, 2016. [66] Zhihao Wu, Youfang Lin, Jing Wang, and Steve Gregory. Link prediction with node clustering coefficient. Physica A: Statistical Mechanics and its Applications, 452:1–8, 2016. [67] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2018. [68] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning, pages 5453–5462. PMLR, 2018. [69] Xingyu Yao, Yingxia Shao, Bin Cui, and Lei Chen.
UniNet: Scalable network representation learning with Metropolis-Hastings sampling. In 2021 IEEE 37th International Conference on Data Engineering (ICDE), pages 516–527. IEEE, 2021. [70] Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, and Pan Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. arXiv preprint arXiv:2202.13538, 2022. [71] Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34, 2021. [72] Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10737–10745, 2021. [73] Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, and Hyunwoo J Kim. Neo-GNNs: Neighborhood overlap-aware graph neural networks for link prediction. Advances in Neural Information Processing Systems, 34:13683–13694, 2021. [74] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. Advances in Neural Information Processing Systems, 31:5165–5175, 2018. [75] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. [76] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. Advances in Neural Information Processing Systems, 34:9061–9073, 2021. [77] Shichang Zhang, Jiani Zhang, Xiang Song, Soji Adeshina, Da Zheng, Christos Faloutsos, and Yizhou Sun. PaGE-Link: Path-based graph neural network explanation for heterogeneous link prediction. In Proceedings of the ACM Web Conference 2023, pages 3784–3793, 2023.
[78] Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. Learning from counterfactual links for link prediction. In International Conference on Machine Learning, pages 26911–26926. PMLR, 2022. [79] Zhou Zhiyao, Sheng Zhou, Bochao Mao, Xuanyi Zhou, Jiawei Chen, Qiaoyu Tan, Daochen Zha, Yan Feng, Chun Chen, and Can Wang. OpenGSL: A comprehensive benchmark for graph structure learning. Advances in Neural Information Processing Systems, 36, 2024. [80] Jingya Zhou, Ling Liu, Wenqi Wei, and Jianxi Fan. Network representation learning: From preprocessing, feature extraction to node embedding. ACM Computing Surveys (CSUR), 55(2):1–35, 2022. [81] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623–630, 2009. [82] Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. Neural Bellman-Ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 34, 2021. [83] Chunya Zou, Andi Han, Lequan Lin, Ming Li, and Junbin Gao. A simple yet effective framelet-based graph neural network for directed graphs. IEEE Transactions on Artificial Intelligence, 2023. | 8 | 1 | The experiments use OGB datasets, which are well-known benchmarks for link prediction. Based on similar models in the literature, training one of these GNNs on datasets like ogbl-collab or ogbl-ddi typically takes 4 to 8 hours on a single GPU with standard hyperparameters. Given the dataset sizes (4,267 to 2,927,963 nodes), a modest model capacity (typical GNN architectures like GCN or GAT), and depending on optimizations such as trainable node embeddings, it is reasonable to expect that training could fit within an 8-hour window on a single GPU. Weight sharing across nodes suggests lower memory requirements than models with unique parameters per node, indicating more efficient memory usage during training. 
| yes | Yes | Graph | Can GNNs Learn Link Heuristics? A Concise Review and Evaluation of Link Prediction Methods | 2024-11-22 0:00:00 | https://github.com/astroming/GNNHE | 1 | Inside the /GNNHE/ogbl-ddi_95.49_10runs/dataset folder of repo | 30 sec × 2000 = 60,000 sec ≈ 16.7 hours | https://colab.research.google.com/drive/1LD0gm45pSoZMyKFWrm23d4s0_DQpgtx8?usp=sharing | Yes | -- Since the requirements were vague, I used Grok to fix the dependency issue; the installation process is recorded in the Colab notebook. |
TXL-PBC: a freely accessible labeled peripheral blood cell dataset | yolov5n | [] | TXL-PBC: a freely accessible labeled peripheral blood cell dataset | 2024-07-18T00:00:00 | https://arxiv.org/abs/2407.13214v1 | [
"https://github.com/lugan113/TXL-PBC_Dataset"
] | {'mAP50': '0.958'} | [
"mAP50"
] | Given the following paper and codebase:
Paper: TXL-PBC: a freely accessible labeled peripheral blood cell dataset
Codebase: https://github.com/lugan113/TXL-PBC_Dataset
Improve the yolov5n model on the TXL-PBC (a freely accessible labeled peripheral blood cell) dataset. The result
should improve on the following metrics: {'mAP50': '0.958'}. You must use only the codebase provided.
| TXL-PBC: A FREELY ACCESSIBLE LABELED PERIPHERAL BLOOD CELL DATASET Lu Gan Northern Arizona University Flagstaff, AZ, USA lg2465@nau.edu Xi Li Independent Researcher Chengdu, China reilixi723@gmail.com ABSTRACT In a recent study, we found that the publicly available BCCD and BCD datasets have significant issues such as labeling errors, insufficient sample size, and poor data quality. To address these problems, we performed sample deletion, re-labeling, and integration of these two datasets. Additionally, we introduced the PBC and Raabin-WBC datasets and ultimately created a high-quality, sample-balanced new dataset, which we named TXL-PBC. The dataset contains 1008 training images, 288 validation images, and 144 test images. First, the dataset underwent strict manual annotation, automatic annotation with the YOLOv8n model, and a manual audit step to ensure the accuracy and consistency of annotations. Second, we addressed the blood cell mislabeling problem of the original datasets: the distribution of label bounding box areas and the number of labels are better than in the BCCD and BCD datasets. Moreover, we used the YOLOv8n model to train on all three datasets, and the performance on the TXL-PBC dataset surpassed that on the original two. Finally, we employed the YOLOv5n, YOLOv5s, YOLOv5l, YOLOv8s, and YOLOv8m detection models as baseline models for TXL-PBC. This study not only enhances the quality of the blood cell dataset but also supports researchers in improving models for blood cell target detection. We published our freely accessible TXL-PBC dataset at https://github.com/lugan113/TXL-PBC_Dataset. Keywords Blood cell datasets · Semi-automatic labeling · YOLOv8 · YOLOv5 · Data integration 1 Introduction In clinical medical diagnosis, the analysis, detection, and counting of blood cells are important indicators for doctors to diagnose diseases. Nowadays, researchers have integrated artificial intelligence models into the analysis, detection, and counting of blood cells. 
To enhance the accuracy of AI model detection, it is crucial to strictly screen the quality and diversity of blood cell samples. However, many hospitals are reluctant to disclose their datasets due to patient privacy and security concerns, making blood cell datasets scarce. Although we found the relevant publicly available blood cell datasets BCCD (Blood Cell Count Dataset) [1] and BCD (Blood Cell Dataset) [2] on the Internet, these two datasets have obvious flaws. These include mislabeled blood cells, errors in sample labeling, insufficient sample size, and poor data quality. These defects seriously limit the effectiveness and research value of AI models in practical applications. Specifically, labeling errors increase model training error, an insufficient sample size limits the generalization ability of the model, and poor data quality affects the reliability of research results. Therefore, we need to re-label and reorganize these datasets in order to improve the quality and applicability of their blood cell samples. The main objective of this study is to perform sample reduction, re-labeling, and integration of the BCCD and BCD datasets. Then, the resulting dataset is integrated with two further cell datasets, the PBC (Peripheral Blood Cells) dataset [3] and the Raabin-WBC (Raabin White Blood Cells) dataset [4], to create a high-quality, sample-balanced new dataset. We call it the TXL-PBC dataset. We use the LabelImg [5] tool to annotate all the datasets. (arXiv:2407.13214v1 [cs.CV] 18 Jul 2024) Specifically, this study proceeds as follows: •Low-quality samples from the BCCD and BCD datasets are deleted, and the remaining samples are integrated. Semi-automated labeling is performed using YOLOv8n [6][7]. •Five white blood cell types from the PBC and Raabin-WBC datasets are introduced and semi-automatically labeled using YOLOv8n. Cells are labeled as 'WBC', 'RBC', and 'Platelet'. 
•The above datasets are integrated, then randomly arranged and renamed to ensure randomness and diversity. •The new dataset is divided into a training set (train: 1008), a validation set (val: 288), and a test set (test: 144). We conducted four case studies to showcase the utility of the TXL-PBC dataset. (1) We compared TXL-PBC with the original two datasets to demonstrate that our annotations are more comprehensive. (2) We performed a data visualization analysis to compare the labels of the BCCD, BCD, and TXL-PBC datasets. The results indicate that the bounding box area distribution and the number of labels in TXL-PBC are superior to those in the other two datasets. (3) We trained on the three datasets with the YOLOv8n model and, across all classes (WBC, RBC, and Platelet), found that the performance of TXL-PBC surpassed that of the BCCD and BCD datasets. (4) We selected YOLOv5n, YOLOv5s, YOLOv5l [8], YOLOv8s, and YOLOv8m as baseline models to train on the TXL-PBC dataset. The training results showed that TXL-PBC exhibits excellent performance across different detection models. The rest of this paper is organized as follows. Section 2 describes the materials and methods used to construct the dataset. Section 3 provides an overview of the TXL-PBC dataset's statistics, model training, and evaluation. Finally, Sections 4 and 5 discuss the findings, draw conclusions, and offer recommendations for future research. 2 Material and Methods 2.1 Data Set Selection This paper mainly involves four cell image datasets: Blood Cell Count and Detection (BCCD), Blood Cell Dataset (BCD), Peripheral Blood Cells (PBC), and Raabin White Blood Cells (Raabin-WBC). The details of these datasets are as follows: 2.1.1 BCCD and BCD datasets The Blood Cell Count and Detection (BCCD) dataset contains 364 annotated blood smear images covering three classes: "RBC", "WBC", and "Platelets". All images have a resolution of 640 × 480. 
This dataset was published at https://github.com/Shenggan/BCCD-Dataset. The Blood Cell Dataset (BCD) consists of 364 annotated blood smear images labeled with three groups: "RBC", "WBC", and "Platelets", at a resolution of 416 × 416 pixels. This dataset was published at https://www.kaggle.com/datasets/adhoppin/blood-celldetection-datatset. These two datasets are widely used in blood cell image analysis and detection research, but they suffer from labeling errors and insufficient sample sizes. Figure 1 (a-c) shows images from the BCCD dataset; all three contain a large number of overlapping and unlabeled red blood cells. Figure 1 (d-f) shows images from the BCD dataset, where many red blood cell labels are missing. Figure 1: (a-c): BCCD dataset, (d-f): BCD dataset. 2.1.2 PBC dataset The Peripheral Blood Cell (PBC) dataset consists of 17,092 images organized into eight groups: neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes (including promyelocytes, myelocytes, and metamyelocytes), erythroblasts, and platelets or thrombocytes. Each image is 360 × 363 pixels, in JPG format, and annotated by expert clinical pathologists. This dataset focuses on images of peripheral blood cells. For our new dataset, we selected five types of white blood cells from this dataset, as shown in Figure 2. Figure 2: (a) Basophils, (b) Eosinophils, (c) Lymphocytes, (d) Monocytes, (e) Neutrophils. 2.1.3 Raabin-WBC dataset The Raabin White Blood Cell (Raabin-WBC) dataset consists of 14,514 WBC images across five classes (301 basophils, 795 monocytes, 1,066 eosinophils, 8,891 neutrophils, and 3,461 lymphocytes) at a resolution of 575 × 575. This dataset mainly focuses on the classification of white blood cells. 
We selected each type of white blood cell and introduced them into our new dataset, as shown in Figure 3. Figure 3: (a) Basophils, (b) Eosinophils, (c) Lymphocytes, (d) Monocytes, (e) Neutrophils. 2.2 Data Cleaning We combined the BCCD and BCD datasets, resulting in a total of 724 images. In the process of integrating the two datasets, we found that some images were duplicated and contained excessive noise; in addition, some overlapping blood cells in these images prevented accurate detection and labeling. As a result, we deleted 270 substandard images and retained 447 high-quality blood cell images. 2.3 Data Annotation To improve the quality of the datasets and save manual annotation time, we adopted a semi-automatic annotation method based on the YOLOv8n model. First, we used the LabelImg tool to label 100 samples, which were then used to train the YOLOv8n model. Next, the trained model was used to automatically label the remaining data. Finally, manual screening was performed to correct labeling errors, missed labels, and repeated labels. The process is shown in Figure 4. Figure 4: Semi-automatic annotation with YOLOv8n. This semi-automatic labeling method greatly improves labeling efficiency and, combined with manual screening, ensures the accuracy and consistency of the labels. We applied this approach to all datasets, resulting in high-quality annotated data that provides a reliable basis for subsequent data integration and analysis. 2.4 Data Integration Our goal in this paper is to build a balanced, diverse, and high-quality new dataset. We therefore combined all the labeled datasets, which contain a total of 1,440 samples: 447 from the BCCD and BCD datasets, and 500 each from the PBC and Raabin-WBC datasets. The label categories of the new dataset are WBC, RBC, and Platelet.
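In the auto-labeling step of Section 2.3, the trained detector's predictions have to be written back as label files in YOLO's normalized `class cx cy w h` format before manual screening. A minimal sketch of that conversion (the helper name and the example box are ours, purely illustrative):

```python
def to_yolo_label(cls_id, box, img_w, img_h):
    """Convert an absolute (x1, y1, x2, y2) box to a YOLO label line:
    'class cx cy w h', with coordinates normalized to [0, 1]."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / img_w   # box center x, normalized
    cy = (y1 + y2) / 2 / img_h   # box center y, normalized
    w = (x2 - x1) / img_w        # box width, normalized
    h = (y2 - y1) / img_h        # box height, normalized
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a hypothetical RBC (class 1) box on a 640x480 BCCD-sized image.
line = to_yolo_label(1, (160, 120, 480, 360), 640, 480)
# -> "1 0.500000 0.500000 0.500000 0.500000"
```

One such line per detected cell is appended to the image's `.txt` label file, which is what the manual screening step then corrects.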
In the process of integrating the dataset, we randomly shuffled and renamed the images to ensure a high level of randomness and diversity in the samples, as shown in Figure 5. Figure 5: TXL-PBC dataset. 2.5 Data Set Splitting We divided the dataset into a training set, a validation set, and a test set in a 7:2:1 ratio. The training set consists of 1,008 samples for model training; the validation set has 288 samples, used to tune model parameters during training and to prevent over-fitting; and the test set contains 144 samples, used for the final evaluation of model performance. We named the new dataset TXL-PBC, aiming to provide a high-quality sample set for cell research and machine learning models. 3 Results 3.1 Sample Comparison To demonstrate the significant improvement in the annotation quality of the TXL-PBC dataset, we randomly selected three images from the new dataset and the corresponding images from the original datasets: (a-c) are from the original BCCD and BCD datasets, and (a1-c1) are from the new TXL-PBC dataset. In Figure 6, it is evident that many RBCs are missing in the original datasets, whereas they are all accurately labeled in the new dataset. Figure 6: (a-c): BCCD and BCD, (a1-c1): TXL-PBC. Figure 7: Distribution of bounding box areas ((a) box plot, (b) violin plot, (c) scatter plot). 3.2 Dataset Statistical Analysis To show that the TXL-PBC dataset has more balanced and diverse samples than the original datasets, we visually analyzed the labels of the BCCD, BCD, and TXL-PBC datasets. Figure 7 presents the bounding-box area distribution of the three datasets using box plots, violin plots, and scatter plots for comparison.
Figures 7(a) and 7(b) show that the TXL-PBC dataset has the widest distribution range: the median bounding-box area is high, and the dataset also contains multiple outliers. As seen in Figure 7(c), the label areas of the TXL-PBC dataset have the widest distribution, indicating that the bounding-box areas cover a broad range. This statistical analysis indicates that the diversity, reliability, and validity of the TXL-PBC dataset are higher than those of the BCCD and BCD datasets. Figure 8 presents a bar chart comparing the number of labels in the three datasets. As shown in the figure, the number of labels in the TXL-PBC dataset is significantly higher than in the other two datasets, with the number of RBC labels reaching about 25,000. The number of WBC labels is also higher in TXL-PBC than in the other two datasets, and its number of Platelet labels is comparable to that of the BCD dataset. This indicates that the new TXL-PBC dataset has clear advantages in both coverage and label count; therefore, the category diversity, as well as the reliability and validity of the labels in the TXL-PBC dataset, are higher than in the original datasets. Figure 8: Bar plot of label count comparison. 3.3 Model Training and Evaluation As shown in Table 1, to validate the performance of the new TXL-PBC dataset, we trained YOLOv8n on the BCCD, BCD, and TXL-PBC datasets. The table reports results for the All, RBC, WBC, and Platelets classes, with values for Precision, Recall, mAP@0.5, and mAP@0.5-0.95. The parameters were set consistently: training for 100 epochs, a batch size of 16, an image size of 320×320, zero workers, the AdamW optimizer, caching enabled to improve training efficiency, and an initial learning rate of 0.001.
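The hyperparameters listed above map directly onto keyword arguments of the Ultralytics `train()` API. A sketch of the configuration (argument names follow the Ultralytics YOLO API; the dataset YAML path is a hypothetical placeholder):

```python
# Training configuration from Section 3.3, expressed as Ultralytics-style
# keyword arguments. This is an illustrative sketch, not the repo's train.py.
cfg = {
    "epochs": 100,       # training epochs
    "batch": 16,         # batch size
    "imgsz": 320,        # input image size 320x320
    "workers": 0,        # zero dataloader workers
    "optimizer": "AdamW",
    "cache": True,       # cache images to speed up training
    "lr0": 0.001,        # initial learning rate
}

# With the ultralytics package installed, training would look like:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   model.train(data="TXL-PBC.yaml", **cfg)   # "TXL-PBC.yaml" is hypothetical
```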
Table 1: Detailed YOLOv8n results on the BCCD, BCD, and TXL-PBC datasets (columns: Precision, Recall, mAP@0.5, mAP@0.5-0.95)
BCCD — All: 0.872, 0.91, 0.913, 0.629; RBC: 0.773, 0.866, 0.88, 0.641; WBC: 0.99, 0.998, 0.993, 0.808; Platelets: 0.852, 0.868, 0.867, 0.438
BCD — All: 0.84, 0.91, 0.907, 0.628; RBC: 0.75, 0.84, 0.873, 0.613; WBC: 0.962, 0.998, 0.976, 0.809; Platelets: 0.809, 0.89, 0.871, 0.461
TXL-PBC — All: 0.952, 0.94, 0.97, 0.794; RBC: 0.942, 0.967, 0.988, 0.859; WBC: 0.99, 0.97, 0.985, 0.837; Platelets: 0.924, 0.883, 0.935, 0.687
It is evident from the table that all results on the TXL-PBC dataset surpass those on the original BCCD and BCD datasets. Notably, for mAP@0.5 on All classes, the TXL-PBC dataset achieves 0.97, while BCCD and BCD achieve only 0.913 and 0.907, respectively. These training results clearly demonstrate the advantages of the TXL-PBC dataset in improving model detection performance. 3.4 Baseline Models In this study, we used several common object detection models as baselines for the new dataset: YOLOv5n, YOLOv5s, YOLOv5l, YOLOv8s, and YOLOv8m. The parameters consistently set for all baseline models were: training for 100 epochs, a batch size of 16, an image size of 320×320, zero workers, the AdamW optimizer, caching enabled to improve training efficiency, and an initial learning rate of 0.001. The performance of the models is shown in Table 2.
Table 2: Performance comparison of different models on the TXL-PBC dataset (columns: Precision, Recall, mAP@0.5, mAP@0.5-0.95, F1 Score)
YOLOv5n — All: 0.948, 0.911, 0.958, 0.756, 0.929; RBC: 0.947, 0.955, 0.988, 0.831, 0.951; WBC: 0.99, 0.974, 0.981, 0.798, 0.981; Platelets: 0.908, 0.804, 0.904, 0.64, 0.853
YOLOv5s — All: 0.959, 0.929, 0.97, 0.775, 0.943; RBC: 0.944, 0.963, 0.988, 0.853, 0.953; WBC: 0.987, 0.964, 0.987, 0.809, 0.975; Platelets: 0.946, 0.86, 0.934, 0.662, 0.901
YOLOv5l — All: 0.947, 0.944, 0.965, 0.776, 0.945; RBC: 0.925, 0.974, 0.989, 0.9858, 0.949; WBC: 0.983, 0.959, 0.98, 0.793, 0.971; Platelets: 0.932, 0.899, 0.928, 0.678, 0.916
YOLOv8s — All: 0.952, 0.949, 0.977, 0.847, 0.95; RBC: 0.934, 0.976, 0.99, 0.908, 0.954; WBC: 0.989, 0.977, 0.986, 0.847, 0.982; Platelets: 0.935, 0.894, 0.955, 0.773, 0.914
YOLOv8m — All: 0.964, 0.946, 0.974, 0.838, 0.954; RBC: 0.952, 0.968, 0.99, 0.9, 0.96; WBC: 0.987, 0.97, 0.986, 0.856, 0.978; Platelets: 0.956, 0.899, 0.946, 0.757, 0.925
The purpose of this is not only to understand the performance differences of different models on the TXL-PBC dataset but also to provide a preliminary performance benchmark for subsequent blood cell detection research. In conclusion, by establishing various baseline models for the TXL-PBC dataset, we give researchers an important reference for studying new object detection tasks. 4 Discussion Our results show that the TXL-PBC dataset not only has significant advantages in label count and bounding-box area distribution, but also substantially outperforms the existing datasets in actual detection results. This indicates that the dataset has great application potential in medical image analysis, where it can support assisted diagnosis, disease research, and other fields, and is expected to provide important support for medical image processing and analysis. Although we have improved the sample diversity, balance, and quality of our cell datasets, our work still has limitations. First, the sample diversity and quantity of the TXL-PBC dataset need further expansion.
Additionally, despite our strict screening process when labeling cell samples, labeling consistency may be affected by subjective differences among annotators. Our future work will therefore focus on expanding the size and diversity of the dataset to enhance model generalization. At the same time, to improve the accuracy and efficiency of annotation, we will explore more effective annotation tools and methods. With these improvements, we aim to provide stronger technical support for cell object detection tasks, automatic cell image labeling, and machine learning models. 5 Conclusion In this paper, we performed sample deletion, re-labeling, and integration on the two publicly available cell datasets BCCD and BCD, then added samples from two further datasets, PBC and Raabin-WBC, to obtain a new dataset, TXL-PBC. As the results show, the new dataset first makes up for the large number of red blood cell labeling errors in the original datasets. Second, the label analysis shows that the TXL-PBC dataset not only has the widest distribution of bounding-box areas, but also a significantly higher number of labels than BCCD and BCD. Finally, the YOLOv8n object detection results on the three datasets show that TXL-PBC far outperforms the other two datasets. In addition, we provide a variety of baseline model comparisons for the TXL-PBC dataset. Beyond comprehensively evaluating the new dataset under different model training settings, our goal is to provide an important reference for researchers studying and testing models. In future work, we plan to further optimize the TXL-PBC dataset, improving its universality and accuracy by introducing more diverse cell image data.
In addition, we will explore more advanced object detection models and evaluate their performance on the TXL-PBC dataset in order to find more efficient cell detection methods. Availability of Data and Materials We have published the TXL-PBC dataset on GitHub (https://github.com/lugan113/TXL-PBC_Dataset). We hope that more researchers will use this dataset for further studies and that, through our work, we can promote the development of cell detection technology and the progress of related medical image detection. Acknowledgments Special thanks to all the people and institutions who provided blood cell datasets and technical support for this study; without their help, it could not have been successfully completed. References [1] S. Cheng. Blood Cell Count and Detection (BCCD) dataset, 2024. https://github.com/Shenggan/BCCD_Dataset. [2] A. Dongre. Blood Cell Detection (BCD) dataset, 2024. https://www.kaggle.com/datasets/adhoppin/blood-celldetection-datatset. [3] Andrea Acevedo, Anna Merino González, Edwin Santiago Alférez Baquero, Ángel Molina Borrás, Laura Boldú Nebot, and José Rodellar Benedé. A dataset of microscopic peripheral blood cell images for development of automatic recognition systems. Data in Brief, 30 (article 105474), 2020. [4] Zahra Mousavi Kouzehkanan, Sepehr Saghari, Eslam Tavakoli, Peyman Rostami, Mohammadjavad Abaszadeh, Farzaneh Mirzadeh, Esmaeil Shahabi Satlsar, Maryam Gheidishahran, Fatemeh Gorgi, Saeed Mohammadi, et al. Raabin-WBC: a large free access dataset of white blood cells from normal peripheral blood. bioRxiv, pages 2021–05, 2021. [5] Tzutalin. LabelImg, 2015. https://github.com/HumanSignal/labelImg. [6] Yaniv Gur, Mehdi Moradi, Hakan Bulu, Yufan Guo, Colin Compas, and Tanveer Syeda-Mahmood. Towards an efficient way of building annotated medical image collections for big data studies.
In Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis: 6th Joint International Workshops, CVII-STENT 2017 and Second International Workshop, LABELS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 10–14, 2017, Proceedings 2, pages 87–95. Springer, 2017. [7] Glenn Jocher, Ayush Chaurasia, and Jing Qiu. Ultralytics YOLO, January 2023. https://github.com/ultralytics/ultralytics. [8] Glenn Jocher. ultralytics/yolov5: v3.1 - Bug Fixes and Performance Improvements, 2020. https://github.com/ultralytics/yolov5. | 8 | 1 | The TXL-PBC dataset has 1,008 training samples, with a batch size of 16 and an image resolution of 320x320 pixels. Training with YOLOv8n for 100 epochs means there are 1,008/16 = 63 iterations per epoch, or approximately 6,300 total iterations. Given the complexity of YOLOv8n and the semi-automated annotation process, the estimated time is relatively short, well within 8 hours on a single GPU assuming decent hardware (NVIDIA RTX 3080 or similar) and efficient data loading. YOLO models are optimized for speed, so training can be accomplished within this time frame. Therefore, it is reasonable to conclude that the model can be trained in under 8 hours on a single GPU. | yes | Yes | CV | TXL-PBC: a freely accessible labeled peripheral blood cell dataset | 2024-07-18 0:00:00 | https://github.com/lugan113/TXL-PBC_Dataset | 1 | inside the repo on TXL-PBC folder | 17 s × 100 epochs ≈ 29 minutes approx. | https://drive.google.com/file/d/1NdhlcOZdyojbL8kctTFOo8eFA03PMWdl/view?usp=sharing | Yes | -- I have fixed the train.py file with correct arguments and file path. I have commented the fixes in the Colab file. |
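The iteration count and wall-clock estimate cited in the row above follow from quick arithmetic (the ~17 s/epoch figure comes from the row's verification field):

```python
import math

train_samples, batch_size, epochs = 1008, 16, 100
iters_per_epoch = math.ceil(train_samples / batch_size)  # 1008 / 16 = 63
total_iters = iters_per_epoch * epochs                   # ~6,300 iterations

# Observed ~17 s per epoch gives the "~29 minutes" estimate in the row.
total_minutes = 17 * epochs / 60

print(iters_per_epoch, total_iters, round(total_minutes, 1))
# 63 6300 28.3
```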
ZJU-RGB-P | CSFNet-2 | [] | CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes | 2024-07-01T00:00:00 | https://arxiv.org/abs/2407.01328v1 | [
"https://github.com/Danial-Qashqai/CSFNet"
] | {'mIoU': '91.40', 'Frame (fps)': '75 (3090)'} | [
"mIoU",
"Frame (fps)"
] | Given the following paper and codebase:
Paper: CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes
Codebase: https://github.com/Danial-Qashqai/CSFNet
Improve the CSFNet-2 model on the ZJU-RGB-P dataset. The result
should improve on the following metrics: {'mIoU': '91.40', 'Frame (fps)': '75 (3090)'}. You must use only the codebase provided.
| CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes. Danial Qashqai*, Emad Mousavian, Shahriar B. Shokouhi, Sattar Mirzakuchaki (Department of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran). Abstract Semantic segmentation, as a crucial component of complex visual interpretation, plays a fundamental role in autonomous vehicle vision systems. Recent studies have significantly improved the accuracy of semantic segmentation by exploiting complementary information and developing multimodal methods. Despite the gains in accuracy, multimodal semantic segmentation methods suffer from high computational complexity and low inference speed. Therefore, it is a challenging task to implement multimodal methods in driving applications. To address this problem, we propose the Cosine Similarity Fusion Network (CSFNet) as a real-time RGB-X semantic segmentation model. Specifically, we design a Cosine Similarity Attention Fusion Module (CS-AFM) that effectively rectifies and fuses features of two modalities. The CS-AFM module leverages cross-modal similarity to achieve high generalization ability. By enhancing the fusion of cross-modal features at lower levels, CS-AFM paves the way for the use of a single-branch network at higher levels. Therefore, we use dual- and single-branch architectures in an encoder, along with an efficient context module and a lightweight decoder, for fast and accurate predictions. To verify the effectiveness of CSFNet, we use the Cityscapes, MFNet, and ZJU datasets for RGB-D/T/P semantic segmentation. According to the results, CSFNet has competitive accuracy with state-of-the-art methods while being state-of-the-art in terms of speed among multimodal semantic segmentation models. It also achieves high efficiency due to its low parameter count and computational complexity.
The source code for CSFNet will be available at https://github.com/Danial-Qashqai/CSFNet. Keywords: Multimodal semantic segmentation; Real-time scene parsing; Cross-modal similarity; Autonomous driving. 1. Introduction Semantic segmentation is a fundamental task in computer vision, dealing with the analysis and understanding of driving scenes through pixel-level classification. Due to the high sensitivity of Advanced Driver-Assistance Systems (ADAS) and the potential for serious accidents in the event of errors, improving the accuracy of semantic segmentation models is very important. Recent advancements in sensor technology and the availability of complementary data such as depth [1], thermal [2], and polarization [3] have opened new doors to the development of multimodal semantic segmentation models in driving scenarios. Research in the field of multimodal semantic segmentation [4], [5], by fusing complementary information with RGB images, enables a deeper understanding of content and provides higher accuracy than RGB models [6], [7]. This superiority in accuracy is accompanied by overcoming challenges such as similar color or texture of objects, lighting variations, limited visibility, and light reflection from glossy surfaces. In the development of multimodal semantic segmentation, four approaches (early fusion [8], mid-term fusion [9], late fusion [10], and multi-level interactive fusion [11], [12], [13]) have been used to combine the two input modalities. (* Corresponding author. E-mail address: ghashghaie_danial@elec.iust.ac.ir.) Early fusion involves fusing the input data before extracting features; such a simple fusion ignores the complementarity between the input information [14]. Therefore, the other aforementioned approaches have been proposed to perform cross-modal fusion using a two-branch network.
Among these methods, multi-level interactive fusion stands out as the leading approach for multimodal semantic segmentation models, achieving superior accuracy. In this approach, the extracted feature maps of the branches are fused at multiple levels. While two-branch networks and multi-level fusion can improve accuracy, they increase computational complexity and dramatically slow down inference speed. Therefore, given the importance of processing speed in driving applications, the use of multimodal models in this domain is challenging. The process of performing fusion operations is a key aspect of multimodal semantic segmentation. Early works [11], [15] fuse cross-modal features in a straightforward manner by applying element-wise addition, overlooking the complementary nature of the features. Recent works [16], [17] have alleviated this problem by using attention-based fusion modules. These modules generally use global information either directly [18] or by applying interactions between the modalities [19] in a trainable approach. Despite the improvement in accuracy, global information does not adequately distinguish between the features of the two modalities. To address these shortcomings, we propose a Cosine Similarity Fusion Network (CSFNet) for real-time RGB-X semantic segmentation. In this model, an optimized encoder extracts features from both the RGB and X modalities. The proposed encoder employs a two-branch architecture for the first three levels and a single branch for the final two levels. This approach reduces computational complexity and results in higher processing speeds. In addition, we further achieve higher computational efficiency by using the Short-Term Dense Concatenate (STDC) [20] backbone for the first time in a multimodal semantic segmentation model. To combine the features of the two modalities more effectively, we design the Cosine Similarity Attention Fusion Module (CS-AFM).
As a novel approach, this module rectifies and fuses the modalities by employing the cosine similarity between corresponding channels in an attention-based manner. Unlike previous methods, the CS-AFM module considers local features by applying average pooling layers, and it can effectively distinguish cross-modal features by exploiting cosine similarity. This module, with its high generalization ability, is used in the CSFNet model both to fuse RGB-X features in the encoder and to fuse skip-connection features with the decoder. Finally, the proposed encoder is combined with an efficient context module and a lightweight decoder for fast and accurate predictions. We evaluate the proposed CSFNet model on three types of multimodal semantic segmentation tasks: RGB-Depth, RGB-Thermal, and RGB-Polarization. Given the research focus on driving scenarios, we use the Cityscapes [1], MFNet [2], and ZJU [3] datasets in this evaluation. In general, CSFNet achieves accuracy competitive with state-of-the-art (SOTA) multimodal semantic segmentation models; it also has the fastest inference speed of all multimodal semantic segmentation models, and because of its low complexity, it can be used on embedded hardware. The main contributions of this study are summarized as follows: • We leverage both dual- and single-branch architectures to design an optimized encoder network. Furthermore, the proposed encoder uses the STDC backbone for the first time in a multimodal semantic segmentation task. • We propose a Cosine Similarity Attention Fusion Module (CS-AFM) that rectifies and fuses the input features based on cross-modal similarity. • We propose a Cosine Similarity Fusion Network (CSFNet) as a real-time RGB-X semantic segmentation model.
• CSFNet achieves competitive accuracy on the Cityscapes (half resolution), MFNet, and ZJU datasets, while also having low complexity and being state-of-the-art in terms of speed among multimodal semantic segmentation models. 2. Related works In this section, we provide a brief overview of previous single- and multi-modal semantic segmentation methods, given the high overlap between their proposed methods and techniques. 2.1. Single-modal semantic segmentation The advent of the Fully Convolutional Network (FCN) [21] and the replacement of fully connected layers with convolutional layers marked a significant advancement in pixel-level classification. Similar to the FCN, U-Net [22] and SegNet [23] adopted the encoder-decoder architecture for semantic segmentation tasks. U-Net used skip connections to transfer the entire feature map, while SegNet transferred the max-pooling indices to reduce computational resources. To obtain a larger receptive field, PSPNet [24] proposed a Pyramid Pooling Module (PPM) that aggregates both local and global context information, while DeepLab [25], [26] used the Atrous Spatial Pyramid Pooling (ASPP) module to capture multi-scale context. Motivated by the integration of attention mechanisms into convolutional networks to capture feature dependencies [27], [28], semantic segmentation models such as DANet [29] and SANet [30] have effectively employed attention mechanisms. Building on these successes, transformer-based architectures have made remarkable progress in semantic segmentation. SETR [31] proposed an encoder with a transformer structure for the sequence-to-sequence prediction task. SegFormer [7] used a hierarchical encoder to achieve multi-scale representations. MaskFormer [32] approached semantic segmentation by predicting sets of masks rather than classifying each pixel individually. In addition, some networks have moved towards real-time semantic segmentation.
Specifically, ERFNet [33] used residual connections and factorized convolutions to reduce the computational cost while maintaining accuracy; BiSeNetV1 [34] and BiSeNetV2 [35] used a dual-branch architecture for low- and high-level information; STDCSeg [20] proposed a Short-Term Dense Concatenate (STDC) backbone and a detail guidance module to guide the low-level layers to learn spatial information; and PP-LiteSeg [36] proposed a lightweight decoder combined with the Unified Attention Fusion Module (UAFM) to strengthen the feature representations. 2.2. Multi-modal semantic segmentation Despite significant advances in single-modal semantic segmentation, accuracy degrades under challenging conditions, especially when there is insufficient information in RGB images. Therefore, multimodal semantic segmentation networks have been widely studied to exploit information complementary to RGB images. The core of multimodal networks is the fusion strategy for RGB and complementary data. To perform cross-modal fusion, early works [11], [15], [37], [38] employed simple element-wise addition. More effectively, [3], [12], [18] used channel attention, and [16], [17], [19] used both channel and spatial attention modules for better fusion. To address input noise, SA-Gate [4], CMX [5], and SpiderMesh [39] proposed the Separation-and-Aggregation Gate (SA-Gate), the Cross-Modal Feature Rectification Module (CM-FRM), and the Demand-guided Target Masking Module (DTM), respectively. In a simpler approach, LDFNet [40] concatenated luminance and depth information for noise suppression. In addition, some multimodal semantic segmentation models have proposed specific approaches to improve their performance.
For instance, ADSD [13] and CAINet [41] used auxiliary supervision with multiple decoders; IGFNet [42] used illumination to guide the fusion of RGB-T features; and CRM-RGBTSeg [43] improved the robustness and accuracy of RGB-T semantic segmentation by using a random masking strategy and a self-distillation loss. The majority of multimodal semantic segmentation models have focused on a particular type of input data. Only a few models, such as NLFNet [44] and CMX [5], have been proposed as RGB-X semantic segmentation networks. Good performance on one type of input data does not necessarily imply good performance in other domains. For example, RGB-D models such as ACNet [12] and SA-Gate [4] have low accuracy on the RGB-T semantic segmentation task [45]. Therefore, in this paper, we propose the CS-AFM module to effectively rectify and fuse different modalities. This module exhibits high generalization ability by considering cross-modal similarity. By using the CS-AFM module in an efficient encoder-decoder architecture, we propose CSFNet as a real-time RGB-X semantic segmentation model for driving applications. 3. The proposed method In this section, we first describe the architecture of the proposed CSFNet, then introduce the Cosine Similarity Attention Fusion Module (CS-AFM) and the Efficient Context Module in detail. 3.1. Architecture overview An overview of the proposed Cosine Similarity Fusion Network (CSFNet) is shown in Fig. 1. Our network follows an encoder-decoder structure optimized for real-time scene parsing. The encoder uses a double-branch network for the first three stages and a single-branch network for the last two stages. Using this structure with an efficient STDC [20] backbone reduces computational complexity and increases inference speed. Specifically, the RGB and X (depth, thermal, or polarization) modalities are effectively extracted and fused in the first three stages.
In the last two stages, high-level features are extracted from the fused features. Due to the presence of noise in the input data and the importance of considering complementary features, we design the Cosine Similarity Attention Fusion Module (CS-AFM). This module uses cosine similarity in an attention-based approach to rectify and fuse the cross-modal features. Thus, we apply the CS-AFM module at the first three levels of the encoder network. To capture contextual information, we employ an efficient context module between the encoder and decoder. This module models long-range dependencies using 1D convolutional layers. To improve efficiency, we leverage a lightweight decoder in CSFNet. Our decoder uses a combination of learned upsampling modules and CS-AFM modules for efficient feature recovery. The learned upsampling modules consist of a bilinear interpolation and 3×3 convolutional layers, followed by a batch normalization layer and the ReLU activation function. Except for the third learned upsampling module, which upsamples by a factor of 4 and uses two convolutional layers, the other three modules upsample by a factor of 2 and have only one convolutional layer. In addition, the three CS-AFM modules effectively fuse the skip connections and decoder feature maps. The skip connections reduce the encoder feature maps with a 1×1 convolution to the same number of channels as the decoder. Fig. 1. Overview of our proposed CSFNet for real-time RGB-X semantic segmentation. The inputs are the RGB and X (depth, thermal, or polarization) modalities. ($k_w \times k_h$, C, S2: convolution with kernel size $k_w \times k_h$, C output channels, and stride 2; BN: batch normalization; N-d: the channel dimension is N.) 3.2. Cosine similarity attention fusion module Using global information for cross-modal feature fusion has been a common approach in previous research. Despite the improvements, global information cannot adequately discriminate between cross-modal features.
To address this problem, we propose a novel Cosine Similarity Attention Fusion Module (CS-AFM) to rectify and fuse the input modalities in a more efficient way. The structure of the CS-AFM module is shown in Fig. 2. Suppose $F_x \in \mathbb{R}^{C \times W \times H}$ and $F_y \in \mathbb{R}^{C \times W \times H}$ are the extracted feature maps of the two encoder branches. In the CS-AFM module, the input features ($F_x$ and $F_y$) are downsampled using adaptive average pooling layers. Then, cosine similarity is applied to measure the similarity between the two modalities. To perform the cosine similarity, the downsampled features are reshaped to dimension $C \times N$, where $N$ is equal to $P_w \times P_h$. The process of obtaining the similarity vector $S_v$ from the downsampled features $\bar{F}_x$ and $\bar{F}_y$ is formulated as follows: $S_v = \frac{\bar{F}_x \cdot \bar{F}_y}{\lVert \bar{F}_x \rVert \times \lVert \bar{F}_y \rVert}$, (1) where $S_v \in \mathbb{R}^{C \times 1}$ indicates the degree of similarity between the channels of the two input modalities, and its values range from -1 to 1. Fig. 2. The structure of our proposed Cosine Similarity Attention Fusion Module (CS-AFM). To find the correlations between the channels and make an appropriate weight distribution, we employ two convolutional layers followed by a sigmoid function, as shown in the following equation: $W = \sigma(\mathrm{Conv}_{1 \times 1}(\mathrm{ReLU}(\mathrm{BN}(\mathrm{Conv}_{1 \times 1}(S'_v)))))$, (2) where $\sigma$, BN, and $S'_v$ denote the sigmoid function, the batch normalization layer, and the reshaped form of $S_v$, respectively. To rectify the cross-modal features, the weights $W \in \mathbb{R}^{C \times 1 \times 1}$ and $(1-W) \in \mathbb{R}^{C \times 1 \times 1}$ are multiplied by their corresponding modalities and added to the other modality using the following equations: $F'_x = F_x + F_y \times W$, $F'_y = F_y + F_x \times (1-W)$, (3) where $F'_x \in \mathbb{R}^{C \times W \times H}$ and $F'_y \in \mathbb{R}^{C \times W \times H}$ are the rectified feature maps. In addition, to fuse the cross-modal features in the encoder and to fuse the skip connections with the decoder features, a weighted element-wise summation is used, as in the following equation: $F_m = F_y \times W + F_x \times (1-W)$, (4) where $F_m \in \mathbb{R}^{C \times W \times H}$ is the merged feature map. 3.3.
Efficient context module
In order to achieve a richer semantic understanding, we propose an efficient context module to capture contextual information. As shown in Fig. 3, this module first applies an adaptive average pooling layer along with a 1×1 convolutional layer to reduce the dimension of the input feature maps from C_in×W/32×H/32 to C_in/4×S_w×S_h. Then, two 1D convolutional layers are used in two parallel branches to extract long-range dependencies. These layers use 4×1 and 1×4 kernels and reduce the channel dimension from C_in/4 to C_in/16. To merge the branches by element-wise summation, we leverage bilinear interpolation to upsample the features to the input resolution. Finally, a 3×3 convolutional layer is applied to reduce the channel dimension from C_in/16 to C_in/32. It is worth noting that all convolutional layers, except the last one, use a batch normalization layer and a ReLU activation function.

Fig. 3. Detailed structure of the efficient context module.

4. Experiments and results
In this section, we first describe three benchmark datasets for RGB-D/T/P semantic segmentation tasks in driving scenarios. Then, we explain the implementation details of our proposed CSFNet and compare it with SOTA methods. Finally, we perform ablation experiments to demonstrate the effectiveness of our proposed method.

4.1. Multimodal datasets
To evaluate the performance of our proposed method, we use the Cityscapes [1], MFNet [2], and ZJU [3] datasets for the RGB-D, RGB-T, and RGB-P semantic segmentation tasks, respectively.

Cityscapes. Cityscapes is an RGB-D dataset of urban street scenes. It consists of 5,000 finely annotated images with 19 semantic classes. The images have a resolution of 2048×1024 and are divided into 2,975 images for training, 500 for validation, and 1,525 for testing.

MFNet. The MFNet dataset contains 1,569 RGB-T image pairs with a resolution of 640×480.
It has 784 images for training, 392 for validation, and 393 for testing, provided in 9 semantic classes. The training set consists of 50% daytime and 50% nighttime images, while the validation and test sets contain 25% daytime and 25% nighttime images each.

ZJU. The ZJU is an RGB-P dataset captured from complex campus street scenes and annotated with 8 semantic classes. It has 344 image pairs for training and 50 for validation. The original images have a resolution of 1224×1024 but are resized to 612×512. Each image pair contains four polarized images [I_0°, I_45°, I_90°, I_135°], where I_α denotes the polarized image with polarization angle α. In order to obtain the Angle of Linear Polarization (AoLP) representation, the Stokes vectors S_1 and S_2 can be derived from the following equation:

S_1 = I_0° − I_90°,   S_2 = I_45° − I_135°,   (5)

where S_1 and S_2 represent the ratio of 0° and 45° linear polarization over their perpendicular polarized portion. Then, AoLP can be formulated as:

AoLP = (1/2) arctan(S_1 / S_2).   (6)

4.2. Implementation details
We implement the proposed CSFNet using PyTorch 2.1.2, CUDA 12.1, CUDNN 8.9.0, and Python 3.10.13 on an NVIDIA RTX 3090 GPU (24 GB RAM). The STDC1 and STDC2 [20] backbones are pretrained on the ImageNet [46] dataset and employed in CSFNet-1 and CSFNet-2, respectively. For the Cityscapes dataset, the batch size, learning rate, and number of training epochs are set to 16, 0.02, and 300, while for the MFNet and ZJU datasets, they are set to 8, 0.01, and 600, respectively. Stochastic gradient descent (SGD) with momentum 0.9 and weight decay 5×10⁻⁴ is adopted as the optimizer, and the polynomial learning rate policy with power 0.9 is used to reduce the learning rate during training. For all datasets, random horizontal flipping, random scaling, random cropping, random color jittering, and normalization are used for data augmentation. The random scales are in the range [0.5, 1.75].
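The Stokes/AoLP derivation of Eqs. (5)–(6) is easy to exercise numerically. The sketch below is an illustration, not the ZJU preprocessing code: the per-pixel polarization angle `phi` and the ideal intensities I_α = 0.5·(1 + cos(2α − 2φ)) are synthetic assumptions, and `arctan2` is used instead of a bare `arctan` to avoid division by zero (it widens the output range accordingly):

```python
import numpy as np

# Synthetic intensities behind a linear polarizer at angle alpha,
# I_alpha = 0.5 * (1 + cos(2*alpha - 2*phi)), with a random per-pixel phi.
rng = np.random.default_rng(0)
phi = rng.uniform(0, np.pi, size=(4, 4))
I = {a: 0.5 * (1 + np.cos(2 * np.deg2rad(a) - 2 * phi)) for a in (0, 45, 90, 135)}

# Stokes components, Eq. (5); for these ideal inputs S1 = cos(2*phi), S2 = sin(2*phi)
S1 = I[0] - I[90]
S2 = I[45] - I[135]

# AoLP, Eq. (6) as printed; arctan2(S1, S2) generalizes arctan(S1/S2)
aolp = 0.5 * np.arctan2(S1, S2)
print(aolp.shape)  # (4, 4)
```

This confirms the shape bookkeeping: the AoLP map has the same spatial resolution as the four input images and serves as the single extra input channel ("X") fed to the polarization branch.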
Except for Cityscapes, which is cropped to 1024×512, MFNet and ZJU use their native image resolution as the crop size. We also concatenate the luminance and input depth information for noise suppression, similar to [40]. As mentioned, adaptive average pooling layers are applied in the proposed CS-AFM and efficient context modules. For more details, the output size of the adaptive average pooling layers for each dataset is shown in Table 1.

4.3. Comparison with SOTA models
For a comprehensive comparison, we evaluate the proposed method on three RGB-D/T/P datasets using different metrics. The metrics include the number of parameters, frames per second (FPS), floating point operations (FLOPs), and mean intersection over union (mIoU).

Table 1. The output size of the adaptive average pooling layers for each dataset in different modules (Lx means level x).

| Module | Cityscapes | MFNet & ZJU |
|---|---|---|
| CS-AFM-L1 | 32×16 | 24×16 |
| CS-AFM-L2 | 16×8 | 12×8 |
| CS-AFM-L3 | 8×4 | 6×4 |
| CS-AFM-L4 | 4×2 | 3×2 |
| Context Module | 8×4 | 5×5 |

Table 2. Comparison with state-of-the-art methods on the Cityscapes val set at half resolution (1024×512).

| Method | Modal | Backbone | GPU | Params (M) | FPS | mIoU |
|---|---|---|---|---|---|---|
| SwiftNet [47] | RGB | ResNet18 | GTX 1080Ti | 11.8 | 134.9 | 70.2 |
| BiSeNetV2 [35] | RGB | – | GTX 1080Ti | – | 156 | 73.4 |
| BiSeNetV2-L [35] | RGB | – | GTX 1080Ti | – | 47.3 | 75.8 |
| STDC1-Seg50* [20] | RGB | STDC1 | RTX 3090 | – | 145.6 | 72.2 |
| STDC2-Seg50* [20] | RGB | STDC2 | RTX 3090 | – | 96.16 | 74.2 |
| PP-LiteSeg-T1* [36] | RGB | STDC1 | RTX 3090 | – | 166.4 | 73.1 |
| PP-LiteSeg-B1* [36] | RGB | STDC2 | RTX 3090 | – | 105.8 | 75.3 |
| LDFNet* [40] | RGB-D | ERFNet | RTX 3090 | 2.31 | 68.5 | 68.48 |
| ESANet* [18] | RGB-D | R18-NBt1D | RTX 3090 | – | 70.22 | 74.65 |
| ESANet* [18] | RGB-D | R34-NBt1D | RTX 3090 | – | 43.63 | 75.22 |
| SGACNet* [17] | RGB-D | R18-NBt1D | RTX 3090 | 22.1 | 50.1 | 73.3 |
| SGACNet* [17] | RGB-D | R34-NBt1D | RTX 3090 | 35.6 | 35.7 | 74.1 |
| CSFNet-1 | RGB-D | STDC1 | RTX 3090 | 11.31 | 106.1 | 74.73 |
| CSFNet-2 | RGB-D | STDC2 | RTX 3090 | 19.37 | 72.3 | 76.36 |

* The inference speed of marked models is retested on an NVIDIA RTX 3090 GPU.
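For reference, the mIoU metric used throughout these comparisons can be computed from dense label maps; the sketch below is a minimal, framework-free version (the toy labels are arbitrary, and real evaluation pipelines typically accumulate a confusion matrix over the whole validation set instead):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union between two dense label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 2, 2]])
target = np.array([[0, 1, 1], [1, 2, 2]])
print(round(mean_iou(pred, target, 3), 3))  # → 0.722
```

Here class 0 scores 1/2, class 1 scores 2/3, and class 2 scores 1, so the mean is (0.5 + 0.667 + 1)/3 ≈ 0.722.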
1) Comparison results on Cityscapes: Given the importance of the speed metric, Table 2 presents the results on the Cityscapes dataset at half resolution (1024×512). According to the results, CSFNet-2 achieves an mIoU of 76.36%, the highest accuracy among both the RGB and RGB-D semantic segmentation models. It also attains a speed of 72.3 FPS, the second-highest among the RGB-D models after its lighter version, CSFNet-1, at 106.1 FPS. Despite the improvement in speed, the proposed CSFNet does not reach the speed of the fastest RGB models due to the higher complexity of RGB-D processing. Remarkably, both CSFNet-1 and CSFNet-2 have fewer parameters and higher accuracy than the efficient SGACNet model [17], which shows the high efficiency of the proposed architecture. Fig. 4 presents some of the qualitative results obtained on the Cityscapes dataset, showcasing the remarkable semantic segmentation performance of our proposed method.

2) Comparison results on MFNet: Among the methods compared in Table 3, CSFNet-2 achieves the second-highest mIoU of 59.98%, surpassed only by the SOTA method CRM-RGBTSeg [43] with an mIoU of 61.4%. It also achieves the highest IoU of 26.05% for the Guardrail class. Our proposed method shows remarkable results in terms of computational efficiency. Specifically, CSFNet-1 achieves an mIoU of 56.05% with only 11.30M parameters and 27.17G FLOPs. As presented in Table 4, CSFNet-1 with 106.3 FPS and CSFNet-2 with 72.7 FPS are SOTA in terms of inference speed, and they also have the highest accuracy among all real-time RGB-T semantic segmentation methods. According to the results in Table 5, CSFNet-1 achieves the highest accuracy for the daytime scenes with an mIoU of 55.27%. In spite of weak illumination and noisy information in nighttime RGB images, CSFNet-2 demonstrates superior performance, attaining the highest mIoU of 60.80%.
These results show the effective performance of the CS-AFM module in fusing and rectifying cross-modal features. The visual results for daytime and nighttime scenarios of the MFNet dataset are shown in Fig. 5. It can be observed that CSFNet produces accurate and detailed segmentation predictions, particularly in poor lighting conditions.

Fig. 4. Visual results of CSFNet on the Cityscapes val set (half resolution). From left to right: RGB input, depth input, prediction, and ground truth.

Table 3. Comparison with state-of-the-art methods on the MFNet dataset. The best results are shown in bold font.

| Method | Backbone | Params (M) | FLOPs (G) | Car | Person | Bike | Curve | Car Stop | Guardrail | Color Cone | Bump | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MFNet [2] | – | 0.73 | – | 65.9 | 58.9 | 42.9 | 29.9 | 9.9 | 8.5 | 25.2 | 27.7 | 39.7 |
| RTFNet [11] | ResNet50 | 185.24 | 245.71 | 86.3 | 67.8 | 58.2 | 43.7 | 24.3 | 3.6 | 26.0 | 57.2 | 51.7 |
| RTFNet [11] | ResNet152 | 254.51 | 337.04 | 87.4 | 70.3 | 62.7 | 45.3 | 29.8 | 0.0 | 29.1 | 55.7 | 53.2 |
| FuseSeg [38] | DenseNet161 | 141.52 | 193.40 | 87.9 | 71.7 | 64.6 | 44.8 | 22.7 | 6.4 | 46.9 | 47.9 | 54.5 |
| NLFNet [44] | ResNet-18 | – | – | 88.5 | 69.0 | 63.9 | 47.8 | 25.6 | 6.1 | 45.0 | 44.7 | 54.3 |
| FEANet [16] | ResNet152 | – | – | 87.8 | 71.1 | 61.1 | 46.5 | 22.1 | 6.6 | 55.3 | 48.9 | 55.3 |
| ABMDRNet [45] | ResNet50 | 64.60 | 194.33 | 84.8 | 69.6 | 60.3 | 45.1 | 33.1 | 5.1 | 47.4 | 50 | 54.8 |
| EAEFNet [19] | ResNet152 | 200.4 | 147.3 | 87.6 | 72.6 | 68.3 | 48.6 | 35.0 | 14.2 | 52.4 | 58.3 | 58.9 |
| CAINet [41] | MobileNet-V2 | 12.16 | 123.62 | 88.5 | 66.3 | 68.7 | 55.4 | 31.5 | 9.0 | 48.9 | 60.7 | 58.6 |
| SpiderMesh [39] | MiT-B4 | – | 398.8 | 89.9 | 75.3 | 64.8 | 51.5 | 31.4 | 4.5 | 54.5 | 55.9 | 58.4 |
| IGFNet [42] | MiT-B2 | 67.44 | 69.18 | 88.0 | 74.0 | 62.7 | 48.2 | 36.0 | 14.2 | 52.4 | 57.5 | 59.0 |
| CMX [5] | MiT-B2 | – | – | 89.4 | 74.8 | 64.7 | 47.3 | 30.1 | 8.1 | 52.4 | 59.4 | 58.2 |
| CMX [5] | MiT-B4 | – | – | 90.1 | 75.2 | 64.5 | 50.2 | 35.3 | 8.5 | 54.2 | 60.6 | 59.7 |
| CRM-RGBTSeg [43] | Swin-B | – | – | 90.0 | 75.1 | 67.0 | 45.2 | 49.7 | 18.4 | 54.2 | 54.4 | 61.4 |
| CSFNet-1 | STDC1 | 11.30 | 27.17 | 86.91 | 71.35 | 62.21 | 44.16 | 33.10 | 7.96 | 46.11 | 54.54 | 56.05 |
| CSFNet-2 | STDC2 | 19.36 | 47.82 | 87.36 | 73.59 | 63.74 | 49.28 | 43.49 | 26.05 | 47.32 | 50.79 | 59.98 |

Table 4. The inference speed of networks on the MFNet dataset.
| Method | GPU | FPS | mIoU |
|---|---|---|---|
| MFNet [2] | GTX Titan X | 55.6 | 39.7 |
| RTFNet-50* [11] | RTX 3090 | 40.88 | 51.7 |
| RTFNet-152* [11] | RTX 3090 | 25.24 | 53.2 |
| FuseSeg-161 [38] | RTX 2080 Ti | 30.01 | 54.5 |
| NLFNet [44] | GTX 1080Ti | 35.6 | 54.3 |
| FEANet* [16] | RTX 3090 | 21.13 | 55.3 |
| CSFNet-1 | RTX 3090 | 106.3 | 56.05 |
| CSFNet-2 | RTX 3090 | 72.7 | 59.98 |

* The inference speed of marked models is retested on an NVIDIA RTX 3090 GPU.

Table 5. The comparative results for the daytime and nighttime scenarios on the MFNet dataset.

| Method | Backbone | Daytime mIoU (%) | Nighttime mIoU (%) |
|---|---|---|---|
| RTFNet [11] | ResNet152 | 45.8 | 54.8 |
| FuseSeg [38] | DenseNet161 | 47.8 | 54.6 |
| NLFNet [44] | ResNet-18 | 50.3 | 54.8 |
| SpiderMesh [39] | ResNet152 | 52.0 | 56.0 |
| CMX [5] | MiT-B2 | 51.3 | 57.8 |
| CMX [5] | MiT-B4 | 52.5 | 59.4 |
| CSFNet-1 | STDC1 | 55.27 | 46.45 |
| CSFNet-2 | STDC2 | 49.34 | 60.80 |

3) Comparison results on ZJU: In Table 6, we compare our proposed CSFNet with SOTA RGB and RGB-P semantic segmentation methods using the ZJU dataset. According to the results, CSFNet-2 and CSFNet-1 achieve mIoUs of 91.40% and 90.85%, respectively, which are the third- and fourth-highest accuracies after CMX-B4 [5] and CMX-B2 [5]. Despite the competitive results in terms of accuracy, CSFNet-1 with 108.5 FPS and CSFNet-2 with 75 FPS have the highest inference speed among all RGB-P semantic segmentation models. The qualitative results of CSFNet on the ZJU dataset are illustrated in Fig. 6.

Table 6. Comparison with state-of-the-art methods on the ZJU dataset.
| Method | Modal | Backbone | Building | Glass | Car | Road | Tree | Sky | Pedestrian | Bicycle | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SegFormer [7] | RGB | MiT-B2 | 90.6 | 79.0 | 92.8 | 96.6 | 96.2 | 89.6 | 82.9 | 89.3 | 89.6 |
| NLFNet [44] | RGB-AoLP | ResNet-18 | 85.4 | 77.1 | 93.5 | 97.7 | 93.2 | 85.9 | 56.9 | 85.5 | 84.4 |
| EAFNet [3] | RGB-AoLP | ResNet-18 | 87.0 | 79.3 | 93.6 | 97.4 | 95.3 | 87.1 | 60.4 | 85.6 | 85.7 |
| CMX [5] | RGB-AoLP | MiT-B2 | 91.5 | 87.3 | 95.8 | 98.2 | 96.6 | 89.3 | 85.6 | 91.9 | 92.0 |
| CMX [5] | RGB-AoLP | MiT-B4 | 91.6 | 88.8 | 96.3 | 98.3 | 96.8 | 89.7 | 86.2 | 92.8 | 92.6 |
| CSFNet-1 | RGB-AoLP | STDC1 | 90.19 | 84.75 | 95.21 | 97.91 | 96.23 | 89.18 | 82.49 | 90.86 | 90.85 |
| CSFNet-2 | RGB-AoLP | STDC2 | 91.10 | 86.11 | 95.54 | 98.03 | 96.51 | 89.50 | 83.18 | 91.26 | 91.40 |

Fig. 5. Visual results of CSFNet on the MFNet dataset. From left to right: RGB input, thermal input, prediction, and ground truth.

4.4. Ablation Study
We performed ablation experiments on the Cityscapes val set to verify the effectiveness of the proposed CSFNet. As shown in Table 7, we evaluate the performance of CSFNet-1 with different levels of parallelization in the encoder network. According to the results, adding two parallel stages (from 3 to 5) leads to an 11.58% increase in the number of parameters, an 83.77% increase in FLOPs, and a 29.87% decrease in inference speed. Despite this performance degradation, the mIoU accuracy improved by only 0.07%. This slight improvement in accuracy demonstrates the strength of the CS-AFM module in fusing and rectifying cross-modal features in the first three stages. In other words, by improving the fusion of cross-modal features at lower levels, CS-AFM paves the way for the use of a single-branch network at higher levels.

Fig. 6. Visual results of CSFNet on the ZJU dataset. From left to right: RGB input, AoLP input, prediction, and ground truth.

Table 7. Performance comparison of CSFNet-1 at different levels of parallelization using the Cityscapes val set (half resolution).
| Method | Dual-Branch | Params (M) | FLOPs (G) | FPS | mIoU |
|---|---|---|---|---|---|
| CSFNet-1 | 3 stages | 11.31 | 47.28 | 106.1 | 74.73 |
| CSFNet-1 | 4 stages | 11.57 | 67.11 | 82.9 | 74.75 |
| CSFNet-1 | 5 stages | 12.62 | 86.89 | 74.4 | 74.80 |

We also evaluate the effectiveness of using the CS-AFM module in the decoder network. As shown in Table 8, using the CS-AFM module in the decoder achieves higher accuracy, at the cost of more parameters and a lower FPS, compared to applying element-wise addition.

5. Conclusion
Considering the critical need for real-time processing in driving applications, and to overcome the speed limitations of existing multimodal semantic segmentation methods, we design CSFNet, a real-time RGB-X semantic segmentation model. The key component of this model is the proposed CS-AFM module. The CS-AFM module rectifies and fuses the input modalities by employing the cosine similarity between the corresponding channels in an attention-based approach. Given the high generalization ability of the CS-AFM module and its effective performance in low-level feature fusion, we propose an optimized encoder, followed by an effective context module and a lightweight decoder. Experiments on three RGB-D, RGB-T, and RGB-P datasets demonstrate that our CSFNet model achieves high efficiency and is the fastest multimodal semantic segmentation model while maintaining competitive accuracy with SOTA methods. The combination of these advantages makes CSFNet ideal for real-time applications in autonomous driving and robotics and facilitates its implementation on embedded hardware.

References
[1] M. Cordts et al., “The Cityscapes Dataset for Semantic Urban Scene Understanding,” in Proc. 2016 IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 3213–3223.
[2] Q. Ha, K. Watanabe, T. Karasawa, Y. Ushiku, and T. Harada, “MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes,” in Proc. IROS, Sep. 2017, pp. 5108–5115.
[3] K. Xiang, K. Yang, and K.
Wang, “Polarization-driven semantic segmentation via efficient attention-bridged fusion,” Opt. Exp., vol. 29, no. 4, pp. 4802–4820, 2021.
[4] X. Chen et al., “Bi-directional cross-modality feature propagation with separation-and-aggregation gate for RGB-D semantic segmentation,” in Proc. ECCV, 2020, pp. 561–577.
[5] J. Zhang, H. Liu, K. Yang, X. Hu, R. Liu, and R. Stiefelhagen, “CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation With Transformers,” IEEE Trans. Intell. Transp. Syst., vol. 24, no. 12, pp. 14679–14694, Dec. 2023.
[6] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder–decoder with atrous separable convolution for semantic image segmentation,” in Proc. ECCV, 2018, pp. 801–818.
[7] E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo, “SegFormer: Simple and efficient design for semantic segmentation with transformers,” in Proc. NeurIPS, 2021, pp. 12077–12090.
[8] C. Couprie, C. Farabet, L. Najman, and Y. LeCun, “Indoor semantic segmentation using depth information,” 2013, arXiv:1301.3572.
[9] J. Wang, Z. Wang, D. Tao, S. See, and G. Wang, “Learning common and specific features for RGB-D semantic segmentation with deconvolutional networks,” in Proc. Comput. Vis. – ECCV 2016: 14th Euro. Conf., 2016, pp. 664–679.

Table 8. Performance evaluation of the CS-AFM module in the decoder network using the Cityscapes val set (half resolution).

| Method | CS-AFM in decoder | Params (M) | FPS | mIoU |
|---|---|---|---|---|
| CSFNet-1 | ✓ | 11.31 | 106.1 | 74.73 |
| CSFNet-1 | – | 11.30 | 111.2 | 74.28 |
| CSFNet-2 | ✓ | 19.37 | 72.3 | 76.36 |
| CSFNet-2 | – | 19.36 | 77.1 | 76.01 |

[10] Y. Cheng, R. Cai, Z. Li, X. Zhao, and K. Huang, “Locality-sensitive deconvolution networks with gated fusion for RGB-D indoor semantic segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 3029–3037.
[11] Y. Sun, W. Zuo, and M. Liu, “RTFNet: RGB-thermal fusion network for semantic segmentation of urban scenes,” IEEE Robot. Autom. Lett., vol.
4, no. 3, pp. 2576–2583, Jul. 2019.
[12] X. Hu, K. Yang, L. Fei, and K. Wang, “ACNET: Attention based network to exploit complementary features for RGBD semantic segmentation,” in Proc. ICIP, Sep. 2019, pp. 1440–1444.
[13] Y. Zhang, Y. Yang, C. Xiong, G. Sun, and Y. Guo, “Attention-based dual supervised decoder for RGBD semantic segmentation,” 2022, arXiv:2201.01427.
[14] H. Zhang, V. S. Sheng, X. Xi, Z. Cui, and H. Rong, “Overview of RGBD semantic segmentation based on deep learning,” Journ. Ambient Intell. Hum. Comp., pp. 13627–13645, Apr. 2023.
[15] C. Hazirbas, L. Ma, C. Domokos, and D. Cremers, “FuseNet: Incorporating depth into semantic segmentation via fusion-based CNN architecture,” in Proc. ACCV, 2016, pp. 213–228.
[16] F. Deng et al., “FEANet: Feature-enhanced attention network for RGB-thermal real-time semantic segmentation,” in Proc. IROS, Sep. 2021, pp. 4467–4473.
[17] Y. Zhang, C. Xiong, J. Liu, X. Ye, and G. Sun, “Spatial Information-Guided Adaptive Context-Aware Network for Efficient RGB-D Semantic Segmentation,” IEEE Sensors Journal, vol. 23, no. 19, pp. 23512–23521, Oct. 2023.
[18] D. Seichter, M. Kohler, B. Lewandowski, T. Wengefeld, and H.-M. Gross, “Efficient RGB-D semantic segmentation for indoor scene analysis,” in Proc. ICRA, May 2021, pp. 13525–13531.
[19] M. Liang, J. Hu, C. Bao, H. Feng, F. Deng, and T. L. Lam, “Explicit Attention-Enhanced Fusion for RGB-Thermal Perception Tasks,” IEEE Robot. Autom. Lett., vol. 8, no. 7, pp. 4060–4067, Jul. 2023.
[20] M. Fan, S. Lai, J. Huang, X. Wei, Z. Chai, J. Luo, and X. Wei, “Rethinking BiSeNet for real-time semantic segmentation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 9716–9725.
[21] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2015, pp. 3431–3440.
[22] O. Ronneberger, P.
Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2015, pp. 234–241.
[23] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481–2495, Dec. 2017.
[24] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 2881–2890.
[25] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 4, pp. 834–848, Apr. 2018.
[26] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking atrous convolution for semantic image segmentation,” 2017, arXiv:1706.05587.
[27] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 7794–7803.
[28] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2018, pp. 7132–7141.
[29] Z. Zhong, Z. Q. Lin, R. Bidart, X. Hu, I. B. Daya, Z. Li, W.-S. Zheng, J. Li, and A. Wong, “Squeeze-and-attention networks for semantic segmentation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 13065–13074.
[30] J. Fu et al., “Dual attention network for scene segmentation,” in Proc. CVPR, Jun. 2019, pp. 3146–3154.
[31] S. Zheng et al., “Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers,” in Proc. CVPR, Jun. 2021, pp. 6881–6890.
[32] B. Cheng, A. Schwing, and A. Kirillov, “Per-pixel classification is not all you need for semantic segmentation,” Proc. NeurIPS, vol.
34, pp. 17864–17875, Oct. 2021.
[33] E. Romera, J. M. Alvarez, L. M. Bergasa, and R. Arroyo, “ERFNet: Efficient residual factorized convnet for real-time semantic segmentation,” IEEE Trans. Intell. Transp. Sys., no. 1, pp. 263–272, 2017.
[34] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, “BiSeNet: Bilateral segmentation network for real-time semantic segmentation,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 325–341.
[35] C. Yu, C. Gao, J. Wang, G. Yu, C. Shen, and N. Sang, “BiSeNet V2: Bilateral network with guided aggregation for real-time semantic segmentation,” Int. Jour. Comput. Vis., pp. 3051–3068, Sep. 2021.
[36] J. Peng, Y. Liu, S. Tang, Y. Hao, L. Chu, G. Chen, Z. Wu et al., “PP-LiteSeg: A superior real-time semantic segmentation model,” 2022, arXiv:2204.02681.
[37] J. Jiang, L. Zheng, F. Luo, and Z. Zhang, “RedNet: Residual encoder-decoder network for indoor RGB-D semantic segmentation,” 2018, arXiv:1806.01054.
[38] Y. Sun, W. Zuo, P. Yun, H. Wang, and M. Liu, “FuseSeg: Semantic segmentation of urban scenes based on RGB and thermal data fusion,” IEEE Trans. Autom. Sci. Eng., vol. 18, no. 3, pp. 1000–1011, Jul. 2021.
[39] S. Fan, Z. Wang, Y. Wang, and J. Liu, “SpiderMesh: Spatial-aware demand-guided recursive meshing for RGB-T semantic segmentation,” 2023, arXiv:2303.08692.
[40] S.-W. Hung, S.-Y. Lo, and H.-M. Hang, “Incorporating luminance, depth and color information by a fusion-based network for semantic segmentation,” in Proc. IEEE Int. Conf. Image Proc., 2019, pp. 2374–2378.
[41] Y. Lv, Z. Liu, and G. Li, “Context-Aware Interaction Network for RGB-T Semantic Segmentation,” IEEE Trans. Multi., vol. 26, pp. 6348–6360, Jan. 2024.
[42] H. Li and Y. Sun, “IGFNet: Illumination-Guided Fusion Network for Semantic Scene Understanding using RGB-Thermal Images,” in Proc. IEEE Int. Conf. Robot. Biom. (ROBIO), Dec.
2023, pp. 1–6.
[43] U. Shin, K. Lee, I. S. Kweon, and J. Oh, “Complementary random masking for RGB-thermal semantic segmentation,” 2023, arXiv:2303.17386.
[44] R. Yan, K. Yang, and K. Wang, “NLFNet: Non-local fusion towards generalized multimodal semantic segmentation across RGB-depth, polarization, and thermal images,” in Proc. IEEE Int. Conf. Robot. Biom. (ROBIO), Dec. 2021, pp. 1129–1135.
[45] Q. Zhang, S. Zhao, Y. Luo, D. Zhang, N. Huang, and J. Han, “ABMDRNet: Adaptive-weighted bi-directional modality difference reduction network for RGB-T semantic segmentation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 2633–2642.
[46] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2009, pp. 248–255.
[47] M. Oršic, I. Krešo, P. Bevandic, and S. Šegvic, “In defense of pretrained ImageNet architectures for real-time semantic segmentation of road-driving images,” in Proc. CVPR, Jun. 2019, pp. 12607–12616. | 8 | 1 | The CSFNet model described in the paper uses a dual- and single-branch architecture with low complexity, specifically designed for faster inference. Given that it is trained on the Cityscapes, MFNet, and ZJU datasets with a moderate number of parameters (around 11.31M to 19.37M), and considering the training settings (batch size of 16 for Cityscapes and 8 for the others), the model is likely to have a moderate training time. Each dataset has a solid number of samples (e.g., Cityscapes has 2,975 for training). Assuming it takes around 300 epochs for Cityscapes and 600 for the others, I estimate the training time under 8 hours on 1 GPU, given the use of an NVIDIA RTX 3090, which has sufficient memory and compute power to handle this efficiently.
| yes | Yes | CV | CSFNet: A Cosine Similarity Fusion Network for Real-Time RGB-X Semantic Segmentation of Driving Scenes | 2024-07-01 0:00:00 | https://github.com/Danial-Qashqai/CSFNet | 1 | https://drive.google.com/file/d/1TugQ16fcxbmPBJD0EPMHHmjdK9IE4SAO/view | 34s * 600 epoch = 5.67 hour | https://drive.google.com/file/d/12nPSCuyG-9-eA3bAaqDBWAzJGoddQyDn/view?usp=sharing | Yes | -- Just need to change the path on the argument while calling training script. All the data in proper structure is in colab file. Also the backbone has been downloaded and add resp. |
MIMIC-III | FLD | [] | Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting | 2024-05-06T00:00:00 | https://arxiv.org/abs/2405.03582v2 | [
"https://github.com/kloetergensc/functional-latent_dynamics"
] | {'MSE': '0.444 ± 0.027'} | [
"MSE",
"NegLL"
] | Given the following paper and codebase:
Paper: Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting
Codebase: https://github.com/kloetergensc/functional-latent_dynamics
Improve the FLD model on the MIMIC-III dataset. The result
should improve on the following metrics: {'MSE': '0.444 ± 0.027'}. You must use only the codebase provided.
| Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting

Christian Klötergens¹², Vijaya Krishna Yalavarthi¹, Maximilian Stubbemann¹², and Lars Schmidt-Thieme¹²
¹ISMLL, University of Hildesheim, Germany {kloetergens, yalavarthi, stubbemann, schmidt-thieme}@ismll.de
²VWFS Data Analytics Research Center

Abstract. Irregularly sampled time series with missing values are often observed in multiple real-world applications such as healthcare, climate, and astronomy. They pose a significant challenge to standard deep learning models that operate only on fully observed and regularly sampled time series. In order to capture the continuous dynamics of irregular time series, many models rely on solving an Ordinary Differential Equation (ODE) in the hidden state. These ODE-based models tend to be slow and require large memory due to sequential operations and a complex ODE solver. As an alternative to complex ODE-based models, we propose a family of models called Functional Latent Dynamics (FLD). Instead of solving an ODE, we use simple curves which exist at all time points to specify the continuous latent state in the model. The coefficients of these curves are learned only from the observed values in the time series, ignoring the missing values. Through extensive experiments, we demonstrate that FLD achieves better performance compared to the best ODE-based model while reducing the runtime and memory overhead. Specifically, FLD requires an order of magnitude less time to infer the forecasts compared to the best-performing forecasting model.

Keywords: Irregularly Sampled Time Series · Missing Values · Forecasting

1 Introduction
Time series forecasting plays a pivotal role in numerous fields, ranging from finance and economics to environmental science and healthcare. A time series is considered multivariate if multiple variables, also known as channels, are observed.
In the realm of time series forecasting, most of the literature considers regular time series, where the time difference between observed points is equal and no observations are missing. However, in real-world applications such as the healthcare domain, different channels are often independently and irregularly observed, leading to an extremely sparse multivariate time series when they are aligned. We refer to these time series as Irregularly Sampled Multivariate Time Series with missing values (IMTS). The forecasting tasks for regular multivariate time series and IMTS are illustrated in Figure 1. (arXiv:2405.03582v2 [cs.LG], 3 Oct 2024.)

Forecasting of IMTS is not well covered in the literature compared to forecasting of regular time series. Machine learning models that are designed for forecasting regular multivariate time series often rely on the relative position of an observation in the series rather than its absolute time, and cannot accommodate missing values. Applying these models to IMTS forecasting is not trivial. More specifically, models need to implement strategies to handle varying observation distances and missing values. The standard method of handling missing values is imputation. However, this approach is usually suboptimal, as the absence of data itself carries information, which is discarded by imputation. Additionally, imputation errors accumulate and heavily affect the final forecasting task. Therefore, IMTS models must incorporate a more advanced method to handle missing values and directly take observation times into account.

Fig. 1: Example of (a) regularly and (b) irregularly sampled time series with two channels. The observations and forecasting targets are marked as black crosses.

Ordinary Differential Equation (ODE)-based models [2,3,1,12,11] have been widely studied for this task.
These models capture the underlying dynamics of continuous time, making them well-suited for IMTS forecasting, where the time intervals between observations vary. However, ODE-based models cannot directly handle missing values, a prevalent occurrence in various application scenarios. Furthermore, they are inefficient in terms of run time, as they operate in a sequential manner similar to recurrent neural networks (RNNs). In this work, we propose a novel family of models called Functional Latent Dynamics (FLD). The hidden states of FLD are governed by a function whose coefficients are derived from the observed time series. The hidden state function can be any curve, such as a polynomial or sine function. As the hidden state function accepts continuous time points as inputs, it can be evaluated at any desired time. Our encoder considers only observed values in the time series and ignores the missing values to parameterize the hidden state function. Finally, a dense fully connected deep neural network is applied to the hidden state to obtain the forecasts. Our approach is capable of utilizing any type of parameterized, differentiable function and can thus be adapted to various forecasting scenarios. FLD serves as an alternative to ODE-based models and can handle both missing values and irregular sampling. By employing simple curve functions to model hidden state dynamics, we demonstrate that the forecasting accuracy of FLD is significantly better than that of ODE-based models and competitive with the state-of-the-art IMTS forecasting models on 4 real-world IMTS datasets. Additional studies on computational efficiency show that FLD significantly outperforms competing models in terms of inference time. Our contributions are as follows.
– We propose Functional Latent Dynamics (FLD), a novel method for IMTS forecasting.
FLD captures latent dynamics in a continuous fashion with parameterized curve functions.
– We propose an approach to incorporate the well-established attention mechanism to learn the coefficients of our curve functions that encode the IMTS.
– We provide a proof of concept on a simple toy dataset that is generated with the Goodwin oscillator model [4], an ODE designed to model enzyme synthesis.
– We conduct extensive experiments on established benchmark tasks. Our results indicate that FLD outperforms state-of-the-art competitors by an order of magnitude in terms of inference time while providing competitive forecasting accuracy.

Our code is publicly available on an anonymous Git repository: https://github.com/kloetergensc/Functional-Latent_Dynamics

2 Problem Formulation
An Irregularly sampled multivariate time series (IMTS) is a sequence x := ((t_1, v_1), …, (t_N, v_N)) of N many pairs, where each pair consists of an observation time point t_n ∈ ℝ and an observation event v_n ∈ X^C := (ℝ ∪ {NaN})^C made at t_n; C ∈ ℕ is the number of channels, v_{n,c} ≠ NaN represents an observed value, and v_{n,c} = NaN represents a missing value. An IMTS forecasting query is a sequence t^q := (t^q_1, …, t^q_K) of time points for which observation values are sought (where min_{k=1:K} t^q_k > max_{n=1:N} t_n). Any sequence y := (y_1, …, y_K) of the same length with values in X^C we call an IMTS forecasting answer. To measure the difference between the ground-truth forecasting answer y (possibly with missing values) and the predicted forecasting answer ŷ (without missing values), a scalar loss function ℓ: ℝ×ℝ → ℝ such as squared error is averaged over all query time points and non-missing observations:

ℓ(y, ŷ) := (1 / Σ_{k=1}^{K} N_k) Σ_{k=1}^{K} Σ_{c=1, y_{k,c} ≠ NaN}^{C} ℓ(y_{k,c}, ŷ_{k,c}),

where N_k = |{c ∈ [C] | y_{k,c} ≠ NaN}| denotes the number of non-missing values of the forecasting answer y_k at time point t^q_k.
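The masked average above is straightforward to implement; the following NumPy sketch is an illustration of the definition, not the repository's PyTorch loss, and averages squared errors over the non-missing ground-truth entries only:

```python
import numpy as np

def masked_mse(y, y_hat):
    """Squared-error loss averaged over non-NaN ground-truth entries.

    y, y_hat: arrays of shape (K, C); NaNs in y mark missing values.
    """
    mask = ~np.isnan(y)                          # observed entries only
    err = (np.where(mask, y, 0.0) - y_hat) ** 2  # zero-fill NaNs, then mask
    return float((err * mask).sum() / mask.sum())

y = np.array([[1.0, np.nan], [2.0, 3.0]])
y_hat = np.array([[1.5, 9.0], [2.0, 2.0]])
print(masked_mse(y, y_hat))  # (0.25 + 0 + 1) / 3
```

Note that the wildly wrong prediction at the missing position (9.0) contributes nothing: the normalizer is the count of observed entries, matching Σ N_k in the formula.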
An IMTS forecasting dataset consists of $M$ triples $(x_m, t^q_m, y_m)$ (called instances), each consisting of a past IMTS $x_m$, a query $t^q_m$ of future time points, and the ground truth observation values $y_m$ for those time points, drawn from an unknown distribution $\rho$. The length $N$ of the past and the number $K$ of queries will vary across instances in general, while the number $C$ of channels is the same for all instances. The IMTS forecasting problem then is, given such a dataset $D := ((x_1, t^q_1, y_1), \dots, (x_M, t^q_M, y_M))$ and a loss function $\ell$, find a model $\mathcal{M} : (X^C)^* \times \mathbb{R}^* \to (\mathbb{R}^C)^*$, where $^*$ denotes finite sequences, such that its expected forecasting loss is minimal:

$$\mathcal{L}(\mathcal{M}; \rho) := \mathbb{E}_{(x, t^q, y) \sim \rho}\left[\ell(y, \mathcal{M}(x, t^q))\right]$$

3 Background

ODE-based models [2,3,1,11,12] are a family of continuous-time models wherein the hidden state $z(\tau)$ is the solution of an initial value problem in Ordinary Differential Equations (ODEs):

$$\frac{dz(\tau)}{d\tau} = f(\tau, z(\tau)) \quad \text{where} \quad z(\tau_0) = z_0 \qquad (1)$$

Here, $\tau$ can refer both to observation time points $t$ and query time points $t^q$. $f$ is a trainable neural network that governs the dynamics of the hidden state. The hidden state $z(\tau)$ is defined and can be evaluated at any desired time point. Hence, these models are a natural fit for IMTS, where observation times are continuous. However, a numerical ODE solver is required to infer the hidden state:

$$z_0, \dots, z_N := \text{ODESolve}(f, z_0, (\tau_0, \dots, \tau_N)) \qquad (2)$$

Here, $z_n$ is the hidden state for $\tau_n$ and $z_0$ is the initial value. GRU-ODE-Bayes [3] integrates a continuous version of Gated Recurrent Units (GRU) into the neural ODE architecture and updates $z(t)$ with Bayesian inference. LinODEnet [12] replaces the neural ODE with a linear ODE, in which the ODE solutions are computed by a linear layer. Using a linear ODE enables the model to omit the ODE solver.
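To make the ODESolve of eq. (2) concrete, here is a deliberately naive fixed-step Euler version of it. This is an illustrative assumption, not what the cited models use (they rely on learned $f$ and adaptive solvers such as dopri5); note the strictly sequential loop that the paper identifies as the runtime bottleneck:

```python
import numpy as np

def ode_solve_euler(f, z0, times, n_sub=1000):
    """Toy ODESolve from eq. (2): integrate dz/dtau = f(tau, z) from one
    time point to the next with fixed-step Euler. The hidden states must
    be computed one after another, like in an RNN."""
    z = np.asarray(z0, dtype=float)
    states = [z]
    for t0, t1 in zip(times[:-1], times[1:]):
        h = (t1 - t0) / n_sub
        tau = t0
        for _ in range(n_sub):       # sequential inner steps
            z = z + h * f(tau, z)
            tau += h
        states.append(z)
    return states

# dz/dtau = -z has the closed-form solution z(t) = z0 * exp(-t).
zs = ode_solve_euler(lambda tau, z: -z, [1.0], [0.0, 1.0])
print(zs[-1][0])   # close to exp(-1) ~ 0.3679
```

The sequential dependence of each $z_n$ on $z_{n-1}$ is exactly what FLD removes by evaluating a closed-form curve instead.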
For updates at observations, the LinODEnet model incorporates Kalman filtering to ensure the self-consistency property, where the state of the model only changes when the observation deviates from the model prediction. Continuous Recurrent Units (CRU) [11] replace the ODE with a Stochastic Differential Equation (SDE). Using an SDE has the benefit that the change of the latent state over any time frame can be computed in closed form with continuous-discrete Kalman filtering. Related to neural ODEs, Neural Flows [1] apply invertible networks to directly model the solution curves of ODEs, rendering the ODE solver obsolete. While ODE-based models have the advantage of learning from continuous-time observations, they require a complex numerical ODE solver, which is slow [1].

[Fig. 2: Example of FLD with sine functions as a 3-dimensional hidden state. The parameters θ of the hidden state function g(·;θ) are inferred by aggregating the observations with the attention-based FLD-Encoder. The hidden state at the query times is acquired by following g(t^q;θ) and decoded by a neural network (NN_out).]

Additionally, they process the observations of an IMTS sequentially, worsening the run time, and also increase the memory requirements. Furthermore, ODE-based models cannot directly handle missing values. Typically, they require missing-value indicators which act as additional input channels in the series, complicating the learning process. Substantially different from neural ODEs, GraFITi [17] encodes time series as graphs and solves the forecasting problem using graph neural networks. The model showed superior forecasting accuracy on the established benchmark datasets, while having significantly faster inference than ODE-based models.
4 Functional Latent Dynamics

We introduce a family of models called Functional Latent Dynamics as an alternative to ODE-based models. Here, we use simple curves to specify the hidden state. Specifically, we replace ODESolve in eq. (2) with curves such as polynomial or sine functions. The latent state $z_n$ is given as $z_n := g(\tau_n; \theta)$, where $g$ is a curve with coefficients $\theta$. Inferring the hidden state at any time point with a function is computationally efficient if that function is simple and does not depend on other time points. A large portion of the literature applies sequential models such as RNNs to learn the inductive bias from the causal nature of time series. However, they are slow, as they have to operate sequentially. Alternatively, recent transformer-based works in similar domains [10,15] show that we can achieve state-of-the-art performance even without applying a sequential model. Hence, in this work, we use simple curves such as polynomial (linear (FLD-L) in eq. (3), quadratic (FLD-Q) in eq. (4)) or sine (FLD-S) functions (in eq. (5)).

Algorithm 1 Functional Latent Dynamics
Require: Observed IMTS $x$, query time points $t^q$, latent function $g$
1: $\theta \leftarrow$ FLD-Encoder($x$)  ▷ Compute the function coefficients
2: for $k = 1, \dots, K$ do
3:   $z_k \leftarrow g(t^q_k, \theta)$  ▷ Compute the latent state
4:   $\hat{y}_k \leftarrow \text{NN}_{\text{out}}(z_k)$  ▷ Make the prediction
5: return $(\hat{y}_k)_{k=1:K}$

$$g_{\text{lin}}(t; \theta) := \theta_1 t + \theta_2, \qquad \theta = (\theta_1, \theta_2) \in \mathbb{R}^{2 \times L} \qquad (3)$$
$$g_{\text{quad}}(t; \theta) := \theta_1 t^2 + \theta_2 t + \theta_3, \qquad \theta = (\theta_1, \theta_2, \theta_3) \in \mathbb{R}^{3 \times L} \qquad (4)$$
$$g_{\text{sin}}(t; \theta) := \theta_1 \sin(\theta_2 + \theta_3 t) + \theta_4, \qquad \theta = (\theta_1, \theta_2, \theta_3, \theta_4) \in \mathbb{R}^{4 \times L} \qquad (5)$$

Here, $\sin$ is applied coordinate-wise. Once we have computed the latent state $z$, we apply a multilayer feedforward neural network ($\text{NN}_{\text{out}}$) to compute $\hat{y}$ via $\hat{y}_k = \text{NN}_{\text{out}}(z_k)$.

5 Inferring Coefficients

Values of $\theta$ are computed from the observed time series $X$ using the FLD-Encoder. First, we convert $X$ into $C$ tuples $x^{(1)}, \dots, x^{(C)}$ where $x^{(c)} = (t^{(c)}, v^{(c)})$. Here, $t^{(c)} = (t^{(c)}_1, \dots, t^{(c)}_{N_c})$ and $v^{(c)} = (v^{(c)}_1, \dots$
$\dots, v^{(c)}_{N_c})$ represent the observation time points and values in channel $c$, respectively, i.e., the time points with no missing values in channel $c$ and the corresponding values. We pass all the tuples $x^{(c)}$ to a multi-head attention based encoder. We begin with time embeddings.

Continuous time embeddings. Our attention-based FLD-Encoder consists of $H$ heads, and for each head $h$, we provide a $D$-dimensional embedding $\phi^h : \mathbb{R} \to \mathbb{R}^D$ of time points:

$$\phi^h_d(t) := \begin{cases} a_{dh} t + b_{dh} & \text{if } d = 1 \\ \sin(a_{dh} t + b_{dh}) & \text{if } 1 < d \leq D \end{cases} \qquad (6)$$

Here, $a_{dh}$ and $b_{dh}$ are trainable parameters. This embedding helps to learn periodic terms from the sinusoidal embeddings and non-periodic terms from the linear embedding [13].

[Fig. 3: The FLD-Encoder infers coefficients θ to model the hidden dynamics of an IMTS. The channel observations are aggregated with attention (Attn), concatenated (//), and combined with a feed forward layer (FF).]

Multi-head attention encoder. In the following, $Q^h \in \mathbb{R}^{R \times D}$ is a matrix of trainable parameters where $Q^h_r$ provides a vector representation of $\theta_r$, with $R = |\theta|$. $K^{h,c} := \phi^h(t^{(c)})$ is the continuous embedding of the time points in $t^{(c)}$, and $V^c = v^{(c)}$. $\text{FF} : \mathbb{R}^{HC} \to \mathbb{R}^L$ is a single feed forward layer. Note that, similar to the scaled dot-product attention in [16], softmax is applied row-wise. The presence of missing values in the data makes it challenging to apply multi-head attention directly. Hence, we modify it as follows:

$$\theta := \text{FF}(\hat{\theta}) \in \mathbb{R}^{R \times L}$$
$$\hat{\theta} := [\hat{\theta}^{1,1}, \dots, \hat{\theta}^{1,C}, \dots, \hat{\theta}^{H,1}, \dots, \hat{\theta}^{H,C}] \in \mathbb{R}^{R \times HC}$$
$$\hat{\theta}^{h,c} := A^{h,c} V^c \in \mathbb{R}^{R \times 1}$$
$$A^{h,c} := \text{softmax}\left(Q^h (K^{h,c})^T / \sqrt{D}\right) \in \mathbb{R}^{R \times N_c}$$

A forward pass of IMTS forecasting using the proposed model is presented in Algorithm 1.

Delineating from the mTAN encoder. Our encoder shares some features with the mTAN encoder [13]. The mTAN encoder is used to convert an IMTS into a fully observed, regularly sampled time series in the latent space.
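The encoder and curve machinery described in Sections 4 and 5 can be assembled into a tiny end-to-end FLD-L forward pass: the continuous time embedding of eq. (6), the per-channel attention that produces the columns of θ̂, a feed-forward combination into θ, the linear curve of eq. (3), and a decoder. This is a hedged NumPy sketch; all weights are random stand-ins for trained parameters, and the real model lives in the linked repository:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, D, R, L = 2, 2, 4, 2, 3   # channels, heads, embed dim, |theta| rows (FLD-L), latent dim

# Trainable parameters (random stand-ins here)
a = rng.normal(size=(D, H)); b = rng.normal(size=(D, H))   # time embedding, eq. (6)
Q = rng.normal(size=(H, R, D))                             # per-head query matrices
W_ff = rng.normal(size=(H * C, L))                         # single feed-forward layer
W_out = rng.normal(size=(L,))                              # stand-in NN_out (linear)

def phi(ts, h):
    """Continuous time embedding of eq. (6): first dim linear, rest sinusoidal."""
    raw = np.outer(ts, a[:, h]) + b[:, h]                  # (N_c, D)
    emb = np.sin(raw)
    emb[:, 0] = raw[:, 0]
    return emb

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fld_encoder(channels):
    """channels: list of (t_obs, v_obs) per channel, missing values already dropped."""
    cols = []
    for h in range(H):
        for t_obs, v_obs in channels:
            K = phi(np.asarray(t_obs), h)                  # (N_c, D)
            A = softmax(Q[h] @ K.T / np.sqrt(D))           # (R, N_c), rows sum to 1
            cols.append(A @ np.asarray(v_obs))             # one theta-hat column, (R,)
    theta_hat = np.stack(cols, axis=1)                     # (R, H*C)
    return theta_hat @ W_ff                                # theta: (R, L)

def fld_forward(channels, t_query):
    theta = fld_encoder(channels)
    g = lambda t: theta[0] * t + theta[1]                  # FLD-L, eq. (3)
    return np.array([g(tq) @ W_out for tq in t_query])     # Algorithm 1

x = [([0.1, 0.4, 0.9], [1.0, -0.5, 2.0]),                  # channel 1 observations
     ([0.3, 0.7], [0.2, 0.8])]                             # channel 2 observations
y_hat = fld_forward(x, t_query=[1.0, 1.5, 2.0])
print(y_hat.shape)                                         # (3,)
```

Because each channel attends only over its own observed time points, missing values never enter the computation, which is the key difference from applying standard multi-head attention directly.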
In contrast, the goal of our encoder is to compute the coefficients $\theta$ rather than to convert the input into another time series. Hence, our attention query is a trainable matrix instead of embedded reference time points.

6 Modelling Goodwin Oscillators with FLD-L

FLD operates on the assumption that complex functions can be modeled by combining multiple simple curves with a deep neural network. To investigate FLD-L's ability to learn non-linear dynamics, we conduct an experiment with time series generated by the Goodwin oscillator model [4], which describes negative feedback interactions of cells at the molecular level. For our experiments we use the implementation that was published in CellML [8]. The dataset samples have two input channels and were generated by varying the constants and initial states of the Goodwin oscillator. Figure 4a shows a sample generated by the oscillator and FLD-L's prediction. Furthermore, we plot the hidden states that the trained FLD-L model inferred for that sample in Figure 4b. The experiment on the synthetic Goodwin dataset demonstrates that FLD is capable of precisely inferring a non-linear time series, although the hidden states develop linearly over time.

[Fig. 4: Experiment on synthetic data created by the Goodwin oscillator model. (a) Ground truth and FLD-L's prediction; (b) the inferred hidden states.]

7 Benchmark Experiments

We provide details about the tasks, datasets, and models that were used in our experiments. To ensure a fair comparison with previous work, we utilize established benchmark datasets and protocols.

7.1 Datasets

Following the IMTS forecasting literature [3,1,17,12], we conduct experiments on four different datasets: USHCN, Physionet-2012, MIMIC-III, and MIMIC-IV. USHCN [9] contains measurements of 5 variables from 1280 weather stations in the USA. Following the preprocessing proposed by De Brouwer et al.
[3], most of the 150+ years of observation time is ignored and only measurements from 1996-2000 are used in the experiments. Furthermore, USHCN is made artificially sparse by keeping only a randomly sampled 5% of the measurements. Physionet-2012 [14] comprises the medical records of 12,000 ICU patients. During the initial 48 hours of admission, measurements of 37 vital signs were recorded. Following the approach used in previous studies [3,1,17,12], we pre-process the dataset to create hourly observations, resulting in a maximum of 48 observations in each series.

Table 1: Statistics of the datasets used for experiments. Max. Len. refers to the maximum sequence length among samples. Max. Obs. refers to the maximum number of non-missing observations among samples. Sparsity refers to the percentage of missing values over all samples.

| Name | #Sampl. | #Chann. | Max. Len. | Max. Obs. | Spars. |
|---|---|---|---|---|---|
| USHCN | 1,114 | 5 | 370 | 398 | 78.0% |
| Physionet-2012 | 11,981 | 37 | 48 | 606 | 80.4% |
| MIMIC-III | 21,250 | 96 | 97 | 677 | 94.2% |
| MIMIC-IV | 17,874 | 102 | 920 | 1642 | 97.8% |

MIMIC-III [5] is a widely utilized medical dataset that provides valuable insights into the care of ICU patients. In order to capture a diverse range of patient characteristics and medical conditions, 96 variables were meticulously observed and documented. To ensure consistency, we followed the preprocessing steps outlined in previous studies [1,3,11]. Specifically, we round the recorded observations into 30-minute intervals and only use observations from the 48 hours following admission. Patients who spend less than 48 hours in the ICU are disregarded. MIMIC-IV [6] represents an expansion and improvement over MIMIC-III, offering an updated and enriched dataset that enables more comprehensive exploration and analysis.
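The 30-minute rounding described for MIMIC-III can be sketched as follows. This is a hedged illustration of the idea, not the benchmark's actual preprocessing code; averaging duplicate observations that land in the same bin is one plausible choice we assume here:

```python
import numpy as np

def bin_observations(times_h, values, interval_h=0.5, horizon_h=48.0):
    """Round observation times (in hours) to fixed intervals, keep only
    the first `horizon_h` hours, and average duplicates per bin."""
    times_h = np.asarray(times_h)
    values = np.asarray(values, dtype=float)
    keep = times_h < horizon_h                       # only the first 48 h
    bins = np.round(times_h[keep] / interval_h) * interval_h
    out = {}
    for t, v in zip(bins, values[keep]):
        out.setdefault(t, []).append(v)
    return {float(t): float(np.mean(vs)) for t, vs in sorted(out.items())}

# 0.1 h and 0.2 h fall into the 0.0 bin (averaged), 1.3 h rounds to 1.5,
# and the 49 h observation is outside the 48 h horizon.
print(bin_observations([0.1, 0.2, 1.3, 49.0], [1.0, 3.0, 5.0, 9.0]))
# {0.0: 2.0, 1.5: 5.0}
```

For MIMIC-IV the text states the same procedure with `interval_h` set to 1 minute (1/60 of an hour) instead.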
It incorporates new data sources and additional patient records, providing an enhanced foundation for researchers to delve into temporal patterns, forecast future medical events, and gain valuable insights into critical care management. Strictly following [1,17], we preprocess MIMIC-IV similarly to MIMIC-III, but round observations into 1-minute intervals.

7.2 Competing Models

We compare FLD models against members of the neural ODE family: GRU-ODE-Bayes [3], Neural Flows [1], LinODEnet [12], and CRU [11]. Besides the ODE-based models, we also compare our results to GraFITi [17], the state-of-the-art in IMTS forecasting. mTAN [13] was not introduced as a forecasting model, but we still selected it as one of our competitors because the FLD-Encoder is related to the mTAN encoder. The model is trained using the training routine that was originally proposed for interpolation purposes. In the experimental results of this work, mTAN refers to the mTAND-Full architecture, as described by Shukla et al. [13].

Table 2: FLD's hyperparameter search space for the benchmark experiments

| Hyperparameter | Search Space |
|---|---|
| Hidden Dimension | {32, 128, 256, 512} |
| Attention Heads | {4, 8} |
| Decoder Depth | {2, 4} |
| Embedding Size per Attention Head | {2, 4, 8} |

7.3 Task Protocol

We adopted the experimental protocol as published by Yalavarthi et al. [17]. Our experiments on IMTS forecasting involve varying the observation range and forecasting horizon across multiple tasks for each dataset to assess different model capabilities. The widely used 75%-3 task requires models to predict the next three time steps after observing 75% of the time series, equating to 36 hours for the healthcare datasets and the first three years for the USHCN dataset.
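As a concrete reading of the 75%-3 split just described, the sketch below partitions a series' time stamps into an observation range and a short forecasting query. This is our own simplified interpretation; the benchmark's TSDM-based implementation is the authoritative version:

```python
def make_task(times, obs_frac=0.75, horizon_steps=3):
    """Time points in the first obs_frac of the time range become
    observations; the next horizon_steps time points form the query."""
    t0, t1 = times[0], times[-1]
    cut = t0 + obs_frac * (t1 - t0)
    obs = [t for t in times if t <= cut]
    query = [t for t in times if t > cut][:horizon_steps]
    return obs, query

times = list(range(0, 48))       # hourly series over 48 h
obs, query = make_task(times)
print(len(obs), query)           # 36 [36, 37, 38]
```

The 50%-50% and 75%-25% tasks follow by changing `obs_frac` and replacing the fixed three-step horizon with the remaining fraction of time points.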
To challenge models with a longer forecasting horizon, we also undertake the 50%-50% task, where models predict the second half of an IMTS using the first half as observations, meaning 24 hours of prediction for the medical datasets and 2 years for the USHCN dataset. Additionally, we evaluate the models on the 75%-25% task to add a task in between the two previous ones. Here, models see the observations from the initial 36 h / 3 years and forecast the remaining 12 h / 1 year. For hyperparameter search and early stopping we take a validation set consisting of 20% of the available data. Furthermore, we set aside another 10% of the data as unseen for the final evaluation (test data). We applied 5-fold cross-validation, where each fold reserves different subsets for validation and testing. The implementation of the experiments is mainly based on the TSDM package provided by Scholz et al. [12]. We run our experiments on an Nvidia 2080 Ti GPU with 12 GB.

7.4 Hyperparameters

Regarding hyperparameter optimization for competing models, we use the same hyperparameter search spaces and optimization protocol as introduced by Yalavarthi et al. [17]. For each task, we randomly sample a maximum of 10 sets of hyperparameters and fully train models with the respective configurations on one fold. We select the model with the lowest MSE on the validation data of that fold and then train it on each of the 5 folds to compute the mean and standard deviation of the test loss. The search space for the FLD models is described in Table 2. While we vary the number of hidden layers in the decoder networks, we fix the width of each layer at the dimension of the hidden states $z$. For all models we use the Adam optimizer [7]. For our models we use an initial learning rate of 0.0001. Furthermore, we add an L2-regularization of weight 0.001.

Table 3: Test MSE for forecasting the next three time steps after 75% observation time.
OOM refers to out of memory. †: results reported by Yalavarthi et al. [17]. We highlight the best model in bold and the second best in italics.

| | USHCN | Physionet-12 | MIMIC-III | MIMIC-IV |
|---|---|---|---|---|
| GraFITi† | 0.272±0.047 | 0.286±0.001 | 0.396±0.030 | 0.225±0.001 |
| mTAN† | 0.300±0.038 | 0.315±0.002 | 0.540±0.036 | OOM |
| GRU-ODE† | 0.401±0.089 | 0.329±0.004 | 0.476±0.043 | 0.360±0.001 |
| Neural Flow† | 0.414±0.102 | 0.326±0.004 | 0.477±0.041 | 0.354±0.001 |
| LinODE† | 0.300±0.060 | 0.299±0.001 | 0.446±0.033 | 0.272±0.002 |
| CRU† | 0.290±0.060 | 0.379±0.003 | 0.592±0.049 | OOM |
| FLD-L | 0.262±0.040 | 0.297±0.000 | 0.444±0.027 | 0.274±0.000 |
| FLD-Q | 0.258±0.043 | 0.301±0.000 | 0.451±0.024 | 0.280±0.000 |
| FLD-S | 0.282±0.030 | 0.307±0.000 | 0.450±0.029 | 0.313±0.002 |

7.5 Results

We compare the forecasting accuracy of FLD-L, FLD-Q and FLD-S with that of the competition by conducting experiments using various observation times and forecasting horizons. Since we follow the experimental protocols from [17], we report their results whenever possible and run those experiments that have not been conducted yet.

Table 4: Test MSE for forecasting the next 25% after 75% observation time. OOM refers to out of memory. †: results reported by Yalavarthi et al. [17]. We highlight the best model in bold and the second best in italics.

| | USHCN | Physionet-12 | MIMIC-III | MIMIC-IV |
|---|---|---|---|---|
| GraFITi | 0.499±0.152 | 0.365±0.001† | 0.438±0.014† | 0.285±0.002† |
| mTAN | 0.579±0.182 | 0.514±0.017 | 0.985±0.055 | OOM |
| GRU-ODE† | 0.914±0.343 | 0.432±0.003† | 0.591±0.018† | 0.366±0.154† |
| Neural Flow | 1.019±0.338 | 0.431±0.001† | 0.588±0.014† | 0.465±0.003† |
| LinODEnet | 0.923±0.877 | 0.373±0.001† | 0.477±0.021† | 0.335±0.002† |
| CRU | 0.549±0.238 | 0.435±0.001† | 0.575±0.020† | OOM |
| FLD-L | 0.645±0.150 | 0.360±0.001 | 0.552±0.032 | 0.321±0.000 |
| FLD-Q | 0.601±0.097 | 0.366±0.000 | 0.559±0.028 | 0.336±0.000 |
| FLD-S | 0.526±0.205 | 0.366±0.000 | 0.558±0.033 | 0.347±0.001 |

Table 3, Table 4 and Table 5 show the test MSEs for each model on the 75%-3, 75%-25% and 50%-50% tasks, respectively. Based on our results, we do
not observe an FLD variant which consistently outperforms the other two members of the model family. For most datasets, FLD-L proves to be the best fit for the short and medium forecasting range, while FLD-S has the best accuracy on two datasets for the 50%-50% task among the FLD variants. FLD-Q makes the best predictions on USHCN for the 75%-3 task, where it even surpasses the state-of-the-art model GraFITi [17]. However, USHCN carries large standard deviations across all models and tasks, especially for the longer forecasting ranges. Consequently, findings on this dataset are less conclusive. GraFITi reports superior forecasting accuracies on 10 out of 12 dataset/task combinations, but on Physionet-2012 and the 75%-25% task FLD-L improves on GraFITi, making it the state-of-the-art in this part of the evaluation. When we compare FLD's performance to the ODE-based models, we observe that the most accurate FLD variant outperforms the best ODE-based model in 7 out of 12 cases. In particular, LinODEnet outperforms FLD on all datasets for the 50%-50% task.

Table 5: Test MSE for forecasting the next 50% after 50% observation time. OOM refers to out of memory. †: results reported by Yalavarthi et al. [17]. We highlight the best model in bold and the second best in italics.

| | USHCN | Physionet-12 | MIMIC-III | MIMIC-IV |
|---|---|---|---|---|
| GraFITi | 0.623±0.153 | 0.401±0.001† | 0.491±0.014† | 0.285±0.002† |
| mTAN | 0.721±0.198 | 0.632±0.023 | 1.016±0.084 | OOM |
| GRU-ODE† | 1.019±0.342 | 0.505±0.001† | 0.653±0.023† | 0.439±0.003† |
| Neural Flow | 1.019±0.338 | 0.506±0.002† | 0.651±0.017† | 0.465±0.003† |
| LinODEnet | 0.724±0.185 | 0.411±0.001† | 0.531±0.022† | 0.336±0.002† |
| CRU | 0.729±0.185 | 0.467±0.002† | 0.619±0.028† | OOM |
| FLD-L | 0.874±0.212 | 0.415±0.000 | 0.545±0.026 | 0.346±0.001 |
| FLD-Q | 0.888±0.236 | 0.424±0.000 | 0.554±0.025 | 0.358±0.000 |
| FLD-S | 1.141±1.163 | 0.414±0.000 | 0.536±0.023 | 0.359±0.001 |

8 Efficiency

We evaluate FLD's efficiency with respect to inference time.
For that experiment, each benchmark model is trained on the 50%-50% task of the Physionet-2012, MIMIC-III, MIMIC-IV, and USHCN datasets. Efficiency comparison of machine learning models is a complex task, since different hyperparameter configurations may introduce a trade-off between the number of parameters and prediction accuracy. Scholars typically compare the inference time of hyperparameter sets that were trained to optimize the training objective, in our case forecasting accuracy. However, we argue that this strategy is not necessarily fair to all models, since it ignores the trade-off between efficiency and accuracy. For example, there might exist a hyperparameter configuration that is barely suboptimal with regard to accuracy, but excels in terms of inference time. Consequently, we compare our model to the fastest hyperparameter configuration from each architecture's search space. This provides a lower bound on the inference time of the competing models and is only unfair to FLD.

Table 6: Comparison of inference time in seconds on the 50%-50% task. OOM indicates a memory error.

| | USHCN | Physionet-12 | MIMIC-III | MIMIC-IV |
|---|---|---|---|---|
| GraFITi | 0.176 | 2.775 | 3.640 | 6.719 |
| mTAN | 0.062 | 0.776 | 1.068 | 3.494 |
| GRU-ODE | 5.378 | 38.118 | 46.272 | 154.543 |
| Neural Flow | 1.630 | 2.835 | 6.428 | 44.187 |
| f-CRU | 1.657 | 4.578 | 9.281 | OOM |
| LinODE | 2.852 | 6.294 | 13.776 | 95.050 |
| FLD-L | 0.018 | 0.237 | 0.394 | 2.141 |
| FLD-Q | 0.020 | 0.243 | 0.431 | 2.380 |
| FLD-S | 0.021 | 0.245 | 0.435 | 2.740 |

We assume that for each model the smallest hyperparameter instances provide the fastest inference. For example, with Neural Flows [1], we use only 1 flow layer, as employing multiple flow layers leads to slower inference and training. To infer the hidden state of GRU-ODE-Bayes [3], we opt for the euler solver instead of the dopri5 solver due to its significantly faster inference. Additionally, we use the fast variant of CRU (f-CRU) that was introduced by Schirmer et al. [11].
For FLD-L, we use the hyperparameter set that has been tuned on validation loss for each task, because we found a negligible change in computational speed for different hyperparameters. Table 6 reports the inference time of each model on various datasets. More specifically, we refer to the wall-clock time to predict the complete test data of each dataset, with a batch size of 64. We observe that FLD-L infers predictions significantly faster than the competing models. Since mTAN is the second-fastest model, we conclude that FLD's speed is related to the performant attention-based encoder, since it is closely related to the mTAN encoder. FLD's inference with parameterized curves results in fewer operations and a significant gain in computational speed. Furthermore, keep in mind that mTAN's inference time increases drastically if we add more reference points and parameters. To gain more insight into the trade-off between inference time and forecasting accuracy, we conduct a more detailed efficiency comparison. In Figure 5 we plot the validation MSE and inference time of 10 randomly sampled hyperparameter configurations of FLD-L and GraFITi on the two largest datasets, MIMIC-III and MIMIC-IV.

[Fig. 5: Efficiency comparison of FLD-L and GraFITi. We plot the validation loss and inference time for 10 randomly sampled hyperparameter configurations each of GraFITi and FLD-L. The plots refer to results on the 75%-25% task on (a) MIMIC-III and (b) MIMIC-IV.]

The plot shows that all versions of FLD-L were significantly faster than GraFITi. However, they were also consistently inferior with respect to forecasting accuracy.

9 Conclusion and Future Work

In this work, we introduced a novel approach to forecast irregularly sampled multivariate time series (IMTS). In particular, we proposed Functional Latent Dynamics (FLD), a model family that models the hidden state of an IMTS with a continuous curve function.
This serves as an efficient and accurate alternative to ODE-based models, which have to solve complex differential equations. To be more specific, we outperform all ODE-based models in tasks with a short and medium forecasting range. Additionally, we surpass the IMTS forecasting state-of-the-art model GraFITi [17] on 2 of 12 evaluation tasks. Our models have orders of magnitude faster inference speed when compared to ODE approaches, and several times faster inference speed than GraFITi. Our FLD-Encoder can elegantly handle missing observations in order to compute the coefficients of the curve functions. Even if the hidden states are linear, FLD can learn to forecast non-linear functions, since non-linearity is induced by its decoder. We demonstrate that hidden states that follow linear curve functions are expressive enough to imitate Goodwin oscillators. In the future, we will tackle the problem of combining different forms of curve functions like sine and linear curves. Here, the distant vision is to learn which kinds of curves are appropriate for a specific time-series dataset. As our results indicate that FLD is a performant approach for time-series forecasting, it is promising to transfer it to probabilistic forecasting settings. Here, it is crucial to derive possibilities for FLD to output distributions instead of point predictions. To achieve this, FLD can, for example, be used as an encoder for a conditioning input for a normalizing flow.

References

1. Biloš, M., Sommer, J., Rangapuram, S.S., Januschowski, T., Günnemann, S.: Neural flows: Efficient alternative to neural ODEs. Advances in Neural Information Processing Systems 34, 21325–21337 (2021)
2. Chen, R.T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.)
Advances in Neural Information Processing Systems, vol. 31. Curran Associates, Inc. (2018), https://proceedings.neurips.cc/paper_files/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf
3. De Brouwer, E., Simm, J., Arany, A., Moreau, Y.: GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. Advances in Neural Information Processing Systems 32 (2019)
4. Goodwin, B.C.: Oscillatory behavior in enzymatic control processes. Advances in Enzyme Regulation 3, 425–437 (1965)
5. Johnson, A., Pollard, T.J., Shen, L., Lehman, L.w.H., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L.A., Mark, R.G.: MIMIC-III, a freely accessible critical care database. Scientific Data 3, 160035 (2016)
6. Johnson, A., Bulgarelli, L., Pollard, T., Celi, L.A., Mark, R., Horng, S.: MIMIC-IV-ED. PhysioNet (2021)
7. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
8. Lloyd, C.M., Lawson, J.R., Hunter, P.J., Nielsen, P.F.: The CellML model repository. Bioinformatics 24(18), 2122–2123 (2008)
9. Menne, M.J., Williams Jr, C., Vose, R.S.: United States historical climatology network daily temperature, precipitation, and snow data. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee (2015)
10. Nie, Y., Nguyen, N.H., Sinthong, P., Kalagnanam, J.: A time series is worth 64 words: Long-term forecasting with transformers. arXiv preprint arXiv:2211.14730 (2022)
11. Schirmer, M., Eltayeb, M., Lessmann, S., Rudolph, M.: Modeling irregular time series with continuous recurrent units. In: International Conference on Machine Learning. pp. 19388–19405. PMLR (2022)
12. Scholz, R., Born, S., Duong-Trung, N., Cruz-Bournazou, M.N., Schmidt-Thieme, L.: Latent linear ODEs with neural Kalman filtering for irregular time series forecasting (2023), https://openreview.net/forum?id=a-bD9-0ycs0
13.
Shukla, S.N., Marlin, B.M.: Multi-time attention networks for irregularly sampled time series. arXiv preprint arXiv:2101.10318 (2021)
14. Silva, I., Moody, G., Scott, D.J., Celi, L.A., Mark, R.G.: Predicting in-hospital mortality of ICU patients: The PhysioNet/Computing in Cardiology Challenge 2012. In: 2012 Computing in Cardiology. pp. 245–248. IEEE (2012)
15. Tarasiou, M., Chavez, E., Zafeiriou, S.: ViTs for SITS: Vision transformers for satellite image time series. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10418–10428 (2023)
16. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. Advances in Neural Information Processing Systems 30 (2017)
17. Yalavarthi, V.K., Madusudanan, K., Scholz, R., Ahmed, N., Burchert, J., Javed, S., Born, S., Schmidt-Thieme, L.: Forecasting irregularly sampled time series using graphs (2023) | 8 | 1 | The Functional Latent Dynamics (FLD) model employs a multi-head attention mechanism and a feedforward neural network for its architecture, which entails a moderate level of complexity. Considering the datasets used (which have varying sample sizes and sparsity), realistic estimates suggest that with diverse datasets like Physionet and MIMIC, each containing thousands of samples, the training could be run within the range of 5-8 hours on a single high-end GPU (such as the Nvidia 2080 Ti mentioned). Given that the model addresses missing values directly without imputation, which adds some complexity but remains less intensive than ODE-based models, a single GPU should suffice for efficient training under appropriate hyperparameters and batch sizes, which the authors have also optimized. Thus, it is realistic to conclude that the model could be trained in under 8 hours on one GPU.
| yes | Yes | Time Series | Functional Latent Dynamics for Irregularly Sampled Time Series Forecasting | 2024-05-06 0:00:00 | https://github.com/kloetergensc/functional-latent_dynamics | 1 | https://physionet.org/content/mimiciii/1.4/, Goodwin dataset inside the repo. | 4 min for 100 epochs on the Goodwin dataset | https://colab.research.google.com/drive/1c3AQIu4CXDrXGjt_Ft_W2B3OMPepaQ97?usp=sharing | Yes | -- The MIMIC-III dataset requires a training course on their website to be completed for access. But the model runs on the Goodwin dataset. |
Stanford Cars | ProMetaR | [] | Prompt Learning via Meta-Regularization | 2024-04-01T00:00:00 | https://arxiv.org/abs/2404.00851v1 | [
"https://github.com/mlvlab/prometar"
] | {'Harmonic mean': '76.72'} | [
"Harmonic mean"
] | Given the following paper and codebase:
Paper: Prompt Learning via Meta-Regularization
Codebase: https://github.com/mlvlab/prometar
Improve the ProMetaR model on the Stanford Cars dataset. The result
should improve on the following metrics: {'Harmonic mean': '76.72'}. You must use only the codebase provided.
| Prompt Learning via Meta-Regularization. Jinyoung Park, Juyeon Ko, Hyunwoo J. Kim*, Department of Computer Science and Engineering, Korea University, {lpmn678, juyon98, hyunwoojkim}@korea.ac.kr

Abstract. Pre-trained vision-language models have shown impressive success on various computer vision tasks with their zero-shot generalizability. Recently, prompt learning approaches have been explored to efficiently and effectively adapt the vision-language models to a variety of downstream tasks. However, most existing prompt learning methods suffer from task overfitting, since the general knowledge of the pre-trained vision-language models is forgotten while the prompts are finetuned on a small data set from a specific target task. To address this issue, we propose Prompt Meta-Regularization (ProMetaR) to improve the generalizability of prompt learning for vision-language models. Specifically, ProMetaR meta-learns both the regularizer and the soft prompts to harness the task-specific knowledge from the downstream tasks and the task-agnostic general knowledge from the vision-language models. Further, ProMetaR augments the task to generate multiple virtual tasks to alleviate meta-overfitting. In addition, we provide an analysis to comprehend how ProMetaR improves the generalizability of prompt tuning from the perspective of gradient alignment. Our extensive experiments demonstrate that our ProMetaR improves the generalizability of conventional prompt learning methods under base-to-base/base-to-new and domain generalization settings. The code of ProMetaR is available at https://github.com/mlvlab/ProMetaR.

1. Introduction

Foundational vision-language models (VLMs) have established their precedence in various computer vision applications such as object detection [11, 13, 16, 79], image classification [55, 60, 75], segmentation [44], and captioning [39, 47, 78].
Represented by CLIP [55] and ALIGN [26], these models are pre-trained on millions of image-text pairs with a contrastive loss, creating a shared, well-aligned joint embedding space for vision and language. They have demonstrated their generalization abilities in zero-shot image recognition and object detection. (* is the corresponding author.)

[Figure 1. Performance comparison of ProMetaR with prompt learning methods (Zero-shot CLIP, CoOp, CoCoOp, IVLP (base method), and ProMetaR (Ours)) under the base-to-base/base-to-new setting. We measure average accuracy on (a) the base classes and (b) the new classes over 11 datasets. The red dotted line indicates the performance of the zero-shot CLIP.]

Despite the effectiveness of VLMs on zero-shot image recognition, they suffer from time-consuming manual text prompting for each task, which is inefficient and requires human effort and prior knowledge. Prompt tuning methods such as Context Optimization (CoOp) [83] have arisen as a new paradigm that uses a small number of learnable vectors (soft prompts) instead of manual prompting. They efficiently and effectively adapt models to downstream tasks by optimizing only a small number of learnable vectors (soft prompts) while keeping the VLMs frozen. Recently, some works [28, 34] have further enhanced the performance by applying prompt tuning to both the image and text modalities. Prompt tuning methods enhance traditional generalization capabilities, showing good performance on trained tasks with only a few samples. However, as the soft prompts tend to prioritize task-specific knowledge, they easily overfit the target task and show poor task generalization abilities. In other words, they have difficulty in generalizing to new tasks, resulting in worse performance than CLIP in data-deficient settings.
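The soft-prompt idea behind CoOp can be sketched in a few lines: a small set of learnable context vectors is prepended to each (frozen) class-name embedding, and classification scores are cosine similarities between encoded text and image features. Everything below (the dimensions, the mean-pool "encoder", all weight names) is a toy stand-in for the frozen CLIP encoders, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx, n_cls = 8, 4, 3        # feature dim, context length, number of classes

ctx = rng.normal(size=(n_ctx, d)) * 0.02   # learnable soft prompt (shared context)
cls_tok = rng.normal(size=(n_cls, d))      # frozen class-name token embeddings
W_text = rng.normal(size=(d, d))           # stand-in for the frozen text encoder

def encode_text(ctx, cls_tok):
    """Prepend the context vectors to each class token, 'encode', normalize."""
    prompts = np.concatenate([np.repeat(ctx[None], n_cls, axis=0),
                              cls_tok[:, None]], axis=1)    # (n_cls, n_ctx+1, d)
    feat = prompts.mean(axis=1) @ W_text                    # toy pooling encoder
    return feat / np.linalg.norm(feat, axis=1, keepdims=True)

img_feat = rng.normal(size=(d,))
img_feat /= np.linalg.norm(img_feat)       # normalized image feature
logits = encode_text(ctx, cls_tok) @ img_feat   # cosine similarity per class
print(logits.shape)                        # (3,)
```

During prompt tuning only `ctx` would receive gradients; the overfitting discussed in the text arises because those few vectors are fit to a small, task-specific sample.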
From Figure 1, standard prompt learning methods (CoOp, CoCoOp, and IVLP) show worse performance than zero-shot CLIP on the unseen (new) classes during training, while they perform well on the seen (base) classes. [arXiv:2404.00851v1 [cs.CV] 1 Apr 2024] One remedy to alleviate task overfitting is to learn the prompts with a regularizer. However, regularizers are not always beneficial for all tasks, and it is nontrivial to manually balance the strength of the downstream loss (i.e., the contrastive loss) and the regularizer for each task. So, we propose a framework named ProMetaR (Prompt learning via Meta-Regularization) that jointly meta-learns the regularizer and the soft prompts to improve the generalizability of prompt tuning. Specifically, ProMetaR learns to modulate the gradients of the regularizer to automatically obtain effective regularization with a learnable gradient modulation function. This can be viewed as a bi-level optimization, which can be solved with a meta-learning algorithm. The representations learned through meta-learning algorithms are at high risk of meta-overfitting, meaning that the meta-parameters are overfitted to a small set of validation data (also referred to as meta-data). To address this issue, we present task augmentation to generate diverse virtual tasks by augmenting the validation set. We also show how ProMetaR improves the generalizability of existing prompting methods from the perspective of gradient alignment. Our extensive experiments validate the effectiveness of ProMetaR under the base-to-base/base-to-new generalization and domain generalization settings over 11 image recognition datasets and four variants of the ImageNet dataset. In the base-to-base/base-to-new generalization setting (Figure 1), ProMetaR outperforms existing prompt learning methods on 11 image recognition datasets on both the base classes and the new classes.
It also outperforms CLIP on the new classes while improving performance on the base classes. These results indicate that ProMetaR is effective in both traditional generalization and task generalization. Further, ProMetaR demonstrates competitive performance under the domain generalization setting. We also show that ProMetaR is applicable to various prompting methods as a general training scheme. The contributions of our work can be summarized as: • We propose ProMetaR, a prompt learning framework for improving the generalizability of prompt optimization methods. ProMetaR meta-learns both the regularizer and the learnable prompts, incorporating task augmentation for more effective meta-learning. • We provide a theoretical analysis of how ProMetaR improves the generalizability of prompt learning approaches. • Our experiments demonstrate the effectiveness and robustness of ProMetaR under the base-to-base/base-to-new settings and domain generalization. ProMetaR significantly improves the base prompting methods on the seen (base) and unseen (new) tasks. 2. Related works Meta-Learning. The goal of meta-learning, also known as learning to learn, is to efficiently and effectively adapt to new tasks by leveraging past learning experiences [21]. Applications of learning to learn include learning loss functions [3, 4, 59], learning initializations for task adaptation [14], and few-shot learning [32, 61, 64]. Meta-learning algorithms are typically categorized into three types: metric-based methods [35, 61, 64, 67], memory-based methods [20, 46, 48, 49, 58], and gradient-based methods [15, 37, 50, 56]. Since Model-Agnostic Meta-Learning (MAML) [14] was proposed, gradient-based approaches have been actively explored. However, gradient-based approaches are often prone to meta-overfitting due to insufficient meta-training tasks [2, 22, 23, 31, 72, 85].
Inspired by these works, ProMetaR automatically learns effective regularization in a meta-learning manner for the generalizability of prompting methods and addresses meta-overfitting via task augmentation. Regularization. Regularization is a conventional technique to prevent neural networks from overfitting and to enhance generalization. Conventional regularization methods include constraint-based approaches such as weight decay [42, 76], and input-dependent or parameter-dependent approaches such as ensembling [24, 69], dropout [63], and data augmentation [7, 30, 52, 65, 66, 73, 77]. In this work, we present learning to regularize the soft prompts, together with task augmentation, to improve both the traditional generalization and task generalization abilities. Prompt Learning in Vision-Language Models. Prompt learning has proven to be an effective technique in various natural language processing tasks [36, 40, 41]. Inspired by this success, prompt learning in vision-language models has also been explored [43, 82]. Specifically, CoOp [83] introduces learnable prompts, or soft prompting, which enables efficient finetuning and adaptation of the CLIP [55] text encoder. VPT [27] proposes to optimize prompts in the Vision Transformer (ViT) [10]. Recently, a line of works [6, 28, 74] has presented multimodal prompt tuning methods by combining vision and language prompts. However, many prompt learning approaches in VLMs suffer from the overfitting issue, and several works have been proposed to address it. For example, ProGrad [84] regularizes the learning process by aligning the update of the soft prompts to the task-agnostic general knowledge of the VLMs via gradient alignment. UNIGRAM [38] meta-learns the prompt initialization with a large amount of external data to alleviate the degradation of generalizability. PromptSRC [29] regulates the prompts with mutual agreement maximization and self-ensembling.
Our ProMetaR meta-learns both the learnable prompts and the regularization to improve generalizability without using any external data. 3. Method We present ProMetaR (Prompt learning via Meta-Regularization) to address the limitations of prompt learning in small data regimes. Our framework automatically learns effective regularization via meta-learning; we refer to this as meta-regularization. Remarkably, the proposed method improves performance not only on base tasks (traditional generalization) but also on new tasks (task generalization), addressing the task overfitting problem. We first introduce the background of prompt tuning for vision-language models and of meta-learning. Second, we propose a prompt learning mechanism via meta-regularization to address the overfitting problems of prompting approaches. Finally, we provide a theoretical analysis of ProMetaR to demonstrate how it enhances prompt tuning methods. 3.1. Preliminaries Prompt tuning for VLMs. CLIP [55] provides a well-aligned image-text joint embedding space. The pre-trained CLIP image encoder f and text encoder g can be used for zero-shot visual recognition by constructing a hard prompt. Specifically, CLIP employs text prompts p_y generated by hand-crafted templates (e.g., "A photo of a [CLASS]"). The prediction probability can then be calculated from the visual embedding z = f(x) and textual embeddings w_y = g(p_y). Given N_c classes, the predicted probability of image x being class y is: p(y|x) = exp(sim(z, w_y)/τ) / Σ_{j=1}^{N_c} exp(sim(z, w_j)/τ), (1) where sim(·,·) denotes the cosine similarity, w_y is the textual embedding of class y, and τ is the temperature. Even though hard prompts considerably improve CLIP's performance, this technique requires manual effort to find effective hand-crafted templates for each task, namely 'prompt engineering'.
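As a concrete illustration of the zero-shot prediction rule in Eq. (1), the NumPy sketch below computes class probabilities from an image embedding and per-class text embeddings; the toy embeddings are illustrative stand-ins, not outputs of the actual CLIP encoders.

```python
import numpy as np

def zero_shot_probs(z, W, tau=0.01):
    """Eq. (1): softmax over temperature-scaled cosine similarities
    between an image embedding z and class text embeddings W
    (one row per class)."""
    z = z / np.linalg.norm(z)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = W @ z / tau
    logits -= logits.max()          # for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy 2-D embeddings (illustrative stand-ins for CLIP features):
z = np.array([1.0, 0.0])
W = np.array([[1.0, 0.0],    # class 0: aligned with z
              [0.0, 1.0],    # class 1: orthogonal to z
              [-1.0, 0.0]])  # class 2: opposite to z
p = zero_shot_probs(z, W, tau=0.5)  # class 0 gets the highest probability
```

A smaller temperature τ sharpens the distribution toward the best-matching class, matching the role of τ in Eq. (1).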
Instead of manually optimizing hard prompts, 'prompt tuning' (also known as 'prompt learning') approaches have been proposed to learn context vectors for the textual and/or visual prompts, namely soft prompts [27, 83]. Concretely, by inserting N_t learnable textual prompts θ^txt = {θ^txt_1, ..., θ^txt_{N_t}} and N_v visual prompts θ^vis = {θ^vis_1, ..., θ^vis_{N_v}}, the textual embedding w̃_y for class y and the visual embedding z̃ are obtained as: w̃_y = g([θ^txt_1, ..., θ^txt_{N_t}, c_y]), (2) z̃ = f([CLS, θ^vis_1, ..., θ^vis_{N_v}, E]), (3) where c_y is the word embedding of class y, CLS denotes the class token, and E is the image patch embeddings. With the weights of the visual encoder f and text encoder g frozen, the prompts are optimized with the contrastive loss: L = −Σ_i y_i log [ exp(sim(z̃_i, w̃_i)/τ) / Σ_{j=1}^{N_c} exp(sim(z̃_i, w̃_j)/τ) ], (4) where y_i denotes the one-hot vector for the class of the input x_i. With soft prompts, prompt tuning minimizes manual effort and improves CLIP's performance on downstream tasks. However, since existing prompt tuning methods tend to focus on task-specific knowledge, they often suffer from overfitting, necessitating proper regularization, especially in small data regimes. Meta-learning. The goal of meta-learning, commonly referred to as 'learning-to-learn', is to design models that can quickly adapt to new tasks with small data by leveraging past learning experiences across multiple tasks [21]. Let D denote a meta-training set that consists of training and validation sets across tasks T, i.e., D = {{D^tr_i, D^val_i}}_{i∈T}, where D^tr_i and D^val_i are the training and validation sets of the i-th task.
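The contrastive loss of Eq. (4) can be sketched as a standard cross-entropy over cosine-similarity logits; the prompted embeddings below are random, illustrative stand-ins rather than real encoder outputs.

```python
import numpy as np

def contrastive_loss(Z, W, labels, tau=0.07):
    """Eq. (4): cross-entropy between prompted image embeddings
    Z (B, d) and prompted class text embeddings W (Nc, d), with
    labels (B,) giving each image's class index."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = Z @ W.T / tau                       # (B, Nc) scaled cosines
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 8))                  # 4 toy image embeddings
W = rng.normal(size=(3, 8))                  # 3 toy class embeddings
loss = contrastive_loss(Z, W, labels=np.array([0, 1, 2, 0]))
```

When an image embedding coincides with its own class text embedding, the diagonal similarity dominates and the loss is small; mismatched labels drive it up.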
Then, meta-learning can be formulated as a bi-level optimization problem: min_ϕ Σ_{i∈T} L_valid(θ*_i(ϕ); D^val_i) (5) s.t. θ*_i(ϕ) = argmin_{θ_i} L_train(θ_i; ϕ, D^tr_i), ∀i ∈ T, (6) where L_valid and L_train denote the losses for the upper- and lower-level optimization problems, and θ_i and ϕ are the task-specific parameters for the i-th task and the meta-parameters, respectively. The lower-level optimization in Eq. (6) performs task-specific adaptation/training, leveraging the learning experience encoded in the meta-parameters ϕ and the training set D^tr_i. The upper-level optimization in Eq. (5) searches for meta-parameters ϕ that improve the overall validation losses of the trained task-specific parameters θ*_i(ϕ). A seminal work, MAML [14], can be derived from the formulation above. MAML aims at learning a good initialization that is efficiently adaptable to new tasks. Let ϕ denote the initialization of the model parameters. With task-specific loss functions L_i and the approximation of the lower-level optimization by a one-step update (Eq. (8)), the meta-learning formulation in Eq. (5) is converted into MAML's formulation: min_ϕ Σ_i L_i(θ̂_i(ϕ); D^val_i) (7) s.t. θ̂_i(ϕ) = ϕ − α∇_ϕ L_i(ϕ; D^tr_i), ∀i ∈ T, (8) where α denotes the step size used to adapt the initialization ϕ to the i-th task. The approximation by a one-step update in Eq. (8) enables efficient optimization without the need for iterative lower-level optimization.
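MAML's one-step approximation (Eq. (7)-(8)) can be sketched on toy 1-D quadratic tasks; the task parameters c_i below are invented purely for illustration and have nothing to do with the paper's experiments.

```python
# Minimal MAML-style sketch of Eq. (7)-(8).
# Task i has loss L_i(theta) = (theta - c_i)^2, so each task's
# optimum is theta = c_i; phi is the shared initialization.

def inner_update(phi, c, alpha=0.1):
    # Eq. (8): one gradient step adapting the initialization phi.
    grad = 2.0 * (phi - c)          # d/dphi of (phi - c)^2
    return phi - alpha * grad

def meta_objective(phi, tasks, alpha=0.1):
    # Eq. (7): sum of post-adaptation (validation) losses over tasks.
    return sum((inner_update(phi, c, alpha) - c) ** 2 for c in tasks)

tasks = [-1.0, 0.0, 2.0]
adapted = meta_objective(0.0, tasks)             # loss after one-step adaptation
unadapted = sum((0.0 - c) ** 2 for c in tasks)   # loss with no adaptation
```

One adaptation step shrinks each task's residual by a constant factor (here 0.8 per step), so the post-adaptation loss is strictly below the unadapted loss; the outer loop would then optimize phi itself through this one-step update.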
[Figure 2. ProMetaR learns the soft prompts Θ = {θ^vis, θ^txt} with meta-regularization to generalize well on new tasks without losing the generalizability of the pretrained VLMs (e.g., CLIP). In the inner loop (Eq. (19)), we adapt the soft prompts Θ with the gradients g of the loss L and the modulated gradients M_ϕ(g_reg; g). In the outer loop (Eq. (20), (21)), the soft prompts Θ and the gradient modulation function ϕ are updated on the augmented validation set Aug(D^val).]
The image encoder f and text encoder g of the pretrained vision-language models are frozen during the training phase. 3.2. Prompt learning via meta-regularization We propose a novel framework named Prompt Learning via Meta-Regularization (ProMetaR) to improve the generalizability of existing prompt tuning methods. Our framework adopts meta-learning to learn both the soft prompts and the regularizer. In addition, we incorporate task augmentation into our framework to generate diverse tasks and alleviate meta-overfitting. Figure 2 delineates the overall meta-learning pipeline of the proposed method. Prompt tuning optimizes prompts to adapt pre-trained models, e.g., vision-language models (VLMs), to specific tasks by minimizing a loss: min_Θ L(Θ; D^tr), (9) where Θ = {θ^txt, θ^vis} denotes the learnable prompts and D^tr is the training set of the target downstream task. Since the goal of prompt tuning is sample-efficient adaptation of pre-trained models, the training set for prompt tuning is usually small. Thus, prompt tuning methods often suffer from overfitting, showing inferior performance compared to even zero-shot VLMs. To address this problem, we introduce a regularizer R that penalizes large changes in representations: R_vis = Σ_i |z̃_i − z_i|, R_txt = Σ_j |w̃_j − w_j|, (10) where z, w denote the original visual and textual embeddings, while z̃, w̃ represent the corresponding embeddings obtained with the prompts Θ. Then, we have: min_Θ L(Θ; D^tr) + λR(Θ; D^tr), (11) where λ ∈ R+ is the regularization strength and R unifies R_vis and R_txt. However, the regularizer may not always be helpful, and manually adjusting its strength is nontrivial. So, we learn the regularizer to automatically balance it with the main loss, which can be formulated as a bi-level optimization: min_{Θ,ϕ} L(Θ*(ϕ); D^val) (12) s.t. Θ*(ϕ) = argmin_Θ L(Θ; D^tr) + R_ϕ(Θ; D^tr), where Θ is a meta-parameter for better adaptation, and ϕ is also a meta-parameter that learns the strength of the regularizer R.
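The representation regularizer of Eq. (10) is a simple L1 drift penalty on the prompted embeddings; a sketch with random stand-in embeddings (the combined objective of Eq. (11) would add λ times this value to the task loss):

```python
import numpy as np

def representation_regularizer(Z_tilde, Z, W_tilde, W):
    """Eq. (10): L1 penalty on how far the prompted embeddings
    (Z_tilde, W_tilde) drift from the original zero-shot embeddings
    (Z, W). The paper's R unifies the visual and textual terms."""
    r_vis = np.abs(Z_tilde - Z).sum()
    r_txt = np.abs(W_tilde - W).sum()
    return r_vis + r_txt

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 8))              # stand-in visual embeddings
W = rng.normal(size=(3, 8))              # stand-in textual embeddings
drift = 0.05 * rng.normal(size=Z.shape)  # small visual drift only
r = representation_regularizer(Z + drift, Z, W, W)
# Eq. (11) would then minimize: task_loss + lam * r, with lam > 0.
```

The penalty is zero exactly when the prompted embeddings match the originals, so it anchors the prompts to the pretrained model's representations.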
Similar to Eq. (8) in MAML, using the one-step update approximation, Eq. (12) can be rewritten as: min_{Θ,ϕ} L(Θ̂(ϕ); D^val) (13) s.t. Θ̂(ϕ) = Θ − α(g + M_ϕ(g_reg; g)), (14) where g = ∇_Θ L(Θ; D^tr) and g_reg = ∇_Θ R(Θ; D^tr), and M_ϕ is the gradient modulation function with parameter ϕ that adaptively adjusts g_reg considering g as: M_ϕ(g_reg; g) = σ(m_ϕ) ⊙ g_reg, (15) where σ is the sigmoid function and ⊙ is the Hadamard product. The modulation vector m_ϕ is computed by MLP_ϕ(g || g_reg), considering the gradients of both the loss and the regularizer. By learning the regularizer, we address the overfitting problem of prompt learning methods. We further extend our framework to boost generalization performance on new tasks (task generalization) by generating diverse tasks. To this end, we incorporate task augmentation into our framework as: min_{Θ,ϕ} E[L(Θ̂(ϕ); Aug(D^val))] (16) s.t. Θ̂(ϕ) = Θ − α(g + M_ϕ(g_reg; g)), where Aug(·) is the task augmentation operation. The task augmentation generates new labels to augment many tasks, which encourages the parameters to be optimized for diverse tasks. The augmented tasks can be viewed as a virtually large meta-validation set with many tasks, which helps the model generalize to new tasks. Mixup-based augmentation is one of the augmentation operations that generate new interpolated labels. In our experiments, task augmentation randomly draws samples from the training and validation sets and employs manifold mixup [66]. Specifically, given a pair of random samples x_i ∈ D^val and x_j ∈ D^tr from the validation and training sets, we interpolate the last-layer features (h^(i)_val, h^(j)_tr) and their labels (y^(i)_val, y^(j)_tr) as: ĥ^(i)_val = ρ h^(i)_val + (1 − ρ) h^(j)_tr, (17) ŷ^(i)_val = ρ y^(i)_val + (1 − ρ) y^(j)_tr, (18) where ρ ∈ [0, 1] is a mixture ratio sampled from the Beta distribution Beta(µ, ν). Remarks. Note that, similar to overfitting, meta-learning algorithms often suffer from meta-overfitting, especially when the size of the meta-validation set {D^val_i}_{i∈T} in Eq.
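The task augmentation step of Eq. (17)-(18) is an application of manifold mixup; a sketch with toy features and one-hot labels (the Beta parameters µ, ν are illustrative choices):

```python
import numpy as np

def mixup_task_augment(h_val, y_val, h_tr, y_tr, mu=2.0, nu=2.0, rng=None):
    """Eq. (17)-(18): interpolate a validation sample's last-layer
    features and (one-hot) labels with a random training sample,
    with mixture ratio rho ~ Beta(mu, nu)."""
    rng = rng if rng is not None else np.random.default_rng()
    rho = rng.beta(mu, nu)
    h_hat = rho * h_val + (1.0 - rho) * h_tr
    y_hat = rho * y_val + (1.0 - rho) * y_tr
    return h_hat, y_hat, rho

rng = np.random.default_rng(0)
h_val, h_tr = rng.normal(size=8), rng.normal(size=8)
y_val = np.array([1.0, 0.0, 0.0])   # one-hot labels for a 3-class toy task
y_tr = np.array([0.0, 1.0, 0.0])
h_hat, y_hat, rho = mixup_task_augment(h_val, y_val, h_tr, y_tr, rng=rng)
```

Because the interpolated label ŷ is a convex combination of two one-hot vectors, it still sums to one, and each draw of ρ yields a new virtual task.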
(5) is small [2, 72, 85]. The size is related to the quantity of both samples and tasks, and to their diversity. Unfortunately, in prompt tuning benchmarks such as the base-to-base/base-to-new generalization and domain generalization settings, only one task is available for training, with a small number of samples. This setting is challenging and can be seen as 'single-task meta-learning'. In our framework, task augmentation effectively addresses the scarcity of tasks/samples and boosts generalization performance on both the base (seen) task and the new task. Overall procedure of ProMetaR. Motivated by the episodic training scheme [67], we divide each batch into a training and a validation set based on the class of the sample. To maintain in-domain performance, we first update the parameters with conventional gradient descent. Then, we update the parameters with meta-learning. The learnable prompts Θ are adapted with the gradients of the loss and the modulated gradients (inner loop): Θ̂(ϕ) ← Θ − α(g + M_ϕ(g_reg; g)). (19) After this update, the learnable prompts Θ and the gradient modulation function ϕ are optimized to perform well on the augmented set (outer loop): Θ ← Θ − β∇_Θ L(Θ̂(ϕ); Aug(D^val)), (20) ϕ ← ϕ − β∇_ϕ L(Θ̂(ϕ); Aug(D^val)), (21) where α and β are step-size hyperparameters. 3.3. Analysis of ProMetaR We provide an analysis to elucidate how the proposed ProMetaR enhances the generalizability of prompt learning from the standpoint of gradient alignment [67]. The objective of ProMetaR is to find the optimal soft prompts as follows: min_{Θ,ϕ} L(Θ − α(g + M_ϕ(g_reg)); D^val), (22) where g = ∇_Θ L(Θ; D^tr) and g_reg = ∇_Θ R(Θ; D^tr) are the gradients of the loss L and the regularizer R, respectively. We can approximate L with a first-order Taylor expansion: given a loss L(x), its first-order approximation via Taylor expansion is: L(x) ≈ L(x_0) + ∇_x L(x_0)^T (x − x_0), (23) where x_0 is an arbitrary point and x is a point close to x_0. Assume that x = Θ − α(g + M_ϕ(g_reg)) and x_0 = Θ. Then, our objective (Eq.
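The inner-loop update of Eq. (19) with the gradient modulation of Eq. (15) can be sketched as follows; for brevity, M_ϕ is reduced here to a single linear layer rather than the paper's MLP, and all tensors are random stand-ins:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modulate(g, g_reg, phi_W, phi_b):
    """Eq. (15): M_phi(g_reg; g) = sigmoid(m_phi) ⊙ g_reg, where the
    modulation vector m_phi comes from a network on the concatenated
    gradients [g || g_reg] (here: one linear layer as a stand-in)."""
    m_phi = phi_W @ np.concatenate([g, g_reg]) + phi_b
    return sigmoid(m_phi) * g_reg

def inner_loop_step(theta, g, g_reg, phi_W, phi_b, alpha=0.01):
    # Eq. (19): adapt the prompts with loss + modulated regularizer grads.
    return theta - alpha * (g + modulate(g, g_reg, phi_W, phi_b))

rng = np.random.default_rng(0)
d = 4
theta = rng.normal(size=d)                  # stand-in prompt parameters
g, g_reg = rng.normal(size=d), rng.normal(size=d)
phi_W = 0.1 * rng.normal(size=(d, 2 * d))   # toy parameters of M_phi
phi_b = np.zeros(d)
theta_hat = inner_loop_step(theta, g, g_reg, phi_W, phi_b)
```

Because the sigmoid gate lies in (0, 1), the modulated regularizer gradient can never exceed the raw regularizer gradient elementwise; the outer loop (Eq. (20), (21)) would then backpropagate the validation loss through this step into both Θ and ϕ.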
(22)) can be written as:

\min_{\Theta,\phi} \mathcal{L}(\Theta; \mathcal{D}^{val}) + \nabla_\Theta \mathcal{L}(\Theta)^\top \big(-\alpha\,(g + M_\phi(g_{reg}))\big).   (24)

Since M_\phi(g_{reg}) = \sigma(m_\phi) \odot g_{reg}, we can rewrite Eq. (24) as:

\min_{\Theta,\phi} \mathcal{L}(\Theta; \mathcal{D}^{val}) - \alpha\, \nabla_\Theta \mathcal{L}(\Theta)^\top g - \alpha\, \nabla_\Theta \mathcal{L}(\Theta)^\top \big(\sigma(m_\phi) \odot g_{reg}\big).   (25)

This equation has three terms. The optimization above implies (i) minimizing the loss on the validation set, (ii) maximizing the inner product between the gradients of the losses on the validation set and the training set, and (iii) maximizing the inner product between the gradient of the validation loss and that of the regularizer on the training set. These terms indicate that the optimization prefers a solution/direction where the training and validation gradients agree, which leads to better generalization on new tasks. In addition, the third term in Eq. (25) plays a role in avoiding conflict between the updates from the task-specific knowledge of the tuned prompts and the task-agnostic general knowledge provided by the original prompts. From the perspective of gradient alignment [84], the third term reduces the generalization error by aligning the gradients induced by the tuned prompts with the general knowledge from the original prompts. Thus, our proposed ProMetaR enhances task generalization ability as well as traditional generalization capability.

4. Experiments

In this section, we demonstrate the effectiveness of our proposed ProMetaR. We first introduce the datasets, baselines, and implementation details. Next, we provide ablation studies to explore the contribution of each component of ProMetaR. Then, we compare the proposed method with other prompting-based methods to evaluate traditional generalization on seen categories (base-to-base) and task generalization to unseen categories (base-to-new) and new datasets (domain generalization). We also design a task overfitting score and provide analysis to show the efficacy of the proposed method.

4.1.
Experimental settings

We evaluate ProMetaR on base-to-base/base-to-new generalization and domain generalization, following other prompting works [28].

Base-to-base/Base-to-new generalization. We train the prompts only on the base classes in a 16-shot (16 images per class) setting and measure the performance of the prompting methods on base and new classes. In this setting, the model cannot see new classes in the training phase.

Domain generalization. We also validate the effectiveness of our model in a 16-shot setting on out-of-distribution datasets. We train the model only on the ImageNet dataset (source) and perform inference on four other variants (target) of the ImageNet dataset. In other words, the model cannot see the target domains in the training phase.

Datasets. For base-to-base/base-to-new class generalization, we evaluate our method on 11 image recognition datasets: ImageNet [9], Caltech101 [12], OxfordPets [53], StanfordCars [33], Flowers102 [51], Food101 [5], FGVCAircraft [45], SUN397 [71], UCF101 [62], DTD [8], and EuroSAT [17], following other prompting methods [28, 82]. We also evaluate our method in the domain generalization setting with ImageNet [9] as the source dataset. The target datasets comprise four ImageNet variants: ImageNetV2 [57], ImageNet-Sketch [68], ImageNet-A [19], and ImageNet-R [18].

Baselines. To validate the effectiveness of ProMetaR, we use the following baselines: (1) zero-shot CLIP [55]; (2) textual prompt learning approaches: CoOp [83] and CoCoOp [82]; (3) multimodal prompt learning approaches: MaPLe [28] and RPO [34]; (4) prompt learning with regularization and ensembling: PromptSRC [29]; (5) prompt learning with meta-learning: UNIGRAM [38]; and (6) our base prompting method: IVLP.

Experimental details. Following other prompt learning works [28, 29, 82], we use CLIP-ViT-B/16 as the pretrained backbone model and four soft prompting tokens for each modality.
For the base prompt learning method, we use Independent Vision-Language Prompting (IVLP), which optimizes hierarchical prompts on both image and text modalities [28]. In all experiments, we evaluate the performance of the methods over three independent runs (seeds 1, 2, and 3) and report the average performance, following other prompt learning works [28, 29, 82].

      MetaLearn  TaskAug  MetaReg  Base   New    H
(a)                                82.51  73.36  77.66
(b)   ✓                            83.51  73.15  77.99
(c)   ✓          ✓                 84.04  75.37  79.47
(d)   ✓                   ✓        84.27  75.06  79.40
(e)   ✓          ✓        ✓        84.39  76.93  80.49

Table 1. Contribution of each component of our ProMetaR. Results are averaged over 11 datasets. H refers to the harmonic mean. MetaLearn: meta-learning; TaskAug: task augmentation to alleviate meta-overfitting; MetaReg: meta-regularization to learn the regularizer.

4.2. Effectiveness of ProMetaR

We validate the effectiveness of each component of the proposed ProMetaR under the base-to-base/base-to-new setting. Table 1 provides the ablation study of our components; the results are averaged over 11 datasets. MetaLearn denotes meta-learning, TaskAug indicates task augmentation to alleviate meta-overfitting, and MetaReg refers to meta-regularization. Eliminating all of our components, i.e., (a), corresponds to using only IVLP, the base prompt learning method of ProMetaR. Adopting meta-learning on top of IVLP ((a)→(b)) improves base-class performance (+1.0%) but impairs generalization to new classes (-0.21%). However, our task augmentation ((b)→(c)) significantly enhances the average accuracy on new classes and the harmonic mean, with gains of +2.22% and +1.48%, respectively, compared to IVLP + meta-learning. Additionally, our meta-regularization ((b)→(d)) improves accuracy for both base and new classes by +0.76% and +1.91%, respectively.
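The harmonic mean H reported in Table 1 (and throughout the result tables) can be reproduced directly from the base- and new-class accuracies; a minimal sketch:

```python
def harmonic_mean(base_acc: float, new_acc: float) -> float:
    """Harmonic mean H = 2ab/(a+b), the base/new trade-off metric used in the tables."""
    return 2 * base_acc * new_acc / (base_acc + new_acc)

# Row (e) of Table 1: full ProMetaR (MetaLearn + TaskAug + MetaReg)
print(round(harmonic_mean(84.39, 76.93), 2))  # -> 80.49
```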
This indicates that both task augmentation and meta-regularization clearly ameliorate the meta-overfitting caused by meta-learning and contribute to strong generalization. Furthermore, adding meta-regularization to (c), i.e., (c)→(e), increases all three accuracies, by +0.35% (base class), +1.56% (new class), and +1.02% (harmonic mean). Employing task augmentation on (d), i.e., (d)→(e), leads to an additional +1.87% gain in new-class accuracy. Our ProMetaR significantly improves over IVLP on both base and new classes ((a)→(e)), achieving gains of +1.88%, +3.57%, and +2.83% in base-class accuracy, new-class accuracy, and harmonic mean, respectively.

Dataset               CLIP   CoOp   CoCoOp  MaPLe  RPO    PromptSRC  UNIGRAM  IVLP    ProMetaR  Gain
                      [55]   [83]   [82]    [28]   [34]   [29]       [38]     (Base)  (Ours)    ∆
Avg. Rank             8.18   8.55   6.73    3.64   4.55   2.73       3.82     5.27    1.36      -
Average on     Base   69.34  82.69  80.47   82.28  81.13  84.26      80.34    82.51   84.39     +1.88
11 datasets    New    74.22  63.22  71.69   75.14  75.00  76.10      75.92    73.35   76.93     +3.58
               H      71.70  71.66  75.83   78.55  77.78  79.97      78.07    77.66   80.49     +2.83
ImageNet       Base   72.43  76.47  75.98   76.66  76.60  77.60      76.60    77.39   77.76     +0.37
               New    68.14  67.88  70.43   70.54  71.57  70.73      70.69    70.04   70.75     +0.71
               H      70.22  71.92  73.10   73.47  74.00  74.01      73.53    73.53   74.09     +0.56
Caltech101     Base   96.84  98.00  97.96   97.74  97.97  98.10      98.07    98.28   98.11     -0.17
               New    94.00  89.81  93.81   94.36  94.37  94.03      95.11    93.65   94.29     +0.64
               H      95.40  93.73  95.84   96.02  96.03  96.02      96.57    95.91   96.16     +0.25
OxfordPets     Base   91.17  93.67  95.20   95.43  94.63  95.33      94.94    95.41   95.57     +0.16
               New    97.26  95.29  97.69   97.76  97.50  97.30      97.94    96.31   97.43     +1.12
               H      94.12  94.47  96.43   96.58  96.05  96.30      96.42    95.86   96.49     +0.63
StanfordCars   Base   63.37  78.12  70.49   72.94  73.87  78.27      73.50    72.39   78.32     +5.93
               New    74.89  60.40  73.59   74.00  75.53  74.97      75.38    73.31   75.18     +1.87
               H      68.65  68.13  72.01   73.47  74.69  76.58      74.43    72.85   76.72     +3.87
Flowers102     Base   72.08  97.60  94.87   95.92  94.13  98.07      95.20    96.17   98.13     +1.96
               New    77.80  59.67  71.75   72.46  76.67  76.50      76.21    73.64   77.66     +4.02
               H      74.83  74.06  81.71   82.56  84.50  85.95      84.65    83.41   86.70     +3.29
Food101        Base   90.10  88.33  90.70   90.71  90.33  90.67      90.84    90.53   90.80     +0.27
               New    91.22  82.26  91.29   92.05  90.83  91.53      92.12    91.66   91.89     +0.23
               H      90.66  85.19  90.99   91.38  90.58  91.10      91.48    91.09   91.34     +0.25
FGVCAircraft   Base   27.19  40.44  33.41   37.44  37.33  42.73      32.25    37.24   42.02     +4.78
               New    36.29  22.30  23.71   35.61  34.20  37.87      38.00    34.47   38.63     +4.16
               H      31.09  28.75  27.74   36.50  35.70  40.15      34.89    35.80   40.25     +4.45
SUN397         Base   69.36  80.60  79.74   80.82  80.60  82.67      80.43    82.63   82.70     +0.07
               New    75.35  65.89  76.86   78.70  77.80  78.57      77.91    78.40   79.02     +0.62
               H      72.23  72.51  78.27   79.75  79.18  80.52      79.15    80.46   80.82     +0.36
DTD            Base   53.24  79.44  77.01   80.36  76.70  83.37      73.62    80.67   83.02     +2.35
               New    59.90  41.18  56.00   59.18  62.13  62.97      62.38    55.31   64.05     +8.74
               H      56.37  54.24  64.85   68.16  68.61  71.75      67.56    65.63   72.31     +6.68
EuroSAT        Base   56.48  92.19  87.49   94.07  86.63  92.90      86.26    92.64   94.94     +2.30
               New    64.05  54.74  60.04   73.23  68.97  73.90      71.38    63.33   77.44     +14.11
               H      60.03  68.69  71.21   82.35  76.79  82.32      78.12    75.23   85.30     +10.07
UCF101         Base   70.53  84.69  82.33   83.00  83.67  87.10      82.00    84.23   86.97     +2.74
               New    77.50  56.05  73.45   78.66  75.43  78.80      78.06    76.78   79.84     +3.06
               H      73.85  67.46  77.64   80.77  79.34  82.74      79.98    80.33   83.25     +2.92

Table 2. Performance comparison on the base-to-new generalization setting. We train our model with a subset of the classes (base classes) in a 16-shot setting and evaluate on the test set including base classes and new classes. H denotes the harmonic mean of base and novel performance to show the generalization trade-off [70]. Avg. Rank is the average rank of the harmonic mean on each dataset among the baselines. ∆ denotes the performance gain of ProMetaR from IVLP (our base prompting method).

4.3.
Base-to-base/Base-to-new generalization

We compare the performance of ProMetaR with other recent prompting approaches in the base-to-base/base-to-new generalization setting to demonstrate the effectiveness of the proposed learning framework. Following [28, 82], we report the average accuracy over the three data splits used in CoCoOp [82] for a fair comparison. The results are reported in Table 2.

Our ProMetaR shows the best average accuracy over the 11 datasets among the baselines. In particular, ProMetaR achieves a significant improvement on new classes, from 76.10 to 76.93, over the best baseline method, PromptSRC. Also, ProMetaR substantially improves the average accuracy of the base model IVLP by 3.58 on new classes. This result indicates that our ProMetaR enhances the generalizability of existing prompting methods by meta-learning the regularization. In comparison with UNIGRAM, which applies meta-learning with a large scale of external data, ProMetaR shows impressive performance improvements on both base and new categories without any external data for the meta-learning.

             Source     Target
             ImageNet   -V2     -S      -A      -R      Avg.
CLIP         66.73      60.83   46.15   47.77   73.96   57.18
CoOp         71.51      64.20   47.99   49.71   75.21   59.28
CoCoOp       71.02      64.07   48.75   50.63   76.18   59.91
MaPLe        70.72      64.07   49.15   50.90   76.98   60.27
RPO          71.67      65.13   49.27   50.13   76.57   60.28
PromptSRC    71.27      64.35   49.55   50.90   77.80   60.65
UNIGRAM      71.65      64.81   49.54   51.51   77.34   60.80
ProMetaR     71.29      64.39   49.55   51.25   77.89   60.77

Table 3. Performance comparison on the domain generalization setting.

Top-3                          Bottom-3
Dataset   tos_IVLP  Gain ∆     Dataset     tos_IVLP  Gain ∆
EuroSAT   36.88     10.07      Food101     -0.01     0.25
DTD       32.02     6.68       Caltech101  1.79      0.25
Flowers   28.25     3.29       ImageNet    3.06      0.56

Table 4. Task overfitting score tos_IVLP = δ^IVLP_base − δ^IVLP_new and the gain ∆. ∆ denotes the performance gain (H) of ProMetaR over IVLP (Table 2).

4.4.
Domain generalization

In the domain generalization setting, the performance comparison of ImageNet-trained models, evaluated on four out-of-distribution variants, is reported in Table 3. For a fair comparison, we exclude UNIGRAM since it employs a large scale of extra datasets to pre-train the learnable prompts. ProMetaR successfully generalizes to out-of-domain datasets, showing the best average accuracy. This demonstrates that our meta-regularizer and task augmentation clearly enhance the robustness to domain shifts.

4.5. Analysis

Task overfitting score. We analyze when our ProMetaR provides a relatively large (or small) performance improvement over the base model (IVLP). To quantify the room for improvement, we define the task overfitting score (tos) of a prompting method <pr> as:

tos^{<pr>} = \delta^{<pr>}_{base} - \delta^{<pr>}_{new},   (26)

where \delta^{<pr>}_{base} = \max(0, s^{<pr>}_{base} - s^{CLIP}_{base}) and \delta^{<pr>}_{new} = s^{<pr>}_{new} - s^{CLIP}_{new} are the performance differences between prompting method <pr> and zero-shot CLIP on the base and new classes, respectively, and s^{<pr>}_{base}, s^{<pr>}_{new} denote the accuracy of prompting method <pr> on base and new classes. The lower the task overfitting score, the better method <pr> tends to generalize on new tasks. Table 4 reports the task overfitting score and the performance gain ∆ of ProMetaR over IVLP (Table 2) on the datasets with the top-3 (left) and bottom-3 (right) task overfitting scores. The table shows that the gains of ProMetaR are relatively high when the task overfitting score is high, demonstrating that ProMetaR is more effective when the prompting method IVLP suffers from overfitting.

Methods       Base   New    H
CoOp          82.69  63.22  71.66
+ ProMetaR    83.35  71.20  76.80
VPT           82.75  71.00  76.43
+ ProMetaR    83.18  73.19  77.87

Table 5. Performance comparison of ProMetaR with different prompting approaches (CoOp [83] and VPT [27]) under the base-to-base/base-to-new generalization setting.

Method            Base   New    H
Loss+Reg.
                  83.96  75.70  79.62
ProMetaR (Ours)   84.39  76.93  80.49
Gain (∆)          +0.43  +1.23  +0.87

Table 6. Performance comparison of ProMetaR with IVLP trained with the loss and regularizer under the base-to-base/base-to-new generalization setting.

ProMetaR with diverse methods. ProMetaR can be applied to any existing prompting method in a plug-and-play manner. We elucidate the effectiveness of ProMetaR by comparing the performance of various methods, such as CoOp and VPT, with our method plugged in (Table 5). ProMetaR consistently improves all the other prompt learning methods, with harmonic mean gains of +5.14% and +1.44% over CoOp and VPT, respectively. Moreover, the performance is enhanced especially on new classes, indicating that our ProMetaR effectively prevents the prompts from overfitting to downstream tasks.

Meta-Regularization. In Table 6, we also compare ProMetaR with IVLP trained with the loss and the regularizer (Loss+Reg) in Eq. (11) with manually tuned hyperparameters (e.g., the regularization strength). The experimental results show that our ProMetaR outperforms standard IVLP training with regularization (Loss+Reg). This result indicates that our ProMetaR automatically learns more effective regularization via meta-learning.

5. Conclusion

We propose ProMetaR to encourage both traditional generalization and task generalization, yielding a significant performance improvement in the base-to-base/base-to-new and domain generalization settings. Specifically, we adopt meta-learning to learn both soft prompts and regularizers. We further incorporate task augmentation to generate diverse tasks and address meta-overfitting. Extensive experiments and analyses demonstrate that our ProMetaR enhances the generalizability of prompt learning.

Acknowledgements.
This work was partly supported by the ICT Creative Consilience Program through the IITP, NRF of Korea grants funded by the Korea government (MSIT) (IITP-2024-2020-0-01819, NRF-2023R1A2C2005373), and the NVIDIA academic grant. We thank Jongha Kim for the suggestions on the analysis.

References
[1] James Urquhart Allingham, Jie Ren, Michael W Dusenberry, Xiuye Gu, Yin Cui, Dustin Tran, Jeremiah Zhe Liu, and Balaji Lakshminarayanan. A simple zero-shot prompt weighting technique to improve prompt ensembling in text-image models. In ICML, 2023. 12
[2] Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. In ICLR, 2019. 2, 5
[3] Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. MetaReg: Towards domain generalization using meta-regularization. In NeurIPS, 2018. 2
[4] Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti, Gaurav S. Sukhatme, and Franziska Meier. Meta learning via learned loss. In ICPR, 2020. 2
[5] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101: Mining discriminative components with random forests. In ECCV, 2014. 6
[6] Eulrang Cho, Jooyeon Kim, and Hyunwoo J Kim. Distribution-aware prompt tuning for vision-language models. In ICCV, 2023. 2
[7] Hyeong Kyu Choi, Joonmyung Choi, and Hyunwoo J Kim. TokenMixup: Efficient attention-guided token-level data augmentation for transformers. In NeurIPS, 2023. 2
[8] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In CVPR, 2014. 6
[9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. 6, 12
[10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
2
[11] Yu Du, Fangyun Wei, Zihe Zhang, Miaojing Shi, Yue Gao, and Guoqi Li. Learning to prompt for open-vocabulary object detection with vision-language model. In CVPR, 2022. 1
[12] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In CVPRW, 2004. 6
[13] Chengjian Feng, Yujie Zhong, Zequn Jie, Xiangxiang Chu, Haibing Ren, Xiaolin Wei, Weidi Xie, and Lin Ma. PromptDet: Towards open-vocabulary detection using uncurated images. In ECCV, 2022. 1
[14] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017. 2, 3
[15] Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. In ICLR, 2018. 2
[16] Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Open-vocabulary object detection via vision and language knowledge distillation. In ICLR, 2022. 1
[17] Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. JSTARS, 12(7):2217–2226, 2019. 6
[18] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In ICCV, 2021. 6
[19] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In CVPR, 2021. 6
[20] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In ICANN, 2001. 2
[21] Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey. TPAMI, 44(9):5149–5169, 2021.
2, 3
[22] Dasol Hwang, Jinyoung Park, Sunyoung Kwon, KyungMin Kim, Jung-Woo Ha, and Hyunwoo J Kim. Self-supervised auxiliary learning with meta-paths for heterogeneous graphs. In NeurIPS, 2020. 2
[23] Dasol Hwang, Jinyoung Park, Sunyoung Kwon, Kyung-Min Kim, Jung-Woo Ha, and Hyunwoo J Kim. Self-supervised auxiliary learning for graph neural networks via meta-learning. arXiv:2103.00771, 2021. 2
[24] Gabriel Ilharco, Mitchell Wortsman, Samir Yitzhak Gadre, Shuran Song, Hannaneh Hajishirzi, Simon Kornblith, Ali Farhadi, and Ludwig Schmidt. Patching open-vocabulary models by interpolating weights. In NeurIPS, 2022. 2
[25] Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In UAI, 2018. 12
[26] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, 2021. 1
[27] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In ECCV, 2022. 2, 3, 8
[28] Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. MaPLe: Multi-modal prompt learning. In CVPR, 2023. 1, 2, 6, 7, 12
[29] Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Self-regulating prompts: Foundational model adaptation without forgetting. In ICCV, 2023. 2, 6, 7, 12
[30] Jang-Hyun Kim, Wonho Choo, Hosan Jeong, and Hyun Oh Song. Co-Mixup: Saliency guided joint mixup with supermodular diversity. In ICLR, 2021. 2
[31] Dohwan Ko, Joonmyung Choi, Hyeong Kyu Choi, Kyoung-Woon On, Byungseok Roh, and Hyunwoo J Kim. MELTR: Meta loss transformer for learning to fine-tune video foundation models. In CVPR, 2023.
2
[32] Gregory Koch, Richard Zemel, Ruslan Salakhutdinov, et al. Siamese neural networks for one-shot image recognition. In ICMLW, 2015. 2
[33] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In ICCVW, 2013. 6
[34] Dongjun Lee, Seokwon Song, Jihee Suh, Joonmyeong Choi, Sanghyeok Lee, and Hyunwoo J Kim. Read-only prompt optimization for vision-language few-shot learning. In ICCV, 2023. 1, 6, 7
[35] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In CVPR, 2019. 2
[36] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In EMNLP, 2021. 2
[37] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In AAAI, 2018. 2
[38] Juncheng Li, Minghe Gao, Longhui Wei, Siliang Tang, Wenqiao Zhang, Mengze Li, Wei Ji, Qi Tian, Tat-Seng Chua, and Yueting Zhuang. Gradient-regulated meta-prompt learning for generalizable vision-language models. In ICCV, 2023. 2, 6, 7
[39] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In ICML, 2023. 1
[40] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In ACL, 2021. 2
[41] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. GPT understands, too. AI Open, 2023. 2
[42] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. 2
[43] Yuning Lu, Jianzhuang Liu, Yonggang Zhang, Yajing Liu, and Xinmei Tian. Prompt distribution learning. In CVPR, 2022. 2
[44] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In CVPR, 2022. 1
[45] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi.
Fine-grained visual classification of aircraft. Technical report, 2013. 6
[46] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018. 2
[47] Ron Mokady, Amir Hertz, and Amit H Bermano. ClipCap: CLIP prefix for image captioning. arXiv:2111.09734, 2021. 1
[48] Tsendsuren Munkhdalai and Hong Yu. Meta networks. In ICML, 2017. 2
[49] Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. In ICML, 2018. 2
[50] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv:1803.02999, 2018. 2
[51] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008. 6
[52] Hyeonjin Park, Seunghun Lee, Sihyeon Kim, Jinyoung Park, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, and Hyunwoo J Kim. Metropolis-Hastings data augmentation for graph neural networks. In NeurIPS, 2022. 2
[53] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In CVPR, 2012. 6
[54] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In ICLRW, 2017. 12
[55] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 1, 2, 3, 6, 7, 12
[56] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2016. 2
[57] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In ICML, 2019. 6
[58] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.
2
[59] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-Weight-Net: Learning an explicit mapping for sample weighting. In NeurIPS, 2019. 2
[60] Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. FLAVA: A foundational language and vision alignment model. In CVPR, 2022. 1
[61] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, 2017. 2
[62] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. In ICCVW, 2013. 6
[63] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014. 2
[64] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018. 2
[65] AFM Uddin, Mst Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae, et al. SaliencyMix: A saliency guided data augmentation strategy for better regularization. In ICLR, 2021. 2
[66] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, and Yoshua Bengio. Manifold mixup: Learning better representations by interpolating hidden states. In ICML, 2019. 2, 5, 12
[67] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NeurIPS, 2016. 2, 5
[68] Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In NeurIPS, 2019. 6
[69] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In CVPR, 2022.
2
[70] Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning: The good, the bad and the ugly. In CVPR, 2017. 7, 12
[71] Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. 6
[72] Huaxiu Yao, Linjun Zhang, and Chelsea Finn. Meta-learning with fewer tasks through task interpolation. In ICLR, 2022. 2, 5
[73] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In ICCV, 2019. 2
[74] Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and Chen Change Loy. Unified vision and language prompt learning. arXiv:2210.07225, 2022. 2
[75] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. LiT: Zero-shot transfer with locked-image text tuning. In CVPR, 2022. 1
[76] Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger Grosse. Three mechanisms of weight decay regularization. In ICLR, 2019. 2
[77] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018. 2, 12
[78] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention. In ICLR, 2024. 1
[79] Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al. RegionCLIP: Region-based language-image pretraining. In CVPR, 2022. 1
[80] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain adaptive ensemble learning. TIP, 30:8008–8018, 2021. 12
[81] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. TPAMI, 2022. 12
[82] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu.
Conditional prompt learning for vision-language models. In CVPR, 2022. 2, 6, 7, 12
[83] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. IJCV, 130(9):2337–2348, 2022. 1, 2, 3, 6, 7, 8, 12
[84] Beier Zhu, Yulei Niu, Yucheng Han, Yue Wu, and Hanwang Zhang. Prompt-aligned gradient for prompt tuning. In ICCV, 2023. 2, 5
[85] Luisa Zintgraf, Kyriacos Shiarli, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. In ICML, 2019. 2, 5

In this supplement, we provide the implementation details (Section A) and additional experimental results (Section B).

A. Implementation details

In this section, we provide the implementation details of our work. We implement our ProMetaR (Prompt learning via Meta Regularization) using PyTorch [54] and Dassl [80, 81], a library designed for domain adaptation and generalization. Following previous prompt learning methods [28, 29, 82], we use CLIP-ViT-B/16 as the pretrained backbone model [55] and four soft prompting tokens for each modality. Following other works [1, 29, 55], we utilize an ensemble of text prompts for the textual regularizer. For the base prompt learning method, we use Independent Vision-Language Prompting, which optimizes hierarchical prompts on both image and text modalities [28]. The learning rate is set to 0.0025, and the prompts are optimized with the SGD optimizer in all experiments. For the base-to-new generalization setting, we train the model for 15 epochs. For the domain generalization and cross-dataset transfer settings, we train the models for 6 epochs. In all experiments, we evaluate the performance of the methods over three independent runs (seeds 1, 2, and 3) and report the average performance, following previous prompt learning approaches [28, 29, 82].

Evaluation metrics. In all experiments, we report top-1 accuracy for each dataset.
In base-to-novel generalization, top-1 accuracy is measured on base classes and new classes, respectively. We calculate the harmonic mean (H) between the base- and new-class accuracy to show the generalization trade-off [70]. In the domain generalization and cross-dataset evaluation settings, we measure top-1 accuracy on the test set of each dataset with the split provided by CoOp [83], following other prompt optimization works.

B. Additional experiments

In this section, we provide the results of additional experiments, including the cross-dataset setting and further analysis.

B.1. Cross-dataset

We also measure the performance of the proposed method in the cross-dataset transfer setting to explore the task generalization ability of ProMetaR (Table 7). In this setting, we train ProMetaR on ImageNet [9] as the source dataset and evaluate it on 10 other unseen datasets, Caltech101, OxfordPets, StanfordCars, Flowers102, Food101, FGVC Aircraft, SUN397, DTD, EuroSAT, and UCF101, following other works. Please note that the model cannot access the unseen datasets during the training phase.

For a fair comparison, we exclude UNIGRAM since it employs a large scale of extra datasets to pre-train the learnable prompts. From the table, ProMetaR successfully generalizes to out-of-domain datasets, achieving the best performance on 7 out of 10 datasets compared to the other baselines. This result indicates that our ProMetaR improves the task generalization ability of existing prompting methods and the robustness against domain shifts.

B.2. More analysis

Comparison of ProMetaR with generalization methods. We examine the efficacy of ProMetaR by comparing it with data augmentation methods, Mixup [77] and Manifold Mixup [66], and with common generalization methods based on weight averaging, exponential moving average (EMA) and stochastic weight averaging (SWA) [25], by applying them to the base prompt learning method, IVLP.
The results are reported in Table 8. Mixup slightly improves the performance on the base classes, with an accuracy gain of 0.39%, but degrades performance on the new classes. Similarly, Manifold Mixup decreases performance on the new classes while gaining on the base classes. These results indicate that conventional data augmentation helps improve performance on the base classes (traditional generalization) but still suffers from the task overfitting problem of existing prompt learning methods when generalizing to the new classes (task generalization). EMA enhances new-class accuracy by +0.79% at a small expense in base-class accuracy. Meanwhile, SWA improves performance on the base classes by +1.14%, but the average accuracy on the new classes slightly decreases. We observe that our ProMetaR significantly outperforms both the data augmentation and the generalization methods by a large margin.

Task augmentation. In Table 9, we measure the performance of the model without task augmentation (No TaskAug), with input Mixup [77] as the task augmentation, and with our ProMetaR, which uses Manifold Mixup for the task augmentation. Compared to No TaskAug, task augmentation improves the performance on the new classes without losing performance on the base classes. This demonstrates that task augmentation alleviates the meta-overfitting issue by generating diverse virtual augmented tasks. In addition, task augmentation with Manifold Mixup shows better performance than input Mixup, with a gain of 0.83% on the new classes.
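The two weight-averaging baselines compared above can be sketched on flat lists of weights (a toy sketch, not the actual implementation; the default decay value is an assumption):

```python
def ema(weights_seq, decay=0.9):
    """Exponential moving average over weight snapshots:
    avg <- decay * avg + (1 - decay) * w at every step."""
    avg = list(weights_seq[0])
    for w in weights_seq[1:]:
        avg = [decay * a + (1 - decay) * x for a, x in zip(avg, w)]
    return avg

def swa(weights_seq):
    """Stochastic weight averaging [25]: a plain mean of the snapshots."""
    n = len(weights_seq)
    return [sum(vals) / n for vals in zip(*weights_seq)]
```

EMA weights recent snapshots more heavily, while SWA weights all collected snapshots equally, which is one intuition for their different base/new trade-offs in Table 8.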
| Methods | ImageNet (source) | Caltech101 | OxfordPets | StanfordCars | Flowers102 | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 |
|---|---|---|---|---|---|---|---|---|---|---|
| CoOp | 71.51 | 93.70 | 89.14 | 64.51 | 68.71 | 85.30 | 18.47 | 64.15 | 41.92 | 46.39 | 66.55 |
| CoCoOp | 71.02 | 94.43 | 90.14 | 65.32 | 71.88 | 86.06 | 22.94 | 67.36 | 45.73 | 45.37 | 68.21 |
| MaPLe | 70.72 | 93.53 | 90.49 | 65.57 | 72.23 | 86.20 | 24.74 | 67.01 | 46.49 | 48.06 | 68.69 |
| PromptSRC | 71.27 | 93.60 | 90.25 | 65.70 | 70.25 | 86.15 | 23.90 | 67.10 | 46.87 | 45.50 | 68.75 |
| UNIGRAM | 71.65 | 94.67 | 90.83 | 66.78 | 73.12 | 86.69 | 25.27 | 67.97 | 48.06 | 52.63 | 71.03 |
| ProMetaR | 71.29 | 93.74 | 90.59 | 65.83 | 71.13 | 86.39 | 24.78 | 67.41 | 47.08 | 45.02 | 69.50 |

Table 7. Performance comparison on the cross-dataset transfer setting.

| Methods | Base | New | H |
|---|---|---|---|
| IVLP (Base) | 82.51 | 73.36 | 77.66 |
| Mixup | 82.90 (+0.39) | 71.45 (-1.91) | 76.75 (-0.91) |
| Manifold Mixup | 83.57 (+1.06) | 73.19 (-0.17) | 78.04 (+0.38) |
| EMA | 82.30 (-0.21) | 74.15 (+0.79) | 78.01 (+0.35) |
| SWA | 83.65 (+1.14) | 73.14 (-0.22) | 78.04 (+0.38) |
| ProMetaR (Ours) | 84.39 (+1.88) | 76.93 (+3.57) | 80.49 (+2.83) |

Table 8. Performance comparison of ProMetaR with the domain generalization methods on the base-to-new generalization setting. Results are averaged over 11 datasets. H refers to harmonic mean.

| Methods | Base | New | H |
|---|---|---|---|
| No TaskAug | 84.27 | 75.06 | 79.40 |
| TaskAug: Input Mixup | 84.26 | 76.10 | 79.97 |
| TaskAug: Manifold Mixup (Ours) | 84.39 | 76.93 | 80.49 |

Table 9. Effect of our proposed meta-regularization. Results are averaged over 11 datasets. H refers to harmonic mean. | 8 | 1 | The proposed ProMetaR framework builds upon existing vision-language models (VLMs) like CLIP, which are pre-trained on millions of image-text pairs. Given the extensive experiments mentioned, it's reasonable to assume a substantial dataset for fine-tuning, likely similar to CLIP's 400 million pairs. Fine-tuning such models typically involves training over several epochs (10-50), potentially exceeding 8 hours just on a single GPU. However, the framework may be optimized for efficiency through meta-learning, possibly allowing quicker convergence.
Considering typical configurations for similar VLM tasks, and with the right optimizations, it's feasible this model could fit within 8 hours on a single modern GPU (like A100 or V100). However, for robustness, multi-GPU setups are often employed in practice for faster training and handling memory-intensive workloads, hence the estimate of 1 GPU indicates minimal resources. | yes | Yes | CV | Prompt Learning via Meta-Regularization | 2024-04-01 0:00:00 | https://github.com/mlvlab/prometar | 1 | !git clone https://github.com/jhpohovey/StanfordCars.git
!mv StanfordCars/stanford_cars ./stanford_cars | 2hr 10min for 10 epochs according to logs | https://drive.google.com/file/d/1gthiYFsffpGbuJcRv9QhtjyG8CRBi-rt/view?usp=sharing | Yes | -- Official website is down but found a git repo for the dataset. |
CIFAR-10-LT (ρ=50) | SURE(ResNet-32) | [] | SURE: SUrvey REcipes for building reliable and robust deep networks | 2024-03-01T00:00:00 | https://arxiv.org/abs/2403.00543v1 | [
"https://github.com/YutingLi0606/SURE"
] | {'Error Rate': '9.78'} | [
"Error Rate"
] | Given the following paper and codebase:
Paper: SURE: SUrvey REcipes for building reliable and robust deep networks
Codebase: https://github.com/YutingLi0606/SURE
Improve the SURE(ResNet-32) model on the CIFAR-10-LT (ρ=50) dataset. The result
should improve on the following metrics: {'Error Rate': '9.78'}. You must use only the codebase provided.
| SURE: SUrvey REcipes for building reliable and robust deep networks Yuting Li1,2, Yingyi Chen3, Xuanlong Yu4,5, Dexiong Chen†6, and Xi Shen†1 1Intellindust, China 2China Three Gorges University, China 3ESAT-STADIUS, KU Leuven, Belgium 4SATIE, Paris-Saclay University, France 5U2IS, ENSTA Paris, Institut Polytechnique de Paris, France 6Max Planck Institute of Biochemistry, Germany (†Corresponding author)

Abstract In this paper, we revisit techniques for uncertainty estimation within deep neural networks and consolidate a suite of techniques to enhance their reliability. Our investigation reveals that an integrated application of diverse techniques, spanning model regularization, classifier, and optimization, substantially improves the accuracy of uncertainty predictions in image classification tasks. The synergistic effect of these techniques culminates in our novel SURE approach. We rigorously evaluate SURE against the benchmark of failure prediction, a critical testbed for uncertainty estimation efficacy. Our results showcase consistently better performance than models that individually deploy each technique, across various datasets and model architectures. When applied to real-world challenges such as data corruption, label noise, and long-tailed class distribution, SURE exhibits remarkable robustness, delivering results that are superior or on par with current state-of-the-art specialized methods. Particularly on Animal-10N and Food-101N for learning with noisy labels, SURE achieves state-of-the-art performance without any task-specific adjustments. This work not only sets a new benchmark for robust uncertainty estimation but also paves the way for its application in diverse, real-world scenarios where reliability is paramount. Our code is available at https://yutingli0606.github.io/SURE/.

1. Introduction

Deep neural networks (DNNs) have established themselves as powerful and adaptable tools for prediction tasks on
[Figure 1: three panels comparing Baseline (MSP), RegMixup, CRL, FMFP, and SURE. Long-tailed classification, CIFAR10-LT, accuracy: Baseline 89.4, RegMixup 92.4, CRL 89.6, FMFP 92.0, SURE (Ours) 93.9. Learning with noisy labels, Animal-10N, accuracy: Baseline 83.5, CRL 84.2, FMFP 87.9, SURE (Ours) 89.0, RegMixup did not converge. Robustness under data corruptions, CIFAR10-C (Gaussian noise, shot noise, motion blur, zoom blur), AUROC: Baseline 83.1, RegMixup 85.3, CRL 85.4, FMFP 88.0, SURE (Ours) 89.6.]

Figure 1. SURE consistently performs better than previous approaches to uncertainty estimation under various scenarios. Note that we did not manage to scale RegMixup [59] to the learning with noisy labels task. Baseline refers to the MSP [31] method.

structured data. However, accurately assessing the reliability of their predictions continues to be a substantial challenge. In safety-critical areas such as medical diagnostics [2, 43, 56], robotics [29, 49], autonomous driving [9, 18, 49], and earth observation systems [24, 52], decisions based on overconfident predictions can result in severe consequences. Consequently, ensuring the robust dependability of artificial intelligence systems grounded in DNNs is of utmost importance. Addressing the issue of overconfidence in deep learning has been a focal point of significant research efforts, such as [25, 32, 46, 48, 55, 66]. However, a key limitation of these methods is their restricted testing scenarios, typically confined to benchmark datasets for a single, predefined task like failure prediction or out-of-distribution (OOD) detection. The effectiveness of these methods in more complex, real-world situations involving issues like data corruption, label noise, or long-tailed class distributions remains largely under-explored. Our experiments reveal that no single approach excels uniformly across these diverse scenarios, as depicted in Figure 1. In this work, we propose a unified model designed to effectively address all these challenges.
In our pursuit to enhance uncertainty estimation, we start by examining the combined impact of several pre-existing methods, leading to the discovery of an integrated approach that significantly refines this estimation. We classify these methods based on their function in the model training process: regularization, classifier, and optimization. For regularization, we utilize techniques such as RegMixup regularization [59], the correctness ranking loss (CRL) [54], and the cosine similarity classifier (CSC) [23, 33], which can help increase entropy for challenging samples. In the realm of optimization, we incorporate Sharpness-Aware Minimization (SAM) [19] and Stochastic Weight Averaging (SWA) [35], as recommended by FMFP [81], to ensure that the model converges towards flatter minima. The synergistic integration of these diverse techniques culminates in our novel approach, which we name SURE. This method harnesses the strengths of each individual component, resulting in a more robust and reliable model.

In the evaluation of SURE, we first focus on failure prediction, a pivotal task for evaluating uncertainty estimation. Our evaluations reveal that SURE consistently outperforms models deploying each individual technique. This superior performance is evident across various datasets such as CIFAR-10 [40], CIFAR-100 [40], and Tiny-ImageNet [41], and also across various model architectures, namely ResNet [28], VGG [64], DenseNet [34], WideResNet [76], and DeiT [70]. Notably, SURE even surpasses OpenMix [82], a method that leverages additional OOD data. By applying SURE directly to real-world scenarios, without or with minimal task-specific adjustments, we further witness its effectiveness in bringing robustness to the models. Specifically, the real-world challenges include data corruption in CIFAR10-C [30], label noise in Animal-10N [65] and Food-101N [42], and skewed class distribution in CIFAR-LT [12].
In these contexts, SURE achieves results that are either superior to or on par with the latest specialized methods. A standout achievement is observed on Food-101N, where SURE attains an impressive accuracy of 88.0%, significantly surpassing the previous state-of-the-art method, Jigsaw-ViT [7], which achieved an accuracy of 86.7% by using extra training data to pre-train the model. This demonstrates SURE's remarkable capability in handling complex real-world data challenges.

The main contributions of this paper are summarized as follows:
• We reveal that existing methods do not uniformly excel in various real-world challenges. This analysis underlines the need for more reliable and robust approaches to handle the complexities of real-world data.
• We propose a novel approach, named SURE, for robust uncertainty estimation, inspired by the synergistic effect achieved by combining multiple techniques across model regularization, classifier, and optimization. Models trained under our SURE approach consistently achieve better performance in failure prediction than models that deploy each individual technique, across various datasets and model architectures.
• When applied directly to real-world scenarios, SURE consistently shows performance at least comparable to state-of-the-art specialized methods.

2. Related work

Uncertainty estimation Quantifying uncertainty for DNN outputs can improve the interpretability and trustworthiness of the predictions and serve various downstream tasks, such as model calibration [25], OOD detection [32, 46], failure prediction [10, 31], etc. MSP [31], Entropy [66], and Energy [48] provide uncertainty estimates for outputs using the information provided by the DNN itself. Modifying the architecture and optimization of the DNN can further improve the performance of these measures on downstream tasks, i.e., attaining robust and reliable uncertainty estimates.
To balance the sensitivity and smoothness of the DNN and achieve robust uncertainty estimation, DDU [55] applies spectral normalization layers [53] to encourage bi-Lipschitzness, and LDU [20] introduces a distinction maximization layer and an uncertainty estimation head to the DNN. Yet, they both lead to increased training parameters, and a predefined input image size is needed for the former, which also lacks scalability. A simpler adjustment to the DNN architecture introduced by OVADM [58] improves OOD detection performance, replacing the output layer with an ℓ2 distance-based layer and using a one-vs-all loss for training. In terms of optimization, in addition to FMFP [81] mentioned in the previous section, Qu et al. [61] use meta-learning to achieve flat minima, yet apply it to auxiliary uncertainty estimators. Methods based on data augmentation, such as Mixup [77], RegMixup [59], and OpenMix [82], apply regularization when training the model, resulting in dependable uncertainty estimates while ensuring classification accuracy. This work selects and integrates these methods and obtains a scalable solution to improve classification accuracy with more reliable uncertainty estimates.

Learning with noisy labels This task aims to perform learning while noisily annotated data is present in the training set. Mainstream solutions include: i) label correction, which aims at revising possibly wrong labels with more consistent substitutes [65, 68, 74, 78]; ii) semi-supervised learning, which trains networks in a semi-supervised manner with only the clean labels used [3, 15, 39, 44]; iii) sample re-weighting, which assigns more weight to possibly clean samples [5, 6, 17, 26, 36, 51, 60, 72, 75]; iv) over-fitting prevention, which prevents networks from over-fitting on noisy training data so as to generalize better on the clean test set [6, 37, 47, 50, 57, 79].
Specifically, WarPI [67] and [37] are based on a meta-learning framework and propose adaptively rectifying the training procedure of the classification network. SSR+ [17] designs sample selection and relabelling based on a non-parametric KNN classifier and a parametric classifier.

Long-tailed classification In addressing the long-tailed classification challenge, various strategies have been proposed. BBN [80] utilizes a dual-branch network to balance learning between different class frequencies, while SSP [73] leverages self-supervised and semi-supervised learning for contrastive learning on long-tailed distributions. LDAM-DRW [4] introduces logit compensation to handle class frequency imbalance. Hybrid-SC [71] proposes a two-branch network for supervised contrastive learning and for reducing classifier bias. BCL [83] develops a balanced contrastive loss, ensuring that all classes are optimized towards a regular simplex configuration that yields a balanced feature space. Recently, GLMC [16] proposed a new paradigm that contains a global and local mixture consistency loss to improve the robustness of the feature extractor, and a cumulative head-tail soft-label re-weighted loss to mitigate the head-class bias problem. In this work, we show that by simply applying the uncertainty score provided by DNNs trained with SURE to the re-weighting training strategy, which is commonly used in the long-tailed classification community [1, 4, 38, 69, 80], the classification performance on imbalanced data can be on par with the previous SOTAs.

3. Methods

As illustrated in Figure 2, our proposed approach SURE aims to train reliable and robust DNNs through two aspects: i) increasing entropy for hard samples; ii) enforcing flat minima during optimization. In the following, we denote the dataset by {(xi, yi)} for i = 1..N, where xi is the input image, yi is its ground-truth label, and N is the number of samples.
The recipes in SURE for increasing entropy for hard samples consist of three components: the RegMixup regularization [59], denoted as $\mathcal{L}_{mix}$; the correctness ranking loss $\mathcal{L}_{crl}$, which regularizes the class probabilities by aligning the confidence with the ordinal ranking of correctness; and the cosine similarity classifier (CSC). These recipes are employed collectively to optimize the objective, which includes a task-specific loss, e.g., the cross-entropy loss for classification, denoted as $\mathcal{L}_{ce}$, in addition to the RegMixup regularization $\mathcal{L}_{mix}$ and the confidence-aware regularization $\mathcal{L}_{crl}$ based on the historical correctness information gathered during training. The recipes for enforcing flat minima lie in leveraging Sharpness-Aware Minimization (SAM) [19] and Stochastic Weight Averaging (SWA) [35] during optimization.

This section is organized as follows: Section 3.1 illustrates our objective function and CSC to increase entropy for hard samples. Section 3.2 introduces the flat minima-enforcing techniques. Implementation details are provided in Section 3.3.

3.1. Increasing entropy for hard samples

Total loss As described above, the objective function of SURE is composed of three components, and is expressed as:

$\mathcal{L}_{total} = \mathcal{L}_{ce} + \lambda_{mix}\mathcal{L}_{mix} + \lambda_{crl}\mathcal{L}_{crl}$, (1)

where $\lambda_{mix}$ and $\lambda_{crl}$ denote hyper-parameters that balance the contribution of each loss component to the total loss. The impact of $\lambda_{mix}$ and $\lambda_{crl}$ is studied in Appendix A.

RegMixup regularization $\mathcal{L}_{mix}$ Mixup [77] is a widely used data augmentation for image classification. Given two input-target pairs $(x_i, y_i)$ and $(x_j, y_j)$, we obtain an augmented sample $(\tilde{x}_i, \tilde{y}_i)$ by linearly interpolating between them:

$\tilde{x}_i = m x_i + (1-m) x_j, \quad \tilde{y}_i = m y_i + (1-m) y_j$, (2)

where $m$ denotes the mixing coefficient, following a Beta distribution: $m \sim \mathrm{Beta}(\beta, \beta)$, $\beta \in (0, \infty)$.
(3)

The RegMixup regularization $\mathcal{L}_{mix}$ consists of fitting the model additionally on the augmented samples $(\tilde{x}_i, \tilde{y}_i)$:

$\mathcal{L}_{mix}(\tilde{x}_i, \tilde{y}_i) = \mathcal{L}_{ce}(\tilde{x}_i, \tilde{y}_i)$, (4)

with $\beta = 10$ leading to a heavy mixing of two samples with high probability.

[Figure 2: overview of the SURE recipes, grouped into loss (RegMixup, CRL), classifier (CSC), and optimization (SAM, SWA); one panel shows correctness histograms for easy vs. hard samples (easy/hard cat and dog images) and another contrasts sharp vs. flat minima.]

Figure 2. Overview of recipes. Our proposed approach SURE contains two aspects: increasing entropy for hard samples and enforcing flat minima during optimization. We incorporate the RegMixup [59] loss and correctness ranking loss (CRL) [54] as our loss function and employ a cosine similarity classifier (CSC) [23, 33] as our classifier to increase entropy for hard samples. As for optimization, we leverage Sharpness-Aware Minimization (SAM) [19] and Stochastic Weight Averaging (SWA) [35] to find flat minima.

Similar to RegMixup [59], we incorporate $\mathcal{L}_{mix}$ as an additional regularizer alongside the original cross-entropy loss on $(x_i, y_i)$, i.e., $\mathcal{L}_{ce}$ in (1). A high value of $\beta$ results in a heavy mixing of samples, prompting the model to exhibit high entropy on heavily interpolated samples, which can be regarded as challenging examples.

Correctness ranking loss $\mathcal{L}_{crl}$ The correctness ranking loss [54] encourages the DNN to align the model's confidence with the ordinal ranking of historical correctness information gathered during training. Specifically, for two input images $x_i$ and $x_j$, $\mathcal{L}_{crl}$ is defined as:

$\mathcal{L}_{crl}(x_i, x_j) = \max(0, |c_i - c_j| - \mathrm{sign}(c_i - c_j)(s_i - s_j))$, (5)

where $c_i$ and $c_j$ represent the proportions of correct prediction events for $x_i$ and $x_j$ during training, $s_i$ and $s_j$ denote the confidence scores for $x_i$ and $x_j$ (the softmax scores in this work), and sign denotes the sign function. $\mathcal{L}_{crl}$ aims to align the confidence score with the correctness statistics.
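As a concrete illustration of Eqs. (2)–(5), here is a minimal pure-Python sketch (the helper names `mixup_pair` and `crl_pair_loss` are ours, not from the SURE codebase):

```python
import random

def mixup_pair(x_i, y_i, x_j, y_j, beta=10.0):
    """Mixup interpolation, Eqs. (2)-(3): sample m ~ Beta(beta, beta)
    and linearly blend both the inputs and the one-hot labels."""
    m = random.betavariate(beta, beta)
    x = [m * a + (1 - m) * b for a, b in zip(x_i, x_j)]
    y = [m * a + (1 - m) * b for a, b in zip(y_i, y_j)]
    return x, y

def crl_pair_loss(c_i, c_j, s_i, s_j):
    """Correctness ranking loss for one pair, Eq. (5): penalize pairs whose
    confidence gap (s_i - s_j) disagrees with the gap in historical
    correctness (c_i - c_j)."""
    sign = (c_i > c_j) - (c_i < c_j)
    return max(0.0, abs(c_i - c_j) - sign * (s_i - s_j))
```

With β = 10 the Beta(10, 10) draw concentrates around 0.5, so mixed pairs are heavily interpolated; `crl_pair_loss` vanishes once the confidence ordering matches the correctness ordering with a large enough margin.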
Hard samples, which are less likely to be correctly predicted during training, are encouraged to have lower confidence and thus higher entropy.

Cosine Similarity Classifier (CSC) CSC has been shown to be effective in few-shot classification [23, 33] by simply replacing the last linear layer with a cosine classifier. For an image $x_i$, we denote the classification logit for $x_i$ belonging to class $k$ as $s_i^k$, defined as follows:

$s_i^k = \tau \cdot \cos(f_\theta(x_i), w_k) = \tau \cdot \frac{f_\theta(x_i)}{\|f_\theta(x_i)\|_2} \cdot \frac{w_k}{\|w_k\|_2}$, (6)

where $\tau$ is a temperature hyper-parameter, $f_\theta$ is a DNN parameterized by $\theta$, used to extract features of input images, and $w_k$, the weight of the $k$-th class, represents the $k$-th class prototype. CSC encourages the classifier to focus on the directional alignment between the feature vector extracted from the input image and the class prototype vector, rather than on their dot product. This makes it conceptually distinct from the traditional linear classifier, where magnitude plays a significant role. A key benefit of using CSC is its ability to handle hard samples better: CSC views hard samples as equidistant in angle to several class prototypes, leading to more effective interpretation and potentially higher entropy than a traditional linear classifier based on the dot product.

3.2. Flat minima-enforced optimization

We jointly employ Sharpness-Aware Minimization (SAM) [19] and Stochastic Weight Averaging (SWA) [35] to enforce flat minima. Note that these two techniques are also jointly used in [81] to improve uncertainty estimation.

Sharpness-Aware Minimization (SAM) SAM [8, 19] is an optimization method that enhances model generalization by seeking parameters lying in flat neighborhoods such that the DNN has a uniformly small loss. For our objective function $\mathcal{L}_{total}$ and DNN parameters $\theta$, the SAM optimizer seeks $\theta$ satisfying:

$\min_\theta \max_{\|\epsilon\|_2 \le \rho} \mathcal{L}_{total}(\theta + \epsilon)$, (7)

where $\epsilon$ is a perturbation vector and $\rho$ is the neighborhood size within which we seek to minimize the sharpness of the loss.
The SAM algorithm proceeds by alternating between finding the worst-case perturbation $\epsilon$ that maximizes the loss within the $\ell_2$-norm ball of radius $\rho$, and updating the model parameters $\theta$ to minimize this perturbed loss.

Stochastic Weight Averaging (SWA) SWA, introduced in [35], improves the generalization of DNNs by averaging model weights over the course of training. The process begins with a standard training phase, after which SWA starts averaging the weights at each subsequent epoch. The SWA weight update is given by:

$\theta_{SWA} = \frac{1}{T}\sum_{t=1}^{T} \theta_t$, (8)

where $\theta_t$ represents the model weights at epoch $t$, and $T$ is the total number of epochs during which SWA is applied.

3.3. Implementation details

Following [81], our models are trained using SAM [19] with stochastic gradient descent (SGD) as the base optimizer with a momentum of 0.9, an initial learning rate of 0.1, and a weight decay of 5e-4, over 200 epochs with a batch size of 128. We employ a cosine annealing learning rate schedule, set the SWA [35] start epoch to 120, and use an SWA-specific learning rate of 0.05 to enhance training effectiveness and model robustness. We set $\beta = 10$ in (3) for the Mixup data augmentation, following [59]. All hyper-parameters, including $\lambda_{mix}$, $\lambda_{crl}$, and $\tau$, are selected on the validation set. Ablation studies of $\lambda_{mix}$ in (1), $\lambda_{crl}$ in (1), and $\tau$ in (6) are provided in Appendices A and B. For fine-tuning DeiT-Base [70] from the ImageNet [14] pre-trained model, we set the learning rate to 0.01 with a weight decay of 5e-5 over 50 epochs, set the SWA start epoch to 1, and use an SWA-specific learning rate of 0.004.

4. Experiments

In this section, we evaluate the performance of SURE on failure prediction and further explore SURE's ability to tackle real-world challenges, including long-tailed classification, learning with noisy labels, and generalization under image corruption.
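The alternating SAM update can be illustrated on a one-dimensional loss (a toy sketch of the scheme in Eq. (7); `sam_step` is our name, not the official optimizer):

```python
def sam_step(theta, grad_fn, lr=0.1, rho=0.05):
    """One SAM update on a scalar parameter: first ascend to the worst-case
    perturbation within radius rho (in 1-D, eps = rho * sign(grad)), then
    take the SGD step using the gradient at the perturbed point."""
    g = grad_fn(theta)
    eps = 0.0 if g == 0 else rho * (1.0 if g > 0 else -1.0)
    return theta - lr * grad_fn(theta + eps)

# For L(theta) = theta**2 (gradient 2*theta), starting from theta = 1.0:
# eps = 0.05, perturbed gradient = 2 * 1.05 = 2.1, update = 1.0 - 0.1 * 2.1
```

In higher dimensions the worst-case perturbation is $\epsilon = \rho \, \nabla\mathcal{L} / \|\nabla\mathcal{L}\|_2$; the 1-D sign rule above is the scalar special case.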
We first introduce the datasets used in our experiments and outline the key metrics in Section 4.1. Then, we present results on failure prediction in Section 4.2. Results on long-tailed classification are presented in Section 4.3. In Section 4.4, we present results for learning with noisy labels. Performance on corrupted images is provided in Section 4.5. Finally, we present analysis in Section 4.6.

4.1. Datasets and evaluation metrics

CIFAR10, CIFAR100 and Tiny-ImageNet We use CIFAR [40] and Tiny-ImageNet [41] to evaluate failure prediction. The CIFAR datasets are commonly used in the community [59, 81, 82], and we use Tiny-ImageNet [41] as a larger dataset to evaluate the effectiveness and robustness of our proposed method. The CIFAR10 dataset contains 60,000 color images with a resolution of 32×32, divided into 10 classes, each holding 5,000 training images and 1,000 testing images. The CIFAR100 dataset follows a similar structure, but with 100 classes; each class contains 500 training samples and 100 testing samples. Tiny-ImageNet [41] contains 100,000 images of 200 classes downsized to 64×64 color images, which are a subset of the ImageNet dataset [14]. Each class has 500 training images, and 50 images per class are collected for testing. Note that for all our experiments, we keep 10% of the training set as our validation set. We report the means and standard deviations over three runs.

Long-Tailed CIFAR: CIFAR10-LT and CIFAR100-LT We use CIFAR10-LT and CIFAR100-LT [12] to evaluate long-tailed classification. Note that these datasets are widely used as evaluation datasets in the community [1, 16, 83]. Following previous works [1, 16, 83], the datasets are created by keeping only a number of training samples per class given by the exponential function $\tilde{N}_i = N_i \mu^i$, where $i$ is the class index, $N_i$ is the number of training images in the $i$-th class, and $\mu \in (0, 1)$.
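The exponential profile $\tilde{N}_i = N_i \mu^i$ can be reproduced in a few lines (a sketch; the helper name is ours, and we assume every class starts from the same maximum count, as in CIFAR):

```python
def longtail_counts(n_max, num_classes, imbalance_factor):
    """Per-class training-set sizes for an exponentially imbalanced split:
    mu is chosen so that the max/min class ratio equals the imbalance
    factor IF, i.e. mu ** (num_classes - 1) == 1 / IF."""
    mu = (1.0 / imbalance_factor) ** (1.0 / (num_classes - 1))
    return [round(n_max * mu ** i) for i in range(num_classes)]

# CIFAR10-LT with IF = 10: class sizes decay from 5000 down to 500
counts = longtail_counts(5000, 10, 10)
```

The test sets remain balanced, so a model must cope with the head/tail skew only at training time.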
The imbalance factor IF quantifies the level of distribution imbalance and determines $\mu$; it is defined as the ratio between the maximum and the minimum number of samples in a category. The test set remains unchanged.

Animal-10N and Food-101N Animal-10N [65] and Food-101N [42] are two real-world datasets where noisy labels are present in the training set. Animal-10N is a benchmark that contains 10 animal classes with confusing appearance. The training set size is 50,000, and the test set is 5,000. The estimated label noise ratio of the training set is 8%. No data augmentation is applied, following the settings in [65]. Food-101N contains 310,009 training images of different food recipes collected online and classified into 101 classes. The training set has an approximate noise ratio of 20%. Following [42], the learned models are evaluated on the test set of Food-101 with 25,250 cleanly labeled images.

CIFAR10-C To evaluate the model's robustness, we use the CIFAR10-C dataset [30], which applies 15 common image corruptions, e.g., Gaussian noise, impulse noise, motion blur, frost, etc., to the CIFAR10 [40] test set. Each type of corruption is characterized by five severity levels, as these corruptions can occur at different intensities.
| Backbone | Method | C10 Acc.↑ | C10 AURC↓ | C10 AUROC↑ | C10 FPR95↓ | C100 Acc.↑ | C100 AURC↓ | C100 AUROC↑ | C100 FPR95↓ | TIN Acc.↑ | TIN AURC↓ | TIN AUROC↑ | TIN FPR95↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ResNet-18 [28] | MSP [31] | 94.89±0.20 | 6.78±0.33 | 92.20±0.55 | 38.73±2.89 | 75.87±0.31 | 69.44±2.11 | 87.00±0.21 | 60.73±1.16 | 63.39±0.59 | 136.50±1.08 | 85.62±0.35 | 63.99±0.64 |
| ResNet-18 [28] | RegMixup [59] | 95.69±0.13 | 4.74±0.27 | 92.96±0.29 | 34.26±1.98 | 77.90±0.37 | 59.23±1.65 | 87.61±0.13 | 58.65±0.43 | 66.36±0.43 | 115.08±1.98 | 86.53±0.27 | 62.54±0.43 |
| ResNet-18 [28] | CRL [54] | 94.85±0.10 | 5.09±0.28 | 93.64±0.48 | 35.33±1.73 | 76.42±0.21 | 62.78±0.21 | 88.07±0.17 | 59.02±0.39 | 65.50±0.03 | 117.46±0.56 | 87.01±0.13 | 61.15±0.07 |
| ResNet-18 [28] | SAM [19] | 95.30±0.25 | 3.97±0.33 | 94.53±0.31 | 31.13±3.62 | 76.60±0.21 | 62.97±1.02 | 87.72±0.10 | 59.35±0.87 | 64.95±0.21 | 120.04±2.11 | 87.19±0.57 | 59.98±0.55 |
| ResNet-18 [28] | SWA [35] | 95.38±0.09 | 4.00±0.21 | 94.40±0.50 | 35.70±1.44 | 77.65±0.19 | 55.87±0.32 | 88.55±0.25 | 60.43±1.90 | 68.09±0.19 | 102.11±0.51 | 87.27±0.15 | 60.63±1.38 |
| ResNet-18 [28] | FMFP [81] | 95.60±0.09 | 3.56±0.06 | 94.74±0.10 | 33.49±0.33 | 77.82±0.08 | 55.03±0.52 | 88.59±0.07 | 59.79±0.31 | 68.18±0.42 | 100.93±2.12 | 87.45±0.05 | 60.18±1.26 |
| ResNet-18 [28] | SURE | 96.14±0.16 | 2.97±0.13 | 95.08±0.04 | 28.64±0.66 | 80.49±0.18 | 45.81±0.15 | 88.73±0.24 | 58.91±0.58 | 69.55±0.10 | 93.46±0.82 | 87.67±0.12 | 60.13±0.32 |
| VGG [64] | MSP [31] | 93.30±0.21 | 10.41±0.33 | 90.71±0.04 | 44.66±1.81 | 72.43±0.42 | 91.40±1.95 | 85.69±0.90 | 64.41±1.66 | 59.52±0.62 | 156.45±2.51 | 86.33±0.63 | 63.79±0.95 |
| VGG [64] | RegMixup [59] | 94.11±0.28 | 9.89±0.81 | 89.90±0.26 | 39.93±1.58 | 73.51±0.18 | 85.98±1.05 | 86.35±0.32 | 61.70±1.83 | 63.04±0.57 | 146.72±2.59 | 85.60±0.39 | 59.00±1.27 |
| VGG [64] | CRL [54] | 93.42±0.09 | 7.61±0.44 | 92.88±0.56 | 39.66±2.83 | 72.63±0.27 | 80.94±0.47 | 87.37±0.28 | 61.96±0.77 | 60.20±0.36 | 146.76±1.42 | 87.42±0.28 | 59.26±1.44 |
| VGG [64] | SAM [19] | 94.11±0.06 | 5.97±0.08 | 93.68±0.13 | 37.21±2.92 | 73.33±0.36 | 77.44±0.75 | 87.42±0.33 | 63.19±0.58 | 61.24±0.07 | 142.54±1.04 | 86.82±0.25 | 62.93±1.12 |
| VGG [64] | SWA [35] | 93.76±0.25 | 6.64±0.24 | 93.43±0.16 | 40.44±1.27 | 73.98±0.16 | 74.23±0.58 | 87.30±0.14 | 62.89±1.80 | 62.48±0.19 | 137.01±0.71 | 86.29±0.16 | 62.15±1.64 |
| VGG [64] | FMFP [81] | 94.26±0.23 | 5.89±0.16 | 93.46±0.26 | 40.67±3.14 | 74.77±0.31 | 70.07±1.26 | 87.58±0.19 | 60.98±1.16 | 62.95±0.16 | 134.04±1.42 | 86.36±0.12 | 61.71±1.08 |
| VGG [64] | SURE | 95.00±0.11 | 4.98±0.24 | 93.79±0.62 | 35.92±2.95 | 76.51±0.07 | 65.25±0.17 | 87.59±0.07 | 60.27±0.60 | 63.75±0.11 | 131.40±0.28 | 86.12±0.19 | 63.04±1.05 |
| DenseNet [34] | MSP [31] | 94.72±0.23 | 5.94±0.23 | 93.00±0.45 | 37.00±0.31 | 75.14±0.07 | 74.68±0.32 | 86.22±0.22 | 62.79±0.80 | 57.90±0.25 | 180.08±2.52 | 83.65±0.29 | 68.61±0.37 |
| DenseNet [34] | RegMixup [59] | 95.13±0.22 | 6.03±0.50 | 92.20±0.80 | 38.63±1.63 | 77.29±0.16 | 63.96±1.15 | 86.57±0.07 | 63.76±1.10 | 61.96±0.09 | 147.22±1.57 | 84.91±0.17 | 65.92±0.40 |
| DenseNet [34] | CRL [54] | 94.79±0.02 | 5.58±0.42 | 93.22±0.61 | 37.34±2.73 | 76.09±0.06 | 65.96±0.62 | 87.41±0.11 | 60.67±0.72 | 58.80±0.56 | 169.44±3.74 | 84.49±0.04 | 66.05±0.60 |
| DenseNet [34] | SAM [19] | 95.31±0.10 | 4.25±0.17 | 94.15±0.46 | 33.33±1.27 | 78.17±0.26 | 57.20±0.73 | 86.99±0.23 | 61.42±0.74 | 60.49±0.31 | 158.94±3.86 | 84.39±0.57 | 66.51±1.85 |
| DenseNet [34] | SWA [35] | 94.86±0.09 | 4.65±0.18 | 94.27±0.27 | 35.78±4.61 | 78.17±0.26 | 57.20±0.73 | 87.23±0.22 | 63.33±0.63 | 60.74±0.46 | 159.68±3.12 | 83.83±0.07 | 68.03±0.75 |
| DenseNet [34] | FMFP [81] | 95.07±0.15 | 4.11±0.19 | 94.74±0.06 | 34.67±0.48 | 78.33±0.40 | 54.88±1.62 | 87.92±0.46 | 60.52±1.12 | 61.18±0.72 | 154.98±3.72 | 84.29±0.26 | 66.66±1.21 |
| DenseNet [34] | OpenMix [82]§ | 95.51±0.23 | 4.68±0.72 | 93.57±0.81 | 33.57±3.70 | 78.97±0.31 | 53.83±0.93 | 87.45±0.18 | 62.22±1.15 | - | - | - | - |
| DenseNet [34] | SURE | 95.57±0.06 | 3.51±0.09 | 94.91±0.25 | 29.52±0.56 | 80.02±0.13 | 46.69±0.59 | 88.78±0.26 | 58.37±0.39 | 62.61±0.18 | 142.59±2.16 | 84.31±0.42 | 65.39±2.12 |
| WRNet [76] | MSP [31] | 95.71±0.17 | 5.90±0.89 | 92.19±0.82 | 35.95±3.75 | 79.15±0.19 | 53.02±0.89 | 88.21±0.06 | 59.46±1.23 | 67.52±0.18 | 107.97±0.80 | 86.78±0.20 | 61.68±0.99 |
| WRNet [76] | RegMixup [59] | 97.03±0.04 | 3.47±0.26 | 93.10±0.56 | 26.16±1.17 | 82.14±0.47 | 47.01±2.12 | 87.70±0.17 | 55.24±1.19 | 69.63±0.09 | 95.96±0.21 | 87.38±0.21 | 59.09±0.75 |
| WRNet [76] | CRL [54] | 95.87±0.08 | 3.85±0.20 | 94.10±0.06 | 32.73±1.22 | 80.10±0.28 | 47.99±1.08 | 88.43±0.34 | 59.44±1.45 | 69.00±0.22 | 97.46±0.90 | 87.42±0.23 | 61.02±1.71 |
| WRNet [76] | SAM [19] | 96.47±0.11 | 2.91±0.38 | 94.79±0.29 | 28.05±1.56 | 80.67±0.31 | 44.93±0.87 | 89.01±0.31 | 56.60±1.30 | 69.86±0.37 | 93.66±2.03 | 87.49±0.30 | 60.44±1.19 |
| WRNet [76] | SWA [35] | 94.86±0.09 | 4.65±0.18 | 94.27±0.27 | 35.78±4.61 | 81.31±0.33 | 41.15±0.89 | 89.39±0.16 | 57.57±1.97 | 71.27±0.16 | 84.97±0.12 | 87.71±0.26 | 60.00±2.42 |
| WRNet [76] | FMFP [81] | 96.47±0.12 | 2.33±0.08 | 95.73±0.01 | 26.68±2.62 | 81.66±0.12 | 39.60±0.15 | 89.51±0.10 | 56.41±1.44 | 71.62±0.04 | 83.04±0.16 | 87.78±0.03 | 60.09±0.83 |
| WRNet [76] | OpenMix [82]§ | 97.16±0.10 | 2.32±0.15 | 94.81±0.34 | 22.08±1.86 | 82.63±0.06 | 39.61±0.54 | 89.06±0.11 | 55.00±1.29 | - | - | - | - |
| WRNet [76] | SURE | 97.02±0.20 | 1.79±0.16 | 96.18±0.01 | 19.53±1.23 | 83.71±0.10 | 32.10±0.28 | 90.33±0.18 | 54.34±0.29 | 73.34±0.36 | 74.11±0.97 | 88.23±0.31 | 58.17±1.50 |
| DeiT-B⋆ [70] | MSP [31] | 98.28±0.08 | 0.97±0.02 | 95.76±0.28 | 20.47±5.38 | 89.71±0.03 | 17.66±0.56 | 90.40±0.25 | 50.99±0.61 | - | - | - | - |
| DeiT-B⋆ [70] | RegMixup [59] | 98.90±0.04 | 0.89±0.05 | 94.30±0.25 | 24.98±3.87 | 90.79±0.11 | 15.38±0.51 | 90.34±0.33 | 52.01±1.76 | - | - | - | - |
| DeiT-B⋆ [70] | CRL [54] | 98.27±0.04 | 0.99±0.11 | 95.85±0.44 | 19.65±2.51 | 89.74±0.16 | 17.61±0.71 | 90.30±0.18 | 51.58±0.23 | - | - | - | - |
| DeiT-B⋆ [70] | SAM [19] | 98.62±0.10 | 0.58±0.09 | 96.89±0.34 | 15.74±1.71 | 90.43±0.17 | 15.29±0.19 | 90.75±0.15 | 50.02±1.52 | - | - | - | - |
| DeiT-B⋆ [70] | SWA [35] | 98.44±0.07 | 0.82±0.03 | 96.11±0.20 | 17.78±3.23 | 90.17±0.34 | 15.37±0.44 | 90.86±0.38 | 50.64±3.37 | - | - | - | - |
| DeiT-B⋆ [70] | FMFP [81] | 98.76±0.02 | 0.46±0.02 | 97.15±0.16 | 16.17±0.55 | 90.53±0.13 | 14.30±0.18 | 91.15±0.32 | 51.90±1.50 | - | - | - | - |
| DeiT-B⋆ [70] | SURE | 98.92±0.07 | 0.86±0.08 | 94.37±0.69 | 27.52±3.11 | 91.18±0.01 | 13.79±0.29 | 90.85±0.05 | 48.81±0.39 | - | - | - | - |

§ reports the results given by models trained on extra outliers and all the training data of CIFAR10 [40] / CIFAR100 [40]. ⋆ reports the results given by fine-tuning an ImageNet [14] pre-trained DeiT-B [70] for 50 epochs.

Table 1. Comparison of failure prediction performance on CIFAR10 [40] (C10), CIFAR100 [40] (C100), and Tiny-ImageNet [41] (TIN). We keep 10% of the training data as the validation set to select the best model. Means and standard deviations over three runs are reported. ↓ and ↑ indicate that lower and higher values are better, respectively. AURC [22] values are multiplied by 10³; all remaining values are percentages.

Evaluation Metrics We report metrics that are commonly used in the failure prediction community to assess the
performance of our model, including Accuracy (Acc.), Area Under the Risk-Coverage Curve (AURC) [22], Area Under the Receiver Operating Characteristic Curve (AUROC) [13], and False Positive Rate at 95% True Positive Rate (FPR95). Specifically, we leverage AURC, which complements Accuracy by measuring the uncertainty of the model. AURC measures the area under the curve obtained by plotting risk against coverage. Given a confidence threshold, the coverage is the fraction of samples whose confidence estimates exceed the threshold, and the risk, also known as the selective risk [21], is the error rate computed over those samples. A lower AURC implies both higher accuracy and that correct and erroneous predictions are well separated by a confidence threshold. The definitions of AUROC [13] and FPR95 are detailed in Appendix D.

4.2. Failure prediction

We present results on failure prediction on CIFAR10 [40], CIFAR100 [40], and Tiny-ImageNet [41] in Table 1. Experiments are conducted with different backbones: ResNet18 [28], VGG16-BN [64], DenseNetBC [34], WRNet28 [76], and DeiT [70]. These architectures and datasets are commonly used in the community [59, 81, 82]. Note that to ensure the reliability of our model and maintain the rigor and fairness of our experiments, we split 10% of the training data into a validation set for hyper-parameter selection and report performance on the test set. All experiments are repeated three times, and we report the mean and standard deviation in the table. From Table 1, we can see that our SURE achieves significantly better performance on almost all metrics than all competitive approaches across different datasets and diverse architectures, which demonstrates the effectiveness and robustness of our proposed approach.
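The risk-coverage construction described above can be sketched in a few lines. This is an illustrative NumPy implementation that computes AURC as the mean selective risk over all discrete coverage levels (a common discrete approximation), not the authors' released evaluation code:

```python
import numpy as np

def aurc(confidences, correct):
    """Area under the risk-coverage curve (AURC).

    Sort samples by decreasing confidence; at each coverage level the
    risk is the error rate among the covered (most confident) samples.
    AURC is the average risk over all coverage levels.
    """
    order = np.argsort(-np.asarray(confidences, dtype=float))
    errors = 1.0 - np.asarray(correct, dtype=float)[order]
    risks = np.cumsum(errors) / np.arange(1, len(errors) + 1)
    return float(risks.mean())

# A model whose only error is also its least confident prediction
# attains a much lower AURC than one whose error is most confident.
conf = [0.99, 0.95, 0.90, 0.60, 0.55]
low = aurc(conf, [1, 1, 1, 1, 0])   # error is least confident
high = aurc(conf, [0, 1, 1, 1, 1])  # error is most confident
print(low, high)
```

A lower value indicates that thresholding on confidence separates correct from erroneous predictions well, matching the interpretation given in the text.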
Note that even though the latest approach OpenMix [82] trains on all the training data as well as additional outlier data, our SURE still maintains a significant performance gain without using any additional data.

4.3. Long-tailed classification

Uncertainty-aware re-weighting. When the training data distribution is imbalanced, we find that the second-stage uncertainty-aware re-weighting consistently improves performance. Note that the two-stage training strategy is commonly used in the long-tailed classification community [1, 4, 38, 69, 80]. The key difference is that we use the uncertainty scores obtained from our first-stage training for re-weighting. Precisely, during the first epoch of re-weighting, we save the maximum softmax score s_i of each sample in the training set to serve as its uncertainty score. We then re-weight the cross-entropy loss of each sample by the exponential mapping e^(-s_i) and normalize the weights of all samples in a training batch so that they sum to one. This re-weighting is carried out over 50 epochs with a learning rate of 5e-3. Further ablation studies on variations of our re-weighting mapping are in Appendix F.

Comparison to state-of-the-art approaches. We also conduct a fair comparison to state-of-the-art approaches on CIFAR10-LT [12] and CIFAR100-LT [12] with different imbalance factors. For a fair comparison in this task, we train SURE with ResNet32 [28], the most commonly used backbone in the community. The results are presented in Table 2. Although SURE is not originally designed for long-tailed classification, when equipped with the second-stage uncertainty-aware re-weighting it achieves competitive results compared to task-specific solutions. The results suggest that leveraging uncertainty estimation for downstream applications is promising, especially when using SURE to train the DNNs.

Methods                  CIFAR10-LT [12]          CIFAR100-LT [12]
                       IF=100  IF=50  IF=10     IF=100  IF=50  IF=10
CE                      70.40  74.80  86.40      38.30  43.90  55.70
Mixup [77]              73.06  77.82  87.10      39.54  54.99  58.02
CB-Focal [12]           74.57  79.27  87.10      39.60  45.17  57.99
LDAM-DRW [4]            77.03  81.03  88.16      42.04  46.62  58.71
SSP [73]                77.83  82.13  88.53      43.43  47.11  58.91
BBN [80]                79.82  81.18  88.32      42.56  47.02  59.12
Causal model [69]       80.60  83.60  88.50      44.10  50.30  59.60
MetaSAug-LDAM [45]      80.66  84.34  89.68      48.01  52.27  61.28
Hybrid-SC [71]          81.40  85.36  91.12      46.72  51.87  63.05
ResLT [11]              82.40  85.17  89.70      48.21  52.71  62.01
Dynamic Loss [37]       82.95  88.30  91.24      50.14  54.51  63.99
BCL [83]                84.32  87.24  91.12      51.93  56.59  64.87
GLMC [16]               87.75  90.18  94.04      55.88  61.08  70.74
SURE                    83.28  87.72  93.73      51.60  58.57  71.13
GLMC + MaxNorm [1]      87.57  90.22  94.03      57.11  62.32  72.33
SURE + re-weighting     86.93  90.22  94.96      57.34  63.13  73.24

Table 2. Top-1 accuracy (%) of ResNet32 [28] on CIFAR10-LT and CIFAR100-LT [12] with imbalance factors (IF) of 100, 50, and 10. SURE, enhanced with re-weighting, achieves top-1 accuracy comparable to the SOTA method GLMC [16] + MaxNorm [1].

4.4. Learning with noisy labels

For learning with noisy labels, we report top-1 test accuracy on the Animal-10N [65] and Food-101N [42] benchmarks in Tables 3 and 4, respectively. On Animal-10N, our SURE outperforms the baseline trained with cross-entropy loss by 9.6%. Compared with NCT [6], which uses two backbones for training, SURE improves performance by 4.9% while training only one backbone. Moreover, SURE achieves higher accuracy than SSR+ [17], which is designed for noisy labels and employs techniques such as sample selection and relabelling. On Food-101N (Table 4), although SURE is not designed for learning with noisy labels, with its default settings it significantly outperforms all current SOTAs by at least 1.3%. Results on both benchmarks verify SURE's robustness to datasets with label noise.

Methods   CE [78]  SELFIE [65]  PLC [78]  NCT [6]  Dynamic Loss [37]  SSR+ [17]  Jigsaw-ViT⋆ [7]  SURE
Acc. (%)  79.4     81.8         83.4      84.1     86.5               88.5       89.0             89.0

⋆ is with DeiT-S [70] and an extra self-supervised loss. The others are with VGG19-BN [64].
Table 3. Comparison of SOTA approaches on the learning-with-noisy-labels task on Animal-10N [65] (noise ratio ∼8%). Top-1 test accuracy (%) is reported.

Methods   CE [78]  CleanNet [42]  MWNet [63]  SMP [27]  NRank [62]  PLC [78]  WarPI [67]  Jigsaw-ViT⋆ [7]  SURE
Acc. (%)  81.7     83.5           84.7        85.1      85.2        85.3      85.9        86.7             88.0

⋆ is with DeiT-S [70] and an extra self-supervised loss. The others are with ResNet-50 [28].
Table 4. Comparison of SOTA approaches on the learning-with-noisy-labels task on Food-101N [42] (noise ratio ∼20%). Top-1 test accuracy (%) is reported.

Method                   λcrl  λmix  SAM  SWA  CSC   Acc.↑        AURC↓        AUROC↑       FPR95↓
Baseline (MSP)            0     0    ✗    ✗    ✗    75.87±0.31   69.44±2.11   87.00±0.21   60.73±1.16
SAM                       0     0    ✓    ✗    ✗    76.60±0.21   62.97±1.02   87.72±0.10   59.35±0.87
SWA                       0     0    ✗    ✓    ✗    77.65±0.19   55.87±0.32   88.55±0.25   60.43±1.90
CSC                       0     0    ✗    ✗    ✓    74.05±0.18   78.14±0.26   86.82±0.24   63.56±1.20
FMFP                      0     0    ✓    ✓    ✗    77.82±0.08   55.03±0.52   88.59±0.07   59.79±0.31
SAM + CSC                 0     0    ✓    ✗    ✓    75.97±0.39   64.20±1.55   88.06±0.19   59.36±1.21
SWA + CSC                 0     0    ✗    ✓    ✓    78.46±0.33   55.68±0.41   87.74±0.44   61.22±2.54
FMFP + CSC                0     0    ✓    ✓    ✓    78.45±0.13   54.18±0.47   88.23±0.20   60.05±1.03
CRL                       1     0    ✗    ✗    ✗    76.42±0.21   62.78±0.21   88.07±0.17   59.02±0.39
CRL + SAM                 1     0    ✓    ✗    ✗    76.98±0.32   59.71±1.39   88.26±0.07   59.52±1.92
CRL + SWA                 1     0    ✗    ✓    ✗    77.56±0.20   56.88±0.28   88.24±0.45   61.73±1.77
CRL + CSC                 1     0    ✗    ✗    ✓    75.61±0.46   67.83±1.98   87.84±0.11   59.80±2.16
CRL + FMFP                1     0    ✓    ✓    ✗    77.71±0.54   56.24±0.89   88.21±0.44   61.75±1.74
CRL + SAM + CSC           1     0    ✓    ✗    ✓    78.21±0.53   53.55±3.28   88.86±0.45   56.37±1.71
CRL + SWA + CSC           1     0    ✗    ✓    ✓    78.09±0.10   56.61±0.91   87.78±0.21   61.37±1.56
CRL + FMFP + CSC          1     0    ✓    ✓    ✓    78.24±0.18   55.01±0.44   88.14±0.11   60.48±0.27
Reg                       0     1    ✗    ✗    ✗    76.99±1.19   63.09±4.22   87.71±0.13   58.78±0.50
Reg + SAM                 0     1    ✓    ✗    ✗    77.45±0.55   60.68±3.75   87.70±0.39   58.72±1.42
Reg + SWA                 0     1    ✗    ✓    ✗    78.55±0.62   52.31±2.10   88.71±0.22   58.99±2.07
Reg + CSC                 0     1    ✗    ✗    ✓    78.32±0.28   62.40±0.58   86.57±0.34   58.77±2.27
Reg + FMFP                0     1    ✓    ✓    ✗    79.04±0.50   50.09±1.00   88.89±0.20   58.47±0.88
Reg + SAM + CSC           0     1    ✓    ✗    ✓    78.91±0.34   57.43±2.25   87.16±0.23   58.35±0.22
Reg + SWA + CSC           0     1    ✗    ✓    ✓    80.17±0.52   49.87±1.86   87.89±0.10   61.08±1.06
Reg + FMFP + CSC          0     1    ✓    ✓    ✓    79.88±0.07   48.58±0.34   88.50±0.20   58.52±0.75
CRL + Reg                 1     1    ✗    ✗    ✗    78.38±0.17   52.93±1.19   88.97±0.38   56.12±1.33
CRL + Reg + SAM           1     1    ✓    ✗    ✗    78.21±0.53   53.55±3.28   88.86±0.45   56.37±1.71
CRL + Reg + SWA           1     1    ✗    ✓    ✗    78.64±0.16   50.96±1.01   88.96±0.31   59.27±1.47
CRL + Reg + CSC           1     1    ✗    ✗    ✓    79.42±0.11   54.35±0.91   87.59±0.20   59.67±0.53
CRL + Reg + FMFP          1     1    ✓    ✓    ✗    79.17±0.30   49.96±1.63   88.70±0.20   59.85±2.07
CRL + Reg + SAM + CSC     1     1    ✓    ✗    ✓    79.10±0.34   56.39±1.25   87.44±0.16   56.98±0.31
CRL + Reg + SWA + CSC     1     1    ✗    ✓    ✓    79.63±0.27   49.14±0.22   88.51±0.34   59.28±2.14
SURE                      1     1    ✓    ✓    ✓    80.49±0.18   45.81±0.15   88.73±0.24   58.91±0.58

Table 5. Ablation study of the different components used in SURE and their combinations on CIFAR100 [40]. Loss: λcrl, λmix; optimization: SAM, SWA; classifier: CSC.

Figure 3. Comparison of the average AUROC [13] (higher is better) and AURC [22] (lower is better) of MSP, RegMixup, CRL, SAM, SWA, FMFP, and SURE on CIFAR10-C [30] across severity levels 1-5. We use DenseNet [34] as the backbone and train on the standard CIFAR10 training set. The evaluation results are averaged across the images with 15 types of corruption under 5 severity levels.

4.5. Failure prediction under distribution shift

In real-world applications, environmental conditions are prone to change frequently, such as shifts in weather from sunny to cloudy and then to rainy. It is crucial for models to maintain reliable decision-making capabilities under such distribution or domain shifts.
To emulate these scenarios, we evaluate our model trained on the clean training set of CIFAR10 (the same training set as in Section 4.2) on the corruption dataset CIFAR10-C [30]. We present the average AUROC and AURC over the 15 corruptions for different approaches in Figure 3. Our SURE significantly enhances failure prediction performance across the spectrum of corruptions. Compared to our baseline model, the SURE-based model demonstrates a notable improvement: the average AURC is reduced from 309 to 229. These results highlight SURE's robustness and adaptability in dynamically changing environments. Note that the performance on each corruption is presented in Appendix E.

4.6. Analysis

Ablation study. To further analyze SURE, we examine the contribution of each component to our model's performance on CIFAR100 in Table 5. We report the means and standard deviations over three runs in our ablations with ResNet18 [28]. Starting from our baseline model, MSP, we observe the incremental impact of adding techniques such as RegMixup, CRL, SAM, SWA, and the CSC to the SURE framework. Each addition to the SURE approach appears to improve accuracy and AURC, with the complete SURE method achieving the highest scores reported in the study. Among them, RegMixup and SWA contribute the most to performance, and the combination of RegMixup and FMFP is particularly important. This comprehensive analysis highlights the synergistic effect of our model's components, underscoring their collective importance in achieving optimal performance. Note that further analysis, such as the effect of the RegMixup regularization weight λmix and the CRL weight λcrl, is provided in Appendix A.

Figure 4. The visual results of confidence separation given by different methods on CIFAR100-LT [12] IF=10. SURE leads to better confidence separation than MSP [31] and FMFP [81].

Visualization. We provide a visualization of the confidence distribution on CIFAR100-LT [12] IF=10 in Figure 4.
One can see that SURE leads to clearly better confidence separation than MSP and FMFP. Compared with the competing approaches, the proposed method increases the uncertainty of misclassified samples while improving accuracy.

5. Conclusion

In this paper, we introduce SURE, a novel framework that integrates multiple techniques spanning model regularization, classifier design, and optimization, aiming to enhance the reliability and robustness of DNNs. Our work highlights the shortcomings of existing methods when dealing with the complex nature of real-world data. This insight underlines the imperative need for approaches like SURE. Through rigorous evaluation, SURE has consistently outperformed individual methods in failure prediction across various datasets and model architectures. Moreover, when applied to real-world challenges such as long-tailed classification, learning with noisy labels, and data corruption, it has not only yielded results comparable to state-of-the-art methods on long-tailed distribution datasets but has also excelled in scenarios with label noise. This work paves the way for the application of uncertainty estimation methods in various intricate real-world situations.

Acknowledgement

We thank Caizhi Zhu, Yuming Du and Yinqiang Zheng for inspiring discussions and valuable feedback.

References

[1] Shaden Alshammari, Yu-Xiong Wang, Deva Ramanan, and Shu Kong. Long-tailed recognition via weight balancing. In CVPR, 2022. 3, 5, 6, 7
[2] Murat Seçkin Ayhan, Laura Kühlewein, Gulnar Aliyeva, Werner Inhoffen, Focke Ziemssen, and Philipp Berens. Expert-validated estimation of diagnostic uncertainty for deep neural networks in diabetic retinopathy detection. Medical Image Analysis, 2020. 1
[3] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. NeurIPS, 32, 2019.
3 [4] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label- distribution-aware margin loss. In NeurIPS , 2019. 3, 6, 7 [5] Haw-Shiuan Chang, Erik Learned-Miller, and Andrew Mc- Callum. Active bias: Training more accurate neural networks by emphasizing high variance samples. In NeurIPS , 2017. 3 [6] Yingyi Chen, Shell Xu Hu, Xi Shen, Chunrong Ai, and Johan A. K. Suykens. Compressing features for learning with noisy labels. TNNLS , 2022. 3, 7 [7] Yingyi Chen, Xi Shen, Yahui Liu, Qinghua Tao, and Jo- han AK Suykens. Jigsaw-vit: Learning jigsaw puzzles in vision transformer. Pattern Recognition Letters , 2023. 2, 7 [8] Zixiang Chen, Junkai Zhang, Yiwen Kou, Xiangning Chen, Cho-Jui Hsieh, and Quanquan Gu. Why does sharpness- aware minimization generalize better than sgd? In NeurIPS , 2023. 4 [9] Jiwoong Choi, Dayoung Chun, Hyun Kim, and Hyuk-Jae Lee. Gaussian yolov3: An accurate and fast object detec- tor using localization uncertainty for autonomous driving. In ICCV , 2019. 1 [10] Charles Corbiere, Nicolas Thome, Antoine Saporta, Tuan- Hung Vu, Matthieu Cord, and Patrick Perez. Confidence es- timation via auxiliary models. IEEE TPAMI , 2021. 2 [11] Jiequan Cui, Shu Liu, Zhuotao Tian, Zhisheng Zhong, and Jiaya Jia. Reslt: Residual learning for long-tailed recogni- tion. IEEE TPAMI , 2022. 7 [12] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In CVPR , 2019. 2, 5, 7, 8, 13 [13] Jesse Davis and Mark Goadrich. The relationship between precision-recall and roc curves. In ICML , 2006. 6, 7, 12, 14 [14] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR , 2009. 5, 6 [15] Yifan Ding, Liqiang Wang, Deliang Fan, and Boqing Gong. A semi-supervised two-stage approach to learning from noisy labels. In WACV , 2018. 
3 [16] Fei Du, Peng Yang, Qi Jia, Fengtao Nan, Xiaoting Chen, and Yun Yang. Global and local mixture consistency cumulative learning for long-tailed visual recognitions. In CVPR , 2023. 3, 5, 7 [17] Chen Feng, Georgios Tzimiropoulos, and Ioannis Patras. Ssr: An efficient and robust framework for learning with un- known label noise. In BMVC , 2022. 3, 7[18] Di Feng, Lars Rosenbaum, and Klaus Dietmayer. Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection. In 2018 21st in- ternational conference on intelligent transportation systems (ITSC) , 2018. 1 [19] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In ICLR , 2020. 2, 3, 4, 5, 6, 13 [20] Gianni Franchi, Xuanlong Yu, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, and David Filliat. Latent dis- criminant deterministic uncertainty. In ECCV , 2022. 2 [21] Yonatan Geifman and Ran El-Yaniv. Selective classification for deep neural networks. In NeurIPS , 2017. 6 [22] Yonatan Geifman, Guy Uziel, and Ran El-Yaniv. Bias- reduced uncertainty estimation for deep neural classifiers. In ICLR , 2018. 6, 7, 13 [23] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR , 2018. 2, 4, 12 [24] Gianluca Giuffrida, Luca Fanucci, Gabriele Meoni, Matej Batiˇc, L ´eonie Buckley, Aubrey Dunne, Chris van Dijk, Marco Esposito, John Hefele, Nathan Vercruyssen, et al. Theϕ-sat-1 mission: The first on-board deep neural network demonstrator for satellite earth observation. IEEE Transac- tions on Geoscience and Remote Sensing , 2021. 1 [25] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In ICML , 2017. 2 [26] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co- teaching: Robust training of deep neural networks with ex- tremely noisy labels. 
In NeurIPS , 2018. 3 [27] Jiangfan Han, Ping Luo, and Xiaogang Wang. Deep self- learning from noisy labels. In ICCV , 2019. 7 [28] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR , 2016. 2, 6, 7, 8, 12, 13 [29] Wei He, Yuhao Chen, and Zhao Yin. Adaptive neural net- work control of an uncertain robot with full-state constraints. IEEE transactions on cybernetics , 2015. 1 [30] Dan Hendrycks and Thomas Dietterich. Benchmarking neu- ral network robustness to common corruptions and perturba- tions. In ICLR , 2019. 2, 5, 7, 8, 14 [31] Dan Hendrycks and Kevin Gimpel. A baseline for detect- ing misclassified and out-of-distribution examples in neural networks. In ICLR , 2017. 1, 2, 6, 8, 13 [32] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. ICLR , 2018. 2 [33] Shell Xu Hu, Pablo G Moreno, Yang Xiao, Xi Shen, Guil- laume Obozinski, Neil D Lawrence, and Andreas Damianou. Empirical bayes transductive meta-learning with synthetic gradients. In ICLR , 2020. 2, 4, 12 [34] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kil- ian Q Weinberger. Densely connected convolutional net- works. In CVPR , 2017. 2, 6, 7, 13, 14 [35] Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights 9 leads to wider optima and better generalization. arXiv , 2018. 2, 3, 4, 5, 6, 13 [36] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML , 2018. 3 [37] Shenwang Jiang, Jianan Li, Jizhou Zhang, Ying Wang, and Tingfa Xu. Dynamic loss for robust learning. IEEE TPAMI , 2023. 3, 7 [38] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decou- pling representation and classifier for long-tailed recogni- tion. In ICLR , 2020. 
3, 6 [39] Kyeongbo Kong, Junggi Lee, Youngchul Kwak, Minsung Kang, Seong Gyun Kim, and Woo-Jin Song. Recycling: Semi-supervised learning with noisy labels in deep neural networks. IEEE Access , 7:66998–67005, 2019. 3 [40] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Toronto, ON, Canada , 2009. 2, 5, 6, 8, 12 [41] Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N , 7(7):3, 2015. 2, 5, 6, 12 [42] Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. Cleannet: Transfer learning for scalable image classi- fier training with label noise. In CVPR , 2018. 2, 5, 7 [43] Christian Leibig, Vaneeda Allken, Murat Sec ¸kin Ayhan, Philipp Berens, and Siegfried Wahl. Leveraging uncertainty information from deep neural networks for disease detection. Scientific reports , 2017. 1 [44] Junnan Li, Richard Socher, and Steven CH Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In ICLR , 2020. 3 [45] Shuang Li, Kaixiong Gong, Chi Harold Liu, Yulin Wang, Feng Qiao, and Xinjing Cheng. Metasaug: Meta semantic augmentation for long-tailed visual recognition. In CVPR , 2021. 7 [46] Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhanc- ing the reliability of out-of-distribution image detection in neural networks. ICLR , 2017. 2 [47] Sheng Liu, Jonathan Niles-Weed, Narges Razavian, and Car- los Fernandez-Granda. Early-learning regularization pre- vents memorization of noisy labels. In NeurIPS , 2020. 3 [48] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. NeurIPS , 2020. 2 [49] Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. A general framework for uncertainty estimation in deep learn- ing. IEEE Robotics and Automation Letters , 2020. 1 [50] Xingjun Ma, Yisen Wang, Michael E Houle, Shuo Zhou, Sarah Erfani, Shutao Xia, Sudanthi Wijewickrema, and James Bailey. Dimensionality-driven learning with noisy la- bels. In ICML , 2018. 
3 [51] Eran Malach and Shai Shalev-Shwartz. Decoupling “when to update” from “how to update”. In NeurIPS , 2017. 3 [52] Pablo Miralles, Kathiravan Thangavel, Antonio Fulvio Scan- napieco, Nitya Jagadam, Prerna Baranwal, Bhavin Faldu, Ruchita Abhang, Sahil Bhatia, Sebastien Bonnart, IshitaBhatnagar, et al. A critical review on the state-of-the-art and future prospects of machine learning for earth observation operations. Advances in Space Research , 2023. 1 [53] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative ad- versarial networks. ICLR , 2018. 2 [54] Jooyoung Moon, Jihyo Kim, Younghak Shin, and Sangheum Hwang. Confidence-aware learning for deep neural net- works. In ICML , 2020. 2, 4, 6, 12, 13 [55] Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H.S. Torr, and Yarin Gal. Deep deterministic uncer- tainty: A new simple baseline. In CVPR , 2023. 2 [56] Tanya Nair, Doina Precup, Douglas L Arnold, and Tal Arbel. Exploring uncertainty measures in deep networks for mul- tiple sclerosis lesion detection and segmentation. Medical image analysis , 2020. 1 [57] Kento Nishi, Yi Ding, Alex Rich, and Tobias Hollerer. Aug- mentation strategies for learning with noisy labels. In CVPR , 2021. 3 [58] Shreyas Padhy, Zachary Nado, Jie Ren, Jeremiah Liu, Jasper Snoek, and Balaji Lakshminarayanan. Revisiting one-vs-all classifiers for predictive uncertainty and out-of-distribution detection in neural networks. In ICML Workshops , 2020. 2 [59] Francesco Pinto, Harry Yang, Ser Nam Lim, Philip Torr, and Puneet Dokania. Using mixup as a regularizer can surpris- ingly improve accuracy & out-of-distribution robustness. In NeurIPS , 2022. 1, 2, 3, 4, 5, 6, 12, 13 [60] Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. Identifying mislabeled data using the area under the margin ranking. In NeurIPS , 2020. 3 [61] Haoxuan Qu, Lin Geng Foo, Yanchao Li, and Jun Liu. 
To- wards more reliable confidence estimation. IEEE TPAMI , 2023. 2 [62] Karishma Sharma, Pinar Donmez, Enming Luo, Yan Liu, and I Zeki Yalniz. Noiserank: Unsupervised label noise re- duction with dependence models. In ECCV , 2020. 7 [63] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In NeurIPS , 2019. 7 [64] Karen Simonyan and Andrew Zisserman. Very deep con- volutional networks for large-scale image recognition. In CVPR , 2014. 2, 6, 7, 13 [65] Hwanjun Song, Minseok Kim, and Jae-Gil Lee. Selfie: Re- furbishing unclean samples for robust deep learning. In ICML , 2019. 2, 3, 5, 7 [66] Jacob Steinhardt and Percy S Liang. Unsupervised risk estimation using only conditional independence structure. NeurIPS , 2016. 2 [67] Haoliang Sun, Chenhui Guo, Qi Wei, Zhongyi Han, and Yi- long Yin. Learning to rectify for robust learning with noisy labels. Pattern Recognition , 2022. 3, 7 [68] Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiy- oharu Aizawa. Joint optimization framework for learning with noisy labels. In CVPR , 2018. 3 [69] Kaihua Tang, Jianqiang Huang, and Hanwang Zhang. Long- tailed classification by keeping the good and removing the bad momentum causal effect. In NeurIPS , 2020. 3, 6, 7 10 [70] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv ´e J´egou. Training data-efficient image transformers & distillation through at- tention. In ICML , 2021. 2, 5, 6, 7, 12 [71] Peng Wang, Kai Han, Xiu-Shen Wei, Lei Zhang, and Lei Wang. Contrastive learning based hybrid networks for long- tailed image classification. In CVPR , 2021. 3, 7 [72] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Com- bating noisy labels by agreement: A joint training method with co-regularization. In CVPR , 2020. 3 [73] Yuzhe Yang and Zhi Xu. Rethinking the value of labels for improving class-imbalanced learning. In NeurIPS , 2020. 
3, 7
[74] Kun Yi and Jianxin Wu. Probabilistic end-to-end noise correction for learning with noisy labels. In CVPR, 2019. 3
[75] Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In ICML, 2019. 3
[76] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016. 2, 6, 13
[77] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018. 2, 3, 7
[78] Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, and Chao Chen. Learning with feature-dependent label noise: A progressive approach. In ICLR, 2021. 3, 7
[79] Evgenii Zheltonozhskii, Chaim Baskin, Avi Mendelson, Alex M Bronstein, and Or Litany. Contrast to divide: Self-supervised pre-training for learning with noisy labels. In WACV, 2022. 3
[80] Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen. BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition. In CVPR, 2020. 3, 6, 7
[81] Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-Lin Liu. Rethinking confidence calibration for failure prediction. In ECCV, 2022. 2, 4, 5, 6, 8, 13
[82] Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-Lin Liu. OpenMix: Exploring outlier samples for misclassification detection. In CVPR, 2023. 2, 5, 6
[83] Jianggang Zhu, Zheng Wang, Jingjing Chen, Yi-Ping Phoebe Chen, and Yu-Gang Jiang. Balanced contrastive learning for long-tailed visual recognition. In CVPR, 2022. 3, 5, 7

Appendix

This appendix contains the following sections:
• Section A: Ablation study of λmix and λcrl for the RegMixup [59] loss and the Correctness Ranking Loss (CRL) [54].
• Section B: Ablation study of τ for the Cosine Similarity Classifier (CSC) [23, 33].
• Section C: Comparison of the performance of different uncertainty estimation methods on CIFAR10-LT and CIFAR100-LT [40] with imbalance factor 10.
• Section D: More details about the definitions of the Area Under the Receiver Operating Characteristic Curve (AUROC) [13] and the False Positive Rate at 95% True Positive Rate (FPR95) mentioned in Section 4.1 (cf. line 388) of our paper.
• Section E: More results of failure prediction under distribution shift.
• Section F: Ablation study of different re-weighting maps.

A. Impact of different λcrl and λmix in the RegMixup [59] loss and Correctness Ranking Loss (CRL) [54]

In this section, we present the results of varying the parameters λcrl and λmix in the loss function of SURE. The experimental results, obtained using a ResNet18 [28] backbone and summarized in Table 6, indicate that different datasets require different optimal weights. Notably, all experiments across the various backbones in our paper consistently used the same values of λcrl and λmix. We determined the optimal settings to be 0.5 for both λcrl and λmix on CIFAR10 [40], 1 on CIFAR100 [40], and 2 on Tiny-ImageNet [41]. Specifically, when fine-tuning DeiT [70], we set λcrl to 0 and λmix to 0.2 across the three datasets. In our downstream tasks, we set λcrl to 0 and λmix to 1 when addressing long-tailed distribution data, and λcrl to 0.2 and λmix to 1 when learning with noisy labels.

B. Impact of different τ in the Cosine Similarity Classifier (CSC) [23, 33]

In the same vein as the previous ablation study for λcrl and λmix, we also conducted an analysis of the cosine similarity classifier temperature τ within the SURE framework.
This study is detailed in Table 7.

Ratios            CIFAR10 [40]              CIFAR100 [40]             Tiny-ImageNet [41]
                  Acc.↑       AURC↓         Acc.↑       AURC↓         Acc.↑       AURC↓
Baseline (MSP)    95.41±0.15  4.89±0.96     74.91±0.25  74.87±0.24    63.27±0.04  134.87±1.14
CRL weight λcrl
0.1               95.47±0.19  4.60±0.26     75.47±0.46  75.02±2.99    63.32±0.23  135.62±2.56
0.2               95.33±0.26  4.13±0.64     76.04±0.78  73.03±2.04    63.44±0.16  131.62±1.37
0.5               95.33±0.14  3.98±0.20     75.49±0.39  71.84±1.49    64.86±0.02  124.63±0.49
1                 95.13±0.16  4.67±0.40     76.10±0.43  69.05±2.48    65.29±0.14  117.33±1.08
2                 93.99±0.08  6.71±0.28     75.30±0.36  72.40±1.48    65.59±0.18  116.61±0.47
5                 91.58±0.18  13.29±0.33    71.98±0.55  91.42±2.15    62.66±0.17  136.03±0.94
RegMixup regularization weight λmix
0.1               95.76±0.08  5.81±0.98     77.59±0.67  66.49±2.09    65.42±0.40  123.37±1.00
0.2               95.85±0.11  4.74±0.41     77.35±0.39  66.59±0.77    65.59±0.20  122.26±0.67
0.5               96.23±0.10  4.68±0.47     77.21±0.52  66.32±1.96    66.26±0.21  116.50±2.31
1                 95.96±0.29  7.04±0.92     77.64±0.85  63.88±5.22    66.00±0.22  117.79±1.49
2                 96.03±0.07  7.03±0.45     77.13±0.31  66.56±0.43    66.26±0.12  113.40±1.31
5                 95.83±0.23  6.17±1.74     77.52±0.95  63.40±6.22    65.40±2.06  119.34±12.49

Table 6. Ablation study of the hyper-parameters λcrl and λmix in the loss function of SURE. Experiments are implemented on the CIFAR10, CIFAR100 [40], and Tiny-ImageNet [41] datasets.

Ratios            CIFAR10 [40]              CIFAR100 [40]             Tiny-ImageNet [41]
                  Acc.↑       AURC↓         Acc.↑       AURC↓         Acc.↑       AURC↓
Baseline (MSP)    95.41±0.15  4.89±0.96     74.91±0.25  74.87±0.24    63.27±0.04  134.87±1.14
Cosine similarity classifier temperature τ
4                 96.29±0.01  2.44±0.04     79.73±0.22  53.71±0.16    64.86±0.14  128.28±1.76
8                 96.65±0.07  2.13±0.03     80.37±0.07  48.20±0.73    68.26±0.05  99.76±0.59
16                96.17±0.10  2.52±0.07     79.90±0.35  50.28±1.29    69.03±0.05  94.63±0.74
32                96.20±0.10  2.51±0.06     79.07±0.32  53.14±1.82    67.44±0.29  103.51±1.89

Table 7. Ablation study of the hyper-parameter τ in the Cosine Similarity Classifier (CSC) of SURE. Experiments are implemented on the CIFAR10, CIFAR100 [40], and Tiny-ImageNet [41] datasets.
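The cosine similarity classifier ablated in Table 7 replaces dot-product logits with temperature-scaled cosine similarities between L2-normalized features and class weight vectors. A minimal NumPy sketch follows; the function name and tensor shapes are our illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def cosine_classifier_logits(features, weights, tau=8.0):
    """Cosine Similarity Classifier (CSC): logits are temperature-scaled
    cosine similarities between L2-normalized features and class weights.

    features: (batch, dim); weights: (num_classes, dim).
    tau is the temperature ablated in Table 7 (tau = 8 performs best
    on CIFAR, tau = 16 on Tiny-ImageNet).
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return tau * f @ w.T

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 16))   # a batch of 4 feature vectors
cls_w = rng.normal(size=(10, 16))  # 10 class prototypes
logits = cosine_classifier_logits(feats, cls_w, tau=8.0)
print(logits.shape)  # (4, 10)
```

Because cosine similarity is bounded in [-1, 1], every logit is bounded by tau, which caps the softmax confidence and motivates tuning the temperature.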
For CIFAR10 [40] and CIFAR100 [40], the best-performing temperature was τ = 8, while for Tiny-ImageNet [41] a higher temperature of τ = 16 yielded superior results. Specifically, when fine-tuning DeiT [70], we set the temperature to τ = 16 on all three datasets. Note that across all our downstream tasks, we consistently applied a temperature of τ = 8.

C. More results of failure prediction on CIFAR10-LT and CIFAR100-LT [40]

We evaluate the performance of failure prediction under imbalanced data distributions. The Acc. and AURC are provided in Table 8 for imbalance factor IF = 10. We find that even under imbalanced data distributions, our SURE still significantly outperforms the other failure prediction approaches across different datasets and backbones, demonstrating its robustness under more challenging settings.

D. Definition of AUROC [13] and FPR95

AUROC. The area under the receiver operating characteristic curve (AUROC) measures the area under the curve obtained by plotting the true positive (TP) rate against the false positive (FP) rate.
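Treating correctly classified samples as positives and the confidence estimate as the score, both metrics can be computed directly. This is an illustrative sketch (AUROC via the rank statistic, FPR95 via a percentile threshold), not the authors' evaluation code:

```python
import numpy as np

def auroc(confidences, correct):
    """AUROC via the Mann-Whitney statistic: the probability that a
    correct prediction receives a higher confidence than an error."""
    conf = np.asarray(confidences, dtype=float)
    mask = np.asarray(correct, dtype=bool)
    pos, neg = conf[mask], conf[~mask]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

def fpr_at_95_tpr(confidences, correct):
    """FPR at the confidence threshold where 95% of correct
    predictions are retained (TPR = 95%)."""
    conf = np.asarray(confidences, dtype=float)
    mask = np.asarray(correct, dtype=bool)
    thresh = np.percentile(conf[mask], 5)  # keeps 95% of positives
    return float(np.mean(conf[~mask] >= thresh))

conf = [0.99, 0.90, 0.85, 0.80, 0.30, 0.20]
corr = [1, 1, 1, 1, 0, 0]  # errors are the least confident samples
print(auroc(conf, corr), fpr_at_95_tpr(conf, corr))
```

With perfect separation as above, AUROC is 1.0 and FPR95 is 0.0; overlapping confidence distributions push AUROC toward 0.5 and FPR95 toward 1.0.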
Backbones          Methods          CIFAR10-LT [12] IF=10        CIFAR100-LT [12] IF=10
                                    Acc.↑        AURC↓           Acc.↑        AURC↓
ResNet18 [28]      MSP [31]         88.49±0.18   40.96±3.19      59.39±0.23   196.28±3.57
                   RegMixup [59]    91.28±0.15   17.74±0.99      62.51±1.13   156.56±4.06
                   CRL [54]         88.21±0.14   38.78±2.24      60.33±0.29   181.33±3.63
                   SAM [19]         88.56±0.38   27.44±1.39      60.24±0.44   183.68±3.17
                   SWA [35]         90.37±0.15   20.88±0.90      63.86±0.11   157.43±1.63
                   FMFP [81]        90.46±0.06   18.55±0.35      63.20±0.44   153.88±1.91
                   SURE             92.65±0.11   14.68±0.86      66.83±0.38   122.18±0.93
VGG16-BN [64]      MSP [31]         86.65±0.16   84.26±4.55      57.96±0.28   257.81±1.84
                   RegMixup [59]    89.53±0.30   26.75±0.39      61.75±0.08   200.65±4.04
                   CRL [54]         86.45±0.21   87.05±1.79      57.69±0.25   255.38±5.34
                   SAM [19]         88.24±0.51   40.77±3.57      59.17±0.48   223.72±6.66
                   SWA [35]         89.23±0.05   25.02±0.66      60.95±0.51   188.60±5.36
                   FMFP [81]        89.23±0.22   21.55±0.34      61.12±0.22   179.68±1.90
                   SURE             90.47±0.23   19.51±0.59      62.31±0.36   158.17±2.43
DenseNetBC [34]    MSP [31]         87.75±0.53   37.94±7.71      58.61±0.03   225.57±2.51
                   RegMixup [59]    91.73±0.16   17.07±0.12      65.14±0.10   131.85±1.81
                   CRL [54]         88.11±0.21   38.65±1.47      60.06±0.15   188.90±3.69
                   SAM [19]         88.79±0.29   27.02±1.23      61.14±0.34   188.08±3.77
                   SWA [35]         90.76±0.40   16.77±1.06      64.52±0.75   149.15±5.80
                   FMFP [81]        90.72±0.49   15.80±1.37      65.62±0.24   136.10±1.03
                   SURE             91.76±0.23   13.72±0.72      65.34±0.08   130.95±2.23
WRNet28 [76]       MSP [31]         89.44±0.10   37.28±1.34      62.46±0.05   185.31±0.83
                   RegMixup [59]    92.44±0.29   14.66±1.96      65.99±0.60   144.91±3.02
                   CRL [54]         89.57±0.28   37.63±2.31      63.22±0.24   159.26±2.60
                   SAM [19]         90.86±0.13   21.11±0.72      65.27±0.13   145.33±2.15
                   SWA [35]         92.17±0.27   12.70±0.83      68.73±0.17   122.27±1.09
                   FMFP [81]        92.04±0.07   11.35±0.17      69.12±0.40   111.44±1.31
                   SURE             93.91±0.01   9.40±0.41       70.92±0.27   102.64±1.85

Table 8. Comparison of the performance of failure prediction on CIFAR10-LT and CIFAR100-LT [12] with imbalance factor 10. We keep 10% of the training data as the validation set to select the best model. The means and standard deviations over three runs are reported. ↓ and ↑ indicate that lower and higher values are better, respectively.
For each experiment, the best result is shown in boldface. AURC [22] values are multiplied by 10^3, and all remaining values are percentages. On datasets with long-tailed distributions, SURE outperforms the other methods in almost all cases.

FPR95. FPR95 is the abbreviation of FPR-at-95%-TPR, which measures the false positive rate FPR = FP/(FP+TN) at the threshold where the true positive rate TPR = TP/(TP+FN) equals 95%, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. It can be interpreted as the probability that an incorrect prediction is misclassified as a correct one when the TPR equals 95%.

E. More results of failure prediction under distribution shift

In this section, we present the detailed performance on each corruption type in Figure 5. We observe that SURE outperforms the other methods on almost all corruption types. This consistent superiority across various corruption types indicates the robustness of SURE.

F. Impact of different re-weighting maps

In this section, we investigate the impact of different re-weighting maps on our uncertainty-aware re-weighting strategy in Table 9. Specifically, we explore four methods: exponential (exp), threshold, power, and linear. Let s_i represent the confidence scores.

| Method | Setting | Acc. |
|---|---|---|
| w/o re-weighting | – | 87.72 |
| exp | t = 0.5 | 89.73 |
| exp | t = 1 | 90.22 |
| exp | t = 2 | 88.96 |
| threshold | α = 0.5 | 89.35 |
| threshold | α = 0.6 | 89.50 |
| threshold | α = 0.7 | 89.01 |
| threshold | α = 0.8 | 89.60 |
| threshold | α = 0.9 | 89.87 |
| power | p = 2 | 89.82 |
| power | p = 3 | 89.44 |
| power | p = 4 | 89.60 |
| power | p = 5 | 89.25 |
| linear | – | 89.60 |

Table 9. Impact of different re-weighting maps. We have investigated the impact of different re-weighting maps on our uncertainty-aware re-weighting strategy on CIFAR10-LT [12] with an Imbalance Factor (IF) of 50. Based on our findings, the 'exp' (exponential) method with t = 1 was selected as the re-weighting map for all our long-tailed classification experiments.
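The failure-prediction metrics used throughout (AUROC, FPR95, and AURC) can be sketched in a few lines of NumPy. This is an illustrative implementation under standard definitions, not the paper's evaluation code; here `scores` are confidence values (e.g. maximum softmax probabilities) and `labels`/`correct` flag whether each prediction was right.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    random positive is scored above a random negative (ties count 0.5)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

def fpr_at_95_tpr(scores, labels):
    """FPR at the highest threshold where TPR first reaches 95%."""
    for t in np.sort(np.unique(scores))[::-1]:
        pred_pos = scores >= t
        tpr = (pred_pos & (labels == 1)).sum() / (labels == 1).sum()
        if tpr >= 0.95:
            return (pred_pos & (labels == 0)).sum() / (labels == 0).sum()
    return 1.0

def aurc(confidences, correct):
    """Area under the risk-coverage curve (lower is better): the error rate
    among retained samples, averaged as coverage grows from the most to the
    least confident prediction. Tables report this value times 1e3."""
    order = np.argsort(-confidences)
    errors = (1 - correct[order]).astype(float)
    risks = np.cumsum(errors) / np.arange(1, errors.size + 1)
    return risks.mean()

# toy sanity check: a ranking that perfectly separates correct from incorrect
scores = np.array([0.9, 0.8, 0.7, 0.2])
labels = np.array([1, 1, 1, 0])
print(auroc(scores, labels))   # → 1.0
print(aurc(scores, labels))    # → 0.0625, i.e. 62.5 after the 1e3 scaling
```

Because AURC integrates risk over all coverage levels, it rewards both raw accuracy and a confidence ordering that pushes errors toward the low-confidence tail, which is why it is the headline metric in Table 8.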
We define these re-weighting methods with tuning parameters t, α, and p as follows:

• Exponential: the weights are defined using the exponential function
  weights = e^(−t × s_i),
  where t is a scaling factor controlling the influence of the confidence scores.

• Threshold:
  weights = 1.0 − s_i if s_i < α, and 0 otherwise.
  Here, α is the threshold value.

• Power: the weights are obtained by raising the term to a power:
  weights = (1.0 − s_i)^p.
  In this case, p is the exponent applied to the term 1.0 − s_i.

• Linear: a linear relationship is used to calculate the weights:
  weights = 1.0 − s_i.
  This method directly subtracts the confidence scores from 1.0.

Based on the best result in Table 9, we choose "exp" (exponential) with t = 1 as the re-weighting map for all our long-tailed classification experiments.

Figure 5. Comparison of the average AUROC [13] (higher is better) and AURC [13] (lower is better) on CIFAR10-C [30], shown per corruption type (brightness, contrast, defocus blur, elastic transform, fog, frost, Gaussian noise, glass blur, impulse noise, JPEG compression, motion blur, pixelate, shot noise, snow, zoom blur) for MSP, RegMixup, CRL, SAM, SWA, FMFP, and SURE. We choose DenseNet [34] as the backbone and CIFAR-10 as the training set. The evaluation results are averaged across images with 5 severity levels under 15 types of corruption.
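The four re-weighting maps above translate directly into NumPy; a small sketch (the function name and signature are ours, not the repository's), where low-confidence samples receive larger weights under every map:

```python
import numpy as np

def reweight(s, method="exp", t=1.0, alpha=0.5, p=2):
    """Uncertainty-aware re-weighting over confidence scores s in [0, 1].

    Mirrors the four maps defined above; 'exp' with t = 1 is the setting
    selected for the long-tailed experiments in Table 9.
    """
    s = np.asarray(s, dtype=float)
    if method == "exp":
        return np.exp(-t * s)                      # weights = e^(-t * s_i)
    if method == "threshold":
        return np.where(s < alpha, 1.0 - s, 0.0)   # zero above threshold α
    if method == "power":
        return (1.0 - s) ** p                      # weights = (1 - s_i)^p
    if method == "linear":
        return 1.0 - s                             # weights = 1 - s_i
    raise ValueError(f"unknown method: {method}")

s = np.array([0.2, 0.5, 0.9])
w = reweight(s, "exp", t=1.0)  # monotonically decreasing in confidence
```

In practice such weights would multiply the per-sample loss, so hard (low-confidence) examples, which dominate the tail classes, contribute more to the gradient.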