Dataset Viewer
Auto-converted to Parquet
dataset (string)
model_name (string)
model_links (list)
paper_title (string)
paper_date (timestamp[ns])
paper_url (string)
code_links (list)
metrics (string)
table_metrics (list)
prompts (string)
paper_text (string)
compute_hours (float64)
num_gpus (int64)
reasoning (string)
trainable_single_gpu_8h (string)
verified (string)
modality (string)
paper_title_drop (string)
paper_date_drop (string)
code_links_drop (string)
num_gpus_drop (int64)
dataset_link (string)
time_and_compute_verification (string)
link_to_colab_notebook (string)
run_possible (string)
notes (string)
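The schema above can be consumed directly with the Hugging Face `datasets` library. A minimal sketch follows; the repository ID is a placeholder for this dataset's actual `<user>/<dataset-name>`, and the field names are taken from the schema listed above:

```python
from datasets import load_dataset
import ast

# The repository ID below is a placeholder; substitute the actual <user>/<dataset-name>
# of this dataset. Field names follow the schema listed above.
ds = load_dataset("<user>/<dataset-name>", split="train")

row = ds[0]
print(row["dataset"], row["model_name"], row["paper_url"])

# `metrics` is stored as a string such as "{'RMSE': '0.898±0.0172'}"; parse it before use.
metrics = ast.literal_eval(row["metrics"])

# The task prompt and the full paper body are plain strings.
prompt, paper_text = row["prompts"], row["paper_text"]
```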
PDBbind
BAPULM
[]
BAPULM: Binding Affinity Prediction using Language Models
2024-11-06T00:00:00
https://arxiv.org/abs/2411.04150v1
[ "https://github.com/radh55sh/BAPULM" ]
{'RMSE': '0.898±0.0172'}
[ "RMSE" ]
Given the following paper and codebase: Paper: BAPULM: Binding Affinity Prediction using Language Models Codebase: https://github.com/radh55sh/BAPULM Improve the BAPULM model on the PDBbind dataset. The result should improve on the following metrics: {'RMSE': '0.898±0.0172'}. You must use only the codebase provided.
BAPULM: Binding Affinity Prediction using Language Models Radheesh Sharma Meda†and Amir Barati Farimani∗,‡,¶,†,§ †Department of Chemical Engineering, Carnegie Mellon University, 15213, USA ‡Department of Mechanical Engineering, Carnegie Mellon University, 15213, USA ¶Department of Biomedical Engineering, Carnegie Mellon University, 15213, USA §Machine Learning Department, Carnegie Mellon University, 15213, USA E-mail: barati@cmu.edu Abstract Identifying drug-target interactions is essential for developing effective therapeutics. Binding affinity quantifies these interactions, and traditional approaches rely on com- putationally intensive 3D structural data. In contrast, language models can efficiently process sequential data, offering an alternative approach to molecular representation. In the current study, we introduce BAPULM, an innovative sequence-based framework that leverages the chemical latent representations of proteins via ProtT5-XL-U50 and ligands through MolFormer, eliminating reliance on complex 3D configurations. Our approach was validated extensively on benchmark datasets, achieving scoring power (R) values of 0.925 ±0.043, 0.914 ±0.004, and 0.8132 ±0.001 on benchmark1k2101, Test2016 290, and CSAR-HiQ 36, respectively. These findings indicate the robust- ness and accuracy of BAPULM across diverse datasets and underscore the potential of sequence-based models in-silico drug discovery, offering a scalable alternative to 3D-centric methods for screening potential ligands. 1arXiv:2411.04150v1 [q-bio.QM] 6 Nov 2024 Introduction Developing novel therapeutics is essential for addressing extant diseases, newly emerging or untreated diseases, and future potential disorders that have yet to be identified.1The recent COVID-19 pandemic has underscored the critical importance of rapid and innovative drug development to combat these unforeseen global challenges.2,3In this pursuit, drugs, typically organic molecules composed of carbon-catenated structures (ligands), are stereoselectively designed to interact with specific amino acid motifs of their target proteins.4,5These inter- actions are often mediated by non-covalent forces such as hydrogen bonds, van der Waals interactions, and electrostatic forces.6Understanding the strength of these protein-ligand interactions, often represented by the equilibrium dissociation constant (K d), is crucial to advance therapeutic development.7Spectroscopic techniques, including FTIR, NMR, UV- visible spectroscopy, and fluorescence, are employed to test potential ligands for specific proteins.8–11These methods capture conformational transitions within the secondary struc- ture through vibrational bands, structural modifications through chemical changes, changes in absorbance due to the electronic environment, and alterations in fluorescence intensity upon protein-ligand binding, respectively.12,13 In addition to these experimental approaches, computational methods such as molecular docking and molecular dynamics (MD) simulations have revolutionized affinity prediction by offering physical interpretability.14,15While MD simulations accurately estimate binding affinities at the expense of higher compute power, molecular docking enables the exploration of large libraries of potential ligands, offering rapid virtual screening capabilities albeit re- duced accuracy. 
Despite their limitations, these techniques laid the foundation for in silico methods in drug discovery, paving the way for the adoption of deep learning models, which have achieved considerably higher predictive accuracy. Alongside molecular docking and simulations, 3D structure-based deep learning models adeptly capture the complex spatial features of protein-ligand interactions; however, they are inherently constrained by the dependence on high-resolution crystallographic data. In 2 contrast, the emergence of large-scale datasets featuring sequential 1D representations of proteins and ligands enables the examination of the sequential molecular latent space for the screening of potential ligands.15–17With the availability of large-scale sequential datasets, researchers have developed advanced models such as transformers to leverage these data to produce more accurate affinity predictions. The transformer architecture inherently relies on the attention mechanism, which excels at comprehending sequential data. Language models leverage this architecture, using unsupervised pretraining to capture nuanced and comprehensive relationships within the data while encoding the sequences.17–19Elnaggar et al. pioneered the development of protein sequence-based language models such as ProtBERT, ProtAlbert, ProtElectra, and ProtT5, trained on expansive datasets UniRef, BFD comprising up to 393 billion amino acids. Interestingly, these models excel at attending to sequences that are spatially proximal, highlighting the importance of nearby amino acids over more distant ones.20Subsequently, ligand-specific encoder models such as ChemBerta and Molformer were engineered to encode the SMILES representation of organic molecules. Building on these advancements, PLAPT successfully integrates BERT-based encoders for protein and ligand sequences to improve affinity predictions.21However, the multimodal framework designed by Xu et al. demonstrates superior performance by incorporating ad- ditional binding pocket information through a residue graph network and employing cross- attention between the sequential and structural modalities. Yet, there remains an essential requirement for configurations that can achieve better predictive capabilities without the complications associated with the extensive data and computational demands of the MFE framework. The current study aims to address this research gap by exploring the syner- gistic utilization of pre-trained language models as a compelling alternative in the realm of protein-ligand binding affinity prediction. We present binding affinity prediction using lan- guage models (BAPULM), a framework that capitalizes on the integrated strengths of the ProtT5-XL-U5018and Molformer22encoder models to effectively estimate binding affinity with a predictive feedforward network. By utilizing these unsupervised pre-trained language 3 models, BAPULM achieves high accuracy in binding affinity prediction while maintaining computational efficiency. BAPULM captures stereochemical molecular space and efficiently screens potential ligands, achieving state-of-the-art performance in predicting the binding affinity. Methods BAPULM was developed to utilize the functionality of encoder-based language models, which require simple 1D string expressions as input, such as protein amino acid sequences and ligand SMILES representation, to predict affinity as shown in Figure 1. 
Figure 1: The overview of the BAPULM framework, which integrates the ProtT5-XL-U50 for protein sequnces and Molformer for ligand SMILES for feature extraction module while encoding the sequnces. These embeddings are aligned through projection layers and fed into a feed-forward predictive network to predict binding affinity. Datasets The dataset employed to train BAPULM is the Binding Affinity Dataset23from the Hugging Face platform, which includes the curated pair of 1.9M unique set of protein-ligand complexes with the experimentally determined binding affinity pKd. BAPULM operates on the subset 4 of the first 100k aminoacid sequences, canonical smiles, and binding affinity (pKd). Figure 2. illustrates the distribution of (a) protein sequence length with only a tiny portion ( 0.2%) of the sequences with a length greater than 3200 and (b) ligand SMILES with a small fraction ( 0.3%) greater than 278. A dataset of protein-ligand feature embeddings, pKd, and normalized binding affinity was generated before model training using the encoder models described in Section 2.3. A split ra- tio of 90:10 was used to build training and validation sets, similar to the percentage employed in the previous work.21Furthermore, the following benchmark datasets were acquired from the various works of literature: Benchmark1k2101,21Test2016 290,24and CSAR-HiQ 3625 to evaluate BAPULM. Every benchmark dataset was meticulously examined to ensure no overlapping with the training dataset.21 Figure 2: Distribution of (a) Protein sequence lengths range from 13 to 7073 amino acids, showing a skewed distribution with most sequences concentrated under 1000 amino acids. (b) Ligand SMILES string lengths range from 4 to 547 characters, also displaying a skewed distribution with the majority of strings being shorter than 100 characters. PreProcessing Macromolecules built from the same set of 20 amino acid repeating units to form unique sequences are proteins. As a part of preprocessing, the protein sequences were separated by spaces into single characters (A-Z) describing the monomeric residuals and to standard- ize the input sequences, the non-essential amino acids Asparagine (B), Selenocysteine (U), Glutamic acid (Z), and Pyrrolysine (O) were replaced by employing the substitution code 5 ’X’.18,21The canonical SMILES captures the structural stereochemistry of the organic mi- cro/macro molecules, ensuring a unique expression for every individual molecule, enabling a standardized representation. Model Architecture BAPULM’s architecture consists of two robust components that are synergistically utilized to predict pKd. Primarily, the feature encoding module harnessed the potency of ProtT5-XL- U50 for protein sequence and Molformer for ligand SMILES to generate consolidated vectors in latent space that constitute all the characteristic information about the proteins and ligands known as feature embeddings, which were subsequently utilized in the forthcoming module. 
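The preprocessing described in the PreProcessing subsection above (replacing the rare or ambiguous amino acid codes B, U, Z, and O with 'X' and space-separating the residues, while ligands are kept as canonical SMILES) can be reproduced in a few lines. A minimal sketch in plain Python; the function names are illustrative and not taken from the BAPULM repository:

```python
import re

def preprocess_protein(seq: str) -> str:
    """Map rare/ambiguous residue codes (B, U, Z, O) to 'X' and insert spaces
    between residues, the input format expected by ProtT5-style tokenizers."""
    seq = re.sub(r"[BUZO]", "X", seq.upper())
    return " ".join(seq)

def preprocess_ligand(smiles: str) -> str:
    """Canonical SMILES are used as-is; canonicalization is assumed upstream."""
    return smiles.strip()

print(preprocess_protein("MKTAYIAKQRB"))  # -> "M K T A Y I A K Q R X"
```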
Protein-ligand feature embedding The BAPULM model integrates the ProtT5-XL-U50 model, which is founded on the T5 model,26and differentiates itself from BERT by employing a unified transformer architec- ture (both encoder and decoder) while capturing the biophysical features of amino acids and the language of life.18,26The preprocessed sequences are transformed into tokens following a comprehensive tokenization procedure, as mentioned in ProtTrans.18This method involves padding and truncating the sequence to a maximum length of 3200, also a norm followed by previous work,21generating a list of token IDs and their attending attention mask. Subse- quently, the tokens were passed to the encoder, and a mean pooling operation was performed on the last layer to generate fixed 1024-dimensional feature embeddings, enabling a compre- hensive understanding of the protein sequences with variable lengths. BAPULM further leverages Molformer, a state-of-the-art transformer-based encoder model, which effectively captures the spatial connection between the atoms in the SMILES sequence.22The canonical SMILES of ligands were tokenized while processed through padding and truncating to an utmost length of 278, including micro and macromolecule ligands. The mean pooler output 6 from the encoder was a 768-dimensional embedding vector containing the stereochemical features of the ligand molecule. A detailed breakdown of the lengths of the protein-ligand sequences is available in Supporting Information Table 3 and 4. Therefore, the protein sequence was encoded into a 1024-dimensional embedding space while the ligand smiles to a 768-dimensional vector. To hereafter utilize these in the pre- diction module, both sets of feature vectors were then separately projected onto a lower- dimensional (512) latent space through a linear transformation employing ReLU (rectified linear unit) activation. These consolidated 512-dimensional feature vectors were concate- nated to form a 1024-dimensional input vector to the feed-forward network. Feed-Forward Predictive Network The concatenated 1024-dimensional combined feature vector was passed through four ReLU- activated linear layers, as shown in Figure 1. Before passing through the linear layers, the mini-batches of combined feature embeddings underwent batch normalization to improve training stability by reducing the internal covariance shift.27Dropout was also applied to avert overfitting and create a robust model. The last layer output of the model yielded a normalized scalar value of the binding affinity(pKd). Training and Evaluation Metrics The previously generated feature dataset was utilized to train BAPULM, employing Mean Squared Error(MSE) as a loss function, which estimates the average squared difference be- tween the actual pKdand predicted affinity as shown below: MSE =1 nnX i=1 pKd,true ,i−pKd,pred ,i2(1) This loss function was optimized utilizing the Adam optimizer to update the model’s weights. The training process was executed on an Nvidia RTX 2080 Ti with 11GB of memory 7 and completed in approximately four minutes. Additionally, the training hyperparameters are provided in the Supporting Information Table 5. 
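Based on the architecture just described (1024-dimensional ProtT5-XL-U50 embeddings and 768-dimensional MolFormer embeddings, each projected to 512 dimensions with a ReLU-activated linear layer, then concatenated, batch-normalized, and passed through ReLU-activated linear layers with dropout to a scalar pKd), here is a minimal PyTorch sketch of the projection and predictive head. The hidden widths of the feed-forward layers and the dropout rate are not stated in the text, so those values are illustrative assumptions rather than the authors' exact configuration:

```python
import torch
import torch.nn as nn

class BAPULMHead(nn.Module):
    """Projection + feed-forward predictive network sketched from the paper.
    Only the 1024/768 -> 512 -> concat(1024) structure is stated explicitly;
    hidden sizes and dropout below are assumptions."""
    def __init__(self, prot_dim=1024, lig_dim=768, proj_dim=512, dropout=0.1):
        super().__init__()
        self.prot_proj = nn.Sequential(nn.Linear(prot_dim, proj_dim), nn.ReLU())
        self.lig_proj = nn.Sequential(nn.Linear(lig_dim, proj_dim), nn.ReLU())
        self.norm = nn.BatchNorm1d(2 * proj_dim)          # batch norm on combined features
        self.mlp = nn.Sequential(                          # ReLU-activated linear layers
            nn.Linear(2 * proj_dim, 512), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                              # normalized scalar pKd
        )

    def forward(self, prot_emb, lig_emb):
        f = torch.cat([self.prot_proj(prot_emb), self.lig_proj(lig_emb)], dim=-1)
        return self.mlp(self.norm(f)).squeeze(-1)

# Usage with precomputed mean-pooled encoder embeddings (batch of 4):
model = BAPULMHead()
pkd = model(torch.randn(4, 1024), torch.randn(4, 768))
print(pkd.shape)  # torch.Size([4])
```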
To estimate the efficacy of BAPULM in predicting the negative log of the binding affin- ity dissociation constant (pKd) between protein-ligand complexes, we used the following evaluation metrics: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Pearson correlation coefficient (R) as shown in the equations 2, 3, 4, where pKdtrue, pKdpred correspond to the experimental and predicted affinities. MAE =1 nnX i=1 pKd,true ,i−pKd,pred ,i (2) RMSE =vuut1 nnX i=1 pKd,true ,i−pKd,pred ,i2(3) R=Pn i=1 pKd,true ,i−µpKd,true pKd,pred ,i−µpKd,pred rPn i=1 pKd,true ,i−µpKd,true2Pn i=1 pKd,pred ,i−µpKd,pred2(4) These metrics are widely adopted in regression studies and were established in published literature.12,15,24,28In particular, the person correlation coefficient (R) was considered as one of the scoring power metrics in evaluating the performance.15Again, both RMSE and MAE were employed to provide a comprehensive understanding of performance, as RMSE is optimal for errors with a normal distribution. In contrast, MAE is better suited for errors with a Laplacian distribution.29Since these metrics evaluate predicted and experimental pKdvalues, the model’s output was denormalized onto the same scale as the experimental affinity to assess the performance. 8 Results and Discussion BAPULM’s unique ability to predict binding affinity originates from the inherent nature of its architecture, which effectively captures the intricate features of protein sequences and ligand molecular structures. As shown in Table 1, BAPULM constantly displayed an improvement in each metric compared to PLAPT,21demonstrating its exceptional performance. Notably, BAPULM achieved a higher person correlation coefficient (R) with an increase of 9.6% (0.970) and 40.7% (0.960) on training and validation datasets, respectively, indicating a robust correlation between predicted and experimental pKdvalues. Also, the consolidated clustering of points along the identity line in the parity plots, as displayed in Figure 3(a,b), corroborates with the higher correlation coefficient. Table 1: Evaluation Metrics for BAPULM and PLAPT on Training and Validation Datasets Dataset Model R ↑MSE ↓RMSE ↓MAE ↓ TrainBAPULM (this study) 0.970 0.157 0.397 0.245 PLAPT 0.886 0.586 0.765 0.756 ValidationBAPULM (this study) 0.960 0.177 0.421 0.248 PLAPT 0.683 1.466 1.211 0.949 Furthermore, BAPULM exhibited remarkably lower error metrics, with a drop of 73.2%, 48.1%, and 67.6% in MSE (0.157), RMSE (0.397), and MAE (0.245), respectively, on the training data Similarly, on the validation data, the model showed a decline of 87.9% in MSE (0.177), 65.3% in RMSE (0.421), and 73.9% in MAE (0.248), underscoring its predic- tive capability. This significant improvement across both training and validation datasets demonstrated the ability of the model to comprehensively capture the underlying interactions between the proteins and ligands, facilitating accurate predictions. Moreover, BAPULM’s predictive ability was further validated on three distinct bench- mark datasets, where it was compared to current state-of-the-art models, as shown in Table 2. The evaluation metrics in Table 2 are computed as the mean and standard deviation, estimated using different seed values (2102, 256, 42), to accurately reflect the model’s per- 9 Figure 3: Evaluation of BAPULM on multiple datasets where the scatter plots depict the correlation between predicted and experimental pKdvalues. 
The datasets represented in- clude the (a) Training ,(b) Validation (c) Benchmark1k2101,(d) Test2016 290, and (e)CSAR- HiQ 36. formance during inference on test datasets with the trained model weights. Accordingly, on the benchmark2k1k dataset, BAPULM demonstrates improved evaluated values compared to PLAPT, with an increase in the R-value of 4.76% and a drop in RMSE, MAE by 19.1% and 37.2%. Table 2: Model Performance on Various Benchmark Datasets Dataset Model Data Representation Feature extraction R ↑ RMSE ↓ MAE ↓ benchmark1k2101BAPULM (this study) seq + smiles canonical ProtT5-XL-U50 + Molformer 0.925 ±0.043 0.745 ±0.236 0.432 ±0.013 PLAPT seq + smiles canonical ProtBert + ChemBerta 0.883 0.922 0.688 Test2016 290BAPULM (this study) seq + smiles canonical ProtT5-XL-U50 + Molformer 0.914 ±0.004 0.898 ±0.0172 0.645 ±0.0166 MFE Protein seq, 3D structure + ligand graph Multimodal between seq, structure + ligand graph 0.851 1.151 0.882 PLAPT seq + smiles canonical ProtBert + ChemBerta 0.845 1.196 0.906 CAPLA protein seq, ligand smiles + binding pocket 1D convolution block + Cross attention (pocket/ligand) 0.843 1.200 0.966 DeepDTAF Protein seq, ligand smiles + binding pocket 1D Conv, 1D Conv + 3 Conv layers for binding pocket 0.789 1.355 1.073 OnionNet Protein-ligand 3D grid 3D Conv + Neural Attention 0.816 1.278 0.984 CSAR-HiQ 36BAPULM (this study) seq + smiles canonical ProtT5-XL-U50 + Molformer 0.8132 ±0.012 1.328 ±0.020 1.029 ±0.022 affinity pred - - 0.774 1.484 1.176 PLAPT seq + smiles canonical ProtBert + ChemBerta 0.731 1.349 1.157 CAPLA seq, smiles + binding pocket 1D convolution block + Cross attention (pocket/ligand) 0.704 1.454 1.160 DeepDTAF seq + smiles 1D CNN on seq and smiles 0.543 2.765 2.318 Xu et al.28developed a multimodal feature extraction (MFE) framework that employed the following feature extraction module involving 1D protein sequence, binding pocket surface through point cloud, 3D structural features, and the ligand molecular graph. It slightly out- 10 performed PLAPT on the Test 2016 dataset by 0.6% improvement in correlation coefficient (R) while reducing the RMSE and MAE by 3.8% and 2.6%, becoming the current state- of-the-art affinity prediction model. However, BAPULM leveraging ProtT5-XL-U50, Mol- former substantially outperformed MFE’s performance by 7.4%, 21.8%,26.7% in R (0.914) , RMSE(0.898) and MAE (0.642), respectively. Additionally, BAPULM surpassed both se- quence and structure-based models on every metric. It outperformed CAPLA24by 8.4% in R, 25.2% in RMSE, and 32.2% in MAE. Against DeepDTAF,7BAPULM showed a higher linear correlation value with an increase of 15.9%, reduced RMSE by 33.7%, and decreased MAE by 39.9%. Furthermore, compared to OnionNet,15it achieved a 12% higher R-value, a lower RMSE, and an MAE of 29.7% and 34.5%, respectively. This implies that BAPULM was successfully able to capture the linear relationship between pKd(experimental) and pKd (predicted), alongside being more accurate by achieving lower RMSE and MAE values. Finally, on the CSAR-HiQ 36 dataset, BAPULM yet again proved its exceptional predic- tive ability. 
Unlike PLAPT, BAPULM was able to capture the identity relationship between predicted and actual binding affinity, besides being accurate.21BAPULM achieved a no- table scoring power value of 0.813, denoting an 11.2% improvement over PLAPT and 5.1 % against affinity pred.2Similarly, the percentage improvement on the other two metrics was greater (MAE: 12.5%, RMSE: 10.5%) than PLAPT’s advancement over affinity pred (MAE: 1.62%, RMSE:9.10%). Additionally, BAPULM outperforms other sequence-based models on R, RMSE, and MAE against CAPLA by 15.25%,8.67%,11.29%, and over DeepDTAF by 48.7%, 51.96%, 55.59%, respectively. Furthermore, to gain insights into BAPULM’s excellent correlation capabilities, features from the penultimate layer were extracted and utilized to generate t-distributed Stochas- tic Neighbor Embedding (t-SNE) visualizations. t-SNE is a statistical method that maps high-dimensional data to a lower-dimensional space, conserving the local structure and en- abling the visualization in a lower dimension.30To understand the influence of encoder-based language models in predicting binding affinity, we employed the combination of transformer 11 Figure 4: Embedding visualizations of protein-ligand binding affinity mapped onto features extracted from (a) BAPULM, (b) ProtBert & Molformer, and (c) ProtBert & ChemBerta, illustrating the latent space representations of each configuration on train dataset. encoders, such as protBERT, ChemBERTa, and Molformer, within the same model architec- ture, assessing their ability to capture the binding affinity between protein-ligand complexes effectively. BAPULM demonstrates a clear and distinct gradient transition in the t-SNE visu- alization, indicating a strong correlation between the latent representations of protein-ligand complexes and their binding affinities. In contrast, the distribution for the ProtBERT and MolFormer models is more dispersed, with less noticeable separation of embeddings based on pKdvalues. Similarly, the t-SNE visualization for ProtBERT and ChemBERTa shows a partial gradient transition but with some overlap between high-affinity and low-affinity complexes. Although both ProtBERT & MolFormer and ProtBERT & ChemBERTa exhibit some clustering of complexes according to pKd, the clustering is much more prominent in BAPULM. This is attributed to using rotary positional embeddings in Molformer during pretraining, enabling it to learn spatial relationships within the ligand. The synergistic com- bination of Molformer with ProtT5-XL-U50 in BAPULM effectively captured the binding affinity correlation, resulting in a clear and distinct separation of protein-ligand complexes in the t-SNE visualization. This separation is characterized by a smooth color gradient, indi- cating BAPULM’s ability to distinguish between complexes with varying binding affinities. 12 Conclusion This study introduces a sequence-based machine-learning model, BAPULM, that lever- ages transformer-based language models ProtT5-XL-U-50 and Molformer to predict protein- ligand binding affinity. BAPULM effectively captures the latent features of protein-ligand complexes without relying on structural data, enabling a robust representation by harness- ing the inherent information in biochemical sequences. This approach significantly enhances predictive accuracy while reducing computational complexity. 
The integration of Molformer with rotary positional encoding enhanced BAPULM’s ability to comprehend the stereo- chemistry of ligands without requiring detailed 3D configurations to demonstrate superior performance across diverse benchmarks. Our t-SNE visualizations reveal that synergistic integration of these encoders displayed a distinct clustering of complexes according to bind- ing affinity, substantiating BAPULM’s predictive capability. This framework presents an efficient alternative to conventional structure-based models, demonstrating the potential of using sequence-based models for rapid virtual screening. Data and Software Availability The necessary code and data used in this study can be accessed here: https://github. com/radh55sh/BAPULM.git Acknowledgement We acknowledge the contributions of various individuals and organizations that have made this study possible. This includes the providers of the datasets used in our research, the developers of PyTorch, and the teams behind ProtT5-XL-U50 and Molformer. 13 References (1) Mollaei, P.; Guntuboina, C.; Sadasivam, D.; Farimani, A. B. IDP-Bert: Predicting Properties of Intrinsically Disordered Proteins (IDP) Using Large Language Models. 2024 , (2) Blanchard, A. E.; Gounley, J.; Bhowmik, D.; Chandra Shekar, M.; Lyngaas, I.; Gao, S.; Yin, J.; Tsaris, A.; Wang, F.; Glaser, J. Language models for the prediction of SARS- CoV-2 inhibitors. International Journal of High Performance Computing Applications 2022 ,36, 587–602. (3) Patil, S.; Mollaei, P.; farimani, A. B. Forecasting COVID-19 New Cases Using Trans- former Deep Learning Model. medRxiv 2023 , 2023.11.02.23297976. (4) Mollaei, P.; Barati Farimani, A. Unveiling Switching Function of Amino Acids in Pro- teins Using a Machine Learning Approach. Journal of Chemical Theory and Computa- tion2023 ,19, 8472–8480. (5) Du, X.; Li, Y.; Xia, Y. L.; Ai, S. M.; Liang, J.; Sang, P.; Ji, X. L.; Liu, S. Q. Insights into Protein–Ligand Interactions: Mechanisms, Models, and Methods. International Journal of Molecular Sciences 2016 ,17. (6) Adhav, V. A.; Saikrishnan, K. The Realm of Unconventional Noncovalent Interactions in Proteins: Their Significance in Structure and Function. 2024 ,14, 22. (7) Wang, K.; Zhou, R.; Li, Y.; Li, M. DeepDTAF: a deep learning method to predict protein-ligand binding affinity. Briefings in Bioinformatics 22 , 1–15. (8) K¨ otting, C.; Gerwert, K. Monitoring protein-ligand interactions by time-resolved FTIR difference spectroscopy. Methods in Molecular Biology 2013 ,1008, 299–323. (9) Dalvit, C.; Gm¨ ur, I.; R¨ oßler, P.; Gossert, A. D. Affinity measurement of strong ligands 14 with NMR spectroscopy: Limitations and ways to overcome them. Progress in Nuclear Magnetic Resonance Spectroscopy 2023 ,138-139 , 52–69. (10) Nienhaus, K.; Nienhaus, G. U. Probing Heme Protein-Ligand Interactions by UV/Visible Absorption Spectroscopy. Methods in Molecular Biology 2005 ,305, 215– 241. (11) Rossi, A. M.; Taylor, C. W. Analysis of protein-ligand interactions by fluorescence polarization. Nature Protocols 2011 6:3 2011 ,6, 365–387. (12) Zhang, X.; Gu, Y.; Xu, G.; Li, Y.; Wang, J.; Yang, Z. HaPPy: Harnessing the Wisdom from Multi-Perspective Graphs for Protein-Ligand Binding Affinity Prediction (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence 2023 ,37, 16384–16385. (13) Qi, C.; Mankinen, O.; Telkki, V. V.; Hilty, C. Measuring Protein-Ligand Binding by Hyperpolarized Ultrafast NMR. Journal of the American Chemical Society 2024 ,146, 5063–5066. 
(14) Zhao, J.; Cao, Y.; Zhang, L. Exploring the computational methods for protein-ligand binding site prediction. Computational and Structural Biotechnology Journal 2020 ,18, 417. (15) Zheng, L.; Fan, J.; Mu, Y. OnionNet: A Multiple-Layer Intermolecular-Contact-Based Convolutional Neural Network for Protein-Ligand Binding Affinity Prediction. ACS Omega 2019 ,4, 15956–15965. (16) Wang, H.; Liu, H.; Ning, S.; Zeng, C.; Zhao, Y. DLSSAffinity: protein–ligand binding affinity prediction via a deep learning model. Physical Chemistry Chemical Physics 2022 ,24, 10124–10133. 15 (17) Guntuboina, C.; Das, A.; Mollaei, P.; Kim, S.; Barati Farimani, A. PeptideBERT: A Language Model Based on Transformers for Peptide Property Prediction. Journal of Physical Chemistry Letters 2023 ,14, 10427–10434. (18) Elnaggar, A.; Heinzinger, M.; Dallago, C.; Rehawi, G.; Wang, Y.; Jones, L.; Gibbs, T.; Feher, T.; Angerer, C.; Steinegger, M.; Bhowmik, D.; Rost, B. ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Learning. IEEE TRANS PATTERN ANALYSIS & MACHINE INTELLIGENCE 2021 ,14. (19) Kuan, D.; Farimani, A. B. AbGPT: De Novo Antibody Design via Generative Language Modeling. 2024 , (20) Vig, J.; Madani, A.; Varshney, L. R.; Xiong, C.; Socher, R.; Rajani, N. F. BERTOL- OGY MEETS BIOLOGY: INTERPRETING ATTENTION IN PROTEIN LAN- GUAGE MODELS. (21) Rose, T.; Anand, N.; Shen, T. PLAPT: PROTEIN-LIGAND BINDING AFFINITY PREDICTION USING PRE-TRAINED TRANSFORMERS. (22) Ross, J.; Belgodere, B.; Chenthamarakshan, V.; Padhi, I.; Mroueh, Y.; Das, P. Large- Scale Chemical Language Representations Capture Molecular Structure and Properties. Nature Machine Intelligence 2021 ,4, 1256–1264. (23) Glaser, J. Binding Affinity Dataset. https://huggingface.co/datasets/jglaser/binding affinity, 2022. (24) Jin, Z.; Wu, T.; Chen, T.; Pan, D.; Wang, X.; Xie, J.; Quan, L.; Lyu, Q. CAPLA: improved prediction of protein-ligand binding affinity by a deep learning approach based on a cross-attention mechanism. Bioinformatics (Oxford, England) 2023 ,39. (25) Dunbar, J. B.; Smith, R. D.; Yang, C. Y.; Ung, P. M. U.; Lexa, K. W.; Khazanov, N. A.; Stuckey, J. A.; Wang, S.; Carlson, H. A. CSAR benchmark exercise of 2010: selection 16 of the protein-ligand complexes. Journal of chemical information and modeling 2011 , 51, 2036–2046. (26) Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P. J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Trans- former. Journal of Machine Learning Research 2020 ,21, 1–67. (27) Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015 , (28) Xu, S.; Shen, L.; Zhang, M.; Jiang, C.; Zhang, X.; Xu, Y.; Liu, J.; Liu, X. Surface-based multimodal protein–ligand binding affinity prediction. Bioinformatics 2024 ,40. (29) Hodson, T. O. Root-mean-square error (RMSE) or mean absolute error (MAE): when to use them or not. Geoscientific Model Development 2022 ,15, 5481–5487. (30) Badrinarayanan, S.; Guntuboina, C.; Mollaei, P.; Farimani, A. B. Multi-Peptide: Mul- timodality Leveraged Language-Graph Learning of Peptide Properties. 2024 , 17 Supporting Information Sequence Distributions Table 3, 4 present the detailed length distributions of protein sequences and ligand molecules in our dataset. 
Table 3: Distribution of Protein Sequences by Length Range
1–1000: 88,485
1001–2000: 10,598
2001–3200: 706
3201–4000: 123
4001–7073: 88

Table 4: Distribution of Ligand Molecules by Length Range
1–100: 94,831
101–200: 4,085
201–278: 753
279–478: 330
479–547: 1

Hyperparameters
Table 5 summarizes the key hyperparameters, detailing the essential configurations used for training the model.
Table 5: BAPULM model hyperparameters
Seed: 2102
Loss Function: MSE
Optimizer: Adam
Learning Rate: 1e-3
Batch size: 256
Epochs: 60
Scheduler: ReduceLROnPlateau
Scheduler Patience: 5
Scheduler Factor: 0.2
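The hyperparameters in Table 5 map directly onto a standard PyTorch training loop. A minimal sketch, assuming precomputed embedding tensors and a small stand-in regression head; the real data pipeline and model live in the authors' repository:

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(2102)  # seed from Table 5

class Head(nn.Module):
    """Stand-in (protein, ligand) -> pKd regressor; the actual BAPULM head is richer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1024 + 768, 512), nn.ReLU(), nn.Linear(512, 1))
    def forward(self, p, l):
        return self.net(torch.cat([p, l], dim=-1)).squeeze(-1)

# Placeholder tensors standing in for precomputed ProtT5/MolFormer embeddings.
prot, lig, pkd = torch.randn(1024, 1024), torch.randn(1024, 768), torch.randn(1024)
loader = DataLoader(TensorDataset(prot, lig, pkd), batch_size=256, shuffle=True)

model = Head()
criterion = nn.MSELoss()                             # loss from Table 5
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # optimizer and LR from Table 5
scheduler = ReduceLROnPlateau(optimizer, mode="min", patience=5, factor=0.2)

for epoch in range(60):                              # 60 epochs per Table 5
    total = 0.0
    for p, l, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(p, l), y)
        loss.backward()
        optimizer.step()
        total += loss.item()
    # The paper does not state which loss drives the plateau scheduler; training loss is used here.
    scheduler.step(total / len(loader))
```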
1
1
The model uses ProtT5-XL-U50 and MolFormer, which are large transformer-based models, but the paper reports that training on an Nvidia RTX 2080 Ti took approximately 4 minutes. Given that training runs over a reduced dataset of 100k sequences and the trained architecture has a moderate number of parameters, a single GPU can efficiently handle the workload and complete training in well under 8 hours. The choice of MSE as the loss function indicates a regression approach, which is generally fast to train. Considering everything, a conservative training time of about 1 hour is estimated to be feasible on a single GPU.
yes
Yes
Bioinformatics
BAPULM: Binding Affinity Prediction using Language Models
2024-11-06 00:00:00
https://github.com/radh55sh/BAPULM
1
https://huggingface.co/datasets/radh25sh/BAPULM/resolve/main/prottrans_molformer_tensor_dataset100k.json?download=true
16 s/epoch * 60 epochs = 960 s ≈ 16 minutes
https://colab.research.google.com/drive/1--rNlCN01wUgN_6cTTuiVcusqSP9vGlG?usp=sharing
Yes
No PDBbind dataset is provided; the setup specifies using the precomputed ProtTrans/MolFormer embedding dataset.
Digital twin-supported deep learning for fault diagnosis
DANN
[]
A domain adaptation neural network for digital twin-supported fault diagnosis
2025-05-27T00:00:00
https://arxiv.org/abs/2505.21046v1
[ "https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis" ]
{'Accuracy': '80.22'}
[ "Accuracy" ]
Given the following paper and codebase: Paper: A domain adaptation neural network for digital twin-supported fault diagnosis Codebase: https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis Improve the DANN model on the Digital twin-supported deep learning for fault diagnosis dataset. The result should improve on the following metrics: {'Accuracy': '80.22'}. You must use only the codebase provided.
A domain adaptation neural network for digital twin-supported fault diagnosis Zhenling Chen CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, FranceHaiwei Fu CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France Zhiguo Zeng Chair on Risk and Resilience of Complex Systems, Laboratoie Genie Industriel, CentraleSupélec, Université Paris-Saclay, 91190, Gif-sur-Yvette, France Abstract —Digital twins offer a promising solution to the lack of sufficient labeled data in deep learning-based fault diagnosis by generating simulated data for model training. However, discrepancies between simulation and real-world systems can lead to a significant drop in performance when models are applied in real scenarios. To address this issue, we propose a fault diagnosis framework based on Domain-Adversarial Neu- ral Networks (DANN), which enables knowledge transfer from simulated (source domain) to real-world (target domain) data. We evaluate the proposed framework using a publicly available robotics fault diagnosis dataset, which includes 3,600 sequences generated by a digital twin model and 90 real sequences collected from physical systems. The DANN method is compared with commonly used lightweight deep learning models such as CNN, TCN, Transformer, and LSTM. Experimental results show that incorporating domain adaptation significantly improves the diag- nostic performance. For example, applying DANN to a baseline CNN model improves its accuracy from 70.00% to 80.22% on real-world test data, demonstrating the effectiveness of domain adaptation in bridging the sim-to-real gap.1 Index Terms —predictive maintenance, fault diagnosis, digital failure twin, domain adaptation neural network (DANN) I. I NTRODUCTION Fault diagnosis aims at identifying the cause of a failure from observational data from sensors [1]. One of the major challenge in fault diagnosis is that the state-of-the-art deep learning-based models often require large amount of data. It is, however, often difficult to obtain these data in practice [2]. Digital twin technology combines physical entity with its digital representation. It can accurately reproduce the scenes in the physical world in the virtual environment, providing great convenience for the analysis, optimization and control of physical system [3]. Using digital twins to generate sim- ulated failure data and train a deep learning model for fault diagnosis has become a promising approach to solve the data insufficiency issue of fault diagnosis. There are already some existing works in applying dig- ital twins for fault diagnosis. For example, Jain et al. [4] proposed a digital twin-based fault diagnosis framework that 1Code and datasets available at: https://github.com/JialingRichard/Digital- Twin-Fault-Diagnosisutilizes the digital twin model to simulate system behavior and identify fault patterns in distributed photovoltaic systems, Wang et al. [5] proposed a digital twin-based fault diagnosis framework that integrates sensor data and physical models to detect and diagnose faults in rotating machinery within smart manufacturing systems. Yang et al.[6] proposed a digital twin-driven fault diagnosis method that combines virtual and real data to diagnose composite faults, where the digital twin generates virtual samples to compensate for the scarcity of fault samples in real systems. Most of these existing works assume that condition-monitoring data are availalbe on the same level as the component being diagnosed. 
In practice, however, deploying sensors at the component level is often difficult. One has to rely on system-level condition-monitoring data to infer the component-level failure modes [7]. In one of our previous works [8], we developed a digital twin model of a robot and use it to generate simulated failure data for fault diagnosis. Testing data are collected from a real robot with different injected failures to test the performance of the developed model. The existing works share a common assumption: The digital twin model can accurately predict the actual behavior of the component under test. However, in practice, the digital twin model is not always accurate. Then, the fault diagnosis model trained on simulation data often suffers from poor performance when applied to real data, due to the imprecision of the simulation model. To address this issue, we propose a Domain Adversarial Neural Network (DANN)-based framework for digital twin-supported fault diagnosis. Through the DANN [9], the developed model is able to learn useful features from the simulated data even the simulation does not exactly match the reality. We also performed a benchmark study by comparing the performance of the developed model with other state-of-the-art deep learning models, including LSTM [10], Transformer [11], CNN [12] and TCN [13]. The main contributions of this paper are: •We propose a novel DANN-based framework for digital twin-supported fault diagnosis.arXiv:2505.21046v1 [cs.LG] 27 May 2025 •We present an open-source dataset for digital twin- supported fault diagnosis. The dataset include simulated training data and real test data. •We conducted a detailed benchmark study where the performance of the developed model is compared with four other state-of-the-art deep learning models. II. D IGITAL TWIN MODEL AND DATASET DESCRIPTION In this paper, we consider the open source dataset for digital twin-supported fault diagnosis we developed previously in [8]. The dataset is created based on the digital failure twin model of a robot, as shown in Fig. 1. Fig. 1: The fault diagnosis in digital twin for robot [8]. A digital twin model is a simulation model used to simulate the failure behavior of the robot and connect to the physical entity to reflect its real-time states. The robot comprises of six motors. We monitor the trajectory of the end-effector and the control commands of each motor. The goal of the fault diagnosis is to use the condition-monitoring data to infer the failure modes of the four out of the six motors. Each motor might subject to two failure modes, i.e., stuck and steady-state error. The digital failure twin model is built as a two-layer model. On the motor level, we model the dynamics of the motor and its controller. Then, the response of each motor is fed into a forward kinematics model, which allows simulating the end-effector trajectory from the postions of the motors. The stuck and steady-state error can be simulated by changing the response of each motor, as shown in Fig. 1. To generate the training dataset, we generate 400 random trajectories, and simulate the 9classes (one normal state and eight failure states where each motor could be in either one of the two failure modes) under each trajectory. Each sample contains records spanning 1000 time steps. Then, we collect test data by randomly simulate 90 trajectories following the same protocals. In the original work [8], an LSTM was trained on the simulation dataset and applied to dignose the failures on the real robot. 
The results showed that, although the trained model performed well on the validation set (seperated from training data, but still from simulation), it performs poorly on the real testing dataset ( 96% V .S.69%). The main reason is that the simulation model does not match exactly the behavior of thereal robot. In this paper, we intend to address this issue through transfer learning. III. E XISTING TRANSFER LEARNING MODELS Prevalent deep learning-based models show great success in both academia and industry [14]. For example, Convolutional Neural Networks (CNN) use in automated fault detection for machinery vibrations [15], Recurrent Neural Networks (RNN) for example, LSTM have proven useful in diagnosing faults based on time series data [16]. In current work, Plakias et al. combined the dense convolutional blocks and the attention mechanism to develop a new attentive dense CNN for fault diagnosis [17]. Although these methods can achieve high per- formance in fault diagnosis, the application of these methods is usually under the assumption that test data and train data come from the same data distribution. Also, the current deep learning-based models are under the Independent Identically Distribution (i.i.d.). As we discussed before, the data generated from a digital twin might not exactly match the actual behavior in the physical entity. As a result, the distribution of training and testing dataset cannot be assumed as i.d.d., due to steady-state errors that cause in friction or other mechanical effects and real time faults that can big impact the results. In this paper, we use transfer learning methods to deal with source domain and target domain alignment in digital twin in data distribution. To solve the issue of data distribution discrepancy, various domain adaptation techniques in transfer learning have been introduced for diagnosing bearing faults [18–20]. Transfer learning can also be used to learn knowledge from source domain for fault diagnosis on a different target domain. Applications of transfer learning in fault diagnosis include representation adaptation [21–24], parameter transfer [25–27], adversarial-based domain adaptation [28, 29]. One of the most often used domain adaptation methods is representation adaptation which to align the distribution of the representations from the source domain and target domain by reducing the distribution discrepancy. Some neural networks are build for this, such as feature-based transfer neural network (FTNN) [24], deep convolutional transfer learning network (DCTLN) [21]. Shao et al. proposed a CNN-based machine fault diagnosis framework in parameter transfer [27], and experimental results show that DCTLN can get the average accuracy of 86.3%. Experimental results illustrate that the proposed method can achieve the test accuracy near 100% on three mechanical datasets, and in the gearbox dataset, the accuracy can reach 99.64%. In adversarial-based domain adaptation, Cheng et al. pro- posed Wasserstein distance based deep transfer learning (WD- DTL) [28] which uses CNN as pre-trained model. Experimen- tal results show that the transfer accuracy of WD-DTL can reach 95.75% on average. Lu et al. develop a domain adapta- tion combined with deep convolutional generative adversarial network (DADCGAN)-based methodology for diagnosing DC arc faults [29]. DADCGAN is a robust and reliable fault diagnosis scheme based on a lightweight CNN-based classifier can be achieved for the target domain. 
In this paper, we choose the DANN architecture to de- velop a framework of digital twin-supported fault diagnosis. The main reason is that its architecture is simple and can efficiently capture the features from the source domain and generalize well on the target domain. Moreover, DANN’s adversarial training mechanism enables the model to learn domain-invariant features, making it particularly effective in reducing the distribution discrepancy between source and tar- get domains. Furthermore, DANN performs well with limited labeled data from the target domain, addressing the common challenge of insufficient fault data in practical applications. Its ability to handle complex and nonlinear relationships in data and make DANN a reliable and scalable solution for fault diagnosis. IV. DANN M ODEL ARCHITECTURE We use Domain Adversarial Neural Network (DANN) model [9] and extend its application in digital twin in robotics maintenance prediction that previously and originally utilize in transfer learning in domain adaptation. The architecture of DANN is shown in Figure 2. Let us assume the input samples are represented by x∈X, where Xis some input space and certain labels (output) yfrom the label space Y. We assume that there exist two distributions S(x, y)andT(x, y)onX⊗Y, which will be referred to as the source domain and the target domain. Our goal is to predict labels ygiven the input xfor the target domain. We denote with dithe binary variable (domain label) for theith example, which indicates whether xicome from the source domain ( xi∼S(x) if di=0) or from the target distribution ( xi∼T(x) if di=1). We assume that the input x is first representative by a representation learning Gf(Feature Extractor) to a d-dimensional feature vector f ∈Rd, and we denote the vector of parameters of all layers in this mapping as θf,f=Gf(x;θf). Then, the feature vector fis representative byGy(label predictor) to the label y, and we denote the parameters of this learning with θy. Finally, the same feature vector fis representative to the domain label dby a mapping Gd(domain classifier) with the parameters θd. For the model learning, we minimize the label prediction loss on the annotated part (i.e. the source part) of the train set, also the parameters of both the feature extractor and the label predictor are optimized in order to minimize the empirical loss for the source domain samples. This ensures the discriminativeness of the features fand the overall good prediction performance of the combination of the feature extractor and the label predictor on the source domain. By doing so, we make the features fdomain-invariant. We need to make the distributions S(f)=Gf(x;θf)| x∼S(x) and T(f)=Gf(x;θf)| x∼T(x) to be similar [30]. To Measure the dissimilarity of the distributions S(f)and T(f), the distributions are constantly changing in learning progresses, we estimate the dissimilarity is to look at the loss of the domain classifier Gd, provided that the parametersθdof the domain classifier have been trained to discriminate between the two feature distributions. In training to obtain domain-invariant features, we seek the parameters θfof the feature representative that maximize the loss of the domain classifier (by making the two feature distributions as similar as possible), and simultaneously seeking the parameters θdof the domain classifier that minimize the loss of the domain classifier. And we seek to minimize the loss of the label predictor. 
The objective function is:

E(\theta_f, \theta_y, \theta_d) = \sum_{i=1..N} L_y\big(G_y(G_f(x_i;\theta_f);\theta_y),\, y_i\big) \;-\; \lambda \sum_{i=1..N} L_d\big(G_d(G_f(x_i;\theta_f);\theta_d),\, d_i\big) = \sum_{i=1..N} L_y^i(\theta_f,\theta_y) \;-\; \lambda \sum_{i=1..N} L_d^i(\theta_f,\theta_d)  (1)

where L_y is the loss for label prediction, L_d is the loss for domain classification, and L_y^i, L_d^i denote the corresponding loss functions evaluated at the i-th training example. We seek the parameters \hat\theta_f, \hat\theta_y, \hat\theta_d by solving the following optimization problem:

(\hat\theta_f, \hat\theta_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat\theta_d), \qquad \hat\theta_d = \arg\max_{\theta_d} E(\hat\theta_f, \hat\theta_y, \theta_d)  (2)

These parameters are obtained by backpropagation through the label predictor and the domain classifier. A gradient reversal layer (GRL) implements the -\lambda factor in (1): during backpropagation, the partial derivatives of the loss downstream of the GRL (i.e., L_d) with respect to the layer parameters upstream of the GRL (i.e., \theta_f) are multiplied by -\lambda (that is, \partial L_d / \partial \theta_f is effectively replaced with -\lambda \, \partial L_d / \partial \theta_f). The GRL therefore has the forward and backward behaviour

R_\lambda(x) = x  (3)

\frac{dR_\lambda}{dx} = -\lambda I  (4)

where I is the identity matrix. For the feature extractor we use a CNN, because the CNN baseline gave the best results; it consists of two convolutional layers with kernel size 3 and 64 filters.

V. EXPERIMENTS

A. Dataset
In this case study, we work on the dataset originally reported in [8]. As in [8], we retained the desired and realized trajectory coordinates (x, y, z) and introduced a derived feature set representing the residuals between the desired and realized trajectories. As a result, the final feature set comprises six features: the desired trajectory coordinates (x, y, z) and the corresponding residuals (x, y, z).

Fig. 2: DANN Architecture [9]

The source domain dataset generated by the digital twin consists of 3600 samples across 9 distinct labels, with each label containing 400 samples. The real-world measurements are treated as the target domain, which contains 90 samples. We split the source domain dataset into training and validation sets with a 9:1 ratio, and the target domain dataset is used as the test set. The DANN described in Sect. IV is used to train a fault diagnosis model on the source domain data. Only the measured features of the target domain, not its labels, are used during DANN training to learn the domain-invariant features. The trained DANN is then applied to predict the failure labels of the target domain.

B. Evaluation Metrics
The performance of all methods is evaluated using Accuracy and F1 Score, which are defined as follows:

a) Accuracy:
\text{Accuracy} = \frac{\text{Number of Correct Predictions}}{\text{Total Number of Predictions}} = \frac{TP+TN}{TP+TN+FP+FN}  (5)

where TP, TN, FP, and FN represent the number of true positives, true negatives, false positives, and false negatives, respectively.

b) F1 Score: The F1 Score is the harmonic mean of precision and recall:
\text{F1 Score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}  (6)

where:
\text{Precision} = \frac{TP}{TP+FP}, \qquad \text{Recall} = \frac{TP}{TP+FN}  (7)

These metrics provide a balanced evaluation of the model's performance.

C. Benchmarked models
We use four prevalent deep learning models as baselines:
• LSTM [10]: Long Short-Term Memory (LSTM) networks handle time-series data in deep learning and are often used to mitigate vanishing and exploding gradients.
LSTM is a special type of recurrent neural network (RNN), and can effectively capture and process long-term dependencies in sequence data by introducing memory units and gating mechanisms. •Transformer [11] Transformer is better at context depen- dency. And it is very versatile especially in multimodal. This ability to dynamically focus on relevant parts of the input is a key reason why Transformer model excel when processing sequence data. •CNN [12] Convolutional Neural Networks (CNN) is mainly used as a visual neural network, which mainly extracts features layer by layer through multiple and deep convolution. •TCN [13] It is a deep learning model specifically de- signed to process sequential data, combining the parallel processing capabilities of convolutional neural networks (CNN) with the long-term dependent modeling capabili- ties of recurrent neural networks (RNN). D. Implementation Details The implementation of the DANN is carried out using PyTorch. The experiments are conducted on NVIDIA RTX 3060 GPU with the following parameter settings: Learning rate is 0.001, Batch size is 32, Number of epochs is 250, Optimizer is Adam, and Alpha: α=2 1 +e−10p−1 (8) where p=epoch max epoch(9) VI. R ESULTS AND DISCUSSIONS A. Average accuracy and F1 score over all methods In this subsection, we systematically compare the results from the DANN with the four benchmarked models. We conduct experiments to evaluate the accuracy of the models on the train set, validation set, and real test set, as shown in table I. Additionally, we record the F1-score for each one of the nine classes, as shown in table II. Due to the randomness of deep learning models, each experiment is conducted five times, and both the average values and standard deviations of the performance metrics are calculated. From Table I, it can be seen that the four benchmarked deep learning models do not perform well, especially on the test set. The performance on the test set drops significantly compared to the training set and validation set. This can be explained by the imprecision of the simulation model used to generate the training data. The DANN, on the other hand, achieve much better performance on the test set. This is because through domain adaptation, the DANN is able to extract domain invariate features and generalize them to the target domain. It is observed from Table II that most of the benchmarked models exhibit very low classification accuracy for the state healthy. This is because, healthy state is very similar to other states where one motor has steady-state errors. When the simulation model is not accurate, the generated training data are even more difficult to distinguish between healthy and steady-state error states. The DANN, on the other hand, performs well in classifying the state of healthy. This is because after the domain adaptation, in the extracted feature space, the healthy state becomes well-seperated with the other states. In summary, among the commonly used deep learning models in our experiments, the model that combines a deeper and wider CNN as the backbone with the DANN structure is the relatively optimal choice.B. Ablation study for Digital Twin To demonstrate the necessity of using a digital twin model for this task, we conduct an ablation experiment. We train the model using only the real test set, excluding the train and validation sets generated entirely by the digital twin model. In the real test data, we split the dataset into train and testing sets at a ratio of 7:3. 
Our dataset contains only 90 real data points, and it is clear that most deep learning models struggle to fit on such a small dataset. The results we recorded in Table III, which indicate that, with such a limited amount of data, common methods cannot make accurate predictions. Use digital twin model to generate simulation data, on the other hand, clearly improve the performance, as the generated simulation data help the deep learning model to better learn the relevant features. VII. C ONCLUSIONS AND FUTURE WORKS In this paper, we proposed a new deep learning baseline for fault diagnosis using an existing digital twin dataset. We applied commonly used lightweight deep learning models and demonstrated that the Domain-Adversarial Neural Net- work (DANN) approach with a CNN backbone, as a transfer learning method, achieves higher accuracy compared to other models. Furthermore, our experiments validate that combining digital twin simulation with domain adaptation techniques can effectively address the issue of limited real-world data in fault diagnosis tasks. We selected lightweight models such as CNN, TCN, Trans- former, and LSTM due to their wide adoption in time-series fault diagnosis, ease of training, and relatively low computa- tional cost. Although these models serve as strong baselines, we acknowledge that more advanced architectures—such as pre-trained large-scale models or graph-based neural net- works—may offer improved generalization and performance. Exploring these alternatives remains a promising direction for future research. However, several limitations remain. First, the DANN framework requires more computational resources and deep learning expertise, which may pose challenges for practical deployment, particularly in resource-constrained industrial set- tings. Second, the inevitable discrepancies between the digital twin and the real-world system limit the performance of the model, as current simulations cannot fully capture complex physical dynamics. Third, while DANN improves generaliza- tion, the deep learning models used in this study still have room for improvement. Future work could explore more robust and generalizable models, such as those pre-trained on large- scale datasets or more advanced domain adaptation methods. ACKNOWLEDGMENT The research of Zhiguo Zeng is supported by ANR-22- CE10-0004, and chair of Risk and Resilience of Complex Systems (Chaire EDF, Orange and SNCF). Haiwei Fu and Zhenling Chen participate in this project as lab project in their master curricum in Centralesupélec. The authors would like to thank Dr. Myriam Tami for managing this project. 
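To make the training mechanism of Section IV concrete, here is a minimal PyTorch sketch of the gradient reversal layer from equations (3) and (4), the adaptation weight schedule from equation (8), and a two-layer CNN feature extractor (kernel size 3, 64 filters) feeding a 9-class label predictor and a binary domain classifier. The head sizes, the pooling choice, and the variable names are illustrative assumptions and not necessarily the configuration used in the released codebase:

```python
import math
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda
    in the backward pass (equations (3) and (4))."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grl(x, lam):
    return GradientReversal.apply(x, lam)

class DANN(nn.Module):
    """CNN feature extractor (two conv layers, kernel size 3, 64 filters)
    with a 9-class label predictor and a binary domain classifier.
    Head widths are assumptions for illustration."""
    def __init__(self, in_channels=6, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.label_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, num_classes))
        self.domain_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x, lam=0.0):
        f = self.features(x)                      # shared, domain-invariant features
        return self.label_head(f), self.domain_head(grl(f, lam))

def lambda_schedule(epoch, max_epoch):
    """alpha = 2 / (1 + exp(-10 p)) - 1 with p = epoch / max_epoch (equation (8))."""
    p = epoch / max_epoch
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

# One illustrative training step: a labelled source batch plus an unlabelled target batch.
model = DANN()
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

src_x, src_y = torch.randn(32, 6, 1000), torch.randint(0, 9, (32,))   # simulated (source) data
tgt_x = torch.randn(32, 6, 1000)                                      # real (target), unlabelled
lam = lambda_schedule(epoch=10, max_epoch=250)

cls_src, dom_src = model(src_x, lam)
_, dom_tgt = model(tgt_x, lam)
domain_labels = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])
loss = ce(cls_src, src_y) + ce(torch.cat([dom_src, dom_tgt]), domain_labels)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the domain loss enters with a positive sign while the GRL flips its gradient on the way into the feature extractor, a single backward pass realizes the min-max problem of equation (2).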
TABLE I: Performance Comparison of Baseline Models Model Training Accuracy (%) Validation Accuracy (%) Test Accuracy (%) LSTM 96.06±5.57 92.22±4.60 56.00±4.59 Transformer 97.73±0.33 75.94±1.52 48.44±2.29 TCN 87.96±0.86 67.67±0.65 44.22±1.63 CNN 99.94±0.11 96.78±0.76 70.00±1.99 DANN 99.29±0.67 95.28±0.72 80.22±1.78 TABLE II: Performance Comparison on Each Category (F1 Score) LSTM Transformer TCN CNN DANN Healthy 0.00±0.00 0.00±0.00 0.07±0.09 0.07±0.09 0.67±0.04 Motor 1 Stuck 0.86±0.06 0.63±0.05 0.65±0.04 0.81±0.03 0.84±0.04 Motor 1 Steady state error 0.55±0.14 0.67±0.09 0.46±0.04 0.85±0.03 0.90±0.05 Motor 2 Stuck 0.72±0.05 0.65±0.14 0.36±0.03 0.73±0.07 0.79±0.04 Motor 2 Steady state error 0.53±0.16 0.40±0.05 0.46±0.08 0.90±0.05 0.87±0.02 Motor 3 Stuck 0.55±0.05 0.54±0.08 0.48±0.05 0.63±0.09 0.80±0.03 Motor 3 Steady state error 0.63±0.11 0.38±0.10 0.62±0.10 0.91±0.03 0.91±0.06 Motor 4 Stuck 0.49±0.06 0.42±0.08 0.40±0.07 0.59±0.06 0.78±0.04 Motor 4 Steady state error 0.43±0.05 0.41±0.07 0.28±0.02 0.53±0.02 0.62±0.08 TABLE III: Performance Ablation Study Model Only Real Data Accuracy (%) Digital twin-supported deep learning (%) LSTM 14.92±4.09 56.00±4.59 Transformer 18.10±2.58 48.44±2.29 TCN 15.24±1.62 44.22±1.63 CNN 13.97±2.54 70.00±1.99 DANN 15.87±4.71 80.22±1.78 REFERENCES [1] Y . Zhang, J. Ji, Z. Ren, Q. Ni, F. Gu, K. Feng, K. Yu, J. Ge, Z. Lei, and Z. Liu, “Digital twin-driven partial domain adaptation network for intelligent fault diagnosis of rolling bearing,” Reliability Engineering & System Safety , vol. 234, p. 109186, 2023. [2] D. Zhong, Z. Xia, Y . Zhu, and J. Duan, “Overview of predictive maintenance based on digital twin technology,” Heliyon , vol. 9, no. 4, 2023. [3] M. G. Juarez, V . J. Botti, and A. S. Giret, “Digital twins: Review and challenges,” Journal of Computing and Information Science in Engineering , vol. 21, no. 3, p. 030802, 2021. [4] P. Jain, J. Poon, J. P. Singh, C. Spanos, S. R. Sanders, and S. K. Panda, “A digital twin approach for fault diagnosis in distributed photovoltaic systems,” IEEE Transactions on Power Electronics , vol. 35, no. 1, pp. 940–956, 2019. [5] J. Wang, L. Ye, R. X. Gao, C. Li, and L. Zhang, “Digital twin for rotating machinery fault diagnosis in smart manufacturing,” International Journal of Production Research , vol. 57, no. 12, pp. 3920–3934, 2019. [6] C. Yang, B. Cai, Q. Wu, C. Wang, W. Ge, Z. Hu, W. Zhu, L. Zhang, and L. Wang, “Digital twin-driven fault diagnosis method for composite faults by combining virtual and real data,” Journal of Industrial Information Integration , vol. 33, p. 100469, 2023. [7] Y . Ran, X. Zhou, P. Lin, Y . Wen, and R. Deng, “A survey of predictive maintenance: Systems, purposes and approaches,” arXiv preprint arXiv:1912.07383 , pp. 1–36, 2019. [8] K. M. Court, X. M. Court, S. Du, and Z. Zeng, “Use digital twins to sup- port fault diagnosis from system-level condition-monitoring data,” arXiv preprint arXiv:2411.01360 , 2024. [9] Y . Ganin and V . Lempitsky, “Unsupervised domain adaptation by backpropagation,” inInternational conference on machine learning , pp. 1180–1189, PMLR, 2015. [10] S. Hochreiter, “Long short-term memory,” Neural Computation MIT-Press , 1997. [11] A. Vaswani, “Attention is all you need,” Advances in Neural Information Processing Systems , 2017. [12] Y . LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, “Handwritten digit recognition with a back-propagation network,” Advances in neural information processing systems , vol. 2, 1989. [13] S. Bai, J. Z. 
Kolter, and V . Koltun, “An empirical evaluation of generic convolutional and recurrent networks for sequence modeling,” arXiv preprint arXiv:1803.01271 , 2018. [14] M. He and D. He, “Deep learning based approach for bearing fault diagnosis,” IEEE Transactions on Industry Applications , vol. 53, no. 3, pp. 3057–3065, 2017. [15] M. Xia, T. Li, L. Xu, L. Liu, and C. W. De Silva, “Fault diagnosis for rotating machinery using multiple sensors and convolutional neural networks,” IEEE/ASME transactions on mechatronics , vol. 23, no. 1, pp. 101–110, 2017. [16] J. Shi, D. Peng, Z. Peng, Z. Zhang, K. Goebel, and D. Wu, “Planetary gearbox fault diagnosis using bidirectional-convolutional lstm networks,” Mechanical Systems and Signal Processing , vol. 162, p. 107996, 2022.[17] S. Plakias and Y . S. Boutalis, “Fault detection and identification of rolling element bearings with attentive dense cnn,” Neurocomputing , vol. 405, pp. 208–217, 2020. [18] W. Li, R. Huang, J. Li, Y . Liao, Z. Chen, G. He, R. Yan, and K. Gryllias, “A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges,” Mechanical Systems and Signal Processing , vol. 167, p. 108487, 2022. [19] H. Zhiyi, S. Haidong, J. Lin, C. Junsheng, and Y . Yu, “Transfer fault diagnosis of bearing installed in different machines using enhanced deep auto-encoder,” Measurement , vol. 152, p. 107393, 2020. [20] H. Cao, H. Shao, X. Zhong, Q. Deng, X. Yang, and J. Xuan, “Unsupervised domain- share cnn for machine fault transfer diagnosis from steady speeds to time-varying speeds,” Journal of Manufacturing Systems , vol. 62, pp. 186–198, 2022. [21] L. Guo, Y . Lei, S. Xing, T. Yan, and N. Li, “Deep convolutional transfer learning network: A new method for intelligent fault diagnosis of machines with unlabeled data,” IEEE Transactions on Industrial Electronics , vol. 66, no. 9, pp. 7316–7325, 2018. [22] S. Pang and X. Yang, “A cross-domain stacked denoising autoencoders for rotating machinery fault diagnosis under different working conditions,” Ieee Access , vol. 7, pp. 77277–77292, 2019. [23] D. Xiao, Y . Huang, L. Zhao, C. Qin, H. Shi, and C. Liu, “Domain adaptive motor fault diagnosis using deep transfer learning,” Ieee Access , vol. 7, pp. 80937–80949, 2019. [24] B. Yang, Y . Lei, F. Jia, and S. Xing, “An intelligent fault diagnosis approach based on transfer learning from laboratory bearings to locomotive bearings,” Mechanical Systems and Signal Processing , vol. 122, pp. 692–706, 2019. [25] Z. He, H. Shao, X. Zhang, J. Cheng, and Y . Yang, “Improved deep transfer auto- encoder for fault diagnosis of gearbox under variable working conditions with small training samples,” Ieee Access , vol. 7, pp. 115368–115377, 2019. [26] H. Kim and B. D. Youn, “A new parameter repurposing method for parameter transfer with small dataset and its application in fault diagnosis of rolling element bearings,” Ieee Access , vol. 7, pp. 46917–46930, 2019. [27] S. Shao, S. McAleer, R. Yan, and P. Baldi, “Highly accurate machine fault diagnosis using deep transfer learning,” IEEE Transactions on Industrial Informatics , vol. 15, no. 4, pp. 2446–2455, 2018. [28] C. Cheng, B. Zhou, G. Ma, D. Wu, and Y . Yuan, “Wasserstein distance based deep adversarial transfer learning for intelligent fault diagnosis with unlabeled or insufficient labeled data,” Neurocomputing , vol. 409, pp. 35–45, 2020. [29] S. Lu, T. Sirojan, B. T. Phung, D. Zhang, and E. 
Ambikairajah, “Da-dcgan: An effective methodology for dc series arc fault diagnosis in photovoltaic systems,” IEEE Access , vol. 7, pp. 45831–45840, 2019. [30] H. Shimodaira, “Improving predictive inference under covariate shift by weighting the log-likelihood function,” Journal of statistical planning and inference , vol. 90, no. 2, pp. 227–244, 2000.
2
1
The DANN model employs a CNN architecture with two convolutional layers. Given the specified batch size of 32 and 250 training epochs on a dataset of 3,600 samples (360 samples per class for 9 labels, plus a much smaller test set of 90 samples), training requires (3600 / 32) * 250 ≈ 28,125 iterations. Based on similar CNN models, a single consumer GPU such as an NVIDIA RTX 3060 could complete 28,125 iterations in about 2 hours, assuming reasonable data loading and training efficiency. The small dataset size (only 3,600 samples) further limits training time. It is therefore feasible to conclude that the DANN model, given its relatively simple architecture and this dataset, can be trained in well under 8 hours on a single GPU.
yes
Yes
Time Series
A domain adaptation neural network for digital twin-supported fault diagnosis
2025-05-27T00:00:00.000Z
[https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis]
1
Included in Repo
3 Hours
Copy of train_ai_pytorch_DANN.ipynb
Yes
It starts and runs successfully
MNIST
GatedGCN+
[]
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00
https://arxiv.org/abs/2502.09263v1
[ "https://github.com/LUOyk1999/GNNPlus" ]
{'Accuracy': '98.712 ± 0.137'}
[ "Accuracy" ]
Given the following paper and codebase: Paper: Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Codebase: https://github.com/LUOyk1999/GNNPlus Improve the GatedGCN+ model on the MNIST dataset. The result should improve on the following metrics: {'Accuracy': '98.712 ± 0.137'}. You must use only the codebase provided.
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yuankai Luo1 2Lei Shi*1Xiao-Ming Wu*2 Abstract Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expres- siveness, issues like over-smoothing and over- squashing, and challenges in capturing long-range dependencies, while Graph Transformers (GTs) are considered superior due to their global atten- tion mechanisms. Literature frequently suggests that GTs outperform GNNs, particularly in graph- level tasks such as graph classification and re- gression. In this study, we explore the untapped potential of GNNs through an enhanced frame- work, GNN+, which integrates six widely used techniques: edge feature integration, normaliza- tion, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic evaluation of three classic GNNs—GCN, GIN, and GatedGCN—enhanced by the GNN+frame- work across 14 well-known graph-level datasets. Our results show that, contrary to the prevailing belief, classic GNNs excel in graph-level tasks, securing top three rankings across all datasets and achieving first place in eight, while also demonstrating greater efficiency than GTs. This highlights the potential of simple GNN architec- tures, challenging the belief that complex mech- anisms in GTs are essential for superior graph- level performance. Our source code is available at https://github.com/LUOyk1999/tunedGNN-G. 1. Introduction Graph machine learning addresses both graph-level tasks and node-level tasks, as illustrated in Figure 1. These tasks fundamentally differ in their choice of the basic unit for dataset composition, splitting, and training, with graph-level tasks focusing on the entire graph, while node-level tasks focus on individual nodes. Graph-level tasks (Dwivedi et al., 1Beihang University2The Hong Kong Polytechnic University. *Corresponding authors: Lei Shi <{leishi, luoyk }@buaa.edu.cn >, Xiao-Ming Wu <xiao-ming.wu@polyu.edu.hk >. Preprint. Figure 1. Differences between graph-level and node-level tasks. 2023; Hu et al., 2020; Luo et al., 2023b;a) often involve the classification of relatively small molecular graphs in chem- istry (Morris et al., 2020) or the prediction of protein proper- ties in biology (Dwivedi et al., 2022). In contrast, node-level tasks typically involve large social networks (Tang et al., 2009) or citation networks (Yang et al., 2016), where the primary goal is node classification. This distinction in the fundamental unit of dataset leads to differences in method- ologies, training strategies, and application domains. Message-passing Graph Neural Networks (GNNs) (Gilmer et al., 2017), which iteratively aggregate information from local neighborhoods to learn node representations, have be- come the predominant approach for both graph-level and node-level tasks (Niepert et al., 2016; Kipf & Welling, 2017; Veliˇckovi ´c et al., 2018; Xu et al., 2018; Bresson & Laurent, 2017; Wu et al., 2020). Despite their widespread success, GNNs exhibit several inherent limitations, including re- stricted expressiveness (Xu et al., 2018; Morris et al., 2019), over-smoothing (Li et al., 2018; Chen et al., 2020), over- squashing (Alon & Yahav, 2020), and a limited capacity to capture long-range dependencies (Dwivedi et al., 2022). 
A prevalent perspective is that Graph Transformers (GTs) (M¨uller et al., 2023; Min et al., 2022; Hoang et al., 2024), as an alternative to GNNs, leverage global attention mech- anisms that enable each node to attend to all others (Yun et al., 2019; Dwivedi & Bresson, 2020), effectively model- 1arXiv:2502.09263v1 [cs.LG] 13 Feb 2025 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence ing long-range interactions and addressing issues such as over-smoothing, over-squashing, and limited expressiveness (Kreuzer et al., 2021; Ying et al., 2021; Zhang et al., 2023; Luo et al., 2023c; 2024b). However, the quadratic com- plexity of global attention mechanisms limits the scalability of GTs in large-scale, real-world applications (Behrouz & Hashemi, 2024; Sancak et al., 2024; Ding et al., 2024). Moreover, it has been noted that many state-of-the-art GTs (Chen et al., 2022; Ramp ´aˇsek et al., 2022; Shirzad et al., 2023; Ma et al., 2023) still rely—either explicitly or implic- itly—on the message passing mechanism of GNNs to learn local node representations, thereby enhancing performance. Recent studies (Luo et al., 2024a; 2025a;b) have shown that, contrary to common belief, classic GNNs such as GCN (Kipf & Welling, 2017), GAT (Veli ˇckovi ´c et al., 2018), and GraphSAGE (Hamilton et al., 2017) can achieve perfor- mance comparable to, or even exceeding, that of state-of-the- art GTs for node-level tasks. However, a similar conclusion has not yet been established for graph-level tasks. While T¨onshoff et al. (2023) conducted pioneering research demon- strating that tuning a few hyperparameters can significantly enhance the performance of classic GNNs, their results indi- cate that these models still do not match the overall perfor- mance of GTs. Furthermore, their investigation is limited to the Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022). This raises an important question: “Can classic GNNs also excel in graph-level tasks?” To thoroughly investigate this question, we introduce GNN+, an enhanced GNN framework that incorporates es- tablished techniques into the message-passing mechanism, to effectively address graph-level tasks. As illustrated in Fig. 2, GNN+integrates six widely used techniques: the incorporation of edge features (Gilmer et al., 2017), normal- ization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014), residual connections (He et al., 2016), feed-forward networks (FFN) (Vaswani et al., 2017), and positional en- coding (Vaswani et al., 2017). Each technique serves as a hyperparameter that can be tuned to optimize performance. We systematically evaluate 3 classic GNNs—GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bres- son & Laurent, 2017)—enhanced by the GNN+frame- work across 14 well-known graph-level datasets from GNN Benchmark (Dwivedi et al., 2023), LRGB (Dwivedi et al., 2022), and OGB (Hu et al., 2020). The results demonstrate that the enhanced versions of classic GNNs match or even outperform state-of-the-art (SOTA) GTs, achieving rankings in the top three , including first place in eight datasets , while exhibiting superior efficiency. These findings pro- vide a positive answer to the previously posed question, suggesting that the true potential of GNNs for graph-level applications has been previously underestimated, and the GNN+framework effectively unlocks this potential whileaddressing their inherent limitations. 
Our ablation study also highlights the importance of each technique used in GNN+ and offers valuable insights for future research.

2. Classic GNNs for Graph-level Tasks

Define a graph as G = (V, E, X, E), where V is the set of nodes and E ⊆ V × V is the set of edges. The node feature matrix is X ∈ R^{|V| × d_V}, where |V| is the number of nodes and d_V is the dimension of the node features. The edge feature matrix is E ∈ R^{|E| × d_E}, where |E| is the number of edges and d_E is the dimension of the edge features. Let A ∈ R^{|V| × |V|} denote the adjacency matrix of G.

Message-passing Graph Neural Networks (GNNs) compute node representations h_v^l at each layer l via a message-passing mechanism, defined by Gilmer et al. (2017):

h_v^l = \mathrm{UPDATE}^l\big(h_v^{l-1},\ \mathrm{AGG}^l(\{h_u^{l-1} \mid u \in \mathcal{N}(v)\})\big),   (1)

where N(v) represents the neighboring nodes adjacent to v, AGG^l is the message aggregation function, and UPDATE^l is the update function. Initially, each node v is assigned a feature vector h_v^0 = x_v ∈ R^d. The function AGG^l is then used to aggregate information from the neighbors of v to update its representation. The output of the last layer L, i.e., GNN(v, A, X) = h_v^L, is the representation of v produced by the GNN. In this work, we focus on three classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018), and GatedGCN (Bresson & Laurent, 2017), which differ in their approach to learning the node representation h_v^l.

Graph Convolutional Networks (GCN) (Kipf & Welling, 2017), the vanilla GCN model, is formulated as:

h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \tfrac{1}{\sqrt{\hat{d}_u \hat{d}_v}}\, h_u^{l-1} W^l\Big),   (2)

where \hat{d}_v = 1 + \sum_{u \in \mathcal{N}(v)} 1, with \sum_{u \in \mathcal{N}(v)} 1 denoting the degree of node v, W^l is the trainable weight matrix in layer l, and σ is the activation function, e.g., ReLU(·) = max(0, ·).

Graph Isomorphism Networks (GIN) (Xu et al., 2018) learn node representations through a different approach:

h_v^l = \mathrm{MLP}^l\Big((1 + \epsilon) \cdot h_v^{l-1} + \sum_{u \in \mathcal{N}(v)} h_u^{l-1}\Big),   (3)

where ϵ is a constant, typically set to 0, and MLP^l denotes a multi-layer perceptron, which usually consists of 2 layers.

Residual Gated Graph Convolutional Networks (GatedGCN) (Bresson & Laurent, 2017) enhance traditional graph convolutions by incorporating gating mechanisms, improving adaptability and expressiveness:

h_v^l = h_v^{l-1} W_1^l + \sum_{u \in \mathcal{N}(v)} \eta_{v,u} \odot h_u^{l-1} W_2^l,   (4)

where \eta_{v,u} = \sigma(h_v^{l-1} W_3^l + h_u^{l-1} W_4^l) is the gating function and σ denotes the sigmoid activation function. This gating function determines how much each neighboring node contributes to updating the representation of the current node. The matrices W_1^l, W_2^l, W_3^l, W_4^l are trainable weight matrices specific to layer l. (Illustrative sketches of these three layer updates are given below.)

Graph-level tasks treat the entire graph, rather than individual nodes or edges, as the fundamental unit for dataset composition, splitting, and training. Formally, given a labeled graph dataset Γ = {(G_i, y_i)}_{i=1}^n, each graph G_i is associated with a label vector y_i, representing either categorical labels for classification or continuous values for regression. Next, the dataset Γ is typically split into training, validation, and test sets, denoted as Γ = Γ_train ∪ Γ_val ∪ Γ_test. Graph-level tasks encompass inductive prediction tasks that operate on entire graphs, as well as on individual nodes or edges (Dwivedi et al., 2022), with each corresponding to a distinct label vector y_i. Each type of task requires a tailored graph readout function R, which aggregates the output representations to compute the readout result, expressed as:

h_i^{\mathrm{readout}} = \mathrm{R}\big(\{h_v^{L} : v \in \mathcal{V}_i\}\big),   (5)

where V_i represents the set of nodes in the graph G_i.
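To make Eqs. (2)-(4) concrete, here is a minimal, self-contained PyTorch sketch of the three classic layer updates on a dense adjacency matrix, together with a mean-pooling readout in the spirit of Eq. (5). This is an illustrative aside, not the implementation in the GNNPlus repository; all class and variable names are assumptions.

```python
# Illustrative dense-adjacency sketches of Eqs. (2)-(4) and a mean-pooling
# readout (Eq. (5)); not the GNNPlus implementation.
import torch
from torch import nn

class GCNLayer(nn.Module):
    # Eq. (2): h_v = sigma( sum_{u in N(v) ∪ {v}} h_u W / sqrt(d_u d_v) )
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, h, A):                      # h: [N, d], A: [N, N] (0/1)
        A_hat = A + torch.eye(A.size(0))          # add self-loops
        d = A_hat.sum(dim=1)                      # d_hat_v = 1 + deg(v)
        norm = torch.outer(d.rsqrt(), d.rsqrt())  # 1 / sqrt(d_u d_v)
        return torch.relu((A_hat * norm) @ self.W(h))

class GINLayer(nn.Module):
    # Eq. (3): h_v = MLP((1 + eps) h_v + sum_{u in N(v)} h_u)
    def __init__(self, dim, eps=0.0):
        super().__init__()
        self.eps = eps
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, A):
        return self.mlp((1.0 + self.eps) * h + A @ h)

class GatedGCNLayer(nn.Module):
    # Eq. (4): h_v = h_v W1 + sum_{u in N(v)} eta_{v,u} * (h_u W2),
    # with eta_{v,u} = sigmoid(h_v W3 + h_u W4)
    def __init__(self, dim):
        super().__init__()
        self.W1, self.W2 = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.W3, self.W4 = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, h, A):
        eta = torch.sigmoid(self.W3(h).unsqueeze(1) + self.W4(h).unsqueeze(0))  # [N, N, d]
        msgs = eta * self.W2(h).unsqueeze(0)            # gated messages from every u
        agg = (A.unsqueeze(-1) * msgs).sum(dim=1)       # keep only real neighbours
        return self.W1(h) + agg

def mean_readout(h):
    # Eq. (5) with R = global mean pooling over the nodes of one graph
    return h.mean(dim=0)
```

The actual GNNPlus codebase presumably uses sparse message passing (e.g., PyTorch Geometric batches) rather than dense adjacency matrices; the dense form above is only meant to mirror the equations.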
For example, for graph prediction tasks, which aim to make predictions about the entire graph, the readout function R often operates as a global mean pooling function. Finally, for any graph G_i, the readout result is passed through a prediction head g(·) to obtain the predicted label ŷ_i = g(h_i^readout). The training objective is to minimize the total loss L(θ) = \sum_{G_i \in Γ_train} ℓ(ŷ_i, y_i) over all graphs in the training set Γ_train, where y_i represents the ground-truth label of G_i and θ denotes the trainable GNN parameters.

3. GNN+: Enhancing Classic GNNs for Graph-level Tasks

We propose an enhancement to classic GNNs for graph-level tasks by incorporating six popular techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks (FFN), and positional encoding. The enhanced framework, GNN+, is illustrated in Figure 2. [Figure 2: The architecture of GNN+.]

3.1. Edge Feature Integration

Edge features were initially incorporated into some GNN frameworks (Gilmer et al., 2017; Hu et al., 2019) by directly integrating them into the message-passing process to enhance information propagation between nodes. Following this practice, GraphGPS (Rampášek et al., 2022) and subsequent GTs encode edge features within their local modules to enrich node representations. Taking GCN (Eq. 2) as an example, the edge features are integrated into the message-passing process as follows:

h_v^l = \sigma\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \tfrac{1}{\sqrt{\hat{d}_u \hat{d}_v}}\, h_u^{l-1} W^l + e_{uv} W_e^l\Big),   (6)

where W_e^l is the trainable weight matrix in layer l and e_{uv} is the feature vector of the edge between u and v.

3.2. Normalization

Normalization techniques play a critical role in stabilizing the training of GNNs by mitigating the effects of covariate shift, where the distribution of node embeddings changes across layers during training. By normalizing node embeddings at each layer, the training process becomes more stable, enabling the use of higher learning rates and achieving faster convergence (Cai et al., 2021). Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Layer Normalization (LN) (Ba et al., 2016) are widely used techniques, typically applied to the output of each layer before the activation function σ(·). Here, we use BN:

h_v^l = \sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \tfrac{1}{\sqrt{\hat{d}_u \hat{d}_v}}\, h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big).   (7)

3.3. Dropout

Dropout (Srivastava et al., 2014), a technique widely used in convolutional neural networks (CNNs) to address overfitting by reducing co-adaptation among hidden neurons (Hinton et al., 2012; Yosinski et al., 2014), has also been found to be effective in addressing similar issues in GNNs (Shu et al., 2022), where the co-adaptation effects propagate and accumulate via message passing among different nodes. Typically, dropout is applied to the embeddings after activation:

h_v^l = \mathrm{Dropout}\Big(\sigma\Big(\mathrm{BN}\Big(\sum_{u \in \mathcal{N}(v) \cup \{v\}} \tfrac{1}{\sqrt{\hat{d}_u \hat{d}_v}}\, h_u^{l-1} W^l + e_{uv} W_e^l\Big)\Big)\Big).   (8)

3.4. Residual Connection

Residual connections (He et al., 2016) significantly enhance CNN performance by directly connecting the input of a layer to its output, thus alleviating the problem of vanishing gradients. They were first adopted by the vanilla GCN (Kipf & Welling, 2017) and have since been incorporated into subsequent works such as GatedGCN (Bresson & Laurent, 2017) and DeepGCNs (Li et al., 2019).
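The per-layer recipe of Eqs. (6)-(8), completed by the residual connection and FFN formalized just below (Eqs. (9)-(11)), can be summarized in a short sketch. The code is a hedged illustration of how these pieces compose on dense tensors, not the GNNPlus implementation; the class name and all arguments are assumptions.

```python
# Hedged sketch of one GNN+-style layer: edge features (Eq. 6), BatchNorm (Eq. 7),
# dropout (Eq. 8), a residual connection, and a feed-forward network.
# Dense tensors and all names are illustrative; this is not the GNNPlus code.
import torch
from torch import nn

class GNNPlusGCNLayer(nn.Module):
    def __init__(self, dim, edge_dim, dropout=0.1):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)        # node weight W^l
        self.We = nn.Linear(edge_dim, dim, bias=False)  # edge weight W_e^l
        self.bn = nn.BatchNorm1d(dim)
        self.drop = nn.Dropout(dropout)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ffn_bn = nn.BatchNorm1d(dim)

    def forward(self, h, A, E):
        # h: [N, d] node features, A: [N, N] adjacency, E: [N, N, edge_dim] edge features
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1)
        norm = torch.outer(d.rsqrt(), d.rsqrt()) * A_hat        # 1/sqrt(d_u d_v) on edges
        msg = norm @ self.W(h) + (norm.unsqueeze(-1) * self.We(E)).sum(dim=1)
        out = self.drop(torch.relu(self.bn(msg))) + h           # Eqs. (6)-(8) + residual
        return self.ffn_bn(self.ffn(out) + out)                 # FFN block in the spirit of Eq. (10)
```

Positional encodings such as RWSE (Eq. (12), discussed later in this section) would be concatenated to the raw node features before the first layer rather than inside each layer.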
Formally, residual connec- tions can be integrated into GNNs as follows: hl v=Dropout (σ(BN(X u∈N(v)∪{v}1pˆduˆdvhl−1 uWl +euvWl e))) +hl−1 v.(9) While deeper networks, such as deep CNNs (He et al., 2016; Huang et al., 2017), are capable of extract more complex fea- tures, GNNs encounter challenges like over-smoothing (Li et al., 2018), where deeper models lead to indistinguishable node representations. Consequently, most GNNs are shal- low, typically with 2 to 5 layers. However, by incorporating residual connections, we show that deeper GNNs, ranging from 3 to 20 layers, can achieve strong performance. 3.5. Feed-Forward Network GTs incorporate a feed-forward network (FFN) as a crucial component within each of their layers. The FFN enhances the model’s ability to perform complex feature transforma- tions and introduces non-linearity, thereby increasing the network’s expressive power. Inspired by this, we propose appending a fully-connected FFN at the end of each layer of GNNs, defined as: FFN(h) =BN(σ(hWl FFN 1)Wl FFN 2+h), (10) where Wl FFN 1andWl FFN 2are the trainable weight matrices of the FFN at the l-th GNN layer. The node embeddings output by the FFN are then computed as: hl v=FFN(Dropout (σ(BN(X u∈N(v)∪{v}1pˆduˆdvhl−1 uWl +euvWl e))) +hl−1 v). (11) 3.6. Positional Encoding Positional encoding (PE) was introduced in the Transformer model (Vaswani et al., 2017) to represent the positions of tokens within a sequence for language modeling. In GTs,Table 1. Overview of the datasets used for graph-level tasks. Dataset # graphs Avg. # nodes Avg. # edges Task Type ZINC 12,000 23.2 24.9 Graph regression MNIST 70,000 70.6 564.5 Graph classification CIFAR10 60,000 117.6 941.1 Graph classification PATTERN 14,000 118.9 3,039.3 Inductive node cls. CLUSTER 12,000 117.2 2,150.9 Inductive node cls. Peptides-func 15,535 150.9 307.3 Graph classification Peptides-struct 15,535 150.9 307.3 Graph regression PascalVOC-SP 11,355 479.4 2,710.5 Inductive node cls. COCO-SP 123,286 476.9 2,693.7 Inductive node cls. MalNet-Tiny 5,000 1,410.3 2,859.9 Graph classification ogbg-molhiv 41,127 25.5 27.5 Graph classification ogbg-molpcba 437,929 26.0 28.1 Graph classification ogbg-ppa 158,100 243.4 2,266.1 Graph classification ogbg-code2 452,741 125.2 124.2 Graph classification PE is used to incorporate graph positional or structural infor- mation. The encodings are typically added or concatenated to the input node features xvbefore being fed into the GTs. Various PE methods have been proposed, such as Laplacian Positional Encoding (LapPE) (Dwivedi & Bresson, 2020; Kreuzer et al., 2021), Weisfeiler-Lehman Positional Encod- ing (WLPE) (Zhang et al., 2020), Random Walk Structural Encoding (RWSE) (Li et al., 2020; Dwivedi et al., 2021; Ramp ´aˇsek et al., 2022), Learnable Structural and Positional Encodings (LSPE) (Dwivedi et al., 2021), and Relative Ran- dom Walk Probabilities (RRWP) (Ma et al., 2023). Follow- ing the practice, we use RWSE, one of the most efficient PE methods, to improve the performance of GNNs as follows: xv= [xv∥xRWSE v]WPE, (12) where [·∥·]denotes concatenation, xRWSE v represents the RWSE of node v, andWPEis the trainable weight matrix. 4. Assessment: Experimental Setup Datasets, Table 1 . 
We use widely adopted graph-level datasets in our experiments, including ZINC ,MNIST , CIFAR10 ,PATTERN , and CLUSTER from the GNN Benchmark (Dwivedi et al., 2023); Peptides-func ,Peptides- struct ,PascalVOC-SP ,COCO-SP , and MalNet-Tiny from Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021); and ogbg-molhiv ,ogbg- molpcba ,ogbg-ppa , and ogbg-code2 from Open Graph Benchmark (OGB) (Hu et al., 2020). We follow their re- spective standard evaluation protocols including the splits and metrics. For further details, refer to the Appendix A.2. Baselines. Our main focus lies on classic GNNs: GCN (Kipf & Welling, 2017), GIN (Xu et al., 2018; Hu et al., 2019), GatedGCN (Bresson & Laurent, 2017), the SOTA GTs: GT (2020), GraphTrans (2021), SAN (2021), Graphormer (2021), SAT (2022), EGT (2022), GraphGPS (2022; 2023), GRPE (2022), Graphormer-URPE (2022), Graphormer-GD (2023), Specformer (2023), LGI- GT (2023), GPTrans-Nano (2023b), Graph ViT/MLP-Mixer (2023), NAGphormer (2023a), DIFFormer (2023), MGT 4 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 2. Test performance on five benchmarks from (Dwivedi et al., 2023) (%). Shown is the mean ±s.d. of 5 runs with different random seeds.+denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for ZINC, PATTERN, and CLUSTER, and ∼100K for MNIST and CIFAR10. The top 1st,2ndand3rdresults are highlighted. ZINC MNIST CIFAR10 PATTERN CLUSTER # graphs 12,000 70,000 60,000 14,000 12,000 Avg. # nodes 23.2 70.6 117.6 118.9 117.2 Avg. # edges 24.9 564.5 941.1 3039.3 2150.9 Metric MAE↓ Accuracy ↑ Accuracy ↑ Accuracy ↑ Accuracy ↑ GT (2020) 0.226 ±0.014 90.831 ±0.161 59.753 ±0.293 84.808 ±0.068 73.169 ±0.622 SAN (2021) 0.139 ±0.006 – – 86.581 ±0.037 76.691 ±0.650 Graphormer (2021) 0.122 ±0.006 – – – – SAT (2022) 0.094 ±0.008 – – 86.848 ±0.037 77.856 ±0.104 EGT (2022) 0.108 ±0.009 98.173 ±0.087 68.702 ±0.409 86.821 ±0.020 79.232 ±0.348 GraphGPS (2022) 0.070 ±0.004 98.051 ±0.126 72.298 ±0.356 86.685 ±0.059 78.016 ±0.180 GRPE (2022) 0.094 ±0.002 – – 87.020 ±0.042 – Graphormer-URPE (2022) 0.086 ±0.007 – – – – Graphormer-GD (2023) 0.081 ±0.009 – – – – Specformer (2023) 0.066 ±0.003 – – – – LGI-GT (2023) – – – 86.930 ±0.040 – GPTrans-Nano (2023b) – – – 86.731 ±0.085 – Graph ViT/MLP-Mixer (2023) 0.073 ±0.001 98.460 ±0.090 73.960 ±0.330 – – Exphormer (2023) – 98.414 ±0.038 74.754 ±0.194 86.734 ±0.008 – GRIT (2023) 0.059 ±0.002 98.108 ±0.111 76.468 ±0.881 87.196 ±0.076 80.026 ±0.277 GRED (2024) 0.077 ±0.002 98.383 ±0.012 76.853 ±0.185 86.759 ±0.020 78.495 ±0.103 GEAET (2024) – 98.513 ±0.086 76.634 ±0.427 86.993 ±0.026 – TIGT (2024) 0.057 ±0.002 98.231 ±0.132 73.963 ±0.361 86.681 ±0.062 78.025 ±0.223 Cluster-GT (2024a) 0.071 ±0.004 – – – – GMN (2024) – 98.391 ±0.182 74.560 ±0.381 87.090 ±1.260 – Graph-Mamba (2024) – 98.420 ±0.080 73.700 ±0.340 86.710 ±0.050 76.800 ±0.360 GCN 0.367 ±0.011 90.705 ±0.218 55.710 ±0.381 71.892 ±0.334 68.498 ±0.976 GCN+0.076 ±0.00979.3%↓98.382 ±0.0958.5%↑69.824 ±0.41325.4%↑87.021 ±0.09521.1%↑77.109 ±0.87212.6%↑ GIN 0.526 ±0.051 96.485 ±0.252 55.255 ±1.527 85.387 ±0.136 64.716 ±1.553 GIN+0.065 ±0.00487.6%↓98.285 ±0.1031.9%↑69.592 ±0.28725.9%↑86.842 ±0.0481.7%↑ 74.794 ±0.21315.6%↑ GatedGCN 0.282 ±0.015 97.340 ±0.143 67.312 ±0.311 85.568 ±0.088 73.840 ±0.326 GatedGCN+0.077 ±0.00572.7%↓98.712 ±0.1371.4%↑77.218 ±0.38114.7%↑87.029 ±0.0371.7%↑ 79.128 ±0.2357.1%↑ Time (epoch) of GraphGPS 21s 76s 64s 32s 86s 
Time (epoch) of GCN+7s 60s 40s 19s 29s (2023), DRew (2023), Exphormer (2023), GRIT (2023), GRED (2024), GEAET (2024), Subgraphormer (2024), TIGT (2024), GECO (2024), GPNN (2024), Cluster-GT (2024a), and the SOTA graph state space models (GSSMs): GMN (2024), Graph-Mamba (2024), GSSC (2024b). Fur- thermore, various other GTs exist in related surveys (Hoang et al., 2024; Shehzad et al., 2024; M ¨uller et al., 2023), empir- ically shown to be inferior to the GTs we compared against for graph-level tasks. We report the performance results of baselines primarily from (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023), with the remaining obtained from their re- spective original papers or official leaderboards whenever possible, as those results are obtained by well-tuned models. Hyperparameter Configurations. We conduct hyperpa- rameter tuning on 3 classic GNNs, consistent with the hy- perparameter search space of GraphGPS (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023). Specifically, we utilize the AdamW optimizer (Loshchilov, 2017) with a learning rate from{0.0001,0.0005,0.001}and an epoch limit of 2000. As discussed in Section 3, we focus on whether to use the edge feature module, normalization (BN), residual connections, FFN, PE (RWSE), and dropout rates from {0.05,0.1,0.15,0.2,0.3}, the number of layers from 3 to 20. Considering the large number of hyperparameters anddatasets, we do not perform an exhaustive search. Addition- ally, we retrain baseline GTs using the same hyperparam- eter search space and training environments as the classic GNNs. Since the retrained results did not surpass those in their original papers, we present the results from those sources .GNN+denotes the enhanced version. We report mean scores and standard deviations after 5 independent runs with different random seeds. Detailed hyperparameters are provided in Appendix A. 5. Assessment: Results and Findings 5.1. Overall Performance We evaluate the performance of the enhanced versions of 3 classic GNNs across 14 well-known graph-level datasets. The enhanced versions of classic GNNs achieved state- of-the-art performance, ranking in the top three across 14 datasets , including first place in 8 of them , while also demonstrating superior efficiency . This suggests that the GNN+framework effectively harnesses the po- tential of classic GNNs for graph-level tasks and suc- cessfully mitigates their inherent limitations. 5 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 3. Test performance on five datasets from Long-Range Graph Benchmarks (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021). +denotes the enhanced version, while the baseline results were obtained from their respective original papers. # Param ∼500K for all. Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny # graphs 15,535 15,535 11,355 123,286 5,000 Avg. # nodes 150.9 150.9 479.4 476.9 1,410.3 Avg. # edges 307.3 307.3 2,710.5 2,693.7 2,859.9 Metric Avg. 
Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ GT (2020) 0.6326 ±0.0126 0.2529 ±0.0016 0.2694 ±0.0098 0.2618 ±0.0031 – SAN (2021) 0.6439 ±0.0075 0.2545 ±0.0012 0.3230 ±0.0039 0.2592 ±0.0158 – GraphGPS (2022) 0.6535 ±0.0041 0.2500 ±0.0005 0.3748 ±0.0109 0.3412 ±0.0044 0.9350 ±0.0041 GraphGPS (2023) 0.6534 ±0.0091 0.2509 ±0.0014 0.4440 ±0.0065 0.3884 ±0.0055 0.9350 ±0.0041 NAGphormer (2023a) – – 0.4006 ±0.0061 0.3458 ±0.0070 – DIFFormer (2023) – – 0.3988 ±0.0045 0.3620 ±0.0012 – MGT (2023) 0.6817 ±0.0064 0.2453 ±0.0025 – – – DRew (2023) 0.7150 ±0.0044 0.2536 ±0.0015 0.3314 ±0.0024 – – Graph ViT/MLP-Mixer (2023) 0.6970 ±0.0080 0.2449 ±0.0016 – – – Exphormer (2023) 0.6258 ±0.0092 0.2512 ±0.0025 0.3446 ±0.0064 0.3430 ±0.0108 0.9402 ±0.0021 GRIT (2023) 0.6988 ±0.0082 0.2460 ±0.0012 – – – Subgraphormer (2024) 0.6415 ±0.0052 0.2475 ±0.0007 – – – GRED (2024) 0.7133 ±0.0011 0.2455 ±0.0013 – – – GEAET (2024) 0.6485 ±0.0035 0.2547 ±0.0009 0.3933 ±0.0027 0.3219 ±0.0052 – TIGT (2024) 0.6679 ±0.0074 0.2485 ±0.0015 – – – GECO (2024) 0.6975 ±0.0025 0.2464 ±0.0009 0.4210 ±0.0080 0.3320 ±0.0032 – GPNN (2024) 0.6955 ±0.0057 0.2454 ±0.0003 – – – Graph-Mamba (2024) 0.6739 ±0.0087 0.2478 ±0.0016 0.4191 ±0.0126 0.3960 ±0.0175 0.9340 ±0.0027 GSSC (2024b) 0.7081 ±0.0062 0.2459 ±0.0020 0.4561 ±0.0039 – 0.9406 ±0.0064 GCN 0.6860 ±0.0050 0.2460 ±0.0007 0.2078 ±0.0031 0.1338 ±0.0007 0.8100 ±0.0081 GCN+0.7261 ±0.0067 5.9%↑0.2421 ±0.0016 1.6%↓0.3357 ±0.0087 62.0%↑0.2733 ±0.0041 104.9% ↑0.9354 ±0.0045 15.5%↑ GIN 0.6621 ±0.0067 0.2473 ±0.0017 0.2718 ±0.0054 0.2125 ±0.0009 0.8898 ±0.0055 GIN+0.7059 ±0.0089 6.6%↑0.2429 ±0.0019 1.8%↓0.3189 ±0.0105 17.3%↑0.2483 ±0.0046 16.9%↑ 0.9325 ±0.0040 4.8%↑ GatedGCN 0.6765 ±0.0047 0.2477 ±0.0009 0.3880 ±0.0040 0.2922 ±0.0018 0.9223 ±0.0065 GatedGCN+0.7006 ±0.0033 3.6%↑0.2431 ±0.0020 1.9%↓0.4263 ±0.0057 9.9%↑ 0.3802 ±0.0015 30.1%↑ 0.9460 ±0.0057 2.6%↑ Time (epoch) of GraphGPS 6s 6s 17s 213s 46s Time (epoch) of GCN+6s 6s 12s 162s 6s GNN Benchmark, Table 2. We observe that our GNN+ implementation substantially enhances the performance of classic GNNs, with the most significant improvements on ZINC, PATTERN, and CLUSTER. On MNIST and CIFAR, GatedGCN+outperforms SOTA models such as GEAET and GRED, securing top rankings. Long-Range Graph Benchmark (LRGB), Table 3. The results reveal that classic GNNs can achieve strong perfor- mance across LRGB datasets. Specifically, GCN+excels on the Peptides-func and Peptides-struct datasets. On the other hand, GatedGCN+achieves the highest accuracy on MalNet-Tiny. Furthermore, on PascalVOC-SP and COCO- SP, GatedGCN+significantly improves performance, se- curing the third-best model ranking overall. These results highlight the potential of classic GNNs in capturing long- range interactions in graph-level tasks. Open Graph Benchmark (OGB), Table 4. Finally, we test our method on four OGB datasets. As shown in Table 4, GatedGCN+consistently ranks among the top three mod- els and achieves top performance on three out of the four datasets. On ogbg-ppa, GatedGCN+shows an improve- ment of approximately 9%, ranking first on the OGB leader- board. On ogbg-molhiv and ogbg-molpcba, GatedGCN+ even matches the performance of Graphormer and EGT pre-trained on other datasets. Additionally, on ogbg-code2, GatedGCN+secures the third-highest performance, under-scoring the potential of GNNs for large-scale OGB datasets. 5.2. 
Ablation Study To examine the unique contributions of different technique used in GNN+, we conduct a series of ablation analysis by selectively removing elements such as edge feature module (Edge.), normalization (Norm), dropout, residual connec- tions (RC), FFN, PE from GCN+, GIN+, and GatedGCN+. The effect of these ablations is assessed across GNN Bench- mark (see Table 5), LRGB, and OGB (see Table 6) datasets. Our ablation study demonstrates that each module incor- porated in GNN+—including edge feature integration, normalization, dropout, residual connections, FFN, and PE—is indispensable ; the removal of any single com- ponent results in a degradation of overall performance. Observation 1: The integration of edge features is par- ticularly effective in molecular and image superpixel datasets, where these features carry critical information. In molecular graphs such as ZINC and ogbg-molhiv, edge features represent chemical bond information, which is es- sential for molecular properties. Removing this module leads to a significant performance drop. In protein networks ogbg-ppa, edges represent normalized associations between proteins. Removing the edge feature module results in a sub- 6 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 4. Test performance in four benchmarks from Open Graph Benchmark (OGB) (Hu et al., 2020).+denotes the enhanced version, while the baseline results were obtained from their respective original papers.†indicates the use of additional pretraining datasets, included here for reference only and excluded from ranking. ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # graphs 41,127 437,929 158,100 452,741 Avg. # nodes 25.5 26.0 243.4 125.2 Avg. # edges 27.5 28.1 2,266.1 124.2 Metric AUROC ↑ Avg. Precision ↑ Accuracy ↑ F1 score ↑ GT (2020) – – 0.6454 ±0.0033 0.1670 ±0.0015 GraphTrans (2021) – 0.2761 ±0.0029 – 0.1830 ±0.0024 SAN (2021) 0.7785 ±0.2470 0.2765 ±0.0042 – – Graphormer (pre-trained) (2021) 0.8051 ±0.0053†– – – SAT (2022) – – 0.7522 ±0.0056 0.1937 ±0.0028 EGT (pre-trained) (2022) 0.8060 ±0.0065†0.2961 ±0.0024†– – GraphGPS (2022) 0.7880 ±0.0101 0.2907 ±0.0028 0.8015 ±0.0033 0.1894 ±0.0024 Specformer (2023) 0.7889 ±0.0124 0.2972 ±0.0023 – – Graph ViT/MLP-Mixer (2023) 0.7997 ±0.0102 – – – Exphormer (2023) 0.7834 ±0.0044 0.2849 ±0.0025 – – GRIT (2023) 0.7835 ±0.0054 0.2362 ±0.0020 – – Subgraphormer (2024) 0.8038 ±0.0192 – – – GECO (2024) 0.7980 ±0.0200 0.2961 ±0.0008 0.7982 ±0.0042 0.1915 ±0.0020 GSSC (2024b) 0.8035 ±0.0142 – – – GCN 0.7606 ±0.0097 0.2020 ±0.0024 0.6839 ±0.0084 0.1507 ±0.0018 GCN+0.8012 ±0.0124 5.4%↑0.2721 ±0.0046 34.7%↑0.8077 ±0.0041 18.1%↑0.1787 ±0.0026 18.6%↑ GIN 0.7835 ±0.0125 0.2266 ±0.0028 0.6892 ±0.0100 0.1495 ±0.0023 GIN+0.7928 ±0.0099 1.2%↑0.2703 ±0.0024 19.3%↑0.8107 ±0.0053 17.7%↑0.1803 ±0.0019 20.6%↑ GatedGCN 0.7687 ±0.0136 0.2670 ±0.0020 0.7531 ±0.0083 0.1606 ±0.0015 GatedGCN+0.8040 ±0.0164 4.6%↑0.2981 ±0.0024 11.6%↑0.8258 ±0.0055 9.7%↑ 0.1896 ±0.0024 18.1%↑ Time (epoch/s) of GraphGPS 96s 196s 276s 1919s Time (epoch/s) of GCN+16s 91s 178s 476s Table 5. Ablation study on GNN Benchmark (Dwivedi et al., 2023) (%). - indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance. ZINC MNIST CIFAR10 PATTERN CLUSTER Metric MAE↓ Accuracy ↑Accuracy ↑Accuracy ↑Accuracy ↑ GCN+0.076 ±0.009 98.382 ±0.095 69.824 ±0.413 87.021 ±0.095 77.109 ±0.872 (-) Edge. 
0.135 ±0.004 98.153 ±0.042 68.256 ±0.357 86.854 ±0.054 – (-) Norm 0.107 ±0.011 97.886 ±0.066 60.765 ±0.829 52.769 ±0.874 16.563 ±0.134 (-) Dropout – 97.897 ±0.071 65.693 ±0.461 86.764 ±0.045 74.926 ±0.469 (-) RC 0.159 ±0.016 95.929 ±0.169 58.186 ±0.295 86.059 ±0.274 16.508 ±0.615 (-) FFN 0.132 ±0.021 97.174 ±0.063 63.573 ±0.346 86.746 ±0.088 72.606 ±1.243 (-) PE 0.127 ±0.010 – – 85.597 ±0.241 75.568 ±1.147 GIN+0.065 ±0.004 98.285 ±0.103 69.592 ±0.287 86.842 ±0.048 74.794 ±0.213 (-) Edge. 0.122 ±0.009 97.655 ±0.075 68.196 ±0.107 86.714 ±0.036 65.895 ±3.425 (-) Norm 0.096 ±0.006 97.695 ±0.065 64.918 ±0.059 86.815 ±0.855 72.119 ±0.359 (-) Dropout – 98.214 ±0.064 66.638 ±0.873 86.836 ±0.053 73.316 ±0.355 (-) RC 0.137 ±0.031 97.675 ±0.175 64.910 ±0.102 86.645 ±0.125 16.800 ±0.088 (-) FFN 0.104 ±0.003 11.350 ±0.008 60.582 ±0.395 58.511 ±0.016 62.175 ±2.895 (-) PE 0.123 ±0.014 – – 86.592 ±0.049 73.925 ±0.165 GatedGCN+0.077 ±0.005 98.712 ±0.137 77.218 ±0.381 87.029 ±0.037 79.128 ±0.235 (-) Edge. 0.119 ±0.001 98.085 ±0.045 72.128 ±0.275 86.879 ±0.017 76.075 ±0.845 (-) Norm 0.088 ±0.003 98.275 ±0.045 71.995 ±0.445 86.942 ±0.023 78.495 ±0.155 (-) Dropout 0.089 ±0.003 98.225 ±0.095 70.383 ±0.429 86.802 ±0.034 77.597 ±0.126 (-) RC 0.106 ±0.002 98.442 ±0.067 75.149 ±0.155 86.845 ±0.025 16.670 ±0.307 (-) FFN 0.098 ±0.005 98.438 ±0.151 76.243 ±0.131 86.935 ±0.025 78.975 ±0.145 (-) PE 0.174 ±0.009 – – 85.595 ±0.065 77.515 ±0.265 stantial accuracy decline, ranging from 0.5083 to 0.7310 for classic GNNs. Similarly, in image superpixel datasets like CIFAR-10, PascalVOC-SP, and COCO-SP, edge features encode spatial relationships between superpixels, which are crucial for maintaining image coherence. However, in codegraphs such as ogbg-code2 and MalNet-Tiny, where edges represent call types, edge features are less relevant to the prediction tasks, and their removal has minimal impact. Observation 2: Normalization tends to have a greater impact on larger-scale datasets, whereas its impact is less significant on smaller datasets. For large-scale datasets such as CIFAR 10, COCO-SP, and the OGB datasets, removing normalization leads to signifi- cant performance drops. Specifically, on ogbg-ppa, which has 158,100 graphs, ablating normalization results in an accuracy drop of around 15% for three classic GNNs. This result is consistent with Luo et al. (2024a), who found that normalization is more important for GNNs in node clas- sification on large graphs. In such datasets, where node feature distributions are more complex, normalizing node embeddings is essential for stabilizing the training process. Observation 3: Dropout proves advantageous for most datasets, with a very low dropout rate being sufficient and optimal . Our analysis highlights the crucial role of dropout in main- taining the performance of classic GNNs on GNN Bench- mark and LRGB and large-scale OGB datasets, with its ablation causing significant declines—for instance, an 8.8% relative decrease for GatedGCN+on CIFAR-10 and a 20.4% relative decrease on PascalVOC-SP. This trend continues in 7 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 6. Ablation study on LRGB and OGB datasets. - indicates that the corresponding hyperparameter is not used in GNN+, as it empirically leads to inferior performance. Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 Metric Avg. Precision ↑ MAE↓ F1 score ↑ F1 score ↑ Accuracy ↑ AUROC ↑Avg. 
Precision ↑Accuracy ↑ F1 score ↑ GCN+0.7261 ±0.0067 0.2421 ±0.0016 0.3357 ±0.0087 0.2733 ±0.0041 0.9354 ±0.0045 0.8012 ±0.0124 0.2721 ±0.0046 0.8077 ±0.0041 0.1787 ±0.0026 (-) Edge. 0.7191 ±0.0036 – 0.2942 ±0.0043 0.2219 ±0.0060 0.9292 ±0.0034 0.7714 ±0.0204 0.2628 ±0.0019 0.2994 ±0.0062 0.1785 ±0.0033 (-) Norm 0.7107 ±0.0027 0.2509 ±0.0026 0.1802 ±0.0111 0.2332 ±0.0079 0.9236 ±0.0054 0.7753 ±0.0049 0.2528 ±0.0016 0.6705 ±0.0104 0.1679 ±0.0027 (-) Dropout 0.6748 ±0.0055 0.2549 ±0.0025 0.3072 ±0.0069 0.2601 ±0.0046 – 0.7431 ±0.0185 0.2405 ±0.0047 0.7893 ±0.0052 0.1641 ±0.0043 (-) RC – – 0.2734 ±0.0036 0.1948 ±0.0096 0.8916 ±0.0048 – – 0.7520 ±0.0157 0.1785 ±0.0029 (-) FFN – – 0.2786 ±0.0068 0.2314 ±0.0073 0.9118 ±0.0078 0.7432 ±0.0052 0.2621 ±0.0019 0.7672 ±0.0071 0.1594 ±0.0020 (-) PE 0.7069 ±0.0093 0.2447 ±0.0015 – – – 0.7593 ±0.0051 0.2667 ±0.0034 – – GIN+0.7059 ±0.0089 0.2429 ±0.0019 0.3189 ±0.0105 0.2483 ±0.0046 0.9325 ±0.0040 0.7928 ±0.0099 0.2703 ±0.0024 0.8107 ±0.0053 0.1803 ±0.0019 (-) Edge. 0.7033 ±0.0015 0.2442 ±0.0028 0.2956 ±0.0047 0.2259 ±0.0053 0.9286 ±0.0049 0.7597 ±0.0103 0.2702 ±0.0021 0.2789 ±0.0031 0.1752 ±0.0020 (-) Norm 0.6934 ±0.0077 0.2444 ±0.0015 0.2707 ±0.0037 0.2244 ±0.0063 0.9322 ±0.0025 0.7874 ±0.0114 0.2556 ±0.0026 0.6484 ±0.0246 0.1722 ±0.0034 (-) Dropout 0.6384 ±0.0094 0.2531 ±0.0030 0.3153 ±0.0113 – – – 0.2545 ±0.0068 0.7673 ±0.0059 0.1730 ±0.0018 (-) RC 0.6975 ±0.0038 0.2527 ±0.0015 0.2350 ±0.0044 0.1741 ±0.0085 0.9150 ±0.0047 0.7733 ±0.0122 0.1454 ±0.0061 – 0.1617 ±0.0026 (-) FFN – – 0.2393 ±0.0049 0.1599 ±0.0081 0.8944 ±0.0074 – 0.2534 ±0.0033 0.6676 ±0.0039 0.1491 ±0.0016 (-) PE 0.6855 ±0.0027 0.2455 ±0.0019 0.3141 ±0.0031 – – 0.7791 ±0.0268 0.2601 ±0.0023 – – GatedGCN+0.7006 ±0.0033 0.2431 ±0.0020 0.4263 ±0.0057 0.3802 ±0.0015 0.9460 ±0.0057 0.8040 ±0.0164 0.2981 ±0.0024 0.8258 ±0.0055 0.1896 ±0.0024 (-) Edge. 0.6882 ±0.0028 0.2466 ±0.0018 0.3764 ±0.0117 0.3172 ±0.0109 0.9372 ±0.0062 0.7831 ±0.0157 0.2951 ±0.0028 0.0948 ±0.0000 0.1891 ±0.0021 (-) Norm 0.6733 ±0.0026 0.2474 ±0.0015 0.3628 ±0.0043 0.3527 ±0.0051 0.9326 ±0.0056 0.7879 ±0.0178 0.2748 ±0.0012 0.6864 ±0.0165 0.1743 ±0.0026 (-) Dropout 0.6695 ±0.0101 0.2508 ±0.0014 0.3389 ±0.0066 0.3393 ±0.0051 – – 0.2582 ±0.0036 0.8088 ±0.0062 0.1724 ±0.0027 (-) RC – 0.2498 ±0.0034 0.4075 ±0.0052 0.3475 ±0.0064 0.9402 ±0.0054 0.7833 ±0.0177 0.2897 ±0.0016 0.8099 ±0.0053 0.1844 ±0.0025 (-) FFN – – – 0.3508 ±0.0049 0.9364 ±0.0059 – 0.2875 ±0.0022 – 0.1718 ±0.0024 (-) PE 0.6729 ±0.0084 0.2461 ±0.0025 0.4052 ±0.0031 – – 0.7771 ±0.0057 0.2813 ±0.0022 – – large-scale OGB datasets, where removing dropout results in a 5–13% performance drop across 3 classic GNNs on ogbg-molpcba. Notably, 97% of the optimal dropout rates are≤0.2, and 64% are ≤0.1, indicating that a very low dropout rate is both sufficient and optimal for graph-level tasks. Interestingly, this finding for graph-level tasks con- trasts with Luo et al. (2024a)’s observations for node-level tasks, where a higher dropout rate is typically required. Observation 4: Residual connections are generally es- sential, except in shallow GNNs applied to small graphs. Removing residual connections generally leads to signifi- cant performance drops across datasets, with the only excep- tions being found in the peptide datasets. Although similar in the number of nodes to CLUSTER and PATTERN, pep- tide datasets involve GNNs with only 3-5 layers, while the others use deeper networks with over 10 layers. 
For shallow networks in small graphs, residual connections may not be as beneficial and can even hurt performance by disrupting feature flow. In contrast, deeper networks in larger graphs rely on residual connections to maintain gradient flow and enable stable, reliable long-range information exchange. Observation 5: FFN is crucial for GIN+and GCN+, greatly impacting their performance across datasets. Ablating FFN leads to substantial performance declines for GIN+and GCN+across almost all datasets, highlighting its essential role in graph-level tasks. Notably, on MNIST, removing FNN leads to an 88% relative accuracy drop for GIN+. This is likely because the architectures of GIN+and GCN+rely heavily on FFN for learning complex node fea-ture representations. In contrast, GatedGCN+uses gating mechanisms to adaptively adjust the importance of neigh- boring nodes’ information, reducing the need for additional feature transformations. The only exceptions are observed in the peptides datasets, where FFN is not used in all three models. This may be due to the shallow GNN architecture, where complex feature transformations are less necessary. Observation 6: PE is particularly effective for small- scale datasets, but negligible for large-scale datasets. Removing PE significantly reduces performance for classic GNNs on small-scale datasets like ZINC, PATTERN, CLUS- TER, Peptides-func, and ogbg-molhiv, which only contain 10,000-40,000 graphs. By contrast, on large-scale datasets like ogbg-code2, ogbg-molpcba, ogbg-ppa, and COCO-SP (over 100,000 graphs), the impact of PE is less pronounced. This may be because smaller datasets rely more on PE to capture graph structure, whereas larger datasets benefit from the abundance of data, reducing the need for PE. 6. Conclusion This study highlights the often-overlooked potential of clas- sic GNNs in tacking graph-level tasks. By integrating six widely used techniques into a unified GNN+framework, we enhance three classic GNNs for graph-level tasks. Evalu- ations on 14 benchmark datasets reveal that, these enhanced GNNs match or outperform GTs, while also demonstrating greater efficiency. These findings challenge the prevailing belief that GTs are inherently superior, reaffirming the capa- bility of simple GNN structures as powerful models. 8 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Impact Statements This paper presents work whose goal is to advance the field of Graph Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. References Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205 , 2020. Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450 , 2016. Bar-Shalom, G., Bevilacqua, B., and Maron, H. Sub- graphormer: Unifying subgraph gnns and graph transformers via graph products. arXiv preprint arXiv:2402.08450 , 2024. Behrouz, A. and Hashemi, F. Graph mamba: Towards learn- ing on graphs with state space models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining , pp. 119–130, 2024. Bo, D., Shi, C., Wang, L., and Liao, R. Specformer: Spectral graph neural networks meet transformers. arXiv preprint arXiv:2303.01028 , 2023. Bresson, X. and Laurent, T. Residual gated graph convnets. arXiv preprint arXiv:1711.07553 , 2017. 
Cai, T., Luo, S., Xu, K., He, D., Liu, T.-y., and Wang, L. Graphnorm: A principled approach to accelerating graph neural network training. In International Conference on Machine Learning , pp. 1204–1215. PMLR, 2021. Chen, D., Lin, Y ., Li, W., Li, P., Zhou, J., and Sun, X. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelli- gence , volume 34, pp. 3438–3445, 2020. Chen, D., O’Bray, L., and Borgwardt, K. Structure-aware transformer for graph representation learning. In Interna- tional Conference on Machine Learning , pp. 3469–3489. PMLR, 2022. Chen, J., Gao, K., Li, G., and He, K. NAGphormer: A tokenized graph transformer for node classification in large graphs. In The Eleventh International Confer- ence on Learning Representations , 2023a. URL https: //openreview.net/forum?id=8KYeilT3Ow. Chen, Z., Tan, H., Wang, T., Shen, T., Lu, T., Peng, Q., Cheng, C., and Qi, Y . Graph propagation trans- former for graph representation learning. arXiv preprint arXiv:2305.11424 , 2023b.Choi, Y . Y ., Park, S. W., Lee, M., and Woo, Y . Topology-informed graph transformer. arXiv preprint arXiv:2402.02005 , 2024. Ding, Y ., Orvieto, A., He, B., and Hofmann, T. Recurrent distance-encoding neural networks for graph representa- tion learning, 2024. URL https://openreview.net/forum? id=lNIj5FdXsC. Dwivedi, V . P. and Bresson, X. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699 , 2020. Dwivedi, V . P., Luu, A. T., Laurent, T., Bengio, Y ., and Bres- son, X. Graph neural networks with learnable structural and positional representations. In International Confer- ence on Learning Representations , 2021. Dwivedi, V . P., Ramp ´aˇsek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long range graph bench- mark. arXiv preprint arXiv:2206.08164 , 2022. Dwivedi, V . P., Joshi, C. K., Luu, A. T., Laurent, T., Ben- gio, Y ., and Bresson, X. Benchmarking graph neural networks. Journal of Machine Learning Research , 24 (43):1–48, 2023. Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428 , 2019. Freitas, S. and Dong, Y . A large-scale database for graph representation learning. Advances in neural information processing systems , 2021. Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chem- istry. In International conference on machine learning , pp. 1263–1272. PMLR, 2017. Gutteridge, B., Dong, X., Bronstein, M. M., and Di Gio- vanni, F. Drew: Dynamically rewired message pass- ing with delay. In International Conference on Machine Learning , pp. 12252–12267. PMLR, 2023. Hamilton, W., Ying, Z., and Leskovec, J. Inductive repre- sentation learning on large graphs. Advances in neural information processing systems , 30, 2017. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learn- ing for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 770–778, 2016. He, X., Hooi, B., Laurent, T., Perold, A., LeCun, Y ., and Bresson, X. A generalization of vit/mlp-mixer to graphs. InInternational conference on machine learning , pp. 12724–12745. PMLR, 2023. 9 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. 
Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 , 2012. Hoang, V . T., Lee, O., et al. A survey on structure-preserving graph transformers. arXiv preprint arXiv:2401.16176 , 2024. Hu, W., Liu, B., Gomes, J., Zitnik, M., Liang, P., Pande, V ., and Leskovec, J. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265 , 2019. Hu, W., Fey, M., Zitnik, M., Dong, Y ., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems , 33:22118–22133, 2020. Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp. 4700–4708, 2017. Huang, S., Song, Y ., Zhou, J., and Lin, Z. Cluster-wise graph transformer with dual-granularity kernelized at- tention. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024a. URL https://openreview.net/forum?id=3j2nasmKkP. Huang, Y ., Miao, S., and Li, P. What can we learn from state space models for machine learning on graphs? arXiv preprint arXiv:2406.05815 , 2024b. Hussain, M. S., Zaki, M. J., and Subramanian, D. Global self-attention as a replacement for graph convolution. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining , pp. 655–665, 2022. Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. InInternational conference on machine learning , pp. 448– 456. pmlr, 2015. Kipf, T. N. and Welling, M. Semi-supervised classifica- tion with graph convolutional networks. In International Conference on Learning Representations , 2017. URL https://openreview.net/forum?id=SJU4ayYgl. Kreuzer, D., Beaini, D., Hamilton, W., L ´etourneau, V ., and Tossou, P. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems , 34:21618–21629, 2021. Li, G., Muller, M., Thabet, A., and Ghanem, B. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF international conference on computer vision , pp. 9267–9276, 2019.Li, P., Wang, Y ., Wang, H., and Leskovec, J. Distance en- coding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems , 33:4465–4478, 2020. Li, Q., Han, Z., and Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI conference on artificial intelligence , 2018. Liang, J., Chen, M., and Liang, J. Graph external attention enhanced transformer. arXiv preprint arXiv:2405.21061 , 2024. Lin, C., Ma, L., Chen, Y ., Ouyang, W., Bronstein, M. M., and Torr, P. Understanding graph transformers by gen- eralized propagation, 2024. URL https://openreview.net/ forum?id=JfjduOxrTY. Loshchilov, I. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. Luo, S., Li, S., Zheng, S., Liu, T.-Y ., Wang, L., and He, D. Your transformer may not be as powerful as you expect. Advances in Neural Information Processing Systems , 35: 4301–4315, 2022. Luo, Y ., Shi, L., and Thost, V . Improving self-supervised molecular representation learning using persistent homol- ogy. In Thirty-seventh Conference on Neural Information Processing Systems , 2023a. URL https://openreview.net/ forum?id=wEiUGpcr0M. 
Luo, Y ., Shi, L., Xu, M., Ji, Y ., Xiao, F., Hu, C., and Shan, Z. Impact-oriented contextual scholar profiling using self-citation graphs. arXiv preprint arXiv:2304.12217 , 2023b. Luo, Y ., Thost, V ., and Shi, L. Transformers over directed acyclic graphs. In Thirty-seventh Conference on Neural Information Processing Systems , 2023c. URL https:// openreview.net/forum?id=g49s1N5nmO. Luo, Y ., Shi, L., and Wu, X.-M. Classic GNNs are strong baselines: Reassessing GNNs for node classification. In The Thirty-eight Conference on Neural Information Pro- cessing Systems Datasets and Benchmarks Track , 2024a. URL https://openreview.net/forum?id=xkljKdGe4E. Luo, Y ., Thost, V ., and Shi, L. Transformers over directed acyclic graphs. Advances in Neural Information Process- ing Systems , 36, 2024b. Luo, Y ., Li, H., Liu, Q., Shi, L., and Wu, X.-M. Node identifiers: Compact, discrete representations for effi- cient graph learning. In The Thirteenth International Conference on Learning Representations , 2025a. URL https://openreview.net/forum?id=t9lS1lX9FQ. 10 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Luo, Y ., Wu, X.-M., and Zhu, H. Beyond random masking: When dropout meets graph convolutional networks. In The Thirteenth International Conference on Learning Representations , 2025b. URL https://openreview.net/ forum?id=PwxYoMvmvy. Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P., and Lim, S.-N. Graph inductive biases in transformers without message passing. arXiv preprint arXiv:2305.17589 , 2023. Min, E., Chen, R., Bian, Y ., Xu, T., Zhao, K., Huang, W., Zhao, P., Huang, J., Ananiadou, S., and Rong, Y . Trans- former for graphs: An overview from architecture per- spective. arXiv preprint arXiv:2202.08455 , 2022. Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M. Weisfeiler and leman go neural: Higher-order graph neural networks. In Pro- ceedings of the AAAI conference on artificial intelligence , volume 33, pp. 4602–4609, 2019. Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M. Tudataset: A collection of bench- mark datasets for learning with graphs. arXiv preprint arXiv:2007.08663 , 2020. M¨uller, L., Galkin, M., Morris, C., and Ramp ´aˇsek, L. Attending to graph transformers. arXiv preprint arXiv:2302.04181 , 2023. Ngo, N. K., Hy, T. S., and Kondor, R. Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics , 159(3), 2023. Niepert, M., Ahmed, M., and Kutzkov, K. Learning con- volutional neural networks for graphs. In International conference on machine learning , pp. 2014–2023. PMLR, 2016. Park, W., Chang, W., Lee, D., Kim, J., and Hwang, S.-w. Grpe: Relative positional encoding for graph transformer. arXiv preprint arXiv:2201.12787 , 2022. Ramp ´aˇsek, L., Galkin, M., Dwivedi, V . P., Luu, A. T., Wolf, G., and Beaini, D. Recipe for a general, powerful, scal- able graph transformer. arXiv preprint arXiv:2205.12454 , 2022. Sancak, K., Hua, Z., Fang, J., Xie, Y ., Malevich, A., Long, B., Balin, M. F., and C ¸ataly ¨urek, ¨U. V . A scalable and effective alternative to graph transformers. arXiv preprint arXiv:2406.12059 , 2024. Shehzad, A., Xia, F., Abid, S., Peng, C., Yu, S., Zhang, D., and Verspoor, K. Graph transformers: A survey. arXiv preprint arXiv:2407.09777 , 2024.Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. 
J., and Sinop, A. K. Exphormer: Sparse transformers for graphs. arXiv preprint arXiv:2303.06147 , 2023. Shu, J., Xi, B., Li, Y ., Wu, F., Kamhoua, C., and Ma, J. Understanding dropout for graph neural networks. In Companion Proceedings of the Web Conference 2022 , pp. 1128–1138, 2022. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research , 15(1):1929–1958, 2014. Tang, J., Sun, J., Wang, C., and Yang, Z. Social influence analysis in large-scale networks. In Proceedings of the 15th ACM SIGKDD international conference on Knowl- edge discovery and data mining , pp. 807–816, 2009. T¨onshoff, J., Ritzert, M., Rosenbluth, E., and Grohe, M. Where did the gap go? reassessing the long-range graph benchmark. arXiv preprint arXiv:2309.00367 , 2023. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. At- tention is all you need. Advances in neural information processing systems , 30, 2017. Veliˇckovi ´c, P., Cucurull, G., Casanova, A., Romero, A., Li`o, P., and Bengio, Y . Graph attention networks. In International Conference on Learning Representations , 2018. Wang, C., Tsepa, O., Ma, J., and Wang, B. Graph-mamba: Towards long-range graph sequence modeling with se- lective state spaces. arXiv preprint arXiv:2402.00789 , 2024. Wu, Q., Yang, C., Zhao, W., He, Y ., Wipf, D., and Yan, J. DIFFormer: Scalable (graph) transformers induced by en- ergy constrained diffusion. In The Eleventh International Conference on Learning Representations , 2023. URL https://openreview.net/forum?id=j6zUzrapY3L. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., and Philip, S. Y . A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning sys- tems, 32(1):4–24, 2020. Wu, Z., Jain, P., Wright, M., Mirhoseini, A., Gonzalez, J. E., and Stoica, I. Representing long-range context for graph neural networks with global attention. Advances in Neural Information Processing Systems , 34:13266– 13279, 2021. Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 , 2018. 11 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Yang, Z., Cohen, W., and Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning , pp. 40–48. PMLR, 2016. Yin, S. and Zhong, G. Lgi-gt: Graph transformers with local and global operators interleaving. 2023. Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., Shen, Y ., and Liu, T.-Y . Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems , 34:28877–28888, 2021. Yosinski, J., Clune, J., Bengio, Y ., and Lipson, H. How trans- ferable are features in deep neural networks? Advances in neural information processing systems , 27, 2014. Yun, S., Jeong, M., Kim, R., Kang, J., and Kim, H. J. Graph transformer networks. Advances in neural information processing systems , 32, 2019. Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the expressive power of GNNs via graph biconnectivity. In The Eleventh International Conference on Learning Rep- resentations , 2023. URL https://openreview.net/forum? id=r9hNv76KoT3. Zhang, J., Zhang, H., Xia, C., and Sun, L. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140 , 2020. 
12 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence A. Datasets and Experimental Details A.1. Computing Environment Our implementation is based on PyG (Fey & Lenssen, 2019). The experiments are conducted on a single workstation with 8 RTX 3090 GPUs. A.2. Datasets Table 7 presents a summary of the statistics and characteristics of the datasets. •GNN Benchmark (Dwivedi et al., 2023) . ZINC contains molecular graphs with node features representing atoms and edge features representing bonds The task is to regress the constrained solubility (logP) of the molecule. MNIST and CIFAR10 are adapted from image classification datasets, where each image is represented as an 8-nearest-neighbor graph of SLIC superpixels, with nodes representing superpixels and edges representing spatial relationships. The 10-class classification tasks follow the original image classification tasks. PATTERN andCLUSTER are synthetic datasets sampled from the Stochastic Block Model (SBM) for inductive node classification, with tasks involving sub-graph pattern recognition and cluster ID inference. For all datasets, we adhere to the respective training protocols and standard evaluation splits (Dwivedi et al., 2023). •Long-Range Graph Benchmark (LRGB) (Dwivedi et al., 2022; Freitas & Dong, 2021) . Peptides-func andPeptides- struct are atomic graphs of peptides from SATPdb, with tasks of multi-label graph classification into 10 peptide functional classes and graph regression for 11 3D structural properties, respectively. PascalVOC-SP andCOCO-SP are node classification datasets derived from the Pascal VOC and MS COCO images by SLIC superpixelization, where each superpixel node belongs to a particular object class. We did not use PCQM-Contact in (Dwivedi et al., 2022) as its download link was no longer valid. MalNet-Tiny (Freitas & Dong, 2021) is a subset of MalNet with 5,000 function call graphs (FCGs) from Android APKs, where the task is to predict software type based on structure alone. For each dataset, we follow standard training protocols and splits (Dwivedi et al., 2022; Freitas & Dong, 2021). •Open Graph Benchmark (OGB) (Hu et al., 2020) .We also consider a collection of larger-scale datasets from OGB, containing graphs in the range of hundreds of thousands to millions: ogbg-molhiv andogbg-molpcba are molecular property prediction datasets from MoleculeNet. ogbg-molhiv involves binary classification of HIV inhibition, while ogbg-molpcba predicts results of 128 bioassays in a multi-task setting. ogbg-ppa contains protein-protein association networks, where nodes represent proteins and edges encode normalized associations between them; the task is to classify the origin of the network among 37 taxonomic groups. ogbg-code2 consists of abstract syntax trees (ASTs) from Python source code, with the task of predicting the first 5 subtokens of the function’s name. We maintain all the OGB standard evaluation settings (Hu et al., 2020). Table 7. Overview of the datasets used for graph-level tasks (Dwivedi et al., 2023; 2022; Hu et al., 2020; Freitas & Dong, 2021). Dataset # graphs Avg. # nodes Avg. # edges # node/edge feats Prediction level Prediction task Metric ZINC 12,000 23.2 24.9 28/1 graph regression MAE MNIST 70,000 70.6 564.5 3/1 graph 10-class classif. Accuracy CIFAR10 60,000 117.6 941.1 5/1 graph 10-class classif. Accuracy PATTERN 14,000 118.9 3,039.3 3/1 inductive node binary classif. Accuracy CLUSTER 12,000 117.2 2,150.9 7/1 inductive node 6-class classif. 
Accuracy Peptides-func 15,535 150.9 307.3 9/3 graph 10-task classif. Avg. Precision Peptides-struct 15,535 150.9 307.3 9/3 graph 11-task regression MAE PascalVOC-SP 11,355 479.4 2,710.5 14/2 inductive node 21-class classif. F1 score COCO-SP 123,286 476.9 2,693.7 14/2 inductive node 81-class classif. F1 score MalNet-Tiny 5,000 1,410.3 2,859.9 5/1 graph 5-class classif. Accuracy ogbg-molhiv 41,127 25.5 27.5 9/3 graph binary classif. AUROC ogbg-molpcba 437,929 26.0 28.1 9/3 graph 128-task classif. Avg. Precision ogbg-ppa 158,100 243.4 2,266.1 1/7 graph 37-task classif. Accuracy ogbg-code2 452,741 125.2 124.2 2/2 graph 5 token sequence F1 score A.3. Hyperparameters and Reproducibility Please note that we mainly follow the experiment settings of GraphGPS (Ramp ´aˇsek et al., 2022; T ¨onshoff et al., 2023). For the hyperparameter selections of classic GNNs, in addition to what we have covered, we list other settings in Tables 8, 9, 10, 13 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence 11, 12, 13. Further details regarding hyperparameters can be found in our code. In all experiments, we use the validation set to select the best hyperparameters. GNN+denotes enhanced implementation of the GNN model. Our code is available under the MIT License. Table 8. Hyperparameter settings of GCN+on benchmarks from (Dwivedi et al., 2023). Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER # GNN Layers 12 6 5 12 12 Edge Feature Module True True True True False Normalization BN BN BN BN BN Dropout 0.0 0.15 0.05 0.05 0.1 Residual Connections True True True True True FFN True True True True True PE RWSE-32 False False RWSE-32 RWSE-20 Hidden Dim 64 60 65 90 90 Graph Pooling add mean mean – – Batch Size 32 16 16 32 16 Learning Rate 0.001 0.0005 0.001 0.001 0.001 # Epochs 2000 200 200 200 100 # Warmup Epochs 50 5 5 5 5 Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5 # Parameters 260,177 112,570 114,345 517,219 516,674 Time (epoch) 7.6s 60.1s 40.2s 19.5s 29.7s Table 9. Hyperparameter settings of GCN+on LRGB and OGB datasets. Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # GNN Layers 3 5 14 18 8 4 10 4 4 Edge Feature Module True False True True True True True True True Normalization BN BN BN BN BN BN BN BN BN Dropout 0.2 0.2 0.1 0.05 0.0 0.1 0.2 0.2 0.2 Residual Connections False False True True True False False True True FFN False False True True True True True True True PE RWSE-32 RWSE-32 False False False RWSE-20 RWSE-16 False False Hidden Dim 275 255 85 70 110 256 512 512 512 Graph Pooling mean mean – – max mean mean mean mean Batch Size 16 32 50 50 16 32 512 32 32 Learning Rate 0.001 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001 # Epochs 300 300 200 300 150 100 100 400 30 # Warmup Epochs 5 5 10 10 10 5 5 10 2 Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6 # Parameters 507,351 506,127 520,986 460,611 494,235 1,407,641 13,316,700 5,549,605 23,291,826 Time (epoch) 6.9s 6.6s 12.5s 162.5s 6.6s 16.3s 91.4s 178.2s 476.3s 14 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 10. Hyperparameter settings of GIN+on benchmarks from (Dwivedi et al., 2023). 
Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER # GNN Layers 12 5 5 8 10 Edge Feature Module True True True True True Normalization BN BN BN BN BN Dropout 0.0 0.1 0.05 0.05 0.05 Residual Connections True True True True True FFN True True True True True PE RWSE-20 False False RWSE-32 RWSE-20 Hidden Dim 80 60 60 100 90 Graph Pooling sum mean mean – – Batch Size 32 16 16 32 16 Learning Rate 0.001 0.001 0.001 0.001 0.0005 # Epochs 2000 200 200 200 100 # Warmup Epochs 50 5 5 5 5 Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5 # Parameters 477,241 118,990 115,450 511,829 497,594 Time (epoch) 9.4s 56.8s 46.3s 18.5s 20.5s Table 11. Hyperparameter settings of GIN+on LRGB and OGB datasets. Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # GNN Layers 3 5 16 16 5 3 16 5 4 Edge Feature Module True True True True True True True True True Normalization BN BN BN BN BN BN BN BN BN Dropout 0.2 0.2 0.1 0.0 0.0 0.0 0.3 0.15 0.1 Residual Connections True True True True True True True False True FFN False False True True True False True True True PE RWSE-32 RWSE-32 RWSE-32 False False RWSE-20 RWSE-16 False False Hidden Dim 240 200 70 70 130 256 300 512 512 Graph Pooling mean mean – – max mean mean mean mean Batch Size 16 32 50 50 16 32 512 32 32 Learning Rate 0.0005 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001 # Epochs 300 250 200 300 150 100 100 300 30 # Warmup Epochs 5 5 10 10 10 5 5 10 2 Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6 # Parameters 506,126 518,127 486,039 487,491 514,545 481,433 8,774,720 8,173,605 24,338,354 Time (epoch) 7.4s 6.1s 14.8s 169.2s 5.9s 10.9s 89.2s 213.9s 489.8s 15 Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence Table 12. Hyperparameter settings of GatedGCN+on benchmarks from (Dwivedi et al., 2023). Hyperparameter ZINC MNIST CIFAR10 PATTERN CLUSTER # GNN Layers 9 10 10 12 16 Edge Feature Module True True True True True Normalization BN BN BN BN BN Dropout 0.05 0.05 0.15 0.2 0.2 Residual Connections True True True True True FFN True True True True True PE RWSE-20 False False RWSE-32 RWSE-20 Hidden Dim 70 35 35 64 56 Graph Pooling sum mean mean – – Batch Size 32 16 16 32 16 Learning Rate 0.001 0.001 0.001 0.0005 0.0005 # Epochs 2000 200 200 200 100 # Warmup Epochs 50 5 5 5 5 Weight Decay 1e-5 1e-5 1e-5 1e-5 1e-5 # Parameters 413,355 118,940 116,490 466,001 474,574 Time (epoch) 10.5s 137.9s 115.0s 32.6s 34.1s Table 13. Hyperparameter settings of GatedGCN+on LRGB and OGB datasets. 
Hyperparameter Peptides-func Peptides-struct PascalVOC-SP COCO-SP MalNet-Tiny ogbg-molhiv ogbg-molpcba ogbg-ppa ogbg-code2 # GNN Layers 5 4 12 20 6 3 10 4 5 Edge Feature Module True True True True True True True True True Normalization BN BN BN BN BN BN BN BN BN Dropout 0.05 0.2 0.15 0.05 0.0 0.0 0.2 0.15 0.2 Residual Connections False True True True True True True True True FFN False False False True True False True False True PE RWSE-32 RWSE-32 RWSE-32 False False RWSE-20 RWSE-16 False False Hidden Dim 135 145 95 52 100 256 256 512 512 Graph Pooling mean mean – – max mean mean mean mean Batch Size 16 32 32 50 16 32 512 32 32 Learning Rate 0.0005 0.001 0.001 0.001 0.0005 0.0001 0.0005 0.0003 0.0001 # Epochs 300 300 200 300 150 100 100 300 30 # Warmup Epochs 5 5 10 10 10 5 5 10 2 Weight Decay 0.0 0.0 0.0 0.0 1e-5 1e-5 1e-5 1e-5 1e-6 # Parameters 521,141 492,897 559,094 508,589 550,905 1,076,633 6,016,860 5,547,557 29,865,906 Time (epoch) 17.3s 8.0s 21.3s 208.8s 8.9s 15.1s 85.1s 479.8s 640.1s 16
4
1
The GNN models (GCN, GIN, and GatedGCN) enhanced with GNN+ have approximately 500K parameters each, which is moderate for graph neural networks. The datasets used involve a variety of sizes, but the mentioned ones have a maximum of around 500K graphs (like the OGB datasets). Given the average training time of these models in a practical setting can range from 5 to 40 seconds per epoch depending on the dataset's characteristics and model complexity, it is reasonable to assume a total training time of well under 8 hours. Since the authors conducted training over multiple datasets with a maximum of 2000 epochs mentioned but did not detail extensive training times, sampling training times from known GNN implementations indicates a single GPU can handle this load adequately in under 8 hours. These models can be run on a high-memory GPU, allowing flexible batch size choices to ensure memory constraints are met. Overall, a single GPU configuration is likely sufficient given the moderate size of the models and datasets involved. Therefore, this model can be trained in under 8 hours on a single GPU.
yes
Yes
Graph
Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence
2025-02-13T00:00:00.000Z
[https://github.com/LUOyk1999/GNNPlus]
1
https://data.pyg.org/datasets/benchmarking-gnns/MNIST_v2.zip
Approx. 9 hours (200 epochs × avg 157.2 sec/epoch)
https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing
Yes
null
ogbg-molhiv
GatedGCN+
[]
"Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence(...TRUNCATED)
2025-02-13T00:00:00
https://arxiv.org/abs/2502.09263v1
[ "https://github.com/LUOyk1999/GNNPlus" ]
"{'Test ROC-AUC': '0.8040 ± 0.0164', 'Validation ROC-AUC': '0.8329 ± 0.0158', 'Number of params': (...TRUNCATED)
[ "Test ROC-AUC", "Ext. data", "Validation ROC-AUC", "Number of params" ]
"Given the following paper and codebase:\n Paper: Unlocking the Potential of Classic GNNs for Gra(...TRUNCATED)
"Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence(...TRUNCATED)
4
1
"The paper describes training across 14 well-known graph-level datasets with a mean parameter count (...TRUNCATED)
yes
Yes
Graph
"Unlocking the Potential of Classic GNNs for Graph-level Tasks: Simple Architectures Meet Excellence(...TRUNCATED)
2025-02-13T00:00:00.000Z
[https://github.com/LUOyk1999/GNNPlus]
1
http://snap.stanford.edu/ogb/data/graphproppred/csv_mol_download/hiv.zip
approx 40 min - ( 100 epochs * 22.8s)
https://drive.google.com/file/d/1Y7jMNhNybbdgrUJa_MxcOrbwpJNkDPav/view?usp=sharing
Yes
null
Fashion-MNIST
Continued fraction of straight lines
[]
Real-valued continued fraction of straight lines
2024-12-16T00:00:00
https://arxiv.org/abs/2412.16191v1
["https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction(...TRUNCATED)
{'Accuracy': '84.12', 'Trainable Parameters': '7870', 'NMI': '74.4'}
[ "Percentage error", "Accuracy", "Trainable Parameters", "NMI", "Power consumption" ]
"Given the following paper and codebase:\n Paper: Real-valued continued fraction of straight line(...TRUNCATED)
"Real-valued continued fraction of straight lines Vijay Prakash S Alappuzha, Kerala, India. prakash.(...TRUNCATED)
4
1
"The model is trained on the Fashion-MNIST dataset, which consists of 60,000 training images and 10,(...TRUNCATED)
yes
Yes
CV
Real-valued continued fraction of straight lines
2024-12-16T00:00:00.000Z
[https://github.com/grasshopper14/Continued-fraction-of-straight-lines/blob/main/continued_fraction_reg.py]
1
https://github.com/zalandoresearch/fashion-mnist
20 min
https://colab.research.google.com/drive/1LNMCRLMIWN5U_9WDeRxYmcbnAgaNadSd?usp=sharing
Yes
Yes Everythng is running successfully
Traffic
GLinear
[]
"Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series(...TRUNCATED)
2025-01-02T00:00:00
https://arxiv.org/abs/2501.01087v3
[ "https://github.com/t-rizvi/GLinear" ]
{'MSE ': '0.3222'}
[ "MSE " ]
"Given the following paper and codebase:\n Paper: Bridging Simplicity and Sophistication using GL(...TRUNCATED)
"IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE JOURNAL, 2025 1 Bridging Simplic(...TRUNCATED)
4
1
"The GLinear model, being a simplified architecture without complex components like Transformers, sh(...TRUNCATED)
yes
Yes
Time Series
"Bridging Simplicity and Sophistication using GLinear: A Novel Architecture for Enhanced Time Series(...TRUNCATED)
2025-01-02 0:00:00
https://github.com/t-rizvi/GLinear
1
Inside the repo in dataset folder
193 sec * 4 = 12.9 minutes
https://colab.research.google.com/drive/1sI72VSxjN4cyQR7UrueWfBXwoFi9Y9Qr?usp=sharing
Yes
"-- Training on all data set is included inside the scripts/EXP-LookBackWindow_\\&_LongForecasting/(...TRUNCATED)
BTAD
URD
[]
Unlocking the Potential of Reverse Distillation for Anomaly Detection
2024-12-10T00:00:00
https://arxiv.org/abs/2412.07579v1
[ "https://github.com/hito2448/urd" ]
"{'Segmentation AUROC': '98.1', 'Detection AUROC': '93.9', 'Segmentation AUPRO': '78.5', 'Segmentati(...TRUNCATED)
[ "Detection AUROC", "Segmentation AUROC", "Segmentation AP", "Segmentation AUPRO" ]
"Given the following paper and codebase:\n Paper: Unlocking the Potential of Reverse Distillation(...TRUNCATED)
"Unlocking the Potential of Reverse Distillation for Anomaly Detection Xinyue Liu1, Jianyuan Wang2*,(...TRUNCATED)
4
1
"The proposed method utilizes a WideResNet50 architecture as a teacher network which typically has a(...TRUNCATED)
yes
Yes
CV
Unlocking the Potential of Reverse Distillation for Anomaly Detection
2024-12-10 0:00:00
https://github.com/hito2448/urd
1
https://www.mydrive.ch/shares/38536/3830184030e49fe74747669442f0f282/download/420938113-1629952094/mvtec_anomaly_detection.tar.xz; https://www.robots.ox.ac.uk/~vgg/data/dtd/download/dtd-r1.0.1.tar.gz
8 hours for one folder. There are 11 folders.
https://drive.google.com/file/d/1OLbo3FifM1a7-wbCtfpjZrZLr0K5bS87/view?usp=sharing
Yes
-- Just need to change the num_workers in train.py according to system
York Urban Dataset
DT-LSD
[]
DT-LSD: Deformable Transformer-based Line Segment Detection
2024-11-20T00:00:00
https://arxiv.org/abs/2411.13005v1
[ "https://github.com/SebastianJanampa/DT-LSD" ]
{'sAP5': '30.2', 'sAP10': '33.2', 'sAP15': '35.1'}
[ "sAP5", "sAP10", "sAP15", "FH" ]
"Given the following paper and codebase:\n Paper: DT-LSD: Deformable Transformer-based Line Segme(...TRUNCATED)
"DT-LSD: Deformable Transformer-based Line Segment Detection Sebastian Janampa The University of New(...TRUNCATED)
4
1
"The proposed DT-LSD model has a relatively small batch size of 2 and uses a single Nvidia RTX A5500(...TRUNCATED)
yes
Yes
CV
DT-LSD: Deformable Transformer-based Line Segment Detection
2024-11-20 0:00:00
https://github.com/SebastianJanampa/DT-LSD
1
A script to download the data is provided in the Colab file.
Uses CPU to train for some reason; 8 hours per epoch.
https://colab.research.google.com/drive/1XPiW-hDq6q8HNZ4yVP0oAn-3a1_ay5rG?usp=sharing
Yes
-- Trains, but uses CPU for some reason.
UCR Anomaly Archive
KAN
[]
KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks
2024-11-01T00:00:00
https://arxiv.org/abs/2411.00278v1
[ "https://github.com/issaccv/KAN-AD" ]
{'AUC ROC ': '0.7489'}
[ "Average F1", "AUC ROC " ]
"Given the following paper and codebase:\n Paper: KAN-AD: Time Series Anomaly Detection with Kolm(...TRUNCATED)
"KAN-AD: Time Series Anomaly Detection with Kolmogorov–Arnold Networks Quan Zhou*, Changhua Pei, H(...TRUNCATED)
4
1
"The KAN-AD model is based on a novel architecture that leverages Fourier series for anomaly detecti(...TRUNCATED)
yes
Yes
Time Series
KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks
2024-11-01 0:00:00
https://github.com/issaccv/KAN-AD
1
Downloaded when running prepeare_env.sh from repository & uses UTS dataset, https://github.com/CSTCloudOps/datasets
"There are 5 folders. May take around 2 hours or more no idea as time was not specified and traing w(...TRUNCATED)
https://colab.research.google.com/drive/1sE1mKwy3n9yameE-JG27Oa_HI-q8lFn9?usp=sharing
Yes
"-- After the installation of environment.sh. I changed a line of code to run matplot lib on colab (...TRUNCATED)
Chameleon
CoED
[]
Improving Graph Neural Networks by Learning Continuous Edge Directions
2024-10-18T00:00:00
https://arxiv.org/abs/2410.14109v1
[ "https://github.com/hormoz-lab/coed-gnn" ]
{'Accuracy': '79.69±1.35'}
[ "Accuracy" ]
"Given the following paper and codebase:\n Paper: Improving Graph Neural Networks by Learning Con(...TRUNCATED)
"Preprint IMPROVING GRAPH NEURAL NETWORKS BY LEARN - INGCONTINUOUS EDGE DIRECTIONS Seong Ho Pahng1, (...TRUNCATED)
4
1
"The proposed CoED GNN is a graph neural network architecture that utilizes a complex-valued Laplaci(...TRUNCATED)
yes
Yes
Graph
Improving Graph Neural Networks by Learning Continuous Edge Directions
2024-10-18 0:00:00
https://github.com/hormoz-lab/coed-gnn
1
Specify the dataset in classification.py and it is handled (downloaded) automatically.
2 min
https://colab.research.google.com/drive/1FiCFbVmQhjIqcCdViYynfEb9mWtJkB09?usp=sharing
Yes
-- I have set the best parameters based on advice from "Gemini"; they can be changed accordingly.