Dataset Viewer
Auto-converted to Parquet Duplicate
paper_id
stringlengths
10
56
reviewer_id
stringlengths
10
14
Question
stringlengths
61
2.58k
isHighQuality
bool
2 classes
ocr
stringlengths
10
52k
2O2FOO8pl4
XNAD64x9rP
How does adversarial training impact the privacy in your experimental findings? Could the authors explicate the insights/findings?
true
LABEL PRIVACY SOURCE CODING IN VERTICAL FEDERATED LEARNING Anonymous authors Paper under double-blind review ABSTRACT We study label privacy protection in vertical federated learning (VFL). VFL enables an active party who possesses labeled data to improve model performance (utility) by collaborating with passive parties who have auxiliary features. Recently, there has been a growing concern for protecting label privacy against honest passive parties who may surreptitiously deduce private labels from the output of their bottom models. However, existing studies do not remove the prior label information in the active party’s features from labels in an offline phase, thus lead label privacy to passive parties. In contrast, methods that focus on training-phase perturbation, we propose a novel offline-phase data cleansing approach to protect label privacy without compromising utility. Specifically, we first formulate a Label Privacy Source Coding (LPSC) problem to remove the redundant label information in the active party’s features from labels in multiple samples a new weight and label i.e., high federated training. We give a privacy guarantee and theoretically prove that gradient boosting efficiently optimizes the LPSC problem. Therefore, we propose the Vertical Federated Gradient Boosting (VFGBoost) framework to address the LPSC problem. Moreover, given that LPSC only provides upper-bounded privacy enhancement, VFGBoost further enables a flexible privacy-utility trade-off by incorporating adversarial training during federated training. Experimental results on four real-world datasets substantiate the efficacy of LPSC and the superiority of our VFGBoost framework. 1 INTRODUCTION Vertical federated learning (VFL) Yang et al. (2019) enable global model construction among organizations with datasets sharing overlapping sample spaces but feature features. Fig. 1(a) presents an overview of the multi-party VFL problem, where an active party possesses labeled data and has aligned samples with passive parties that own auxiliary features. The primary goal of VFL is to build a well-performed federated model in a privacy-preserving and efficient manner. Recently, label privacy protection has attracted increasing attention in VFL studies. Existing studies in VFL label privacy protection Li et al. (2022), Fu et al. (2022), Sun et al. (2022) rely on a model-simulating policy to protect label privacy and feature privacy, respectively. They protect label privacy by training a complex yet deterministic top model with various perturbation techniques. However, when a passive party steals the deterministic top model (e.g., via model completion attack Fu et al. (2022)), the worst-case label privacy leakage occurs and approximates the federated model performances to protect label privacy and feature privacy, respectively. More concrete approaches are needed to embeddings adopted for label protection exacerbate the feature privacy leakage to the active party Jin et al. (2021); Ye et al. (2022). The fundamental cause of this dilemma is that existing studies Li et al. (2022), Fu et al. (2022), Sun et al. (2022) directly optimize the forward embeddings for label prediction, making forward embedding highly correlated with and informative about private labels. Our key insight is that label privacy protection in VFL should be decoupled into two independent tasks: 1) offline-phase cleaning, which enhances privacy without compromise by removing the redundant label information from labels, and 2) training-phase perturbation, which further balances privacy-utility trade-off via inadequately learning from perturbed labels or gradients. --- Figure 1: (a) The multi-party VFL problem setting. (b) Vanilla VFL and label privacy threat As a remedy to the aforementioned loophole, we formulate a Label Privacy Source Coding (LPSC) problem to encode minimum-sufficient label privacy in an offline phase. The idea is to remove the label information present in the active party’s local features, which is redundant for VFL from the ground truth label. By doing so, the risk of label leakage from the network is eliminated, without sacrificing utility. We theoretically analyze the privacy guarantee of LPSC. LPSC is a constrained optimization problem of two mutual information. We prove that gradient boosting Freund & Schapre (1997) is a simple and efficient approach to optimize the LPSC problem. Specifically, gradient boosting converts the uniformly-weighted original labels to re-weighted residuals of the active party’s local predictions, thus eliminating the redundant label privacy. Therefore, we propose Vertical Federated Gradient Boosting (VFGBoost) to shift the federated learning target from the uniformly-weighted original labels $p_{ij}(i,j)$ to the LPSC-encoded re-weighted residuals $(p_{ij}(i,j))$ by encoding minimum-offspring training. Our proposed VFGBoost follows the aforementioned two-phased paradigm. In the offline LPSC phase (Fig. 2, phase 1), the active party trains a local model on its local data and computes the LPSC-encoded re-weighted residuals $p_{ij}(i,j)$ via gradient boosting as the learning target for VFL. Subsequently, in the federated training phase (Fig. 2, phase 2), the active parties train a federated model in the re-weighted residuals. Hence, the federated prediction is the weighted sum of the active party’s local prediction and the federated predicted residual. Crucially, the inherent label privacy enhancement of LPSC is upper-bounded by the label information present in the active party’s local features, potentially falling short in practical scenarios. To circumvent this, perturbation methods can be subsequently employed to enhance label privacy with a consequent reduction in utility. Specifically, VFGBoost utilizes adversarial training through a mass-inference technique to fine-tune trained classes (Fig. 2, phase 3), while also updating the passive parties’ bottom models to thwart the attack. Consequently, VFGBoost consists of a utility objective that learns to in the LPSC-encoded label privacy (re-weighted residuals) $p_{ij}(i,j)$, as well as an优化目标 that ensures protection against training labels $p_{ij}(i,j)$. We jointly optimize both objectives, utilizing a hyperparameter to enable flexible balancing of the privacy-utility trade-off. Moreover, VFGBoost is model-agnostic and allows any gradient-based model. Our comprehensive experiments conducted on four real-world datasets in the realm of recommendation and healthcare demonstrate that the LPSC can enhance label privacy without compromising utility, and the proposed VFGBoost framework achieves a superior privacy-utility trade-off compared to seven baseline methods. In summary, our contributions are as follows: - We decouple label privacy protection in VFL into two independent tasks: offline-phase cleaning to inherently enhance privacy without compromising utility, and training-phase perturbation for nuanced privacy-utility trade-offs. - We formulate a Label Privacy Source Coding (LPSC) problem with privacy guarantee to one-side optimization by the refined label information for offline-phase cleansing. - We further propose VFGBoost that utilizes gradient boosting to optimize LPSC and incorporates adversarial training to enable additional privacy enhancement. - We perform extensive experiments on four real-world datasets to demonstrate the efficacy of LPSC and the superiority of our proposed VFGBoost framework. --- 2 RELATED WORK **Label Protection in VFL.** Existing label privacy protection techniques in VFL mainly include cryptographic methods and perturbation methods. **Cryptographic methods** Fu et al. (2021); Cheng et al. (2021); Ren et al. (2022) incur significant overheads in computation and communication, which is typically unfeasible in practice. Therefore, they are not investigated and compared in this work. **Perturbation methods** introduce noise to labels or gradients to update the passive parties models. For instance, Li et al. Li et al. Li et al. (2022) employ adapted Gaussian noise to perturb the gradients to defend against label attacks. Sun et al. Sun et al. (2022) minimize the distance correlation between the forward embedding and the label to defend against the spectral Traan et al. (2018). Giang-based MI estimator Alemi et al. (2016) to gauge MI between the embedding and label. Social computing gradients. Yang et al. Yang et al. (2022) apply differential privacy Dwork et al. (2006) to a gradient perturbation-based split learning framework. Overall, due to the forward embeddings in existing works being optimized for label prediction Li et al. (2022); Zou et al. (2022); Sun et al. (2022), the worst-case label privacy leakage approximates the VFL utility, which is unacceptable. **Mutual Information for Privacy Protection.** MID Zuo et al. (2023) uses mutual information (MI) regularization to minimize the entropy of forward embedding during federated training. It adapts a VAE based MI estimator Alemi et al. (2016) to gauge MI between the embedding and label. Social explicit MI estimation Alemi et al. (2016); Belghazi et al. (2018), however, is resource-intensive and needs Gaussian noise, reducing utility. Conversely, our LPSC employs gradient boosting to enhance privacy efficiently without added noise. **Privacy Protection via Offline Pre-processing.** Recently, InstaHide Huang et al. (2020) and Fed-Pass Gu et al. (2023) are proposed to pre-process features to safeguard feature privacy by merging training tasks with noise. Nevertheless, to our best knowledge, there are no existing pre-processing approaches designed for label privacy protection. 3 PROBLEM APPROXIMATION **Vertical Federated Learning Setting.** In a typical VFL setting, as shown in Fig. 1(a), the aligned training data $D$ has sample identifiers (IDs) $i$ and labels $y$. The feature matrix $X = [X_0, X_1, \dots, X_k]$ is vertically partitioned among $K + 1$ parties by feature. An active party $P_0$ has labeled local samples $i \in X_{k+1}$. Moreover, the active party has local data $\mathcal{D}_{loc} = \begin{bmatrix} v^*_i & x^*_i & y^*_i \end{bmatrix}$, which additionally includes unaligned labeled samples. In VFL the active party leverages the auxiliary features from passive parties to train a federated model while protecting privacy. For simplicity, we use sample ID $i$ to represent $P_i$'s features $x_{ik}$ in functions (e.g., $h_{ij}(x_i)$) denotes $h_{ij}(x_{ik})$. A summary of notations and their corresponding descriptions is provided in the Appendix A. **Table 1: Threat Model.** (See details in Appendis B.) | Threat model | Adversary | Attack objective | Attack method | Adversary's capability | | :--- | :--- | :--- | :--- | :--- | | Semi-honest | Passive parties | min $R_{pj}(i,j)$ | Norm, Spectral, PMC | A few samples with features | **Threat Model.** We focus on privacy leakage stemming from the forward embedding of passive parties' bottom models. We assume that both active and passive parties are *semi-honest* and non-colluding, meaning that they follow the training protocol but attempt to extract private information. To attack label privacy in passive parties $P_i$, we mitigate error $h_{ip}(i,j)$ against the ground truth ID-label joint distribution $P_{ip}(i,j)$ given $D_{ip}(i,j)$ as SSL-divergence: $$\min_{A \in A} R_{pj}(i,j) = \min_{A \in A} R_{pj}(i,j)\left|A(h_{ip}(i,j))\right|$$ where $A \in H$ is any attack function that infers the raw label from $P_i$'s forward embedding $h_{ip}(i,j)$. Every attack method corresponds to a different attack function $A(i)$. We assume the adversary $P_k$ makes use of $P_{ip}(i,j)$, which is different from different attack functions $A(i)$. The feature attack object formulated similarly. As shown in Table 1, one threat model, a passive party use model norm attack Li et al. (2022), spectral traun et al. (2018), or an active model completion (PMC) attack Fu et al. (2022) to build the attack function $A(i)$. Similarly, an active adversary use model inversion attack He et al. (2019) to attack features. --- Figure 2: The VFGBoost framework. Left: the offline Label Privacy Source Coding (LPSC) phase (Section 4.2). Right: the federated training phase (Section 4.3). **Privacy Definition.** According to Equation 1, the adversary’s objective is to minimize the expected estimation error of the ground-truth conditional distribution $p_{pg}(y|i)$. Therefore, the private label information is and only is the ID-label joint distribution $p_{pg}(y,i)$, see details in Appendix B.2. Our goal is to design an offline-phase privacy mechanism $\mathcal{M}$ for the active model $P_0$ that outputs a new joint distribution $p_{jx}(i,y)$ $M(p_{jx}(i,y), X_O)$. Subsequently, the bottom model $h_O$ is trained to fit $p_{jx}(i,y)$ during federated training. Therefore, the optimal attack error given $p_{jx}(i,y)$ is: $$\mathbb{R}_{p_{jx}(i,y)}(p_{jx}(i,y)) = \min_{\psi_i^*} \mathbb{E}_{p_{jx}(i,y)}[D_{X_C}(p_{jx}(i,y))|A_{h_O}(i,y)]$$ where $\psi_i^*$ is $P_0$’s top model trained to map $h_O$’s to the new label $p_{jx}(i,y)$. We notice that, maximum $p_{jx}(i,y)$ $\mathbb{R}_{p_{jx}(i,y)}(p_{jx}(i,y)) \iff \min_{p_{jx}(i,y)} I(p_{jx}(i,y)|p_{jx}(i,y))$. Thereby, our threat model coincides with $e$-mutual information privacy ($e-MIP$), see more details in Appendix B. ### 4 PROPOSED APPROACH In this section, we introduce our two-phase VFGBoost framework that consists of an offline LPSC phase and a federated training phase, as depicted in Fig. 2. Specifically, we first formulate the LPSC problem (Section 4.1) that encodes minimum-sufficient label privacy by removing redundant label information from the active party’s features. Then, we prove that gradient boosting can efficiently optimize the LPSC problem (Section 4.2). thereby, we proposed our VFGBoost framework that leverages gradient boosting to tackle LPSC. Moreover, to flexibly balance the privacy-utility trade-off, VFGBoost incorporates adversarial training in the federated training phase (Section 4.3). The active party mimics adversarial passive parties to attack labels and, in turn, solves a max-min optimization problem to protect labels. ### 4.1 LABEL PRIVACY SOURCE CODING PROBLEM In the offline phase (Fig. 2, phase 1), we aim to encode minimum-sufficient label privacy from the ground-truth label privacy $p_{jx}(i,y)$ by removing the redundant label information $p_{rest}(i,y)$ in the active party’s local features $X_O$, as demonstrated in Fig. 3. To do so, we formally define a label privacy source coding problem as follows: **Problem 1** (Label Privacy Source Coding). Given ground-truth label privacy $p_{jx}(i,y)$ and the active party $P_0$’s learned label privacy $p_{jx}(i,y)$ from its feature $X_O$, the label privacy source coding problem is to optimize a new ID-label joint distribution $p_{jx}(i,y)$ as follows: $$\max_{p_{jx}(i,y)} I(p_{jx}(i,y); p_{jx}(i,y))$$ (Sufficient) $$s.t. I(p_{jx}(i,y); p_{jx}(i,y)) = 0$$ (Minimum), --- where $I(\cdot; \cdot)$ denotes mutual information. The optimized ID-label joint distribution $p_{psec}(i, y)$ assigns each sample a new weight through the marginal $p_{psec}(i)$ and/or label through the conditional $p_{psec}(y|i)$. We will show that gradient boosting, which is detailed in the Appendix C.1, efficiently solves the LPSC problem. The privacy leakage inherent in LPSC is rigorously bounded as described in the following theorem: **Theorem 1** (Privacy Guarantee) A LPSC algorithm is optimal information privacy ($\epsilon$ MIP). The privacy leakage is bounded by $\epsilon H(p_{psec}(i), p_{psec}(y|i))$, the conditional entropy of the ground-truth label distribution $p_{psec}(i, y)$ given the active party’s label distribution $p_{psec}(i, y)$. Formally, $$I(p_{pf}(i), y); p_{pf}(i|y_i) \leq \epsilon \text{bits},$$ where $p_{pf}^{*}_{psec}(i)$ represents the optimal solution of Equation 2 in the LPSC problem. **Proof.** The proof of Theorem 1 is provided in the Appendix B.3. The intuition behind Theorem 1 is that privacy leakage in LPSC is inversely related to the amount of label information the active party can infer from its local features. ### 4.2 GRADIENT BOOSTING SOLVES LPSC PROBLEM A recent insight of mutual information (MI) regularization for privacy protection Zou et al. (2023) relies on a notion of MI neural estimation Alemi et al. (2016); Belghazi et al. (2018), which explicitly estimates MI via Gaussian noise. However, explicit MI estimation is inefficient, and the introduced noise under model fitting Belgaz et al. (2018). In contrast, we prove that gradient boosting is a simple and efficient approach to solve the LPSC problem. To solve problem 1, 1) the active party $P_0$ first learns the label privacy $p_{psec}(i, y)$ present in its features $X_{i}^{\infty}$; 2) Then, active party $P_0$ optimizes the joint distribution $p_{psec}(i, y)$ by solving Eq. 2. We elaborate on each step as follows: **(1) Learning $p_{psec}(i, y)$.** To learn $p_{psec}(i, y)$, which is the label privacy present in local features $X_{0}$, the active party $P_0$ only needs to learn the conditional $p_{psec}(y|i)$ as the marginal $p_{psec}(i) = p_{pf}(i) \sim U$ is uniform. To do so, $P_0$ train model $f_0$ on its local data $D^{\infty\text{seq}}$ indexed by $x^{\infty}$ as follows: $$\theta^* = \arg \min_{\sum_{i=1}^{n} |f_{pc}|^2} \frac{1}{|f_{pc}|} \mathcal{L}_{\text{emp}}(y_i, f_0(i)),$$ where $\mathcal{L}_{\text{emp}}$ denotes the empirical loss $f_0(i)$ denotes $f_0(x_{pc})$ for simplicity and models the conditional label distribution $p_{psec}(i|y_i)$. Consequently, the active party learns $p_{pf}(i)$ as $p_{pf}(i|y_i)$. **(2) Optimizing $p_{psec}(i, y)$.** We point out that the gradient boosting algorithm optimizes the LPSC problem by taking AdaBoost Freund & Schapire (1997) as an example. As shown in Theorem 2 and Theorem 3, this algorithm optimizes the KL-divergence between $p_{psec}(i)$ and the uniform distribution $U$ (Eq. 4), while fixing the conditional distribution $p_{psec}(i|y_i)$ as the ground truth $p_{pf}(i|y_i)$. **Theorem 2.** Assuming fixed conditional distribution $p_{psec}(i|y_i) = p_{pf}(i|y_i)$ and let $U$ denote uniform distribution, the LPSC problem I can be reduced to: $$\min_{p_{pf}(i)} D_{pf}(p_{pf}(i)|U) \quad s.t. \sum_{i=1}^{n} p_{pf}(i) y_i f_0(i) = 0,$$ where $i \in i$ is the sample index of aligned training data with IDs $i$. **Proof.** The proof of Theorem 2 is provided in the Appendix D.1. The theorem 2 reduces the LPSC problem to a convex optimization problem Eq. 4, which can be solved via Lagrangeian. It projects the ground-truth label privacy $p_{pf}(i|y_i)$ to an information plane that is orthogonal to $p_{pf}(i|y_i)$, thus estimating the redundant label information in active party’s feature $X_0$. **Theorem 3.** Schapire & Freund (2013) The solution of the convex optimization problem Eq. 4 is equivalent to AdaBoost Freund & Schapire (1997): $$p_{pf}(i) = \frac{e^{-\alpha p_f(f_0(i))}}{\sum_{i=1}^{n} e^{-\alpha p_f(f_0(i))}},$$ where $\alpha = \frac{1}{2} \ln\left(\frac{1}{\epsilon}\right)$ and $e$ is the classification error of $f_0$. $p_{pf}(i)$ can be computed in $O(|i|)$ time-complexity. --- Thereby, AdaBoost efficiently optimizes the LPSC problem. Notably, LPSC can be reduced to different boosting algorithms under different assumptions. In Section 3.4, we evaluate the performance of AdaBoost shown in Fig. 2.1 LPSC only provides upper-bounded privacy protection without compromising utility, which is upper-bounded by $p_{fed}(i,y)$ and may not meet practical privacy requirements. Therefore, VFGboost further incorporates adversarial training to enable additional and flexible privacy enhancement by sacrificing utility in the federated training phase. It should be noted that the integration of LPSC with other perturbation methods is also a valid approach for achieving the active privacy enhancement, as evaluated in Section 3. 4.3 VFGBOOST FRAMEWORK Based on our finding that gradient boosting solves the LPSC problem, we propose a novel Vertical Federated Gradient Boosting framework, VFGBoost, to efficiently and flexibly protect label privacy in VFL, VFGBoost leverages gradient boosting to solve the LPSC problem in an offline phase. However, only provides upper-bounded privacy protection without compromising utility, which is upper-bounded by $p_{fed}(i,y)$ and may not meet practical privacy requirements. Therefore, VFGboost further incorporates adversarial training to enable additional and flexible privacy enhancement by sacrificing utility in the federated training phase. It should be noted that the integration of LPSC with other perturbation methods is also a valid approach for achieving the active privacy enhancement, as evaluated in Section 3. To achieve offline LPSC (Fig. 2, phase 1), VFGBoost leverages gradient boosting to compute the re-weighted residuals $D_{fpc}$. After LPSC, we shift the learning target from ground-truth labels to residuals with re-wheeled samples. In the federated training phase (Fig. 2, phase 2), all parties collaboratively train a federated model $h_fed$ to fit the re-weighted residuals $D_{fpc}$ as follows: $$h_fed(i) = g_1\left(\{h_v_i\}_{i=1}^k\right),$$ (5) where $g_1$ is the aggregation top model trained by the active party $P_b$, and $h_v_i$ denotes $h_v_i(x_{k,i})$ from $P_b$, for simplicity. The overall VFGboost framework $v_{FGBoost}$ can be expressed as: $$v_{FGBoost}(i) = f_0(i) + \alpha \cdot f_1(i),$$ where $\alpha > 0$ represents the weight of the aggregated residuals. 4.3.1 LEARNING OBJECTIVES The training procedure has two objectives: 1) utility objective $\mathcal{L}_{util}$ to fit the LPSC-encoded results $D_{fpc}$, and 2) privacy objective $p_{priv}$ to further enhance label privacy via adversarial training. LPSC utility objective. The utility objective训练 the federated model $h_fed$ in Eq. 5 to fit LPSC-encoded re-weighted residuals $D_{fpc} = (w,r)$ as follows: $$\min_{\lambda(w_i)} \sum_{i=1}^k w_i \cdot \mathcal{L}_{util}(r_i,h_fed(i)),$$ where $(w_i, r_i) \in D_{fpc}$ is the weight and residual of the $i$-th sample and $\mathcal{L}_{util}$ denotes utility loss. Adversarial privacy objective. Given LPSC only provides upper-bounded privacy enhancement, we employ adversarial training to enable training utility for additional privacy enhancement. Specifically, by conducting it by producing the bottom model $h_fed$ with adversarial models $a_{g_1,i}$ to attack each bottom model $h_v_i_{g_1,i}$ and in turn, trains the bottom models to defend against these attacks. Therefore, the adversarial training process can be formulated as a max-min optimization problem as follows: $$\max_{w_i} \mathbb{E}_{\text{priv}}[y_i(a_v_i \circ h_v_i(i))], \quad s.t.\forall k \in [1,\dots,K],$$ where $L_{priv}(k)$ denotes the privacy loss for passive party $P_b$. In summary, the overall objective is to solve the following max-min optimization problem: $$\min_{\lambda(w_i)} \max_{a_{g_1,i}} \mathbb{E}_{\text{priv}}[y_i(a_v_i \circ h_v_i(i))],$$ (6) where $\beta \geq 0$ is a small hyperparameter to control privacy-utility trade-off. A non-zero $\beta$ enables the trade-off of utility for additional privacy enhancement, building on the inherent, upper-bounded, privacy provided by LPSC. Algorithm 1 outlines the training process of VFGBoost. --- Algorithm 1 VFGBoost framework Require: Local data $D^{DNA} = \{i_{max}, X_{min}, y_{min}\}$, and aligned data $D = \{i, X_0, ..., K_X, y\}$. $\Rightarrow$ Phase 1: Label privacy source coding (LPSC) 1. Active party $P_0$ learns $\phi_k(i)$ by training $\rho_i$ on $D^{DNA}$ via Eq. 3. 2. Active party $P_0$ optimizes $\gamma_{php}(i, y)$ by computing weight-residual $\mathcal{D}_{php} = (\boldsymbol{w}, \boldsymbol{r})$. $\triangleright$ Phase 2: Federated training 3. $P_0$ initializes $\lambda$ and $\{\phi_k\}_k=1$. Passive parties $\{P_k\}_k=1$ initialize $\psi_k[k]=1$, respectively. 4. for each batch of samples with $D_b$ in $i$ do $\Rightarrow$ Loss Computations 5. $P_0$ computes $\tilde{p}_k(\boldsymbol{v}_k) = \phi_k(\boldsymbol{v}_k)$ and send to $P_0$ 6. $P_0$ computes $\hat{h}_{fid}(\boldsymbol{b})$ via Eq. 5, then computes $\mathcal{L}_{alf}$ and $\mathcal{L}_{pr+1}(\boldsymbol{k})$ via Eq. 6. $\triangleright$ Model Update 7. $P_0$ updates the aggregation top model $\lambda$ and adversarial models $\{\phi_k\}_k=1$ via gradients. 8. $\{P_k\}_k=1$ update bottom models $\{\psi_k\}_k=1$ via gradients. $\triangleright$ End for Ensure: Local model $\theta$, top model $\lambda$, bottom models $\{\psi_k\}_k=1$. 5 EXPERIMENTAL SETTING Datasets: We evaluate our proposed VFGBoost on four real-world datasets, including two widely used recommendation click-through rate (CTR) prediction datasets: Critica1 and Avazina2, and two healthcare datasets: MIMIC-III Johnson et al. (2016) and Cardio. Each dataset is partitioned into five (Avazina) or seven (others) parties. We defer detailed descriptions of the datasets in Appendix F.1. Implementation: Without specification, we use LogitBoost Friedman et al. (2000) for LPSC. LPSC is computed following Table 6. We adopt DeepFM Guo et al. (2017) for both local and bottom models on Crites and Avazina. We use MLP for both local and bottom models on MIMIC-III and Cardio datasets. We defer details of hyperparameter choices and platform descriptions in Appendix F.2. Comparsed Data: For fair comparisons, we select a set of label privacy protection methods and the gradient approaches not included due to their extensive communication and computational cost. 1) FE-VFL Sun et al. (2022) trains a top model to directly predict labels using forward embeddings, while simultaneously minimizing the distance correlation between the forward embeddings and the labels. 2) CoAE Zao et al. (2022) trains a deterministic mapping function that transforms original labels to surrogate labels. The bottom models are trained to predict this surrogate labels. 3) MID Zao et al. (2023) employs a VAE-based ML estimator Alem et al. (2016) to explicitly estimate and minimize the entropy of the forward embedding during training. 4) LabelDP Ghazi et al. (2021) leverages a random response mechanism to randomly flip labels to generate perturbed gradients. 5) Marvel Li et al. (2022) uses adaptable Gaussian noise to prefit the gradient as that the distribution difference of positive and negative class gradients are eliminated. 6) LPSC+LabelDP combines the gradient boosting-based offline LPSC with training-phase Marvel Li et al. (2022). Metrics: We evaluate our method against baselines regarding utility and privacy. We use the AUC (Area Under ROC curve) metric in our experiments. 1) Utility: To gauge the use of the federated model (FA), we evaluate the federated model (FA) on a larger dataset of values of FL-AUC indicate superior model utility. 2) Privacy: We evaluate the effectiveness of various defense approaches using three label privacy attacks: the Norm attack Li et al. (2022), Spectral attack Tran et al. (2018), and Passive Model Completion (PMC) attack Fu et al. (2022). For privacy evaluation, we calculate the average AUC of the label predictions made by the passive parts, which we refer to as label leakage AUC (LL-AUC). A low LL-AUC value, close to 0.5, signifies strong privacy protection. *https://labs.critico/category/dataset/ *https://www.kaggle.com/c/avazin-crt-prediction *The code is available at https://anonymous.hopen.science/vfGBoost-D620 --- ### Figure 4: Distributions of passive parties¹ output logits by fitting original labels p₀ᵣ vs. LPSC-encoded labels p₀ᶜᵣ for the first four passive parties on the Critex dataset. (Adversarial loss β = 0.) #### 5.2 LPSC PROTECTS LABEL PRIVACY WITHOUT COMPROMISING UTILITY We first evaluate the protection quality of gradient boosting-based LPSC. Specifically, in the federated training phase, we train passive parties between model to fit the LPSC-encoded labels p₀ᶜᵣ(i,y) and original ground-truth labels p₀ᶜᵣ(i,y), respectively. Table 2 presents the LL-AUC against Norm, Spectral, and PMC attacks and the FL-AUC on four datasets. The results reveal that the LL-AUC of LPSC against three attacks is significantly lower than that of the original labels, indicating that LPSC proves strong label privacy protection. Meanwhile, the FL-AUC of LPSC is comparable to that of the original labels, implying that LPSC barely sacrifices model utility. This confirms that LPSC can effectively protect label privacy without compromising its. The PMC attack outperforms Norm and Spectral attacks in LL-AUC values, indicating that transformations by this stems from PMC’s ability to improve label privacy for top model reconstruction. Thus, we employ PMC for subsequent label privacy evaluations. Fig. 4 visualizes the output logits distributions of four passive parties by training with or without LPSC. The complete figures include all six passive parties can be found in the Appendix G.1. The top-side distributions in each sub-figure show that, with LPSC, the logits distributions of the two classes almost overlap and are hard to differentiate. In contrast, without LPSC, the right-side distributions in each figure reveal significant differences in the distributions of the two classes, indicating that transformations by this stems from empirical findings are important in Theorem 1, which justifies our observation that it is more challenging to distinguish the output distributions between classes when the bottom models are trained with LPSC-encoded labels. #### 5.3 PRIVACY-UTILITY TRADE-OFF COMPARISON Fig. 5 shows the privacy-utility trade-off curves on four datasets. The X-axis indicates the label leakage AUC (LL-AUC), and the Y-axis indicates the AUC of the federated model prediction (FL-AUC). An ideal trade-off should have a large FL-AUC and a small LL-AUC, thus residing in the upper bound of the LPSC-encoded label leakage. We discuss how offline LPSC and training-phase adversarial training in VFGBoost improve the privacy-utility trade-off in the following, respectively. **Label Source Coding:** To explore the effectiveness of LPSC, we compare LPSC-enhanced baseline(s i.e., LPSC+LabelDP and LPSC+Marvel) with their counterparts without LPSC (i.e., LabelIDP and Marvel). As shown in Fig. 5, LPSC significantly improves the privacy-utility trade-off of existing perturbation baselines by pushing the top-side of each curve off on each dataset. Without any training-phase perturbation (the top-right end of each curve), LPSC leads to significant LL-AUC decline with negligible FL-AUC decline on each dataset, implying that it protects --- Figure 5: Privacy-utility trade-off of different label protection methods against the PMC attack on four datasets. All methods have the same dimension of forward embedding. Note that LPSC+LabDP and LPSC+Marv1 are our LPSC combined with LabelIDP and Marvell, respectively. label privacy without sacrificing utility. This empirical observation is also justified by the theoretical guarantee in Theorem 1. Therefore, LPSC can be easily integrated with different training-phase perturbation methods for privacy-utility trade-off improvement. Adversarial Training. To investigate the effectiveness of adversarial training, we compare VFG-Boost with two LPSC-enhanced baselines (i.e., LPSC+Marvll and LPSC+LabelIDP). As shown in Fig. 5, we can easily infer the trade-off curves of VFGBoost are closer to the upper-left corner than problem two LPSC-enhanced baselines on each dataset, indicating their performance them with big margins. This validates the effectiveness and superiority of adversarial training in VFGBoost for privacy-utility trade-off. 5.4 IMPACT OF GRADIENT BOOSTING ALGORITHMS ON LPSC We compare the impact of different gradient boosting algorithms on LPSC, including AdaBoost, LogitBoost, and $L_{2}$-Boost. For each boosting algorithm, $p_{prog}(i,y)$ is computed following Table 6 in Appendix C.1. AdaBoost updates the sample-weight $w_i = p_{prog}(i)$ based on the classification error of the local model $f_i$. While, LogitBoost and $L_{2}$-Boost assign residuals $r_i = p_{prog}(i)(y_i)$ based on the negative gradient of the log-likelihood loss and the mean-square error loss, respectively. Table 3 shows the privacy-utility trade-off of different gradient boosting algorithms on Criteo and Avazu datasets. We find that LogitBoost is more effective for LPCS than the others in terms of the privacy-utility trade-off. We defer additional experiments on label privacy protection during training, feature privacy protection, and model-agnosticism in Appendix G. 6 CONCLUSION We focus on protecting label privacy in VFL without sacrificing utility and formulate the LPSC problem for offline-phase cleaning. Our analysis confirms that gradient boosting effectively tackles the LPSC problem with the proposed VFGBoost framework. VFGBoost further enables a nuanced privacy-utility trade-off. Experimental results on four datasets demonstrate the efficacy of LPSC and the superiority of our VFGBoost framework.
2O2FOO8pl4
XNAD64x9rP
The authors give a comprehensive analysis of newly defined privacy and its leakage. The idea is interesting since it introduces a new perspective on privacy. Actually, I feel a little confused about why Definition 1 and Definition 2 are required. What is the insight/intuition of privacy guarantee? What does the newly defined privacy essentially protect? How do you measure the privacy loss in practice/experiments? Why privacy leakage is defined as mutual information?
true
LABEL PRIVACY SOURCE CODING IN VERTICAL FEDERATED LEARNING Anonymous authors Paper under double-blind review ABSTRACT We study label privacy protection in vertical federated learning (VFL). VFL enables an active party who possesses labeled data to improve model performance (utility) by collaborating with passive parties who have auxiliary features. Recently, there has been a growing concern for protecting label privacy against honest passive parties who may surreptitiously deduce private labels from the output of their bottom models. However, existing studies do not remove the prior label information in the active party’s features from labels in an offline phase, thus lead label privacy to passive parties. In contrast, methods that focus on training-phase perturbation, we propose a novel offline-phase data cleansing approach to protect label privacy without compromising utility. Specifically, we first formulate a Label Privacy Source Coding (LPSC) problem to remove the redundant label information in the active party’s features from labels in multiple samples a new weight and label i.e., high federated training. We give a privacy guarantee and theoretically prove that gradient boosting efficiently optimizes the LPSC problem. Therefore, we propose the Vertical Federated Gradient Boosting (VFGBoost) framework to address the LPSC problem. Moreover, given that LPSC only provides upper-bounded privacy enhancement, VFGBoost further enables a flexible privacy-utility trade-off by incorporating adversarial training during federated training. Experimental results on four real-world datasets substantiate the efficacy of LPSC and the superiority of our VFGBoost framework. 1 INTRODUCTION Vertical federated learning (VFL) Yang et al. (2019) enable global model construction among organizations with datasets sharing overlapping sample spaces but feature features. Fig. 1(a) presents an overview of the multi-party VFL problem, where an active party possesses labeled data and has aligned samples with passive parties that own auxiliary features. The primary goal of VFL is to build a well-performed federated model in a privacy-preserving and efficient manner. Recently, label privacy protection has attracted increasing attention in VFL studies. Existing studies in VFL label privacy protection Li et al. (2022), Fu et al. (2022), Sun et al. (2022) rely on a model-simulating policy to protect label privacy and feature privacy, respectively. They protect label privacy by training a complex yet deterministic top model with various perturbation techniques. However, when a passive party steals the deterministic top model (e.g., via model completion attack Fu et al. (2022)), the worst-case label privacy leakage occurs and approximates the federated model performances to protect label privacy and feature privacy, respectively. More concrete approaches are needed to embeddings adopted for label protection exacerbate the feature privacy leakage to the active party Jin et al. (2021); Ye et al. (2022). The fundamental cause of this dilemma is that existing studies Li et al. (2022), Fu et al. (2022), Sun et al. (2022) directly optimize the forward embeddings for label prediction, making forward embedding highly correlated with and informative about private labels. Our key insight is that label privacy protection in VFL should be decoupled into two independent tasks: 1) offline-phase cleaning, which enhances privacy without compromise by removing the redundant label information from labels, and 2) training-phase perturbation, which further balances privacy-utility trade-off via inadequately learning from perturbed labels or gradients. --- Figure 1: (a) The multi-party VFL problem setting. (b) Vanilla VFL and label privacy threat As a remedy to the aforementioned loophole, we formulate a Label Privacy Source Coding (LPSC) problem to encode minimum-sufficient label privacy in an offline phase. The idea is to remove the label information present in the active party’s local features, which is redundant for VFL from the ground truth label. By doing so, the risk of label leakage from the network is eliminated, without sacrificing utility. We theoretically analyze the privacy guarantee of LPSC. LPSC is a constrained optimization problem of two mutual information. We prove that gradient boosting Freund & Schapre (1997) is a simple and efficient approach to optimize the LPSC problem. Specifically, gradient boosting converts the uniformly-weighted original labels to re-weighted residuals of the active party’s local predictions, thus eliminating the redundant label privacy. Therefore, we propose Vertical Federated Gradient Boosting (VFGBoost) to shift the federated learning target from the uniformly-weighted original labels $p_{ij}(i,j)$ to the LPSC-encoded re-weighted residuals $(p_{ij}(i,j))$ by encoding minimum-offspring training. Our proposed VFGBoost follows the aforementioned two-phased paradigm. In the offline LPSC phase (Fig. 2, phase 1), the active party trains a local model on its local data and computes the LPSC-encoded re-weighted residuals $p_{ij}(i,j)$ via gradient boosting as the learning target for VFL. Subsequently, in the federated training phase (Fig. 2, phase 2), the active parties train a federated model in the re-weighted residuals. Hence, the federated prediction is the weighted sum of the active party’s local prediction and the federated predicted residual. Crucially, the inherent label privacy enhancement of LPSC is upper-bounded by the label information present in the active party’s local features, potentially falling short in practical scenarios. To circumvent this, perturbation methods can be subsequently employed to enhance label privacy with a consequent reduction in utility. Specifically, VFGBoost utilizes adversarial training through a mass-inference technique to fine-tune trained classes (Fig. 2, phase 3), while also updating the passive parties’ bottom models to thwart the attack. Consequently, VFGBoost consists of a utility objective that learns to in the LPSC-encoded label privacy (re-weighted residuals) $p_{ij}(i,j)$, as well as an优化目标 that ensures protection against training labels $p_{ij}(i,j)$. We jointly optimize both objectives, utilizing a hyperparameter to enable flexible balancing of the privacy-utility trade-off. Moreover, VFGBoost is model-agnostic and allows any gradient-based model. Our comprehensive experiments conducted on four real-world datasets in the realm of recommendation and healthcare demonstrate that the LPSC can enhance label privacy without compromising utility, and the proposed VFGBoost framework achieves a superior privacy-utility trade-off compared to seven baseline methods. In summary, our contributions are as follows: - We decouple label privacy protection in VFL into two independent tasks: offline-phase cleaning to inherently enhance privacy without compromising utility, and training-phase perturbation for nuanced privacy-utility trade-offs. - We formulate a Label Privacy Source Coding (LPSC) problem with privacy guarantee to one-side optimization by the refined label information for offline-phase cleansing. - We further propose VFGBoost that utilizes gradient boosting to optimize LPSC and incorporates adversarial training to enable additional privacy enhancement. - We perform extensive experiments on four real-world datasets to demonstrate the efficacy of LPSC and the superiority of our proposed VFGBoost framework. --- 2 RELATED WORK **Label Protection in VFL.** Existing label privacy protection techniques in VFL mainly include cryptographic methods and perturbation methods. **Cryptographic methods** Fu et al. (2021); Cheng et al. (2021); Ren et al. (2022) incur significant overheads in computation and communication, which is typically unfeasible in practice. Therefore, they are not investigated and compared in this work. **Perturbation methods** introduce noise to labels or gradients to update the passive parties models. For instance, Li et al. Li et al. Li et al. (2022) employ adapted Gaussian noise to perturb the gradients to defend against label attacks. Sun et al. Sun et al. (2022) minimize the distance correlation between the forward embedding and the label to defend against the spectral Traan et al. (2018). Giang-based MI estimator Alemi et al. (2016) to gauge MI between the embedding and label. Social computing gradients. Yang et al. Yang et al. (2022) apply differential privacy Dwork et al. (2006) to a gradient perturbation-based split learning framework. Overall, due to the forward embeddings in existing works being optimized for label prediction Li et al. (2022); Zou et al. (2022); Sun et al. (2022), the worst-case label privacy leakage approximates the VFL utility, which is unacceptable. **Mutual Information for Privacy Protection.** MID Zuo et al. (2023) uses mutual information (MI) regularization to minimize the entropy of forward embedding during federated training. It adapts a VAE based MI estimator Alemi et al. (2016) to gauge MI between the embedding and label. Social explicit MI estimation Alemi et al. (2016); Belghazi et al. (2018), however, is resource-intensive and needs Gaussian noise, reducing utility. Conversely, our LPSC employs gradient boosting to enhance privacy efficiently without added noise. **Privacy Protection via Offline Pre-processing.** Recently, InstaHide Huang et al. (2020) and Fed-Pass Gu et al. (2023) are proposed to pre-process features to safeguard feature privacy by merging training tasks with noise. Nevertheless, to our best knowledge, there are no existing pre-processing approaches designed for label privacy protection. 3 PROBLEM APPROXIMATION **Vertical Federated Learning Setting.** In a typical VFL setting, as shown in Fig. 1(a), the aligned training data $D$ has sample identifiers (IDs) $i$ and labels $y$. The feature matrix $X = [X_0, X_1, \dots, X_k]$ is vertically partitioned among $K + 1$ parties by feature. An active party $P_0$ has labeled local samples $i \in X_{k+1}$. Moreover, the active party has local data $\mathcal{D}_{loc} = \begin{bmatrix} v^*_i & x^*_i & y^*_i \end{bmatrix}$, which additionally includes unaligned labeled samples. In VFL the active party leverages the auxiliary features from passive parties to train a federated model while protecting privacy. For simplicity, we use sample ID $i$ to represent $P_i$'s features $x_{ik}$ in functions (e.g., $h_{ij}(x_i)$) denotes $h_{ij}(x_{ik})$. A summary of notations and their corresponding descriptions is provided in the Appendix A. **Table 1: Threat Model.** (See details in Appendis B.) | Threat model | Adversary | Attack objective | Attack method | Adversary's capability | | :--- | :--- | :--- | :--- | :--- | | Semi-honest | Passive parties | min $R_{pj}(i,j)$ | Norm, Spectral, PMC | A few samples with features | **Threat Model.** We focus on privacy leakage stemming from the forward embedding of passive parties' bottom models. We assume that both active and passive parties are *semi-honest* and non-colluding, meaning that they follow the training protocol but attempt to extract private information. To attack label privacy in passive parties $P_i$, we mitigate error $h_{ip}(i,j)$ against the ground truth ID-label joint distribution $P_{ip}(i,j)$ given $D_{ip}(i,j)$ as SSL-divergence: $$\min_{A \in A} R_{pj}(i,j) = \min_{A \in A} R_{pj}(i,j)\left|A(h_{ip}(i,j))\right|$$ where $A \in H$ is any attack function that infers the raw label from $P_i$'s forward embedding $h_{ip}(i,j)$. Every attack method corresponds to a different attack function $A(i)$. We assume the adversary $P_k$ makes use of $P_{ip}(i,j)$, which is different from different attack functions $A(i)$. The feature attack object formulated similarly. As shown in Table 1, one threat model, a passive party use model norm attack Li et al. (2022), spectral traun et al. (2018), or an active model completion (PMC) attack Fu et al. (2022) to build the attack function $A(i)$. Similarly, an active adversary use model inversion attack He et al. (2019) to attack features. --- Figure 2: The VFGBoost framework. Left: the offline Label Privacy Source Coding (LPSC) phase (Section 4.2). Right: the federated training phase (Section 4.3). **Privacy Definition.** According to Equation 1, the adversary’s objective is to minimize the expected estimation error of the ground-truth conditional distribution $p_{pg}(y|i)$. Therefore, the private label information is and only is the ID-label joint distribution $p_{pg}(y,i)$, see details in Appendix B.2. Our goal is to design an offline-phase privacy mechanism $\mathcal{M}$ for the active model $P_0$ that outputs a new joint distribution $p_{jx}(i,y)$ $M(p_{jx}(i,y), X_O)$. Subsequently, the bottom model $h_O$ is trained to fit $p_{jx}(i,y)$ during federated training. Therefore, the optimal attack error given $p_{jx}(i,y)$ is: $$\mathbb{R}_{p_{jx}(i,y)}(p_{jx}(i,y)) = \min_{\psi_i^*} \mathbb{E}_{p_{jx}(i,y)}[D_{X_C}(p_{jx}(i,y))|A_{h_O}(i,y)]$$ where $\psi_i^*$ is $P_0$’s top model trained to map $h_O$’s to the new label $p_{jx}(i,y)$. We notice that, maximum $p_{jx}(i,y)$ $\mathbb{R}_{p_{jx}(i,y)}(p_{jx}(i,y)) \iff \min_{p_{jx}(i,y)} I(p_{jx}(i,y)|p_{jx}(i,y))$. Thereby, our threat model coincides with $e$-mutual information privacy ($e-MIP$), see more details in Appendix B. ### 4 PROPOSED APPROACH In this section, we introduce our two-phase VFGBoost framework that consists of an offline LPSC phase and a federated training phase, as depicted in Fig. 2. Specifically, we first formulate the LPSC problem (Section 4.1) that encodes minimum-sufficient label privacy by removing redundant label information from the active party’s features. Then, we prove that gradient boosting can efficiently optimize the LPSC problem (Section 4.2). thereby, we proposed our VFGBoost framework that leverages gradient boosting to tackle LPSC. Moreover, to flexibly balance the privacy-utility trade-off, VFGBoost incorporates adversarial training in the federated training phase (Section 4.3). The active party mimics adversarial passive parties to attack labels and, in turn, solves a max-min optimization problem to protect labels. ### 4.1 LABEL PRIVACY SOURCE CODING PROBLEM In the offline phase (Fig. 2, phase 1), we aim to encode minimum-sufficient label privacy from the ground-truth label privacy $p_{jx}(i,y)$ by removing the redundant label information $p_{rest}(i,y)$ in the active party’s local features $X_O$, as demonstrated in Fig. 3. To do so, we formally define a label privacy source coding problem as follows: **Problem 1** (Label Privacy Source Coding). Given ground-truth label privacy $p_{jx}(i,y)$ and the active party $P_0$’s learned label privacy $p_{jx}(i,y)$ from its feature $X_O$, the label privacy source coding problem is to optimize a new ID-label joint distribution $p_{jx}(i,y)$ as follows: $$\max_{p_{jx}(i,y)} I(p_{jx}(i,y); p_{jx}(i,y))$$ (Sufficient) $$s.t. I(p_{jx}(i,y); p_{jx}(i,y)) = 0$$ (Minimum), --- where $I(\cdot; \cdot)$ denotes mutual information. The optimized ID-label joint distribution $p_{psec}(i, y)$ assigns each sample a new weight through the marginal $p_{psec}(i)$ and/or label through the conditional $p_{psec}(y|i)$. We will show that gradient boosting, which is detailed in the Appendix C.1, efficiently solves the LPSC problem. The privacy leakage inherent in LPSC is rigorously bounded as described in the following theorem: **Theorem 1** (Privacy Guarantee) A LPSC algorithm is optimal information privacy ($\epsilon$ MIP). The privacy leakage is bounded by $\epsilon H(p_{psec}(i), p_{psec}(y|i))$, the conditional entropy of the ground-truth label distribution $p_{psec}(i, y)$ given the active party’s label distribution $p_{psec}(i, y)$. Formally, $$I(p_{pf}(i), y); p_{pf}(i|y_i) \leq \epsilon \text{bits},$$ where $p_{pf}^{*}_{psec}(i)$ represents the optimal solution of Equation 2 in the LPSC problem. **Proof.** The proof of Theorem 1 is provided in the Appendix B.3. The intuition behind Theorem 1 is that privacy leakage in LPSC is inversely related to the amount of label information the active party can infer from its local features. ### 4.2 GRADIENT BOOSTING SOLVES LPSC PROBLEM A recent insight of mutual information (MI) regularization for privacy protection Zou et al. (2023) relies on a notion of MI neural estimation Alemi et al. (2016); Belghazi et al. (2018), which explicitly estimates MI via Gaussian noise. However, explicit MI estimation is inefficient, and the introduced noise under model fitting Belgaz et al. (2018). In contrast, we prove that gradient boosting is a simple and efficient approach to solve the LPSC problem. To solve problem 1, 1) the active party $P_0$ first learns the label privacy $p_{psec}(i, y)$ present in its features $X_{i}^{\infty}$; 2) Then, active party $P_0$ optimizes the joint distribution $p_{psec}(i, y)$ by solving Eq. 2. We elaborate on each step as follows: **(1) Learning $p_{psec}(i, y)$.** To learn $p_{psec}(i, y)$, which is the label privacy present in local features $X_{0}$, the active party $P_0$ only needs to learn the conditional $p_{psec}(y|i)$ as the marginal $p_{psec}(i) = p_{pf}(i) \sim U$ is uniform. To do so, $P_0$ train model $f_0$ on its local data $D^{\infty\text{seq}}$ indexed by $x^{\infty}$ as follows: $$\theta^* = \arg \min_{\sum_{i=1}^{n} |f_{pc}|^2} \frac{1}{|f_{pc}|} \mathcal{L}_{\text{emp}}(y_i, f_0(i)),$$ where $\mathcal{L}_{\text{emp}}$ denotes the empirical loss $f_0(i)$ denotes $f_0(x_{pc})$ for simplicity and models the conditional label distribution $p_{psec}(i|y_i)$. Consequently, the active party learns $p_{pf}(i)$ as $p_{pf}(i|y_i)$. **(2) Optimizing $p_{psec}(i, y)$.** We point out that the gradient boosting algorithm optimizes the LPSC problem by taking AdaBoost Freund & Schapire (1997) as an example. As shown in Theorem 2 and Theorem 3, this algorithm optimizes the KL-divergence between $p_{psec}(i)$ and the uniform distribution $U$ (Eq. 4), while fixing the conditional distribution $p_{psec}(i|y_i)$ as the ground truth $p_{pf}(i|y_i)$. **Theorem 2.** Assuming fixed conditional distribution $p_{psec}(i|y_i) = p_{pf}(i|y_i)$ and let $U$ denote uniform distribution, the LPSC problem I can be reduced to: $$\min_{p_{pf}(i)} D_{pf}(p_{pf}(i)|U) \quad s.t. \sum_{i=1}^{n} p_{pf}(i) y_i f_0(i) = 0,$$ where $i \in i$ is the sample index of aligned training data with IDs $i$. **Proof.** The proof of Theorem 2 is provided in the Appendix D.1. The theorem 2 reduces the LPSC problem to a convex optimization problem Eq. 4, which can be solved via Lagrangeian. It projects the ground-truth label privacy $p_{pf}(i|y_i)$ to an information plane that is orthogonal to $p_{pf}(i|y_i)$, thus estimating the redundant label information in active party’s feature $X_0$. **Theorem 3.** Schapire & Freund (2013) The solution of the convex optimization problem Eq. 4 is equivalent to AdaBoost Freund & Schapire (1997): $$p_{pf}(i) = \frac{e^{-\alpha p_f(f_0(i))}}{\sum_{i=1}^{n} e^{-\alpha p_f(f_0(i))}},$$ where $\alpha = \frac{1}{2} \ln\left(\frac{1}{\epsilon}\right)$ and $e$ is the classification error of $f_0$. $p_{pf}(i)$ can be computed in $O(|i|)$ time-complexity. --- Thereby, AdaBoost efficiently optimizes the LPSC problem. Notably, LPSC can be reduced to different boosting algorithms under different assumptions. In Section 3.4, we evaluate the performance of AdaBoost shown in Fig. 2.1 LPSC only provides upper-bounded privacy protection without compromising utility, which is upper-bounded by $p_{fed}(i,y)$ and may not meet practical privacy requirements. Therefore, VFGboost further incorporates adversarial training to enable additional and flexible privacy enhancement by sacrificing utility in the federated training phase. It should be noted that the integration of LPSC with other perturbation methods is also a valid approach for achieving the active privacy enhancement, as evaluated in Section 3. 4.3 VFGBOOST FRAMEWORK Based on our finding that gradient boosting solves the LPSC problem, we propose a novel Vertical Federated Gradient Boosting framework, VFGBoost, to efficiently and flexibly protect label privacy in VFL, VFGBoost leverages gradient boosting to solve the LPSC problem in an offline phase. However, only provides upper-bounded privacy protection without compromising utility, which is upper-bounded by $p_{fed}(i,y)$ and may not meet practical privacy requirements. Therefore, VFGboost further incorporates adversarial training to enable additional and flexible privacy enhancement by sacrificing utility in the federated training phase. It should be noted that the integration of LPSC with other perturbation methods is also a valid approach for achieving the active privacy enhancement, as evaluated in Section 3. To achieve offline LPSC (Fig. 2, phase 1), VFGBoost leverages gradient boosting to compute the re-weighted residuals $D_{fpc}$. After LPSC, we shift the learning target from ground-truth labels to residuals with re-wheeled samples. In the federated training phase (Fig. 2, phase 2), all parties collaboratively train a federated model $h_fed$ to fit the re-weighted residuals $D_{fpc}$ as follows: $$h_fed(i) = g_1\left(\{h_v_i\}_{i=1}^k\right),$$ (5) where $g_1$ is the aggregation top model trained by the active party $P_b$, and $h_v_i$ denotes $h_v_i(x_{k,i})$ from $P_b$, for simplicity. The overall VFGboost framework $v_{FGBoost}$ can be expressed as: $$v_{FGBoost}(i) = f_0(i) + \alpha \cdot f_1(i),$$ where $\alpha > 0$ represents the weight of the aggregated residuals. 4.3.1 LEARNING OBJECTIVES The training procedure has two objectives: 1) utility objective $\mathcal{L}_{util}$ to fit the LPSC-encoded results $D_{fpc}$, and 2) privacy objective $p_{priv}$ to further enhance label privacy via adversarial training. LPSC utility objective. The utility objective训练 the federated model $h_fed$ in Eq. 5 to fit LPSC-encoded re-weighted residuals $D_{fpc} = (w,r)$ as follows: $$\min_{\lambda(w_i)} \sum_{i=1}^k w_i \cdot \mathcal{L}_{util}(r_i,h_fed(i)),$$ where $(w_i, r_i) \in D_{fpc}$ is the weight and residual of the $i$-th sample and $\mathcal{L}_{util}$ denotes utility loss. Adversarial privacy objective. Given LPSC only provides upper-bounded privacy enhancement, we employ adversarial training to enable training utility for additional privacy enhancement. Specifically, by conducting it by producing the bottom model $h_fed$ with adversarial models $a_{g_1,i}$ to attack each bottom model $h_v_i_{g_1,i}$ and in turn, trains the bottom models to defend against these attacks. Therefore, the adversarial training process can be formulated as a max-min optimization problem as follows: $$\max_{w_i} \mathbb{E}_{\text{priv}}[y_i(a_v_i \circ h_v_i(i))], \quad s.t.\forall k \in [1,\dots,K],$$ where $L_{priv}(k)$ denotes the privacy loss for passive party $P_b$. In summary, the overall objective is to solve the following max-min optimization problem: $$\min_{\lambda(w_i)} \max_{a_{g_1,i}} \mathbb{E}_{\text{priv}}[y_i(a_v_i \circ h_v_i(i))],$$ (6) where $\beta \geq 0$ is a small hyperparameter to control privacy-utility trade-off. A non-zero $\beta$ enables the trade-off of utility for additional privacy enhancement, building on the inherent, upper-bounded, privacy provided by LPSC. Algorithm 1 outlines the training process of VFGBoost. --- Algorithm 1 VFGBoost framework Require: Local data $D^{DNA} = \{i_{max}, X_{min}, y_{min}\}$, and aligned data $D = \{i, X_0, ..., K_X, y\}$. $\Rightarrow$ Phase 1: Label privacy source coding (LPSC) 1. Active party $P_0$ learns $\phi_k(i)$ by training $\rho_i$ on $D^{DNA}$ via Eq. 3. 2. Active party $P_0$ optimizes $\gamma_{php}(i, y)$ by computing weight-residual $\mathcal{D}_{php} = (\boldsymbol{w}, \boldsymbol{r})$. $\triangleright$ Phase 2: Federated training 3. $P_0$ initializes $\lambda$ and $\{\phi_k\}_k=1$. Passive parties $\{P_k\}_k=1$ initialize $\psi_k[k]=1$, respectively. 4. for each batch of samples with $D_b$ in $i$ do $\Rightarrow$ Loss Computations 5. $P_0$ computes $\tilde{p}_k(\boldsymbol{v}_k) = \phi_k(\boldsymbol{v}_k)$ and send to $P_0$ 6. $P_0$ computes $\hat{h}_{fid}(\boldsymbol{b})$ via Eq. 5, then computes $\mathcal{L}_{alf}$ and $\mathcal{L}_{pr+1}(\boldsymbol{k})$ via Eq. 6. $\triangleright$ Model Update 7. $P_0$ updates the aggregation top model $\lambda$ and adversarial models $\{\phi_k\}_k=1$ via gradients. 8. $\{P_k\}_k=1$ update bottom models $\{\psi_k\}_k=1$ via gradients. $\triangleright$ End for Ensure: Local model $\theta$, top model $\lambda$, bottom models $\{\psi_k\}_k=1$. 5 EXPERIMENTAL SETTING Datasets: We evaluate our proposed VFGBoost on four real-world datasets, including two widely used recommendation click-through rate (CTR) prediction datasets: Critica1 and Avazina2, and two healthcare datasets: MIMIC-III Johnson et al. (2016) and Cardio. Each dataset is partitioned into five (Avazina) or seven (others) parties. We defer detailed descriptions of the datasets in Appendix F.1. Implementation: Without specification, we use LogitBoost Friedman et al. (2000) for LPSC. LPSC is computed following Table 6. We adopt DeepFM Guo et al. (2017) for both local and bottom models on Crites and Avazina. We use MLP for both local and bottom models on MIMIC-III and Cardio datasets. We defer details of hyperparameter choices and platform descriptions in Appendix F.2. Comparsed Data: For fair comparisons, we select a set of label privacy protection methods and the gradient approaches not included due to their extensive communication and computational cost. 1) FE-VFL Sun et al. (2022) trains a top model to directly predict labels using forward embeddings, while simultaneously minimizing the distance correlation between the forward embeddings and the labels. 2) CoAE Zao et al. (2022) trains a deterministic mapping function that transforms original labels to surrogate labels. The bottom models are trained to predict this surrogate labels. 3) MID Zao et al. (2023) employs a VAE-based ML estimator Alem et al. (2016) to explicitly estimate and minimize the entropy of the forward embedding during training. 4) LabelDP Ghazi et al. (2021) leverages a random response mechanism to randomly flip labels to generate perturbed gradients. 5) Marvel Li et al. (2022) uses adaptable Gaussian noise to prefit the gradient as that the distribution difference of positive and negative class gradients are eliminated. 6) LPSC+LabelDP combines the gradient boosting-based offline LPSC with training-phase Marvel Li et al. (2022). Metrics: We evaluate our method against baselines regarding utility and privacy. We use the AUC (Area Under ROC curve) metric in our experiments. 1) Utility: To gauge the use of the federated model (FA), we evaluate the federated model (FA) on a larger dataset of values of FL-AUC indicate superior model utility. 2) Privacy: We evaluate the effectiveness of various defense approaches using three label privacy attacks: the Norm attack Li et al. (2022), Spectral attack Tran et al. (2018), and Passive Model Completion (PMC) attack Fu et al. (2022). For privacy evaluation, we calculate the average AUC of the label predictions made by the passive parts, which we refer to as label leakage AUC (LL-AUC). A low LL-AUC value, close to 0.5, signifies strong privacy protection. *https://labs.critico/category/dataset/ *https://www.kaggle.com/c/avazin-crt-prediction *The code is available at https://anonymous.hopen.science/vfGBoost-D620 --- ### Figure 4: Distributions of passive parties¹ output logits by fitting original labels p₀ᵣ vs. LPSC-encoded labels p₀ᶜᵣ for the first four passive parties on the Critex dataset. (Adversarial loss β = 0.) #### 5.2 LPSC PROTECTS LABEL PRIVACY WITHOUT COMPROMISING UTILITY We first evaluate the protection quality of gradient boosting-based LPSC. Specifically, in the federated training phase, we train passive parties between model to fit the LPSC-encoded labels p₀ᶜᵣ(i,y) and original ground-truth labels p₀ᶜᵣ(i,y), respectively. Table 2 presents the LL-AUC against Norm, Spectral, and PMC attacks and the FL-AUC on four datasets. The results reveal that the LL-AUC of LPSC against three attacks is significantly lower than that of the original labels, indicating that LPSC proves strong label privacy protection. Meanwhile, the FL-AUC of LPSC is comparable to that of the original labels, implying that LPSC barely sacrifices model utility. This confirms that LPSC can effectively protect label privacy without compromising its. The PMC attack outperforms Norm and Spectral attacks in LL-AUC values, indicating that transformations by this stems from PMC’s ability to improve label privacy for top model reconstruction. Thus, we employ PMC for subsequent label privacy evaluations. Fig. 4 visualizes the output logits distributions of four passive parties by training with or without LPSC. The complete figures include all six passive parties can be found in the Appendix G.1. The top-side distributions in each sub-figure show that, with LPSC, the logits distributions of the two classes almost overlap and are hard to differentiate. In contrast, without LPSC, the right-side distributions in each figure reveal significant differences in the distributions of the two classes, indicating that transformations by this stems from empirical findings are important in Theorem 1, which justifies our observation that it is more challenging to distinguish the output distributions between classes when the bottom models are trained with LPSC-encoded labels. #### 5.3 PRIVACY-UTILITY TRADE-OFF COMPARISON Fig. 5 shows the privacy-utility trade-off curves on four datasets. The X-axis indicates the label leakage AUC (LL-AUC), and the Y-axis indicates the AUC of the federated model prediction (FL-AUC). An ideal trade-off should have a large FL-AUC and a small LL-AUC, thus residing in the upper bound of the LPSC-encoded label leakage. We discuss how offline LPSC and training-phase adversarial training in VFGBoost improve the privacy-utility trade-off in the following, respectively. **Label Source Coding:** To explore the effectiveness of LPSC, we compare LPSC-enhanced baseline(s i.e., LPSC+LabelDP and LPSC+Marvel) with their counterparts without LPSC (i.e., LabelIDP and Marvel). As shown in Fig. 5, LPSC significantly improves the privacy-utility trade-off of existing perturbation baselines by pushing the top-side of each curve off on each dataset. Without any training-phase perturbation (the top-right end of each curve), LPSC leads to significant LL-AUC decline with negligible FL-AUC decline on each dataset, implying that it protects --- Figure 5: Privacy-utility trade-off of different label protection methods against the PMC attack on four datasets. All methods have the same dimension of forward embedding. Note that LPSC+LabDP and LPSC+Marv1 are our LPSC combined with LabelIDP and Marvell, respectively. label privacy without sacrificing utility. This empirical observation is also justified by the theoretical guarantee in Theorem 1. Therefore, LPSC can be easily integrated with different training-phase perturbation methods for privacy-utility trade-off improvement. Adversarial Training. To investigate the effectiveness of adversarial training, we compare VFG-Boost with two LPSC-enhanced baselines (i.e., LPSC+Marvll and LPSC+LabelIDP). As shown in Fig. 5, we can easily infer the trade-off curves of VFGBoost are closer to the upper-left corner than problem two LPSC-enhanced baselines on each dataset, indicating their performance them with big margins. This validates the effectiveness and superiority of adversarial training in VFGBoost for privacy-utility trade-off. 5.4 IMPACT OF GRADIENT BOOSTING ALGORITHMS ON LPSC We compare the impact of different gradient boosting algorithms on LPSC, including AdaBoost, LogitBoost, and $L_{2}$-Boost. For each boosting algorithm, $p_{prog}(i,y)$ is computed following Table 6 in Appendix C.1. AdaBoost updates the sample-weight $w_i = p_{prog}(i)$ based on the classification error of the local model $f_i$. While, LogitBoost and $L_{2}$-Boost assign residuals $r_i = p_{prog}(i)(y_i)$ based on the negative gradient of the log-likelihood loss and the mean-square error loss, respectively. Table 3 shows the privacy-utility trade-off of different gradient boosting algorithms on Criteo and Avazu datasets. We find that LogitBoost is more effective for LPCS than the others in terms of the privacy-utility trade-off. We defer additional experiments on label privacy protection during training, feature privacy protection, and model-agnosticism in Appendix G. 6 CONCLUSION We focus on protecting label privacy in VFL without sacrificing utility and formulate the LPSC problem for offline-phase cleaning. Our analysis confirms that gradient boosting effectively tackles the LPSC problem with the proposed VFGBoost framework. VFGBoost further enables a nuanced privacy-utility trade-off. Experimental results on four datasets demonstrate the efficacy of LPSC and the superiority of our VFGBoost framework.
cG8Q4FE0Hi
EyMZdNlpiZ
The experiments are only conducted on randomly sampled sub-sets of the test sets, which may raise concerns about the convincingness of the results.
true
RCOT: DETECTING AND RECTIFYING FACTUAL INCONSISTENCY IN REASONING BY REVERSING CHAIN-OF-THOUGHT Anonymous authors Paper under double-blind review ABSTRACT Large language Models (LLMs) have achieved promising performance on arithmetic reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting. However, LLMs face challenges in maintaining factual consistency during reasoning, exhibiting tendencies to condition overlooking, question misinterpretation, and condition hallucination over given problems. Existing methods use course-grained feedback (e.g., whether the answer is correct) to improve factual consistency. In this work, we propose RCot (Reversing Chain-of-Thought), a novel method to improve LLMs' reasoning abilities by automatically detecting and recifying factual inconsistency in LLMs' generated feedback to guide LLMs in revising solutions. Experimental results demonstrate improvements of RCot over standard CoT, Self-Consistency and Self-Refine across seven arithmetic datasets. Moreover, we find that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities (e.g., ChatGPT teaches 94.6% accuracy on GSM884), encouraging the community to further explore the fine-grained feedback generation methods. 1 INTRODUCTION Large language models (LLMs) (Brown et al., 2020b; Zhang et al., 2022; Chowdhery et al., 2022; Touron et al.) have showcased strong reasoning capabilities using chain-of-thought (CoT) (Wei et al., 2022; Chowdhery et al., 2022; Fung et al., 2022), where LLMs are prompted to generate intermediate steps before the final answer. Despite the impressive performance of CoT prompting across various research environments, there remains a need for fine-grained feedback on factual inconsistency in reasoning. Specifically, each reasoning problem usually consists of several conditions and a question, and LLMs exhibit tendencies to hallucinate, overlook conditions and misinterpret questions (Golovneva et al., 2022). While previous research has proposed various methods to enhance Chain-of-thought performance (Shum et al., 2023; Li et al., 2022; Diao et al., 2023; Shum et al., 2023; Zhou et al., 2022; Wang et al., 2022; Gao et al., 2023; Chen et al., 2022; Weng et al., 2022; Paul et al., 2023; Shim et al., 2023), there remains a noticeable absence of explicit studies addressing the issue of factual inconsistency. The most relevant work is probably Self-Verification (Weng et al., 2022), which verifies answers by swapping conditions and answers. However, it can only tell whether answers are correct and fail given fine-grained feedback on factual inconsistency to guide LLMs in revising solutions. Figure 1 shows an instance of factual inconsistent solution generated by ChatGPT, where the critical condition of "tomorrow, 10/16/1924" is mistakenly overlooked. Despite the incorrect solution, ChatGPT could be rectified if we explicitly point out that it overlooked the "tomorrow, 10/16/1924", condition. Motivated by this observation, we introduce Reverse Chain-of-thought (RCOT), a novel method for detecting and rectifying condition hallucination, overlooking and question misinterpretation (Figure 1) in CoT. More examples could be found in Appendix B.1. --- of our proposed method are illustrated in Figure 2. To detect factual inconsistency, RCoT begins with reconstructing new problems from the original solutions. Next, RCoT detects inconsistencies by constructing inconsistencies between the reconstructed problems and the original problems. To rectify inconsistencies, the detected inconsistencies will be formulated as fine-grained feedback to guide the LLMs in revising their solutions. Fine-grained feedback can not only be used to revise solutions but can also offer more interpretability of reasoning errors, allowing humans to inspect solutions easier. Overlooking Problem: Today’s meeting is rescheduled to 11 am tomorrow, 10/16/1924. What is the date one year ago from today? ChatGPT Chain-of-Thought: The current date is 10/16/1924. To find the date one year ago from today, you would subtract one year from the current year, which would be 1923. The correct answer is 10/16/1923. ✅ Hallucination Problem: A wooden bridge can carry no more than 5000 pounds. A delivery truck filled with identical boxes, each weighing 15 pounds, will pass over the bridge. The combined weight of the driver and the courier truck is 15 pounds. What is the maximum number of boxes which can be loaded onto the truck while not exceeding the bridge’s weight limit? ChatGPT Chain-of-Thought: First, we need to submit the weight of the driver and the empty truck from the maximum weight the bridge can carry. Let's assume each box weighs 16 pounds (15 pounds for the contents and 1 pound for the box). This means we can load a maximum of 77 boxes onto the truck without exceeding the bridge’s weight limit. ✅ Misinterpretation Problem: The cave is 1218 feet deep and they are already at 849 feet. If they are traveling at speed of 11 km/h, how much time will it take them to descend that distance at a speed of 17 feet per unit of time? Therefore, it will take them 22 units of time to reach the end of the cave. ✅ Figure 1: The examples of factual inconsistency. Overlooking: The problem mentions that tomorrow is 10/16/1924, whereas LLMs overlook the condition of “tomorrow”, thinking that today is 10/16/1924. Hallucination: The problem mentions that each box weighs 15 pounds. However, LLMs assume each box weighs 16 pounds, which contradicts real conditions. Misinterpretation: LLMs have correctly answered the question in the problem, but it continues answering “how long”, resulting in a wrong answer. This is a typical misinterpretation of the problem. We evaluate RCoT on seven arithmetic reasoning datasets, including GSM8k (Cobbe et al., 2021b), AQA(Ling et al., 2017), SVAMP(Patei et al., 2021), AddSub(Hosseini et al., 2014), ASDw(AQA(Ling et al., 2017), SVAMP(Patei et al., 2021), and SingHll(Qanec-Kedzirak, et al., 2016). Experimental results demonstrate the effectiveness of RCoT, outperforming competitive baselines in both zero-shot and few-shot settings. In-depth analysis and human evaluation suggest that fine-grained feedback on factual inconsistency is crucial for LLMs to revise solutions for arithmetic problems. For example, ChatGPT could achieve 94.6% accuracy on GSM8k with manually written fine-grained feedback. However, we conducted comprehensive ablation studies to improve the performance of modules. Our findings encourage the community to further explore detecting and rectifying factual inconsistency to enhance LLMs’ reasoning ability. Our contributions are summarized as follows: • We propose a novel prompting method, Reversing Chain-of-Thought (RCoT) to effectively detect and rectify the factual inconsistency of LLMs in arithmetic reasoning, focusing on overlooked, hallucinated conditions and misinterpreted questions. RCoT performs competitive baseline models consistently across seven arithmetic reasoning tasks. --- • Prompting with fine-grained feedback on factual inconsistency shows encouraging results on improving LLM's reasoning abilities. Though automatically generated by RCOt Shows improved reasoning compared with standard models on ground-truth feedback can further improve the LLM's reasoning ability (e.g., ChatGPT reaches 94.6% accuracy on GSM8k). The gap between RCOt's feedback and human-written feedback encourages the community to further explore the automatic generation of fine-grained feedback. • RCOt offers more interpretability to the reasoning errors with fine-grained feedback on factual inconsistency, allowing humans to inspect solutions easier. 2 RELATED WORK Language Model for Reasoning Reasoning ability is a critical skill to solve complex problems, such as arithmetic reasoning (Končel-Kedzioski et al., 2016; Roy & Roth, 2016; Miao et al., 2021; Cobbe et al., 2021a; Dua et al., 2019), logical reasoning (Yu et al., 2020), commensense reasoning (Bhagavataula et al., 2019; Talmor et al., 2019; Zellers et al., 2018) and tabular reasoning (Zhu et al., 2021). Recently Large Language Models (e.g., GPT3) (Brown et al., 2020) and Human-Written Language Models (e.g., LLama (Tourron et al.) have demonstrated promising reasoning capability with Chain-of-Thought methods. However, large language models exhibit tendencies to generate intermediate steps that are factually inconsistent, rendering them incapable of solving complex problems requiring multi-step reasoning. In this work, we focus on the detection and verification of factually inconsistent errors in the intermediate reasoning steps, including question misinterpretation, condition hallucination and condition overlooking. Prompt Engineering Some prompt methods can elicit useful knowledge in large language models to better solve complex tasks, two representative examples of which are In-context Learning (Dhu et al., 2019); Recursive Large Language Models (e.g., GPT3) (Brown et al., 2020) and Chain-of-Thought prompt improves performance by promoting LLMs to think of intermediate steps. Inspired by the promising performance of CoT, many methods have explored how to further improve standard CoT. Least-to-most (Zhou et al., 2022) promotes program reuse by extending the workload into a series of self-training tasks. Self-Consistency prompting (Wang et al., 2022) improves performance through majority voting on multiple solutions. Similarly, Complex CoT (Fu et al., 2022) emphasizes the importance of prompt complexity and selects the most complex examples as prompts. Auto-CoT (Shun et al., 2023) is designed to reduce the workload of manual labeling. Active-CoT (Diao et al., 2023) selects the most uncertain questions as demonstration examples to further improve performance. However, these methods fail to address the factual inconsistency problem. Probably the most relevant work are Self-Verification (Weng et al., 2022), REFINER (Paul et al., 2023), and Reflexion (Shinn et al., 2023). These approaches focus on correcting LLM outputs. However, Self-Verification can only generate data without reducing the workload of manual labeling. Reflexion requires environmental feedback, which cannot be easily obtained in artificial reasoning. Compared to these methods, RCOt entirely relies on the LLM itself to generate fine-grained feedback on factual consistency. Reverse Engineering RCOt is inspired by the concept of Reverse Engineering, which has various applications in machine learning research. (Fredrikson et al., 2015) proposes a reverse method for linear models to evaluate models' privacy safety. (Fredrikson et al., 2015) introduces a model inversion method for shallow neural networks, which can reconstruct the face information. (Geva et al., 2022) unveils the internal prediction construction process of Transformer-based language models. The key application of the facial recognition model hyperparameters is another application of reinforcement learning techniques. (Bhagavataula et al., 2019) reverses network parameters by repeatedly requesting the predicted label from the target model. (Tramer et al., 2016) develops an avatar method to estimate training data and model architectures, while (Oh et al., 2019) trains a set of white box models to estimate model parameters. (Hu et al., 2018) estimates both the structure and the weights of a CNN model on a hardware accelerator from information leaky memory access patterns. Different from their goal of opening up the --- Figure 2: The framework of RCoT. (1) Reconstruction: Ask LLMs to reconstruct the problem according to the original solution with instruction and demonstration examples. (2) Decomposition: Decompose the original problem and reconstructed problem into fine-grained condition lists. (3) Comparison: Compare both lists of sub-conditions and questions to verify whether there are hallucinations, overlookings and misinterpretations. (4) Revision: Gather all factual inconsistencies into fine-grained feedback to instruct LLMs to revise solutions. black-box of deep learning models, our work focuses on automatically detecting and rectifying factual inconsistencies that appeared in the solutions generated by LLMs. 3 REVERSING CHAIN-OF-THOUGHT (RCoT) We propose RCoT for detecting and rectifying factual inconsistency (i.e., condition hallucinations, overlookings, and question misinterpretation) in CoT to enhance LLMs’ reasoning ability. Specifically, given a complex reasoning problem $Q$ and original solution $c$ generated by the LLM, we first ask LLMs to detect factual inconsistency: (i) Problem Reconstruction: Reconstruct the problem $Q$ based on the reconstructed solution $c$. (ii) Fine-grained Comparison: Conduct a fine-grained comparison between the original problem $Q$ and the reconstructed problem $Q$ to detect condition hallucinations, overlookings, and question misinterpretation. Then we rectify LLMs using detected factual inconsistency: (iii) Fine-grained Feedback and Revision: The fine-grained comparison reveals the factual inconsistency in original solutions. The detected factual inconsistencies are formulated into fine-grained feedback to guide LLMs in revising their solution accordingly. The overall schematic illustrations of our proposed approach are illustrated in Figure 2, and an example of RCoT is shown in Appendix B.3. --- 3.1 PROBLEM RECONSTRUCTION Intuitively, if the generated step-by-step solution of an arithmetic problem is logically and factually correct and complete, it is more likely for a human to infer what is the original problem. Similarly, we ask the LLM to reconstruct the problem to get $Q$ based on its own solution $c$, in order to verify whether it truly understands the problem. We manually write instructions and in-text examples as the reconstruction prompt. We find that the factual inconsistencies in the problem $Q$ condition overlooks (e.g., the LLM uses conditions that are not mentioned in the problem $Q$ condition overlooks) e.g., the LLM overfolds some important conditions in the problem $Q$, and question misinterpretations (e.g., the LLM misunderstand the question of $Q$ can be effectively exposed by comparing the reconstructed problem $Q$ with the original problem $Q$ ($\mathcal{Q}$ 3.2), as shown in Figures 9, 6, and 15 in Appendix B.1, respectively. The prompt template can be found in Figure 21. 3.2 FINE-GRAINED COMPARISON To detect condition hallucinations and overlookings, as well as question misinterpretsions in the solution $c$ from the reconstructed problem $Q$, a naive approach is to ask the LLM to directly compare $Q$ with $Q$. However, such comparisons usually fail to produce high-quality detection results (Figure 3), which is unsurprising because $Q$ and $Q$ contain rich information, and the course-grained comparison will initially generate some information, causing a subtle result. Therefore, we use fine-grained step-by-step comparisons to improve the detection quality. All prompt templates are shown in Figure 21. The process is as follows: **Problem Decomposition** $Q$ and $Q$ are unstructured texts, which are hard to be compared in an organizer manner. To overcome this issue, we ask the LLM to decompose the problem into a list of conditions $L_Q = [L_1^0, \dots, L_q^0], L_Q = [L_1^1, \dots, L_q^1]$. The structured condition list will then be used in fine-grained comparison. **Condition Comparison** To find the differences between $Q$ and $Q$, we first check whether their condition lists $L_1^0$ and $L_1^0$ are the same. Specifically, the LLM is required to answer whether each $L_1^0$ can be inferred from $L_1^0$. If $L_1^0$ cannot be inferred from $L_1^0$, then $L_1^0$ is either (1) overlooked in the solution or (2) hallucinated by the LLM as a different condition. Similarly, we ask the LLM to tell whether $L_1^0$ can be inferred from $L_1^0$ for every $j$. If $L_1^0$ cannot be inferred from $L_1^0$, then $L_1^0$ is hallucinated. Apparently, we need to conduct comparisons for $mn$ times in total. **Question Comparison** The LLM sometimes will also misinterpret the question (Figure 1). Therefore, we also ask LLM to compare the questions being asked in $Q$ and $Q$. If LLMs find the two questions are the same, the question in the question in their solution. This comparison only needs to be done once since there is one question per problem in most cases. After these comparisons, we detect hallucinated conditions, overlooked conditions, and misinterpreted questions. We then use them to formulate our fine-grained feedback to guide the LLM in revising its solution. 3.3 FINE-GRAINED FEEDBACK AND REVISION We assume the original solution is correct if we do not detect any factual inconsistency through fine-grained comparison. On the contrary, we formulate fine-grained feedback to guide the LLM in revising its solution if any factual inconsistency is detected. Specifically, the fine-grained feedback will first state that the solution is incorrect, then list the detected factual inconsistency, and finally ask the LLM to revise its solution. Figure 21 shows the template we use to formulate the feedback. We take the answer of the revised solution as the final output for evaluation. 4 EXPERIMENT Our extensive experiments aim to show that (1) RCoT benefits arithmetic reasoning by automatically detecting and rectifying hallucination and overlooking, and question misinterpretation; --- (2) fine-grained feedback of factual consistency is critical for LLMs to self-revise the solution. (3) fine-grained comparison is essential for constructing high-quality fine-grained feedback. 4.1 EXPERIMENT SETTING We used closed-source ChatGPT and open-source LaLMa-13B-Chat (Vuonr et al.) as the backbone LLMs for solution generation and set the temperature to 0 to improve reproducibility. We evaluate RCOT of seven intrinsic datasets with different difficulties, including GSMR8 (Cobet et al., 2019b), AQuA (Ling et al., 2017), SVAMP (Patel et al., 2021), AddSub (Hossein et al., 2014), ASDir (Miao et al., 2021), Date (Sivattava et al., 2022) and SingelEq (Koneck-Kedzikowski et al., 2016). Due to the high time cost of API calls, we do not use the whole test set but randomly sample test sub-sets. To reduce the redundant caused by test set sampling and make our results more convincing, we sample three test sub-sets that each contains 256 inputs. We report the average accuracy with deviation on the three test sub-sets. For the dataset that has less than 256 test inputs, we still evaluate three times since ChatGPT's outputs may change and report the average accuracy with deviation. A detailed description of each dataset is shown in Appendix B.5. We consider both zero-shot and few-shot settings. For the zero-shot setting, we add the prompt "Let’s think step by step to encourage LLMs to think intermediate steps without any demonstration example (Goal for the few-shot setting, we use four-shot COt prompts that consist of problems, solutions, and final answers). We compare our method with five baselines: (1) Chain-of-thought (CoT) (Wei et al., 2022) (2) Active-Prompting (Diao et al., 2023), a method that selects the most uncertainty problems as demonstration examples. (3) Double-Check asks LLMs to check their answers but does not point out whether the answer is correct. In our experiment, we use the prompt "You should double-check your answer via Rel Confidence CoT (Wang et al., 2022) through majority firing capability solutions to improve the performance." (4) Self-Refine (Mudaa et al., 2023) uses iterative feedback and refinement to revise the answer. We use TikTok from Openai to calculate the cost of average tokens.` 4.2 RCOT BENEFITS ARITHMETIC REASONING Table 1 shows the results of RCOT on seven arithmetic datasets. Our method consistently outperforms the standard CoT and the double-check methods in the zero-shot setting. Moreover, LLMs benefit more when they see a real confident CoT (Wang et al., 2022) because the AQuA dataset contains diverse problems, and the Date dataset requires multi-hop reasoning and common sense date knowledge. Both CoT and LaLMa achieve lower accuracy scores on AQuA and Date (51.3% and 66.7% for CoT and 27.2% and 52.4% for LaLMa) among all seven tasks. Meanwhile, we observe that our method helps LLMs improve by apparent margins on AQuA and DCOt. This improves the performance of CoT and LaLMa. Our method also remains effective for easier tasks. For example, CoT enhances the performance of the SVAMP dataset, which contains problems that usually only one-step calculation, by 2.8% and 2.5%. Moreover, we also observe greater improvements from our method on ChatGpt than LaLMa, potentially due to the stronger abilities of ChatGpt to detect and correct errors. We can observe similar results in the zero-shot setting to the zero-shot setting but selecting the most uncertain problems for LLMs as demonstrations (Diao et al., 2023). RCOT still improves the accuracy. It is worth noting that the performance of Double-Check method in the few-shot CoT sets decreases immensely. On the AQuA and GSMR8 datasets, its performance drops by 27.0% and 4.0%, suggesting that few-shot examples may increase the risk of revising correct solutions to the incorrect ones. LaLMa achieves a lower degree of susceptibility compared to CoT. We also compared RCOT with other stronger baselines (i.e., Self-Consistency, SC for short, and Self-Refine). Specifically, we conducted 30 trials per problem for SC and 3 trials per problem for RCOt in the zero-shot setting (set temperature to 0.7 Wang et al., 2022), which uses similar costs. Due to the extremely high cost, we do not experiment with the few-shot setting and leave it as our future work. We set max attempt to 5 for Self-Refine. Table 2 has shown the results, RCOT could achieve... --- Table 1: Average accuracy and standard deviation on seven arithmetic reasoning datasets. **Bold** denotes the best result. **Glenin**: The performance improvement compared with Standard CoT and Active-Promising in Zero-shot and Few-shot settings, respectively. *denotes the LLM that uses Manual-CoT, denotes that Active-Prompling (Diao et al., 2023) does not support the dataset in their source codes. | Model | Method | Artificially | | :--- | :--- | :--- | | | GSMKM | AQiA AddSub Date SingleEq ASDw SVAMP | | ULZ-308' | Standard | 14.4 23.6 18.2 14.4 20.2 16.9 12.5 | | LAMDN-178' | Standard | 14.3 20.6 11.9 18.7 18.7 12.5 | | Tet-drama-002' | Standard | 46.9 24.8 81.3 52.1 86.6 71.3 68.9 | ### Zero-shot CoT | Model | Method | Artificially | | :--- | :--- | :--- | | ChuCPT | Standard | 79.0±0.0 51.4±0.0 58.2±1.2 66.7±1.4 90.3±0.0 84.0±1.2 70.7±1.1 | | +DoubleCheck | 79.2±1.1 42.7±0.0 85.6±1.2 60.5±1.4 88.8±1.4 77.6±1.1 | | +CoT | 82.0±0.0 55.3±0.0 87.1±1.2 71.7±1.0 91.4±0.0 60.0±1.0 | | ΔRCoT*Standard (Acc) | - | (+0.2)±0.1 - (+0.1 --- Table 2: Average accuracy on seven arithmetic reasoning datasets among Self-Consistency (Wang et al., 2022), RCoT and Self-Refine (Madan et al., 2023). Hold denotes the best result. | Method | GSM8K | AQA | Adab | Date | Singalig | ASVN | SWAMP | Avg Acc | Ayu Token | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | | SC (30 trials per problem) | 81.6 | 70.8 | 88.6 | **80.0** | 92.9 | 90.2 | 80.4 | 83.5 | 5015.0 | | RCoT (1 trial per problem) | 82.0 | 56.3 | 87.2 | 71.9 | 92.4 | 86.3 | 79.7 | 79.4 | 1831.0 | | RCoT (3 trials per problem) | **83.2** | **72.8** | 89.8 | 78.9 | 93.8 | **91.8** | **81.2** | **84.5** | 5453.3 | | Self-Refine | 79.1 | 45.2 | 90.6 | 51.3 | 97.6 | 83.5 | 78.2 | 74.7 | 1902.3 | | attempt 0 | 80.7 | 45.2 | 91.4 | 52.7 | 96.0 | 84.3 | 76.8 | 76.1 | 1308.4 | | attempt 1 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3324.9 | | attempt 2 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3359.6 | | attempt 3 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3367.7 | | attempt 4 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3367.7 | Table 3: The performance of RCoT using fine-grained feedback and coarse-grained feedback. **w/o reasons**: remove explanations of specific mistakes from the original fine-grained feedback. The prompt becomes "Your answer is wrong. No reason, please check your answer," w/o judgment-reasons: further remove the high-level judgment. The prompt becomes "You should double-check your answer." Rel.: The performance drops compared with RCoT method. | Method | GSM8K | AQUA | SVAMP | Method | GSM8K | AQUA | SVAMP | Method | GSM8K | AQUA | SVAMP | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | | Standard RCoT | 79.0 | 51.3 | 76.7 | RCoT | 79.0 | 51.3 | 76.7 | RCoT | 79.0 | 51.3 | 76.7 | | RCo(Tours) | **82.0** | **55.5** | **79.6** | w/o question comparison | 82.0 | **55.5** | **79.6** | w/o question comparison | 82.0 | **55.5** | **79.6** | | w/o reasons | 80.0 (-2.0) | 52.3 (-3.2) | 79.0 (-0.7) | w/o judgment-reasons | 80.0 (-2.0) | 52.3 (-3.2) | 79.0 (-0.7) | w/o judgment-reasons | 80.0 (-2.0) | 52.3 (-3.2) | 79.0 (-0.7) | always use "You should double-check your answer" to guide LLMs in revising solutions regardless of the detection results of RCoT. Table 3 shows the results on SVAMP(easy), GSM8K(nodium), and AQa(tardy) datasets. We can see consistent performance drop when we remove detected factual inconsistency and only keep a high-level judgment, showing the effectiveness of fine-grained feedback. Moreover, we can observe that further removing judgment will make the performance even worse than standard RCoT. This is not surprising because LLMs may mistakenly revise the correct solution to the incorrect one. Appendix B.6 shows our example of Double-Check, where we can see that RCoT could help the LLM correct the solution, but Double-Check could fail. To further show the power of fine-grained feedback, we conduct a human evaluation. Specifically, amazingly, the LLM could reach 94.6% accuracy on the GSM8K dataset and could only reach 86.3% if we remove explanations of factual inconsistencies from our feedback (i.e., the same setting as "w/o reasons" in Table 3). Appendix B.2 shows examples of manually written and RCoT guided feedback. Since RCoT still has a gap (12.6% accuracy gap) compared with humans, we encourage the community to further explore fine-grained feedback generation. ### 4.4 FINE-GRAINED COMPARISON LEADS TO FINE-GRAINED FEEDBACK To get fine-grained feedback, conditions and questions are compared in a fine-grained manner in RCoT. A simpler way is to ask LLMs to generate fine-grained feedback directly by comparing the original problems and reconstructed problems. Appendix A has illustrated that LLMs can... --- reconstruct questions well when the reasoning factual consistency but not vice versa. Table 4 shows that coarse-grained comparison will cause a significant accuracy drop (even worse than salientity GLI) which successes it to generate high calls to generate high calls for fine-grained feedback (Figure 3). Therefore, problem decomposition and fine-grained comparison are essential. We also show that both condition comparison and question comparison are important. Removing either of them will lead to worse performance. This is because LLMs may hallucinate/overlook conditions (Figure 1) as well as misinterpret questions (Figure 1). **Original Problem:** Rebecca wants to train a LLM on the correctness of reconstructed problems. Reconstructed problem: Rebecca has 99 bananas and 27 marbles. How many groups can we make in total? **Course-Grained Comparison:** Yes, the reconstructed problem does not add new conditions or lack any conditions that were present in original problem. Original problem and reconstructed problem essentially ask the same question, with only the difference being the way the information is presented. Both problems ask how many groups of 3 items can be made from a collection of 9 eggs, 99 bananas, and 27 marbles. Therefore, reconstructed problem does not introduce any new conditions or leave out any conditions from the original problem. **Fine-Grained Comparison:** No, the candidate condition “Group items into sets of 3” cannot be deduced from the given condition list as it is not applicable to all items in the condition list. The condition only applies to the eggs that Rebecca wants to split into groups of 3. It does not apply to the bananas or the marbles. Additionally, the candidate condition implies that all items must be grouped into sets of 3; but there is no indication in the context that Rebecca wants to group all items into sets of 3. She only wants to split the eggs into groups of 3. Figure 3: This is an example showing the failure of the course-grained comparison. Red: The inconsistent condition between the original problem and the reconstructed problem. ### 4.5 ADDITIONAL QUANTITATIVE ANALYSIS OF RCOT To explore the effectiveness of RCOT, we construct further quantitative analysis on 100 problems the ChatGPT answered incorrectly. We manually divided these problems into four categories: condition overlooking, question misinterpretation and other errors. The statistical results are shown in Table 5. We find that RCOT is better at detecting overlooking and misinterpretation errors than hallucination errors. | Type | Found | Not Found | total | | :--- | :--- | :--- | :--- | | Overlooking | 5 | 1 | 6 | | Hallucinating | 16 | 15 | 31 | | Misinterpreting | 5 | 3 | 8 | | Other errors | 0 | 55 | 55 | ### 5 CONCLUSION In this paper, we propose RCOT—a method that enables LLMs to detect and rectify factual inconsistency and automate LLMs' reasoning abilities. RCOT can fine-grained comparison between the reconstructed problems and original questions, and then asks LLMs to recrystallize inconsistencies through fine-grained feedback. Experimental results on seven arithmetic reasoning datasets demonstrate the effectiveness of RCOT. Our experiments also show encouraging results of LLMs' reasoning abilities with the help of manual written fine-grained feedback, encouraging the community to further explore fine-grained feedback generation. RCOT could, in principle, be applied to other tasks requiring CoT solutions. We discuss the limitations and future work in Appendix C.
cG8Q4FE0Hi
EyMZdNlpiZ
Can the three different examples in the Introduction be unified?
true
RCOT: DETECTING AND RECTIFYING FACTUAL INCONSISTENCY IN REASONING BY REVERSING CHAIN-OF-THOUGHT Anonymous authors Paper under double-blind review ABSTRACT Large language Models (LLMs) have achieved promising performance on arithmetic reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting. However, LLMs face challenges in maintaining factual consistency during reasoning, exhibiting tendencies to condition overlooking, question misinterpretation, and condition hallucination over given problems. Existing methods use course-grained feedback (e.g., whether the answer is correct) to improve factual consistency. In this work, we propose RCot (Reversing Chain-of-Thought), a novel method to improve LLMs' reasoning abilities by automatically detecting and recifying factual inconsistency in LLMs' generated feedback to guide LLMs in revising solutions. Experimental results demonstrate improvements of RCot over standard CoT, Self-Consistency and Self-Refine across seven arithmetic datasets. Moreover, we find that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities (e.g., ChatGPT teaches 94.6% accuracy on GSM884), encouraging the community to further explore the fine-grained feedback generation methods. 1 INTRODUCTION Large language models (LLMs) (Brown et al., 2020b; Zhang et al., 2022; Chowdhery et al., 2022; Touron et al.) have showcased strong reasoning capabilities using chain-of-thought (CoT) (Wei et al., 2022; Chowdhery et al., 2022; Fung et al., 2022), where LLMs are prompted to generate intermediate steps before the final answer. Despite the impressive performance of CoT prompting across various research environments, there remains a need for fine-grained feedback on factual inconsistency in reasoning. Specifically, each reasoning problem usually consists of several conditions and a question, and LLMs exhibit tendencies to hallucinate, overlook conditions and misinterpret questions (Golovneva et al., 2022). While previous research has proposed various methods to enhance Chain-of-thought performance (Shum et al., 2023; Li et al., 2022; Diao et al., 2023; Shum et al., 2023; Zhou et al., 2022; Wang et al., 2022; Gao et al., 2023; Chen et al., 2022; Weng et al., 2022; Paul et al., 2023; Shim et al., 2023), there remains a noticeable absence of explicit studies addressing the issue of factual inconsistency. The most relevant work is probably Self-Verification (Weng et al., 2022), which verifies answers by swapping conditions and answers. However, it can only tell whether answers are correct and fail given fine-grained feedback on factual inconsistency to guide LLMs in revising solutions. Figure 1 shows an instance of factual inconsistent solution generated by ChatGPT, where the critical condition of "tomorrow, 10/16/1924" is mistakenly overlooked. Despite the incorrect solution, ChatGPT could be rectified if we explicitly point out that it overlooked the "tomorrow, 10/16/1924", condition. Motivated by this observation, we introduce Reverse Chain-of-thought (RCOT), a novel method for detecting and rectifying condition hallucination, overlooking and question misinterpretation (Figure 1) in CoT. More examples could be found in Appendix B.1. --- of our proposed method are illustrated in Figure 2. To detect factual inconsistency, RCoT begins with reconstructing new problems from the original solutions. Next, RCoT detects inconsistencies by constructing inconsistencies between the reconstructed problems and the original problems. To rectify inconsistencies, the detected inconsistencies will be formulated as fine-grained feedback to guide the LLMs in revising their solutions. Fine-grained feedback can not only be used to revise solutions but can also offer more interpretability of reasoning errors, allowing humans to inspect solutions easier. Overlooking Problem: Today’s meeting is rescheduled to 11 am tomorrow, 10/16/1924. What is the date one year ago from today? ChatGPT Chain-of-Thought: The current date is 10/16/1924. To find the date one year ago from today, you would subtract one year from the current year, which would be 1923. The correct answer is 10/16/1923. ✅ Hallucination Problem: A wooden bridge can carry no more than 5000 pounds. A delivery truck filled with identical boxes, each weighing 15 pounds, will pass over the bridge. The combined weight of the driver and the courier truck is 15 pounds. What is the maximum number of boxes which can be loaded onto the truck while not exceeding the bridge’s weight limit? ChatGPT Chain-of-Thought: First, we need to submit the weight of the driver and the empty truck from the maximum weight the bridge can carry. Let's assume each box weighs 16 pounds (15 pounds for the contents and 1 pound for the box). This means we can load a maximum of 77 boxes onto the truck without exceeding the bridge’s weight limit. ✅ Misinterpretation Problem: The cave is 1218 feet deep and they are already at 849 feet. If they are traveling at speed of 11 km/h, how much time will it take them to descend that distance at a speed of 17 feet per unit of time? Therefore, it will take them 22 units of time to reach the end of the cave. ✅ Figure 1: The examples of factual inconsistency. Overlooking: The problem mentions that tomorrow is 10/16/1924, whereas LLMs overlook the condition of “tomorrow”, thinking that today is 10/16/1924. Hallucination: The problem mentions that each box weighs 15 pounds. However, LLMs assume each box weighs 16 pounds, which contradicts real conditions. Misinterpretation: LLMs have correctly answered the question in the problem, but it continues answering “how long”, resulting in a wrong answer. This is a typical misinterpretation of the problem. We evaluate RCoT on seven arithmetic reasoning datasets, including GSM8k (Cobbe et al., 2021b), AQA(Ling et al., 2017), SVAMP(Patei et al., 2021), AddSub(Hosseini et al., 2014), ASDw(AQA(Ling et al., 2017), SVAMP(Patei et al., 2021), and SingHll(Qanec-Kedzirak, et al., 2016). Experimental results demonstrate the effectiveness of RCoT, outperforming competitive baselines in both zero-shot and few-shot settings. In-depth analysis and human evaluation suggest that fine-grained feedback on factual inconsistency is crucial for LLMs to revise solutions for arithmetic problems. For example, ChatGPT could achieve 94.6% accuracy on GSM8k with manually written fine-grained feedback. However, we conducted comprehensive ablation studies to improve the performance of modules. Our findings encourage the community to further explore detecting and rectifying factual inconsistency to enhance LLMs’ reasoning ability. Our contributions are summarized as follows: • We propose a novel prompting method, Reversing Chain-of-Thought (RCoT) to effectively detect and rectify the factual inconsistency of LLMs in arithmetic reasoning, focusing on overlooked, hallucinated conditions and misinterpreted questions. RCoT performs competitive baseline models consistently across seven arithmetic reasoning tasks. --- • Prompting with fine-grained feedback on factual inconsistency shows encouraging results on improving LLM's reasoning abilities. Though automatically generated by RCOt Shows improved reasoning compared with standard models on ground-truth feedback can further improve the LLM's reasoning ability (e.g., ChatGPT reaches 94.6% accuracy on GSM8k). The gap between RCOt's feedback and human-written feedback encourages the community to further explore the automatic generation of fine-grained feedback. • RCOt offers more interpretability to the reasoning errors with fine-grained feedback on factual inconsistency, allowing humans to inspect solutions easier. 2 RELATED WORK Language Model for Reasoning Reasoning ability is a critical skill to solve complex problems, such as arithmetic reasoning (Končel-Kedzioski et al., 2016; Roy & Roth, 2016; Miao et al., 2021; Cobbe et al., 2021a; Dua et al., 2019), logical reasoning (Yu et al., 2020), commensense reasoning (Bhagavataula et al., 2019; Talmor et al., 2019; Zellers et al., 2018) and tabular reasoning (Zhu et al., 2021). Recently Large Language Models (e.g., GPT3) (Brown et al., 2020) and Human-Written Language Models (e.g., LLama (Tourron et al.) have demonstrated promising reasoning capability with Chain-of-Thought methods. However, large language models exhibit tendencies to generate intermediate steps that are factually inconsistent, rendering them incapable of solving complex problems requiring multi-step reasoning. In this work, we focus on the detection and verification of factually inconsistent errors in the intermediate reasoning steps, including question misinterpretation, condition hallucination and condition overlooking. Prompt Engineering Some prompt methods can elicit useful knowledge in large language models to better solve complex tasks, two representative examples of which are In-context Learning (Dhu et al., 2019); Recursive Large Language Models (e.g., GPT3) (Brown et al., 2020) and Chain-of-Thought prompt improves performance by promoting LLMs to think of intermediate steps. Inspired by the promising performance of CoT, many methods have explored how to further improve standard CoT. Least-to-most (Zhou et al., 2022) promotes program reuse by extending the workload into a series of self-training tasks. Self-Consistency prompting (Wang et al., 2022) improves performance through majority voting on multiple solutions. Similarly, Complex CoT (Fu et al., 2022) emphasizes the importance of prompt complexity and selects the most complex examples as prompts. Auto-CoT (Shun et al., 2023) is designed to reduce the workload of manual labeling. Active-CoT (Diao et al., 2023) selects the most uncertain questions as demonstration examples to further improve performance. However, these methods fail to address the factual inconsistency problem. Probably the most relevant work are Self-Verification (Weng et al., 2022), REFINER (Paul et al., 2023), and Reflexion (Shinn et al., 2023). These approaches focus on correcting LLM outputs. However, Self-Verification can only generate data without reducing the workload of manual labeling. Reflexion requires environmental feedback, which cannot be easily obtained in artificial reasoning. Compared to these methods, RCOt entirely relies on the LLM itself to generate fine-grained feedback on factual consistency. Reverse Engineering RCOt is inspired by the concept of Reverse Engineering, which has various applications in machine learning research. (Fredrikson et al., 2015) proposes a reverse method for linear models to evaluate models' privacy safety. (Fredrikson et al., 2015) introduces a model inversion method for shallow neural networks, which can reconstruct the face information. (Geva et al., 2022) unveils the internal prediction construction process of Transformer-based language models. The key application of the facial recognition model hyperparameters is another application of reinforcement learning techniques. (Bhagavataula et al., 2019) reverses network parameters by repeatedly requesting the predicted label from the target model. (Tramer et al., 2016) develops an avatar method to estimate training data and model architectures, while (Oh et al., 2019) trains a set of white box models to estimate model parameters. (Hu et al., 2018) estimates both the structure and the weights of a CNN model on a hardware accelerator from information leaky memory access patterns. Different from their goal of opening up the --- Figure 2: The framework of RCoT. (1) Reconstruction: Ask LLMs to reconstruct the problem according to the original solution with instruction and demonstration examples. (2) Decomposition: Decompose the original problem and reconstructed problem into fine-grained condition lists. (3) Comparison: Compare both lists of sub-conditions and questions to verify whether there are hallucinations, overlookings and misinterpretations. (4) Revision: Gather all factual inconsistencies into fine-grained feedback to instruct LLMs to revise solutions. black-box of deep learning models, our work focuses on automatically detecting and rectifying factual inconsistencies that appeared in the solutions generated by LLMs. 3 REVERSING CHAIN-OF-THOUGHT (RCoT) We propose RCoT for detecting and rectifying factual inconsistency (i.e., condition hallucinations, overlookings, and question misinterpretation) in CoT to enhance LLMs’ reasoning ability. Specifically, given a complex reasoning problem $Q$ and original solution $c$ generated by the LLM, we first ask LLMs to detect factual inconsistency: (i) Problem Reconstruction: Reconstruct the problem $Q$ based on the reconstructed solution $c$. (ii) Fine-grained Comparison: Conduct a fine-grained comparison between the original problem $Q$ and the reconstructed problem $Q$ to detect condition hallucinations, overlookings, and question misinterpretation. Then we rectify LLMs using detected factual inconsistency: (iii) Fine-grained Feedback and Revision: The fine-grained comparison reveals the factual inconsistency in original solutions. The detected factual inconsistencies are formulated into fine-grained feedback to guide LLMs in revising their solution accordingly. The overall schematic illustrations of our proposed approach are illustrated in Figure 2, and an example of RCoT is shown in Appendix B.3. --- 3.1 PROBLEM RECONSTRUCTION Intuitively, if the generated step-by-step solution of an arithmetic problem is logically and factually correct and complete, it is more likely for a human to infer what is the original problem. Similarly, we ask the LLM to reconstruct the problem to get $Q$ based on its own solution $c$, in order to verify whether it truly understands the problem. We manually write instructions and in-text examples as the reconstruction prompt. We find that the factual inconsistencies in the problem $Q$ condition overlooks (e.g., the LLM uses conditions that are not mentioned in the problem $Q$ condition overlooks) e.g., the LLM overfolds some important conditions in the problem $Q$, and question misinterpretations (e.g., the LLM misunderstand the question of $Q$ can be effectively exposed by comparing the reconstructed problem $Q$ with the original problem $Q$ ($\mathcal{Q}$ 3.2), as shown in Figures 9, 6, and 15 in Appendix B.1, respectively. The prompt template can be found in Figure 21. 3.2 FINE-GRAINED COMPARISON To detect condition hallucinations and overlookings, as well as question misinterpretsions in the solution $c$ from the reconstructed problem $Q$, a naive approach is to ask the LLM to directly compare $Q$ with $Q$. However, such comparisons usually fail to produce high-quality detection results (Figure 3), which is unsurprising because $Q$ and $Q$ contain rich information, and the course-grained comparison will initially generate some information, causing a subtle result. Therefore, we use fine-grained step-by-step comparisons to improve the detection quality. All prompt templates are shown in Figure 21. The process is as follows: **Problem Decomposition** $Q$ and $Q$ are unstructured texts, which are hard to be compared in an organizer manner. To overcome this issue, we ask the LLM to decompose the problem into a list of conditions $L_Q = [L_1^0, \dots, L_q^0], L_Q = [L_1^1, \dots, L_q^1]$. The structured condition list will then be used in fine-grained comparison. **Condition Comparison** To find the differences between $Q$ and $Q$, we first check whether their condition lists $L_1^0$ and $L_1^0$ are the same. Specifically, the LLM is required to answer whether each $L_1^0$ can be inferred from $L_1^0$. If $L_1^0$ cannot be inferred from $L_1^0$, then $L_1^0$ is either (1) overlooked in the solution or (2) hallucinated by the LLM as a different condition. Similarly, we ask the LLM to tell whether $L_1^0$ can be inferred from $L_1^0$ for every $j$. If $L_1^0$ cannot be inferred from $L_1^0$, then $L_1^0$ is hallucinated. Apparently, we need to conduct comparisons for $mn$ times in total. **Question Comparison** The LLM sometimes will also misinterpret the question (Figure 1). Therefore, we also ask LLM to compare the questions being asked in $Q$ and $Q$. If LLMs find the two questions are the same, the question in the question in their solution. This comparison only needs to be done once since there is one question per problem in most cases. After these comparisons, we detect hallucinated conditions, overlooked conditions, and misinterpreted questions. We then use them to formulate our fine-grained feedback to guide the LLM in revising its solution. 3.3 FINE-GRAINED FEEDBACK AND REVISION We assume the original solution is correct if we do not detect any factual inconsistency through fine-grained comparison. On the contrary, we formulate fine-grained feedback to guide the LLM in revising its solution if any factual inconsistency is detected. Specifically, the fine-grained feedback will first state that the solution is incorrect, then list the detected factual inconsistency, and finally ask the LLM to revise its solution. Figure 21 shows the template we use to formulate the feedback. We take the answer of the revised solution as the final output for evaluation. 4 EXPERIMENT Our extensive experiments aim to show that (1) RCoT benefits arithmetic reasoning by automatically detecting and rectifying hallucination and overlooking, and question misinterpretation; --- (2) fine-grained feedback of factual consistency is critical for LLMs to self-revise the solution. (3) fine-grained comparison is essential for constructing high-quality fine-grained feedback. 4.1 EXPERIMENT SETTING We used closed-source ChatGPT and open-source LaLMa-13B-Chat (Vuonr et al.) as the backbone LLMs for solution generation and set the temperature to 0 to improve reproducibility. We evaluate RCOT of seven intrinsic datasets with different difficulties, including GSMR8 (Cobet et al., 2019b), AQuA (Ling et al., 2017), SVAMP (Patel et al., 2021), AddSub (Hossein et al., 2014), ASDir (Miao et al., 2021), Date (Sivattava et al., 2022) and SingelEq (Koneck-Kedzikowski et al., 2016). Due to the high time cost of API calls, we do not use the whole test set but randomly sample test sub-sets. To reduce the redundant caused by test set sampling and make our results more convincing, we sample three test sub-sets that each contains 256 inputs. We report the average accuracy with deviation on the three test sub-sets. For the dataset that has less than 256 test inputs, we still evaluate three times since ChatGPT's outputs may change and report the average accuracy with deviation. A detailed description of each dataset is shown in Appendix B.5. We consider both zero-shot and few-shot settings. For the zero-shot setting, we add the prompt "Let’s think step by step to encourage LLMs to think intermediate steps without any demonstration example (Goal for the few-shot setting, we use four-shot COt prompts that consist of problems, solutions, and final answers). We compare our method with five baselines: (1) Chain-of-thought (CoT) (Wei et al., 2022) (2) Active-Prompting (Diao et al., 2023), a method that selects the most uncertainty problems as demonstration examples. (3) Double-Check asks LLMs to check their answers but does not point out whether the answer is correct. In our experiment, we use the prompt "You should double-check your answer via Rel Confidence CoT (Wang et al., 2022) through majority firing capability solutions to improve the performance." (4) Self-Refine (Mudaa et al., 2023) uses iterative feedback and refinement to revise the answer. We use TikTok from Openai to calculate the cost of average tokens.` 4.2 RCOT BENEFITS ARITHMETIC REASONING Table 1 shows the results of RCOT on seven arithmetic datasets. Our method consistently outperforms the standard CoT and the double-check methods in the zero-shot setting. Moreover, LLMs benefit more when they see a real confident CoT (Wang et al., 2022) because the AQuA dataset contains diverse problems, and the Date dataset requires multi-hop reasoning and common sense date knowledge. Both CoT and LaLMa achieve lower accuracy scores on AQuA and Date (51.3% and 66.7% for CoT and 27.2% and 52.4% for LaLMa) among all seven tasks. Meanwhile, we observe that our method helps LLMs improve by apparent margins on AQuA and DCOt. This improves the performance of CoT and LaLMa. Our method also remains effective for easier tasks. For example, CoT enhances the performance of the SVAMP dataset, which contains problems that usually only one-step calculation, by 2.8% and 2.5%. Moreover, we also observe greater improvements from our method on ChatGpt than LaLMa, potentially due to the stronger abilities of ChatGpt to detect and correct errors. We can observe similar results in the zero-shot setting to the zero-shot setting but selecting the most uncertain problems for LLMs as demonstrations (Diao et al., 2023). RCOT still improves the accuracy. It is worth noting that the performance of Double-Check method in the few-shot CoT sets decreases immensely. On the AQuA and GSMR8 datasets, its performance drops by 27.0% and 4.0%, suggesting that few-shot examples may increase the risk of revising correct solutions to the incorrect ones. LaLMa achieves a lower degree of susceptibility compared to CoT. We also compared RCOT with other stronger baselines (i.e., Self-Consistency, SC for short, and Self-Refine). Specifically, we conducted 30 trials per problem for SC and 3 trials per problem for RCOt in the zero-shot setting (set temperature to 0.7 Wang et al., 2022), which uses similar costs. Due to the extremely high cost, we do not experiment with the few-shot setting and leave it as our future work. We set max attempt to 5 for Self-Refine. Table 2 has shown the results, RCOT could achieve... --- Table 1: Average accuracy and standard deviation on seven arithmetic reasoning datasets. **Bold** denotes the best result. **Glenin**: The performance improvement compared with Standard CoT and Active-Promising in Zero-shot and Few-shot settings, respectively. *denotes the LLM that uses Manual-CoT, denotes that Active-Prompling (Diao et al., 2023) does not support the dataset in their source codes. | Model | Method | Artificially | | :--- | :--- | :--- | | | GSMKM | AQiA AddSub Date SingleEq ASDw SVAMP | | ULZ-308' | Standard | 14.4 23.6 18.2 14.4 20.2 16.9 12.5 | | LAMDN-178' | Standard | 14.3 20.6 11.9 18.7 18.7 12.5 | | Tet-drama-002' | Standard | 46.9 24.8 81.3 52.1 86.6 71.3 68.9 | ### Zero-shot CoT | Model | Method | Artificially | | :--- | :--- | :--- | | ChuCPT | Standard | 79.0±0.0 51.4±0.0 58.2±1.2 66.7±1.4 90.3±0.0 84.0±1.2 70.7±1.1 | | +DoubleCheck | 79.2±1.1 42.7±0.0 85.6±1.2 60.5±1.4 88.8±1.4 77.6±1.1 | | +CoT | 82.0±0.0 55.3±0.0 87.1±1.2 71.7±1.0 91.4±0.0 60.0±1.0 | | ΔRCoT*Standard (Acc) | - | (+0.2)±0.1 - (+0.1 --- Table 2: Average accuracy on seven arithmetic reasoning datasets among Self-Consistency (Wang et al., 2022), RCoT and Self-Refine (Madan et al., 2023). Hold denotes the best result. | Method | GSM8K | AQA | Adab | Date | Singalig | ASVN | SWAMP | Avg Acc | Ayu Token | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | | SC (30 trials per problem) | 81.6 | 70.8 | 88.6 | **80.0** | 92.9 | 90.2 | 80.4 | 83.5 | 5015.0 | | RCoT (1 trial per problem) | 82.0 | 56.3 | 87.2 | 71.9 | 92.4 | 86.3 | 79.7 | 79.4 | 1831.0 | | RCoT (3 trials per problem) | **83.2** | **72.8** | 89.8 | 78.9 | 93.8 | **91.8** | **81.2** | **84.5** | 5453.3 | | Self-Refine | 79.1 | 45.2 | 90.6 | 51.3 | 97.6 | 83.5 | 78.2 | 74.7 | 1902.3 | | attempt 0 | 80.7 | 45.2 | 91.4 | 52.7 | 96.0 | 84.3 | 76.8 | 76.1 | 1308.4 | | attempt 1 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3324.9 | | attempt 2 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3359.6 | | attempt 3 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3367.7 | | attempt 4 | 80.7 | 49.2 | 91.4 | 52.7 | 98.0 | 84.3 | 76.8 | 76.1 | 3367.7 | Table 3: The performance of RCoT using fine-grained feedback and coarse-grained feedback. **w/o reasons**: remove explanations of specific mistakes from the original fine-grained feedback. The prompt becomes "Your answer is wrong. No reason, please check your answer," w/o judgment-reasons: further remove the high-level judgment. The prompt becomes "You should double-check your answer." Rel.: The performance drops compared with RCoT method. | Method | GSM8K | AQUA | SVAMP | Method | GSM8K | AQUA | SVAMP | Method | GSM8K | AQUA | SVAMP | | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | | Standard RCoT | 79.0 | 51.3 | 76.7 | RCoT | 79.0 | 51.3 | 76.7 | RCoT | 79.0 | 51.3 | 76.7 | | RCo(Tours) | **82.0** | **55.5** | **79.6** | w/o question comparison | 82.0 | **55.5** | **79.6** | w/o question comparison | 82.0 | **55.5** | **79.6** | | w/o reasons | 80.0 (-2.0) | 52.3 (-3.2) | 79.0 (-0.7) | w/o judgment-reasons | 80.0 (-2.0) | 52.3 (-3.2) | 79.0 (-0.7) | w/o judgment-reasons | 80.0 (-2.0) | 52.3 (-3.2) | 79.0 (-0.7) | always use "You should double-check your answer" to guide LLMs in revising solutions regardless of the detection results of RCoT. Table 3 shows the results on SVAMP(easy), GSM8K(nodium), and AQa(tardy) datasets. We can see consistent performance drop when we remove detected factual inconsistency and only keep a high-level judgment, showing the effectiveness of fine-grained feedback. Moreover, we can observe that further removing judgment will make the performance even worse than standard RCoT. This is not surprising because LLMs may mistakenly revise the correct solution to the incorrect one. Appendix B.6 shows our example of Double-Check, where we can see that RCoT could help the LLM correct the solution, but Double-Check could fail. To further show the power of fine-grained feedback, we conduct a human evaluation. Specifically, amazingly, the LLM could reach 94.6% accuracy on the GSM8K dataset and could only reach 86.3% if we remove explanations of factual inconsistencies from our feedback (i.e., the same setting as "w/o reasons" in Table 3). Appendix B.2 shows examples of manually written and RCoT guided feedback. Since RCoT still has a gap (12.6% accuracy gap) compared with humans, we encourage the community to further explore fine-grained feedback generation. ### 4.4 FINE-GRAINED COMPARISON LEADS TO FINE-GRAINED FEEDBACK To get fine-grained feedback, conditions and questions are compared in a fine-grained manner in RCoT. A simpler way is to ask LLMs to generate fine-grained feedback directly by comparing the original problems and reconstructed problems. Appendix A has illustrated that LLMs can... --- reconstruct questions well when the reasoning factual consistency but not vice versa. Table 4 shows that coarse-grained comparison will cause a significant accuracy drop (even worse than salientity GLI) which successes it to generate high calls to generate high calls for fine-grained feedback (Figure 3). Therefore, problem decomposition and fine-grained comparison are essential. We also show that both condition comparison and question comparison are important. Removing either of them will lead to worse performance. This is because LLMs may hallucinate/overlook conditions (Figure 1) as well as misinterpret questions (Figure 1). **Original Problem:** Rebecca wants to train a LLM on the correctness of reconstructed problems. Reconstructed problem: Rebecca has 99 bananas and 27 marbles. How many groups can we make in total? **Course-Grained Comparison:** Yes, the reconstructed problem does not add new conditions or lack any conditions that were present in original problem. Original problem and reconstructed problem essentially ask the same question, with only the difference being the way the information is presented. Both problems ask how many groups of 3 items can be made from a collection of 9 eggs, 99 bananas, and 27 marbles. Therefore, reconstructed problem does not introduce any new conditions or leave out any conditions from the original problem. **Fine-Grained Comparison:** No, the candidate condition “Group items into sets of 3” cannot be deduced from the given condition list as it is not applicable to all items in the condition list. The condition only applies to the eggs that Rebecca wants to split into groups of 3. It does not apply to the bananas or the marbles. Additionally, the candidate condition implies that all items must be grouped into sets of 3; but there is no indication in the context that Rebecca wants to group all items into sets of 3. She only wants to split the eggs into groups of 3. Figure 3: This is an example showing the failure of the course-grained comparison. Red: The inconsistent condition between the original problem and the reconstructed problem. ### 4.5 ADDITIONAL QUANTITATIVE ANALYSIS OF RCOT To explore the effectiveness of RCOT, we construct further quantitative analysis on 100 problems the ChatGPT answered incorrectly. We manually divided these problems into four categories: condition overlooking, question misinterpretation and other errors. The statistical results are shown in Table 5. We find that RCOT is better at detecting overlooking and misinterpretation errors than hallucination errors. | Type | Found | Not Found | total | | :--- | :--- | :--- | :--- | | Overlooking | 5 | 1 | 6 | | Hallucinating | 16 | 15 | 31 | | Misinterpreting | 5 | 3 | 8 | | Other errors | 0 | 55 | 55 | ### 5 CONCLUSION In this paper, we propose RCOT—a method that enables LLMs to detect and rectify factual inconsistency and automate LLMs' reasoning abilities. RCOT can fine-grained comparison between the reconstructed problems and original questions, and then asks LLMs to recrystallize inconsistencies through fine-grained feedback. Experimental results on seven arithmetic reasoning datasets demonstrate the effectiveness of RCOT. Our experiments also show encouraging results of LLMs' reasoning abilities with the help of manual written fine-grained feedback, encouraging the community to further explore fine-grained feedback generation. RCOT could, in principle, be applied to other tasks requiring CoT solutions. We discuss the limitations and future work in Appendix C.
CgpiO0DRrk
2mQAM45xsU
Each EBS serves a distinct region (no overlap, as per the paper) and optimizing each EBS individually should be sufficed in my opinion. How sharing among servers will help here?
true
VIDEO CACHING AT DATA-DRIFTING NETWORK EDGE: A KD-BASED CROSS-DOMAIN COLLABORATIVE SOLUTION Anonymous authors Paper under double-blind review ABSTRACT The surge in video streaming has caused network congestion and quality decline, posing a significant challenge for efficient content delivery. Edge caching, using mobile edge caching servers like edge base stations (EBS), has emerged as a cost-effective solution. Collaborative edge caching, addressing space limitations by fostering cooperation and content sharing among servers, improves caching hit rates (CHR). However, little attention has been paid to the impact of request characteristics on different servers. To tackle this issue, we conducted a study using data collected from Kuaisbou company over a period of four weeks, complying with video requests. Our research finding suggests sparse EBSs significantly impede the overall CHR improvement. Knowledge distillation (KD), a technique that transfers knowledge from strong models to weak models, is expected to solve this bottleneck problem. However, traditional KD methods often rely on the assumption of independent and identically distributed data but may not be real-world scenarios where data occurs. We identify two major types of data drift in caching data: temporal drift and spatial drift. To overcome these challenges, we propose an adaptive KD-based cross-domain collaborative edge caching (KDCdCEC) framework, incorporating three tailored components: i) side wise DRL based KD-enhanced caching agent capable of adapting to EBSs with varying storage sizes, ii) a deep deterministic policy gradient-based algorithm that adaptively configures the reference weights of EBS on their KD partners, andiii) a content-aware request routing mechanism for partner adjustment. Experimental results show that KDCdCEC outperforms state-of-the-art approaches in average CHR, average latency, and traffic cost. 1 INTRODUCTION Video streaming, including live streams and clips, is projected to comprise over 65% of global internet traffic by the end of 2023, as reported by Sandvine Sandvine (2023). This rapid growth has caused network congestion and potential quality decline, posing a challenge for content providers. Edge caching, often through a content delivery network (CDN), aims to address this issue by enabling the development of mobile communication technologies, such as 5G, mobile edge caching has introduced a novel solution for video content delivery by bringing the content closer to users with lower cost Hu et al. (2015). However, the storage space of mobile edge caching servers (i.e., edge base stations, abbreviated as EBSs) is limited, impacting the increase of CHR. Collaborative edge caching has recently emerged as a promising solution to address this limitation Poulmeenas et al. (2018). Similarly, Bampili and Pampilli (2018). Collaborative edge caching cooperation and content sharing, boosts CHR, leading to decreased average latency and traffic costs. Despite notable progress in edge caching technologies, scarcity attention has been given to the impact of request characteristics on various servers. To bridge this gap, we conducted a comprehensive study using data collected from Kuaisbou company’ over a four-week period, encompassing 350 1 Kuaisbou stands as one of the premier short video platform of China, boasting a daily active user base of 355.7 million by the end of 2022Homaia (2023). --- million video requests, revealing substantial variations in the number of requests handled by different EBSs in real-world scenarios (shown in Fig. 10)$^3$. Specifically, at least 78% of EBSs received forward compressive.Was applied during a strong positive correlation between model CHR and the number of requests handled by various servers (coefficient of determination $R^2 = 0.8671$). Furthermore, the average CHR of request-sparse EBSs was 45% lower than that of request-dense EBSs (request volume higher than midrange). Therefore, request-sparse EBSs significantly bottleneck the overall CHR improvement in the edge caching problem. Recently, knowledge distillation (KD) (Gu et al., 2021) has proven effective in transferring valuable information two major types of data drift in caching data. In the context of real-world models that models on request-dense EBSs harbor richer knowledge for decision-making, guiding cache replacement for other EBSs. To tackle this, we propose a KD-based collaborative solution for video caching. However, traditional KD methods often rely on the independent and identically distributed assumption, which may not hold in real-world scenarios with data drift. Through our measurements, we identified two major types of data drift in caching data: temporal drift and spatial drift. Temporal drift refers to the rapid changes in the popularity of requested videos over time. To quantify this phenomenon, we defined frequency (number of requests required by a single video), request proportion (ratio of request frequency to total request volume in the corresponding period), and request similarity (proportion of the same requested video to all requested videos in two different minutes). We randomly select various time periods and observe that, on average, the request frequency per day was changed by 34% after 2 hours, while the average request similarity after 24 hours was only 59%. Different video types exhibit distinct frequency change trends. Spatial drift, on the other hand, relates to the uneyeshed distributed requested content of different geographical domains$^4$. We use the coefficient of variation (CV) as a standardized measure of different values of requested video time per request, so that the request proportion CV for 56% in each domain exceeds 10% again. Additionally, the average CV of the request frequency change ratio after 24 hours in each domain for all requested videos was 1.18. More measurement results are shown in Fig. 1. Given the observations, we aim to address the following challenge: Facing the complicated imbalance requests of EBSs, how to develop an effective collaborative framework against temporal and spatial drift simultaneously? To achieve this goal, we present an adaptive KD-based cross-domain KDCdCFCK model, called KDCdCFCK. Firstly, i) Motivated by the great success of deep reinforcement learning (DRL) on the temporal drift problem, we formulate edge caching as a Markov decision process (MDP) and integrate it with KD to facilitate collaborative edge caching. To reduce the computational complexity for EBSs with large storage space, we design a slot-wise reinforcement learning agent that can be directly applied to EBSs with different storage sizes. ii) To handle spatial drift request, we propose a deep deterministic policy gradient (DDPG) based algorithm that adaptively configure the reference weights of servers on their partners. As such, dynamically adjusting weights may impact the performance of intermediate processes. To mitigate this issue, we introduce a content-aware request routing mechanism that can directly discard unsuitable partners, thus enhancing the preprocessing but half of the Kuai-shan backbone in comparison to the best-performing collaborative baseline, KDCdCFCK can improve CHR by 5.26% with less cost (0.94% latency reduction and 5.16% traffic cost reduction). 2 PRELIMINARY 2.1 ANALYSIS OF VIDEO REQUEST MEASUREMENT For in-depth insights into request heterogeneity and dynamics, we analyzed 350 million real-world traces, dividing the area into a 7×7 grid for examination. $^{2}$Measurements are based on the result of non-collaborative approach in Table 3. $^{3}$Each EBS serverIndustry region referred to as a domain, with no overlap between them. $^{4}$ACV greatest is indexed by degree of variability relative to the mean of the dataset and the dataset is not normally distributedAdiDi (2010). --- (a) Volume Drift: Request amount (b) Temporal Drift: Popularity trend (c) Spatial Drift: CV of the request frequency change ratio all content after 24 hours in each domain Figure 1: Real-world video request measurements and analysis. Fig. 1(a) shows the positive correlation between request amount and EBS performance as mentioned in introduction. In addition, we observe the spatio-temporal characteristics of user traces. We recently sampled 510 videos to construct their highest requests for 14 consecutive days to construct daily highest request vector as feature. We adopt cosine similarity to calculate the distance between videos and classify them into 4 clusters according to the average distance scene plot. Subsequently, we select 4 representatives from the top 100 most requested videos, who are the closest to the center of mass of each cluster respectively. The daily highest request of these representatives is depicted in Fig. 1(b), which reveals that different types of content exhibit varying popularity trends. Fig. 1(c) shows the request proportion CV for 56% of videos in each domain exceeded 1 on average, indicating the proportion of more than half content varies greatly among different EBSs at the same moment. Apart from that, we change the domain width from 1km to 4km and calculate the request frequency, change rate of all content. The result indicates the average CV of the request frequency change ratio after 24 hours in each domain for all requested videos is 1.18 when the domain width is set to 2km. This insight reveals the trend of the same content in different domains is generally inconsistent. Overall, our measurement techniques highlight the broad and dynamic nature of content popularity, showcasing diverse preferences and trends among different EBSs. According to our measurement of daily content similarity in 14 consecutive days, we found two distinct features, which are framed with rounded rectangular box, as marked in Fig. 2. Furthermore, we also measured the similarity of the content of EBS requests over times. We divide requests within two five-minute intervals and calculate the request similarity in each slot. As illustrated in Fig. 3, after one hour, the similarities drop to 0.85, two hours later, the similarity decay below 0.75. This result indicates content similarities of each EBS are rapidly decaying. These insights derived from our data indicate a significant motivation for the design of our algorithm, which will be elaborated on in Section III. 2.2 KD-BASED EDGE CACHE FRAMEWORK We consider a KD-based edge caching scenario for mobile video streaming, as illustrated in Fig. 4. We assume that the CDN server have cached all videos and connect to EBSs via backbone networkMa et al. (2017a). EBSs are distributed in a citywide area equipped with edge servers, which provide minor storage capacity for video caching and DRL-based model for cache decision-making. Each EBS serves the local video requests within its coverage area with two states: a cache hit and miss, with the following steps. When a request arrives, if cache hit, EBS returns the cached content immediately (The short-dashed arrows labeled prestore and 1), or it will fetch the content from CDN --- server (The long-dashed arrows labeled 1, 2, and 3). In our context, time is divided into continuous caching periods, where each EBS conducts cache replacement at an end of each period. EBSs communicate through a KG network. Wang et al. (2010), c. Wang et al. (2010), can collaborate and periodically acquire knowledge from partners and strengthen their own cache replacement strategies through KD (The double sided arrow between EBSs). We set the CHR as the corn optimization target, which is a measurement of how many content requests an EBS can fill successfully compared to how many requests it receives. 3 FRAMEWORK AND SYSTEM MODEL 3.1 SLOT-WISE DRL-BASED KD-ENHANCED COLLABORATIVE CACHING Following the past works Zhong et al. (2020a); Wang et al. (2020); Ye et al. (2021); Kirrlin et al. (2019), we formulate the cache replacement as MDP and use a DRL agent to solve it. **State** States, serving as input environmental information (i.e., cache condition), are represented here by request statistics of cached videos. Specifically, we introduce an $F$-dim video feature $s^i = \{s_{i1}s_{i1}, \cdots, s_{iN}s_{iN}\}$, where $s_i[s_j]$ denotes the number of items $s_i$ in cache lot $i$ was requested in the past period $a_i$, e.g., 10s, 20s, and 40s. We set the cache capacity as $C^5$. Additionally, the agent considers $C$ candidate videos that possess top $C$ feature modulus among all ever requested but uncached videos. The state of both the cached and candidate videos constructs $a_i$, which has a dimension of $2C \times F$. An AO whencache miss occurs, the EBS must determine which cache slot should be replaced with a candidate video. All possible replacement actions can be represented by a $C \times C$ one-hot binary matrix $a_i$, where the element $a_{ij} = 1$ indicates the action of replacing the video in the $i$-th cache slot with the $j$-th candidate video. **Reward** Our target is to optimize cache space utilization by increasing cache hit times. The reward function is determined based on the iterative cache hit times after each replacement action $a_i$: $$r_i = \frac{t_i^{max\delta(s_{i1}, s_{i2}, \dots, s_{iN})}}{B}$$ where $t_i^{max\delta(s_{i1}, s_{i2}, \dots, s_{iN})}$ is the cumulative hit times of the $i$-th cache slot, which is filled by the $j$-th candidate video, between the action time $t$ and the next adjacent cache miss. **Online Training** In a single-step action process, the DRL agent estimates the expected rewards of the cached and candidate content. The agent will act and continue to respond user requests until the next cache miss occurs, and then obtain the hit times and feed it back as a return. In addition, the agent will observe the current state $s_{i+1}$, then construct a transition tuple $\{s_1, a_i, r_i, s_{i+1}\}$, and store it in the replay buffer $D$. Every once in a while, the agent samples a batch of transition tuples $B_i = \{s_1, a_i, r_i\}$ from the replay buffer $D$ for model training. After sampling, we apply deep Q-learning (DQN) in Math. a. at (2015), a model-free agent and construct deep neural network $Q_G(\epsilon)$ to score contents based on their states. The loss is subsequently calculated using the sampled tuples: $$L(B(\epsilon)) = \frac{1}{\|B\|} \sum_{\{s_{i1}, s_{i2}, \dots, s_{iN}\} \in B} [Q_B(s_i, a_i) - y]^2$$ where $y = r_i + Q_B(s_{i+1}, a)$ represents the expected discount reward, with $y$ being the discount factor. $d$ represent the parameter of the target network, which is initialized with the same parameter as $Q_B$ and updated by soft moving average. The action $a$ is generated as follows: $$\pi_\theta(a_{i+1}|s_{i+1}) = \arg\max_{\{s_{i+1}\}} (Q_\theta(s_{i+1}, a_{i+1}))$$ --- 1 Following Zhong et al. (2020b); Cui et al. (2023), we assume all videos have the same size, and the cache capacity $C$ can be represented as the number of videos that can be stored at a EBS. --- Afterward, the network parameters can be updated using the following loss: $$\theta \leftarrow \theta - \alpha \nabla_\theta L (\mathcal{B}).$$ (4) The target network is updated through soft moving average: $$\hat{\theta} \leftarrow \lambda \theta + (1-\lambda) \hat{\theta}.$$ (5) **Slot-wise transformation** We assume that the time taken to retrieve video contents from the cache is significantly shorter than their delivery time and can be neglected. Thus, different orders of the same contents in the cache can be considered as the same storage state. We simplify the cache replacement problem at the granularity of the cache slot, separating the cached content from its storage location. This simplification effectively reduces the complexity of the network input. Furthermore, it enables training an agent capable of handling the cache replacement problem for EBS with varying cache capacities, which is particularly crucial for facilitating collaboration between EBSs. Based on the insight from Fig. 1(a), we adopt online KD to let EBSs share knowledge for better caching decisions. Each EBS distills knowledge from the model to share with other EBSs. Specifically, we begin by extracting the sample of knowledge distillation, which requires EBSs to randomly sample a bundle of transition tuple $S_k = \{(e_i, n_i, r_i, s_i, t_i)\}$ from the replay buffer $D$. Since each action of $d$ cache replacement involves only one cache slot, the state of other cache slots remains unchanged. Therefore, during the distillation process, the agent only needs to consider the $i$-th component of the transition if the content in the $i$-in slot is replaced, $S_k$ can be simplified as $S_k = \{s_1^i, r_1^i, s_{i+1}^i\}$. Next, EBSs needs to extract the sample from the tuple set by inputting $s_i^i$ into the popularity network $Q_{\omega}(c)$ and obtains knowledge labels $Q_{\omega}(s_i)$. We can formulate the distillation knowledge set $D_k$ with pairs of states and labels: $$D_k = \{(s_j^i, Q_{\theta}(s_j^i)), (s_i^j) \sim \bar{S}_k\},$$ (6) Then EBSs learns the knowledge shared by its partners. The reference partner list is $RL_k$, and the distillation data set $X_k = \{D_p \in RL_k\}$. After collecting $X_k$, EBSs calculates losses with pairs of states and labels, denoted as $(x,y)$ in Eq. 7, from partner EBSs on $Q_{\theta_k}$. The loss of the sample from the partner EBSs on $EBS_k$ is as follows: $$L^{ki} = \frac{1}{\|\mathcal{D}_p\|} \sum_{x,y \in D_p} |Q_{\theta_k}(x) - y|^2.$$ (7) Then the loss value is weighted by reference weight $EBS_s$ to its partners and averaged to obtain the final loss value: $$L_{kk} = \frac{1}{\|\mathcal{X}_k\|} \sum_{D_p \in X_k} \left(e^{v^i_{pk}} / \sum_j e^{v^j_{pk}}\right) L^{ki},$$ (8) where $v^i_{pk}$ represents the reference weight of EBS$_k$ to EBS$_s$ in time slice $l$. The weighted loss value guides the updating of the model parameters: $$\theta_k \leftarrow \theta_k - \alpha \nabla_{\theta_k} L_{kk}.$$ (9) ### 3.2 DDPG-BASED WEIGHT LEARNING According to Fig. 1(c), the trend and proportion of the same content in different domains is generally inconsistent, which may affect the timeliness and validity of knowledge. Hence, each EBS needs to dynamically maintain its reference weight to partner EBSs to obtain more seasonable and useful knowledge. Consequently, we employ the DDPG algorithm for EBSs to dynamically adjust reference weight to EBSs. **Definitions** We define $K_{pk} = \{l^{ki}_{pk}, j \in RL_k\}$, as the state observed by the reference weight agent, which corresponds to the loss of knowledge from partner EBSs. The reward is the number of cache hits between distillations, denoted as $r^{ki}_{pk}$. The action of reference weight agent is to calculate the weighted loss of partner EBSs, and thus the parameter of the action network $\sigma_{\theta_k}$ is defined by the reference weights to EBS partners: $$a^{kk}_{pk} \sim \sigma_{\theta_k} \left| a(K_{pk}) \right|.$$ (10) --- When the agent take next action, it will observe the state $K_{s'}$ simultaneously. Subsequently, we get distillation tuple $(K_{s}, a^{d}, r_{s'}^{d}, K_{s'})$, and save it in the distillation replay buffer $M_{dL}$. At regular intervals, reference weight agent on EBSs samples form buffer $M_{dL}$ and gets $\mathcal{Z} = (K_{s}, a^{d}, r_{s'}^{d}, K_{s'})$, which is abbreviated as $\mathcal{Z} = (s, a, s) \in \mathbb{Z}$ in the following equation. After sampling, we construct the critic network $Q_{c}$ to evaluate the utility of actor network and maximize the expected discount return: $$J(\sigma_{\theta}) = E_{\sigma_{g}} \left[ \sum_{(s, a, a') \in \mathbb{Z}} Q_{c}(s, a) \right].$$ (11) The loss can be calculated with the sampled tuples: $$L_{w}(\zeta_k) = \frac{1}{\|\zeta\|} \sum_{(s, a, a') \in \mathbb{Z}} [Q_{c}(s, a) - y]^2.$$ (12) where $y = r + q\mathcal{Q}_{c}(s', \sigma_{g}(s'))$ is expected discount reward, and $\eta$ is discount factor. The parameters of the target network are denoted as $\zeta_c$ and $\vartheta_b$, which are initialized with the same parameter as $Q_{c}$, and $\vartheta_b$, and updated by soft moving average. Following that, we update the parameters of the critic network: $$\zeta_k \leftarrow \zeta_c - \alpha \nabla_{c L} L_w (\zeta_k).$$ (13) Next, we calculate the gradient of the actor network: $$\nabla_{\theta_j} J \approx -\frac{1}{\|\zeta\|} \sum_{(s, a, a') \in \mathbb{Z}} \nabla_{c L} Q_{c}(s, a) \nabla_{c L} \sigma_{g}(s).$$ (14) Afterward, we update the parameters of the actor network using its gradient: $$\vartheta_k \leftarrow \vartheta_b - \alpha \nabla_{c L} J.$$ (15) Finally, we update the target network using soft moving average: $$\vartheta \leftarrow \tau \vartheta + (1 - \tau)\vartheta,$$ $$\xi \leftarrow \tau \xi + (1 - \tau)\xi.$$ (16) ### 3.3 CONTENT-AWARE REQUEST ROUTING MECHANISM As illustrated in Fig. 5, we let EBSs refer to all the other EBSs and calculate the average episode number of reference weights to all adverse partners dropping below $10^{-3}$. We first calculate the average loss of all partners knowledge in the first 10 episodes. Then we define partners with greater loss than current EBSs as adverse partners. Measurement of daily content similarity in 14 consecutive days, we found two distinct features, as marked in Fig. 2. To sum up, requested contents follow a certain periodicity on a weekly basis, which inspires us to use one week as the strategy cycle $T_1$, and divide the time of one week into time slices of equal length. The device learns the collaboration partners and reference weight respectively in each time slice, and use them at the same time in next week. --- The popularity of video content is constantly changing, so we introduce an update cycle, denoted as $T_w$ to reset partner links and reference weight after several strategy cycles. This approach helps improve the current time scale and the experimental configuration. Each strategy cycle consists of equal-length time slices, and each update cycle comprises equal-length strategy cycles. We define $W_n = T_w/T_p$, $N_s = T_p/T_r$. For $EBS_i$ in strategy cycle, we initialize reference partner list as $RL_i = \{RL_1^i, RL_2^i, \dots, RL_N^i\}$ and reference weight as $W_l = \{WL_1^l, WL_2^l, \dots, WL_n^l\}$. For the $l$-th time slice in strategy cycle, $RL_i^l = \{EB_s^i\}, \dots, EB_s^n\}$, where $n$ represents partner number of $EBS_i$. **Partner list adjustment** For each EBS, if it average loss of the distillation knowledge set from other EBSs within the current time slice is lower than the average loss of the training sampling set $B$ on that EBS, it will be removed from the reference partner list. It should be noted that, according to the above process, in each update cycle, the EBS only learns the reference partner list in the first strategy cycle and uses it at the same time in subsequent strategy cycle, rather than dynamically updating the list in the whole process. That is because if the list is updated and iterated throughout each update cycle, 1000 times bring about large time overhead, but also affect efficiency of each EBS in determining the reference weight to partner. ### 4 EVALUATION In this section, we conduct a series of experiments to evaluate the performance of our framework. Through these experiments, we aim to showcase the superior performance of our framework compared to other existing solutions in real-world scenarios. **Evaluation setup** We use the real-world data set from Kunishou company, including 0.35 billion content requests in Beijing for four weeks, and each request consists of a timestamp, location, and content ID. In our experiments, we select the requests from a 14km × 16km area in northwest Beijing approximately 1618 million requests to 2,887 million requests to 2,887 million requests. Including Haidian, Changeng, Dongcheng, Xicheng, and Chengqi District. Since the width of the partition does not affect the characteristics of the request (As shown in Fig. 1(c)), we set the width of each domain to 3km and divide this area into 8×7 tiles and gives EBS servers each tile of each domain an amount of each domain to 300 and 300 instances --- Evaluation metrics 1) EBS CHR: As we defined in preliminary, this metric represents the proportion of hit requests to all user requests, indicating the effectiveness of caching. 2) Average latency This metric refers to the time taken from transmission latency to moment it is received. It is a crucial component of a user’s quality of experience (QoE) and reflects the efficiency of content delivery. 3) Average traffic cost: Traffic cost refers to the amount of data and computational resources required to exchange information between EBSs and CDN servers. We consider the average traffic cost of each user request an essential metric for evaluating the resource utilization. Since obtaining the real latency and traffic cost is not feasible, we estimate the average latency by assuming that fetching contents from the CDN server takes ten times longer than retrieving requested contents from the local EBS cache in the case of a cache hitZhong et al. (2020a). Similarly, the average traffic cost between EBSs and the CDN server is assumed to be ten times the average cost of retrieval services, providing an approximation of the resource consumption in the systemKrajaa & Fojova (2011). Figure 6: The average video CHR, request latency, and traffic cost of different baseline approaches. Overall performance We measure oracle results under the current experimental conditions. Specifically, we assume that all future user requests are known, and all EBSs adjust the cached contents based on future requests to achieve the maximum CHR. The average CHR and oracle results for each EBS request compare with KDGICEC-$\beta$ demonstrated in section 4.10 and KDGICEC-$\beta$ compared to the state-of-the-art multi-agent DRL-based approach and the multi-agent collaborative DRL-based approach, respectively. In comparison, the rule-based cache replacement methods LFU and LRU could only achieve 52.27% and 64.30% of the CHR achieved by KDGICEC-$\beta$. Our focus was primarily on comparing the performances of KDGICEC-$\beta$, MAAC, and MacOCache in order to evaluate the effectiveness of demonstration based on increase of the CHR in Q1, Q2, Q3, and Q4, respectively, indicating that our framework performs better in domains with sparse requests. As for MacOCache, KDGICEC-$\beta$ exhibited improvements of 4.74%, 4.98%, 2.92%, and 6.03% for CHR for groups Q1 to Q4, demonstrating that our framework can deliver results for these groups collaboratively. KDGICEC-$\beta$ is noted that the strategy of collaborative caching is less beneficial for request-dense EBS; however, KDGICEC-$\beta$ showed the most significant improvement among these EBSs compared to MacOCache. This suggests that the MacOCache approach, which forbids binder cooperative learning partners for request-dense EBS, hinders the model learning process. Moreover, as illustrated in this study, our collaboration pattern collaboratively improves KDGICEC-$\beta$ in 1.60%, 0.94% transmission latency compared with LFU, LRU, MAAC, and MacOCache, which improves the smoothness of user experience. The superiority of our approach increased with the request density of each EBS, demonstrating that KDGICEC-$\beta$ exhibits excellent scalability in complex and heterogeneous edge caching environments. Table 1: Performance metrics of different baselines | Method | CHR | Latency | Traffic cost | | :--- | :--- | :--- | :--- | | KDGICEC-$\beta$ | 1.781 | 8.3970 | 8.2661 | | KDGICEC-$\alpha$ | 0.1766 | 8.4194 | 8.6908 | | LFU | 0.0931 | 9.1621 | 9.6091 | | LRU | 0.1145 | 8.9693 | 8.8548 | | MAAC | 0.1629 | 8.5338 | 8.4521 | | MacOCache | 0.1692 | 8.4771 | 8.6942 | --- Besides, communication between EBEs incurring additional traffic costs, while improving the cache replacement strategy can reduce the traffic cost between EBEs and DGN servers. Although KDGDCEC7 depends onollization and simlly communication to show an optimal cache replacement strategy, it manages to reduce 8.55% to 6.65%, 2.20% to 4.92% traffic cost compared with LFU, LRU, MAAC, and MacOcache, as shown in Fig. 6(c). It is worth mentioning that MacOcache not only transfers video content but also share strategy between EBEs, which leads to more traffic containment and improvement of the MGI et al. This result also indicate that our framework can transmit richer information with less traffic, which is attributed to the efficient organization of collaboration. Table 1 shows the metric statistics of KDGDCEC and several baselines. 5 RELATED WORKS Collaborative Edge Cache In the scenario of edge caching, collaborative strategy between EBEs can improve the efficiency of transmission and caching, thus reducing the transmission delay of content and improving the MGI et al. This result also indicate that our framework can transmit richer information with less traffic, which is attributed to the efficient organization of collaboration. Table 1 shows the metric statistics of KDGDCEC and several baselines. Online Knowledge Distillation Our work is more inclined to use the online knowledge distillation method without the pre-trained teacher model. Through learning from the peer prediction to train a group of student models can remit the problem of missing teacher. Zhang et al. (2018) and KDGDCEC7 are vigorous who maintain corresponding to a shared communication environment. If the cache fails this, the device will be relied on for a relatively rapid response. However, coordination among cache devices at the strategic level is relatively unexplored, i.e., EBEs share caching intelligence (e.g. parameters of cache replacement model), which is the key focus of our work. Online Knowledge Distillation Some works define teacher work in advance Anil et al. (2018); Furlaniello et al. (2018); Kao et al. (2021); Ha et al. (2021); Liu et al. (2020); Gao et al. (2021; et al. 2021) introduced an assistant model to require all student models to share the same early block to further reduce the cost of training. Some works define teacher work in advance Anil et al. (2018); Furlaniello et al. (2018); Kao et al. (2021); Ha et al. (2021); Liu et al. (2020); Gao et al. (2021; et al. 2021) introduced an assistant model to require all student models to share the same early block to further reduce the cost of training. However, simply treating each peer as equally important or defining teacher in advance are not robust enough, because the unreliability of teacher selection will limit students’ learning to a certain extent. Therefore, there is still a lot of room for improvement in collaborative edge caching. 6 FURTHER DISCUSSION When constructing video popularity features, limited to the given data set, we only use the statistical results of the number of requests in different lengths of time to construct the features of each video, which is a forced simplification. According to our listed related work in the appendix, there are many successful methods for predicting content popularity, and these methods have achieved good results, but this is not our focus of this paper. In addition, the vast majority of video content in our dataset is using knowledge distillation (KD) to transfer knowledge from request-dense EBEs to request-sparse ones. However, our measurement shows that video requests are susceptible to spatio-temporal drift. Consequently, we design a dynamically adaptive collaboration framework called KDGDCEC, which incorporates a proficient slot-wise RL agent capable of concurrently training EBEs with varying capacities simultaneously. Our measurements validate the effectiveness of our algorithm, as we achieve the highly caching hit rate low latency and communication overhead.
yONJt6nFc3
sbbIyzSBbp
"The improvements reported in the paper are astonishing. I wonder if the authors conduct any signifi(...TRUNCATED)
true
"# NODE DUPLICATION IMPROVES COLD-START LINK PREDICTION\n\n## Anonymous authors\nPaper under double-(...TRUNCATED)
yONJt6nFc3
sbbIyzSBbp
"Following Q1, it is surprising that all categories of nodes get benefited a lot after duplicating ((...TRUNCATED)
true
"# NODE DUPLICATION IMPROVES COLD-START LINK PREDICTION\n\n## Anonymous authors\nPaper under double-(...TRUNCATED)
yONJt6nFc3
sbbIyzSBbp
"Compared to Hits@10, Hits@1 could be more critical in the real-world applications, especially for t(...TRUNCATED)
true
"# NODE DUPLICATION IMPROVES COLD-START LINK PREDICTION\n\n## Anonymous authors\nPaper under double-(...TRUNCATED)
yONJt6nFc3
sbbIyzSBbp
"Following W2, the authors should consider conducting a set of experiments using all remaining nodes(...TRUNCATED)
false
"# NODE DUPLICATION IMPROVES COLD-START LINK PREDICTION\n\n## Anonymous authors\nPaper under double-(...TRUNCATED)
yONJt6nFc3
sbbIyzSBbp
Following W3, I wonder why so many reported metrics are contradict with existing studies.
false
"# NODE DUPLICATION IMPROVES COLD-START LINK PREDICTION\n\n## Anonymous authors\nPaper under double-(...TRUNCATED)
End of preview. Expand in Data Studio
README.md exists but content is empty.
Downloads last month
41