22.850
6.2.1.2 Analysis on ML model training
The term 'ML model training' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.2-1. RAN WG3 follows the definition of SA WG5.

Table 6.2.1.2-1: Definition of ML model training as defined across 3GPP WGs

TSG (TS/TR)	ML model training
SA WG5 TS 28.105 [9]	A process performed by an ML training function to take training data, run it through an ML model algorithm, derive the associated loss and adjust the parameterization of that ML model iteratively based on the computed loss and generate the trained ML model.
SA WG6 TS 23.482 [34]	According to TS 28.105 [9], ML model training includes capabilities of an ML training function or service to take data, run it through an ML model, derive the associated loss and adjust the parameterization of that ML model based on the computed loss.
RAN WG1 TR 38.843 [3]	A process to train an AI/ML Model [by learning the input/output relationship] in a data driven manner and obtain the trained AI/ML Model for inference.
RAN WG3 TS 38.300 [11]	AI/ML Model Training follows the definition of the "ML model training" as specified in clause 3.1 of TS 28.105 [9].
RAN WG3 TS 38.401 [13]	AI/ML Model Training follows the definition of the "ML model training" as specified in clause 3.1 of TS 28.105 [9].

The following unified definition for 'ML model training' is proposed:

ML model training: A process to train an ML model by learning the input/output relationship in a data driven manner and obtain the trained ML model for e.g. inference.
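The training process described in the TS 28.105 [9] definition (take training data, run it through the model, derive the associated loss, and adjust the parameterization iteratively) can be sketched as follows. This is a purely illustrative, minimal example and not part of any 3GPP specification; the function name and the one-parameter linear model are assumptions for illustration.

```python
# Illustrative sketch of loss-driven iterative training per the TS 28.105
# definition: take training data, run it through the model, derive the loss,
# and adjust the parameterization based on the computed loss.

def train_ml_model(training_data, epochs=200, lr=0.05):
    """Fit y = w * x by iterative, loss-driven parameter adjustment."""
    w = 0.0  # initial parameterization of the ML model
    for _ in range(epochs):
        # Derive the gradient of the mean-squared-error loss over the data.
        grad = 0.0
        for x, y in training_data:
            grad += 2 * (w * x - y) * x
        grad /= len(training_data)
        w -= lr * grad  # adjust the parameterization based on the computed loss
    return w  # the trained ML model (here: a single weight)

# Data generated by y = 3x; training should recover w close to 3.
trained_w = train_ml_model([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)])
```

The loop mirrors the definition's structure: each iteration consumes the training data, computes a loss gradient, and updates the model's parameterization until a trained model results.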
6.2.1.3 Analysis on ML model re-training
The term 'ML model re-training' has been defined differently by SA WG5 and RAN WG1, as illustrated in Table 6.2.1.3-1. RAN WG1 introduces two further terms, i.e. ML model parameter update and ML model update, which correspond conceptually to ML model re-training.

Table 6.2.1.3-1: Definition of ML model re-training / ML model parameter update as defined across 3GPP WGs

TSG (TS/TR)	ML model re-training / ML model parameter update / ML model update
SA WG5 TS 28.105 [9]	ML model re-training: A process of training a previous version of an ML model and generate a new version.
RAN WG1 TR 38.843 [3]	ML model parameter update: A process of updating the model parameters of a model.
Model update: A process of updating the model parameters and/or model structure of a model.
SA WG6 TS 23.482 [34]	ML model update: A process of training a new version of an ML model and updating its parameters.

The term ML model re-training is proposed as the unified term, rather than using different terms such as 'ML model parameter update' or 'ML model update', which align conceptually but vary in technical detail and scope depending on the WG definition. The following unified definitions for 'ML model re-training' and 'ML model update' are proposed:

ML model re-training: A process of training a previous version of an ML model and generating a new version.

ML model update: A process that improves the performance or behaviour of an ML model through actions such as re-training, parameter adjustment, structural modification, or deployment of a new version.

In addition, it should be clarified that the term "model update" as used by RAN WG1 [3] may include changes to both model parameters and model structure. In contrast, TS 28.105 [9] defines ML model re-training as a process that does not alter the model structure, focusing instead on updating the model's parameters, for example, when model performance degrades or when a new set of training data becomes available.
TS 28.105 [9] also describes ML model update as a broader concept that may involve various internal actions to improve the inference capabilities of an AI/ML inference function. These actions may include re-training, downloading new configurations, or triggering related processes. The term is used in contexts where the consumer requests improved capabilities, but the exact internal update mechanism may not be exposed. This illustrates that ML model update in TS 28.105 [9] encompasses re-training but may also refer to other means of updating model behaviour or performance.
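The distinction drawn above can be sketched in code: re-training yields a new version with adjusted parameters only, while a broader model update may additionally change the model structure. The dictionary encoding and function names below are illustrative assumptions, not normative constructs.

```python
# Assumed, minimal encoding of the re-training vs. model-update distinction:
# re-training changes parameters only; an update may also change structure.
import copy

def retrain(model, new_params):
    """Produce a new version of the model with adjusted parameters only."""
    new_model = copy.deepcopy(model)
    new_model["params"] = new_params
    new_model["version"] += 1  # a new version of the same structure
    return new_model

def model_update(model, new_params=None, new_structure=None):
    """Broader update: parameters and/or structure may change."""
    new_model = copy.deepcopy(model)
    if new_params is not None:
        new_model["params"] = new_params
    if new_structure is not None:
        new_model["structure"] = new_structure
    new_model["version"] += 1
    return new_model

v1 = {"params": [0.1, 0.2], "structure": "2-layer", "version": 1}
v2 = retrain(v1, [0.3, 0.4])                 # structure preserved
v3 = model_update(v2, new_structure="3-layer")  # structure changed
```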
6.2.1.4 Analysis on ML model testing
The term 'ML model testing' has been defined differently by SA WG5 and RAN WG1, as illustrated in Table 6.2.1.4-1.

Table 6.2.1.4-1: Definition of ML model testing as defined across 3GPP WGs

TSG (TS/TR)	ML model testing
SA WG5 TS 28.105 [9]	A process of evaluating the performance of an ML model using testing data different from data used for model training and validation.
RAN WG1 TR 38.843 [3]	A subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.

The following definition for 'ML model testing' in TS 28.105 [9] is proposed as a unified definition:

ML model testing: A process of evaluating the performance of an ML model using test data different from data used for model training and validation.

NOTE: Regarding the definition adopted in TR 38.843 [3], it should be clarified that RAN WG1 considers ML model testing a subprocess of training and notes that testing does not assume subsequent tuning of the model. In contrast, TS 28.105 [9] defines ML model testing as a step distinct from training, focused solely on evaluating the performance of an ML model using test data that is different from the data used for training and validation. Furthermore, according to TS 28.105 [9], if the performance of the ML model does not meet the target set during testing, the model may be re-trained. This highlights that while testing itself does not include tuning, it can lead to re-training actions based on performance outcomes. Therefore, the definition in TS 28.105 [9] reflects a clearer separation between the training and testing phases, with an explicit linkage to lifecycle management decisions such as re-training.
6.2.1.5 Analysis on ML model inference
The term 'ML model inference' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.5-1. RAN WG3 follows the definition of SA WG5.

Table 6.2.1.5-1: Definition of ML model inference as defined across 3GPP WGs

TSG (TS/TR)	ML model inference
SA WG5 TS 28.105 [9]	A process of running a set of input data through a trained ML model to produce a set of output data, such as predictions.
SA WG6 TS 23.482 [34]	According to TS 28.105 [9], ML model inference includes capabilities of an ML model inference function that employs an ML model and/or AI decision entity to conduct inference.
RAN WG1 TR 38.843 [3]	A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
RAN WG3 TS 38.300 [11]	AI/ML Model Inference follows the definition of the "AI/ML inference" as defined in clause 3.1 of TS 28.105 [9].
RAN WG3 TS 38.401 [13]	AI/ML Model Inference follows the definition of the "AI/ML inference" as defined in clause 3.1 of TS 28.105 [9].

The following unified definition for 'ML model inference' is proposed:

ML model inference: A process of running a set of inputs through a trained ML model to produce a set of outputs.
6.2.1.6 Analysis on ML model activation & ML model de-activation
The terms 'ML model activation' and 'ML model deactivation' have been defined by RAN WG1, as illustrated in Table 6.2.1.6-1. SA WG5 mentions the terms ML activation and ML deactivation several times in TS 28.105 [9] but does not provide a definition; instead, it defines AI/ML activation/deactivation at the scope of an inference function.

Table 6.2.1.6-1: Definition of ML model activation & ML model de-activation as defined across 3GPP WGs

TSG (TS/TR)	ML model activation & ML model de-activation
SA WG5 TS 28.105 [9]	AI/ML activation: a process of enabling the inference capability of an AI/ML inference function.
AI/ML deactivation: a process of disabling the inference capability of an AI/ML inference function.
RAN WG1 TR 38.843 [3]	ML Model activation: enable an AI/ML model for a specific AI/ML-enabled feature.
ML Model deactivation: disable an AI/ML model for a specific AI/ML-enabled feature.

The following unified definitions for 'ML model activation' and 'ML model deactivation' are proposed:

ML model activation: A process to enable an ML model for a specific AI/ML-enabled feature.

ML model deactivation: A process to disable an ML model for a specific AI/ML-enabled feature.

Editor's note: Further analysis is required to determine whether ML model activation and deactivation should be associated specifically with inference capability (according to TS 28.105 [9]) or with the ML model more broadly (according to TR 38.843 [3]).
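The proposed definitions (enabling or disabling an ML model for a specific AI/ML-enabled feature) can be sketched as a minimal mapping from features to active models. The feature and model names below are hypothetical.

```python
# Minimal, assumed sketch of the proposed unified definitions: activation
# enables, and deactivation disables, an ML model for a specific
# AI/ML-enabled feature.

def activate(feature_models, feature, model_id):
    """Enable an ML model for a specific AI/ML-enabled feature."""
    feature_models[feature] = model_id

def deactivate(feature_models, feature):
    """Disable whatever ML model is currently enabled for the feature."""
    feature_models.pop(feature, None)

active = {}
activate(active, "CSI-feedback", "model-A")   # model-A now serves the feature
deactivate(active, "CSI-feedback")            # the feature has no active model
```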
6.2.1.7 Analysis on ML model lifecycle
The term 'ML model lifecycle' has been defined by SA WG6, as illustrated in Table 6.2.1.7-1. However, SA WG2 TS 23.288 [8], SA WG2 TR 23.700-84 [7], SA WG4 TR 26.927 [12], SA WG5 TS 28.105 [9], SA WG6 TR 23.700-82 [4], RAN WG1 TR 38.843 [3] and RAN WG3 also mention one or more phases of the ML model lifecycle without providing a clear definition of ML model lifecycle.

Table 6.2.1.7-1: Definition of ML model lifecycle as defined across 3GPP WGs

TSG (TS/TR)	ML model lifecycle
SA WG6 TS 23.482 [34]	The lifecycle of an ML model, aka ML model operational workflow, consists of a sequence of ML operations for a given ML task / job (such a job can be an analytics task or a VAL automation task). This definition is aligned with the 3GPP definition of ML model lifecycle according to TS 28.105 [9].
SA WG5 TS 28.105 [9]	ML model training: includes initial training and re-training, as well as validation of the ML model using training and validation data. If the validation results do not meet expectations (e.g. unacceptable variance), re-training is required.
ML model testing: evaluates the performance of a trained ML model using testing data. If the results do not meet expectations, re-training is required before proceeding.
AI/ML inference emulation (optional): allows testing the inference performance of an ML model in an emulation environment before deploying it to the target network or system. If the emulation performance does not meet the target requirements, the model may require further re-training.
ML model deployment: involves the process of loading a trained ML model to make it available for use at the target AI/ML inference function. Deployment may not be needed if the training and inference functions are co-located.
AI/ML inference: performing inference using a trained ML model at the AI/ML inference function. The inference process may trigger model re-training or updates based on performance monitoring and evaluation.
The following unified definition for 'ML model lifecycle' is proposed:

ML model lifecycle: The end-to-end process typically consisting of data processing, model training, model testing, model deployment, model inference, model monitoring and model maintenance.

NOTE 1: Data processing includes collecting and preparing the data for model training and model inference.
NOTE 2: Model training includes training and validating the model before model deployment.
NOTE 3: Model testing includes testing the model before model deployment.
NOTE 4: Model deployment includes making a trained ML model available for use in the target environment.
NOTE 5: Model monitoring includes observing the performance of the model during the model maintenance process.
NOTE 6: Model maintenance includes updating the model, retraining the model and (de-)activating the model.
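The proposed lifecycle can be encoded as an ordered sequence of phases, with maintenance looping back into training (e.g. for re-training). This is an assumed illustration; the enum and transition rule are not normative.

```python
# Assumed encoding of the proposed unified ML model lifecycle as an ordered
# sequence of phases; phase names follow the proposed definition and NOTEs.
from enum import Enum

class MLModelLifecyclePhase(Enum):
    DATA_PROCESSING = 1   # collect and prepare data (NOTE 1)
    MODEL_TRAINING = 2    # train and validate (NOTE 2)
    MODEL_TESTING = 3     # test before deployment (NOTE 3)
    MODEL_DEPLOYMENT = 4  # make the trained model available (NOTE 4)
    MODEL_INFERENCE = 5
    MODEL_MONITORING = 6  # observe performance (NOTE 5)
    MODEL_MAINTENANCE = 7 # update, re-train, (de-)activate (NOTE 6)

def next_phase(phase):
    """Advance through the lifecycle; maintenance loops back to training."""
    if phase is MLModelLifecyclePhase.MODEL_MAINTENANCE:
        return MLModelLifecyclePhase.MODEL_TRAINING  # e.g. re-training
    return MLModelLifecyclePhase(phase.value + 1)
```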
6.2.1.8 Analysis on ML model lifecycle management
SA WG5 describes the ML model lifecycle in clause 4a.0 of TS 28.105 [9], and ML model lifecycle management capabilities for ML model training, ML model testing, ML inference emulation (optional), ML model deployment and AI/ML inference in clause 6.1 of TS 28.105 [9]. The terms 'ML model-based lifecycle management', 'ML-enabled functionality' and 'Functionality-based lifecycle management' have been defined by RAN WG1, as illustrated in Table 6.2.1.8-1.

Table 6.2.1.8-1: Definitions of ML model-based lifecycle management, ML-enabled functionality and Functionality-based lifecycle management as defined across 3GPP WGs

TSG (TS/TR)	ML model lifecycle management / Functionality-based lifecycle management
RAN WG1 TR 38.843 [3]	ML model-based lifecycle management: Operates based on identified logical models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature / Feature Group and additional conditions (e.g. scenarios, sites and datasets) as determined/identified between UE-side and NW-side. The models are identified at the Network, and Network/UE may activate/deactivate/select/switch individual AI/ML models via model ID.
(ML-enabled) Functionality: An AI/ML-enabled Feature/Feature Group enabled by configuration(s), where configuration(s) is(are) supported based on conditions indicated by UE capability.
Functionality-based lifecycle management: Signaling procedure where network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signaling (e.g. RRC, MAC-CE, DCI); operates based on, at least, one configuration of AI/ML-enabled Feature/FG or specific configurations of an AI/ML-enabled Feature / Feature Group.
SA WG5 TS 28.105 [9]	ML model training management: enables requesting, consuming and controlling ML model training and re-training processes. It includes training performance management and policy setting for producer-initiated training.
ML model testing management: allows requesting and receiving ML model testing results, selecting performance metrics and triggering model re-training based on test performance.
ML model loading management: supports triggering, controlling and monitoring the ML model loading process as part of model deployment.
AI/ML inference management: allows managing inference functions and/or ML model(s), including activation/deactivation, output parameter configuration, performance monitoring and triggering model updates if necessary.

The following unified definition for 'ML model lifecycle management' is proposed:

ML model lifecycle management: The management capabilities allowing a producer or consumer to manage the different phases of the ML model lifecycle as defined in clause 6.2.1.7.

The following definition for 'Functionality-based lifecycle management' is proposed for adoption by all 3GPP RAN Working Groups:

Functionality-based lifecycle management: Signalling procedure where the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g. RRC, MAC-CE, DCI); operates based on, at least, one configuration of an AI/ML-enabled Feature / Feature Group or specific configurations of an AI/ML-enabled Feature/FG.

NOTE 1: In the context of RAN1, RAN2 and RAN4, functionality-based lifecycle management does not consider the training, testing and maintenance phases and treats them as implementation-specific.
NOTE 2: Applicability of the Functionality-based lifecycle management definition to/in TSG SA WGs is optional.

Editor's note: The following analysis of the key differences between ML Model LC and LCM is to be revised and possibly relocated to a different clause in this TR.

Key differences between ML Model lifecycle (LC) and ML Model lifecycle management (LCM)

TS 28.105 [9] defines both ML model lifecycle (LC) and ML model lifecycle management (LCM) within the scope of AI/ML management in 3GPP networks.
The key differences between the two are:

ML model lifecycle (LC) describes the essential steps (phases) an ML model undergoes, from training to inference. It consists of:
- ML model training (e.g. initial training & re-training).
- ML model testing.
- ML emulation.
- ML model deployment (including ML model loading).
- AI/ML inference.

ML model lifecycle management (LCM) focuses on the management capabilities that control and optimize each phase of the ML model lifecycle. LCM enables functionalities such as:
- Training management (e.g. triggering re-training, setting policies).
- Testing management (e.g. evaluating performance, determining re-training needs).
- Deployment management (including ML model loading).
- Inference management (e.g. monitoring inference results, managing AI/ML inference functions).

LCM, as specified in TS 28.105 [9], encompasses the full lifecycle management of both ML models and AI/ML inference functions. This means that LCM does not only manage the ML model while inference remains a separate process; rather, it ensures a unified management approach that includes both:
- ML model lifecycle management, covering the entire lifecycle of the ML model itself, including its training, validation, deployment and inference.
- AI/ML inference function lifecycle management, ensuring that inference operations are properly activated, configured, monitored and optimized.

For example, LCM in TS 28.105 [9] enables not just the deployment of an ML model but also the continuous management of its inference functions, such as their activation, configuration and real-time monitoring. This differs from a narrower view of the lifecycle (LC), which only considers inference as a step where the ML model is applied, without addressing its ongoing management.
6.2.1.9 Analysis on usage of ML Model identifier in each Working Group
6.2.1.9.1 RAN WG1
As part of the RAN WG1-led "Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface" work in RAN WGs, documented in TR 38.843 [3], RAN is studying two flavours of LCM: Functionality-based and ML model-based (details in clause 4.2.1 of TR 38.843 [3] and clause 6.2.1.8).

The study on the usage of the ML Model identifier is still ongoing, and some interim agreements within TR 38.843 [3] are:
- For Functionality-based LCM: Model ID, if needed, can be used in a Functionality (defined in functionality-based LCM) for LCM operations.
NOTE: Functionality-based LCM is most suitable for UE-side ML models.
- For Model-ID-based LCM of UE-side models and/or the UE-part of two-sided models, model-ID-based LCM operates based on identified models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature/FG and additional conditions (e.g. scenarios, sites and datasets) as determined/identified between UE-side and NW-side.
- For two-sided ML models, in order to select a UE-side ML model (CSI generation model) that is compatible with the NW-side ML model (CSI reconstruction model), pairing information (model pairing) between the UE and gNB can be established based on ML Model identifier(s).

Analysis on usage of ML Model identifier:
- The study is ongoing and there are no concrete conclusions so far.
- For two-sided models, model pairing between the UE-side ML model and the NW-side ML model is based on ML Model identifiers.
- How an ML Model identifier is assigned to a trained ML model has not been discussed.
- How an ML Model identifier is related to different functions has not been discussed.
6.2.1.9.2 RAN WG3
As part of the RAN WG3 work in TS 38.300 [11], the following scenarios are supported:
- AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the NG-RAN node.
- AI/ML Model Training and AI/ML Model Inference are both located in the NG-RAN node.

Analysis on usage of ML Model identifier:
- For the case where the AI/ML model is trained at the OAM, the ML model ID is used as defined in TS 28.105 [9].
6.2.1.9.3 SA WG2
As part of the work defined in TS 23.288 [8]:
- An ML model is trained by the NWDAF MTLF.

Figure 6.2.1.9.3-1: ML model training/identification in AIML related work in SA WG2

- The training may be triggered by request(s) from one or more ML model consumer(s) (i.e. NWDAF AnLF). The NWDAF AnLF indicates the purpose of the trained ML model by including an Analytics identifier (and other parameters) as described in TS 23.288 [8].
- The NWDAF MTLF trains an ML model and assigns an ML Model identifier. The trained ML model and the assigned ML Model identifier are provisioned to the NWDAF AnLF.
- The AnLF associates the trained ML model and its corresponding ML Model identifier with a specific analytics request (identified by an Analytics ID).
- A trained ML model may be stored at a repository (i.e. ADRF) for use by other analytics consumers. The trained ML model is identified at the ADRF based on the ML Model identifier. No additional metadata is stored at the ADRF to identify the capabilities (e.g. supported Analytics) of the trained ML model.

Analysis on usage of ML Model identifier:
- The ML Model identifier identifies the provisioned ML model.
- Only the ML model consumer (AnLF) is aware of the capabilities of the trained ML model, by associating the trained ML model and its corresponding ML Model identifier with a specific analytics request, identified by an Analytics ID, during an ML model training request.
- When a trained ML model is stored in a repository, the ML Model identifier on its own cannot be used to identify the capabilities of the ML model.
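The SA WG2 behaviour described above can be sketched as two separate stores: the AnLF keeps the association between an Analytics ID and the ML Model identifier, while the repository (ADRF) indexes models by identifier alone, with no capability metadata. The identifiers and data structures below are hypothetical.

```python
# Assumed sketch of the SA WG2 flow: only the consumer (AnLF) knows which
# Analytics ID a provisioned model serves; the ADRF stores the model by its
# ML Model identifier alone, without capability metadata.

anlf_associations = {}  # Analytics ID -> ML Model identifier (held at AnLF)
adrf_repository = {}    # ML Model identifier -> model artefact (held at ADRF)

def provision_model(analytics_id, model_id, model_artifact):
    anlf_associations[analytics_id] = model_id  # capability known to the AnLF
    adrf_repository[model_id] = model_artifact  # no capability info stored

provision_model("AnalyticsID-42", "mtlf-model-7", b"<serialized model>")
# The ADRF can retrieve "mtlf-model-7", but cannot by itself answer which
# Analytics the model supports; that mapping exists only at the AnLF.
```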
6.2.1.9.4 SA WG5
As part of the work defined in TS 28.105 [9]:
- An ML model is trained by the ML training MnS producer.

Figure 6.2.1.9.4-1: ML model training/identification in AIML related work in SA WG5

- The training may be triggered by request(s) from one or more ML training MnS consumer(s). The consumer may be, for example, a network function, a management function, or an operator. The MnS consumer specifies in the ML training request the inference type, which indicates the function or purpose of the ML model, e.g. CoverageProblemAnalysis, i.e. the MDA type for coverage problem analysis (see TS 28.104 [71]), or NgRanInferenceType, which indicates the type of inference that the ML model for NG-RAN supports (see TS 28.105 [9]).
- The ML training MnS producer assigns an ML Model identifier to the trained ML model that is provisioned to the MnS consumer. The ML Model identifier identifies the provisioned ML model.
- A trained ML model may be stored at a repository for use by other MnS consumers. The trained ML model is identified at the repository based on the ML Model identifier. No additional metadata is stored in the repository to identify the capabilities of the trained ML model.

Analysis on usage of ML Model identifier:
- The ML Model identifier identifies the provisioned ML model.
- The ML Model identifier is used to uniquely identify an ML model instance managed within the 5G system.
- The ML model consumer (MnS consumer) is aware of the capabilities of the trained ML model by associating the trained ML model and its corresponding ML Model identifier with a specific inference type during the ML training request.
- When a trained ML model is stored in a repository, the ML Model identifier on its own cannot be used to identify the capabilities of the ML model.

To support various management operations such as training, inference and deployment, the following layered identification structure is defined:
- ML model identifier: uniquely identifies an ML model. It is assigned when an ML model is created (e.g. after initial training).
- ML model version: specifies a particular version of an ML model, to differentiate between versions resulting from re-training or updates.
- ML model ref: a reference construct that encapsulates both the ML model identifier and the ML model version, and can optionally include a URI pointing to the model in an internal or external registry/repository.

The use of ML model ref is particularly relevant when:
- Referring to a specific ML model version in an ML training request, ML training report, or ML inference job.
- Supporting re-training workflows, where the MnS consumer explicitly refers to an existing model to be updated.
- Supporting external ML model repositories, by using URIs in ML model ref.
- Simplifying job definitions by using a single reference field instead of separate ID and version fields.

NOTE: In initial training scenarios, the ML model ref is typically not used, as the ML model does not yet exist. Instead, the AIML Inference Name is provided in the ML training request to indicate the intended inference behaviour or use case of the new ML model. The MnS producer then generates the ML model identifier and optionally assigns a model version upon successful completion of training.

This structured identification framework enables:
- Lifecycle management and tracking of ML models.
- Flexible orchestration of ML training and ML inference operations.
- Potential extension toward integration with federated or external model repositories.
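The layered identification structure above (identifier, version, and a reference construct with an optional repository URI) can be sketched as a small data type. The field and method names are illustrative assumptions, not the normative NRM attribute names of TS 28.105 [9].

```python
# Sketch of the layered identification structure: an ML model identifier,
# an optional version, and a "ref" construct that bundles both and may
# carry a URI into an internal or external registry/repository.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MLModelRef:
    model_id: str                         # uniquely identifies the ML model
    model_version: Optional[str] = None   # differentiates re-trained versions
    repository_uri: Optional[str] = None  # optional pointer into a registry

    def as_reference(self) -> str:
        """Single reference field combining ID and version."""
        return f"{self.model_id}@{self.model_version or 'latest'}"

# E.g. referring to a specific version in a re-training request:
ref = MLModelRef("model-001", "v2", "https://models.example.org/model-001")
```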
6.2.1.9.5 SA WG6
As part of the work defined in TS 23.482 [34]:
- The ML model ID uniquely identifies the application-layer ML model.

Figure 6.2.1.9.5-1: ML model training/identification in AIML related work in SA WG6

- A VAL server may offload training of an ML model to an AIMLE server.
- An AIMLE server may train an ML model based on request(s) from consumer(s) (VAL server). Two options are supported:
- A VAL server may request to offload training of an application-layer ML model to an AIMLE server, where the model training request includes the ML Model identifier.
- A VAL server may request to train a model for analytics supported by ADAES, where the model training request includes the analytics identifier.
- The ML model information, including the ML Model identifier and model capabilities, can be stored in a model repository.

Analysis on usage of ML Model identifier:
- In scenarios where the AIMLE trains an application-layer ML model, an ML Model identifier can implicitly identify the capabilities of the ML model.
- In scenarios where the AIMLE trains an ML model for ADAES services, the ML Model identifier identifies the provisioned ML model. If the trained ML model is stored in an ML repository, then the information stored in the repository may include the capabilities of the ML model identified by an ML Model identifier (e.g. supported Analytics ID).
6.2.1.10 Analysis on ML model pre-specialized training and ML model fine-tuning
The terms 'ML model pre-specialized training' and 'ML model fine-tuning' have been defined by SA WG5, as illustrated in Table 6.2.1.10-1. No other 3GPP WG has yet adopted these terms in their activities.

Table 6.2.1.10-1: Definition of ML model pre-specialized training and ML model fine-tuning

TSG (TS/TR)	ML model pre-specialized training and ML model fine-tuning
SA WG5 TS 28.105 [9]	ML model pre-specialized training: the process of training an ML model on a dataset not specific to any type of inference.
ML model fine-tuning: the process of training a pre-specialised trained ML model to narrow its inference scope to a new single inference type, generating a new ML model.
NOTE 1: The pre-specialised trained model supports an inference scope that may be potentially adapted to support a list of inference types, such as MDA types in MDA, analytics types in NWDAF, types of AI/ML supported functions in NG-RAN, or vendor-specific extensions.
NOTE 2: The inference scope refers to a list of inference types that the ML model may be potentially adapted to support.
NOTE 3: The type of inference represents the specific type of ML inference supported by the model, such as MDA types in MDA, Analytics types in NWDAF, types of AI/ML supported functions in NG-RAN, or vendor-specific extensions.

The following unified definitions for 'ML model pre-specialized training' and 'ML model fine-tuning' are proposed:

ML model pre-specialized training: the process of training an ML model on a dataset not specific to any type of inference.

ML model fine-tuning: the process of training a pre-specialised trained ML model to narrow its inference scope to a new single inference type, generating a new ML model.

NOTE 1: The pre-specialised trained model supports an inference scope that may be potentially adapted to support a list of inference types, such as MDA types in MDA, analytics types in NWDAF, types of AI/ML supported functions in NG-RAN, or vendor-specific extensions.
NOTE 2: The inference scope refers to a list of inference types that the ML model may be potentially adapted to support.
NOTE 3: The type of inference represents the specific type of ML inference supported by the model, such as MDA types in MDA, Analytics types in NWDAF, types of AI/ML supported functions in NG-RAN, or vendor-specific extensions.

The terms "ML model pre-specialized training" and "ML model fine-tuning" as defined by SA WG5 introduce a layered training paradigm that differs significantly from the traditional concepts of initial training and re-training. Initial training typically refers to the first-time development of an ML model using a task-specific dataset, while re-training involves updating an existing model with new data to refine or correct its behaviour. In contrast, pre-specialized training is task-agnostic and aims to produce a broadly capable model with a wide inference scope. Fine-tuning then adapts this general model to a specific inference type, effectively narrowing its scope and generating a new, specialized model. This two-step approach supports modularity and reuse across multiple domains, such as MDA, NWDAF and NG-RAN, and enables more efficient deployment of AI/ML capabilities in 3GPP systems. The analysis above underscores the importance of harmonizing these definitions across WGs to avoid semantic fragmentation and ensure consistent implementation.
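The layered paradigm described above can be sketched as two steps: a pre-specialised model carrying a broad inference scope, and a fine-tuning step that narrows that scope to a single inference type while generating a new model. The function names and inference-type strings are illustrative assumptions.

```python
# Illustrative sketch of the layered paradigm: pre-specialized training yields
# a model with a broad inference scope; fine-tuning narrows that scope to one
# inference type and generates a new, specialized model.

def pre_specialized_training(inference_scope):
    """Train on data not specific to any inference type; keep a broad scope."""
    return {"scope": set(inference_scope), "specialized_for": None}

def fine_tune(pre_specialized_model, inference_type):
    """Narrow the scope to a single inference type, producing a new model."""
    if inference_type not in pre_specialized_model["scope"]:
        raise ValueError("inference type outside the model's inference scope")
    return {"scope": {inference_type}, "specialized_for": inference_type}

base = pre_specialized_training(["CoverageProblemAnalysis", "NWDAF-Analytics"])
tuned = fine_tune(base, "CoverageProblemAnalysis")  # a new, narrowed model
```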
6.2.1.11 Analysis on ML model distributed training
The term 'ML model distributed training' has been defined by SA WG5, as illustrated in Table 6.2.1.11-1. No other 3GPP WG has yet adopted this term in its activities.

Table 6.2.1.11-1: Definition of ML model distributed training

TSG (TS/TR)	ML model distributed training
SA WG5 TS 28.105 [9]	Distributed training: a process of distributing the training workload across multiple ML training functions.

The following unified definition for 'ML model distributed training' is proposed:

Distributed training: a process of distributing the training workload across multiple ML training functions.

Clarification on terminology: Distributed learning, Distributed training and Federated learning

Within 3GPP documents, the terms distributed learning, distributed training and federated learning are sometimes used interchangeably, which can create ambiguity if they are not clearly distinguished (see also clause 6.2.2):
- Distributed training (as applied in the SA5 NRM and AI/ML management) refers to splitting a single training job across multiple training functions or nodes to accelerate the process and/or optimise resource utilisation. Depending on the training requirements, the training data may either be partitioned and distributed among nodes (data-parallel approach) or kept intact while nodes focus on different parts of the model (model-parallel approach).
- Distributed learning is defined in subclause 6.2.2.
- Federated learning is a specific form of distributed learning designed to protect data privacy.
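The data-parallel flavour mentioned above (partitioning one training job's data across multiple training functions and combining the partial results) can be sketched as follows. The summation stands in for the per-node training workload and is purely illustrative.

```python
# Minimal, assumed illustration of data-parallel distributed training: one
# training job's data set is partitioned across multiple training functions,
# each computes a partial result, and the results are combined.

def partition(data, n_workers):
    """Split one training job's data across n training functions."""
    return [data[i::n_workers] for i in range(n_workers)]

def local_work(chunk):
    """Stand-in for the per-node share of the training workload."""
    return sum(chunk)

data = list(range(10))
partial_results = [local_work(chunk) for chunk in partition(data, 3)]
combined = sum(partial_results)  # here, equal to processing the full data set
```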
6.2.2 Analysis on Federated Learning
The terms 'Horizontal Federated Learning' and 'Vertical Federated Learning' have been defined by SA WG2, and 'Federated Learning' has been defined by RAN WG1 and SA WG5, as illustrated in Table 6.2.2-1.

Table 6.2.2-1: Definition of Federated Learning as defined across 3GPP WGs

TSG (TS/TR)	Federated Learning
SA WG2 TR 23.700-84 [7]	Horizontal Federated Learning: A federated learning technique without exchanging/sharing local data set, wherein the local data set in different FL clients for local model training have the same feature space for different samples (e.g. UE IDs).
SA WG2 TS 23.288 [8]	Vertical Federated Learning: A federated learning technique without exchanging/sharing local data set, wherein the local data set in different VFL Participant for local model training have different feature spaces for the same samples (e.g. UE IDs).
RAN WG1 TR 38.843 [3]	Federated Learning: A machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g. UEs, gNBs), each performing local model training using local data samples. The technique requires multiple interactions of the model, but no exchange of local data samples.
SA WG5 TS 28.105 [9]	Federated Learning: a distributed machine learning approach where the ML model is trained collaboratively by multiple ML training functions. This includes multiple FL clients, which perform training on local data, and one FL server, which aggregates model outcomes from the clients iteratively without exchanging data samples.

The definition of Federated Learning provided by RAN WG1 appears to apply only to Horizontal Federated Learning, as the phrase "each performing local model training using local data samples" implies that the data samples at individual nodes are distinct. The key difference between Horizontal Federated Learning and Vertical Federated Learning lies in the characteristics of the local datasets:
- Horizontal Federated Learning: local datasets have the same features but different samples.
- Vertical Federated Learning: local datasets have different features but share the same samples.
The definition of Federated Learning provided by SA WG5 highlights the collaborative training process among multiple FL participants, including an FL server and FL clients, without specifying the characteristics of the client datasets. This broader definition facilitates a more comprehensive understanding of both Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL), which are already defined in the specifications, and also offers greater flexibility across more general scenarios; see the definitions in clause 5.2.1.2.
The terms "distributed learning" and "federated learning" are often used together as "distributed/federated learning" in SA WG1 TS 22.261 [6]. "Distributed learning" typically refers to a broader set of learning techniques that includes "federated learning". Although the two terms are related, they are not identical and should be used appropriately based on the context.
The following unified definition for 'Federated Learning' is proposed:
Federated Learning: A distributed machine learning approach where the ML model(s) are collaboratively trained by multiple participants, including one acting as an FL server and multiple acting as FL clients, iteratively without exchanging data samples.
The following unified definition for 'Horizontal Federated Learning' is proposed:
Horizontal Federated Learning: A federated learning technique without exchanging/sharing the local data set, wherein the local data sets in different clients used for local model training have the same feature space for different samples.
The following unified definition for 'Vertical Federated Learning' is proposed:
Vertical Federated Learning: A federated learning technique without exchanging/sharing the local data set, wherein the local data sets in different clients used for local model training have different feature spaces for the same samples.
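The HFL case above, where an FL server iteratively aggregates client model updates without ever receiving the clients' samples, can be sketched in a few lines. This is an illustrative toy example, not behaviour specified in any 3GPP TS: the one-parameter model, the FedAvg-style averaging and all function names are hypothetical.

```python
# Toy sketch of Horizontal Federated Learning (hypothetical, not 3GPP-defined):
# each FL client trains on its own samples (same feature space, different
# samples); the FL server only ever sees the resulting model parameters.

def local_training_step(weights, local_data, lr=0.1):
    """FL client: fit y ~ w*x on local samples, return updated weights only."""
    w = weights
    for x, y in local_data:
        grad = 2 * (w * x - y) * x   # gradient of the squared error
        w -= lr * grad
    return w

def fl_server_round(global_w, client_datasets):
    """FL server: FedAvg-style aggregation of client weights; raw data
    samples never leave the clients."""
    client_weights = [local_training_step(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Three clients hold disjoint samples drawn from y = 2x (the HFL partitioning).
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fl_server_round(w, clients)
print(round(w, 2))  # converges towards 2.0
```

The VFL case would instead split the feature space of the *same* samples across participants, so the participants exchange intermediate results per sample rather than whole-model averages.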
30577dbcedb3290f42b3a55986900ea7
22.850
6.2.4 Analysis on Decision vs Prediction vs Output
RAN WG1 and RAN WG3 use only "prediction" in all corresponding ML-related TRs/TSs. SA WG2 uses "output" in all corresponding TRs/TSs, where an output may include both statistics and predictions. SA WG5 uses "decision" in all corresponding TRs/TSs, with a few occurrences of "prediction". The term "output" is proposed as the unified term, since an output may include a decision, a prediction, a statistic or a recommendation.
6.2.5 Analysis on ML vs AI vs AI/ML
RAN WG1, RAN WG2, RAN WG3 and SA WG1 use only "AI/ML" in all corresponding ML-related TRs/TSs. SA WG2 uses a mix of "ML" and "AI/ML". SA WG3, SA WG4 and SA WG6 use a mix of "AI/ML", "AI" and "ML". SA WG5 uses "ML" for training/testing/emulation and "AI/ML" for inference. The term "AI/ML" is to be used as the unified term encompassing "AI/ML", "AI" and "ML" in all corresponding ML-related TRs/TSs.
6.2.6 Analysis on Transfer Learning
The term 'ML Knowledge-based Transfer Learning' has been defined by SA WG5 in TR 28.858 [19], as illustrated in Table 6.2.6-1. However, the term 'Transfer Learning' has not been defined in the study or normative phase for SA WG5 Release 19. SA WG6 mentions the term 'Transfer Learning' in TS 23.482 [34], but no definition of the term is given. SA WG1 also uses the term 'Transfer Learning' in TS 22.261 [6] and TR 22.876 [21] without providing a definition of the term.
Table 6.2.6-1: Definition of Transfer Learning as defined across 3GPP WGs
SA WG5 TR 28.858 [19]: ML Knowledge-based Transfer Learning: a technique where the knowledge gained from training of one or more ML models is applied or adapted to improve or develop another ML model.
The following unified definition for 'Transfer Learning' is proposed (based on the SA WG5 definition):
Transfer Learning: A machine learning technique where the knowledge acquired from training one or more ML models is leveraged to enhance the performance or accelerate the training of another ML model.
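The proposed definition, knowledge from a source model reused to accelerate the training of a target model, can be illustrated with a minimal sketch. This is a hypothetical toy example (one-parameter model, made-up task data), not a mechanism defined in any of the cited specifications; the transferred "knowledge" here is simply the learned parameter used as the target model's initialization.

```python
# Toy sketch of transfer learning (hypothetical): a model trained on a source
# task is used to initialize training on a related target task, so the target
# model converges in fewer training steps than training from scratch.

def train(w, data, rounds, lr=0.1):
    """Fit y ~ w*x; stop early once the fit is good enough; count steps."""
    steps = 0
    for _ in range(rounds):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
            steps += 1
        if all(abs(w * x - y) < 1e-3 for x, y in data):
            break
    return w, steps

source_data = [(1.0, 3.0), (2.0, 6.0)]    # source task: y = 3x
target_data = [(1.0, 3.1), (2.0, 6.2)]    # related target task: y = 3.1x

w_source, _ = train(0.0, source_data, rounds=200)

_, steps_scratch = train(0.0, target_data, rounds=200)        # from scratch
_, steps_transfer = train(w_source, target_data, rounds=200)  # transfer-initialized

print(steps_transfer < steps_scratch)  # transfer converges in fewer steps
```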
6.3 AI/ML related features
6.3.1 Analysis on ML model training services
The analysis focuses on the specifications from SA WG2, SA WG5 and SA WG6, considering these are the working groups defining services and operations related to ML model training in 3GPP Release 18. SA WG1, SA WG3, SA WG4, RAN WG1, RAN WG2 and RAN WG3 have not defined any services or operations related to ML model training. Table 6.3.1-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
- SA WG2: Emphasizes a structured approach to ML model training services by defining a clear consumer-producer relationship. This enables specific entities to consume and produce these services, ensuring a well-defined and controlled environment for service utilization.
- SA WG5: Offers a more flexible approach by defining generic ML model training services. This allows for greater adaptability in implementation and usage, without the constraints of a specific consumer-producer relationship.
- SA WG6: Mirrors the structured approach of SA WG2, prioritizing a clear consumer-producer relationship for its defined services. Additional ML training features for applications are supported in the application enablement layer, such as: federated learning (horizontal, vertical), ML model training capability evaluation for selection of FL members, AI/ML task transfer to a suitable AIML enablement member, support for transfer learning enablement, support for FL member grouping, etc.
While SA WG2 and SA WG6 restrict the potential producers and consumers, SA WG5 emphasizes flexibility and adaptability. The choice of approach will depend on the specific needs and requirements of the individual service provider and consumer.
Table 6.3.1-1: ML model training related services and operations as specified across 3GPP WGs
SA WG2 TS 23.288 [8]
ML Model Provisioning Services:
- Nnwdaf_MLModelProvision_Subscribe: The consumer subscribes to NWDAF ML model provision with specific parameters to receive a notification when an ML model matching the subscription parameters becomes available. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
- Nnwdaf_MLModelProvision_Unsubscribe: The consumer unsubscribes to NWDAF ML model provision. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
- Nnwdaf_MLModelProvision_Notify: The NWDAF notifies the ML model information to the consumer which has subscribed to the NWDAF ML model provision service. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
ML Model Information Services:
- Nnwdaf_MLModelInfo_Request: The consumer requests and gets NWDAF ML model information. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
ML Model Training Services:
- Nnwdaf_MLModelTraining_Subscribe: The consumer subscribes to NWDAF ML model training with specific parameters. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
- Nnwdaf_MLModelTraining_Unsubscribe: The consumer terminates NWDAF ML model training. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
- Nnwdaf_MLModelTraining_Notify: The NWDAF notifies about the trained ML model to the consumer which has subscribed to the NWDAF ML model training service. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
ML Model Training Information Services:
- Nnwdaf_MLModelTrainingInfo_Request: The consumer requests the information about NWDAF ML model training with specific parameters. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
VFL Training Services:
- Nnwdaf_VFLTraining_Subscribe: The consumer subscribes to VFL ML model training information. (Consumer: NWDAF, AF, NEF; Producer: NWDAF)
- Nnwdaf_VFLTraining_Unsubscribe: The consumer terminates NWDAF VFL ML model training. (Consumer: NWDAF, AF, NEF; Producer: NWDAF)
- Nnwdaf_VFLTraining_Notify: The NWDAF notifies the consumer of the client intermediate training result of the local ML model. (Consumer: NWDAF, AF, NEF; Producer: NWDAF)
- Nnwdaf_VFLTraining_Request: The consumer requests the NWDAF VFL client to check whether it can support the requirements for VFL. (Consumer: NWDAF, AF, NEF; Producer: NWDAF)
SA WG5 TS 28.105 [9]
ML Training Management Services:
- MLTrainingRequest: Represents the ML model training request to train an ML model, triggered by the ML training MnS consumer towards the ML training MnS producer. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of training an ML model)
- MLTrainingReport: Represents the ML model training report provided by the ML training MnS producer to the ML training MnS consumer who has requested the ML model training. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of training an ML model)
- MLTrainingProcess: Represents the ML model training process. When an ML model training process starts, an instance of MLTrainingProcess is created by the MnS producer and a notification is sent to the MnS consumer who has subscribed to it. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of training an ML model)
SA WG6 TS 23.482 [34]
ML Model Training APIs:
- Aimles_MLModelTraining Request: The consumer sends an ML model training request to the producer, requesting it to assist in its ML model training. This request consists of ML model information or ML model requirement information, etc. (Consumer: VAL server; Producer: AIMLE server)
- Aimles_MLModelTraining Response: If the consumer is authorized, the producer identifies and selects the appropriate ML model for training based on the ML model requirement information. The producer returns a success response indicating the selected ML model for training; otherwise, a failure response indicating the reason for failure. (Consumer: VAL server; Producer: AIMLE server)
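The SA WG2-style services in Table 6.3.1-1 follow a subscribe/notify consumer-producer pattern (e.g. Nnwdaf_MLModelTraining_Subscribe and _Notify). A minimal sketch of this interaction style is shown below; the class, parameter and field names are hypothetical simplifications for illustration, not the normative service definitions.

```python
# Illustrative sketch (hypothetical names) of the subscribe/notify pattern:
# a consumer subscribes with filter parameters, and the producer notifies only
# the consumers whose subscription parameters match the trained model.

class MtlfProducer:
    """Plays the role of an ML-model-training producer (e.g. an NWDAF MTLF)."""

    def __init__(self):
        self._subscriptions = []          # list of (callback, filter) pairs

    def subscribe(self, callback, analytics_id):
        """Consumer subscribes with specific parameters (here: an analytics ID)."""
        self._subscriptions.append((callback, analytics_id))

    def unsubscribe(self, callback):
        """Consumer terminates its subscription."""
        self._subscriptions = [s for s in self._subscriptions if s[0] is not callback]

    def model_trained(self, analytics_id, model_ref):
        """Notify every subscribed consumer whose filter matches."""
        for callback, wanted in self._subscriptions:
            if wanted == analytics_id:
                callback({"analyticsId": analytics_id, "modelRef": model_ref})

received = []
producer = MtlfProducer()
producer.subscribe(received.append, analytics_id="UE_MOBILITY")
producer.model_trained("UE_MOBILITY", model_ref="model-v1")
producer.model_trained("NF_LOAD", model_ref="model-v2")   # filtered out
print(received)
```

The same pattern underlies the SA WG6 APIs; the SA WG5 IOC-based approach differs in that any authorized function may play either role.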
6.3.2 Analysis on analytics related services
This clause focuses on the specifications from SA WG2, SA WG5 and SA WG6, considering these are the working groups defining services and operations related to ML model inference in 3GPP Release 18. SA WG1, SA WG3, SA WG4, RAN WG1, RAN WG2 and RAN WG3 have not defined any services or operations related to ML model inference. Table 6.3.2-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
- SA WG2: Describes ML model inference by defining analytics services through a clear consumer-producer relationship. It defines several analytics types in TS 23.288 [8], each one supported by the NWDAF/RE-NWDAF AnLF and requested or subscribed to by the consumer using the defined analytics services.
- SA WG5: Describes ML model inference by defining generic analytics services without a specific consumer-producer relationship in TS 28.105 [9] and TS 28.104 [71]. It defines several analytics types in TS 28.104 [71], each one supported by an MnS producer and requested by the MnS consumer using the defined analytics services.
- SA WG6: Describes ML model inference by defining individual analytics services for each analytics type. It defines several analytics types in TS 23.436 [33].
While SA WG2 and SA WG6 restrict the potential producers and consumers, SA WG5 emphasizes flexibility and adaptability. Moreover, SA WG2 and SA WG5 define several analytics types that can be supported by one entity and requested by another entity using the defined ML model inference services. In addition, in SA WG6, individual services are defined for each analytics type.
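Besides subscriptions, the analytics services also include one-shot request/response operations (e.g. Nnwdaf_AnalyticsInfo_Request, or the SA WG5 MDARequest, where the analytics type is an input parameter of a generic service). The sketch below illustrates that type-parameterized request style; the analytics IDs, dictionary layout and function names are hypothetical, not taken from the specifications.

```python
# Illustrative sketch (hypothetical): a one-time analytics request where the
# requested analytics type is just a parameter of one generic operation, so
# new analytics types can be added without defining a new service.

ANALYTICS_PRODUCERS = {
    "SLICE_LOAD": lambda: {"load": 0.7},
    "UE_MOBILITY": lambda: {"predictedCell": "cell-42"},
}

def analytics_info_request(analytics_id):
    """One-shot request/response: the consumer names the analytics type and
    gets the current output back; no standing subscription is kept."""
    producer = ANALYTICS_PRODUCERS.get(analytics_id)
    if producer is None:
        return {"error": "unsupported analytics type"}
    return {"analyticsId": analytics_id, "output": producer()}

print(analytics_info_request("SLICE_LOAD"))
print(analytics_info_request("QOS_SUSTAINABILITY"))   # unsupported type
```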
Table 6.3.2-1: Analytics and inference related services and operations as specified across 3GPP WGs ML Model inference TSG (TS/TR) Service/API Type Service/API/IOC Name Description [Consumer, Producer] Network Data Analytics Subscription Services Nnwdaf_AnalyticsSubscription_Subscribe The consumer subscribes for network data analytics and optionally its corresponding analytics accuracy information with specific parameters. Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF Producer: NWDAF AnLF Nnwdaf_AnalyticsSubscription_Unsubscribe The consumer unsubscribes for network data analytics. Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF Producer: NWDAF AnLF Nnwdaf_AnalyticsSubscription_Notify The NWDAF notifies the analytics and optionally Analytics Accuracy Information to the consumer which has subscribed to the NWDAF analytics subscription service. Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF Producer: NWDAF AnLF Nnwdaf_AnalyticsSubscription_Transfer The consumer NWDAF requests NWDAF for transferring analytics subscriptions from the consumer NWDAF. Consumer: NWDAF AnLF Producer: NWDAF AnLF SA WG2 TS 23.288 [8] Network Data Analytics Information Services Nnwdaf_AnalyticsInfo_Request The consumer requests NWDAF operator specific analytics and optionally Analytics Accuracy Information with specific parameters. Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF Producer: NWDAF AnLF Nnwdaf_AnalyticsInfo_ContextTransfer The consumer NWDAF requests NWDAF to transfer context information related to analytics subscriptions. Consumer: NWDAF AnLF Producer: NWDAF AnLF Nnwdaf_RoamingAnalytics_Subscribe The consumer subscribes for network data analytics related to roaming UEs. Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF Network Data Roaming Analytics Services Nnwdaf_RoamingAnalytics_Unsubscribe The consumer unsubscribes for network data analytics related to roaming UEs. 
Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF Nnwdaf_RoamingAnalytics_Notify The NWDAF notifies the analytics related to roaming UE(s) to the consumer which has subscribed to the NWDAF roaming analytics subscription service. Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF Nnwdaf_RoamingAnalytics_Request The consumer requests NWDAF operator specific analytics related to roaming UEs. Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF Nnwdaf_VFLInference_Subscribe The consumer subscribes to VFL inference. Consumer: NWDAF, AF, NEF Producer: NWDAF Nnwdaf_VFLInference Nnwdaf_VFLInference_Unsubscribe The consumer unsubscribes to VFL inference. Consumer: NWDAF, AF, NEF Producer: NWDAF Nnwdaf_VFLInference_Notify The NWDAF notifies the consumer of the VFL inference result. Consumer: NWDAF, AF, NEF Producer: NWDAF Nnwdaf_VFLInference_Request The consumer requests the NWDAF to perform a one-time VFL inference. Consumer: NWDAF, AF, NEF Producer: NWDAF SA WG5 TS 28.104 [71] Management Data Analytics Services MDARequest It represents the management data analytics output request which is created by an MDA MnS consumer towards the MDA MnS producer. Consumer: Any authorized network function, any authorized management function, operator Producer: Any function that is capable of producing management data analytics MDAReport It represents the management data analytics report containing the outputs for one or more MDA types delivered to the MDA consumer who has requested for management data analytics. Consumer: Any authorized network function, any authorized management function, operator Producer: Any function that is capable of producing management data analytics SS_ADAE_VAL_performance_analytics VAL_performance_analytics_subscribe The consumer subscribes for VAL performance analytics. Consumer: VAL server Producer: ADAE server VAL_performance_analytics_notify The consumer is notified by ADAES on the VAL performance analytics. 
Consumer: VAL server Producer: ADAE server SS_ADAE_slice_performance_analytics slice_performance_analytics_subscribe The consumer subscribes for slice specific performance analytics. Consumer: VAL server Producer: ADAE server slice_performance_analytics_notify The consumer is notified by ADAES on the slice specific performance analytics. Consumer: VAL server Producer: ADAE server SS_ADAE_UE-to-UE_performance_analytics UE-to-UE performance_analytics_subscribe The consumer subscribes for UE-to-UE performance analytics. UE-to-UE performance_analytics_notify The consumer is notified by ADAES on the UE-to-UE performance analytics. Consumer: VAL server Producer: ADAE server SA WG6 TS 23.436 [33] SS_ADAE_server-to-server_performance_analytics server-to-server_performance_analytics_subscribe The consumer subscribes to the ADAE server for Server-to-server performance analytics. Consumer: VAL server, EES Producer: ADAE server server-to-server_performance_analytics_notify The consumer is notified by the ADAE server on the Server-to-server performance analytics. Consumer: VAL server, EES Producer: ADAE server SS_ADAE_location_accuracy_analytics Location_accuracy_analytics_subscribe The consumer subscribes for location accuracy analytics. Consumer: VAL server Producer: ADAE server Location_accuracy_analytics_notify The consumer is notified by ADAES on the location accuracy analytics. Consumer: VAL server Producer: ADAE server SS_ADAE_service_API_analytics Service_API_analytics_subscribe The consumer subscribes for service API analytics. Consumer: VAL server, Subscriber, API invoker Producer: ADAE server Service_API_analytics_notify The consumer is notified by ADAES on the service API analytics. Consumer: VAL server, Subscriber, API invoker Producer: ADAE server SS_ADAE_slice_usage_pattern_analytics slice_usage_pattern_analytics_subscribe The consumer subscribes for slice usage pattern analytics. 
Consumer: VAL server, SEAL server Producer: ADAE server slice_usage_pattern_analytics_notify The consumer is notified by ADAES on the slice usage pattern analytics. Consumer: VAL server, SEAL server Producer: ADAE server SS_ADAE_edge_analytics edge_analytics_subscribe The consumer subscribes for edge load analytics. Consumer: VAL server, ECS, EES Producer: ADAE server edge_analytics_notify The consumer is notified by ADAES on the edge load analytics. Consumer: VAL server, ECS, EES Producer: ADAE server edge_analytics_get The consumer requests edge analytics data. Consumer: VAL server, ECS, EES Producer: ADAE server SS_ADAES_slice_usage_stats slice_usage_stats_get The consumer requests and receives slice usage statistics from ADAE server. Consumer: VAL server Producer: ADAE server SS_ADAES_edge_preparation_analytics edge_preparation_analytics_subscribe The consumer subscribes for edge computing preparation analytics. Consumer: VAL server, ECS, EES Producer: ADAE server edge_preparation_analytics_notify The consumer is notified by the ADAE server on the edge computing preparation analytics. Consumer: VAL server, ECS, EES Producer: ADAE server edge_preparation_analytics_get The consumer requests edge computing preparation analytics Consumer: VAL server, ECS, EES Producer: ADAE server SS_ADAE_collision_detection_analytics collision_detection_analytics_subscribe The consumer subscribes for collision detection analytics. Consumer: VAL server, LM server, UAE server, UAS application specific server Producer: ADAE server collision_detection_analytics_notify The consumer is notified by the ADAE server on collision detection analytics. Consumer: VAL server, LM server, UAE server, UAS application specific server Producer: ADAE server collision_detection_analytics_get The consumer requests collision detection analytics. 
Consumer: VAL server, LM server, UAE server, UAS application specific server Producer: ADAE server SS_ADAE_location-related_UE_group_analytics location-related_UE_group_analytics_subscribe The consumer subscribes for location-related UE group analytics. Consumer: LM server Producer: ADAE server location-related_UE_group_analytics_notify The consumer is notified by the ADAE server on location-related UE group analytics. Consumer: LM server Producer: ADAE server location-related_UE_group_analytics_get The consumer requests location-related UE group analytics. Consumer: LM server Producer: ADAE server SS_ ADAE_AIML_member_capability_analytics AIML_member_capability_analytics_subscribe The consumer subscribes for application layer AIML member capability analytics. Consumer: VAL server, AIMLE server Producer: ADAE server AIML_member_capability_analytics_notify The consumer is notified by the ADAE server on application layer AIML member capability analytics. Consumer: VAL server, AIMLE server Producer: ADAE server AIML_member_capability_analytics_get The consumer requests application layer AIML member capability analytics. Consumer: VAL server, AIMLE server Producer: ADAE server SS_ADAE_UE_RAT_connectivity_analytics API UE_RAT_connectivity_analytics_subscribe The consumer requests UE RAT connectivity analytics. Consumer: VAL server Producer: ADAE server UE_RAT_connectivity_analytics_notify The consumer is notified by the ADAE server on UE RAT connectivity analytics. Consumer: VAL server Producer: ADAE server
6.3.3 Analysis on ML performance evaluation and monitoring
The analysis focuses on the specifications from SA WG2, SA WG5 and SA WG6, considering these are the working groups defining services and operations related to ML performance evaluation in 3GPP Release 18 / Release 19. SA WG1, SA WG3, SA WG4, RAN WG1, RAN WG2 and RAN WG3 have not defined any services or operations related to ML performance evaluation. Table 6.3.3-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
- SA WG2: Dedicated ML model monitoring services are defined with a specific consumer-producer relationship. Additionally, analytics subscription and ML model training subscription services may indicate the performance requirements which the producer has to satisfy when providing the analytics or training the ML model. The focus in SA WG2 has been on the accuracy aspects of ML model performance.
- SA WG5: Defines the framework and mechanisms for performance assurance, including the performance metrics (ModelPerformance, clause 7.4.1 of TS 28.105 [9]) by which the performance of an ML model can be ascertained. ML training, ML testing and ML inference services indicate the performance requirements which the producer has to satisfy for the consumer when training the ML model, testing the ML model or providing the inferences. The achieved performance of the ML model is communicated to the consumer via MLTrainingReport, MLTestingReport and AIMLInferenceReport.
- SA WG6: Dedicated ML model monitoring services are defined with a specific consumer-producer relationship. Additionally, ML model training APIs indicate the performance requirements that the producer has to satisfy when providing the ML model. No specific ML model performance metrics are standardized in SA WG6.
Table 6.3.3-1: ML model performance monitoring services and operations as specified in 3GPP WGs
SA WG2 TS 23.288 [8]
ML Model Monitoring Services:
- Nnwdaf_MLModelMonitor_Subscribe: The consumer subscribes to NWDAF for the monitored ML model accuracy information and Analytics Feedback Information for the analytics generated by the NWDAF, with specific parameters. (Consumer: NWDAF; Producer: NWDAF)
- Nnwdaf_MLModelMonitor_Unsubscribe: The consumer unsubscribes to the NWDAF for the monitored ML model accuracy information and Analytics Feedback Information for the analytics generated by the NWDAF. (Consumer: NWDAF; Producer: NWDAF)
- Nnwdaf_MLModelMonitor_Notify: The NWDAF notifies the monitored ML model accuracy information and Analytics Feedback Information for the analytics generated by the NWDAF to the consumer who has subscribed to the specific NWDAF service. (Consumer: NWDAF; Producer: NWDAF)
- Nnwdaf_MLModelMonitor_Register: The consumer registers the use and monitoring capability for an ML model at an NWDAF containing MTLF. (Consumer: NWDAF; Producer: NWDAF)
- Nnwdaf_MLModelMonitor_Deregister: The consumer deregisters, from an NWDAF containing MTLF, a previous MLModelMonitor registration, e.g. when the consumer is no longer using or monitoring the accuracy of the analytics generated using the ML model. (Consumer: NWDAF; Producer: NWDAF)
SA WG6 TS 23.482 [34]
ML Model Performance Monitoring APIs:
- MLModelPerfMonitor_Subscribe: The consumer subscribes for ML model performance monitoring. (Consumer: VAL server; Producer: AIMLE server)
- MLModelPerfMonitor_Notify: The consumer is notified by the ML repository on the ML model performance monitoring. (Consumer: VAL server; Producer: AIMLE server)
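The accuracy-focused monitoring behaviour behind services such as Nnwdaf_MLModelMonitor_Subscribe/_Notify can be sketched as follows: the monitoring producer compares predictions against later ground truth and notifies the subscribed consumer when accuracy falls below the requested threshold. This is an illustrative simplification with hypothetical names and notification fields, not the normative service behaviour or a standardized metric.

```python
# Toy sketch (hypothetical) of ML model accuracy monitoring with a
# threshold-based notification to a subscribed consumer.

def monitor_accuracy(samples, threshold, notify):
    """samples: iterable of (predicted, actual) pairs;
    notify: callback of the subscribed consumer."""
    samples = list(samples)
    correct = sum(1 for predicted, actual in samples if predicted == actual)
    accuracy = correct / len(samples)
    if accuracy < threshold:
        # Accuracy below the subscribed threshold: notify the consumer.
        notify({"event": "ACCURACY_DEGRADED", "accuracy": accuracy})
    return accuracy

notifications = []
acc = monitor_accuracy(
    [("cell-1", "cell-1"), ("cell-2", "cell-3"),
     ("cell-4", "cell-4"), ("cell-5", "cell-6")],
    threshold=0.8,
    notify=notifications.append,
)
print(acc, notifications)
```

A degradation notification like this is what would typically trigger ML model re-training in the lifecycle described earlier.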
6.3.4 Analysis on data collection and management for AI/ML
The analysis focuses on the specifications from SA WG2, SA WG6, RAN WG2 and RAN WG3, considering these are the working groups defining services, operations or procedures related to data collection for AI/ML in 3GPP Release 18. SA WG1, SA WG3, SA WG4, SA WG5 and RAN WG1 have not defined any services or operations related to data collection for AI/ML in Release 18; SA WG5, however, specifies data collection and performance measurement services that can be leveraged for AI/ML purposes (see TS 28.622 [72]). Table 6.3.4-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
- SA WG2: Defines multiple network functions capable of producing data collection services and defines a function for data storage related services. For example, the DCCF coordinates and manages the collection of data from various network functions for purposes such as computation of analytics and Analytics/ML Model Accuracy monitoring. Leverages the event exposure framework and defines event exposure services for network functions that can be consumed by the NWDAF (see clause 6.2.2.1 of TS 23.288 [8]) and the DCCF (see clause 6.2.6.3 of TS 23.288 [8]). Defines data collection for AI/ML services through a clear consumer-producer relationship.
- SA WG6: Defines network functions similar to those in SA WG2. Data collection services defined for the A-DCCF as well as data storage services defined for the A-ADRF are generic and applicable to any ADAE services; however, while the A-DCCF APIs are defined in a generic manner, the A-ADRF APIs (for data storage and fetching) are defined in a per-use-case manner.
- RAN WG3: Defines data collection messages exchanged between two gNBs over the Xn interface, in a P2P manner. It is to be noted that procedures used for AI/ML support in the NG-RAN shall be "data type agnostic", which means that the intended use of the data (e.g. input, output, feedback) shall not be indicated.
- RAN WG2: Defines data collection configuration procedures for offline training of network-side models over existing RRC messages between UE and gNB. RAN WG2 also introduces the logging of data within the UE and the retrieval of the logged data by the gNB via the UE Information procedure.
While SA WG2 and SA WG6 both define data collection services, their approaches to data storage and retrieval differ. SA WG2 defines generic data storage and retrieval services that can be supported by one entity (ADRF) and requested by another entity, whereas SA WG6 defines both generic and individual services (related to each analytics type) for storing and retrieving data. The RAN WG3 approach operates independently of the services defined in the SA WGs and can therefore coexist with them.
Table 6.3.4-1: Data Collection for AI/ML related services and operations as specified across 3GPP WGs Data Collection for AI/ML TSG (TS/TR) Service/API/Message Type Service/API/IOC/Message Name Description [Consumer, Producer] Namf_EventExposure_Subscribe The NWDAF uses this service operation to subscribe to or modify event reporting for one UE, a group of UE(s) or any UE. Producer: AMF Namf_EventExposure_Unsubscribe The NWDAF uses this service operation to unsubscribe for a specific event for one UE, group of UE(s), any UE. Producer: AMF Namf_EventExposure_Notify Provides the previously subscribed event information to the NWDAF which has subscribed to that event before. Producer: AMF Nsmf_EventExposure_Subscribe This service operation is used by an NWDAF to subscribe or modify a subscription for event notifications on a specified PDU Session or for all PDU Sessions of one UE, group of UE(s) or any UE. Producer: SMF SA WG2 TS 23.288 [8] Nsmf_EventExposure_UnSubscribe This service operation is used by an NWDAF to unsubscribe event notifications. Producer: SMF Event Exposure services Nsmf_EventExposure_Notify Report UE PDU Session related event(s) to the NWDAF which has subscribed to the event report service. 
Producer: SMF Npcf_EventExposure_Subscribe The NWDAF uses this service operation to subscribe to or modify event reporting for a group of UE(s) or any UE accessing a combination of (DNN, S-NSSAI). Producer: PCF Npcf_EventExposure_Unsubscribe The NWDAF uses this service operation to unsubscribe for a specific event for a group of UE(s) or any UE accessing a combination of (DNN, S-NSSAI). Producer: PCF Npcf_EventExposure_Notify This service operation reports the event to the NWDAF that has previously subscribed either using Npcf_EventExposure_Subscribe service operation or provided as part of the Data Set Application Data and Data Subset Service Parameters stored in UDR. Producer: PCF Nudm_EventExposure_Subscribe The NWDAF subscribes to receive an event. Producer: UDM Nudm_EventExposure_Unsubscribe The NWDAF deletes the subscription of an event if already defined in UDM. Producer: UDM Nudm_EventExposure_Notify UDM reports the event to the NWDAF that has previously subscribed. Producer: UDM Nudm_EventExposure_ModifySubscription The NWDAF requests to modify an existing subscription to event notifications. Producer: UDM Nnef_EventExposure_Subscribe The NWDAF subscribes to receive an event, or if the event is already defined in NEF, then the subscription is updated. Producer: NEF Nnef_EventExposure_Unsubscribe The NWDAF deletes an event if already defined in NEF. Producer: NEF Nnef_EventExposure_Notify NEF reports the event to the NWDAF that has previously subscribed. Producer: NEF Naf_EventExposure_Subscribe The NWDAF subscribes the event to collect AF data for UE(s), group of UEs, or any UE, or updates the subscription which is already defined in AF. Producer: AF Naf_EventExposure_Unsubscribe The NWDAF unsubscribes for a specific event. Producer: AF Naf_EventExposure_Notify The AF provides the previously subscribed event information to the NWDAF which has subscribed to that event before. 
Producer: AF Nnsacf_SliceEventExposure_Subscribe This service operation is used by the NWDAF to subscribe or modify a subscription with the NSACF for event based notifications of the current number of UEs registered for a network slice or the current number of PDU Sessions established on a network slice. Producer: NSACF Nnsacf_SliceEventExposure_Unsubscribe This service operation is used by the NWDAF to unsubscribe from the event notification. Producer: NSACF Nnsacf_SliceEventExposure_Notify This service operation is used by the NSACF to report the current number of UEs registered with a network slice or the current number of PDU Sessions established on a network slice in numbers or in percentage from the maximum allowed numbers, based on threshold or at expiry of periodic timer. Producer: NSACF Nupf_EventExposure_Subscribe This service operation is used by an NWDAF to subscribe or modify a subscription to UPF event exposure notifications, e.g. for the purpose of UPF data collection on a specified PDU Session or for all PDU Sessions of one UE or any UE. Producer: UPF Nupf_EventExposure_Unsubscribe The NF consumer uses this service operation to unsubscribe for a specific event. Consumer: Any NF Producer: UPF Nupf_EventExposure_Notify This service operation reports the event and information to the NWDAF that has subscribed. Producer: UPF Nscp_EventExposure_Subscribe This service operation is used by an NWDAF to subscribe or modify a subscription to SCP event exposure notifications. Producer: SCP Nscp_EventExposure_Unsubscribe The NWDAF uses this service operation to unsubscribe from an existing subscription. Producer: SCP Nscp_EventExposure_Notify This service operation reports the event to the NWDAF that has previously subscribed. Producer: SCP Nnwdaf_DataManagement_Subscribe The consumer subscribes to data exposed by an NWDAF. It can be historical data or runtime data. 
The subscription includes service operation specific parameters that identify the data to be provided and may include formatting and processing instructions that specify how the data is to be delivered to the consumer. Consumer: NWDAF, DCCF. Producer: NWDAF
NWDAF Data Management services:
- Nnwdaf_DataManagement_Unsubscribe: The consumer unsubscribes to the data exposed by an NWDAF. Consumer: NWDAF, DCCF. Producer: NWDAF
- Nnwdaf_DataManagement_Notify: The NWDAF notifies the consumer of the requested data or notifies of the availability of previously subscribed data when delivery is via an NWDAF. The NWDAF may also notify the consumer when Data or Analytics is to be deleted. Consumer: NWDAF, DCCF, MFAF, ADRF. Producer: NWDAF
- Nnwdaf_DataManagement_Fetch: The consumer retrieves from the NWDAF subscribed data, as indicated by Fetch Instructions from Nnwdaf_DataManagement_Notify. Consumer: NWDAF, DCCF, MFAF, ADRF. Producer: NWDAF
NWDAF Roaming Data services:
- Nnwdaf_RoamingData_Subscribe: The consumer subscribes for input data related to roaming UE(s) for NWDAF analytics. The subscription includes service operation specific parameters that identify the data to be provided and may include formatting and processing instructions that specify how the data is to be delivered to the consumer. Consumer: H-RE-NWDAF, V-RE-NWDAF. Producer: H-RE-NWDAF, V-RE-NWDAF
- Nnwdaf_RoamingData_Unsubscribe: The consumer unsubscribes to input data related to roaming UE(s). Consumer: H-RE-NWDAF, V-RE-NWDAF. Producer: H-RE-NWDAF, V-RE-NWDAF
- Nnwdaf_RoamingData_Notify: The NWDAF notifies the consumer about input data related to roaming UE(s) that the consumer has subscribed to. Consumer: H-RE-NWDAF, V-RE-NWDAF. Producer: H-RE-NWDAF, V-RE-NWDAF
- Ndccf_DataManagement_Subscribe: The consumer subscribes to receive data or analytics from the DCCF.
The subscription includes service operation specific parameters that identify the data or analytics to be provided and may include formatting and processing instructions that specify how the data is to be delivered to the consumer. The consumer may also request that data be stored in an ADRF or an NWDAF hosting ADRF functionality. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF. Producer: DCCF
DCCF Data Management Services:
- Ndccf_DataManagement_Unsubscribe: The consumer unsubscribes to DCCF for data or analytics. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF. Producer: DCCF
- Ndccf_DataManagement_Notify: DCCF notifies the consumer instance of the requested data or analytics according to the request, or notifies of the availability of previously subscribed Data or Analytics when data delivery is via the DCCF. The DCCF may also notify the consumer instance when Data or Analytics is to be deleted. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF. Producer: DCCF
- Ndccf_DataManagement_Fetch: The consumer retrieves from the DCCF data or analytics as indicated by Ndccf_DataManagement_Notify Fetch Instructions. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF. Producer: DCCF
- Ndccf_DataManagement_Transfer: The Source DCCF transfers UE data subscription context to the target DCCF. Consumer: DCCF. Producer: DCCF
MFAF Data Management Services:
- Nmfaf_3daDataManagement_Configure: The consumer configures or reconfigures the MFAF to map data or analytics received by the MFAF to out-bound notification endpoints and to format and process the out-bound data or analytics. Consumer: DCCF, NWDAF. Producer: MFAF
- Nmfaf_3daDataManagement_Deconfigure: The consumer configures the MFAF to stop mapping data or analytics received by the MFAF to one or more out-bound notification endpoints. Consumer: DCCF, NWDAF. Producer: MFAF
- Nmfaf_3caDataManagement_Notify: The MFAF provides data or analytics, or notification of availability of data or analytics, to notification endpoints.
Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF. Producer: MFAF
- Nmfaf_3caDataManagement_Fetch: The consumer retrieves from the MFAF data or analytics as indicated by Nmfaf_3caDataManagement_Notify Fetch Instructions. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF. Producer: MFAF
ADRF Data Management Services:
- Nadrf_DataManagement_StorageRequest: The consumer NF uses this service operation to request the ADRF to store data or analytics. Data or analytics are provided to the ADRF in the request message. Consumer: DCCF, NWDAF, MFAF. Producer: ADRF
- Nadrf_DataManagement_StorageSubscriptionRequest: The consumer (NWDAF or DCCF) uses this service operation to request the ADRF to initiate a subscription for data or analytics. Data or analytics provided in notifications as a result of the subsequent subscription by the ADRF are stored in the ADRF. Consumer: NWDAF, DCCF. Producer: ADRF
- Nadrf_DataManagement_StorageSubscriptionRemoval: The consumer NF uses this service operation to request that the ADRF no longer subscribes to data or analytics it is collecting and storing. Consumer: NWDAF, DCCF. Producer: ADRF
- Nadrf_DataManagement_RetrievalRequest: The consumer NF uses this service operation to retrieve stored data or analytics from the ADRF. The Nadrf_DataManagement_RetrievalRequest response either contains the data or analytics or provides instructions for fetching the data or analytics. Consumer: NWDAF, DCCF. Producer: ADRF
- Nadrf_DataManagement_RetrievalSubscribe: The consumer NF uses this service operation to retrieve stored data or analytics from the ADRF and to receive future notifications containing the corresponding data or analytics received by the ADRF. Consumer: NWDAF, DCCF. Producer: ADRF
- Nadrf_DataManagement_RetrievalUnsubscribe: The consumer NF uses this service operation to request that the ADRF no longer sends data or analytics to a notification endpoint.
Consumer: NWDAF, DCCF. Producer: ADRF
- Nadrf_DataManagement_RetrievalNotify: This service operation provides consumers with either data or analytics from an ADRF, or instructions to fetch the data or analytics from an ADRF. The notifications are provided to consumers that have subscribed using the Nadrf_DataManagement_RetrievalSubscribe service operation. Consumer: NWDAF, DCCF. Producer: ADRF
- Nadrf_DataManagement_Delete: This service operation instructs the ADRF to delete stored data. Consumer: NWDAF, DCCF. Producer: ADRF
SA WG6 TS 23.436 [33] A-ADRF Data Collection APIs:
- SS_AADRF_Data_Collection Subscribe: The consumer subscribes for offline data from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_Data_Collection Notify: The consumer receives the offline data from the A-ADRF as notifications, based on the subscription. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_Historical_ServiceAPI_Logs Get: The consumer requests API logs from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_NetworkSlice_Data Get: The consumer requests network slice data from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_Location_Accuracy_Data Get: The consumer receives offline location analytics/data from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_EdgeData_Collection Subscribe: The consumer subscribes for offline edge data from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_EdgeData_Collection Notify: The consumer receives the offline edge data from the A-ADRF as notifications, based on the subscription. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_Edge_Preparation_Data Get: The consumer receives offline edge computing preparation data from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_Data_Storage Request Subscription: The consumer requests the A-ADRF to subscribe for data or analytics from the ADAE server or A-DCCF for storage. This service operation provides the parameters needed by the A-ADRF to initiate the subscription (to an ADAE server or A-DCCF).
Consumer: ADAE server, A-DCCF. Producer: A-ADRF
- SS_AADRF_Data_Storage Store Data: The consumer requests the A-ADRF to store data or analytics from the ADAE server or A-DCCF. Data or analytics are provided to the A-ADRF in the request message. Consumer: ADAE server. Producer: A-ADRF
- SS_ADRF_ServerToServer_Analytics Get: The consumer receives offline server-to-server analytics/data from the A-ADRF. Consumer: ADAES. Producer: A-ADRF
- SS_AADRF_UE RAT connectivity analytics Get: The consumer receives offline UE RAT connectivity analytics/data from the A-ADRF. Consumer: ADAE server. Producer: A-ADRF
A-DCCF Data Collection APIs:
- SS_ADCCF_Data_Collection Subscribe: The consumer subscribes to receive data or analytics from the A-DCCF. The subscription includes service operation specific parameters that identify the data or analytics to be provided. Consumer: ADAE server. Producer: A-DCCF
- SS_ADCCF_Data_Collection Notify: The A-DCCF notifies the consumer of the requested data or analytics according to the request, or notifies of the availability of previously subscribed data or analytics when data delivery is via the A-DCCF. The A-DCCF may also notify the consumer when data or analytics is to be deleted. Consumer: ADAE server. Producer: A-DCCF
- SS_ADCCF_Data_Collection Get: The consumer retrieves data or analytics from the A-DCCF. Consumer: ADAE server. Producer: A-DCCF
DATA COLLECTION REQUEST: NG-RAN node 1 initiates the procedure by sending the DATA COLLECTION REQUEST message to NG-RAN node 2 to start or stop information reporting. Upon receipt, NG-RAN node 2:
- shall initiate the requested information reporting according to the parameters given in the request, in case the Registration Request for Data Collection IE is set to "start"; or
- shall stop all measurements and predictions and terminate the reporting, in case the Registration Request for Data Collection IE is set to "stop".
The Report Characteristics for Data Collection IE in the DATA COLLECTION REQUEST message indicates the type of objects NG-RAN node 2 performs measurements or predictions on.
RAN WG3 TS 38.423 [15] Data Collection procedures:
DATA COLLECTION RESPONSE: If NG-RAN node 2 is capable of providing all of the requested information, it shall initiate the information reporting as requested by NG-RAN node 1 and respond with the DATA COLLECTION RESPONSE message. If NG-RAN node 2 is capable of providing some but not all of the requested information, it shall initiate the information reporting for the admitted requested information and include the Node Measurement Initiation Result List IE, the Cell Measurement Initiation Result List IE, or both in the DATA COLLECTION RESPONSE message.
DATA COLLECTION FAILURE: If none of the requested information can be initiated, NG-RAN node 2 shall send the DATA COLLECTION FAILURE message with an appropriate cause value.
DATA COLLECTION UPDATE: NG-RAN node 2 shall include in the DATA COLLECTION UPDATE message one or more of the following IEs, based on the request: SSB Area Radio Resource Status List, Predicted Radio Resource Status, Predicted Number of Active UEs, Predicted RRC Connections, Average UE Throughput DL, Average UE Throughput UL, Average Packet Delay, Average Packet Loss, Energy Cost and Measured UE Trajectory. These IEs are specified in Rel-18 to support the three AI/ML for NG-RAN use cases, i.e. Energy Saving, Load Balancing and Mobility Optimization.
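The admission behaviour described for the XnAP Data Collection procedures (full admission, partial admission with an initiation result list, or failure) can be sketched as follows. This is only an illustrative model with hypothetical names and a simplified capability check, not the stage-3 behaviour or ASN.1 of TS 38.423 [15]:

```python
# Illustrative sketch of NG-RAN node 2 handling a DATA COLLECTION REQUEST.
# SUPPORTED and the message/dict shapes are assumptions for illustration only.

SUPPORTED = {"Predicted Radio Resource Status", "Energy Cost",
             "Measured UE Trajectory"}  # assumed capabilities of node 2

active_reports = set()  # information reporting currently running at node 2

def handle_data_collection_request(registration_request, requested_info):
    """Return the message node 2 sends back to node 1."""
    if registration_request == "stop":
        # Stop all measurements and predictions, terminate the reporting.
        active_reports.clear()
        return {"message": "DATA COLLECTION RESPONSE"}

    admitted = [i for i in requested_info if i in SUPPORTED]
    refused = [i for i in requested_info if i not in SUPPORTED]

    if not admitted:
        # None of the requested information can be initiated.
        return {"message": "DATA COLLECTION FAILURE",
                "cause": "requested information not supported"}

    active_reports.update(admitted)
    response = {"message": "DATA COLLECTION RESPONSE"}
    if refused:
        # Partial admission: report the per-item initiation results.
        response["Measurement Initiation Result List"] = (
            [{"item": i, "initiated": True} for i in admitted] +
            [{"item": i, "initiated": False} for i in refused])
    return response
```

For example, a request for only supported items yields a plain RESPONSE, a mixed request yields a RESPONSE carrying the result list, and a fully unsupported request yields a FAILURE.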
30577dbcedb3290f42b3a55986900ea7
22.850
7 Overall Evaluation
Editor's note: This clause will provide a general evaluation of potential terminology inconsistency #X and potential feature misalignment #X.
30577dbcedb3290f42b3a55986900ea7
22.850
7.1 General evaluation on AI/ML related terminology
The study reviewed AI/ML-related activities across the TSG SA, TSG RAN and TSG CT Working Groups to identify potential misalignment in AI/ML terminology and AI/ML feature descriptions across Working Groups. The study confirms that AI/ML-related terminology across 3GPP WGs is broadly aligned at the conceptual level but shows differences in emphasis, scope and granularity. Most variations are intentional and linked to domain requirements and the relevant specific use cases, rather than contradictions. The analysis of AI/ML terminology revealed that several AI/ML terms overlap across Working Groups. A set of unified terminologies has been developed during the study, providing consistent definitions for key concepts such as ML model, ML model training, ML model inference, ML model lifecycle management, Functionality-based lifecycle management, Federated Learning (including Horizontal and Vertical), and Transfer Learning.
30577dbcedb3290f42b3a55986900ea7
22.850
7.2 Detailed evaluation on AI/ML-related terminology
- ML model: Defined with differences across SA WG5, SA WG6, and RAN WG1, but all converge on a mathematical construct producing outputs from inputs.
- ML model training: Consistently described as iterative optimisation of parameters, though WG perspectives differ (e.g. lifecycle management in SA WG5, AI/ML service enablement via AIMLE in SA WG6, performance-driven framing in RAN WG1). These variations are complementary.
- ML model distributed training: Defined only by SA WG5 as distributing workload across training functions.
- ML model re-training / model update: SA WG5 defines re-training as generating a new version without altering structure, while RAN WG1 and SA WG6 use update with broader scope (parameter and/or structure). Unified definitions have been proposed to reduce redundancy: re-training = generating a new version of a previous model; model update = broader concept covering re-training, parameter adjustment, structural modification, or deployment of a new version.
- ML model testing: SA WG5 defines testing as a distinct lifecycle stage; RAN WG1 considers it a subprocess of training. SA WG5's lifecycle distinction provides clearer management separation, while RAN WG1 highlights operational evaluation during training. Not contradictory but conceptually different.
- ML model pre-specialised training / fine-tuning: Defined only in SA WG5. Pre-specialised training produces a task-agnostic model with wide inference scope, while fine-tuning narrows this scope to a new single inference type. This layered paradigm differs from traditional training/re-training and supports modular reuse across domains.
- Functionality-based lifecycle management: Defined only in RAN WG1 and used by RAN WG2. Signalling procedure where the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g. RRC, MAC-CE, DCI); operates based on, at least, one configuration of an AI/ML-enabled Feature / Feature Group or specific configurations of an AI/ML-enabled Feature/FG.
- Federated learning: SA WG5 explicitly contrasts it with distributed training (job-splitting across nodes), clarifying that federated learning is collaborative but privacy-preserving.
- Reinforcement learning (RL): Explicitly defined in SA WG5 (TS 28.105 [9]) as part of SupportedLearningTechnology. RL is also mentioned by other WGs including SA WG2 (TR 23.700-84 [7]) in the context of NWDAF-assisted QoS policy generation, though without a formal definition. The term is consistently understood as a learning paradigm where agents optimise policies through interaction with an environment. No misalignment has been identified, but a baseline reference definition (e.g. in TR 21.905 [1]) would help ensure cross-WG consistency.
- ML model activation / de-activation: SA WG5 defines activation/deactivation at inference function level (capability enablement), while RAN WG1 defines them at model level (feature enablement). These are complementary perspectives. Unified definitions have been proposed to distinguish clearly between function-level and model-level activation.
- AI/ML Inference emulation: Defined only by SA WG5 as an optional lifecycle step to verify deployment suitability. It is not mandated, and no explicit definitions exist in other WGs. Release 19 introduces enhancements such as environment selection to expand its scope, but adoption remains optional.
- The term "output" is proposed as a unified term for e.g. decision, prediction, statistic or recommendation.
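As an illustration of the distinction drawn above, collaborative training in which only model parameters are exchanged and data samples never leave the clients, a single federated-averaging round might look like the following sketch. This is generic illustrative logic under assumed names (`local_update`, `fl_round`), not a procedure specified by any WG:

```python
# Minimal federated-averaging sketch: each FL client trains locally on its
# own data set and shares only model parameters, never the data samples.

def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-parameter linear model y = w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fl_round(global_w, client_datasets):
    """FL server aggregates client updates weighted by local data set size."""
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total  # weighted average

# Two clients with private data drawn from y = 2x; only weights are exchanged.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fl_round(w, clients)  # w converges towards 2.0
```

In distributed training, by contrast, the server would split one training job (and typically the data) across workers; here the server only ever sees parameters, which is the privacy-preserving property noted above.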
30577dbcedb3290f42b3a55986900ea7
22.850
7.3 Evaluation summary on AI/ML-related terminology
The evaluation of AI/ML-related terminology yielded the following observations:
- No critical inconsistencies in terminology have been identified that would block cross-domain AI/ML lifecycle management.
- Differences mainly reflect WG scope and perspective, not fundamental conflicts.
30577dbcedb3290f42b3a55986900ea7
22.850
7.4 Evaluation summary on AI/ML-related features
The analysis of AI/ML-related features has identified that services for AI/ML model training, AI/ML model inference and AI/ML performance are specified across the SA WG2, SA WG5 and SA WG6 Working Groups. Additionally, the analysis of AI/ML-related features has also identified that services for data collection for AI/ML are specified across the SA WG2, SA WG6 and RAN WG3 Working Groups. No detailed analysis was conducted on potential misalignment, inconsistencies or overlap between these services.
30577dbcedb3290f42b3a55986900ea7
22.850
8 Conclusions
Editor's note: This clause will provide information on any potential outcome from clause 5, clause 6 and clause 7 to the respective WGs (according to their Terms of Reference (ToR)) to resolve any issues with appropriate SA-level co-ordination as necessary.
The term "AI/ML" is to be used as a unified definition encompassing "AI/ML", "AI" and "ML" in all corresponding ML related TRs/TSs.
Interim conclusions:
The term "output" is proposed as a unified term for e.g. decision, prediction, statistic or recommendation.
The following definitions are proposed for adoption by WGs, and will be documented in a CR to TR 21.905 [1]:
- ML model: A mathematical algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs. It may include metadata which consists of, e.g. information related to the model and applicable runtime context.
- ML model training: A process to train an ML Model by learning the input/output relationship in a data driven manner and obtain the trained ML Model for e.g. inference.
- ML model inference: A process of running a set of inputs through a trained ML model to produce a set of outputs.
- ML model lifecycle management: The management capabilities allowing a producer or consumer to manage different phases of the ML model lifecycle as defined in clause 6.2.1.7.
- Functionality-based lifecycle management: Signalling procedure where the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g. RRC, MAC-CE, DCI); operates based on, at least, one configuration of an AI/ML-enabled Feature / Feature Group or specific configurations of an AI/ML-enabled Feature/FG.
- Federated Learning: A distributed machine learning approach where the ML model(s) are collaboratively trained by multiple participants, including one acting as an FL server and multiple acting as FL clients, iteratively without exchanging data samples.
- Horizontal Federated Learning: A federated learning technique without exchanging/sharing the local data set, wherein the local data sets in different clients for local model training have the same feature space for different samples.
- Vertical Federated Learning: A federated learning technique without exchanging/sharing local data, wherein the local data sets in different clients for local model training have different feature spaces for the same samples.
- Transfer Learning: A machine learning technique where the knowledge acquired from training one or more ML models is leveraged to enhance the performance or accelerate the training of another ML model.
Annex A: ML Model
A.1 ML model life cycle management (LCM)
The Rel-18 specification addressed the AI/ML LCM management capabilities (including a wide range of use cases, corresponding requirements (stage 1) and solutions (stage 2 NRMs and stage 3 OpenAPIs)) for the ML model, covering the ML model training (which also includes validation), testing, AI/ML inference emulation, deployment and AI/ML inference steps of the lifecycle, as shown below, for managing the entire lifecycle of the ML model.
A.1.1 Observations and analyses: AI/ML LCM
- The AI/ML workflow defined by SA WG5 in TS 28.105 [9] represents a general framework encapsulating the various life cycle management (LCM) operations for an ML model (i.e. model training, testing, emulation, deployment and inference).
- The AI/ML LCM capabilities defined by SA WG5 for each of the operational steps are generic for the management of the 3GPP system, including the Management and orchestration, CN and RAN domains.
- It is important to recognise that "domain-specific" ML model life cycle related tasks can be developed for specific domains by the relevant 3GPP WGs; e.g. the RAN WGs can specify data collection within the RAN domain needed to train the UE-side, network-side, or two-sided UE/network ML models, and specific LCM operations for the UE-side model over the air interface.
- While the ML model and AI/ML inference function life cycle can be specified by the relevant 3GPP WG for the specific domain (i.e. RAN, CN or Management & Orchestration), the "management aspects" of the life cycle (i.e. life cycle management) remain primarily a "management task" that falls within the responsibility of SA WG5.
- While the ML models and the associated "Life Cycle" can be use case and/or domain specific, the management of the Life Cycle (i.e. LCM) is a higher-layer task which is typically a role of the OAM, encompassing e.g. the governance, automation and operational practices applied to the entire AI/ML lifecycle. It is therefore imperative to distinguish between the Life Cycle and Life Cycle Management.
- Where feasible, the ML model LCM workflow and associated management capabilities specified by SA WG5 in TS 28.105 [9] could be considered by 3GPP for the currently ongoing and future relevant specification development. The 3GPP WG(s) should potentially provide AI/ML LCM-related requirements, if any, to SA WG5 to avoid duplication and contention of effort.
NOTE: The SA WG5 Rel-18 specification in TS 28.105 [9] on ML model LCM and the associated management capabilities does not address the UE-side and UE/Network-side Model LCM.
A.2 ML model lifecycle management capabilities
Each step in the ML model lifecycle, i.e. ML model training, ML model testing, AI/ML emulation, ML model deployment and AI/ML inference, corresponds to a number of dedicated management capabilities. The specified capabilities are developed based on corresponding use cases and requirements. The management capabilities specified by SA WG5 in TS 28.105 [9] are highlighted below:
A.2.1 Observations and analyses: ML model lifecycle management capabilities
- ML model lifecycle management (LCM) capabilities are crucial for the effective deployment, operation and optimization of AI/ML-enabled features and capabilities in both the NG-RAN and 5GC.
These capabilities ensure that AI/ML models are not only developed and trained correctly but also tested, deployed, evaluated and operated efficiently in the network environment.
- The management capabilities outlined in TS 28.105 [9] offer a structured approach to managing the various steps of the ML model lifecycle. This structured approach is applicable to AI/ML-enabled features and capabilities in the NG-RAN, 5GC and management system, ensuring consistency and reliability in the deployment and operation of AI/ML technologies for different domains.
- The AI/ML LCM management capabilities are foundational for integrating advanced AI/ML features into 5G networks. By ensuring that ML models are effectively managed from the training step through to inference, these capabilities provide robust and reliable AI/ML-driven network enhancements.
- The AI/ML LCM workflow and associated management capabilities specified by SA WG5 in TS 28.105 [9] should be considered as the baseline for the AI/ML E2E framework in 3GPP. These capabilities provide a comprehensive foundation for ensuring that AI/ML models and related processes are consistently managed across all steps of their lifecycle, promoting seamless integration and operation for all domains within the 5G system.
A.3 AI/ML functionalities management scenarios
The Rel-18 specification TS 28.105 [9] also documented AI/ML functionalities management scenarios in relation to managed AI/ML features, which describe the possible locations of the ML training function and AI/ML inference function involving the various 3GPP system domains.
A.3.1 Observations and analyses: AI/ML functionalities management scenarios
- The functional arrangement scenarios defined by the SA WG5 specifications demonstrate that different parts of the ML model life cycle can be managed depending on the use case.
- The functional arrangements represent management deployment scenarios where, for example, ML model training related tasks can either be domain specific or a cooperative multi-domain task involving, for example, the RAN and management & orchestration domains or the CN and management & orchestration (OAM) domains.
- The LCM workflow defined by SA WG5 serves as a management framework to accommodate and enable all the possible functional arrangement scenarios within or across domains in the 3GPP system.
- The functional arrangement scenarios, coupled with the ML model LCM as defined by SA WG5 in TS 28.105 [9], can be considered in the ongoing and any future 3GPP relevant specification development.
Annex B: Change history
Date | Meeting | TDoc | CR | Rev | Cat | Subject/Comment | New version
2024-09 | TSG SA#105 | SP-241367 | - | - | - | Proposed skeleton agreed for FS_AIML_CAL at TSG SA#105 | 0.0.0
2024-09 | TSG SA#105 | - | - | - | - | Implementing following approved papers: SP-241407, SP-241395, SP-241408, SP-241397, SP-241409, SP-241410. | 0.1.0
2024-12 | TSG SA#106 | - | - | - | - | Implementing following approved papers: SP-241834, SP-241982, SP-241839, SP-241983, SP-241965, SP-241984, SP-241985, SP-241986, SP-241987, SP-241988. | 0.2.0
2025-03 | TSG SA#107 | - | - | - | - | Implementing following approved papers: SP-250404, SP-250405, SP-250406, SP-250299, SP-250346, SP-250407, SP-250349, SP-250408, SP-250409, SP-250410, SP-250411, SP-250412. | 0.3.0
2025-06 | TSG SA#108 | - | - | - | - | Implementing following approved papers: SP-250578, SP-250725, SP-250753, SP-250827, SP-250828, SP-250829, SP-250843, SP-250844, SP-250846, SP-250847, SP-250885, SP-250886. | 0.4.0
2025-09 | TSG SA#109 | - | - | - | - | Implementing following approved papers: SP-251219, SP-251221, SP-251163, SP-251164, SP-251257, SP-251258, SP-251275, SP-251120, SP-251223. | 0.5.0
c3de68f0eebb105002ecbb9ed169af5e
22.851
1 Scope
The present document investigates use cases and potential new requirements related to 3GPP system enhanced support of specific 5G network sharing deployment scenarios, in particular where there is no direct interconnection between the shared NG-RAN and participating operators’ core networks. It includes the following aspects:
- Mobility and service continuity, e.g., when moving from a non-shared 4G/5G network to a shared 5G network and vice versa, with focus on CN aspects.
- Potential security requirements.
- Charging requirements (e.g., based on traffic differentiation in specific network sharing geographical areas).
- User/service experience (e.g., maintaining the communication latency for voice and SMS) when accessing the shared network, including scenarios of home-routed traffic or local breakout.
- Other aspects, e.g., regulatory requirements, emergency services, PWS support.
c3de68f0eebb105002ecbb9ed169af5e
22.851
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[2] 3GPP TS 22.101: "Service principles".
[3] 3GPP TS 22.261: "Service requirements for the 5G system".
[4] 3GPP TS 29.513: "5G System; Policy and Charging Control signalling flows and QoS parameter mapping; Stage 3".
[5] 3GPP TS 23.122: "Non-Access-Stratum (NAS) functions related to Mobile Station (MS) in idle mode".
[6] 3GPP TS 22.011: "Service accessibility".
[7] 3GPP TS 23.502: "Procedures for the 5G System".
[8] 3GPP TS 22.071: "Location Services (LCS); Service description; Stage 1".
[9] "Highway makes desert travel easy", http://en.people.cn/n3/2022/0706/c90000-10119553.html.
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in 3GPP TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in 3GPP TR 21.905 [1].
Indirect Network Sharing: describes the communication between the shared access NG-RAN and the Participating NG-RAN Operator's core network being routed through the Hosting NG-RAN Operator's core network.
Hosted Service: a service containing the operator's own application(s) and/or trusted third-party application(s) in the Service Hosting Environment, which can be accessed by the user.
Service Hosting Environment: the environment, located inside the 5G network and fully controlled by the operator, where Hosted Services are offered from.
Hosting NG-RAN Operator: the operator that has operational control of a shared NG-RAN.
NOTE 1:	A Hosting NG-RAN Operator can also be a Hosting RAN Operator. See 3GPP TS 22.101 [2].
Participating NG-RAN Operator: an authorized operator that is sharing NG-RAN resources provided by a Hosting NG-RAN Operator.
NOTE 2:	A Participating NG-RAN Operator can also be a participating operator. See 3GPP TS 22.101 [2].
Shared NG-RAN: an NG-RAN that is shared among a number of operators.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in 3GPP TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in 3GPP TR 21.905 [1].
HTA	High Traffic Areas
LTA	Low Traffic Areas
MOCN	Multi-Operator Core Network
NG-RAN	Next Generation Radio Access Network
SST	Slice/Service Type
4 Overview
The present document introduces the newly supported network sharing scenario, where an NG-RAN is shared among multiple operators without necessarily assuming a direct connection between the shared radio access network and the participating operators' core networks. Use cases covering service continuity and QoS, access control and mobility, international roamers in a shared network, hosted services, and long-distance road transport are analyzed. This study provides alternatives for operators who intend to deploy an NG Radio Access Network to complement the existing market, taking into account operators' business considerations, such as network planning, operation and other factors.
5 Use Cases
5.1 Use Case on Network Sharing without Direct Connections between the Shared Access and the Core Networks of the Participating Operators
5.1.1 Description
As stated in TS 22.261 [3], the increased density of access nodes needed to meet future performance objectives poses considerable challenges in deployment and in acquiring spectrum and antenna locations. RAN sharing is seen as a technical solution to these issues, and sharing access networks and network infrastructure has become an increasingly important part of 3GPP systems. When two or more operators have deployed, or plan to deploy, 5G access networks and core networks, a Multi-Operator Core Network (MOCN) configuration can be considered for network sharing between these operators, i.e., a configuration in which multiple CN nodes operated by different operators are connected to the same radio access network. One of the challenges for the partner network operators is the maintenance effort generated by the interconnection (e.g., the number of network interfaces) between the shared RAN and two or more core networks, especially for a large number of shared base stations. For these reasons, it is suggested to investigate other types of network sharing scenarios, where a 5G RAN is shared among multiple operators without necessarily assuming a direct connection between the shared access network and the core networks of the participating operators.
5.1.2 Pre-conditions
Two (or more) operators provide coverage with their respective radio access networks in different parts of a country but together cover the entire country. There is an agreement between all the operators to work together and to build a shared network, utilizing the different operators' allocated spectrum appropriately in different parts of the coverage area (for example, Low Traffic Areas (LTA) and High Traffic Areas (HTA)). The hosting operator 1, as illustrated below, can share its NG-RAN with the participating operators with or without direct connections between the shared access and the core networks of the participating operators. The following preconditions apply:
1. OP1 owns the NG-RAN to be shared with three other operators: OP2, OP3 and OP4.
2. The NG-RAN is shared under certain conditions, e.g., within a specific 5G frequency band or within a specific area.
3. The NG-RAN does not have direct connections between the shared access and the core networks of the participating operators OP2 and OP3.
4. OP4 has a MOCN arrangement with OP1.
5. In this example, UE 1 is subscribed to OP1, UE 2 is subscribed to OP2, UE 3 is subscribed to OP3, and UE 4 is subscribed to OP4.
Both options of direct and indirect connections between the shared access and the core networks of the participating operators are illustrated in Figure 5.1.2-1 below. Figure 5.1.2-2 shows the option of Indirect Network Sharing involving the core network of the hosting operator, i.e., an indirect connection between the shared access and the core networks of the participating operators.
Figure 5.1.2-1: Different options of direct and indirect connections between the shared access and the core networks of the participating operators
Figure 5.1.2-2: Indirect Network Sharing scenario involving the core network of the hosting operator between the shared access and the core networks of the participating operators
5.1.3 Service Flows
1) UE 1 can successfully attach to the NG-RAN, and the network operator name displayed is that of OP1.
2) UE 2 can successfully attach to the NG-RAN, and the network operator name displayed is that of OP2.
3) UE 3 can successfully attach to the NG-RAN, and the network operator name displayed is that of OP3.
4) UE 4 can successfully attach to the NG-RAN, and the network operator name displayed is that of OP4.
5) The service provider of UE 1 is OP1.
6) The service provider of UE 2 is OP2.
7) The service provider of UE 3 is OP3.
8) The service provider of UE 4 is OP4.
For UEs accessing the Shared NG-RAN, the network of the hosting operator needs to know which participating operator a UE is registered to and what type of network sharing (e.g., MOCN or otherwise) is in place for that participating operator. The inter-connection between the participating operators' core networks and the Shared NG-RAN of the Hosting Operator can be supported via an element of the Hosting Operator.
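The lookup described above (which participating operator a UE belongs to, and which sharing type applies) can be sketched as follows. This is a hypothetical illustration only: the PLMN identifiers, operator names, and the `SharingType` values are invented assumptions, not 3GPP-defined identifiers or procedures.

```python
# Hypothetical sketch: a Hosting Operator network element mapping a UE's
# registered PLMN to the Participating Operator and the sharing type in force,
# before traffic is routed towards that operator's core network.
# All PLMN IDs and operator names below are illustrative assumptions.
from enum import Enum

class SharingType(Enum):
    MOCN = "MOCN"            # direct connection between shared NG-RAN and CN
    INDIRECT = "Indirect"    # routed via the Hosting Operator's core network

# Sharing agreements held by the Hosting Operator (OP1), per clause 5.1.2.
SHARING_AGREEMENTS = {
    "00101": ("OP1", None),                  # Hosting Operator's own PLMN
    "00102": ("OP2", SharingType.INDIRECT),  # no direct CN connection
    "00103": ("OP3", SharingType.INDIRECT),  # no direct CN connection
    "00104": ("OP4", SharingType.MOCN),      # MOCN arrangement with OP1
}

def resolve_participating_operator(ue_plmn: str):
    """Return (operator, sharing type) for a UE registered to ue_plmn;
    raises KeyError if no sharing agreement exists for that PLMN."""
    return SHARING_AGREEMENTS[ue_plmn]
```

In this sketch, a KeyError corresponds to a UE whose home operator has no agreement with the hosting operator, so access would be rejected.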
5.1.4 Post-conditions
The hosting network will be able to provide access to all participating operators' users.
5.1.5 Existing Features partly or fully covering Use Case Functionality
Network sharing has been studied in previous releases; related normative stage 1 requirements are introduced in 3GPP TS 22.101 [2] and TS 22.261 [3].
3GPP TS 22.101 [2] introduces general requirements for network sharing, stated as follows:
Network sharing shall be transparent to the user.
The specifications shall support both the sharing of: (i) radio access network only; (ii) radio access network and core network entities connected to radio access network.
NOTE: In a normal deployment scenario only one or the other option will be implemented.
The provisioning of services and service capabilities is described in 3GPP TS 22.101 [2]:
The provision of services and service capabilities that is possible to offer in a network shall not be restricted by the existence of the network sharing.
It shall be possible for a core network operator to differentiate its service offering from other core network operators within the shared network.
It shall be possible to control the access to service capabilities offered by a shared network according to the core network operator the user is subscribed to.
The selection of a 3GPP access network is described in 3GPP TS 22.261 [3], clause 6.19:
The UE uses the list of PLMN/RAT combinations for PLMN selection, if available, typically during roaming situations. In non-roaming situations, the UE and subscription combination typically matches the HPLMN/EHPLMN capabilities and policies, from a SST (slice/service type) perspective. That is, a 5G UE accessing its HPLMN/EHPLMN should be able to access SSTs according to UE capabilities and the related subscription.
[…]
The 5G system shall support selection among any available PLMN/RAT combinations, identified through their respective PLMN identifier and Radio Access Technology identifier, in a prioritised order. The priority order may, subject to operator policies, be provisioned in an Operator Controlled PLMN Selector list with associated RAT identifiers, stored in the 5G UE.
The 5G system shall support, subject to operator policies, a User Controlled PLMN Selector list stored in the 5G UE, allowing the UE user to specify preferred PLMNs with associated RAT identifier in priority order.
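The prioritised selection among the two selector lists quoted above can be sketched as follows. This is a minimal illustration only; the list contents are invented, and the actual PLMN selection procedure is specified in TS 23.122 [5].

```python
# Illustrative sketch of prioritised PLMN/RAT selection: the UE scans the
# available (PLMN, RAT) combinations and picks the highest-priority entry
# from its selector lists, trying the User Controlled PLMN Selector list
# before the Operator Controlled one. All list entries are assumptions.
USER_CONTROLLED = [("00102", "NR"), ("00102", "E-UTRA")]   # user preferences
OPERATOR_CONTROLLED = [("00101", "NR"), ("00103", "NR")]   # operator policy

def select_plmn(available, user_list=USER_CONTROLLED, op_list=OPERATOR_CONTROLLED):
    """Return the first selector-list entry (in stored priority order) that
    is currently available, or None if no listed combination is found."""
    for entry in user_list + op_list:
        if entry in available:
            return entry
    return None
```

For example, if only ("00102", "E-UTRA") and ("00103", "NR") are available, the user-controlled entry ("00102", "E-UTRA") wins despite the operator list also matching.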
5.1.6 Potential New Requirements needed to support the Use Case
[PR 5.1.6-001] The 5G system shall be able to support network sharing with indirect connection between the Shared NG-RAN and one or more Participating NG-RAN Operators' core networks.
[PR 5.1.6-002] The 5G system shall be able to support means for Participating Operators to provide their operator's name to a registered UE, for display to the user.
5.2	Use Case on Service Continuity and QoS
5.2.1 Description
In the Indirect Network Sharing scenario, the requirements on the services provided by the participating operator and the hosting operator for a UE moving between their service areas need to be clearly defined. These service considerations are based on the user subscriptions and the charging requirements of the participating operator. The service principle is not expected to differ significantly from MOCN access sharing. The business aspects here include not only the operator's name displayed in the UE UI, but also the service logic provided by both the participating operator and the hosting operator for services such as voice, SMS and data communications for the UE.
5.2.2 Pre-conditions
Assumptions:
1. OP 1 is a Hosting NG-RAN Operator.
2. The core network of OP 2 does not have a direct connection with OP 1's Shared NG-RAN.
3. There is a connection between OP 1's CN and OP 2's CN.
4. UE 1 belongs to OP 1. UE 2 and UE N belong to OP 2.
5.2.3 Service Flows
Figure 5.2.3-1: Basic service scenario without direct connections between the Shared NG-RAN and the core networks of the participating operators
1. UE 2 successfully registers to OP 2's PLMN via the Shared NG-RAN.
2. UE 1 successfully registers to OP 1's PLMN.
3. UE N successfully registers to OP 2's PLMN via OP 2's wireless access network.
4. UE 2 initiates a service, for example a 5G voice call to user N, and succeeds under the shared network.
- OP 2 and OP 1 do not need to expose their IMS networks to each other; e.g., UE 2 can access services provided by its home environment in the same way even if it moves into the coverage of OP 1's Shared NG-RAN, and it is not necessary to assume that the IMS network of OP 1 is involved in this scenario.
- When UE 2 moves from the shared network to OP 2's 4G area, the call continues.
- When UE 2 moves from the shared network to OP 2's 5G area, the call does not drop.
5. UE 2 may also use other services under the shared network, just as it usually does in OP 2's network, even though OP 1 and OP 2 have different network services and service capabilities.
6. The network sharing partners may set a specific sharing allocation model per network sharing method. It is necessary to know the number of users and for how long users use a certain sharing method. It should therefore be possible to collect charging information associated with the sharing method through which the UE accesses the network.
5.2.4 Post-conditions
The service of UE 2 succeeds when UE 2 moves between the shared access network and the participating operator's network.
5.2.5 Existing Features partly or fully covering Use Case Functionality
SA1 has performed various studies on service aspects in previous releases. For the definition of service continuity and the description of user experience, see 3GPP TS 22.261 [3]. Requirements on service capabilities are described in 3GPP TS 22.101 [2] as follows:
The provision of services and service capabilities that is possible to offer in a network shall not be restricted by the existence of the network sharing.
It shall be possible for a core network operator to differentiate its service offering from other core network operators within the shared network.
It shall be possible to control the access to service capabilities offered by a shared network according to the core network operator the user is subscribed to.
The 3GPP System shall support service continuity for UEs that are moving between different Shared RANs or between a Shared RAN and a non-shared RAN.
Subscribers shall have a consistent user experience regardless of the domain used, subject to the constraints of the UE and access network.
Some 5G-specific requirements are described in TS 22.261 [3] as follows:
The 5G system shall support mobility procedures between a 5G core network and an EPC with minimum impact to the user experience (e.g. QoS, QoE).
The 5G system shall support inter- and/or intra- access technology mobility procedures within 5GS with minimum impact to the user experience (e.g. QoS, QoE).
In addition to the charging requirements of the 5G system introduced in clause 9 of TS 22.261 [3], the following requirement is defined in TS 22.101 [2]:
Charging and accounting solutions shall support the shared network architecture so that end users can be appropriately charged for their usage of the shared network, and network sharing partners can be allocated their share of the costs of the shared network resources.
Further charging requirements are specified in TS 29.513 [4].
5.2.6 Potential New Requirements needed to support the Use Case
[PR 5.2.6-001] The 5G core network shall be able to support collection of charging information associated with a UE accessing a Shared NG-RAN using Indirect Network Sharing.
[PR 5.2.6-002] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall support service continuity for UEs that are moving between different Shared NG-RANs or between a Shared NG-RAN and a non-shared RAN (managed by Hosting and Participating Operators).
[PR 5.2.6-003] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall be able to minimize the impact to the user experience (e.g., QoS, QoE) of UEs in an active communication moving between different Shared NG-RANs or between a Shared NG-RAN and a non-shared RAN (managed by Hosting and Participating Operators).
[PR 5.2.6-004] In case of Indirect Network Sharing, the 5G system shall support a mechanism for a UE to access the subscribed PLMN services when entering a Shared NG-RAN.
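The kind of charging-information collection called for by PR 5.2.6-001 can be sketched as follows. This is a hedged illustration under stated assumptions: the record fields, sharing-method labels, and aggregation key are all invented for the example and are not 3GPP charging data structures.

```python
# Minimal sketch of PR 5.2.6-001: collecting charging information tagged with
# the sharing method a UE used to access the Shared NG-RAN, so that usage can
# later be allocated between the sharing partners. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ChargingRecord:
    ue_id: str
    home_operator: str     # Participating Operator the UE is subscribed to
    sharing_method: str    # e.g. "MOCN" or "Indirect"
    duration_s: int        # connection time in seconds

def usage_per_operator(records):
    """Aggregate total connection time per (home operator, sharing method),
    as input to cost allocation between sharing partners."""
    totals = {}
    for r in records:
        key = (r.home_operator, r.sharing_method)
        totals[key] = totals.get(key, 0) + r.duration_s
    return totals
```

Keeping the sharing method as a key makes it possible to bill indirect-sharing traffic differently from MOCN traffic, in line with the allocation model mentioned in step 6 of the service flows.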
5.3 Use Case on Network Access Control and Mobility between Sharing Parties
5.3.1 Description
It is worth mentioning that 5G networks have been designed from the beginning to support shared facilities. This means that where a 4G network has both non-shared and shared E-UTRAN at the same time, there could be a number of different types of coverage in the same region:
- non-shared E-UTRA coverage,
- shared E-UTRA coverage,
- non-shared 5G NR coverage,
- shared 5G NR coverage.
This study introduces a new sharing method without a direct connection between the Shared NG-RAN and the core networks of the participating operators. Therefore, network access control and mobility management need to be considered when introducing the potential requirements of this new sharing method alongside existing networks.
5.3.2 Pre-conditions
1. It is assumed that OP1, OP2 and OP3 have deployed 4G and 5G networks.
NOTE: The home 4G and 5G networks of OP1, OP2 and OP3 may be deployed with non-shared and shared wireless access technology.
- Both OP1 and OP3 are Hosting NG-RAN Operators, which share their NG-RAN with the participating operator OP2.
- UEs subscribe to OP2's home PLMN.
- UEs may register successfully to OP1's and OP3's shared 5G networks.
2. Both operators (i.e., OP1 and OP3) agreed to share their networks via an indirect connection between the shared radio access network and OP2's core network.
3. Potential scenario 1: The coverage of OP1's and OP3's shared 5G networks may overlap with OP2's 4G network, and may also overlap with OP2's 5G network (i.e., at the OP1/OP2 border and at the OP3/OP2 border).
4. Potential scenario 2: The coverage of OP1's shared 5G network may overlap with OP3's shared 5G network (i.e., at the OP1/OP3 border). Some parts of the shared areas do not overlap with OP2's network. There are non-shared areas in OP1's and OP3's networks as well.
5.3.3 Service Flows
Mobility and access control scenarios in the shared network are illustrated in the following:
• Scenario 1 (Figure 5.3.3-1a): a UE with a subscription from OP2 moves between OP2's own 4G access networks and either OP1's or OP3's shared 5G networks.
• Scenario 2 (Figure 5.3.3-1b): a UE with a subscription from OP2 moves between OP2's own 5G access networks and either OP1's or OP3's shared 5G networks.
• Scenario 3 (Figure 5.3.3-2): a UE with a subscription from OP2 moves between the coverage of OP1's and OP3's shared 5G access networks.
Figure 5.3.3-1a: Scenario 1: a UE with a subscription from OP2 moves between OP2's own 4G access networks and either OP1's or OP3's shared 5G networks
Figure 5.3.3-1b: Scenario 2: a UE with a subscription from OP2 moves between OP2's own 5G access networks and either OP1's or OP3's shared 5G networks
NOTE 1: OP1_5G/OP3_5G are OP1's/OP3's shared 5G networks via an indirect connection between the shared radio access network and OP2's core network in Figure 5.3.3-1a and Figure 5.3.3-1b.
NOTE 2: OP2_5G/OP2_4G are OP2's networks, which may be MOCN networks or non-shared networks.
1. The UE connects to the participating OP2_4G network and then accesses the hosting OP1_5G or OP3_5G network when the UE crosses the border between the shared network managed by the hosting operators and OP2's own 4G access networks in scenario 1 (shown as ① in Figure 5.3.3-1a).
2. The UE connects to the participating OP2_5G network and then accesses the hosting OP1_5G or OP3_5G network when the UE crosses the border between the shared network managed by the hosting operator and OP2's own 5G access networks in scenario 2 (shown as ② in Figure 5.3.3-1b).
Figure 5.3.3-2: Scenario 3: a UE with a subscription from OP2 moves between the coverage of OP1's and OP3's shared 5G access networks
3. The UE connects to the hosting OP1_5G network and then accesses the hosting OP3_5G network when the UE crosses the border between the two shared networks managed by different hosting operators in scenario 3 (shown as ③ in Figure 5.3.3-2).
4. The UE accesses the shared network via the indirect network sharing method in the specific geographical area, shown as OP1_5G (shared) and/or OP3_5G (shared) in Figure 5.3.3-2, based on the agreements between the hosting and the participating operators.
5. The UE connects to an appropriate access network when the user moves to an area where more than one operator's access networks provide connectivity, e.g., OP3's existing 5G network and OP2's 4G network, or OP1's existing 5G network and OP3's 5G network, based on the agreement between the hosting and participating operators.
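The steering decision in step 5, choosing one access network when several overlap, can be sketched as follows. The network names, the priority order, and the steering logic itself are illustrative assumptions; in practice the choice is governed by the inter-operator agreement and the UE's selector lists.

```python
# Hypothetical sketch of step 5: when multiple networks overlap at the UE's
# location, the UE is steered to one of them according to a priority order
# agreed between the hosting and participating operators. The order below
# (own networks before shared ones) is an invented example.
AGREED_PRIORITY = ["OP2_5G", "OP2_4G", "OP1_5G_shared", "OP3_5G_shared"]

def steer_ue(overlapping_networks):
    """Return the highest-priority network among those available at the UE's
    current location, or None if no listed network is present."""
    for net in AGREED_PRIORITY:
        if net in overlapping_networks:
            return net
    return None
```

With this example order, a UE at the OP3/OP2 border (scenario 1) would be steered to OP2's own 4G network rather than the shared 5G network.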
5.3.4 Post-conditions
All forms of mobility (i.e., between participating operators' RANs and the shared RAN, for both CONNECTED-mode and IDLE-mode UEs, see clause 3.1 of TS 23.122 [5]) are successfully processed in a sharing scenario without direct connections between the shared access and the core networks of the participating operators.
5.3.5 Existing Features partly or fully covering Use Case Functionality
SA1 has performed various studies on mobility and network sharing in previous releases; related normative stage 1 requirements are introduced in 3GPP TS 22.101 [2] and TS 22.261 [3].
3GPP TS 22.261 [3] introduces requirements on diverse mobility management, stated as follows:
The 5G system shall support inter- and/or intra- access technology mobility procedures within 5GS with minimum impact to the user experience (e.g. QoS, QoE).
3GPP TS 22.261 [3] describes various access related requirements, stated as follows:
Based on operator policy, the 5G system shall support steering a UE to select certain 3GPP access network(s).
3GPP TS 22.101 [2] introduces requirements on mobility in network sharing, stated as follows:
It shall be possible to support different mobility management rules, service capabilities and access rights as a function of the home PLMN of the subscribers.
The above requirements are based on MOCN.
5.3.6 Potential New Requirements needed to support the Use Case
[PR 5.3.6-001] In case of Indirect Network Sharing, the 3GPP system shall support mechanisms to minimize service interruptions for UEs that are moving between different Shared RANs or between a Shared RAN and a non-shared RAN (managed by Hosting and Participating Operators).
[PR 5.3.6-002] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall support access control for a UE accessing a Shared NG-RAN.
[PR 5.3.6-003] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall be able to select an appropriate radio access network (i.e., 4G, 5G) for a UE.
[PR 5.3.6-004] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall support a mechanism to enable a UE with a subscription to a Participating Operator to access an authorized Shared NG-RAN.
5.4 Use Case on International Roaming Users in a Shared Network
5.4.1 Description
Due to different operators' business considerations, such as network planning, operation and other factors, some 5G deployment scenarios introduce a demand for 5G network sharing with an indirect connection between the shared access network and a participating operator's core network, especially in countries with a wide range of rural areas where seamless 5G radio coverage is hard to achieve. Benefiting from 5G network sharing technology, an operator can not only save investment in 5G deployment, but also expand 5G coverage over a vaster area. However, in this case, it is also required to allow inbound international roaming subscribers to use the shared resources of a hosting operator. Therefore, it is suggested to investigate how to realize this scenario.
5.4.2 Pre-conditions
In the 5G scenario, an operator may deploy its network as a shared network applicable to other operators. As illustrated in Figure 5.4.2-1, three operators (OP1, OP2, OP3) are involved:
- OP1, as a Hosting NG-RAN Operator, owns an applicable Shared NG-RAN which can be shared with other operators (e.g., OP2).
- OP2, as a Participating NG-RAN Operator, has a 5G network sharing agreement with OP1. In addition, OP2 may have its own non-shared 5G network.
- The Shared NG-RAN of OP1 does not have a direct connection with the core network of the participating operator OP2.
- OP3 is a roaming partner of OP2, but has no roaming agreement with OP1.
- The UE subscribes to OP3's network, a foreign operator's network.
Figure 5.4.2-1: Scenario of International Roaming Users in a Shared Network
5.4.3 Service Flows
When a UE of OP3 travels into the coverage of OP2's NG-RAN, it can enjoy services provided by its home environment in another country via OP2's network. When a UE of OP3 travels into the coverage of OP1's shared NG-RAN, it can enjoy services provided by its home environment in another country via OP1's shared access network and OP2's network.
5.4.4 Post-conditions
The service experience and treatment of OP2's UEs and OP3's UEs in the coverage of OP1's Shared NG-RAN are the same, based on the agreement between the operators.
5.4.5 Existing Features partly or fully covering Use Case Functionality
3GPP TS 22.011 [6] introduces requirements on roaming in shared networks, as described below:
The following requirements are applicable to GERAN, UTRAN, E-UTRAN and NG-RAN sharing scenarios:
- When network sharing exists between different operators and a user roams into the shared network it shall be possible for that user to register with a core network operator (among the network sharing partners) that the user's home operator has a roaming agreement with, even if the operator is not operating a radio access network in that area.
Other roaming related requirements are described in TS 22.101 [2]:
3GPP specifications should be in compliance with the following objectives:
e) to provide support of roaming users by enabling users to access services provided by their home environment in the same way even when roaming.
5.4.6 Potential New Requirements needed to support the Use Case
[PR 5.4.6-001] The 5G system shall enable the Shared NG-RAN of a Hosting Operator with an indirect connection between the Shared NG-RAN and a Participating NG-RAN Operator's core network to provide services for inbound roaming users.
NOTE: Inbound roaming users mentioned above refer to the subscribers of a foreign operator having a roaming agreement with one participating operator.
5.5	Use Case on Hosted Services
5.5.1 Description
Subscribers have the opportunity to access Hosted Services via their own operator's network, as specified in TS 22.261 [3]. These Hosted Services contain applications provided by operators and/or trusted third parties. Usually, operators use technologies to optimize the user plane path to reduce latency and bandwidth pressure when the Hosted Services run in the operator's Service Hosting Environment. GSMA describes a scenario in which the Hosted Services of a party operator are located at a shared edge node of the hosting operator; the Hosted Services are then also applicable to users accessing through the PLMN of the party operator. A shared edge node means that the hosting operator deploys applications on cloud resources in its edge node as requested by a party operator and provides an application endpoint to the party operator (see S2-2208120). However, when users access through a shared network, the Hosted Services are still expected to be available. This use case provides the possibility for subscribers to access Hosted Services via shared networks.
5.5.2 Pre-conditions
Assumptions:
1. OP1 is a Hosting NG-RAN Operator.
2. The core network of OP2, as the participating operator, does not have a direct connection with OP1's shared NG-RAN.
3. There is a connection between OP1's CN and OP2's CN.
4. OP2 provides Hosted Services for subscribers in local areas 1 and 2. OP2 wishes to provide Hosted Services for subscribers accessing through OP1's and OP2's networks in local area 1.
5.5.3 Service Flows
Figure 5.5.3-1: Scenario of visiting Hosted Services via a shared network
1. The UE is a subscriber of OP2, and enjoys the Hosted Services of OP2 in local area 2.
2. The UE is ready to move to the shared network of OP1. During this process, the Service Hosting Environment hosting the service may or may not change.
3. An available Service Hosting Environment can still be found in local area 1 to provide services for the UE. The Hosted Services are applicable when the UE crosses between local areas 1 and 2 and/or stays within area 1.
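The per-area lookup implied by the flow above can be sketched as follows. This is only an illustration: the area names, environment identifiers, and the idea of a simple area-to-environment map are assumptions; real Service Hosting Environment selection depends on user plane path management (see clause 5.5.5).

```python
# Illustrative sketch of step 3: as the UE moves between local areas, the
# network picks a Service Hosting Environment able to serve the Hosted
# Service in the UE's current area. All names below are invented.
HOSTING_ENVIRONMENTS = {
    "area1": "SHE-area1",   # reachable via both OP1's shared and OP2's network
    "area2": "SHE-area2",   # reachable via OP2's own network
}

def select_hosting_environment(ue_area):
    """Return the Service Hosting Environment serving the UE's area,
    or None if no environment is available there."""
    return HOSTING_ENVIRONMENTS.get(ue_area)
```

In this sketch, a UE crossing from area 2 into area 1 is simply re-mapped to "SHE-area1", mirroring the case where the hosting environment changes with the UE's location.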
5.5.4 Post-conditions
Access to the Hosted Service via the shared network succeeds when the UE moves between the shared access network and the participating operator's network.
5.5.5 Existing Features partly or fully covering Use Case Functionality
Clause 6.5 of TS 22.261 [3] introduces the relevant terms and related requirements of Hosted Services. Some typical requirements for services in network sharing are:
5G is designed to meet diverse services with different and enhanced performances (e.g. high throughput, low latency and massive connections) and data traffic model (e.g. IP data traffic, non-IP data traffic, short data bursts and high throughput data transmissions). User plane should be more efficient for 5G to support differentiated requirements. On one hand, a Service Hosting Environment located inside of operator's network can offer Hosted Services closer to the end user to meet localization requirement like low latency, low bandwidth pressure. These Hosted Services contain applications provided by operators and/or trusted 3rd parties. On the other hand, user plane paths can be selected or changed to improve the user experience or reduce the bandwidth pressure, when a UE or application changes location during an active communication.
In addition, clause 4.23.9 of TS 23.502 [7] provides further related work on efficient user plane handling, mainly describing an additional PDU Session Anchor and a Branching Point or UL CL controlled by an I-SMF.
5.5.6 Potential New Requirements needed to support the Use Case
[PR 5.5.6-001] In case of Indirect Network Sharing, the 5G system shall support a mechanism to enable a UE subscribed to the participating operator to access the participating operator's Hosted Services.
5.6	Use Case on Emergency Call
5.6.1 Description
The network is allowed to route an emergency call to the emergency response centre in the area where the operator provides services, as specified in TS 22.101 [2], in accordance with national regulations and the subscriber's location. In order to route emergency calls to the correct emergency response centre, operators may need to use the UE location information. Operators may use different location information depending on the requirements of national regulation. Sometimes operators have to change the network configuration to provide correct routing or location-dependent services in an area. This use case covers the possibility that a subscriber of the participating operator initiates an emergency call in the shared network, which raises the problem, for the service provider, of routing the emergency call according to location information obtained from the hosting operator.
5.6.2 Pre-conditions
Assumptions:
1. OP-1 is a Hosting NG-RAN Operator. There is a connection between OP-1’s CN and OP-2’s CN, while OP-2, as a participating operator, indirectly shares the network.
2. OP-2, as the participating operator, is an emergency call service provider. The network of OP-2 is connected to the emergency response centre located in Area 1.
3. OP-3 is the Hosting RAN operator of NG-RAN and E-UTRAN, and OP-3 and OP-2 have a sharing agreement with MOCN.
5.6.3 Service Flows
Figure 5.6.3-1: Emergency call routing in network sharing scenarios

1. UE is registered to OP-2. UE accesses the shared network of OP-1 and is registered with OP-2’s IMS. The nearest emergency response centre to the UE's current geographic location is in Area 1, as shown in Figure 5.6.3-1.
2. UE initiates an emergency call in the shared network; it is most reasonable for the core network of OP-2 to route the emergency call to the PSAP within Area 1.
3. OP-2’s network routes the emergency call to the PSAP in Area 1 according to the location information transmitted by the shared network.
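The routing decision in step 3 can be sketched as a simple lookup from the reported location to the serving PSAP. This is a hypothetical illustration: the `LocationInfo` structure, the area identifiers and the PSAP addresses are all invented assumptions, not defined by TS 22.101 or this TR.

```python
# Hypothetical sketch of step 3: the participating operator (OP-2) routes
# an emergency call to the PSAP serving the caller's area, using location
# information received from the shared network. Structure and names are
# illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class LocationInfo:
    """Location reported by the shared network (e.g. a cell-derived area)."""
    area_id: str


# Illustrative area-to-PSAP mapping maintained by the participating operator.
PSAP_BY_AREA = {
    "area-1": "psap.area1.example",
    "area-2": "psap.area2.example",
}


def route_emergency_call(location: LocationInfo) -> str:
    """Return the PSAP address for the caller's current area."""
    try:
        return PSAP_BY_AREA[location.area_id]
    except KeyError:
        # Fall back to a default PSAP if the reported area is unknown.
        return "psap.default.example"
```

The key point of the use case is that the lookup depends on location information which, under Indirect Network Sharing, is delivered through the hosting operator's network.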
5.6.4 Post-conditions
The emergency call can be correctly routed to the appropriate PSAP to serve the UE according to its location information.
5.6.5 Existing Features partly or fully covering Use Case Functionality
Clause 10 of TS 22.101 [2] defines the emergency call service. Depending on national regulations, the requirements on the UE’s location for the emergency call service differ between deployment scenarios, for example:

National regulations may require wireless networks to provide the emergency caller's location. This requirement typically overrides the caller's right to privacy with respect to their location being revealed, but remains in effect only as long as the authorities need to determine the caller's location. The interpretation of the duration of this privacy override may also differ, subject to national regulation. For example, some countries require location to be available from the wireless network only while the call is up, while others may allow PSAPs to unilaterally decide how long the location must be made available. Therefore, the requirement for providing location availability is to allow the network to support providing a mobile caller's location to the authorities for as long as required by the national regulation in force for that network.

NOTE:	See TS 22.071 [8] for location service requirements on emergency calls.
5.6.6 Potential New Requirements needed to support the Use Case
[PR 5.6.6-001] Subject to national or regional regulatory and operator policies, the 5G system shall support emergency calls from a UE accessing a Shared NG-RAN via Indirect Network Sharing.

[PR 5.6.6-002] Subject to national or regional regulatory and operator policies, the 5G system shall provide to the Participating Operator the location information of an emergency caller accessing a Shared NG-RAN via Indirect Network Sharing.
5.7 Use Case on Long-distance Mobility in and across Shared Networks
5.7.1 Description
Although aircraft and rail transport have been developed to serve more and more regions, road transport remains the most important way to deliver goods. As the logistics industry booms, the total length of long-distance road transport has increased every year over the last ten years. How to provide communication service during long-distance transport is therefore a valuable topic for serving transportation and logistics customers. Network sharing offers a way for network operators to share the heavy deployment cost of mobile networks, as well as to improve service availability and user experience along transport routes. Compared with the MOCN configuration, network sharing with an indirect connection between the Shared NG-RAN and the core network of participating operators allows flexible access options for the sharing parties to consider. For example, one network operator needs to provide communication service along a dedicated transport route across dense urban, rural, and desert areas, but cannot find one hosting operator with full coverage of the whole route. It is possible to cooperate with a terrestrial network operator and a 3GPP satellite network operator separately to ensure full coverage. Also, with an agreement, it is possible to leverage the temporary coverage enhancement plans of hosting operators for certain regions, such as deploying onboard base stations in dense urban areas, to provide a better user experience to their own users.

Figure 5.7.1-1: Desert Highway across shared networks [9]
5.7.2 Pre-conditions
It is assumed that network operators OP1, OP2, and OP3 have each deployed 5G networks in a country. Their radio access networks provide coverage to different regions of the country, and together cover the entire country with overlap in certain regions. OP1 and OP3 have agreed with OP2 to share their radio access networks in the planned geographic area via an indirect connection between their shared radio access networks and OP2’s core network.
- OP1 is a Hosting NG-RAN Operator, who has deployed different types of 5G access nodes (e.g., Macro, Micro, onboard, and IAB). It only shares Macro base stations (e.g., 5G BS#A, BS#B, BS#C) with participating operator OP2 but allows access to dedicated types of access nodes (e.g., onboard) temporarily according to the agreement and the policy.
- OP3 is another Hosting RAN Operator, sharing 3GPP Satellite access nodes with OP2.
- OP2 has a non-shared 5G access network deployed in some regions of the country, shown as 4G&5G BS.
- The UE on the vehicle supports all 5G RATs and has a subscription with OP2’s network.

Figure 5.7.2-1: Long-distance mobility in and across shared networks
5.7.3 Service Flows
1. UE on the vehicle connects to OP2’s network via OP1 shared access nodes and is authorized to initiate a communication service (e.g., data transfer, IMS voice call) during the movement.
2. In geographic Area #A, UE connects to OP2’s network only through access nodes like 5G BS #A, based on the agreement and the policies of OP1 and OP2.
3. A regional earthquake happens and results in unstable performance of 5G BS #B in Area #B. OP1’s UAV base stations are shared with OP2 as a supplementary or alternative access node based on the OP1 and OP2 agreement.
4. When UE reaches Location Loc #A, where it can detect the radio signal of both 5G BS #B and the UAV base station, UE will access a proper OP1 access node (e.g., 5G UAV BS) to continue the service, according to the subscription and the policy and agreement of OP1 and OP2.
5. When UE reaches Location Loc #B, where it can detect the radio signal of OP1 5G BS#C and the OP3 Satellite access node, UE will access a proper Hosting RAN operator’s access node (e.g., OP3 Satellite) to continue the service, according to the subscription and the policy and agreement of OP1, OP3 and OP2.
6. When UE reaches Location Loc #C, where it can detect the radio signal of the OP3 Satellite access node and OP2 4G&5G BS, UE will select OP2 5G BS to continue the service according to the policies of OP3 and OP2.
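The selection logic running through steps 4 to 6 can be sketched as filtering detectable access nodes by what the agreements and policies permit, then picking the best remaining one. This is a minimal sketch under stated assumptions: the `AccessNode` structure, the (operator, type) allow-set and the signal-based tie-break are all illustrative inventions, not specified behaviour.

```python
# Illustrative sketch (all names are assumptions): choose among detectable
# access nodes according to the operators' sharing agreements and policies,
# as in steps 4-6 of the service flow above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccessNode:
    operator: str    # owning operator, e.g. "OP1", "OP3"
    node_type: str   # e.g. "macro", "uav", "satellite"
    signal: float    # measured signal quality (higher is better)


def select_access_node(nodes, allowed) -> Optional[AccessNode]:
    """Pick the strongest detectable node whose (operator, type) pair is
    permitted by the participating operator's agreements and policies."""
    candidates = [n for n in nodes if (n.operator, n.node_type) in allowed]
    if not candidates:
        return None
    return max(candidates, key=lambda n: n.signal)
```

For instance, while the agreement permits only OP1 macro and OP3 satellite access, a detectable but non-permitted UAV node would be filtered out even if its signal is strongest, which mirrors how the policy, rather than raw radio conditions alone, governs access in this use case.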
5.7.4 Post-conditions
The UE can get services with the displayed serving network shown as OP2’s network when moving in OP1’s Shared NG-RAN, across OP1’s and OP3’s shared RANs, and across OP3’s shared RAN and OP2’s non-shared RAN. OP2 ensures service continuity with proper mobility and access control to the shared access nodes of the same or different Hosting RAN Operators.
5.7.5 Existing Features partly or fully covering Use Case Functionality
The normative stage 1 requirements for network sharing have been introduced into TS 22.101 [2] and TS 22.261 [3] in previous releases.

TS 22.101 [2] specifies the following service principles about network sharing:

It shall be possible to support different mobility management rules, service capabilities and access rights as a function of the home PLMN of the subscribers. [clause 4.9]

The 3GPP System shall support a Shared RAN (E-UTRAN or NG-RAN) to cover a specific coverage area (e.g. from a complete network to a single cell, both outdoor and indoor). [clause 28.2.1]

Network sharing has been extended to different types of access networks, as stated in TS 22.261 [3]: A 5G satellite access network shall support NG-RAN sharing.

TS 22.261 [3] also provides general requirements on access, mobility management and service continuity between different access networks, as below:

Based on operator policy, the 5G system shall support steering a UE to select certain 3GPP access network(s). [clause 6.3.2]

The 5G system shall support inter- and/or intra- access technology mobility procedures within 5GS with minimum impact to the user experience (e.g., QoS, QoE). [clause 6.2.2]

The 5G system shall support service continuity between 5G terrestrial access network and 5G satellite access networks owned by the same operator or owned by different operators having an agreement. [clause 6.2.3]

In the MOCN configuration, where multiple shared access networks have direct connections to the Participating Operator’s core network, access control and mobility management are performed by the Participating Operator’s core network directly. However, in Indirect Network Sharing, besides the Participating Operator’s core network, the Hosting RAN Operator’s core network may be involved in access control and mobility management. Also, the UE may connect to the Participating Operator’s core network via the shared access nodes of different Hosting RAN Operators.
TS 22.261 [3] has defined requirements on multi-network connectivity and service delivery across operators as below:

To provide a better user experience for their subscribers with UEs capable of simultaneous network access, network operators could contemplate a variety of sharing business models and partnerships with other network and service providers to enable their subscribers to access all services via multiple networks simultaneously, and with minimum interruption when moving.

For a user with a single operator subscription, the use of multiple serving networks operated by different operators shall be under the control of the home operator.
5.7.6 Potential New Requirements needed to support the use case
[PR 5.7.6-001] Subject to the agreement between Hosting Operators and the Participating Operator, the 5G system shall support a mechanism to authorize a UE with a subscription to the Participating Operator to obtain services in a Shared NG-RAN of Hosting Operators using Indirect Network Sharing.

[PR 5.7.6-002] In case of Indirect Network Sharing, based on the Hosting and Participating Operators’ policies, the 5G system shall enable the Participating Operator to provide steering information to the Hosting Operator.

NOTE 1:	Such steering information for a given UE of the Participating Operator could be applied by the Hosting Operator to select amongst the Hosting Operator’s available shared access network(s).

[PR 5.7.6-003] In case of Indirect Network Sharing, the 5G system shall be able to apply different access control for different access networks of Shared NG-RANs based on the Hosting and Participating Operators’ agreement and policies.
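One way to picture the steering information of PR 5.7.6-002 is as a per-UE preference order that the hosting operator applies when ranking its available shared access networks. The representation below is a hypothetical sketch: 3GPP does not define the form of this steering information, and the network names and function are assumptions for illustration.

```python
# Hypothetical sketch of applying steering information (PR 5.7.6-002):
# the participating operator supplies a preference order, and the hosting
# operator ranks its available shared access networks accordingly.
# Representation and names are illustrative assumptions only.

def apply_steering(shared_networks, steering_order):
    """Order the hosting operator's available shared access networks by the
    participating operator's steering preference; networks not mentioned in
    the steering information keep their original relative order at the end."""
    rank = {net: i for i, net in enumerate(steering_order)}
    return sorted(shared_networks, key=lambda n: rank.get(n, len(rank)))
```

Keeping unmentioned networks at the end (rather than excluding them) reflects that steering is a preference the hosting operator "could apply", per NOTE 1, not a hard access restriction; access control itself is covered separately by PR 5.7.6-003.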
5.8 Use Case on Support of PWS in Shared NG-RAN in Indirect Interconnection with the Participating Operators
5.8.1 Description
Operators 1 and 2, both licensed to operate a 4G network and a 5G network, share the 5G part of the radio access network in some areas of the country. The Public Safety Agency sends the PWS message in a certain area covered by NG-RAN 1 and RAN 2 of Operator 1 and Operator 2 respectively, and by the shared NG-RAN area of Operator 1 and Operator 2.
5.8.2 Pre-conditions
•	Operator 1 owns NG-RAN 1, which is not shared.
•	Operator 2 owns RAN 2, which is not shared.
•	NG-RAN is a shared NG-RAN between Operator 1 and Operator 2.
•	Operator 1 is the hosting operator.
•	Operator 2 is a participating operator. Operator 1 and Operator 2 share NG-RAN in an indirect sharing method.
•	Operator 1 has a regulatory obligation to broadcast PWS messages to all UEs operating on NG-RAN 1 and NG-RAN.
•	Operator 2 has a regulatory obligation to broadcast PWS messages to all UEs operating on NG-RAN and RAN 2.
•	UEs operating in NG-RAN 1 or RAN 2 may hand over to NG-RAN during the period the PWS message is periodically rebroadcast.
•	UEs subscribed to Operator 1 operating in NG-RAN may hand over to NG-RAN 1 during the period the PWS message is periodically rebroadcast (and vice versa).
•	UEs subscribed to Operator 2 operating in NG-RAN may hand over to RAN 2 during the period the PWS message is periodically rebroadcast (and vice versa).

Figure 5.8.2-1: PWS Scenario in Indirect Network Sharing
5.8.3 Service flows
1) The Public Safety Agency (PSA) sends the PWS message in a certain Shared NG-RAN radio coverage area designated by the PSA where Operator 1 and Operator 2 share the RAN access.
2) Operator 1 and Operator 2 both broadcast the PWS message in the areas covered by their respective non-shared RANs for which they have regulatory obligations (NG-RAN 1 and RAN 2).
3) The UEs served by NG-RAN 1 and RAN 2 receive the warning message from their corresponding operating radio access.
4) The UEs served by the Shared NG-RAN of the hosting Operator 1 receive the public warning message created for the specific region which is covered by NG-RAN.
5.8.4 Post-conditions
For the non-shared accesses NG-RAN 1 and RAN 2, the public warning message can only be received by UEs subscribed to the operators owning the access. UEs served by the Shared NG-RAN receive the public warning message issued by the PSA in the radio coverage area of the Shared NG-RAN designated for the warning message. A UE displays only one copy of the PWS message, regardless of the RAN it is operating in or may hand over to. The UEs present new content of public warning messages after entering and leaving the shared network.
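The post-condition that a UE displays only one copy of a rebroadcast warning, even while moving between shared and non-shared RANs, can be sketched as duplicate suppression keyed on the message's identity. PWS warning messages do carry a message identifier and serial number for this purpose; the cache below is a simplified illustration of UE behaviour, not the specified procedure.

```python
# Sketch of UE-side duplicate suppression: display only one copy of a
# periodically rebroadcast PWS message while the UE moves between shared
# and non-shared RANs. PWS messages carry a message identifier and serial
# number; this toy cache keys on that pair (a simplification of the real
# UE behaviour, for illustration only).
class PwsDuplicateFilter:
    def __init__(self):
        self._seen = set()

    def should_display(self, message_id: int, serial_number: int) -> bool:
        """True only the first time a given (identifier, serial) is received."""
        key = (message_id, serial_number)
        if key in self._seen:
            return False  # rebroadcast of an already-displayed warning
        self._seen.add(key)
        return True
```

A rebroadcast of the same warning received after handover carries the same identifier pair and is suppressed, while a new warning (new serial number) is displayed, which matches the post-conditions above.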
5.8.5 Existing requirements to PWS Support in Shared Networks
See TS 22.101 [2]:

28.2.5 PWS support of Shared E-UTRAN/NG-RAN

A Participating Operator potentially has regulatory obligations to initiate the broadcast of PWS messages regardless of E-UTRAN/NG-RAN sharing. The following requirements apply: The shared E-UTRAN/NG-RAN shall be able to broadcast PWS messages originated from the core networks of all Participating Operators.

NOTE 1:	The Rel-11 design requires a shared PWS core. However, some regulatory obligations require a solution in which no common PWS core network entity is involved.

28.3.5 PWS support of Shared GERAN or UTRAN

A Participating Operator potentially has regulatory obligations to initiate the broadcast of PWS messages regardless of GERAN or UTRAN sharing. The following requirements apply: The shared GERAN or UTRAN shall be able to broadcast PWS.

NOTE 2:	The Hosting RAN Operator is responsible for the delivery of PWS messages to the UEs.

Identifying, changing, and adding appropriate functionality in the network will lead to better shared-network operation.
5.8.6 Potential New Requirements needed to support the Use Case
[PR 5.8.6-001] Subject to regulatory requirements and mutual agreement between the Participating Operators and the Hosting Operator, the 5G system shall be able to support PWS when connectivity is provided via Indirect Network Sharing.

[PR 5.8.6-002] Subject to regulatory requirements and mutual agreement between the Participating Operators and the Hosting Operator, a Shared NG-RAN in Indirect Network Sharing shall be able to broadcast PWS messages originated from the core network of the Hosting Operator.
6 Other considerations
6.1 Considerations on security
Different from the MOCN configuration, which routes the UE to the core network of a participating operator directly through the Shared RAN of the hosting operator, Indirect Network Sharing needs to involve the core network of the Hosting NG-RAN Operator in the signalling exchange. The interworking between the core networks of Hosting NG-RAN Operators and the participating operator may expose more information about the subscribers, the operators’ policies and the operations of the respective core networks, especially information irrelevant to Indirect Network Sharing. Therefore, additional security aspects relating to user privacy and operator policy should be taken into account for the Indirect Network Sharing configuration.
7 Consolidated requirements
7.1 Introduction
Potential requirements on Indirect Network Sharing have been identified in clause 5 from the use cases under the following aspects:
- General
- Mobility
- Network access control
- Regulatory services
- Charging

These requirements have been introduced specifically to support the newly defined network sharing scenario, where an NG-RAN is shared among multiple operators without necessarily assuming a direct connection between the shared NG-RAN and the Participating NG-RAN Operator’s core network.

NOTE:	These enhanced requirements may go beyond the existing RAN sharing requirements in Rel-13.
7.2 Consolidated Potential Requirements
7.2.1 Description
Indirect Network Sharing is another type of NG-RAN sharing that facilitates operators extending the availability of 5G services to more coverage areas for their subscribers. It allows NG-RAN resources to be used by two or more operators simultaneously while minimizing the impact on the network and associated services. The deployment of Indirect Network Sharing is expected to be transparent to users when they obtain network services in the subscribed PLMN, as with other NG-RAN sharing configurations (e.g., MOCN). The involvement of the core network of the Hosting Operator, e.g., for signalling exchange between the users and the core network of the Participating Operator, could cause exposure of subscriber information to the hosting network. Thus, extra scrutiny is expected to avoid sharing information not needed for the Indirect Network Sharing operation between the hosting operator and the participating operator.
7.2.2 General
Table 7.2.2-1: General requirements

CPR #	Consolidated Potential Requirement	Original PR #	Comment

CPR 7.2.2-001	The 5G system shall be able to support Indirect Network Sharing between the Shared NG-RAN and one or more Participating NG-RAN Operators’ core networks, by means of the connection being routed through the Hosting NG-RAN Operator’s core network.	[PR 5.1.6-001]	Align with the definitions.

CPR 7.2.2-002	In case of Indirect Network Sharing, the 5G system shall be able to support means for a Participating NG-RAN Operator to provide its information on operator/network name to UEs, e.g., for display to the user.	[PR 5.1.6-002]	Align with the definitions. Updated wording.

CPR 7.2.2-003	In case of Indirect Network Sharing, UEs shall be able to access their subscribed PLMN services when accessing a Shared NG-RAN, where the subscribed PLMN is one of the Participating NG-RAN Operators.	[PR 5.2.6-004]	Align with the definitions. Updated wording.

CPR 7.2.2-004	Subject to the agreement between hosting operators and the participating operator, the 5G system shall support a mechanism to enable a UE to obtain its subscribed services, including Hosted Services, of the participating operator via a Shared NG-RAN using Indirect Network Sharing.
NOTE:	The requirement assumes no impact to the UE.	[PR 5.5.6-001] [PR 5.7.6-001]	Align with the definitions. PRs merged.
7.2.3 Mobility
Table 7.2.3-1: Consolidated Requirements on Mobility

CPR #	Consolidated Potential Requirement	Original PR #	Comment

CPR 7.2.3-001	In case of the Indirect Network Sharing scenario and subject to hosting and participating operators’ policies, the 5G system shall support service continuity and minimize the impact on the user experience and service interruptions between different Shared NG-RANs and/or between a Shared NG-RAN and a non-Shared NG-RAN network.	[PR 5.2.6-002] [PR 5.3.6-001] [PR 5.2.6-003]	PRs merged.

CPR 7.2.3-002	In case of Indirect Network Sharing, the 5G system shall enable the Shared NG-RAN of a hosting operator to provide services for inbound roaming users.
NOTE:	Inbound roaming users may not have a roaming agreement with the hosting operator but only with a participating operator.	[PR 5.4.6-001]	PR modified.