22.850
5.2.2.1.2 Activities summary
The three application AI/ML operations listed in clause 5.2.2.1.1 that the 5G system can support are specified in clause 6.40.1 of TS 22.261 [6] as follows:
- AI/ML operation splitting between AI/ML endpoints: The AI/ML operation/model is split into multiple parts according to the current task and environment. The intention is to offload the computation-intensive and energy-intensive parts to network endpoints, while leaving the privacy-sensitive and delay-sensitive parts at the end device. The device executes the operation/model up to a specific part/layer and then sends the intermediate data to the network endpoint. The network endpoint executes the remaining parts/layers and feeds the inference results back to the device.
- AI/ML model/data distribution and sharing: Multi-functional mobile terminals might need to switch the AI/ML model in response to task and environment variations. Adaptive model selection requires that the candidate models are available to the mobile device. However, given that AI/ML models are becoming increasingly diverse and that UE storage resources are limited, it may be decided not to pre-load all candidate AI/ML models on-board. Online model distribution (i.e. new model downloading) is then needed, whereby an AI/ML model can be distributed from a network endpoint to the devices when they need it to adapt to changed AI/ML tasks and environments. For this purpose, the model performance at the UE needs to be monitored constantly.
- Distributed/Federated Learning over the 5G system: The cloud server trains a global model by aggregating local models partially trained by each end device. Within each training iteration, a UE performs the training based on the model downloaded from the AI server, using its local training data. The UE then reports the interim training results to the cloud server via 5G UL channels. The server aggregates the interim training results from the UEs and updates the global model. The updated global model is then distributed back to the UEs, which can perform the training for the next iteration.
It is worth emphasizing that the above descriptions refer to AI/ML operations over the application layer. The service requirements and performance requirements for AI/ML model transfer over the application layer in 5GS with direct network connection are specified in clause 6.40.2.1 and clause 7.10.1 of TS 22.261 [6], respectively.
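The iterative loop described above (download of the global model, local training at each UE, upload of interim results, server-side aggregation and redistribution) can be illustrated with a minimal federated-averaging sketch. The model here is a single scalar weight and the "UEs" are plain lists of samples; all names and the 1-D least-squares task are illustrative, since the actual application-layer FL logic is out of 3GPP scope.

```python
# Minimal sketch of federated averaging as described above: each UE trains
# locally on the downloaded global model, reports its interim result over
# (notionally) a 5G UL channel, and the server aggregates the results.
# All names and the toy model y = w*x are illustrative, not 3GPP-defined.

def local_training(global_model, local_data, lr=0.1):
    """One gradient step of a 1-D least-squares model y = w*x at a UE."""
    w = global_model
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad  # interim (partially trained) local model

def aggregate(local_models):
    """Server-side aggregation: plain average of the interim results."""
    return sum(local_models) / len(local_models)

def training_round(global_model, ue_datasets):
    interim = [local_training(global_model, d) for d in ue_datasets]
    return aggregate(interim)  # updated global model, redistributed to UEs

ue_datasets = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # each UE holds y = 2x
w = 0.0
for _ in range(50):
    w = training_round(w, ue_datasets)
# w converges toward the true slope 2.0
```

The averaging step stands in for the server's aggregation of interim training results; a real deployment would exchange full model tensors rather than a scalar.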
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.2 Rel-19 SA WG1 SID - AI/ML Model Transfer Phase 2 (FS_AIML_MT_Ph2)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.2.1 Description
The objective of this study is to explore new use cases and potential service and performance requirements to support efficient AI/ML operations using direct device connections. This includes:
- Distributed AI training and inference based on direct device connections, such as traffic KPIs and various QoS and functional requirements for sidelink transmission.
- Considerations for charging and security aspects.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.2.2 Activities summary
In this study, TR 22.876 [21] studied the use cases, with potential functional and performance requirements, to support efficient AI/ML operations over the application layer using direct device connection for various applications, e.g. auto-driving, robot remote control, video recognition, etc. The agreed activities which progressed to the normative phase are described in clause 5.2.2.3.2.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.3 Rel-19 SA WG1 WID - AI/ML Model Transfer Phase 2 (AIML_MT_Ph2)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.3.1 Description
The objective of this work item is to specify KPIs and functional requirements for the 5GS to support AI/ML data transfer by leveraging direct device connection under 5G network control. These objectives were derived from the outcome of the Rel-19 study in SA WG1 on how the 5GS supports the transmission of AI/ML-based services over the application layer. The study addressed use cases and potential performance requirements for 5G system support of application layer Artificial Intelligence (AI)/Machine Learning (ML) model distribution and transfer (download, upload, updates, etc.) and identified traffic characteristics of AI/ML model distribution, transfer and training for various applications, e.g. video/speech recognition, robot control, automotive and other verticals.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.3.2 Activities summary
The service requirements and performance requirements for AI/ML model transfer over the application layer in 5GS with direct device connection are specified in clause 6.40.2.2 and in clause 7.10.2 of TS 22.261 [6], respectively.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.4 Rel-18 SA WG2 WID - Enablers for Network Automation for 5G - phase 3 (eNA_Ph3)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.4.1 Description
The objective of this work item is to further enhance the NWDAF, based on what was specified in the previous releases, to allow the 5GS to support network automation. This work item focuses on architecture enhancements, new scenarios and the necessary inputs and outputs of the NWDAF, based on the conclusions of the Rel-18 study. The work focuses on 10 key aspects:
- Improve correctness of NWDAF analytics.
- NWDAF-assisted application detection.
- Data and analytics exchange in roaming case.
- Enhancements on data collection and storage.
- Enhancements on trained ML model sharing.
- NWDAF-assisted URSP.
- Enhancements on QoS Sustainability analytics.
- Supporting Federated Learning in 5GC.
- Enhancement of NWDAF with finer granularity of location information.
- Interactions with MDAS/MDAF.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.4.2 Activities summary
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.4.2.1 AIML related LCM activities
Data collection/storage/exposure
Analysis of data collection activities as part of the eNA and eNA_Ph2 work:
- Data collection in TS 23.288 [8] refers to data collected by the NWDAF and the DCCF. The data are collected for the purpose of analytics generation and training of ML models. The NWDAF/DCCF/MFAF collects data from NFs/AFs, OAM and the UE application.
- Collected data and analytics output information may be stored in the ADRF.
As depicted in Figure 4.2.0-1 of TS 23.288 [8], the 5G System architecture allows the NWDAF to collect data from any 5GC NF. The NWDAF is allowed to collect data from any 5GC NF directly and to retrieve management data from OAM by invoking OAM services as defined in clause 5.11 of TS 23.288 [8]. As defined in clause 4.2.0 of TS 23.288 [8], the NWDAF is also allowed to collect data from any 5GC NF using a DCCF or MFAF, as defined in Figure 5A.3.2-1 of TS 23.288 [8], and to retrieve management data from OAM by invoking OAM services as defined in clause 5.11 of TR 28.871 [73]. As defined in clause 5A of TS 23.288 [8], each Event Notification received from a Data Source NF is sent to the DCCF, which propagates it to all Data Consumers / Notification Endpoints specified by the Data Consumers or determined by the DCCF. As defined in clause 5B of TS 23.288 [8], the ADRF offers services that enable a consumer to store, retrieve and delete data, analytics and ML models. ML model(s) may be stored in the ADRF by a consumer sending the ADRF a storage request containing the ML model or the ML model address to be stored.
Analysis of data collection activities as part of the eNA_Ph3 work:
- Exposure of input data for analytics is allowed from VPLMN to HPLMN and vice versa.
- Data processing enhancements for data stored in the ADRF.
AI/ML model training
Analysis of AI/ML model training activities as part of the eNA and eNA_Ph2 work:
- The MTLF trains an ML model for an analytics output requested by an AnLF. The trained ML model is assigned an ML Model Identifier and may be stored in the ADRF.
Analysis of AI/ML model training activities as part of the eNA_Ph3 work:
- Trained ML model sharing between different NWDAF vendors by means of the defined ML Model Interoperability Indication/Information.
- Support of model training using Horizontal Federated Learning.
AI/ML model inference
Analysis of AI/ML model inference activities as part of the eNA_Ph3 work:
- The AnLF exposes analytics information to consumers (e.g. NFs). Analytics information may be stored in an ADRF.
- Analytics from multiple NWDAFs can be aggregated.
- Analytics content can be transferred between NWDAFs, including roaming scenarios.
Performance evaluation and accuracy monitoring
Analysis of performance evaluation and accuracy monitoring activities as part of the eNA_Ph3 work:
- NWDAF (either AnLF or MTLF) supporting performance monitoring of a trained ML model.
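The performance-monitoring activity above amounts to comparing a trained model's inference output against ground-truth data and flagging degradation. A minimal sketch of that check follows; the accuracy metric, the 0.8 threshold and all names are illustrative assumptions, not values taken from TS 23.288.

```python
# Hedged sketch of ML-model performance monitoring as summarized above:
# an NWDAF (AnLF or MTLF) compares inference output against ground truth
# and flags the model for retraining when accuracy degrades below a
# threshold. The metric and threshold are illustrative, not 3GPP-specified.

def monitor_model(predictions, ground_truth, accuracy_threshold=0.8):
    """Return (accuracy, retrain_needed) for a trained ML model."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    return accuracy, accuracy < accuracy_threshold

# One misprediction out of four samples: accuracy 0.75, below the threshold.
acc, retrain = monitor_model([1, 0, 1, 1], [1, 0, 0, 1])
```

In practice the MTLF could use such a signal to trigger re-training of the stored model; here the decision is simply returned to the caller.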
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.4.2.2 AI/ML functional entities
As part of the eNA, eNA_Ph2 and eNA_Ph3 work, the following functional entities have been defined:
- NWDAF (Network Data Analytics Function) is defined in TS 23.288 [8]; its main function is to generate analytics (statistics and/or predictions) for one or more network events. The NWDAF comprises two logical functions:
  - AnLF (Analytics Logical Function): derives and exposes analytics information (statistics or predictions).
  - MTLF (Model Training Logical Function): trains Machine Learning (ML) models and exposes new training services (e.g. providing a trained ML model).
- DCCF (Data Collection & Coordination Function) is defined in TS 23.288 [8]; its main functionality is analytics collection from NWDAFs and data collection from multiple NF(s), AFs and OAM.
- MFAF (Messaging Framework Adaptor NF) is defined in TS 23.288 [8] and is part of the DCCF architecture. The MFAF offers 3GPP-defined services that allow the 5GS to interact with a Messaging Framework.
- ADRF (Analytics Data Repository Function) is defined in TS 23.288 [8]; its main functionality is storing and retrieving collected data and analytics.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.5 Rel-18 SA WG2 WID - System Support for AI/ML-based Services (AIMLsys)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.5.1 Description
This work item implements the conclusions of the Rel-18 study on the 5GS architectural and functional extensions enabling the 5GS to assist Application AI/ML operations. The normative text is defined based on the agreed conclusions on 6 key issues, ensuring consistency with other 5GS features. The agreed conclusions focus on the following aspects:
- Monitoring of network resource utilization to support Application AI/ML operations.
- Exposure of 5GC information to authorized 3rd parties for Application AI/ML operations.
- Enhancement of external parameter provisioning in 5GC to assist Application AI/ML operations.
- Enhancement in 5GC to enable Application AI/ML traffic transport.
- Enhancement of QoS and policy control to support Application AI/ML data transport over the 5GS.
- 5GS assistance to federated learning operation.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.5.2 Activities summary
This work item specifies a list of principles that apply when the 5GS assists AI/ML operation at the application layer, as specified in clause 5.46 of TS 23.501 [22], namely:
- An AF requesting 5GS assistance to AI/ML operations in the application layer shall be authorized by the 5GC using the existing mechanisms.
- Application AI/ML decisions and their internal operation logic reside at the AF and the UE application client and are out of scope of 3GPP.
- Based on application logic, it is the application's decision whether to request assistance from the 5GC, e.g. for the purpose of selecting the Member UEs that participate in a certain AI/ML operation.
The activities of this work item are limited to providing assistance to AI/ML-based applications when the participating UEs are not roaming and the AI/ML operations in the application layer are conducted within a single slice. Policy and charging control as defined in TS 23.503 [24] are assumed to be used for traffic related to application AI/ML operations. The overall objective of this work item is to provide assistance by the 5GC to AI/ML operations in the application layer, which are described in clause 5.2.2.1 and specified in clause 6.40 of TS 22.261 [6]. A brief description of the capabilities specified in this work item can be found below, with further details provided in clause 5.46 of TS 23.501 [22], clause 11.1 of TR 21.918 [2] and the references therein:
- Planned Data Transfer with QoS: this capability enables the AF to negotiate, via the support of the NEF, a variable time window for a planned AI/ML operation, e.g. an application data transfer with specific QoS requirements and operational conditions.
- Enhanced external parameter provisioning: this capability enables an AF hosting an AI/ML-based application to provision enhanced Expected UE Behaviour parameters and/or Application-Specific Expected UE Behaviour parameter(s) to the 5GC by including corresponding confidence and/or accuracy levels with the expected parameters, which the UDM can check against a threshold.
- Member UE selection assistance functionality: this capability, provided by the NEF, is used to assist the AF in selecting member UE(s) for AI/ML application operations (e.g. Federated Learning) according to the AF's request, which includes a list of target member UEs and a set of filtering criteria.
- Multi-member AF session with required QoS: this capability enables the NEF to map a request for a Multi-member AF session with required QoS to individual requests for an AF session with required QoS per UE address, and to interact with each of the UEs' serving PCFs on a per-AF-session basis.
- End-to-end data volume transfer time analytics: these analytics provide the consumer (e.g. AF, NEF) with statistics, predictions or both referring to the time delay for completing the transmission of a specific data volume from UE to AF, or from AF to UE. The data volume may be the expected or observed data volume from UE to AF or from AF to UE.
- Enhanced NEF monitoring events: new NEF monitoring events are specified that are relevant to the operation of AI/ML-based applications, namely session inactivity time, traffic volume exchanged between the UE and the AF, and UL/DL consolidated data rate, i.e. the aggregated data rate across all traffic flows corresponding to the list of UE addresses of the Multi-member AF session with required QoS.
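The Member UE selection assistance capability boils down to filtering the AF-supplied list of target UEs against a set of criteria. A minimal sketch follows; the specific criteria fields (battery level, achievable uplink rate) are hypothetical examples chosen for illustration, not the normative filtering parameter set.

```python
# Illustrative sketch of Member UE selection assistance: the NEF filters
# the AF-supplied list of target member UEs against filtering criteria.
# The criteria fields used here (battery_pct, ul_rate_mbps) are
# hypothetical examples, not the normative parameters.

def select_member_ues(target_ues, criteria):
    """Return the subset of candidate UEs satisfying every criterion."""
    return [ue for ue in target_ues
            if all(ue.get(field, 0) >= minimum
                   for field, minimum in criteria.items())]

target_ues = [
    {"ue_id": "ue-1", "battery_pct": 90, "ul_rate_mbps": 50},
    {"ue_id": "ue-2", "battery_pct": 20, "ul_rate_mbps": 80},  # low battery
    {"ue_id": "ue-3", "battery_pct": 75, "ul_rate_mbps": 10},  # slow uplink
]
criteria = {"battery_pct": 50, "ul_rate_mbps": 25}
selected = select_member_ues(target_ues, criteria)  # only ue-1 qualifies
```

A real NEF would evaluate criteria it obtains from 5GC NFs (and possibly analytics) per UE; the dictionary lookup here simply stands in for that evaluation.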
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.5.2.1 AI/ML related LCM activities
No AI/ML related LCM activities or functional entities were specified as part of this work. Instead, the activities summarized in clause 5.2.2.5.2 specify 5GS support for AI/ML related LCM activities (e.g. AI/ML model training and inference) assumed to be conducted at the application layer.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.6 Rel-19 SA WG2 SID - Core Network Enhanced Support for Artificial Intelligence (AI)/Machine Learning (ML) (FS_AIML_CN)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.6.1 Description
The aim of this study is to investigate and identify potential architectural and system-level enhancements to support AI/ML. Specifically, the objectives include:
- AI/ML cross-domain coordination aspects: investigate enhancements to support AI-enabled RAN based on the conclusions of the RAN study in TR 38.843 [3]. This task will discuss whether and how to support cross-domain (i.e. UE, RAN, 5GC, OAM and AF) collaborative AI/ML mechanisms for the aspects described below:
  - Enhancements to LCS for AI/ML-based positioning: examine whether and how to consider enhancements to LCS to support AI/ML-based positioning.
  - Collaborative AI/ML operations for Vertical Federated Learning (VFL): determine potential enhancements needed to enable the 5G system to assist in collaborative AI/ML operations involving 5GC/NWDAF and/or AF for Vertical Federated Learning (VFL). This work will be based solely on, and limited to, the scope of justified use cases.
- Enhancements to support NWDAF-assisted policy control and to address network abnormal behaviour:
  - Investigate additional support needed to enhance 5GC NF operations (i.e. policy control and QoS) assisted by the NWDAF. This task will first identify specific use cases to define the appropriate scope. It will analyse the impacts on the NWDAF (e.g. the need to understand specific NF functionality) and the compatibility of new solutions with existing analytics, to determine the necessity and benefits of new solutions.
  - Study the prediction, detection, prevention and mitigation of network abnormal behaviours, such as signalling storms, with the assistance of the NWDAF.
NOTE: The outcome of this study was used to support the AIML_CN Rel-19 SA WG2 WID, see clause 5.2.2.7.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.7 Rel-19 SA WG2 WID - Core Network Enhanced Support for Artificial Intelligence (AI)/Machine Learning (ML) (AIML_CN)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.7.1 Description
The objective of this work item is to specify the following enhancements to the 5GS, as per the conclusions reached within the Rel-19 study, for the following aspects:
- Enhancements to LCS to support direct AI/ML-based positioning:
  - LMF enhancements: the LMF will be enhanced to perform location calculations based on an ML model. The triggers for data collection and model training within the LMF will be implementation-specific.
  - MTLF and LMF enhancements: both the MTLF and the LMF will be enhanced to facilitate ML model training for AI/ML-based positioning.
  - Procedure development: related procedures for data collection will be developed in coordination with the RAN WGs.
- 5GC support for Vertical Federated Learning:
  - 5GC enhancements: the 5GC will be enhanced to support vertical federated learning (VFL), a technique that does not involve exchanging or sharing local datasets or ML models, in the following scenarios:
    - VFL among NWDAFs within a single PLMN.
    - VFL between NWDAF(s) within a single PLMN and AF(s).
- NWDAF-assisted policy control and QoS enhancement:
  - Assistance information: based on PCF requests, the NWDAF may provide assistance information to the PCF to aid in the determination and modification of QoS parameters.
- NWDAF enhancements to support network abnormal behaviour mitigation and prevention:
  - Signalling storm mitigation: the NWDAF will support signalling storm mitigation and prevention by providing analytics related to the detection and prediction of signalling storms caused by massive signalling from UEs and/or NFs.
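The defining property of VFL noted above — participants hold different features of the same samples and never exchange raw datasets or models — can be sketched with a toy two-party linear model. Only partial scores and a per-sample error signal cross the party boundary; each party updates its own weight locally. The parties, features and labels below are illustrative assumptions, not 3GPP-defined entities.

```python
# Hedged sketch of vertical federated learning (VFL): two parties (say an
# NWDAF and an AF) hold different features of the same samples. They never
# exchange raw data or models -- only partial scores and an error signal.

def vfl_round(w_a, w_b, feats_a, feats_b, labels, lr=0.1):
    # Each party computes a partial score on its own features only.
    partial_a = [w_a * x for x in feats_a]
    partial_b = [w_b * x for x in feats_b]
    # The label-holding party combines partial scores into predictions and
    # shares only the per-sample error signal with the other party.
    errors = [pa + pb - y for pa, pb, y in zip(partial_a, partial_b, labels)]
    n = len(labels)
    # Each party updates its own weight locally from the shared errors.
    w_a -= lr * sum(e * x for e, x in zip(errors, feats_a)) / n
    w_b -= lr * sum(e * x for e, x in zip(errors, feats_b)) / n
    return w_a, w_b

feats_a = [1.0, 2.0, 3.0]  # features held by party A (e.g. an NWDAF)
feats_b = [2.0, 1.0, 0.0]  # features of the same samples held by party B
labels = [5.0, 4.0, 3.0]   # consistent with y = 1*x_a + 2*x_b
w_a = w_b = 0.0
for _ in range(300):
    w_a, w_b = vfl_round(w_a, w_b, feats_a, feats_b, labels)
# w_a, w_b converge toward the true per-party weights (1.0, 2.0)
```

The gradient exchange here corresponds to the "intermediate results" of VFL training; production systems would additionally encrypt or mask the exchanged values.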
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.7.2 Activities summary
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.7.2.1 AIML related LCM activities
As part of the AIML_CN work the following enhancements are supported:
Data collection/exposure
- Data collection for direct AI/ML positioning: data is collected to train an ML model for LMF-based AI/ML positioning and to support inference.
AI/ML model training
- ML model training for LMF-side direct AI/ML positioning: the LMF or the MTLF supports model training for LMF-side direct AI/ML positioning.
- Collaborative ML model training using Vertical Federated Learning: Vertical Federated Learning is supported between NWDAFs, or cross-domain between the NWDAF and Application Functions.
AI/ML model inference
- The LMF supports inference using a trained ML model for direct AI/ML positioning, either provisioned by an MTLF or trained locally at the LMF.
Performance evaluation and accuracy monitoring
- The LMF or the MTLF evaluates the performance of a trained ML model by comparing the inference output against ground truth information.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.8 Rel-18 SA WG3 WID - Security aspects of enablers for Network Automation for 5G - phase 3 (eNA_SEC_PH3)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.8.1 Description
The main objective of this work is to produce normative specifications based on the conclusions of the Rel-18 study. More specifically, the following objectives are expected to be specified:
- Protection of data and analytics exchange in the roaming case.
- Security for AI/ML model storage and sharing.
- Authorization of the selection of participant NWDAF instances in the Federated Learning group.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.8.2 Activities summary
As part of this work, enhancements were specified for:
- The protection of data and analytics exchange in the roaming case, including authorization and anonymization of data/analytics. Authorization at the data and analytics level is enforced by the roaming entry NWDAF producer, which is responsible for controlling the amount of exposed data/analytics and for abstracting or hiding internal network aspects in the exposed data/analytics.
- The authorization of the selection of participant NWDAF instances in the Federated Learning (FL) group, using token-based authorization.
- The procedure for secure and authorized AI/ML model sharing between different vendors and for AI/ML model storage.
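The token-based authorization mentioned above can be illustrated with a minimal sketch: an authorization server issues a signed token binding an NWDAF instance to an FL group, and admission is granted only to holders of a valid token. The HMAC construction, the claim format and all identifiers are illustrative assumptions; the normative mechanism is OAuth2-style service authorization, not this exact scheme.

```python
# Illustrative sketch of token-based authorization for FL-group
# participation: a token binds an NWDAF instance ID to an FL group ID and
# is signed with a key known to the authorization server. Claim layout,
# key handling and IDs are assumptions, not the normative 3GPP design.
import hashlib
import hmac

SECRET = b"shared-authz-server-key"  # placeholder key for the sketch

def issue_token(nwdaf_instance_id, fl_group_id):
    """Authorization server side: sign the (instance, group) claims."""
    claims = f"{nwdaf_instance_id}:{fl_group_id}".encode()
    sig = hmac.new(SECRET, claims, hashlib.sha256).hexdigest()
    return claims.decode(), sig

def admit_to_fl_group(nwdaf_instance_id, fl_group_id, token):
    """FL server side: admit only if the token matches this instance/group."""
    claims, sig = token
    expected_claims, expected_sig = issue_token(nwdaf_instance_id, fl_group_id)
    return claims == expected_claims and hmac.compare_digest(sig, expected_sig)

token = issue_token("nwdaf-17", "fl-group-A")
ok = admit_to_fl_group("nwdaf-17", "fl-group-A", token)   # authorized
bad = admit_to_fl_group("nwdaf-99", "fl-group-A", token)  # wrong instance
```

Using `hmac.compare_digest` avoids timing side channels when comparing signatures; a real deployment would also scope tokens by expiry and audience.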
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.9 Rel-19 SA WG3 SID - Security aspects of Core Network Enhanced Support for AIML (FS_AIML_CN_SEC)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.9.1 Description
The objectives of this study are the following:
- Security aspects of enhancements to LCS: study security aspects of enhancements to LCS to support AI/ML-based positioning, considering the conclusions in TR 38.843 [3] and TR 23.700-84 [7].
- Security aspects of cross-domain Vertical Federated Learning (VFL):
  - Authorization of VFL group members: examine the authorization of members of the VFL group.
  - Security aspects of enhancements to the SA WG2 architecture: investigate security aspects of enhancements to the SA WG2 architecture to support VFL.
NOTE: The outcome of this study was used to support the AIML_CN_SEC Rel-19 SA WG3 WID, see clause 5.2.2.16.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.10 Rel-19 SA WG4 SID - Artificial Intelligence (AI) and Machine Learning (ML) for Media (FS_AI4Media)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.10.1 Description
The primary objective of this study item is to identify relevant interoperability requirements and implementation constraints of AI/ML in 5G media services. The specific objectives include:
- Use cases for media-based AI/ML scenarios: list and describe the use cases for media-based AI/ML scenarios, based on those defined in TR 22.874 [5].
- Media service architecture and service flows: describe the media service architecture and relevant service flows for the scenarios. Identify the impacts on the architecture for each use case, including any potential gaps with existing 5G media service architectures. Also describe the model operation configurations for each use case, including split AI/ML operations, and identify where certain AI/ML operations occur.
- Data formats and protocols: identify and document the available data formats and suitable protocols for the exchange of the different data components of various AI/ML models, such as model data, metadata, media data and intermediate data necessary for such model operation configurations. Investigate the data traffic characteristics of these data components for delivery over the 5G system, including any needs and potential for data rate reduction.
- Key Performance Indicators (KPIs): identify and study key performance indicators for such scenarios, based on the initial considerations in TS 22.261 [6]. Emphasize the use cases, model operation configurations and data components identified in the earlier objectives, focusing on objective performance metrics considering the identified KPIs.
- Normative work and collaboration: identify potential areas for normative work for the next phase. Communicate and align with SA WG2 and other potential 3GPP working groups on relevant aspects related to the study.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.11 Rel-18 SA WG5 WID - AI/ML management (AIML_MGT)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.11.1 Description
The objective of this work is to specify the AI/ML management capabilities, including use cases, requirements and solutions for each phase of the AI/ML operational workflow, for managing the AI/ML capabilities in the 5GS (i.e. management and orchestration, 5GC and NG-RAN), including:
- Management capabilities for the ML training phase, which include control of producer-initiated ML training, data management for ML training, performance evaluation for ML training, ML entity validation, ML context management, ML entity capability discovery and ML entity testing.
- Management capabilities for the ML deployment phase, including management of ML entity loading.
- Management capabilities for the AI/ML inference phase.
A further objective is to describe the deployment scenarios of the AI/ML management capabilities, with consideration of alignment with other relevant 3GPP WGs (e.g. RAN WG3, SA WG2) and ETSI ISG ZSM.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.11.2 Activities summary
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.11.2.1 ML model life cycle management (LCM)
The Rel-18 specification in TS 28.105 [9] addressed the AI/ML LCM management capabilities, including a wide range of use cases, corresponding requirements (stage 1) and solutions (stage 2 NRMs and stage 3 OpenAPIs) for the ML model, covering the ML model training (which also includes validation), ML model testing, AI/ML inference emulation, ML model deployment and AI/ML inference steps of the lifecycle. The specification defined the operational workflow shown in Figure 5.2.2.11.2.1-1 below, highlighting the main steps of an ML model lifecycle.
Figure 5.2.2.11.2.1-1: ML model lifecycle
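The lifecycle steps named above form an ordered workflow. A tiny state machine can make that ordering concrete; the step names mirror the clause, but the strictly linear progression is a simplification (TS 28.105 also allows iteration, e.g. re-training after testing), so treat this as an illustrative sketch.

```python
# Illustrative sketch of the ML model operational workflow summarized
# above: training (incl. validation) -> testing -> inference emulation ->
# deployment -> inference. The linear ordering is a simplification; the
# actual workflow in TS 28.105 permits loops back to earlier steps.

LIFECYCLE = ["training", "testing", "emulation", "deployment", "inference"]

class MLModelLifecycle:
    def __init__(self):
        self.completed = []

    def advance(self, step):
        """Accept the next lifecycle step only in the defined order."""
        expected = LIFECYCLE[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step {expected!r}, got {step!r}")
        self.completed.append(step)
        return self.completed

lcm = MLModelLifecycle()
for step in LIFECYCLE:
    lcm.advance(step)
# lcm.completed now lists all five steps in order
```

Attempting to advance out of order (e.g. `deployment` before `testing`) raises a `ValueError`, which is the sketch's stand-in for the management service rejecting an invalid lifecycle operation.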
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.11.2.2 ML model lifecycle management capabilities
Each step in the ML model lifecycle defined in TS 28.105 [9] (see clause 6.1), i.e. ML model training, ML model testing, AI/ML emulation, ML model deployment and AI/ML inference, corresponds to a number of dedicated management capabilities. The specified capabilities are developed based on corresponding use cases and requirements.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.11.2.3 AI/ML functionalities management scenarios (relation with managed AI/ML features)
The Rel-18 specification TS 28.105 [9] (see clause 4a.2) also documented AI/ML functionalities management scenarios in relation to managed AI/ML features, describing the possible locations of the ML training function and the AI/ML inference function across the various 3GPP system domains.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.12 Rel-19 SA WG5 SID - AI/ML management - phase 2 (FS_AIML_MGT_Ph2)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.12.1 Description
The objectives of this study item include:
- Continuation of AI/ML studies: continue the study of AI/ML emulation, AI/ML inference coordination and ML knowledge transfer left over from Rel-18.
- Management aspects of AI/ML functionalities defined by other 3GPP WGs:
  - AI/ML model transfer/delivery in RAN: study the management aspects (LCM, CM and PM) of AI/ML model transfer in RAN.
  - 5GS support for AI/ML-based services: investigate the management aspects of 5GS support for AI/ML-based services, as defined in SA WG2.
- Management aspects of AI/ML functionalities defined by SA WG5:
  - Management Data Analytics (MDA): study the management aspects (LCM, CM and PM) of AI/ML functionalities defined by SA WG5, including MDA Phase 3.
  - AI/ML management and operation capabilities: investigate the AI/ML management and operation capabilities to support the different types of AI/ML technologies needed for AI/ML in the 5GS, such as Federated Learning, Reinforcement Learning, online and offline training, Distributed Learning and Generative AI.
- Sustainability aspects of AI/ML:
  - Energy consumption/efficiency impacts: evaluate the energy consumption and efficiency impacts associated with AI/ML solutions for all operational phases (training, emulation, deployment, inference).
- Trustworthiness aspects related to AI/ML functionalities in the 5GS:
  - Concept of trustworthiness: further study the concept of trustworthiness for AI/ML in the context of OAM.
  - Data for trustworthiness indicators: identify and analyse data (e.g. measurements, events) to support the calculation of trustworthiness indicators.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.12.2 Activities summary
The outcome of this study was used to support the Rel-19 SA WG5 WID on AI/ML management phase 2 (AIML_MGT_Ph2).
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.13 Rel-19 SA WG6 SID - Application layer support for AI/ML services (FS_AIMLAPP)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.13.1 Description
The objective of this study is to enable support for AI/ML services at the application enablement layer. This includes the following:
- Analysis of Rel-18 and Rel-19 requirements: analyse the requirements in TS 22.261 [6] related to AI/ML model distribution, transfer and training. Identify key issues and develop corresponding architectural requirements at the application enablement layer, along with potential enhancements to the application layer architecture.
- Architectural and functional implications: study the architectural and functional implications for existing SA WG6 application enablers (e.g. ADAES, other SEAL services, EDGEAPP) of supporting AI/ML lifecycle operations. This includes operations such as data collection, data preparation, training, inference and federated learning for ML models used in ADAE layer analytics.
- Potential solutions and APIs: identify potential solutions, including information flows and developer-friendly application enablement APIs, to satisfy the architectural requirements and enhancements identified in the previous points.
- Impact on deployments and business models: investigate the possible impacts of application layer support for AI/ML services on different deployments and business models.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.13.2 Activities summary
In this study, TR 23.700-82 [4] described the AI/ML enablement capabilities for supporting vertical use cases. The agreed AIMLE activities which progressed to the normative phase are described in clause 5.2.2.14.2.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.14 Rel-19 SA WG6 WID - Application enablement for AI/ML services (AIML_App)
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.14.1 Description
The objectives of this work include the following: develop the Stage 2 normative technical specification for the AIML enablement service as a new SEAL service, based on the key issues, architecture, solutions and conclusions captured in TR 23.700-82 [4]. The Stage 2 normative technical specification includes the following aspects:
- Architecture requirements, deployment models and application architecture for AIML service enablement over 3GPP networks.
- Procedures, information flows and APIs supporting the concluded solutions related to AIML enablement capabilities for AI/ML, FL (e.g. Vertical FL among VAL UEs, Horizontal FL) and Transfer Learning. Such capabilities include:
  - Support for AIML client management (e.g. registration, discovery) and selection.
  - Support for AIML service lifecycle management aspects (e.g. training, inference, data management).
  - Support for AIML operation split and ML model distribution operations.
  - Support for AIML operations in edge / distributed deployments.
- Procedures, information flows and APIs supporting the concluded solutions for application layer support capabilities related to new ADAE analytics services. Such new analytics services include:
  - DN Energy Analytics.
  - Analytics for supporting FL.
A further objective is to identify potential enhancements to other enablement frameworks (e.g. SEAL, EDGEAPP and CAPIF) based on the specified solutions for the above objectives.
30577dbcedb3290f42b3a55986900ea7
22.850
5.2.2.14.2 Activities summary
5.2.2.14.2.1 AI/ML Functional Entities
In AIML_App, the following logical entities have been introduced within the SEAL framework:
- AIMLE server: provides a common set of services for exposure of AIML functionality, including federated and distributed learning (e.g. FL client registration management, FL client discovery and selection), and reference points. The AIMLE services are offered to the vertical application layer (VAL) and include:
  - Support for application-layer ML model related aspects, including model retrieval, model training, model monitoring, model selection, model distribution, model update and model storage/discovery.
  - Assistance in AI/ML task transfer and split AI/ML operations.
  - Support for HFL/VFL operations, including FL member registration, FL grouping and FL-related event notification, VFL feature alignment, HFL training, and ML model training capability evaluation for FL (HFL/VFL).
  - Support for AIMLE client registration, discovery, participation and selection.
- AIMLE client: a functional entity acting as the application client supporting AIMLE services.
- ML repository: a logical entity that serves both as a registry for AI/ML members or FL members and as a repository for application layer ML model related information. It can be accessed by the AIMLE server.
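The registry-plus-repository role of the ML repository can be sketched with a toy model. All names here (MemberRecord, discover_members, the capability strings) are illustrative assumptions for this sketch, not 3GPP-defined APIs or data types:

```python
from dataclasses import dataclass, field

@dataclass
class MemberRecord:
    member_id: str     # e.g. an AIMLE client or VAL server identifier (hypothetical)
    model_id: str      # ML model the member can serve
    capabilities: set  # e.g. {"training", "inference"}

@dataclass
class MLRepository:
    """Toy stand-in for the ML repository: a member registry plus a model info store."""
    members: list = field(default_factory=list)
    models: dict = field(default_factory=dict)  # model_id -> model information

    def register_member(self, record: MemberRecord) -> None:
        # Register a VAL server / AIMLE server / AIMLE client as an AI/ML or FL member
        self.members.append(record)

    def store_model(self, model_id: str, info: dict) -> None:
        # Store ML model information (e.g. ML model ID, applicable analytics ID)
        self.models[model_id] = info

    def discover_members(self, model_id: str, capability: str) -> list:
        # Discovery under filtering criteria (here: model ID and capability)
        return [m for m in self.members
                if m.model_id == model_id and capability in m.capabilities]

repo = MLRepository()
repo.store_model("m-1", {"analytics_id": "dn-energy"})
repo.register_member(MemberRecord("client-A", "m-1", {"training"}))
repo.register_member(MemberRecord("client-B", "m-1", {"inference"}))
candidates = repo.discover_members("m-1", "training")
print([m.member_id for m in candidates])
```

Such a discovery call corresponds to, e.g., a VAL server identifying candidate FL members for a given ML model ID.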
5.2.2.14.2.2 AI/ML related LCM activities
Model lifecycle enablement for AI/ML
Some AIMLE capabilities apply to ML model lifecycle enablement, which provides assistance for use cases where an ASP/VAL layer wants to find and use other application entities to perform ML operations (e.g. ML model inference), with the AIMLE server acting as a mediator. An example covering some lifecycle enablement capabilities is depicted in Annex C.4 of TS 23.482 [34]. The support capabilities are based on the AIMLE capabilities identified in that specification. In particular, AIMLE undertakes:
- ML model related support capabilities such as model retrieval, discovery and storage (as covered by the procedures in clauses 8.2 and 8.11 of TS 23.482 [34]).
- ML operation related support capabilities such as VFL/HFL and TL enablement, split AI/ML operation support, data management assistance, AI/ML task transfer, and FL assistance in member grouping, registration and event notification (as covered by the procedures in clauses 8.4, 8.6, 8.12, 8.14 and 8.15-8.18 of TS 23.482 [34]).
- AIMLE client related support capabilities, including AIMLE client registration, discovery, participation, monitoring and selection (as covered by the procedures in clauses 8.7-8.10 and 8.13 of TS 23.482 [34]).
Data collection/storage/exposure activities
Data collection in TS 23.482 [34] refers to application data collection from the UE. The EVEX mechanism can be reused for data collection, as described in TS 26.531 [74]. ML model performance degradation can be detected in the AI/ML enablement layer by leveraging ADAES, e.g. based on information collected from the analytics consumer. In AIML_App, one possible use of AI/ML enablement is to support ML-enabled ADAES analytics services (as specified in TS 23.436 [33]).
For data collection and storage related to ADAE analytics:
- Application layer Data Collection and Coordination Function (A-DCCF): coordinates the collection and distribution of data requested by the consumer (ADAE server). Data collection coordination is supported by an A-DCCF. The ADAE server can send requests for data to the A-DCCF rather than directly to the data sources. The A-DCCF may also perform data processing/abstraction and data preparation based on the VAL server requirements.
- Application layer Analytics and Data Repository Function (A-ADRF): stores historical data and/or analytics, i.e. data and/or analytics related to a past time period that has been obtained by the consumer (e.g. ADAE server). After the consumer obtains data and/or analytics, it may store them in an A-ADRF. Whether the consumer contacts the A-ADRF directly or goes via the A-DCCF is based on configuration.
AI/ML-related information storage and discovery for AI/ML
In AIML_App, the ML repository has been defined as: 1) a registry for AI/ML members or FL members (application layer entities participating in an AI/ML operation); and 2) a repository for application layer ML model related information. The AIMLE server stores an ML model in the ML repository along with the ML model information (e.g. ML model ID). The AIMLE server can also discover ML models under certain filtering criteria (e.g. applicability to an ADAES analytics ID). The AIMLE server also registers and stores information on VAL servers, AIMLE servers or AIMLE clients which are expected to serve as AI/ML members or FL members in a model lifecycle operation (e.g. ML training, FL, TL). AIMLE clients or other VAL servers can discover the availability and capabilities of registered AI/ML members or FL members for a given ML model ID. Such discovery allows, for example, the VAL server to identify candidate FL members to be considered for an FL process.
Model training/delivery/(de-)activation/inference emulation activities
In AIML_App, the AIMLE server or the AIMLE client (at the VAL UE side) can also be used for training an application layer ML model, e.g. for a given analytics service. Such ML model training can be used to support ADAES analytics services (as provided in TS 23.436 [33]). Based on a VAL request to provide ML-enabled analytics, ADAES may consume AIMLE services (e.g. ML model training for a given analytics ID) to derive application layer data analytics. The trained ML model can be delivered to the VAL server or ADAES via the ML model training notification API. SA WG6 has not defined any procedures for model (de-)activation or inference emulation.
AI/ML model inference and delivery support for AI/ML
SA WG6 has not defined dedicated procedures for supporting ML model inference; however, it provides assistance for registering and discovering AIMLE clients serving as ML model inference entities for a given analytics ID, model ID or split operation pipeline.
Performance evaluation and accuracy monitoring activities
The AIMLE server, based on a VAL request, provides a capability for monitoring and detecting degradation related to an ML operation/analytics operation, translating it into an (expected or predicted) ML model performance degradation, and performing a trigger action to alleviate the issue (new model training or re-training). Such a trigger action may be either an adaptation of the AIMLE service, such as training of a new ML model for the AIMLE by the same or a different AIMLE client, or re-training of the ML model. AIML_App has provided the basic capability for the performance monitoring activity, which is expected to be further developed in future releases.
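The monitoring-and-trigger pattern described above (detect a sustained performance degradation, then trigger new training or re-training) can be sketched as follows. The threshold, window size and action string are illustrative assumptions, not values defined by SA WG6:

```python
def monitor_and_trigger(accuracy_samples, threshold=0.8, window=3):
    """Return a trigger action if the last `window` accuracy samples
    all fall below `threshold`; otherwise return None.

    A toy stand-in for the AIMLE performance-degradation detection:
    the "retrain" action would map to requesting ML model (re-)training.
    """
    recent = accuracy_samples[-window:]
    if len(recent) == window and all(a < threshold for a in recent):
        return "retrain"
    return None

history = [0.9, 0.85, 0.75, 0.7, 0.72]   # last three samples below threshold
print(monitor_and_trigger(history))
```

A sustained window is used rather than a single sample so that a transient dip does not immediately trigger a costly re-training.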
5.2.2.15 Rel-19 CT WG4 SID - Protocol for AI Data Collection from UPF (FS_PAIDC-UPF)
5.2.2.15.1 Description
In Rel-18, the UPF offers services to the NEF, AF, SMF, NWDAF, DCCF and MFAF via the Nupf service-based interface for data collection in AI/ML related activities. In Rel-19, CT WG4 is studying "Protocol for AI Data Collection from UPF", which aims to study UPF data collection for AI/ML and whether alternative protocols, or enhancements to the existing SBI protocol, are needed to optimize AI/ML data collection while ensuring secure, scalable and reliable data transfers across the core network.
5.2.2.15.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.2.2.16 Rel-19 SA WG3 WID - Security aspects of Core Network Enhanced Support for AIML (AIML_CN_SEC)
5.2.2.16.1 Description
The following objectives are expected to be specified as a result of this work item:
- Security aspects on enhancements to LCS to support AIML.
- Security aspects on the VFL process.
5.2.2.16.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.2.2.17 Rel-19 SA WG5 WID - AI/ML management phase 2 (AIML_MGT_Ph2)
5.2.2.17.1 Description
The objectives of the AI/ML management phase 2 work item are to specify the management capabilities to support AI/ML functions defined by 3GPP, including:
- NG-RAN AIML-based Coverage and Capacity Optimization, and NG-RAN AIML-based Network Slicing, defined by RAN3;
- Model delivery/transfer as defined by RAN1/2;
- ML model training and AI/ML inference functions for 5GC as defined by SA2; and
- MDA (Management Data Analytics) as defined by SA5.
To achieve these objectives, the following work tasks are defined:
WT-1: Specify the AI/ML management capabilities including use cases, requirements and solutions for the relevant AI/ML lifecycle operational steps based on TR 28.858 [19], including:
- WT-1.1: Management capabilities for ML model training:
  - ML-Knowledge-based Transfer Learning;
  - ML pre-training and fine-tuning;
  - ML model training for multiple contexts;
  - ML training data statistics;
  - ML model confidence;
  - Management of Reinforcement Learning;
  - ML model distributed training;
  - Management of Federated Learning;
  - ML authentication.
- WT-1.2: Management capabilities for AI/ML inference emulation:
  - ML inference emulation;
  - ML inference emulation environment selection.
- WT-1.3: Management capabilities for ML model deployment:
  - Enhancements to ML model loading;
  - ML model transfer/delivery.
- WT-1.4: Management capabilities for AI/ML inference:
  - Coordination between the ML capabilities;
  - ML remedial action management;
  - Managing ML models in use in a live network;
  - ML explainability.
5.2.2.17.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.2.2.18 Rel-19 SA WG6 WID - Application Data Analytics Enablement Service (TEI19_ADAES)
5.2.2.18.1 Description
The objectives of this work item include the following:
- Clarify metrics related to the analytics inputs/outputs introduced in TS 23.436 [33].
- Enhance the IEs for the request/response of data collection from the Data Producer and/or A-ADRF (or via A-DCCF) in TS 23.436 [33].
- Complete the definition of IEs for the data/analytics storage subscription request to the A-ADRF in TS 23.436 [33].
5.2.2.18.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.2.2.19 Rel-19 CT WG1/WG3 WID – CT aspects of application enablement for AI/ML services (AIML_App)
5.2.2.19.1 Description
The objective of this work item is to provide the stage 3 solutions and protocol support for application enablement for AIML services (AIML_App), based upon the normative technical specification for the functionalities defined in the stage 2 requirements under the AIML_App WID in the SA WG6 working group. Stage 3 work shall be started only after the applicable normative stage 2 work is available. Stage 3 protocols and solutions will be specified in CT WG1 and CT WG3, respectively, to support the following stage 2 functionalities.
For CT WG1, the expected work includes:
a) Definition of new APIs between the AIML server and AIML client provided by the AIML layer, based on normative stage 2 work developed in 3GPP SA WG6, to support:
  1) ML client configuration provisioning;
  2) AIMLE client selection;
  3) AIMLE client registration;
  4) AIML service lifecycle management;
  5) AIML operational splitting and provisioning management;
  6) vertical federated learning (VFL) and horizontal federated learning (HFL);
  7) AIML data management;
  8) AIML edge services; and
  9) AIML model distribution.
b) Enhancement of the APIs between the ADAE server and ADAE client provided by the ADAE layer, based on normative stage 2 work developed in 3GPP SA WG6, to support:
  1) ML-enabled ADAE analytics; and
  2) Application layer AIML Member Capability Analytics.
For CT WG3, the expected work includes:
a) Definition of new APIs in the network provided by the AIML layer, based on normative stage 2 work developed in 3GPP SA WG6, to support:
  1) AIMLE server and client registration;
  2) AIMLE client information retrieval;
  3) AIML model lifecycle management;
  4) AIML model distribution;
  5) AIMLE client discovery;
  6) AIMLE client selection and reselection;
  7) federated learning (FL) member registration and grouping;
  8) vertical federated learning (VFL) and horizontal federated learning (HFL);
  9) AIML operational splitting and provisioning management;
  10) AIML policy provisioning and management;
  11) AIML service lifecycle management;
  12) AIML services for edge computing; and
  13) AIML information transfer.
b) Enhancement of the APIs provided by the ADAE layer, based on normative stage 2 work developed in 3GPP SA WG6, to support:
  1) ML-enabled ADAE analytics;
  2) AI-enabled DN Energy Analytics; and
  3) Application layer AIML Member Capability Analytics.
5.2.2.19.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.2.2.20 Rel-19 CT WG3 WID – Rel-19 Enhancements of Network Automation Enablers (eNetAE19)
5.2.2.20.1 Description
The objective of this work item is to specify the technical improvements and enhancements to the network data analytics related services in Release 19 stage 3 level, mainly (but not exhaustively) including:
1 Potential completion of the support of Processing Instructions in MFAF and ADRF APIs.
2 Completion of the analytics transfer procedure.
3 Completion of the analytics aggregation.
4 Clarifications for input data information in ML model training procedure.
5 Enhancements of Collective Behaviour of UEs in AF, e.g., to include the average moving speed of the UE.
6 Enhancements of Nnwdaf_MLModelTraining_Notify service operation to provide the Global ML Model Accuracy information.
7 Other technical enhancements and corrections for the services related to Network Automation Enablers (i.e., the pure stage 3 protocol and interface enhancements are not included), which are not covered by the other dedicated WIs.
5.2.2.20.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.2.2.21 Rel-19 CT WG3/WG4 WID – CT aspects of Core Network Enhanced Support for Artificial Intelligence (AI) and Machine Learning (ML) (AIML_CN)
5.2.2.21.1 Description
The objective of this work is to specify the CT aspects of Core Network Enhanced Support for Artificial Intelligence (AI)/Machine Learning (ML) in order to implement the stage 2 normative work. The stage 3 work shall be started after the applicable normative stage 2 requirements are available; the detailed impacts are subject to change as stage 2 normative CRs are agreed. The following aspects of work are expected to be covered:
CT WG3:
- For Direct AI/ML based Positioning:
  - update on NWDAF to support Direct AI/ML based Positioning, e.g. AI positioning model training, AI positioning model delivery and performance monitoring;
  - update on NWDAF to support data collection for AI positioning model training.
- For Vertical Federated Learning (VFL):
  - update on NWDAF and AF to support sample alignment for VFL;
  - update on NWDAF and AF to support VFL training;
  - update on NWDAF and AF to support VFL inference;
  - update on NWDAF and AF to support performance monitoring for VFL;
  - update on NEF to support VFL in case of an untrusted AF.
- For NWDAF-assisted policy control and QoS enhancement:
  - update on PCF and NWDAF to support the analytics for QoS and policy assistance information;
  - update on NFs and AF to support input data collection;
  - update on NWDAF to support performance monitoring for NWDAF-assisted policy control.
- For signalling storm mitigation and prevention:
  - update on NWDAF to support analytics for signalling storm mitigation and prevention caused by NFs;
  - update on NWDAF to support analytics for signalling storm mitigation and prevention caused by massive signalling of UEs;
  - update on NFs to support input data collection.
- Potential impacts on ADRF, DCCF and MFAF APIs for supporting the data sources or analytics.
CT WG4:
- For Direct AI/ML based Positioning:
  - update on LMF to support Direct AI/ML based positioning;
  - update on UDM to allow the LMF as a consumer for retrieving user consent;
  - update on NFs to support training data collection for the AI positioning model;
  - update to NWDAF discovery via NRF for training the ML model for Direct AI/ML based Positioning.
- For Vertical Federated Learning:
  - update on NRF to support VFL entity registration and discovery.
- For signalling storm mitigation and prevention:
  - update on AMF and NRF to support input data collection;
  - update to the SCP entity to support input data collection.
5.2.2.21.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.3 AI/ML related activities in TSG RAN Working Groups
5.3.1 AI/ML related terminology
5.3.1.1 TSG RAN WG1
The following definitions are provided in clause 3 of TR 38.843 [3]:
- AI/ML-enabled Feature: refers to a Feature where AI/ML may be used.
- AI/ML Model: A data driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs.
- AI/ML model delivery: A generic term referring to delivery of an AI/ML model from one entity to another entity in any manner.
NOTE 1: An entity could mean a network node/function (e.g. gNB, LMF, etc.), UE, proprietary server, etc.
- AI/ML model ID: A logical AI/ML model is identified by a Model ID. The Model ID, if needed, can be used in a Functionality (defined in functionality-based LCM) for LCM operations.
- AI/ML model Inference: A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
- AI/ML model testing: A subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.
- AI/ML model training: A process to train an AI/ML Model [by learning the input/output relationship] in a data driven manner and obtain the trained AI/ML Model for inference.
- AI/ML model transfer: Delivery of an AI/ML model over the air interface in a manner that is not transparent to 3GPP signalling, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.
- AI/ML model validation: A subprocess of training, to evaluate the quality of an AI/ML model using a dataset different from one used for model training, that helps selecting model parameters that generalize beyond the dataset used for model training.
- Data collection: A process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
- Federated learning / federated training: A machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g. UEs, gNBs) each performing local model training using local data samples. The technique requires multiple interactions of the model, but no exchange of local data samples.
- Functionality identification: A process/method of identifying an AI/ML functionality for the common understanding between the NW and the UE.
NOTE 2: Information regarding the AI/ML functionality may be shared during functionality identification. Where AI/ML functionality resides depends on the specific use cases and sub use cases.
- Management instruction: Information needed to ensure proper inference operation. This information may include selection/(de)activation/switching of AI/ML models or AI/ML functionalities, fallback to non-AI/ML operation, etc.
- Model activation: enable an AI/ML model for a specific AI/ML-enabled feature.
- Model deactivation: disable an AI/ML model for a specific AI/ML-enabled feature.
- Model download: Model transfer from the network to the UE.
- Model identification: A process/method of identifying an AI/ML model for the common understanding between the NW and the UE.
NOTE 3: The process/method of model identification may or may not be applicable.
NOTE 4: Information regarding the AI/ML model may be shared during model identification.
- Model monitoring: A procedure that monitors the inference performance of the AI/ML model.
- Model parameter update: Process of updating the model parameters of a model.
- Model selection: The process of selecting an AI/ML model for activation among multiple models for the same AI/ML-enabled feature.
NOTE 5: Model selection may or may not be carried out simultaneously with model activation.
- Model switching: Deactivating a currently active AI/ML model and activating a different AI/ML model for a specific AI/ML-enabled feature.
- Model update: Process of updating the model parameters and/or model structure of a model.
- Model upload: Model transfer from the UE to the network.
- Network-side (AI/ML) model: An AI/ML Model whose inference is performed entirely at the network.
- Offline field data: The data collected from field and used for offline training of the AI/ML model.
- Offline training: An AI/ML training process where the model is trained based on collected dataset and where the trained model is later used or delivered for inference.
NOTE 6: This definition only serves as a guidance. There may be cases that may not exactly conform to this definition but could still be categorized as offline training by commonly accepted conventions.
- Online field data: The data collected from field and used for online training of the AI/ML model.
- Online training: An AI/ML training process where the model being used for inference is (typically continuously) trained in (near) real-time with the arrival of new training samples.
- Reinforcement Learning (RL): A process of training an AI/ML model from input (a.k.a. state) and a feedback signal (a.k.a. reward) resulting from the model's output (a.k.a. action) in an environment the model is interacting with.
- Semi-supervised learning: A process of training a model with a mix of labelled data and unlabelled data.
- Supervised learning: A process of training a model from input and its corresponding labels.
- Test encoder/decoder for TE: AI/ML model for UE encoder/gNB decoder implemented by TE.
- Two-sided (AI/ML) model: A paired AI/ML Model(s) over which joint inference is performed, where joint inference comprises AI/ML Inference whose inference is performed jointly across the UE and the network, i.e. the first part of inference is firstly performed by the UE and then the remaining part is performed by the gNB, or vice versa.
- UE-side (AI/ML) model: An AI/ML Model whose inference is performed entirely at the UE.
- Unsupervised learning: A process of training a model without labelled data.
- Proprietary-format models: ML models of vendor-/device-specific proprietary format, from the 3GPP perspective. They are not mutually recognizable across vendors and hide model design information from other vendors when shared.
NOTE 7: An example is a device-specific binary executable format.
- Open-format models: ML models of specified format that are mutually recognizable across vendors and allow interoperability, from the 3GPP perspective. They are mutually recognizable between vendors and do not hide model design information from other vendors when shared.
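The federated learning definition above (local training on local data samples, with only model parameters exchanged) can be illustrated by a minimal federated-averaging round. The single-parameter least-squares model, learning rate and toy datasets are arbitrary choices for this sketch, not anything specified by 3GPP:

```python
def local_update(weights, data, lr=0.1):
    # Toy "local model training": one gradient step of a 1-parameter model y = w*x
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_round(global_weights, node_datasets):
    # Each decentralized node trains locally on its own samples...
    local_models = [local_update(global_weights, d) for d in node_datasets]
    # ...and the server aggregates the interim results into the global model.
    # Raw data samples never leave the nodes; only parameters are exchanged.
    return [sum(m[0] for m in local_models) / len(local_models)]

# Two nodes with private data roughly following y = 2x
nodes = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1), (3.0, 6.0)]]
w = [0.0]
for _ in range(50):
    w = federated_round(w, nodes)
print(round(w[0], 2))
```

The per-round structure (download global model, train locally, upload interim results, aggregate) matches the distributed/federated learning operation described for the 5G system in clause 6.40.1 of TS 22.261 [6].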
5.3.1.2 TSG RAN WG3
The following definitions are provided in clause 16.20 of TS 38.300 [11]:
- AI/ML Model Training follows the definition of "ML model training" as specified in clause 3.1 of TS 28.105 [9].
- AI/ML Model Inference follows the definition of "AI/ML inference" as defined in clause 3.1 of TS 28.105 [9].
5.3.2 AI/ML related activities
5.3.2.1 Rel-19 RAN WG1/RAN WG4 WID - Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface (NR_AIML_air)
5.3.2.1.1 Description
The objective of this work is to provide specification support for the following aspects:
- AI/ML general framework for one-sided AI/ML models within the realm of what has been studied in the FS_NR_AIML_air project (RAN WG2):
  - Signalling and protocol aspects of Life Cycle Management (LCM) enabling functionality and model (if justified) selection, activation, deactivation, switching and fallback; identification-related signalling is part of this objective.
  - Necessary signalling/mechanism(s) for LCM to facilitate model training, inference, performance monitoring and data collection (except for the purpose of CN/OAM/OTT collection of UE-sided model training data) for both UE-sided and NW-sided models.
  - Signalling mechanism of applicable functionalities/models.
- Beam management - DL Tx beam prediction for both UE-sided model and NW-sided model, encompassing (RAN WG1/RAN WG2):
  - Spatial-domain DL Tx beam prediction for Set A of beams based on measurement results of Set B of beams ("BM-Case1").
  - Temporal DL Tx beam prediction for Set A of beams based on the historic measurement results of Set B of beams ("BM-Case2").
  - Specify necessary signalling/mechanism(s) to facilitate LCM operations specific to the Beam Management use cases, if any.
  - Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at the UE.
- Positioning accuracy enhancements, encompassing (RAN WG1/RAN WG2/RAN WG3):
  - Direct AI/ML positioning:
    - Case 1: UE-based positioning with UE-side model, direct AI/ML positioning.
    - Case 3b: NG-RAN node assisted positioning with LMF-side model, direct AI/ML positioning.
  - AI/ML assisted positioning:
    - Case 3a: NG-RAN node assisted positioning with gNB-side model, AI/ML assisted positioning.
  - Specify necessary measurements and signalling/mechanism(s) to facilitate LCM operations specific to the Positioning accuracy enhancement use cases, if any.
  - Investigate and specify the necessary signalling of necessary measurement enhancements (if any).
  - Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at the UE for relevant positioning sub use cases.
- CSI feedback enhancement, encompassing [RAN1/RAN2]:
  - CSI prediction (UE-sided model):
    - Functionality-based LCM leveraged from other use cases, when necessary and applicable.
    - Study, and if necessary specify, consistency of training/inference.
- Core requirements for the above three use cases for AI/ML LCM procedures and UE features (RAN WG4):
  - Specify necessary RAN WG4 core requirements for the above three use cases.
  - Specify necessary RAN WG4 core requirements for LCM procedures including performance monitoring.
- For the Beam Management and Positioning accuracy enhancement use cases, specify performance requirements and test cases for AI/ML LCM procedures (including performance monitoring) and UE features enabled by UE-sided models:
  - Specify necessary performance requirements and tests (including metrics) for the above-mentioned use cases.
  - Specify necessary test cases and performance requirements for the LCM procedure, including performance monitoring.
NOTE: The following aspects may be considered:
- Relation to legacy requirements;
- Performance monitoring and LCM aspects considering use-case specifics;
- Generalization aspects;
- Static/non-static scenarios/conditions and propagation conditions for testing (e.g. CDL, field data, etc.);
- UE processing capability and limitations;
- Post-deployment validation due to model change/drift.
- RAN WG5 aspects related to testability and interoperability to be addressed on a request basis.
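To make the BM-Case1 idea concrete: qualities of the full Set A of beams are predicted from measurements of a smaller Set B. The sketch below substitutes simple linear interpolation over the beam index for the trained AI/ML model, and the beam indices and RSRP values are invented for illustration only:

```python
def predict_set_a(set_b_idx, set_b_rsrp, set_a_idx):
    """Predict RSRP (dBm) for every Set A beam from sparse Set B measurements
    by linear interpolation over the beam index (a stand-in for an AI/ML model)."""
    preds = []
    for i in set_a_idx:
        # Nearest measured beams at or below / at or above the target index
        lo = max((b for b in set_b_idx if b <= i), default=set_b_idx[0])
        hi = min((b for b in set_b_idx if b >= i), default=set_b_idx[-1])
        if lo == hi:
            preds.append(set_b_rsrp[set_b_idx.index(lo)])
        else:
            r0 = set_b_rsrp[set_b_idx.index(lo)]
            r1 = set_b_rsrp[set_b_idx.index(hi)]
            preds.append(r0 + (r1 - r0) * (i - lo) / (hi - lo))
    return preds

set_b_idx = [0, 4, 8]                  # measured subset (Set B)
set_b_rsrp = [-90.0, -70.0, -95.0]     # illustrative measurements, beam 4 strongest
set_a_idx = list(range(9))             # full beam set (Set A)
pred = predict_set_a(set_b_idx, set_b_rsrp, set_a_idx)
best = set_a_idx[pred.index(max(pred))]
print(best)
```

The payoff the WID targets is measurement reduction: only Set B is measured, yet a DL Tx beam can be selected from all of Set A.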
5.3.2.1.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
5.3.2.2 Rel-19 RAN WG2 SID - AIML for mobility in NR (FS_NR_AIML_Mob)
5.3.2.2.1 Description
The study focuses on mobility enhancement in RRC_CONNECTED mode over the air interface, following the existing mobility framework, i.e. the handover decision is always made on the network side. Mobility use cases focus on standalone NR PCell change. Both UE-side and network-side AI/ML models can be considered. The investigation evaluates the potential benefits and gains of AI/ML aided mobility for network-triggered L3-based handover, considering the following aspects:
- AI/ML based RRM measurement and event prediction:
  - Cell-level measurement prediction, including intra- and inter-frequency (UE-sided and NW-sided model) (RAN WG2).
  - Inter-cell beam-level measurement prediction for L3 mobility (UE-sided and NW-sided model) (RAN WG2).
  - HO failure/RLF prediction (UE-sided model) (RAN WG2).
  - Measurement event prediction (UE-sided model) (RAN WG2).
  - Study the need/benefits of any other UE assistance information for the network-side model (RAN WG2).
- The evaluation of the AI/ML aided mobility benefits should consider HO performance KPIs (e.g. ping-pong HO, HOF/RLF, time of stay, handover interruption, prediction accuracy and measurement reduction, etc.) and complexity trade-offs (RAN WG2).
- Potential AI mobility specific enhancements should be based on the Rel-19 AI/ML air interface WID general framework (e.g. LCM, performance monitoring, etc.) (RAN WG2).
- Potential specification impacts of AI/ML aided mobility (RAN WG2).
- Evaluate testability, interoperability and impacts on RRM requirements and performance (RAN WG4).
NOTE: There was no normative work performed in Rel-19.
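One of the KPIs listed above, ping-pong HO, can be counted from a serving-cell timeline. The definition used in this sketch (a return to the previous cell after a stay shorter than a configurable minimum time-of-stay) is a common evaluation convention, not a 3GPP-specified metric:

```python
def count_ping_pongs(timeline, min_time_of_stay=1.0):
    """timeline: [(time_entered_s, cell_id), ...] in time order.
    A ping-pong is a handover back to the previous serving cell after a
    stay shorter than min_time_of_stay seconds (illustrative convention)."""
    count = 0
    for i in range(2, len(timeline)):
        t_prev, _ = timeline[i - 1]
        t, cell = timeline[i]
        if cell == timeline[i - 2][1] and (t - t_prev) < min_time_of_stay:
            count += 1
    return count

# UE enters cell A, bounces A -> B -> A within 0.4 s, then moves on to C
timeline = [(0.0, "A"), (5.0, "B"), (5.4, "A"), (9.0, "C")]
print(count_ping_pongs(timeline))
```

In an evaluation campaign, reducing this count (without increasing HOF/RLF) is one indicator that AI/ML-predicted measurements improved the handover decisions.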
5.3.2.2.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.3.2.3 Rel-18 RAN WG3 WID - Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN (NR_AIML_NGRAN-Core)
5.3.2.3.1 Description
The objective of this work is to specify data collection enhancements and signalling support within existing NG-RAN interfaces and architecture (including non-split architecture and split architecture) for AI/ML-based Network Energy Saving, Load Balancing and Mobility Optimization.
Support of AI/ML for NG-RAN, as a RAN internal function, is used to facilitate Artificial Intelligence (AI) and Machine Learning (ML) techniques in NG-RAN. The objective of AI/ML for NG-RAN is to improve network performance and user experience, through analysing the data collected and autonomously processed by the NG-RAN, which can yield further insights, e.g. for Network Energy Saving, Load Balancing and Mobility Optimization.
Support of AI/ML in NG-RAN requires inputs from neighbour NG-RAN nodes (e.g. predicted information, feedback information, measurements) and/or UEs (e.g. measurement results). Signalling procedures used for the exchange of information to support AI/ML in NG-RAN are use case and data type agnostic, which means that the intended usage of the data exchanged via these procedures (e.g. input, output, feedback) is not indicated. The collection and reporting of information are configured through the Data Collection Reporting Initiation procedure, while the actual reporting is performed through the Data Collection Reporting procedure. Support of AI/ML in NG-RAN does not apply to the ng-eNB.
For the deployment of AI/ML in NG-RAN, the following scenarios may be supported:
- AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the NG-RAN node.
- AI/ML Model Training and AI/ML Model Inference are both located in the NG-RAN node.
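The two-step pattern above (configuration via the Data Collection Reporting Initiation procedure, delivery via the Data Collection Reporting procedure) can be sketched as a subscribe-then-report exchange. The class and metric names are illustrative assumptions, not NG-RAN signalling definitions:

```python
class DataCollectionPeer:
    """Toy NG-RAN node peer: initiation configures what to report,
    a separate reporting step delivers it."""

    def __init__(self):
        self.subscriptions = {}  # subscription_id -> requested metric names

    def reporting_initiation(self, sub_id, metrics):
        # Use case and data type agnostic: the request does not indicate
        # whether the data will serve as model input, output or feedback.
        self.subscriptions[sub_id] = metrics

    def reporting(self, sub_id, measurements):
        # Deliver only what the initiation step asked for
        wanted = self.subscriptions.get(sub_id, [])
        return {k: v for k, v in measurements.items() if k in wanted}

peer = DataCollectionPeer()
peer.reporting_initiation("sub-1", ["predicted_load", "energy_cost"])
report = peer.reporting(
    "sub-1", {"predicted_load": 0.7, "energy_cost": 12, "other": 1})
print(report)
```

Splitting configuration from delivery lets one initiation drive many subsequent reports without repeating the filtering criteria.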
5.3.2.3.2 Activities summary
Summary is available in clause 11.2 of TR 21.918 [2].
5.3.2.4 Rel-19 RAN WG3 SID - Enhancements for Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN (FS_NR_AIML_NGRAN_enh)
5.3.2.4.1 Description
The objective of this study is to further investigate new AI/ML-based use cases, identify enhancements to support AI/ML functionality, and further discuss the Rel-18 leftovers. The detailed objectives of the study are as follows:
- Study two new AI/ML-based use cases, i.e. Network Slicing and CCO, with existing NG-RAN interfaces and architecture (including non-split architecture and split architecture).
- Rel-18 leftovers as candidates for normative work, based on the Rel-18 principles, as follows:
	- Mobility optimization for NR-DC.
	- Split architecture support for Rel-18 use cases based on the conclusions from the Rel-18 WI.
	- Energy Saving enhancements, e.g. Energy Cost Prediction.
	- Continuous MDT collection targeting the same UE across RRC states.
	- Multi-hop UE trajectory across gNBs.
NOTE: The outcome of this study was used to support the NR_AIML_NGRAN_enh-Core Rel-19 RAN WG3 WID, see clause 5.3.2.6.
5.3.2.4.2 Activities summary
The outcome of this study was used to support the Rel-19 RAN WG3 WID on Enhancements for AI/ML for NG-RAN (NR_AIML_NGRAN_enh).
5.3.2.5 Rel-19 RAN WG1/RAN WG4 SID - Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface (FS_NR_AIML_air_Ph2)
5.3.2.5.1 Description
The objective of this work is to provide further study on some outstanding issues identified during the prior study, specifically for the following aspects:
- CSI feedback enhancement [RAN1]:
	- For CSI temporal prediction: further update TR 38.843 [3] with additional evaluations.
	- For CSI compression (two-sided model), further study ways to:
		- Improve the trade-off between performance and complexity/overhead:
			- e.g. considering extending the spatial/frequency compression to spatial/temporal/frequency compression, cell/site-specific models, CSI compression plus prediction (compared to the Rel-18 non-AI/ML-based approach), etc.
		- Alleviate/resolve issues related to inter-vendor training collaboration, while addressing the necessary specification impact analysis, as well as other aspects requiring further study/conclusion as captured in the conclusions clause of TR 38.843 [3]:
			- Necessity and details of the model identification concept and procedure in the context of LCM for two-sided models [RAN2/RAN1].
			- CN/OAM/OTT collection of UE-sided model training data [RAN2/RAN1]:
				- For the FS_NR_AIML_Air study use cases, identify the corresponding contents of UE data collection.
				- Analyse the UE data collection mechanisms identified during the FS_NR_AIML_Air study (clause 7.2.1.3.2 of TR 38.843 [3]) along with the implications and limitations of each of the methods.
			- Model transfer/delivery [RAN2/RAN1]:
				- Determine whether there is a need to consider standardised solutions for transferring/delivering AI/ML model(s), considering at least the solutions identified during the FS_NR_AIML_Air study.
- Testability and interoperability [RAN4]:
	- Further analyse the various testing options for two-sided models, in collaboration with RAN1, including at least:
		- Relation to legacy requirements.
		- Performance monitoring and LCM aspects considering use-case specifics.
		- Generalization aspects.
		- Static/non-static scenarios/conditions and propagation conditions for testing (e.g. CDL, field data, etc.).
		- UE processing capability and limitations.
		- Post-deployment validation due to model change/drift.
	- RAN5 aspects related to testability and interoperability to be addressed on a request basis.
5.3.2.6 Rel-19 RAN WG3 WID - Enhancements for Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN (NR_AIML_NGRAN_enh-Core)
5.3.2.6.1 Description
The aim of this work item is to specify new AI/ML-based use cases and introduce further enhancements to finalize the Rel-18 leftovers, based on the conclusions captured in TR 38.743 [69] (FS_NR_AIML_NGRAN_enh). The objective of this work is to provide specification support for the following aspects:
- Specify data collection enhancements and signalling support within existing NG-RAN interfaces and architecture (including non-split architecture and split architecture) for AI/ML-based Slicing and AI/ML-based CCO. [RAN3]
- Support of the leftovers in Rel-18 AI/ML for NG-RAN [RAN3]:
	- Mobility Optimization for NR-DC.
	- Split architecture support for Rel-18 use cases.
	- Continuous MDT collection targeting the same UE across RRC states.
NOTE: Coordination with RAN WG2 and SA WG5 when needed.
5.3.2.6.2 Activities summary
Editor's note: Reference to TR 21.919 can be added when the work item summary is made available.
6 Analysis on AI/ML across 3GPP
6.1 General
This clause identifies any potential misalignments and inconsistencies for AI/ML across 3GPP, based on clause 5.
NOTE: Any RAN-related aspects are subject to early coordination and feedback from TSG RAN.
6.2 AI/ML related terminology
6.2.1 Analysis on AI/ML model related terminology consistency
This clause identifies any potential misalignments and inconsistencies for AI/ML terminology across 3GPP, based on clause 5.
6.2.1.1 Analysis on ML model
The term 'ML model' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.1-1.

Table 6.2.1.1-1: Definition of ML model as defined across 3GPP WGs

TSG (TS/TR)	ML model
SA WG5, TS 28.105 [9]	A manageable representation of an ML model algorithm. (NOTE 1, NOTE 2, NOTE 3)
SA WG6, TS 23.482 [34]	According to TS 28.105 [9], mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information.
RAN WG1, TR 38.843 [3]	A data driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs.

NOTE 1: An ML model algorithm is a mathematical algorithm through which running a set of input data can generate a set of inference output.
NOTE 2: An ML model algorithm is proprietary and not in scope for standardization, and is therefore not treated in this specification.
NOTE 3: An ML model may include metadata. Metadata may include e.g. information related to the trained model and applicable runtime context.

The following unified definition for 'ML model' is proposed:

ML model: A mathematical algorithm that applies ML techniques to generate a set of outputs based on a set of inputs. It may include metadata which consists of, e.g. information related to the model and applicable runtime context.

NOTE: An ML model can be managed, stored and transferred as artifacts, which may be containers, images, or proprietary file formats.
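The proposed unified definition can be illustrated with a minimal sketch: a data-driven algorithm whose parameter is "trained" from data, which then generates outputs from inputs, packaged together with metadata about the trained model and its runtime context. All names below are illustrative assumptions, not normative structures from any TS.

```python
from dataclasses import dataclass

@dataclass
class MLModel:
    """Illustrative ML model artifact: the trained algorithm plus
    optional metadata (e.g. applicable runtime context)."""
    weight: float   # the single "trained" parameter of a trivial linear model
    metadata: dict  # e.g. information about training and runtime context

def train(samples: list) -> MLModel:
    # Least-squares fit of y = w * x (no intercept): a minimal example
    # of an algorithm "trained" by data rather than hand-coded rules.
    w = sum(x * y for x, y in samples) / sum(x * x for x, _ in samples)
    return MLModel(weight=w,
                   metadata={"algorithm": "least-squares", "features": 1})

def infer(model: MLModel, x: float) -> float:
    # Generate an output from an input using the trained parameter.
    return model.weight * x
```

Note how this matches all three column entries of Table 6.2.1.1-1 at once: it is a mathematical algorithm (SA WG6), it is data driven and maps inputs to outputs (RAN WG1), and the `MLModel` object is a manageable representation carrying metadata (SA WG5).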