3GPP TR 22.851
7.2.4 Network Access Control
Table 7.2.4-1: Consolidated Requirements on Network Access Control

CPR 7.2.4-001: In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 5G system shall support a mechanism to enable a UE with a subscription to a Participating Operator to select an appropriate radio access network (i.e. E-UTRAN, NG-RAN) and/or access an authorized Shared NG-RAN. (Original PR #: [PR 5.3.6-004], [PR 5.3.6-003])

CPR 7.2.4-002: In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 5G system shall support access control for a UE accessing a Shared NG-RAN and be able to apply differentiated access control for different access networks of Shared NG-RANs when more than one Shared NG-RAN is available for the Participating Operator to choose from. (Original PR #: [PR 5.3.6-002], [PR 5.7.6-003])

CPR 7.2.4-003: In case of Indirect Network Sharing, based on the Hosting and Participating Operators' policies, the 5G system shall enable the Participating Operator to provide steering information in order to assist a UE with access network selection amongst the Hosting Operator's available Shared RAN(s). (Original PR #: [PR 5.7.6-002])

NOTE: The above requirements assume no UE impacts.
7.2.5 Regulatory Services
Table 7.2.5-1: Consolidated Requirements on the Regulatory Services

CPR 7.2.5-001: In case of Indirect Network Sharing and subject to regulatory requirements and mutual agreement between the participating operators and the hosting operator, the 5G system shall support regulatory services (e.g. PWS, emergency calls). (Original PR #: [PR 5.6.6-001], [PR 5.8.6-001]; Comment: PRs merged)

CPR 7.2.5-002: In Indirect Network Sharing, the 5G system shall be able to provide a UE accessing a Shared NG-RAN network with positioning service in compliance with regulatory requirements. (Original PR #: [PR 5.6.6-002]; Comment: PR modified)

CPR 7.2.5-003: Subject to regulatory requirements and mutual agreement between the participating operators and the hosting operator, the Shared NG-RAN operator shall be able to broadcast PWS messages originated from the core network of the hosting operator. (Original PR #: [PR 5.8.6-002]; Comment: PR modified)
7.2.6 Charging
Table 7.2.6-1: Charging requirements

CPR 7.2.6-001: The 5G core network shall be able to support collection of charging information associated with a UE accessing a Shared NG-RAN using Indirect Network Sharing. (Original PR #: [PR 5.2.6-001]; Comment: align with the definitions: 'shared network' changed to 'Shared NG-RAN')
8 Conclusion and recommendations
This document analyzes a number of use cases to enhance the support of network sharing. Considerations and the resulting potential consolidated requirements have been captured in clauses 6 and 7. It is recommended to proceed with normative work and to include the identified consolidated requirements in order to better serve communication services.

Annex A (informative): Change history

Date | Meeting | TDoc | CR | Rev | Cat | Subject/Comment | New version
05/2022 | SA1#98e | S1-221092 | - | - | - | Initial Skeleton | 0.0.0
05/2022 | SA1#98e | - | - | - | - | Incorporation of approved pCRs: S1-221271; S1-221292; S1-221272 | 0.1.0
09/2022 | SA1#99e | - | - | - | - | Incorporation of approved pCRs: S1-222392; S1-222393; S1-222394; S1-222395; S1-222396 | 0.2.0
11/2022 | SA1#100 | - | - | - | - | Incorporation of approved pCRs: S1-223623; S1-223624; S1-223682; S1-223626; S1-223683; S1-223599 | 0.3.0
12/2022 | SA#98e | SP-221266 | - | - | - | Raised to v.1.0.0 by MCC for presentation for information to SA#98e | 1.0.0
02/2023 | SA1#101 | - | - | - | - | Incorporation of approved pCRs: S1-230579; S1-230746; S1-230361; S1-230580; S1-230067; S1-230781; S1-230782; S1-230583 | 1.1.0
05/2023 | SA1#102 | - | - | - | - | Incorporation of approved pCRs: S1-231523; S1-231182; S1-231138; S1-231525; S1-231506; S1-231526; S1-231723; S1-231527; S1-231522 | 1.2.0
06/2023 | SA#100 | SP-230508 | - | - | - | MCC clean-up for approval by SA | 2.0.0
06/2023 | SA#100 | SP-230508 | - | - | - | Raised to v.19.0.0 by MCC following approval by SA | 19.0.0
2023-09 | SA#101 | SP-231019 | 0001 | 1 | F | Updates on the TR 22.851 consolidation and consideration | 19.1.0
2023-09 | SA#101 | SP-231019 | 0003 | 1 | F | Update Network Access Control CPR | 19.1.0
3GPP TR 22.856
1 Scope
The present document investigates specific use cases and service requirements for 5G system support of enhanced XR-based services (XR-based services are an essential part of the "Metaverse" services considered in this study), as well as potentially other functionality, to offer a shared and interactive user experience of local content and services, accessed either by users in proximity or remotely. In particular, the following areas are studied:
- Support of interactive XR media shared among multiple users in a single location, including:
  - performance (KPI) aspects, e.g. latency, throughput, connection density;
  - efficiency and scalability aspects, for large numbers of users in a single location;
  - the combination of haptic and other, non-haptic, types of XR media.
- Identification of users and other digital representations of entities interacting within the Metaverse service.
- Acquisition, use and exposure of local (physical and digital) information to enable Metaverse services, including:
  - acquiring local spatial/environmental information and user/UE information (including viewing angle, position and direction);
  - exposing locally acquired spatial, environmental and user/UE information to 3rd parties to enable Metaverse services.
- Other aspects, such as privacy, charging, public safety and security requirements.
The study also investigates gaps between the identified new potential requirements and the requirements already specified for the 5G system.
It is acknowledged that there are activities related to the Metaverse topic outside of 3GPP, such as the W3C Open Metaverse Interoperability Group (OMI). These activities may be considered in the form of use cases and related contributions to this study, but there is no specific objective for this study to consider or align with external standardization activities.
A difference between this study and TR 22.847 "Study on supporting tactile and multi-modality communication services" is that Metaverse services would involve coordination of input data from different devices/sensors from different users and coordination of output data to different devices at different destinations to support the same task or application.
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[2] 3GPP TS 22.228: "Service requirements for the Internet Protocol (IP) Multimedia core network Subsystem (IMS)".
[3] 3GPP TS 22.173: "IP Multimedia Core Network Subsystem (IMS) Multimedia Telephony Service and supplementary services".
[4] 3GPP TS 22.101: "Service principles".
[5] 3GPP TS 22.261: "Service requirements for the 5G system".
[6] Talaba D, Horvath I, Lee K H: "Special issue of Computer-Aided Design on virtual and augmented reality technologies in product design", Computer-Aided Design 42:361-363.
[7] Wang X, Tsai J J-H: "Collaborative Design in Virtual Environments", In: ISCA 48:1-2.
[8] Benford S, Greenhalgh C, Rodden T: "Collaborative virtual environments", Communications of the ACM 44:79-85.
[9] M. Eid, J. Cha, and A. El Saddik: "Admux: An adaptive multiplexer for haptic-audio-visual data communication", IEEE Trans. Instrument. and Measurement, vol. 60, pp. 21-31, Jan 2011.
[10] K. Iwata, Y. Ishibashi, N. Fukushima, and S. Sugawara: "QoE assessment in haptic media, sound, and video transmission: Effect of playout buffering control", Comput. Entertain., vol. 8, pp. 12:1-12:14, Dec 2010.
[11] N. Suzuki and S. Katsura: "Evaluation of QoS in haptic communication based on bilateral control", in IEEE Int. Conf. on Mechatronics (ICM), Feb 2013, pp. 886-891.
[12] E. Isomura, S. Tasaka, and T. Nunome: "A multidimensional QoE monitoring system for audiovisual and haptic interactive IP communications", in IEEE Consumer Communications and Networking Conference (CCNC), Jan 2013, pp. 196-202.
[13] A. Hamam and A. El Saddik: "Toward a mathematical model for quality of experience evaluation of haptic applications", IEEE Trans. Instrument. and Measurement, vol. 62, pp. 3315-3322, Dec 2013.
[14] O. Holland et al.: "The IEEE 1918.1 'Tactile Internet' Standards Working Group and its Standards", Proceedings of the IEEE, vol. 107, no. 2, Feb 2019.
[15] Lee, Donghwan et al.: "Large-scale Localization Datasets in Crowded Indoor Spaces", NAVER LABS, NAVER LABS Europe, CVPR 2021.
[16] 3GPP TR 22.837: "Study on Integrated Sensing and Communication".
[17] Alriksson, F., Kang, D.H., Phillips, C., Pradas, J.L. and Zaidi, A., 2021: "XR and 5G: Extended reality at scale with time-critical communication", Ericsson Technology Review.
[18] 3GPP TR 26.928: "Extended Reality (XR) in 5G".
[19] Ge, Y., Wen, F., Kim, H., Zhu, M., Jiang, F., Kim, S., Svensson, L. and Wymeersch, H., 2020: "5G SLAM using the clustering and assignment approach with diffuse multipath", Sensors, 20(16), p. 4656.
[20] Kim, H., Granström, K., Gao, L., Battistelli, G., Kim, S. and Wymeersch, H., 2020: "5G mmWave cooperative positioning and mapping using multi-model PHD filter and map fusion", IEEE Transactions on Wireless Communications, 19(6), pp. 3782-3795.
[21] Liu, A., Huang, Z., Li, M., Wan, Y., Li, W., Han, T.X., Liu, C., Du, R., Tan, D.K.P., Lu, J. and Shen, Y., 2022: "A survey on fundamental limits of integrated sensing and communication", IEEE Communications Surveys & Tutorials, 24(2), pp. 994-1034.
[22] Dwivedi, S., Shreevastav, R., Munier, F., Nygren, J., Siomina, I., Lyazidi, Y., Shrestha, D., Lindmark, G., Ernström, P., Stare, E. and Razavi, S.M., 2021: "Positioning in 5G networks", IEEE Communications Magazine, 59(11), pp. 38-44.
[23] Suhr, C.: "S.L.A.M. and Optical Tracking for XR", https://medium.com/desn325-emergentdesign/s-l-a-m-and-optical-tracking-for-xr-cfabb7dd536f (accessed 1.9.22).
[24] Kim, H., Granstrom, K., Svensson, L., Kim, S. and Wymeersch, H., 2022: "PMBM-based SLAM Filters in 5G mmWave Vehicular Networks", IEEE Transactions on Vehicular Technology.
[25] 5GAA: "C-V2X Use Cases Volume II: Examples and Service Level Requirements", 5G Automotive Association White Paper, https://5gaa.org/wp-content/uploads/2020/10/5GAA_White-Paper_C-V2X-Use-Cases-Volume-II.pdf (accessed 02.09.22).
[26] A. Ebrahimzadeh, M. Maier and R. H. Glitho: "Trace-Driven Haptic Traffic Characterization for Tactile Internet Performance Evaluation", 2021 International Conference on Engineering and Emerging Technologies (ICEET), 2021, pp. 1-6.
[27] Lee, L.-H., Braud, T., Zhou, P., Wang, L., Xu, D., Lin, Z., Kumar, A., Bermejo, C., and Hui, P.: "All One Needs to Know about Metaverse: A Complete Survey on Technological Singularity, Virtual Ecosystem, and Research Agenda", 2021.
[28] Halbhuber, David, Henze, Niels and Schwind, Valentin, 2021: "Increasing Player Performance and Game Experience in High Latency Systems", Proceedings of the ACM on Human-Computer Interaction, 5, pp. 1-20, doi: 10.1145/3474710.
[29] ITU-T Recommendation Y.3090 (02/22): "Digital twin network - Requirements and architecture" (https://www.itu.int/rec/T-REC-Y.3090-202202-I).
[30] Skalidis, I., Muller, O. and Fournier, S., 2022: "CardioVerse: The Cardiovascular Medicine in the Era of Metaverse", Trends in Cardiovascular Medicine.
[31] Koo, H., 2021: "Training in lung cancer surgery through the metaverse, including extended reality, in the smart operating room of Seoul National University Bundang Hospital, Korea", Journal of Educational Evaluation for Health Professions, 18.
[32] Mozumder, M.A.I., Sheeraz, M.M., Athar, A., Aich, S. and Kim, H.C., 2022: "Overview: technology roadmap of the future trend of metaverse based on IoT, blockchain, AI technique, and medical domain metaverse activity", in 2022 24th International Conference on Advanced Communication Technology (ICACT), pp. 256-261, IEEE.
[33] Ning, H., Wang, H., Lin, Y., Wang, W., Dhelim, S., Farha, F., Ding, J. and Daneshmand, M., 2021: "A Survey on Metaverse: the State-of-the-art, Technologies, Applications, and Challenges", arXiv preprint arXiv:2111.09673.
[34] https://www.xrtoday.com/virtual-reality/ukrainian-doctors-perform-first-vr-surgery/
[35] https://www.cryptotimes.io/surgeon-performed-surgery-remotely-through-metaverse/
[36] Senk, S., Ulbricht, M., Tsokalo, I., Rischke, J., Li, S.C., Speidel, S., Nguyen, G.T., Seeling, P. and Fitzek, F.H., 2022: "Healing Hands: The Tactile Internet in Future Tele-Healthcare", Sensors, 22(4), p. 1404.
[37] Chen, D. and Zhang, R., 2022: "Exploring Research Trends of Emerging Technologies in Health Metaverse: A Bibliometric Analysis", available at SSRN 3998068.
[38] https://www.yankodesign.com/2022/05/15/the-metaverse-has-the-power-to-improve-healthcare-and-it-has-already-begun/
[39] 3GPP TS 23.501: "System architecture for the 5G System (5GS)".
[40] 3GPP TS 38.300: "NR; NR and NG-RAN Overall description; Stage-2".
[41] 3GPP TR 22.826: "Study on communication services for critical medical applications".
[42] Tataria, H., Shafi, M., Molisch, A.F., Dohler, M., Sjöland, H. and Tufvesson, F., 2021: "6G wireless systems: Vision, requirements, challenges, insights, and opportunities", Proceedings of the IEEE, 109(7), pp. 1166-1199.
[43] 3GPP TS 23.682: "Architecture enhancements to facilitate communications with packet data networks and applications".
[44] 3GPP TS 23.273: "5G System (5GS) Location Services (LCS); Stage 2".
[45] "MPEG-4 Face and Body Animation", https://visagetechnologies.com/mpeg-4-face-and-body-animation/ (accessed 2.11.22).
[46] 3GPP TS 22.105: "Services and service capabilities".
[47] Virtual humans: https://en.wikipedia.org/wiki/Virtual_humans
[48] Kwang Soon Kim et al.: "Ultrareliable and Low-Latency Communication Techniques for Tactile Internet Services", Proceedings of the IEEE, vol. 107, no. 2, February 2019.
[49] I.-J. Hirsh and C. E. J. Sherrick: "Perceived order in different sense modalities", Journal of Experimental Psychology, vol. 62, no. 5, pp. 423-432, 1961.
[50] VESA Compression Codecs: https://vesa.org/vesa-display-compression-codecs
[51] 3GPP TR 23.700-80: "Study on 5G System Support for AI/ML-based Services".
[52] EU data protection rules, European Commission: https://ec.europa.eu/info/law/law-topic/data-protection/eu-data-protection-rules_en
[53] California Consumer Privacy Act (CCPA), State of California, Department of Justice, Office of the Attorney General: https://oag.ca.gov/privacy/ccpa
[54] Y. Sun, Z. Chen, M. Tao, and H. Liu: "Communications, caching, and computing for mobile virtual reality: Modeling and trade-off", IEEE Trans. Commun., vol. 67, no. 11, pp. 7573-7586, Nov 2019.
[55] Y. Cai, J. Llorca, A. M. Tulino and A. F. Molisch: "Compute- and Data-Intensive Networks: The Key to the Metaverse", 2022 1st International Conference on 6G Networking (6GNet), 2022.
[56] 3GPP TS 22.226: "Global Text Telephony".
[57] ITU-T SG16: "ITU-T SG 16 Work on Accessibility - How ITU is Pioneering Telecom Accessibility for All", https://www.itu.int/en/ITU-T/studygroups/com16/accessibility/Pages/telecom.aspx (accessed 30.1.23).
[58] Ankit Ojha, Ayush Pandey, Shubham Maurya, Abhishek Thakur, Dr. Dayananda P: "Sign Language to Text and Speech Translation in Real Time Using Convolutional Neural Network", International Journal of Engineering Research & Technology (IJERT), NCAIT 2020 (Volume 8, Issue 15).
[59] R. Kazantsev and D. Vatolin: "Power Consumption of Video-Decoders on Various Android Devices", 2021 Picture Coding Symposium (PCS), Bristol, United Kingdom, 2021, pp. 1-5, doi: 10.1109/PCS50896.2021.9477481.
[60] glTF 2.0 specification: https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html (accessed 02/02/2023).
[61] B. Egger, W. A. P. Smith, A. Tewari, S. Wuhrer, M. Zollhoefer, T. Beeler, F. Bernard, T. Bolkart, A. Kortylewski, S. Romdhani, C. Theobalt, V. Blanz, and T. Vetter: "3D Morphable Face Models - Past, Present, and Future", ACM Trans. Graph. 39, 5, Article 157 (Jun 2020).
[62] M. Zollhöfer, J. Thies, P. Garrido, D. Bradley, T. Beeler, P. Pérez, M. Stamminger, M. Nießner, and C. Theobalt: "State of the Art on Monocular 3D Face Reconstruction, Tracking, and Applications", Computer Graphics Forum (2018).
[63] 3GPP TS 23.503: "Policy and charging control framework for the 5G System (5GS); Stage 2".
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in 3GPP TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in 3GPP TR 21.905 [1].

avatar: a digital representation specific to media that encodes facial (and possibly body) position, motions and expressions of a person or some software-generated entity.

Conference: an IP multimedia session with two or more participants. Each conference has a "conference focus". A conference can be uniquely identified by a user. Examples of a conference could be a Telepresence or a multimedia game, in which the conference focus is located in a game server.
NOTE 1: This definition was taken from TS 22.228 [2].

Conference Focus: the conference focus is an entity which has the ability to host conferences, including their creation, maintenance, and manipulation of the media. A conference focus implements the conference policy (e.g. rules for talk burst control, assigning priorities and participants' rights).
NOTE 2: This definition was taken from TS 22.228 [2].

digital asset: digitally stored information that is uniquely identifiable and can be used to realize value according to its licensing conditions and applicable regulations. Examples of digital assets include digital images (avatars), software licenses, gift certificates and files (e.g. music files) that have been purchased under a license that allows resale.

digital representation: the mobile metaverse media associated with the presentation of a particular virtual or physical object. The digital representation could present the current state of the object. One example of a digital representation is an avatar, see Annex A.

digital twin: a real-time representation of physical assets in a digital world.
NOTE 3: This definition was taken from ITU-T Recommendation Y.3090 [29].

gesture: a change in the pose that is considered significant, i.e. a discriminated interaction with a mobile metaverse service.

immersive: a characteristic of a service experience or AR/MR/VR media, seeming to surround the user, so that they feel completely involved.

localization: a known location in 3-dimensional space, including an orientation, e.g. defined as pitch, yaw and roll.

location related service experience: user interaction and information provided by a service to a user that is relevant to the physical location in which the user accesses the service.

location agnostic service experience: user interaction and information provided by a service to a user that has little or no relation to the physical location in which the user accesses the service. Rather, the service provides interaction and information concerning either a distant or a non-existent physical location.

mobile metaverse media: media communicated or enabled using the 5G system, including audio, video, XR (including haptic) media, and data from which media can be constructed (e.g. a 'point cloud' that could be used to generate XR media).

mobile metaverse: the user experience enabled by the 5G system of interactive and/or immersive XR media, including haptic media.

mobile metaverse server: an application server that supports one or more mobile metaverse services, which a user accesses by means of the 5G system.

mobile metaverse service: the service that provides a mobile metaverse experience to a user by means of the 5G system.

pose: the relative location, orientation and direction of the parts of a whole. The pose can refer to the user, specifically in terms of identifying the position of a user's body. The pose can also refer to an entity or object (whose parts can adopt different locations, orientations, etc.) that the user interacts with by means of mobile metaverse services.

predictive digital representation model: a model used for creating a digital representation (e.g. avatar) of a user or object in AR/MR/VR communication that helps to compensate for communication latency and/or conceal negative effects of high communication latency between the users, by predicting, for example, events, changes or outcomes that impact the digital representation, such as the current position or pose of a remote user.

service information: this information is out of scope of standardization but could contain, e.g. a URL, media data, media access information, etc. This information is used by an application to access a service.

spatial anchor: an association between a location in space (three dimensions) and service information that can be used to identify and access services, e.g. information to access AR media content.

spatial map: a collection of information that corresponds to space, including information gathered from sensors concerning characteristics of the forms in that space, especially appearance information.

spatial mapping service: a service offered by a mobile network operator that gathers sensor data in order to create and maintain a spatial map that can be used to offer customers a spatial localization service.

spatial localization service: a service offered by a mobile network operator that can provide customers with localization.

User Identifier: a piece of information used to identify one specific User Identity in one or more systems.
NOTE 4: This definition was taken from TS 22.101 [4].

User Identity: information representing a user in a specific context. A user can have several user identities, e.g. a User Identity in the context of his profession, or a private User Identity for some aspects of private life.
NOTE 5: This definition was taken from TS 22.101 [4].

User Identity Profile: a collection of information associated with the User Identities of a user.
NOTE 6: This definition was taken from TS 22.101 [4].

digital wallet: one type of digital asset container, also known as an e-wallet or mobile wallet; a software application that securely stores digital credentials, such as payment information, loyalty cards, tickets, and other digital assets. It allows users to make electronic transactions, such as payments and transfers, conveniently and securely using their digital credentials.
NOTE 7: Digital wallets typically employ encryption and authentication mechanisms to protect the stored information and ensure the security of transactions.
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in 3GPP TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in 3GPP TR 21.905 [1].

AI    Artificial Intelligence
CCTV  Closed-Circuit TeleVision
DoF   Degrees of Freedom
DVE   Distributed Virtual Environment
FACS  Facial Action Coding System
FOV   Field Of View
LiDAR Light Detection And Ranging
VRU   Vulnerable Road User
4 Overview
Mobile metaverse services are discussed in this technical report both in the abstract and in the concrete. Specific services mentioned in the TR include:
- Situational awareness for drivers, pedestrians and cyclists, to increase the safety and efficiency of transport (see 5.2).
- XR-enabled collaborative and concurrent engineering, to enable local and remote collaboration (see 5.3).
- Participation in and passive observation of virtual reality events, e.g. basketball (see 5.6).
- Presentation of AR content, e.g. a feature-length movie, on a virtual screen (see 5.7).
- Remote critical health care, including surgery and treatment (see 5.10).
The study also considers a number of use cases that feature new service enablers, including:
- Providing users with information and services that are of local relevance (see 5.1).
- Enhancements to IMS to support multiple users and multi-modal XR communication (see 5.3).
- Support for spatial anchors to link service information to specific locations (see 5.4).
- Support for spatial localization and mapping services, and enablers for them in the 5GS (see 5.5).
- Support for multi-service coordination for different input and output devices and diverse services (see 5.8).
- Support for synchronization of different data streams and predicted network conditions (especially latency), to enable immersive remote collaboration despite significant distance, and therefore communication delay, between participants (see 5.9).
5 Use Cases
5.1 Use Case on Localized Mobile Metaverse Service
5.1.1 Description
This use case considers the potential service opportunities that arise when advanced location information is available to trigger AR-based services.

A precursor to this use case is briefly considered: the i-mode service, introduced by NTT DOCOMO in 1999. The discussion of i-mode serves as an inspiration. This service was extremely successful, was one of the early mobile services beyond messaging and voice, and has many potential similarities with metaverse services. It in many ways preceded and foresaw the mobile internet services that would arise 10 years later. Users could access data on-line concerning weather, traffic, etc. While there were many revolutionary aspects to i-mode, three are particularly relevant for this use case:
- i-area: a location information service that enabled the user to identify locally relevant information concerning traffic, maps and retail store information for businesses in the user's proximity.
- i-channel: a distribution service of 'latest information' that the user could further investigate (through interaction) and whose display was user-configurable.
- a fully decentralized content and service creation framework allowing third parties to easily provide content, especially, and most relevantly, location-specific content. This made it possible for small businesses to provide information to potential customers in the proximity, such as opening hours, special offers, etc. It was even possible for those in the same location to meet and join 'virtually', e.g. to play a computer game with other passengers in the same train car or bus.

In this use case, analogously, services that are locally relevant can be accessed, and relevant information retrieved. The user has the ability to selectively control which of this information to display. The content that is obtained comes from decentralized sources; in this case, different individual merchants provide this information. The use case described in this clause does not rely on or recreate the i-mode service. Rather, some of the ideas in the i-mode service are carried forward given the new opportunities enabled by localized 3D interactive media.

Localized mobile metaverse services create exciting new opportunities to receive locally relevant content and interact with services. These capabilities, taken to a much greater level, will form the essence of the coming mobile metaverse user experience. What distinguishes this service most is that it provides the user with information services integrated into their ordinary experience. Consider an example of a commuter navigating an underground passage. Diverse relevant information is integrated into the user's field of view, as shown in Figure 5.1.1-1 below.

Figure 5.1.1-1: Localized Mobile Metaverse Services offering relevant information

Here, the AR annotation provides much more than an augmented map. The user is going to catch a train. (a) The path to the platform is shown without obstructing the user's perception of their proximity, where the contrast is good and no distractions appear. (b) The store on the right can provide content that may be relevant to the potential shopper, here the store's opening hours. Further along, (c) a restaurant provides a personalized message, reminding the user that they ate there in the past and ordered soup. These services are linked to the space that the user is in. See Figure 5.1.1-2, below.
Figure 5.1.1-2: Services offering relevant information are anchored in space

The three information augmentations that are displayed are the result of different sources of information. The path information (a) can be presented anywhere that it fits into the scene, whereas the information for (b) and (c) is anchored in space. The information depends on interaction with the user (or the user's preferences, application context, etc.); the content shown depends on the user's interest: (a) they are travelling, and the navigation app knows what information the user needs to see; (b) the user has sought Blue Lotion and it is available here, so the user's 'persistent search' shows local results; (c) at this time of day, the user often eats, so the restaurant's reminder is welcome. The 5G system enables this 'access to local services' in a number of ways.
5.1.2 Pre-conditions
Ulysses uses his AR-capable glasses while travelling. These are tethered to a UE that he carries. This UE receives 5G service from M, the mobile network operator he has a subscription with. Ulysses is interested in using his AR-capable glasses to receive relevant information, so he has selected which applications are 'relevant'. These applications are therefore configured with the operator M to be 'active'. The purpose of this configuration is described below.

Local services, that is, services that are localized, have anchors that can be relevant to applications. A pre-condition of this use case is that such services exist and have spatially defined access. This is considered in this use case as an 'anchor'. The local service provider associates a service with an anchor, potentially along with metadata concerning the service (e.g. 'is a restaurant'). This information is available to M, either because the local services have been registered with M directly, or because they are available in maps, registries, or other information sources that M has access to (e.g. information that is provided by the UE).
5.1.3 Service Flows
Ulysses transfers at the Osaka train station. He has some time before he catches his connecting train. As he is hungry, he activates a 'persistent search' on his mobile device to inform him of opportunities to eat as he traverses the station. His mobile device begins to collect information about his surroundings. The collected information can include information that is obtained by interacting with nearby devices (e.g. sensors and other mobile devices). He also has a shopping list (a set of items of interest) from retail stores. He makes use of a navigation facility so that he will neither lose his way nor lose track of time. This has the following consequences:
a) As a result of the applications activated by the user, and the user's preferences, the UE requests the 'localized mobile metaverse' service enabler offered by M, providing a list of 'persistent search' information and the information that was collected from the user's surroundings.
b) M, receiving the persistent search and the information that was collected from the user's surroundings, engages localized service activation. The location of the UE is compared against the search criteria, the information that was collected from the user's surroundings, and the information available to M on spatially defined access points.
c) M identifies a match, essentially a 'JOIN' of user preferences / application persistent search criteria "restaurants" AND location (in the user's field of view) AND existing local spatially defined access (a minimal sketch of this matching follows after this list).
d) This match is provided to the UE, for further processing by the application.
Example 1: The application [service] associated with 'restaurants' has stored information that the user has eaten there and indicated it was 'good'. Thus the annotation 'Soup you liked last time' is displayed.
Example 2: The application [service] associated with 'shopping' has stored a shopping list. When a local 'shopping service access point' has been identified, the shopping application queries the service provider directly. If there is success, the result is displayed. Here: "Blue Lotion you want ¥2000."
e) The UE is able to interact with the application to obtain information associated with the service.
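The matching in step c) can be read as a join over three inputs: the application's search criteria, the UE's location, and the anchors known to M. The following is a minimal sketch of that matching, under the assumption of simple hypothetical types for anchors and searches; none of these names or fields are defined by this TR.

```python
from dataclasses import dataclass

@dataclass
class ServiceAnchor:
    """A spatially defined service access point known to operator M (illustrative)."""
    x: float
    y: float
    z: float               # anchor position in a local coordinate frame (metres)
    category: str          # metadata, e.g. 'restaurant'
    service_info: str      # e.g. a URL the application uses to query the service

@dataclass
class PersistentSearch:
    """Search criteria supplied by an application on the UE (illustrative)."""
    category: str          # e.g. 'restaurant'
    max_range_m: float     # how far away results are still relevant

def match_local_services(ue_pos, searches, anchors):
    """The 'JOIN' of step c): search criteria AND UE location AND known anchors."""
    results = []
    for s in searches:
        for a in anchors:
            dist = ((a.x - ue_pos[0]) ** 2 + (a.y - ue_pos[1]) ** 2
                    + (a.z - ue_pos[2]) ** 2) ** 0.5
            if a.category == s.category and dist <= s.max_range_m:
                results.append((s, a, dist))  # returned to the UE in step d)
    return results

# Example: Ulysses' persistent search for restaurants within 50 m of his position.
hits = match_local_services(
    (0.0, 0.0, 0.0),
    [PersistentSearch(category="restaurant", max_range_m=50.0)],
    [ServiceAnchor(12.0, 3.0, 0.0, "restaurant", "https://example.com/soup-place")],
)
```

In a real deployment the matching would also take the user's field of view and the application context into account, as described in steps c) and d); the sketch only shows the category-and-proximity core of the join.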
5.1.4 Post-conditions
Spatial information has successfully been employed to allow a user to identify services. Please see 5.5 for further information on how spatial information is obtained. The user's location has been applied. The information outputs of different applications are integrated into the media that the user sees through the AR display device. The information associated with the services is displayed in the proper location in the user's field of view. Ulysses may choose to eat soup or buy Blue Lotion. M may charge Ulysses for this service, e.g. for the use of a persistent search and for each successful result provided.
5.1.5 Existing feature partially or fully covering use case functionality
Location based services exist to identify the position of the UE. The 5GS supports means to expose the UE's location to a 3rd party service provider.

Location Services for CIoT, described in TS 23.682, clause 4.5.19 [43], allow an LCS client to obtain the location of a CIoT UE, either periodically or in a triggered manner, as well as the last known location of a UE that is unreachable for an extended time.

Location services, as specified in TS 23.273, clause 5.5 [44], support exposure to authorized third parties through a CAPIF API between the NEF and the AF. The information that can be exposed includes the target UE identity and parameters for location events, e.g. the trigger (the UE enters, leaves or is within a target area), the time between reports if multiple reports are requested, and of course the precise location in three dimensions, up to the maximum horizontal and vertical accuracy supported.

Effectively, there are also numerous 'over the top' mechanisms that essentially communicate GPS or other location information that can be accessed by the application from the mobile OS and terminal equipment. The client application on the UE supplies this location information to an AS 'opaquely' to the 5G system (that is, in application traffic that is not visible to the 5G system). In this sense, the 5G system supports location services in that it provides a UE with mobile broadband service.

Using the location information obtained by any of the three mechanisms described above, a service provider can identify appropriate content for, or interaction with, the user, by means that are out of scope of 3GPP. This could include the services introduced in 5.1.1 in the example of i-mode. The application service can be configured or used in such a way that the user's interest is known, and location-relevant information can be 'pushed' to the client application.

There is a significant gap between this support and the functionality described in this use case. The Localized Mobile Metaverse Service, unlike e.g. i-mode and similar location-service-enabled applications, does not assume the use of a specific mediating application that organizes and delivers information. Rather, this use case features and motivates a service enabler that allows the user to discover different locally relevant services and content, so that any available application service can be accessed by the UE, e.g. through a web browser or other interactive-media-capable general purpose application.
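To make the exposure path concrete: an AF can subscribe to UE location reports through the NEF using the T8 MonitoringEvent API of TS 29.122. The following is a minimal sketch of such a subscription; the NEF address, AF identifier, notification URL and external UE identifier are all hypothetical, and the exact set of fields accepted depends on the operator's deployment and release.

```python
import requests

# Hypothetical NEF address and AF (SCS/AS) identifier; both are deployment-specific.
NEF = "https://nef.operator.example"
SCS_AS_ID = "metaverse-af-01"

subscription = {
    # Where the NEF should POST location reports for this subscription.
    "notificationDestination": "https://af.example/notifications/loc",
    "monitoringType": "LOCATION_REPORTING",
    "externalId": "ulysses@operator.example",  # target UE, as an external identifier
    "maximumNumberOfReports": 10,
}

resp = requests.post(
    f"{NEF}/3gpp-monitoring-event/v1/{SCS_AS_ID}/subscriptions",
    json=subscription,
    timeout=5,
)
resp.raise_for_status()
# The NEF returns the created subscription resource, typically in a Location header.
print("subscription resource:", resp.headers.get("Location"))
```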
5.1.6 Potential New Requirements
[PR 5.1.6-1] The 5G system shall enable third parties to make known the availability of application services (i.e. provided by Application Servers) associated with a precise location.
[PR 5.1.6-2] The 5G system shall provide suitable exposure mechanisms for application services (i.e. provided by Application Servers) associated with a precise location available in the user's proximity (e.g. within line of sight), such that the mobile metaverse services can conform to specific service constraints.
NOTE: The term 'service constraints' implies that certain targets of service discovery are supported, e.g. to find 'restaurants'.
[PR 5.1.6-3] The 5G system shall provide suitable means to discover mobile metaverse services (i.e. provided by mobile metaverse servers) associated with a precise location available in the user's proximity.
[PR 5.1.6-4] The 5G system shall enable the rendering of diverse media, from one or more mobile metaverse services associated with a single location, to be combined to form a single location related service experience.
5.2 Use Case on Mobile Metaverse for 5G-enabled Traffic Flow Simulation and Situational Awareness
5.2.1 Description
Smart transport is a very important area for the 5G system as well as an important mobile metaverse service. To reduce traffic jams and minimize traffic accidents, 5G, including cellular-based V2X technologies, becomes more and more essential. The 5G system can be utilized to support real-time information and data delivery for traffic participants, including pedestrians, bicycle riders, and vehicles with or without an autonomous driving mode.

As shown in Figure 5.2.1-1, the physical objects, including road infrastructure and vehicles such as cars and trucks in each lane, will have a corresponding digital twin in the virtual world, and the virtual and physical objects form the mobile metaverse. In this use case, there are virtual objects which represent the physical objects, including vehicles, roads and also pedestrians. This is the basis for enabling smart transport applications like traffic flow simulation and situational awareness.

Figure 5.2.1-1: Example of Smart Transport Metaverse

With the support of the 5GS, real-time information and data about the physical objects can be delivered, and the virtual objects of the road infrastructure and traffic participants, including vulnerable road users, can form a smart transport mobile metaverse service, as shown in Figure 5.2.1-2. Real-time processing and computing can then be conducted to support traffic simulation, situational awareness, and real-time path guidance. Real-time safety or security alerts can be generated for vehicles as well as for drivers and passengers.

Figure 5.2.1-2: Scenario of 5G-enabled Traffic Flow Simulation and Situational Awareness

In order to support the traffic flow simulation and situational awareness service, the 5G network needs to provide low-latency, high-data-rate and high-reliability transmission, and handover procedures that minimize service disruptions; in addition, the 5G network may also need to be further enhanced to meet the service requirements for 5G-enabled traffic flow simulation and situational awareness. Meanwhile, in addition to the physical objects, which may use UEs for telecommunication services, their corresponding virtual objects are also capable of interacting with each other and with physical objects via the 5GS.

This use case employs the terminology defined in Table 5.2.1-1.

Table 5.2.1-1: Traffic Flow Simulation and Situational Awareness Terminology

Situational Awareness: the ability to know and understand what is going on in the surroundings, e.g. the perception of environmental elements and events.
5.2.2 Pre-conditions
1. Traffic participants may or may not be equipped with 5G-enabled terminal equipment. The 5G-enabled terminal equipment can send and receive data via the 5G network.
2. Computing and storage resources are provided for the mobile metaverse servers, deployed locally or over the data network (in a centralized cloud), to allow real-time processing of the huge amount of data produced by humans, vehicles, cameras, radar, etc.
3. Wired or wireless communication resources are configured among the road infrastructure and the mobile metaverse server so that the server has real-time information on the traffic participants perceived by these sensors. The road infrastructure includes cameras, radar/lidar and also other devices, e.g. for traffic control and guidance. The infrastructure equipment may have a wired connection, e.g. fiber, ethernet or any other wired connection, if available. The cellular wireless network can be used to provide a more flexible way to obtain communication services, especially when a wired connection is not available.
5.2.3 Service Flows
1. Wired or wireless communication paths are configured among the road infrastructure and the mobile metaverse server so that the server has real-time information on the traffic participants perceived by these sensors.
2. Sensors deployed at the roadside are initialized and enter normal working mode, which means the sensors can start to capture the traffic participants.
3. Data connections between the vehicle driver's UE and the server are established, with the 5G UE module being registered with the server. Vehicles without a driver are equipped with a 5G UE module.
4. The traffic participants, including pedestrians, bicycle riders, and vehicles, send real-time information towards the server by means of 5G UEs. The real-time status information includes position, speed, direction, brake status, etc. Other related status information can come from, but is not limited to, sensors for XR (haptic, audio, video, etc.) or smart transport (traffic light, camera, radar/LiDAR, etc.). These traffic participants are physical objects in the physical world which send their property and status information to the corresponding digital twin objects, that is, virtual objects. This real-time information may include structured and unstructured data. Structured data normally means data which has been processed and thus formatted in a certain way; it can easily be stored in a database, and when transmitted via the 5G wireless network, normally less transmission resource (e.g. a lower data rate) is needed. Unstructured data is not formatted in a certain, pre-defined way, is not easy to store in a database, and needs more transmission resource when transmitted over the 5G network. (A sketch of a structured status message follows after this list.)
5. Within the mobile metaverse service, for 5G-enabled traffic flow simulation and situational awareness, real-time information on the physical objects, including the road infrastructure and the traffic participants, as well as other information from traffic light signals, cameras, radar, etc., is synchronized to the virtual objects, and real-time simulation is conducted. The virtual objects can also interact with each other within the virtual world and interact with physical objects via the 5G system. Interaction among virtual and physical objects is essential and beneficial to support traffic simulation and situational awareness, as the virtual objects can also have their own intelligence, which allows them to request telemetry from physical objects or other virtual objects. Thus, not only the physical but also the virtual objects need to interact via the 5G system, and they need to be identified. Interaction means that the physical objects deliver real-time status data to the digital twin, i.e. the virtual objects, in one direction, and the virtual objects can also send alerts or other information to the physical objects. Such interaction needs to be realized via the 5GS because the physical objects, like vehicles or humans, need to use a 5G UE to access the 5G system and use 5G resources to establish a connection with 5G system QoS support. Identification of these physical and virtual objects by the 5GS, and thus associating them, is needed to support such interaction; meanwhile, from the 5G system perspective, this makes it easier for the 5G system, based on operator policy with the 3rd party, to provide bearer services with a corresponding QoS guarantee for both physical and virtual objects, as they can be seen as UEs by the 5GS. It is noted that physical objects may be static or dynamic. Some static objects deployed along the roadside, like light poles, may not move, but their properties can be synchronized with their virtual objects if such properties may impact traffic simulation, situational awareness or the visualization processing of the physical world. These physical objects may or may not have dedicated UEs. Pedestrians and vehicles are equipped with UEs, but sensors like cameras, LiDAR and radar may share a common communication module to establish a connection with mobile metaverse services. The communication module can be a wired node or a wireless UE.
6. Traffic flow simulation is conducted, which can predict whether there will be a traffic jam and which path is optimal for a certain vehicle. The simulation can generate traffic assistance or guidance in a real-time manner towards the traffic participants.
7. The mobile metaverse server sends the traffic assistance or guidance information to the UE which serves the pedestrian, bicycle rider, vehicle driver or autonomous vehicle.
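As referenced in step 4, the structured real-time status information lends itself to a compact, fixed-schema message. The following is a minimal sketch of what such a message could look like; the field names and encoding are illustrative assumptions, not defined by any 3GPP specification.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TrafficParticipantStatus:
    """Structured real-time status sent by a UE to its digital twin (step 4)."""
    participant_id: str   # identifies the physical object / its UE
    timestamp_ms: int     # time of measurement
    position: tuple       # (latitude, longitude, altitude)
    speed_mps: float      # metres per second
    heading_deg: float    # direction of travel
    brake_active: bool    # brake status, relevant for vehicles

status = TrafficParticipantStatus(
    participant_id="vehicle-0042",
    timestamp_ms=int(time.time() * 1000),
    position=(34.7025, 135.4959, 12.0),
    speed_mps=8.3,
    heading_deg=271.0,
    brake_active=False,
)

# Structured data serializes compactly, consistent with the [< 50 kbit/s]
# per-vehicle budget given for structured telemetry in clause 5.2.6.
payload = json.dumps(asdict(status)).encode()
print(len(payload), "bytes per update")
```

By contrast, the unstructured camera and LiDAR streams in the same step are raw sensor output at Mbit/s rates, which is why the KPI table in 5.2.6 budgets them separately.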
5.2.4 Post-conditions
1. The mobile metaverse server conducts big data analysis to further refine the accuracy of the traffic flow simulation and situational awareness.
2. Both the UEs serving (e.g. providing sensing data for) the physical objects and the digital twin objects can be identified by the 5G system.
5.2.5 Existing features partly or fully covering the use case functionality
The QoS framework of 5G system supports low latency, high reliability or high data rate transmission of data traffic.
5.2.6 Potential New Requirements needed to support the use case
[PR 5.2.6-1] The 5G system shall be able to support the following KPIs for the transmission of traffic between a large number of UEs and an application server (e.g. a mobile metaverse server).

Use case: 5G-enabled Traffic Flow Simulation and Situational Awareness

Characteristic parameters (KPI):
- Max allowed end-to-end latency: [5-20] ms (NOTE 1)
- Service bit rate (user-experienced data rate): [10-100 Mbit/s] [25] (NOTE 6)
- Reliability: > 99.9%
- Area traffic capacity: [~39.6 Tbit/s/km2] (NOTE 5)

Influence quantities:
- Message data volume: typical data: camera: 10 Mbit/s per sensor (unstructured); LiDAR: 90 Mbit/s per sensor (unstructured); radar: 10 Mbit/s per sensor (unstructured); real-time status information, including telemetry data: [< 50 kbit/s] per sensor/vehicle/VRU (structured) (NOTE 2)
- Transfer interval: 20-100 ms (NOTE 3)
- Service area: city- or country-wide (NOTE 4)

NOTE 1: The mobile metaverse server receives the data from various sensors, performs data processing and rendering, and provides feedback to the vehicles and users. The end-to-end latency refers to the transmission delay between a UE and the mobile metaverse server. The exact value is FFS.
NOTE 2: To support at least 80 vehicles and 1600 users present at the same location (e.g. in an area of 40 m * 250 m) actively enjoying immersive metaverse services for traffic simulation and traffic awareness, the area traffic capacity is calculated considering 2 cameras, 2 radars and 2 LiDARs on the road side, 1600 users' smart phones, and 80 vehicles with 7 cameras, 4 radars and 2 LiDARs each. These application layer message data need to be segmented for network transport and thus do not correspond to packet sizes. The real-time status information, including telemetry data, may be structured.
NOTE 3: The frequency considers different sensor types, such as radar/LiDAR (10 Hz) and camera (10-50 Hz).
NOTE 4: The service area for traffic flow simulation and situational awareness depends on the actual deployment; for example, it can be deployed for a city, a district within a city, or even country-wide. In some cases a local approach (e.g. application servers hosted at the network edge) is preferred in order to satisfy the requirements of low latency and high reliability.
NOTE 5: The calculation in this table is done per one 5G network; in case N 5G networks are involved in such a use case in the same area, this value can be divided by N. The exact value is FFS.
NOTE 6: User-experienced data rate refers to the data rate needed for the vehicle or human; the value is observed from industrial practice and the exact value is FFS.
5.3 Use Case on collaborative and concurrent engineering in product design using metaverse services
5.3.1 Description
Since the industrial age, engineering design has become an extremely demanding activity. Collaborative and concurrent engineering emerged as a concept and methodology at the end of the last century and was defined as a systematic approach to the integrated co-design of products and their related processes. The diversity and complexity of today's products require the collaboration of engineers from different geographic locations to share ideas and solutions with customers and to evaluate product development. VR and AR technologies have found their way into critical applications in industrial sectors such as aerospace engineering, automotive engineering and medical engineering, and also in the fields of education and entertainment. The range of technologies includes Cave Automatic Virtual Environment (better known by the recursive acronym CAVE) environments, reality theatres, power walls, holographic workbenches, individual immersive systems, head mounted displays, tactile sensing interfaces, haptic feedback devices, multi-sensational devices, speech interfaces, and mixed reality systems [6].

Figure 5.3.1-1: XR enabled collaborative and concurrent engineering in product design (Source: https://vrtech.wiki/)

One of the key challenges is how to enable a distributed virtual environment (DVE) allowing multiple users from different geographical locations (some of whom are present at the same location) to interact over a network. A DVE is defined as a multi-user virtual reality that actively supports communication, collaboration, and coordination [7]: a 3D place-like environment in which participants are provided with graphical embodiments called avatars that convey their identity, presence, location, and activities to others [8]. A DVE is the simultaneous existence of multiple users in the same virtual space represented as avatars, their communication, the shared exploration of 3D visualizations, and the collaborative construction of new content. This avatar representation is essential so that every user knows about the actual perceptions of the other users. The users can communicate with each other. They can interact with other users and with the virtual environment. A DVE in the terms of this study is a location agnostic service experience.

To support DVEs for collaborative and concurrent engineering, the 5G system needs to fulfil certain KPIs, such as latency, data rate and reliability. Moreover, the 5G system (with mobile metaverse services) is expected to support fundamental features including:
- mobile metaverse media support among multiple users;
- User Identity management;
- data security.
5.3.2 Pre-conditions
Novitas, an innovative start-up company, has set up a distributed virtual environment (with the corresponding 5G communication subscriptions provided by GreenMobile) for collaborative and concurrent engineering in their product design, with engineers participating locally and remotely. They have been granted a contract to work together with several partner companies to design and produce a new model of aeroplane engine. In the current phase, Novitas needs to collaborate closely with Nyhet, a partner company, to design the key parts of the engine. As part of the agreement, they use the distributed virtual environment to carry out some of the design work that requires interaction among engineers. Some engineers use mobile phones or computers (as well as the necessary XR devices), with the corresponding 5G communication subscriptions, to attend such engineering meetings. To protect sensitive business information, strict security requirements for user identity management and data security are crucial. The service flows below illustrate how engineers interact with each other using services provided by the 5G system.

Figure 5.3.2-1: Illustration of Collaborative Workspace (Source: ESI-Icido GmbH)
5.3.3 Service Flows
1. Archimedes, Isambard, Leonardo and George have scheduled an XR design meeting; Archimedes, Isambard and Leonardo attend from offices, while George attends from the factory. Each participant needs to be authenticated before being admitted to the meeting. Due to the strict security requirements, participants typically need to be authenticated using bio information (e.g. fingerprint, facial image) at the terminal side. The result (not the original bio information) of the terminal-side authentication can be forwarded to the corresponding application server of the enterprise. The final result of the network-level authentication (which can also include context information, e.g. location information of the participants) is also forwarded to the corresponding application server of the enterprise. Such information helps the enterprise decide what information (e.g. which levels of confidentiality) can be disclosed to which participants during the meeting.
2. Having completed the authentication of the participants, the multimedia communication session or sessions are set up among the multiple users as well as the associated devices in the mixed reality systems (e.g. head mounted displays, tactile sensing interfaces, haptic feedback devices, multi-sensational devices). This can be done by means of the IMS (including an IMS CN with Data Channel capability) or via OTT applications.
3. When a session starts, multiple streams are established over the 5G network between the corresponding devices, carrying multiple modalities of data. Table 5.3.3-1 depicts the typical QoS requirements that have to be fulfilled in order for the users' QoE to be satisfactory.

Table 5.3.3-1: Typical QoS requirements for multi-modal streams [9] [10] [11] [12] [13]

                      Haptics     Video        Audio
Jitter (ms)           ≤ 2         ≤ 30         ≤ 30
Delay (ms)            ≤ 50        ≤ 400        ≤ 150
Packet loss (%)       ≤ 10        ≤ 1          ≤ 1
Update rate (Hz)      ≥ 1000      ≥ 30         ≥ 50
Packet size (bytes)   64-128      ≤ MTU        160-320
Throughput (kbit/s)   512-1024    2500-40000   64-128

4. The haptic information, video and voice are generated at one party and distributed to all other parties continuously. Note that, based on the company security policy, some information is shielded before being distributed to certain participants. For example, George joins the meeting from the factory, which is considered less secure according to the company policy. Consequently, some sensitive information is filtered before being distributed to George. Information filtering is typically done at the conference centre (i.e. the conference focus).
5. George travels back to the office while staying connected to the conference. The connection quality of George's devices deteriorates sharply, and the 5G network triggers codec re-negotiation to maintain a reasonable quality of experience for all participants.
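The budgets in Table 5.3.3-1 can be read as per-modality acceptance criteria for each stream. The following is a minimal sketch of such a check, with the budget values copied from the table; the checking logic itself is an illustration, not a mechanism defined by this TR.

```python
# Per-modality budgets from Table 5.3.3-1 (jitter/delay in ms, loss in %, rate in Hz).
QOS_BUDGETS = {
    "haptics": {"jitter_ms": 2,  "delay_ms": 50,  "loss_pct": 10, "update_hz": 1000},
    "video":   {"jitter_ms": 30, "delay_ms": 400, "loss_pct": 1,  "update_hz": 30},
    "audio":   {"jitter_ms": 30, "delay_ms": 150, "loss_pct": 1,  "update_hz": 50},
}

def stream_within_budget(modality: str, jitter_ms: float, delay_ms: float,
                         loss_pct: float, update_hz: float) -> bool:
    """True if a stream's measured metrics satisfy the Table 5.3.3-1 budgets."""
    b = QOS_BUDGETS[modality]
    return (jitter_ms <= b["jitter_ms"] and delay_ms <= b["delay_ms"]
            and loss_pct <= b["loss_pct"] and update_hz >= b["update_hz"])

# Example: a haptic stream that meets its jitter/delay budgets but updates too slowly.
print(stream_within_budget("haptics", jitter_ms=1.5, delay_ms=40,
                           loss_pct=0.5, update_hz=800))   # -> False
```

A check like this is one way the network or application could detect the kind of quality deterioration that triggers the codec re-negotiation described in step 5.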
5.3.4 Post-conditions
The 5G system enables efficient communication, with enhanced security and identity management, in support of DVEs for the collaborative and concurrent engineering.
5.3.5 Existing features partly or fully covering the use case functionality
The service requirements on the support of multimedia communication among multiple users have been captured in TS 22.228 [2], with the following key definitions:

Conference: An IP multimedia session with two or more participants. Each conference has a "conference focus". A conference can be uniquely identified by a user. Examples of a conference could be a Telepresence or a multimedia game, in which the conference focus is located in a game server.

Telepresence: A conference with interactive audio-visual communications experience between remote locations, where the users enjoy a strong sense of realism and presence between all participants by optimizing a variety of attributes such as audio and video quality, eye contact, body language, spatial audio, coordinated environments and natural image size.

Telepresence System: A set of functions, devices and network elements which are able to capture, deliver, manage and render multiple high quality interactive audio and video signals in a Telepresence conference. An appropriate number of devices (e.g. cameras, screens, loudspeakers, microphones, codecs) and environmental characteristics are used to establish Telepresence.

Conference Focus: The conference focus is an entity which has abilities to host conferences including their creation, maintenance, and manipulation of the media. A conference focus implements the conference policy (e.g. rules for talk burst control, assigning priorities and participants' rights).

Support of Multi-device and Multi-Identity in the IMS MMTEL service is captured in TS 22.173, clause 4.6 [3]: the support of multiple devices is inherent in IMS. In addition, a service provider may allow a user to use any public user identities for its outgoing and incoming calls. The added identities can, but do not have to, belong to the served user. Identities may be part of different subscriptions and different operators.

In addition, TS 22.101 [4] has specified in clause 26a a set of service requirements on User Identity: identifying distinguished user identities of the user (provided by some external party or by the operator) in the operator network enables an operator to provide an enhanced user experience and optimized performance, as well as to offer services to devices that are not part of a 3GPP network. The user to be identified could be an individual human user using a UE with a certain subscription, an application running on or connecting via a UE, or a device ("thing") behind a gateway UE. Network settings can be adapted and services offered to users according to their needs, independent of the subscription that is used to establish the connection. By acting as an identity provider, the operator can take additional information from the network into account to provide a higher level of security for the authentication of a user. The 3GPP System shall support authenticating a User Identity to a service with a User Identifier.

The functional requirements and performance KPIs in support of XR applications are mainly captured in TS 22.261:
- clause 7.6.1: AR/VR;
- clause 6.43: Tactile and multi-modal communication service;
- clause 7.11: KPIs for tactile and multi-modal communication service.
Clause 8 of TS 22.261 specifies the security related requirements covering aspects such as authentication and authorization, identity management, and data security and privacy. Additional consideration needs to be given to allowing multiple users from different geographical locations to interact using XR techniques.
The 5G system is able to collect charging information per UE or per application for the use of IMS based conferencing services.
5.3.6 Potential New Requirements needed to support the use case
5.3.6.1 KPIs for the collaborative and concurrent engineering in product design
[PR 5.3.6.1-1] The 5G system shall provide the appropriate connectivity KPIs for the use case of collaborative and concurrent engineering in product design, see Table 5.3.6.1-1.

Table 5.3.6.1-1: Potential key performance requirements for collaborative and concurrent engineering in product design

Use case: Collaborative and concurrent engineering
- Max allowed end-to-end latency: [≤10] ms; typical haptic data: [5] ms (note 1)
- Service bit rate (user-experienced data rate): [1-100] Mbit/s [14]
- Reliability: [> 99.9%] [14]; typically for haptics: [> 99.9%] without compression, [> 99.999%] with compression (note 4) [26]
- Area traffic capacity: [3.804] Tbit/s/km2 (note 2)
- Message size (byte): typical haptic data: 1 DoF: 2-8; 3 DoFs: 6-24; 6 DoFs: 12-48; video: 1500; audio: 100 [14]
- UE speed: stationary or pedestrian
- Service area: typically < 100 km2 (note 3)

NOTE 1: A network based conference focus is assumed, which receives data from all the participants, performs rendering (image synthesis), and then distributes the results to all participants. The latency refers to the transmission delay between a UE and the application server.
NOTE 2: To support at least 15 users present at the same location (e.g. in an area of 20 m x 20 m) actively enjoying an immersive metaverse service concurrently, the area traffic capacity is calculated assuming each user consumes non-haptic XR media (e.g. one video stream of up to 40000 kbit/s) and concurrently 60 haptic sensors (each haptic sensor generates data of up to 1024 kbit/s).
NOTE 3: In practice, the service area depends on the actual deployment. In some cases a local approach (e.g. the application servers are hosted at the network edge) is preferred in order to satisfy the requirements of low latency and high reliability.
NOTE 4: The arrival intervals of compressed haptic data usually follow statistical distributions, such as the generalized Pareto distribution and the exponential distribution [26].

5.3.6.2 Service requirements for collaborative and concurrent engineering in product design

[PR 5.3.6.2-1] The 5G system shall enhance the interaction between the IMS CN and the 5G CN to allow the 5G CN to provide the IMS CN with real-time feedback in support of XR communication among multiple users simultaneously.

NOTE 1: The feedback can include information such as network conditions and achieved QoS. Such information can be used by the IMS CN, for example, to trigger codec negotiation.

[PR 5.3.6.2-2] Subject to regulatory requirements, operator policies and user consent, the 5G system shall be able to support mechanisms to expose to a trusted third party (e.g. the conference focus) the result of authenticating a user identity to a UE.

NOTE 2: Authenticating a user identity to a UE at the terminal side is out of 3GPP scope.

[PR 5.3.6.2-3] The 5G system shall provide a means to synchronize multiple data flows from multiple UEs associated with one user.
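The area traffic capacity in Table 5.3.6.1-1 follows directly from the assumptions stated in NOTE 2. As a cross-check, a minimal arithmetic sketch (the input values come from the note; the layout is illustrative):

```python
# Cross-check of the area traffic capacity in Table 5.3.6.1-1, NOTE 2:
# 15 users in a 20 m x 20 m area, each with one video stream of up to
# 40 000 kbit/s plus 60 haptic sensors of up to 1 024 kbit/s each.

USERS = 15
VIDEO_KBPS = 40_000          # per-user non-haptic XR media, upper bound
HAPTIC_SENSORS = 60          # concurrent haptic sensors per user
HAPTIC_KBPS = 1_024          # per-sensor data rate, upper bound
AREA_KM2 = (20 * 20) / 1e6   # 20 m x 20 m expressed in km^2

per_user_kbps = VIDEO_KBPS + HAPTIC_SENSORS * HAPTIC_KBPS   # 101 440 kbit/s
total_bps = USERS * per_user_kbps * 1_000                   # ~1.52 Gbit/s
capacity_tbps_km2 = total_bps / AREA_KM2 / 1e12

print(f"per user: {per_user_kbps / 1e3:.2f} Mbit/s")
print(f"area traffic capacity: {capacity_tbps_km2:.3f} Tbit/s/km^2")  # 3.804
```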
5.4 Use Case on Spatial Anchor Enabler
5.4.1 Description
In use case 5.1, "Localized Mobile Metaverse Service Use Case", the term spatial anchor was introduced to describe an association between space and service information. This use case elaborates the concept of the spatial anchor to enable diverse mobile metaverse services, including those described in use case 5.1. This use case focuses on the creation, management and use of spatial anchors.

The overall purpose of this enabler is to make it possible to create AR content and share it with others. The 'spatial anchor producer' determines what to share, its location, any constraints (e.g. who to share the spatial anchor with) and additional information, most importantly the 'resource' associated with the anchor (e.g. AR media, the mobile metaverse service to access, etc.). The 'spatial anchor consumer' is able to recognize anchors associated with locations and use the spatial anchor to obtain the associated information.

Functionally, a spatial anchor has the following model:
- Spatial Anchor: information that can be provided by a content producer to a content consumer. How this is done is out of scope of this use case.
- Precise spatial location information: where the produced content is located, including the content's extent, orientation, etc.
- Service information: this information is out of scope of standardization but could contain, e.g. a URL, media data, media access information, etc. This information is used by an application to access a service.

The spatial anchor enabler will benefit retail environments. Here, a cheese seller has extensive information about her wares that she will share with customers.

Figure 5.4.1-1: Spatial Anchors - created by a user to share with other users

In this use case there are several relevant actors:
- Leka, the cheese seller, is the content producer. She creates content and anchors it to her wares. (She is very adept at putting the cheese back in the same places, and at moving the anchors when this is not possible.)
- I am the customer.
- Warez is the mobile metaverse service provider that stores and updates Leka's content, generates its presentation (that is, the AR content that is presented to customers), supports any interactive features, etc.
- FineNet is the network operator that enables anchored services for any content producer, customer and mobile metaverse service provider.
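To make the three-part anchor model above concrete, the sketch below renders it as a plain data structure. This is an illustrative reading of the use case only: the type and field names (SpatialAnchor, Pose, service_info, etc.) are assumptions, not a 3GPP-defined format.

```python
# Illustrative sketch of the spatial anchor model described above.
# All type and field names are hypothetical; 3GPP does not define this format.
from dataclasses import dataclass

@dataclass
class Pose:
    """Precise spatial location: position in 3D space plus orientation."""
    x: float
    y: float
    z: float        # position, e.g. in metres in a local frame
    yaw: float
    pitch: float
    roll: float     # orientation, e.g. in degrees

@dataclass
class SpatialAnchor:
    anchor_id: str            # identifies the anchor to producer and consumer
    pose: Pose                # where the content is located
    extent_m: float           # rough spatial extent of the content
    service_info: str         # opaque to 3GPP, e.g. a URL to AR media
    access_restricted: bool   # producer-defined access condition

# Example: Leka anchors the Halloumi information panel to a counter location.
halloumi = SpatialAnchor(
    anchor_id="halloumi-01",
    pose=Pose(x=2.4, y=0.8, z=1.1, yaw=90.0, pitch=0.0, roll=0.0),
    extent_m=0.3,
    service_info="https://warez.example/ar/halloumi",
    access_restricted=False,
)
print(halloumi.service_info)
```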
5.4.2 Pre-conditions
Leka makes use of a UE that has a subscription with FineNet. She has a sensor that can be used in combination with the UE to indicate precise locations. In Figure 5.4.1-1, the sensor can identify the location of the tip of the cheese fork she holds. Leka obtains services from Warez, where she stores information related to her inventory. She has also arranged in advance what information to display to customers and in what format. I wear glasses that provide an AR experience and communicate by means of my UE. I also have a mobile subscription with FineNet.
5.4.3 Service Flows
Leka places wares on display. She indicates the location of wares that are associated with inventory information, so that the association of location (listed below as 'pose', including position in three dimensional space, orientation and possibly more spatial data) and service information is captured by the 5GS. This inventory information is also captured by the mobile network operator as spatial anchors. This is shown on the top half of Figure 5.4.3-1.

Figure 5.4.3-1: Spatial Anchor Enabler Service Flow

NOTE: The text in parentheses in the figure above gives examples.

1. Creating, modifying, removing spatial anchors

To add new wares to the display, Leka captures the location of the item with her cheese fork (which includes sensors). She associates a new spatial anchor with this location and product information (e.g. by scanning the bar code on the cheese). The Warez inventory management system also generates AR content on demand (mobile metaverse media): this application is external to 3GPP standards. Leka's UE accesses the mobile metaverse service, which establishes the association that comprises the spatial anchor, the physical location and the service information.

When Leka moves wares, she can adjust the spatial location of items or remove them entirely (e.g. when the cheese has been sold) by registering the new location or removal with the Warez inventory management system. The 'location' and 'service information' can be updated over time. Leka can also update the information that will be displayed as AR (mobile metaverse media) to the customer by the mobile metaverse server offered by the Warez inventory management system, for example the description of the cheese, its price, etc. This interaction is out of scope of 3GPP standards.

2. Accessing and using spatial anchors

I enter the store and examine what is on display. I capture the scene with sensors and share this information with the Spatial Mapping and Localization Service Enabler (5.5). This allows me to identify my localization information, including orientation, precisely. I put on (activate) my AR glasses. By means of my UE, with access over FineNet, I share the location and orientation information (the area of interest) with the 5G system. The 5G system uses the localization information to identify all applicable spatial anchors in the area of interest, which are returned to the UE and the AR glasses; a sketch of such a lookup is given after this clause. This function is illustrated in the lower half of Figure 5.4.3-1. The service information suffices to access the media server offered by Warez. The location information indicates the location of each spatial anchor.

I now perceive the spatial anchors as shown in Figure 5.4.1-1. As Leka indicates the Halloumi in her counter, and my gaze focuses on that location (known to a very high degree of precision), the AR glasses use the service information associated with the spatial anchor to activate the Halloumi cheese media. I now perceive the AR information panel associated with the Halloumi cheese.
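The anchor lookup in step 2 can be pictured as a simple spatial query. A minimal sketch, assuming a hypothetical dictionary-based anchor record and a spherical area of interest; nothing here is a defined API:

```python
# Hypothetical lookup of spatial anchors in a 3D area of interest (step 2).
import math

def anchors_in_area(anchors, centre, radius_m):
    """Return unrestricted anchors within radius_m of centre (x, y, z)."""
    return [
        a for a in anchors
        if math.dist(a["pos"], centre) <= radius_m and not a["restricted"]
    ]

# Anchors as registered by the producer (illustrative records).
shop_anchors = [
    {"id": "halloumi-01", "pos": (2.4, 0.8, 1.1),
     "service_info": "https://warez.example/ar/halloumi", "restricted": False},
]

# The consumer UE reports its localized position; the enabler returns the
# applicable anchors together with their service information.
for a in anchors_in_area(shop_anchors, centre=(2.0, 1.0, 1.2), radius_m=2.0):
    print(a["id"], "->", a["service_info"])
```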
5.4.4 Post-conditions
I am able to observe the AR content (mobile metaverse media) associated with the wares on display, as shown in Figure 5.4.1-1. As wares are moved or removed from the display, the content shifts as well. The display of AR content is the result of the service information (i.e. how to access the media) and the localization information (i.e. where the media is placed, how it is oriented, etc.). At any time Leka can add new wares and associated AR content, update the content that is displayed, etc. I perceive AR content associated with the items in the shop and happily buy the cheese that meets my needs.
5.4.5 Existing features partly or fully covering the use case functionality
5.4.6 Potential New Requirements needed to support the use case
[PR 5.4.6-1] Subject to operator policy, the 5G system shall enable an authorized third party to establish an association between a physical location (in three dimensional space, an orientation, etc.) and service information, where the service information is provided to the 5G system and the spatial anchor is either provided or determined by the 5G system.

[PR 5.4.6-2] Subject to operator policy, the 5G system shall be able to establish an association between a physical location (in three dimensional space, an orientation, etc.) and service information, where the service information is provided to the 5G system and the spatial anchor is either provided or determined by the 5G system.

[PR 5.4.6-3] Subject to operator policy, the 5G system shall enable an authorized third party to obtain all of the spatial anchors located in a given three dimensional area.

NOTE 1: How the authorized third party identifies which three dimensional area to request spatial anchors in is not in scope of the 3GPP standard. Spatial localization and mapping information could be used to identify areas of interest.

[PR 5.4.6-4] Subject to operator policy, the 5G system shall enable a third party to request the service information associated with the precise location of a specific spatial anchor. Making use of this service and location information, the third party can access a mobile metaverse server to obtain AR media.

NOTE 2: How the service and location information is used by the third party to access a mobile metaverse server, and the AR media itself, are out of scope of this requirement.

[PR 5.4.6-5] Subject to operator policy, the 5G system shall provide an authorized third party a means to manage spatial anchor(s), e.g. add, remove or modify spatial anchors, determine privacy and security aspects, and specifically to enable the third party to define which of the spatial anchors they manage have restricted access conditions.

[PR 5.4.6-6] The 5G system shall be able to collect charging information for the establishment or management of an association between a physical location and service information, where a third party creates, deletes or changes a spatial anchor or associated service information.

[PR 5.4.6-7] The 5G system shall be able to collect charging information associated with the network operator's exposure of spatial anchors, and of service information associated with spatial anchors, to authorized third parties.

NOTE 3: The preceding requirements assume that exposure of spatial anchors and associated service information can be a service provided by a network operator to third parties.
5.5 Use Case on Spatial Mapping and Localization Service Enabler
5.5.1 Description
Spatial mapping is constructing or updating a map of an unknown location; localization is tracking an object to identify its location and orientation over time. For the localized mobile metaverse use case 5.1, the service provider or operator needs to provide and use spatial map information, i.e. a 3D map of an indoor or outdoor environment. This use case considers how a spatial map can be created and employed, both as service enablers. The creation and maintenance of the spatial map is referred to as the Spatial Mapping Service, and the employment of the map to identify the customer's localization is termed the Spatial Localization Service.

Spatial mapping classifies objects for modelling and tracking into stationary and moving objects. For stationary objects, spatial mapping has to estimate the number of objects and the type and position of each object, whereas for moving objects it has to determine the position, type of object, direction and speed. Once the spatial mapping service has sufficient information, it maps all the stationary and moving objects related to the UE's environment. This information can be provided to the UE, service providers and surrounding subscribed users as well [17, 19, 24].

Figure 5.5.1-1: Spatial Location Services Enabler

Specifically, this use case proposes two spatial localization service enablers.

1) Spatial Mapping
Sensing data gathered transparently is processed in order to identify the static and transient forms. For example, in Figure 5.5.1-1 (A), the Rubens room in the Louvre, sensing data captures information. (B) This information is processed to identify the static and transient information, to establish an up-to-date spatial map. (C) In combination with location information (for the sensors) and other information (e.g. architectural specifications of the Louvre), a spatial map can be achieved in which not only the forms but also their locations in 3D space are known.

2) Localization
Given that a spatial map exists, (A) sensing data for a user (from devices communicating by means of a UE) can be captured, (B) compared with the spatial map, and (C) used to identify in 3D space the position, viewing direction, angle, etc. of the user.

Spatial Mapping Considerations

Examples of when spatial mapping could be useful:
- A government conducting a digital city project can build a 3D spatial map of the outdoor environment of an entire city, of public spaces such as outdoor parks, or of the indoor offices of a government building.
- A navigation service provider (or Spatial Localization Service) can build a 3D spatial map of the outdoor environment of entire road networks or public spaces. An operator's partner or the operator itself can also build a 3D spatial map of an outdoor environment.
- A customer may want mapping of their indoor environment, e.g. the interior of a commercial space such as the cheese shop described in 5.4.

It is a hard task to map an entire city using a vehicle traversing its various roads and spaces. It also requires a lot of time and effort (for data conversion, etc.) if the work is performed offline. If multiple capturing devices are used in parallel, the spatial map data for the same location can be synthesized across the different cameras and input devices to generate the spatial map. A mobile capturing device, vehicle or robot can be equipped with multiple stereo/mono RGB cameras and multiple LiDARs to capture various qualities of images and depth information of the environment.
As an example in [15], a mobile indoor robot is equipped with two LiDARs, 6 industrial cameras and 4 smartphone cameras. Based on this example, we can derive the uplink bandwidth requirements for one mobile indoor mapping robot; a rough estimate under stated assumptions is sketched below. There is a corresponding use case for 'transparent sensing' in TR 22.837.

Localization Considerations

The environment mapping can be used to provide a visual positioning service, to enhance the accuracy of location services, or to help the metaverse content management system for the spatial internet. In this use case, the UE provides uplink sensor information that can be interpreted, along with a spatial map, to identify localization. This is analogous to the process used by the UE to provide sensor data that the 5G system can use to determine UE location for Location Services.
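A rough uplink budget for such a robot can be sketched as follows. The sensor counts come from the example in [15]; the per-sensor bitrates are illustrative assumptions (the reference's values are not reproduced here), so the sketch shows the shape of the derivation rather than authoritative numbers.

```python
# Rough uplink budget for one mapping robot as in [15]: 2 LiDARs,
# 6 industrial cameras, 4 smartphone cameras. The per-sensor rates are
# illustrative assumptions for this sketch, not values taken from [15].

SENSORS = {
    # name: (count, assumed per-unit rate in Mbit/s)
    "lidar": (2, 35.0),           # e.g. ~1.3 Mpoints/s point clouds, assumed
    "industrial_cam": (6, 25.0),  # e.g. compressed 1080p30 streams, assumed
    "smartphone_cam": (4, 15.0),  # e.g. compressed 1080p30, lower bitrate
}

total_mbps = sum(count * rate for count, rate in SENSORS.values())
print(f"aggregate uplink for one robot: {total_mbps:.0f} Mbit/s")
# With these assumptions: 2*35 + 6*25 + 4*15 = 280 Mbit/s sustained uplink,
# before protocol overhead or parallel robots are considered.
```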
5.5.2 Pre-conditions
For Spatial Mapping

The mobile operator or service provider can determine the target area for the spatial mapping. The target area can be divided into multiple areas, each of which can be mapped with multiple capturing devices (e.g. indoor capturing robots, vehicles). Each capturing device is equipped with multiple sensors, e.g. mono or stereo RGB cameras or one or more LiDAR cameras. The intrinsic parameters of the cameras equipping the device are pre-configured in the device.

Each capturing device has a high resolution positioning capability; the positioning technique can also derive altitude information. An indoor robot can be equipped with a non-3GPP based high resolution positioning system such as UWB. To derive the exact position of the capturing device, each capturing device can also utilize structure-from-motion technologies. The capturing devices communicate via a UE that can access the mobile network of the MNO that supports spatial mapping. The capturing devices can either sense the spatial area to be mapped, or move about sufficiently that they can be used to acquire sensing data corresponding to the area.

For Localization

A set of sensors is accessible by a UE. These could be built into the terminal equipment or communicate with it in some way that is out of scope of this use case, e.g. using a cable, a personal area network, etc. The UE can access a mobile network of MNO M that offers localization services.

In 3GPP Release 16, bistatic localization is performed on both downlink and uplink using time-difference-of-arrival (TDoA), angle-of-arrival (AoA) and angle-of-departure (AoD) measurements at the base station. Both bistatic and monostatic approaches have limited accuracy; the advent of high-bandwidth mmWave brings high resolution in both the time-delay and angle domains. The spatial mapping problem can be divided into a front-end and a back-end problem. In the front-end, the problem is to determine the data association between landmarks and measurement directions. The back-end problem is to find the probabilistic SLAM density for a given data association determined in the front-end. Most back-end algorithms, such as the extended Kalman filter (EKF), FastSLAM and GraphSLAM, are based on Bayesian methods [21, 22, 23] (a minimal scalar sketch of such a Bayesian update is given after clause 5.5.4).

5.5.3 Service Flows

For Spatial Mapping

1. A capturing mobile device attaches to the mobile network and becomes authorized to deliver sensing data for the purposes of spatial mapping.
2. The capturing mobile device starts the mapping operation.
a. The capturing mobile device arranges for its sensors to provide information as needed by the 5G system. This could require some configuration of the capturing mobile device, e.g. of the sensors, or to control the uplink communication of sensing data.
3. The capturing mobile device navigates, e.g. along a pre-defined route, within the selected target region in order to provide a sufficiently complete set of sensor information for the space to be mapped.
4. The capturing mobile device uploads the captured RGB camera images and LiDAR depth images to the 5G system together with the positioning information of the mobile device.
5. The mapping server collects and analyzes the information provided to cumulatively create or update a spatial map.

For Localization

1. A mobile device requiring localization attaches to a mobile network and becomes authorized to obtain localization services.
2. The mobile device uses sensors to capture information corresponding to the point and direction that has to be localized.
3. The mobile device sends the sensing data to the 5G system via uplink communication.
4. The 5G system uses the sensing data and a spatial map to determine the localization, that is, the corresponding positioning information and sensor pose.

5.5.4 Post-conditions

For Spatial Mapping

The spatial mapping enabler re-structures the data provided by mobile capture devices to create a spatial map. This information could be combined with other information available, e.g. floor plans of buildings, survey data, satellite images, etc. Over time, sufficient information will be captured to allow the spatial mapping enabler to distinguish between static and dynamic objects in the environment.

For Localization

The result of the localization service is available as a service enabler. It could be provided to the UE for use by applications, or exposed to a third party by means of an API offered by the 5GS. This information can be used for many purposes, especially for media based applications that require localization to control the rendering of AR or MR content. Localization can be done over time, e.g. to track a user's movement. The localization enabler can enhance location services, as it can identify a location with some degree of precision by comparing data from diverse sensors with known location data in the spatial map.
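As a minimal illustration of the Bayesian back-end mentioned in clause 5.5.2, the following sketch implements a one-dimensional Kalman filter: predict the device position from a motion model, then correct it with a noisy measurement. Real SLAM back-ends (EKF, FastSLAM, GraphSLAM) operate on full poses and landmark maps; this is only the scalar skeleton of the idea, with all numbers invented for illustration.

```python
# Minimal 1D Kalman filter: the scalar skeleton of a Bayesian SLAM back-end.
# State: position x (m) with variance p. Motion and measurement are noisy.

def predict(x, p, velocity, dt, q):
    """Propagate the state with a constant-velocity motion model."""
    return x + velocity * dt, p + q          # variance grows by process noise q

def update(x, p, z, r):
    """Fuse a position measurement z with measurement noise variance r."""
    k = p / (p + r)                          # Kalman gain
    return x + k * (z - x), (1.0 - k) * p    # corrected estimate and variance

x, p = 0.0, 1.0                              # initial belief: 0 m, variance 1
for z in [1.1, 2.05, 2.9]:                   # noisy range-derived positions
    x, p = predict(x, p, velocity=1.0, dt=1.0, q=0.05)
    x, p = update(x, p, z, r=0.2)
    print(f"estimate {x:.2f} m, variance {p:.3f}")
```

Each iteration widens the uncertainty during prediction and narrows it again when a measurement arrives, which is exactly the predict/correct cycle the cited back-end algorithms generalize to full SLAM state.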
5.5.5 Existing features partly or fully covering the use case functionality
Location Services provide location information which can be used to support services that are triggered or informed by the user's location. This information is not at the degree of accuracy needed for some applications, e.g. AR. Location Services also do not work in all environments, e.g. indoors. The 5GS supports uplink media and uplink sensor data communication.

In clause 4.1.4 of TR 26.928 [18], XR spatial mapping and localization has been proposed. That use case identifies the key areas of XR and AR as spatial mapping, which is creating a map of the surrounding areas, and localization, i.e. positioning the user in that map. It depends on multiple external sensors such as monocular/stereo/depth cameras, radio beacons, GPS, inertial sensors, etc. There is no requirement on how 5G sensing can be used for spatial mapping and localization services.

Positioning in 5G networks has been specified in 3GPP Release 16, which defines positioning signals and measurements for 5G NR. In Release 16, the 5G positioning architecture extends the 4G positioning architecture by adding the Location Management Function (LMF) and Transmission Reception Points (TRP). 5G also enables new positioning methods based on multi-cell round-trip time measurements and multiple antenna beam measurements, enabling downlink angle of departure (DL-AoD) and uplink angle of arrival (UL-AoA) [21, 22]. The current 5G system supports device-based positioning but not device-free positioning, i.e. of objects that do not radiate EM signals. Clause 5.5.6 proposes new requirements to extend this architecture to support the spatial mapping and localization service enablers.
5.5.6 Potential New Requirements needed to support the use case
5.5.6.1 Requirements for Spatial Mapping
[PR 5.5.6.1-1] Subject to operator policy and relevant regional and national regulation, the 5GS shall support mechanisms for an authorized UE to provide sensing data that can be used to produce or modify a spatial map.

[PR 5.5.6.1-2] Subject to operator policy, user consent and relevant regional and national regulation, the 5GS shall support mechanisms to receive and process sensing data to produce or modify a spatial map.

[PR 5.5.6.1-3] Subject to operator policy and relevant regional and national regulation, the 5GS shall support mechanisms to expose a spatial map, or localization information derived from that map, to authorized third parties.

NOTE 1: The spatial map and derived localization information support services that produce AR and MR media, e.g. as described in clause 5.1.
NOTE 2: The precision of the spatial positioning of sensors that provide sensing data used to create or modify the spatial map is not specified. This could be revisited in future as more experience accumulates with spatial mapping services.

[PR 5.5.6.1-4] The 5G system shall support the collection of charging information associated with the exposure of a spatial map or derived localization information to authorized third parties.

[PR 5.5.6.1-5] The 5G system shall support the collection of charging information associated with the production or modification of a spatial map on behalf of an authorized third party.
5.5.6.2 Requirements for Localization
[PR 5.5.6.2-1] Subject to operator policy and relevant regional and national regulation, the 5GS shall support mechanisms to authorize the Spatial Localization Service.

[PR 5.5.6.2-2] Subject to operator policy and relevant regional and national regulation, the 5GS shall support mechanisms for an authorized UE to provide sensor data that can be used for the Spatial Localization Service.

[PR 5.5.6.2-3] Subject to operator policy and relevant regional and national regulation, the 5GS shall support mechanisms to expose Spatial Localization Service information to authorized third parties.

NOTE: The Spatial Localization Service information supports services that produce AR and MR media, e.g. as described in clause 5.1.

[PR 5.5.6.2-4] The 5G system shall support the collection of charging information associated with exposing Spatial Localization Service information to authorized third parties.

5.6 Use Case on Mobile Metaverse for Immersive Gaming and Live Shows
5.6.1 Description
The mobile metaverse combines the physical and digital worlds. Mobile metaverse services have already gained significant attention and will benefit multiple areas, such as gaming, social, medical, industry, transport, and so on [27]. Mobile metaverse services will bring a more immersive user experience, which in turn brings more potential requirements on 5G systems. Among these fields, gaming is considered a pioneer in the development of mobile metaverse services. This use case discusses mobile metaverse services for immersive gaming and live shows.

With the support of the 5GS, game players can interact with each other on a cloud or edge server, which may form a digital world we term a mobile metaverse. Figure 5.6.1-1 shows the general idea of this use case. The mobile metaverse service may be deployed at the cloud or edge server for immersive gaming and live shows. When the players are playing a basketball game, they may achieve an immersive experience with their avatars, and the avatars can interact with each other whether the players are in proximity or not. Meanwhile, other players in the metaverse can join this digital world as spectators to watch the live show.

The sensor data obtained by the cloud or edge server may undergo coding and rendering to generate the digital representation for immersive gaming and live shows, which may be displayed (as if) on a big screen, and the interactive service data can be exchanged among the players and avatars. Here, sensing data includes the physical pose and gestures, including movement. For a basketball game, the court and surrounding facilities can also have sensors. The sensor data obtained from these sensors is useful for the metaverse to determine how to perform 3D digital representation of the participants and the setting. An immersive user experience can be provided for the players and their audience. The major impact on 3GPP is whether and how the 5GS can be used to better utilize the sensor data and achieve immersive experiences for the multiple players.

Figure 5.6.1-1: Mobile Metaverse for Immersive Gaming and Live Shows
5.6.2 Pre-conditions
The following are pre-conditions for this use case:
1) The computing resources used for game design and real-time processing, such as game development libraries and rendering tools, can be provided for the mobile metaverse on the cloud or edge server.
2) The 5GS is capable of transporting the uplink/downlink service data. The VR/AR/MR/cloud gaming mobile devices, such as mobile headsets or other haptic mobile devices, can be connected to the cloud or edge server via the 5GS to support the mobile metaverse immersive game and live show.
5.6.3 Service Flows
The following are service flows for this use case:

1) Player A (a lady who is a fan of the gaming mobile metaverse service) configures her smartphone (a UE). A path is established between the cloud or edge server and the UE. The UE sends a request to the 5GS, and the 5GS authorizes the request and exposes the capability of mobile metaverse game production and game development (e.g. exposes APIs) to the UE. Player A controls her avatar and performs game production and game development based on the rules made by her and the game development or material library stored on the cloud or edge server. She creates a basketball game venue, game rules, NFT characters (with player attributes), supermarkets, etc., all stored and run on the cloud or edge server.

NOTE: The edge may be a burden when it comes to long-term services, such as mobile metaverse service data storage and large-scale computing. In such a case, a cloud service is essential as a centralized node maintaining the shared space for thousands or even millions of concurrent users in such a large scale mobile metaverse service, and cloud-edge interaction via the 5GS is necessary.

2) Player A invites seven players (B, C, D, E, F, G, H) who are participating in the mobile metaverse service to join the game venue she created. These players form two teams to play a 3v3 basketball game match. Among them, B, C and D are one group, E, F and G are another group, and H is the game referee. Each member of each group chooses their own digital representation. Then, player A publishes the game match information, and other players in the metaverse can enter the game venue as spectators to watch the match.

NOTE: Players A, B, C, D, E, F, G, and H can be located in different areas of the physical world.

3) The game starts, with the players playing in a venue realized as a mobile metaverse service. High 3D positioning accuracy is required for the digital representations (avatars) that represent the players' locations and also their gestures in a basketball game. The team members in the physical world control the (digital representation of the) basketball through the 5GS in the uplink direction by means of a typical mobile input device, e.g. a VR headset or VR glasses. At the same time, the players can interact with each other and pass the basketball, etc.; though a player has no actual contact with a basketball in the physical world, he can get some haptic experience of the basketball. The team members, both in the physical world and as digital representations, can interact with each other via the 5GS anytime, anywhere, for an immersive experience.

4) Spectators can watch the game match through the 5G system on a typical mobile device. At the same time, the spectators can view diverse content, such as the game attributes, including game rules and player information, by switching the viewing direction. Multiple mobile metaverse media can be provided through the 5G system to give spectators an immersive live show experience. The spectators can interact with each other via the 5G system for an immersive experience.

NOTE: While the game is running, the UE can access mobile metaverse services with low power consumption to reduce metaverse game interruptions. The cloud or edge server is used for coding, rendering, and generating the mobile metaverse media for immersive gaming and live show mobile metaverse services.

5) Player A terminates the game application.
5.6.4 Post-conditions
The following are post-conditions for this use case: The players in the game match achieved an immersive game experience by means of a mobile metaverse service enabled by the 5GS. The spectators of the game match had an immersive live show experience. The 5GS can address and meet the mobile metaverse service game requirements together with the cloud or edge side. Players A, B, C, D, E, F, G, and H may earn money from the game, a mobile metaverse service.
5.6.5 Existing features partly or fully covering the use case functionality
In clause 6.43.2 of 3GPP TS 22.261, there are the following requirements: The 5G system shall enable an authorized 3rd party to provide policy(ies) for flows associated with an application. The policy may contain e.g. the set of UEs and data flows, the expected QoS handling and associated triggering events, and other coordination information. The 5G system shall support a means to apply 3rd party provided policy(ies) for flows associated with an application. The policy may contain e.g. the set of UEs and data flows, the expected QoS handling and associated triggering events, and other coordination information. NOTE: The policy can be used by a 3rd party application for the coordination of the transmission of multiple UEs’ flows (e.g., haptic, audio, and video) of a multi-modal communication session.
5.6.6 Potential New Requirements needed to support the use case

[PR 5.6.6-1] The 5G system shall support the transmission of uplink sensor data and downlink feedback information with stringent requirements on packet delay and bandwidth for real-time interaction.
[PR 5.6.6-2] The 5G system shall support a mechanism to obtain the location and gestures of the players with stringent requirements on 3D positioning accuracy.

KPI requirements related to the potential requirements:

Table 5.6.6-1: Potential key performance requirements for immersive gaming and live shows

Use case: Mobile metaverse for immersive gaming and live shows
- End-to-end latency: [5~20] ms
- Service bit rate (user-experienced data rate): [1~1000] Mbit/s
- Reliability: [>99.99%]
- Positioning accuracy: [<1] m
5.7 Use Case on AR Enabled Immersive Experience
5.7.1 Description
In addition to watching movies at the cinema, people also choose to watch a movie on their mobile phones, laptops or TVs when they don't have time to go to the cinema, e.g. when travelling or at home. However, when watching on these terminals, users tend to feel discomfort in the neck or cervical spine because they keep their necks bent down. Moreover, the screen is relatively small; if users want to see more realistic screen details or watch an immersive 3D movie, using these terminals is not feasible.

Figure 5.7.1-1: AR Enabled Immersive Experience (image source: www.indiegogo.com)

In this use case, users can get an immersive, location agnostic service experience of watching movies in various circumstances, such as at home or on the train. They can even invite some of their friends to watch a movie simultaneously from different places by wearing a wearable device, such as AR glasses. A large screen like that of a movie theatre is presented in the field of view (FOV) of the wearable device, which not only provides an immersive watching experience like a private cinema but also places very low demands on the environment and space at the user's location; 3D cinematic effects can also be easily rendered in the device.

To achieve an immersive location agnostic service experience through AR glasses, the 5G system is required to provide reliable transmission of uplink and downlink data and a way for users to synchronize their experience and interact together. In mobility scenarios, e.g. when a user is travelling on a train, the continuity of data transmission also needs to be guaranteed. Moreover, when the AR glasses are wireless, the power supply relies on the battery integrated inside the AR glasses. This use case investigates how the 5G system (through direct device connection or NG-RAN) can be used to support the UE in establishing a data connection with the mobile metaverse server. The 5G system can provide services to AR glasses so as to minimize energy consumption in the overall system. The service dataflows and requirements may differ depending on whether the AR glasses access the service through a direct device connection or NG-RAN.
5.7.2 Pre-conditions
Bob wants to watch an AR movie with friends to relax while travelling by train. So he wears wearable equipment, such as AR glasses, that can access the 5G network (through direct device connection or NG-RAN) to reach metaverse services. Bob has subscribed to an immersive movie service as a mobile metaverse service that he can access via the AR glasses. The service gives Bob access to a large movie catalogue (2D/3D). The battery capacity of the AR glasses is enough to watch a two-hour movie. Before starting the movie, Bob can invite some friends to join him. If people join Bob, their avatars are also placed into his FOV and Bob can interact with them (speech or text).

Considering wearing comfort, mainstream AR glasses should not be too heavy (normally less than 150 grams). The maximum capacity of the battery (50 grams) is about 1000 mAh. Usually, 25% of the battery capacity of AR glasses is allocated to the mobile termination module. When a user watches a 4K movie, video compression techniques are usually used to reduce the amount of data transmitted while maintaining image quality. In general, a large compression ratio causes a delay increase. Considering the overall factors of delay and energy consumption, using AR glasses with a direct device connection would require a low compression ratio, for instance 3:1 [50]. However, some advanced AR glasses SoCs embed hardware video decoders (e.g. AVC, HEVC, and VVC) and can render the viewport efficiently. A study on the energy consumption of hardware video decoders [59] shows that a typical HEVC hardware decoder embedded in an Android device spends 40-50 mA per hour of video playback (decoding only). The usage of hardware decoders seems reasonable, given the 1000 mAh battery capacity assumption made in the current description. For the NG-RAN case, it is reasonable to think that AR glasses can decode and render efficiently with low energy consumption.
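The figures above imply a simple energy budget. The sketch below works it through under the stated assumptions (1000 mAh battery, 25% reserved for the mobile termination module, up to 50 mA per hour of hardware decoding from [59]); the resulting headroom is an illustrative derivation, not a measured value.

```python
# Energy budget for a two-hour movie on AR glasses, from the figures above.
BATTERY_MAH = 1000          # total battery capacity assumed in this clause
MT_SHARE = 0.25             # share allocated to the mobile termination module
DECODE_MA_PER_H = 50        # HEVC hardware decoder draw, upper bound from [59]
MOVIE_H = 2                 # target playback duration

mt_budget = BATTERY_MAH * MT_SHARE              # 250 mAh for 5G communication
decode_charge = DECODE_MA_PER_H * MOVIE_H       # 100 mAh for decoding
remaining = BATTERY_MAH - mt_budget - decode_charge

print(f"mobile termination budget: {mt_budget:.0f} mAh")
print(f"decoding over {MOVIE_H} h: {decode_charge:.0f} mAh")
print(f"left for display/rendering/sensors: {remaining:.0f} mAh")  # 650 mAh
```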
5.7.3 Service Flows
1. Data connections are set up between the AR glasses and the metaverse server, which provides an immersive, location agnostic service experience of a movie service. The 5G module can be connected to the 5G network either via direct device connection or NG-RAN.
2. The mobile metaverse server provides movie access to the client AR glasses through the downlink data stream.
3. The mobile metaverse server manages communications between clients (friends), e.g. including video, avatar, speech and text.
4. The mobile metaverse server manages synchronization between the clients (e.g. the various AR glasses) of the friends.
5.7.4 Post-conditions
Bob is able to watch an immersive movie with friends while travelling, with a good user experience. The battery capacity of the AR glasses is enough to watch a two-hour movie. Bob is able to communicate with his friends in a synchronized manner, and his friends (or their avatars) are visible in his FOV. The 5G system is able to support communication for immersive, location agnostic AR services, providing reliable transmission, continuity of service and a synchronized experience across users sharing a viewing experience.
5.7.5 Existing features partly or fully covering the use case functionality
The performance requirements for high data rate AR services have been captured in TS 22.261 clause 7.6. The performance requirements for UE to network relaying in 5G systems have been captured in TS 22.261 clause 7.7. The functional and performance requirements for tactile and multi-modal communication services have been captured in TS 22.261 clauses 6.43 and 7.11, respectively. However, existing requirements still need to consider the power consumption of the 5G UE onboard AR terminals.
5.7.6 Potential New Requirements needed to support the use case
[PR 5.7.6-1] Subject to operator policy, the 5G system shall support a means to provide high data rate transmission to a UE during an extended period of time, including when in high-speed mobility.

NOTE 1: A metaverse service experience over an extended period of time (e.g. 2 h) requires significant power consumption by the UE. In some cases, a device with no external power supply cannot sustain downloading and rendering of media over a long interval, e.g. for the duration of an entire feature film or athletic event.

[PR 5.7.6-2] Subject to operator policy, the 5G system shall support a mechanism that enables flexible adjustment of communication services based on the type of device (e.g. wearables), such that the services can be operated with reduced energy utilization.

[PR 5.7.6-3] Subject to operator policy, the 5G system shall support a means to enable interactive immersive multiparty communications in the metaverse service.

NOTE 2: The multiparty immersive communication (e.g. amongst multiple friends) could be location related or location agnostic.

Table 5.7.6-1: Potential key performance requirements for Immersive AR interactive experience: tethered link

Viewports streaming from rendering device to AR glasses through direct device connection (tethered/relaying case) (note 1):
- Max allowed end-to-end latency: 10 ms (i.e. UL+DL between AR glasses display and the rendering UE) (note 2)
- Service bit rate (user-experienced data rate): [200-2000] Mbit/s
- Reliability: 99,9 % (note 2)
- Number of UEs: 1-2
- UE speed: stationary or pedestrian

Pose information from AR glasses to rendering device through direct device connection (tethered/relaying case) (note 1):
- Max allowed end-to-end latency: 5 ms (note 2)
- Service bit rate (user-experienced data rate): [100-400] kbit/s (note 2)
- Reliability: 99,99 % (note 2)
- Number of UEs: 1-2
- UE speed: stationary or pedestrian

NOTE 1: These KPIs are only valid for cases where the viewport rendering is done in the tethered device and streamed down to the AR glasses. In the case of rendering capable AR glasses, these KPIs are not valid.
NOTE 2: These values are aligned with the tactile and multi-modal communication KPI table in TS 22.261 [5], clause 7.11.

Table 5.7.6-2: Potential key performance requirements for Immersive AR interactive experience: NG-RAN multimodal communication link

Movie streaming from metaverse server to the rendering device (note 2):
- Max allowed end-to-end latency: only relevant for live streaming; [1-5] s in case of live streaming
- Service bit rate (user-experienced data rate): [0,1-50] Mbit/s (i.e. covering a complete OTT ladder from low resolution to 3D-8K) (note 1)
- Reliability: 99,9 %
- Number of UEs: 1 to [10]
- UE speed: [up to 500 km/h]

Avatar information streaming between remote UEs (end to end) (note 3):
- Max allowed end-to-end latency: 20 ms (i.e. UL between UE and the interface to the metaverse server + DL back to the other UE)
- Service bit rate (user-experienced data rate): [0,1-30] Mbit/s
- Reliability: 99,9 %
- Number of UEs: 1 to [10]
- UE speed: [up to 500 km/h]

Interactive data exchange, voice and text between remote UEs (end to end) (note 4):
- Max allowed end-to-end latency: 20 ms (i.e. UL between UE and the interface to the metaverse server + DL back to the other UE)
- Service bit rate (user-experienced data rate): [0,1-0,5] Mbit/s
- Reliability: 99,9 %
- Number of UEs: 1 to [10]
- UE speed: [up to 500 km/h]

NOTE 1: These values are aligned with the "high-speed train" DL KPI from TS 22.261 [5], clause 7.1.
NOTE 2: To leverage existing streaming assets and the delivery ecosystem, it is assumed that the legacy streaming data are delivered to the rendering device, which embeds this in the virtual screen prior to rendering. For a live streaming event, the user-experienced end-to-end latency is expected to be competitive with traditional live TV services, typically [1-5] seconds.
NOTE 3: For example, the glTF format [60] can be used to deliver avatar representation and animation metadata in a standardized manner. Based on this format, the required bitrate for transmitting such data is highly dependent on the avatar's complexity (e.g. basic model versus photorealistic).
NOTE 4: These values are aligned with the "immersive multi-modal VR" KPIs in TS 22.261 [5], clause 7.11. End-to-end latency in this table is calculated as twice the value of the DL "immersive multi-modal VR" latency in TS 22.261 [5], clause 7.11.
5.8 Use Case on multi-service coordination in one mobile metaverse service
5.8.1 Description
There is a major difference between a metaverse service and a traditional multimedia service. A mobile metaverse service provides a platform that supports different applications to complete a task, such as gaming, online working, online education, etc. Users will have no limitations on the terminals they use. In existing XR applications, a specific brand of VR glasses or gloves is required for a given game, and different brands of VR glasses and gloves are very hard to map and coordinate within the same game. In mobile metaverse services, however, the nature of the standard will support coordination between different equipment belonging to different applications or brands.

Figure 5.8.1-1: Multi-service coordination in one mobile metaverse service
5.8.2 Pre-conditions
John has a pair of VR glasses and a pair of tactile gloves. Usually, he uses the VR glasses for VR games and the tactile gloves for virtual painting, where he can feel the brushstrokes. These two activities run on two different network slices. As the VR glasses were bought to play VR games, the VR game application uses network slice A, which better supports the game service. The tactile gloves belong to Brand B, which uses another network slice B.

In the mobile metaverse, there are many different types of services, such as games, concerts, education, etc. The mobile metaverse service has subscribed to different network slices for these different types of service, with different QoS for different flows accordingly, for a better user experience.
5.8.3 Service Flows
1. John opens a mobile metaverse service in which both the VR glasses and the tactile gloves are needed: he would like to draw a picture with the tactile gloves and watch a live music show at the same time.
2. In the subscription between the mobile metaverse service (which can be hosted by operators or other companies) and the network, the video flow and audio flow of the live music context are subscribed to QoS 1 and QoS 2 in slice A, and the video flow and tactile flow of the painting context are subscribed to QoS 3 and QoS 4 in slice B.
3. In his VR glasses, John can see the singer and the other listeners. At the same time, he can see his painting on a virtual easel and use a virtual brush to paint, while feeling the brushstrokes through the tactile feedback.
4. In this case, the mobile metaverse service has a policy to use the same QoS level for the video flows of the live music context and the painting context, and informs the network of this decision.
5. As John is painting and enjoying the live show at the same time, the video flow and audio flow of the live music mobile metaverse service and the video flow and tactile flow of the painting mobile metaverse service need to be coordinated. This coordination information needs to be shared with the network for policy modification; a sketch of such a coordination step is given after this list.
6. The network performs this dynamic policy modification for John.
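A minimal sketch of the coordination step in flows 4-6, assuming hypothetical flow descriptors and a simple "align the video QoS levels" policy; neither the structures nor the names are defined by 3GPP:

```python
# Hypothetical coordination of flows across two slices: the service aligns
# the QoS of the two video flows and shares the result with the network.

flows = [
    {"name": "live-music-video", "slice": "A", "qos": 1},
    {"name": "live-music-audio", "slice": "A", "qos": 2},
    {"name": "painting-video",   "slice": "B", "qos": 3},
    {"name": "painting-tactile", "slice": "B", "qos": 4},
]

def align_video_qos(flows):
    """Apply the service policy: all video flows get the same (best) QoS."""
    video = [f for f in flows if "video" in f["name"]]
    target = min(f["qos"] for f in video)   # assume a lower value = better QoS
    for f in video:
        f["qos"] = target
    return video

# The service informs the network, which performs the policy modification.
for f in align_video_qos(flows):
    print(f"policy update: {f['name']} in slice {f['slice']} -> QoS {f['qos']}")
```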
5.8.4 Post-conditions
John used both the VR glasses and the tactile gloves in distinct mobile metaverse services with a very good combined user experience.
5.8.5 Existing features partly or fully covering the use case functionality
The 5G system can support different communication performance policies for services and provides some support for resolving conflicts between the policies of different services. There is, however, no way for the 5G system to coordinate the communication performance delivered so that divergence in communication performance is reduced for distinct services (i.e. from different service providers).

3GPP TS 23.503 [63] clause 4.3.1 includes the following general requirement: "The PCC framework shall allow the resolution of conflicts which would otherwise cause a subscriber's Subscribed Guaranteed Bandwidth QoS to be exceeded." 3GPP TS 23.503 [63] clause 6.1.3.7 explains that "Service pre-emption priority enables the PCF to resolve conflicts where the activation of all requested active PCC rules for services would result in a cumulative authorized QoS which exceeds the Subscribed Guaranteed bandwidth QoS." A note in 3GPP TS 23.503 [63] clause 6.1.3.7 includes the following sentence: "Normative PCF requirements for conflict handling are not defined."
5.8.6 Potential New Requirements needed to support the use case
[PR 5.8.6-1] The 5G system shall provide the capability of reducing the differences in communication performance between different mobile metaverse services for a given UE, to prevent inconsistency of experience due to XR media with divergent or conflicting characteristics, e.g. resolution, latency or packet loss.

NOTE: The UE can provide communication services for more than one terminal equipment.
5.9 Use Case on Synchronized predictive avatars
5.9.1 Description

In this use case, three users are using the 5GS to join an immersive mobile metaverse activity (which may be an IMS multimedia telephony call using AR/MR/VR). The users Bob, Lukas, and Yong are located in the USA, Germany and China, respectively. Each of the users can be served by a local mobile metaverse service edge computing server (MECS) hosted in the 5GS; each of the mobile metaverse servers is located close to the user it is serving. In the case of IMS, such a MECS could be an AR Media Function that provides network assisted AR media processing. When a user joins a mobile metaverse activity, such as a joint game or teleconference, the avatar of the user is loaded in the MECS of the other users. For instance, the MECS close to Bob hosts the avatars of Yong and Lukas.

The distance between the users, e.g. the roughly 11640 km between the USA and China, determines the minimum communication latency, e.g. 11640 km / c = 38 ms (see the arithmetic sketch below). This latency might be higher due to different causes, e.g. hardware processing, and might be variable for multiple reasons, e.g. congestion or delays introduced by the (variable processing time of) hardware components such as sensors or rendering devices. Since this value may be too high and too variable for a truly immersive joint location agnostic metaverse service experience, each of the deployed avatars includes one or more predictive models of the person it represents, which allow rendering in the local edge server a synchronized predicted (current) digital representation (i.e. avatar) of the remote users. Similar techniques have been proposed, for example, in [28].

Figure 5.9.1-1 shows an exemplary scenario in which a MECS at location 3 (USA) runs the predictive models of the remote users (Yong and Lukas), takes as input the received sensed data from all users (Yong, Lukas, and Bob) as well as the current end-to-end communication parameters (e.g. latency), and generates a synchronized predicted (current) digital representation (i.e. avatar) of the users to be rendered in the local rendering devices of Bob.

A particular example of such a scenario might be gaming: Yong, Lukas, and Bob are playing baseball in an immersive mobile metaverse activity, and it is Yong's turn to hit the ball that is going to be thrown by Lukas. If Yong hits the ball, then Bob can continue running, since Yong and Bob are playing on the same team. In this example, the digital representation (e.g. avatar) predictive models of Lukas and Yong (deployed at the MECS close to Bob) allow creating a combined synchronized prediction at location 3 of Lukas throwing the ball and Yong reacting to and hitting the ball, so that Bob can start running without delays and can enjoy a great immersive mobile metaverse experience. This example illustrates how predictive models can improve the location agnostic service experience in a similar way as in [28]. Synchronized predictive digital representations (e.g. avatars) are, however, not limited to the gaming industry and can play a relevant role in other metaverse services, e.g. immersive healthcare or teleconferencing use cases. This scenario involving synchronized predictive digital representations (e.g. avatars) is assumed to require synchronization of user experiences to a single clock.

Figure 5.9.1-1: Example of a joint metaverse experience with synchronized predicted avatar representation
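The 38 ms figure above is simply the quoted distance divided by the speed of light; a one-line cross-check:

```python
# Lower bound on one-way latency from geometry alone: propagation at the
# speed of light over the USA-China distance quoted above (11 640 km).
C_KM_PER_S = 299_792           # speed of light in vacuum, km/s
distance_km = 11_640

one_way_ms = distance_km / C_KM_PER_S * 1_000
print(f"propagation-only one-way latency: {one_way_ms:.1f} ms")   # ~38.8 ms
# Real paths (fibre, refractive index ~1.5) are longer and slower, so the
# actual floor is higher; processing and congestion add variable delay.
```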
5.9.2 Pre-conditions
The following pre-conditions and assumptions apply to this use case:
1. Up to three different MNOs operate the 5GS providing access to mobile metaverse services.
2. The users Bob, Lukas, and Yong have subscribed to the mobile metaverse services.
3. Each of the users, e.g. Bob, decides to join the immersive mobile metaverse service activity.
5.9.3 Service Flows
The following service flows need to be provided for each of the users:
1. Each of the users, e.g. Bob, decides to join the immersive mobile metaverse service activity and gives consent to the deployment of their avatars.
2. Sensors at each user sample the current representation of each of the users, where sampling is done as required by the sensing modalities. The sampled representation of each of the users is distributed to the metaverse edge computing servers of the other users (which may be an AR Media Function in the case of IMS) in the metaverse activity.
3. Each of the edge computing servers applies the incoming data stream representing each of the far located users to the corresponding digital representation (e.g. avatar) predictive models, taking into account the current communication parameters/performance, e.g. latency, to create a combined, synchronized, and current digital representation of the remote users that is provided as input to rendering devices in the local environment. The predictive model also ensures that it correctly synchronizes with the actual state of the remote users, based on which it can make the necessary corrections to the digital representation in case of differences between a predicted state and the actual state. A minimal sketch of this predict-then-correct step is given after this list.

The service flows for the other users (i.e. Yong in China and Lukas in Germany) are the mirrored equivalent. For instance, even if not shown in Figure 5.9.1-1, the local edge computing server associated with Lukas will run the digital representation (e.g. avatar) predictive models of Yong and Bob and consume the data streams coming from those users.
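A minimal sketch of the predict-then-correct behaviour in step 3, using dead reckoning as a stand-in for the avatar predictive model; the motion model, blending factor and state layout are illustrative assumptions, not part of the use case:

```python
# Dead-reckoning stand-in for an avatar predictive model (step 3): the MECS
# extrapolates the remote user's state forward by the measured one-way
# latency, then blends in the true state whenever a fresh sample arrives.

def predict_state(last_pos, last_vel, latency_s):
    """Extrapolate the remote user's position to 'now' across the latency."""
    return tuple(p + v * latency_s for p, v in zip(last_pos, last_vel))

def correct_state(predicted, actual, blend=0.5):
    """Pull the rendered state toward the actual state to fix drift smoothly."""
    return tuple(p + blend * (a - p) for p, a in zip(predicted, actual))

# Lukas' last reported state, received 39 ms ago:
pos, vel = (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)   # m and m/s in a shared frame
rendered = predict_state(pos, vel, latency_s=0.039)
print("predicted:", rendered)                  # (1.078, 0.0, 0.0)

# A fresh sample arrives and differs from the prediction; correct gently.
rendered = correct_state(rendered, actual=(1.10, 0.0, 0.0))
print("corrected:", rendered)                  # (1.089, 0.0, 0.0)
```

Blending, rather than snapping to the fresh sample, avoids visible jumps in the rendered avatar; a real predictive model would replace the constant-velocity extrapolation with a learned model of the person's motion.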
5.9.4 Post-conditions
The main post-condition is that each of the users enjoys an immersive metaverse service activity.
5.9.5 Existing features partly or fully covering the use case functionality
TS 22.261 includes in clause 6.40.2 the following requirement related to AI/ML model transfer in the 5GS: "Based on operator policy, 5G system shall be able to provide means to predict and expose predicted network condition changes (i.e. bitrate, latency, reliability) per UE, to an authorized third party." This requirement is related to requirement [PR 5.9.6-2], but is not exactly the same, since the usage of predictive digital representation (e.g. avatar) models requires knowledge of the end-to-end network conditions, in particular latency.
5.9.6 Potential New Requirements needed to support the use case
[PR 5.9.6-1] The 5G system (including IMS) shall provide a means to synchronize the incoming data streams of multiple (sensor and rendering) devices associated with different users at different locations.
[PR 5.9.6-2] The 5G system (including IMS) shall provide a means to expose predicted network conditions, in particular latency, between remote users.
[PR 5.9.6-3] The 5G system (including IMS) shall provide a means to support the distribution, configuration, and execution in a local Service Hosting Environment of a predictive digital representation model associated with a remote user involved in multimedia conversational communication.
[PR 5.9.6-4] The 5G system (including IMS) shall provide a means to predict the rendering of a digital representation of a user (e.g. an avatar) and/or of an object based on the latency of a multimedia conversational communication, and to render the predicted digital representation.
NOTE: The predicted rendering is expected to be updated/synchronized with real-world information received about the user and/or object.
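The sketch below illustrates how an edge server might consume exposed latency predictions per [PR 5.9.6-2]. The function get_predicted_latency_ms is a hypothetical placeholder for whatever exposure interface normative work may define; it is not an existing 3GPP API.

```python
def get_predicted_latency_ms(remote_user_id: str) -> float:
    """Hypothetical placeholder for a network exposure call returning the
    predicted end-to-end latency toward the given remote user."""
    raise NotImplementedError("stands in for a future exposure interface")

def prediction_horizon_ms(remote_user_id: str, processing_margin_ms: float = 10.0) -> float:
    """Horizon over which the avatar predictive model should extrapolate:
    the exposed latency prediction plus a local processing margin (assumed)."""
    return get_predicted_latency_ms(remote_user_id) + processing_margin_ms
```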
5.10 Use Case on mobile metaverse for Critical HealthCare Services
5.10.1 Description
Immersive interactive mobile services encompass multiple services such as gaming, education, healthcare, shopping and recreation. Healthcare is a lifesaving critical service, and it will benefit greatly from mobile metaverse services. Remote surgery and surgeon training are already emerging examples for geographically spread specialized surgeons, students and patients. Healthcare making use of mobile metaverse services can save lives by providing healthcare services at the earliest opportunity, train students better, and free surgeons/doctors from being physically present at the patient's location. This area encompasses services such as surgery, medical student training and surgeon training. It will enable remote physician and patient examination.
Mobile metaverse services can be characterized as a typical class of application that involves a server and a client device. This class of application will typically exchange haptic signals (forces, torques, position, velocity, vibration, etc.), video and audio signals. Mobile metaverse services largely depend on low-latency, highly reliable, secure wireless communication networks [14, 32, 33, 36, 37].
Mobile metaverse surgeries. Normally, surgeons have to be physically present at the hospital to perform their surgeries. Surgeons and patients may have to travel great distances for surgeries, which is resource intensive and burdensome. The outbreak of coronavirus has further proven the case for remote surgeries. At present, increasingly many surgical rooms are equipped with advanced surgical robots, and doctors can remotely operate on patients by controlling these surgical robots, as shown in Figure 5.10.1-1b [35]. Mobile metaverse services can bring doctors and patients closer virtually, which further improves accuracy and surgical flexibility. They can also facilitate consultation to provide suggestions and domain knowledge to reduce risks in the actual operation. Recently, a real-time remote breast cancer surgery was performed using a private 5G network and a head mounted display. Dr. G, physically present in the operating room, wore a head mounted display, could see the crucial information displayed by the goggles, and performed the breast cancer surgery. Dr. G received constant advice from Dr. A, who was seated on the stage at the congress of the Spanish Association of Breast Surgeons, 900 km away from the surgery room [34, 35].
Mobile metaverse physician consultation. With the outbreak of COVID, virtual consultation through audio and video conference calls has gained considerable momentum. Mobile metaverse services enabled by tactile sensors along with video and audio media can provide a rich and successful experience to both physician and patient. The ability to perform mobile metaverse service consultation without the need for the doctor and patient to be physically co-located is an extremely efficient prospect. Doctors can potentially use mobile metaverse services to examine and interact with representations of aspects and views of the patient, and benefit from a plethora of medical advice databases, as shown in Figure 5.10.1-1c. 5G communication is one of the key factors enabling mobile metaverse service based physician consultation. Doctors and patients both need high-throughput, ultra-reliable and low-latency 5G communication for these services to succeed [30, 32].
Mobile metaverse body scan and vitals.
Mobile metaverse services can significantly improve and change the way current body scan diagnostics and vital statistics are gathered. Mobile metaverse services can be utilized to see real-time diagnostic data of the patient, such as body temperature, heart rate, blood pressure and breathing rate, along with MRI, CT and 3D scans, as shown in Figure 5.10.1-1d. Medical challenges such as vein detection for IVs, shots, etc., can easily be resolved using mobile metaverse services. These will help medical professionals to detect, diagnose and treat a patient [30, 32].
Figure 5.10.1-1: Mobile metaverse service examples
The examples in Figure 5.10.1-1 feature the use of mobile metaverse services for a) training surgeons, b) remote surgery, c) physician consultation and d) remote medical diagnostics. (Image source: healthcareoutlook.net, courthousenews.com, gmw3.com, ourplnt.com)
Mobile metaverse training of medical students. Surgeries performed by surgeons all over the world can be used to train students using mobile metaverse services. Students can observe a live surgery with almost all the important vitals and views on display, as they would be displayed to a surgeon. Further, the students can view the live surgery from different viewing perspectives, hear the surgeon's instructions, and see the display of suggestions and domain knowledge, as shown in Figure 5.10.1-1a. In May 2021, a live lung surgery was performed using an extended reality (XR) technology platform. More than 200 thoracic surgeons from Asian countries attended the outreach program and received training. The participants wore a head-mounted display (HMD) at their respective locations and participated in the program virtually, represented by a digital representation (e.g. an avatar). The participants viewed the live lung surgery with lecture and 360° high-resolution surgical scenes, as shown in Figure 5.10.1-2 [31].
Figure 5.10.1-2: Live lung surgery training through metaverse
The source of Figure 5.10.1-2 is [31] (Image source: Journal of Educational Evaluation for Health Professions (Jeehp)).
5.10.2 Pre-conditions
Hospital Y has an enterprise subscription with MNO X, through which the MNO provides the hospital and its staff with fault-tolerant, ultra-highly reliable 5G communication as well as mobile metaverse services. Dr. Alex and Dr. Bob are renowned surgeons of Hospital Y who are also trained to perform surgeries using immersive interactive mobile healthcare services. Dr. Alex and Dr. Bob both have 5G-based head mounted devices (HMDs) and tactile gloves, with which they can be virtually present in the hospital surgery room using digital representations of themselves (avatars). The doctors are able to make remote use of actuators in the operating room. The service requires extremely high reliability, as a patient's life is at risk. The 5G system allocates sufficient communication resources, e.g. through the use of a GBR QoS policy, to both surgeons' communications for the entire duration of the surgery.
5.10.3 Service Flows
1) Dr. Alex and Dr. Bob are senior surgeons of a renowned hospital. Dr. Alex is on vacation with his family and Dr. Bob is attending a conference 300 miles away from Madrid, Spain.
2) A patient, David, has had an accident and has been rushed to the emergency room of a local hospital. David has a bad head injury and needs urgent brain surgery.
3) The hospital requests that the surgeons Dr. Alex and Dr. Bob perform the surgery.
4) Dr. Alex and Dr. Bob wear HMDs and tactile gloves and connect to the hospital surgery room virtually, making use of the 5G system.
5) David is in an operating room of the hospital equipped with advanced surgical robots and attended by the local doctors and nurses.
6) Vital diagnostic information such as heart rate, BP readings, ECG, etc. is virtually displayed to both surgeons. The surgeons are able to view the surgery room and to see each other using digital representations (e.g. avatars).
7) Dr. Alex and Dr. Bob perform the surgery remotely with the assistance of the robots and the doctors and nurses at the hospital.
8) The surgery ends successfully after three hours of surgery and remote treatment, without an interruption in the 5G connections.
9) After the surgery, the patient is moved from the surgery bed to an Intensive Care Unit (ICU) by the doctors and nurses in the hospital. Dr. Alex and Dr. Bob return to their vacation and conference, respectively.
5.10.4 Post-conditions
After the surgery, the doctors and surgeons stay connected virtually to read the vitals of the patient. Once the patient's condition has stabilized, Dr. Alex and Dr. Bob disconnect their devices. The 5G system then releases the dedicated communication resources (GBR QoS policy) from the devices used for the surgery.
5.10.5 Existing feature partly or fully covering use case functionality
1) The URLLC system design in clause 5.33 of TS 23.501 [39] proposes a dual redundant system to achieve ultra-high reliability. Though this system design has dual RAN connections, PDU sessions, SMFs and UPFs, it has a common DN and AMF, which constitute single points of failure in the system architecture. The URLLC system design is ultra-reliable but not end-to-end ultra-reliable and fault tolerant. It also relies on upper-layer protocols such as IEEE 802.1 TSN (Time Sensitive Networking) FRER (Frame Replication and Elimination for Reliability) to manage the dual redundant system, i.e. replication and elimination of redundant packets or frames. A typical fault-tolerant system should not have single points of failure and should manage the system flow itself. Moreover, the use of FRER implies that the content is exchanged over both paths continuously, thus doubling the resources used over the radio. A minimal sketch of FRER-style duplicate elimination is given after this list.
2) Redundant user plane paths based on multiple UEs per device are proposed in Annex F of TS 23.501 [39]. In this system design, the device is expected to have two UEs that independently connect to their RANs and have their own PDU sessions with a common DN. This system is not end-to-end fault tolerant, since it has a common DN – a single point of failure – and requires dual UEs to achieve ultra-high reliability. This architecture, too, assumes that some upper-layer protocol (e.g. FRER) is used for replication and frame elimination, thus doubling the resources used over the radio.
3) Multimedia Priority Service (MPS), described in clause 5.16.5 of TS 23.501 [39], allows service users priority access to system resources under congestion, creating the ability to deliver or complete sessions of a high-priority nature. Service users are priority users such as government officials and authorized users. Currently there are no requirements to identify mission-critical users and their priorities, such as emergency (during surgeries), surgeon training, scans and physician consultation.
4) RRC controls the scheduling of user data in the uplink by associating each logical channel with a logical channel priority, a prioritised bit rate (PBR) and a buffer size duration (BSD), as described in clause 10.5 of TS 38.300 [40]. This could be extended to allocate logical channels for mission-critical services such as the metaverse for critical healthcare services.
5) The existing CMED work in TR 22.826 specifies various possible healthcare support using ultra-high-definition video, tactile sensors and audio. Though this TR specifies a reliability of 99.999%, it does not specify fault-tolerant end-to-end reliability, nor does it address the metaverse, which involves interactive VR 360˚ streaming [41].
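For illustration of the FRER behaviour referenced in items 1 and 2, the following Python sketch shows sequence-number-based duplicate elimination on the receiving side of two redundant paths. It is a much-simplified version of the IEEE 802.1CB sequence recovery function (which operates on tagged frames), intended only to convey the "accept the first copy, drop the replica" principle.

```python
class DuplicateEliminator:
    """Accept the first copy of each sequence number, drop later replicas."""

    def __init__(self, history: int = 64):
        self.history = history          # sliding window of tracked sequence numbers
        self.seen: set[int] = set()
        self.highest = -1

    def accept(self, seq: int) -> bool:
        """Return True for the first copy of a packet, False for replicas."""
        if seq in self.seen or seq <= self.highest - self.history:
            return False                # replica (or too old to track)
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # forget entries that fall out of the window
        self.seen = {s for s in self.seen if s > self.highest - self.history}
        return True

# Frames replicated over two paths arrive twice; only one copy is delivered.
elim = DuplicateEliminator()
deliveries = [elim.accept(s) for s in (1, 2, 1, 3, 2, 4)]
assert deliveries == [True, True, False, True, False, True]
```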
5.10.6 Potential New Requirements needed to support the use case
[PR 5.10.6-1] The 5G system shall provide a means to synchronize multiple service data flows (e.g., heart rate, video, audio) of multiple UEs associated with Critical HealthCare services.
5.11 Use case of IMS-based 3D Avatar Communication
5.11.1 Description
This use case identifies two fundamental scenarios and one sub-scenario for 3D avatar communication by means of the IMS. The intention of the proposal is to fully specify this system in 3GPP, to provide a standard for a new form of media to be used in telecommunication by mobile users. In the terminology of this use case, the avatar is a digital representation of a user, and this digital representation is exchanged (with other media, notably audio) with one or more users as mobile metaverse media.
An Avatar Call is similar to a Video Call in that both are visual and interactive and provide live feedback to participants regarding their emotions, attentiveness and other social information. They differ in that an Avatar Call can be more private - revealing neither the environment where the caller is, nor their actual appearance. An avatar may be preferable to displaying one's own face in a call for a number of reasons - a user may not feel presentable, may want to make a specific impression, or may have to communicate when only limited data communication is possible. The key difference between an Avatar Call and a Video Call is that the Avatar Call requires only a very constrained data rate, e.g. 5 kbps, to support.
This use case is timely because the key enabling technologies have reached sufficient maturity. The key avatar technologies are the means to (1) capture facial data and calculate values according to a model, (2) efficiently send both media and model components through a communication channel, both initially and over time, and (3) produce media for presentation to a user for the duration of the communication. We anticipate such services will be increasingly available in the coming months and years. The current approaches under development are effectively proprietary, and they are not integrated with the IMS.
The scenarios considered in this use case are:
1(a). IMS users initiate an avatar call.
1(b). IMS users initiate a video call, but one (or both) users decide to provide an Avatar Call representation instead of a video representation.
For both 1(a) and 1(b) the goal is to capture sensing data of the communicating users (especially facial data) to create an animated user digital representation (avatar). This media is provided to communicating users as a new teleservice user experience enabled by the IMS.
2. A user interacts with a computer-generated system. Avatar communication is used to generate an appearance for a simulated entity with whom the user communicates.
5.11.2 Pre-conditions
Users Adonis and Aphrodite are 3GPP subscribers. Both have terminal equipment sufficient to capture their facial expression and movements adequately for computing avatar modeling information. The terminal equipment also includes a display, e.g. a screen, to display visual media. The terminal equipment is capable of initiating and terminating the IMS multimedia application 'avatar call.' The terminal equipment is also capable of capturing the facial appearance and movements sufficiently to produce data required by a Facial Action Coding System (FACS). A network accessible service is capable of initiating and terminating an IMS session and the IMS multimedia application 'avatar call.'
5.11.3 Service Flows
1(a). Avatar Call
i. Adonis is on a business trip and filthy after a day servicing industrial equipment. He calls Aphrodite, who is several time zones away and reading in bed after an exhausting day.
ii. Adonis doesn't want to initiate a video call, since he hasn't had a chance to clean up and is still at work, surrounded by ugly machines. He initiates an 'avatar call' explicitly with his terminal equipment interface.
iii. Aphrodite is alerted of an incoming call. She sees it is from Adonis and that it is an avatar call. She accepts the call, pleased that she will be presented on the call as an avatar.
Figure 5.11.3-1: Avatar media prepared for an avatar call
In more detail, the media that is provided uplink is generated on each terminal. This is analogous to the way in which speech and video codecs operate today.
Figure 5.11.3-2: Avatar generation on each UE
Once the avatar call is established, the communicating parties provide information uplink. The terminal (a) captures facial information of the call participant and (b) locally determines an encoding that captures the facial information (e.g. consisting of data points, colouring and other metadata). This information is transmitted as a form of media uplink and is provided by the IMS to the other participant(s) in the avatar call. When (c) the media is received by a participant, the media is rendered as a two- (or three-) dimensional digital representation, shown above as the 'comic figure' on the right. A sketch of this capture-encode-render pipeline is given at the end of this clause.
In this use case, the UE performs processing of the data acquired by the UE to generate the avatar codec. It is possible to send the acquired data, e.g. video data from more than one camera, uplink so that the avatar codec could be rendered by the 5G network. It is, however, advantageous from a service perspective to support this capability on the UE. First, the uplink data requirement is greatly reduced. Second, concerns about the confidentiality of the captured data could prevent the user from being willing to expose it to the network. Third, the avatar may not be based on sensor data at all, if it is a 'software generated' avatar (as by a game or other application, etc.), and in this case there is no sensor data to send uplink to be rendered.
1(b). Video call falls back to an Avatar call
i. Adonis is striking as can be, and standing at an awe-inspiring vista on Mount Olympus. He initiates a video call with Aphrodite.
ii. Unfortunately, Adonis has forgotten to consider the time zone difference. For Aphrodite, it is the middle of the night. What's more, Aphrodite has been up for several hours in the middle of the night to clean up a mess made by her sick cat. While she wants to take the call from Adonis, she prefers to be presented by an avatar, and not to take the call as a video call from her side. She explicitly requests an 'avatar presentation' instead of a 'video presentation' and picks up Adonis' call.
iii. The call between Adonis and Aphrodite is established. Adonis sees Aphrodite's avatar representation. Aphrodite sees Adonis in the video media received as part of the call.
iv. Adonis walks further along the mountain trail while still speaking to Aphrodite. The coverage gets worse and worse, until it is no longer possible to transmit video uplink adequately. Rather than switching to a voice-only call, Adonis activates 'avatar call' representation. This requires very little data throughput.
v. Adonis and Aphrodite enjoy the rest of their avatar call.
2. Aphrodite calls automated customer service
i. Aphrodite calls the customer service of the company "Inhabitabilis" to initiate a video call.
ii. Inhabitabilis customer service employs a 'receptionist' named Nemo, who is actually not a person at all; he is a software construct. An artificial intelligence algorithm generates his utterances. At the same time, an appearance is generated as a set of code points using a FACS system, corresponding to the dialog and interaction between Aphrodite and Nemo.
iii. Aphrodite is able to get answers to her questions and thanks Nemo.
In all the above scenarios, the following applies.
3. Aphrodite uses a terminal device without cameras, or whose cameras are insufficient, and/or Adonis uses a terminal device without avatar codec support
In this scenario, the UE used by either calling party is not able to support an IMS 3D avatar call. Through the use of transcoding, this lack of support can be overcome. In the service flow shown below, as an example, Aphrodite's UE cannot capture her visually so as to generate an avatar encoding, so she expresses herself in text.
i. Aphrodite calls Adonis and wants to share an avatar call. She cannot, however, be captured via FACS, due to a lack of sufficient camera support on her UE. Instead she uses text-based avatar media.
ii. The text-based avatar media is transported to the point at which this media is rendered as a 3D avatar media codec.
Figure 5.11.3-3: Example of text-based avatar media enabling an avatar call without camera, etc. support on a UE
The transcoding rendering of the avatar media to 3D avatar media could occur at any point in the system - Aphrodite's UE, the network, or Adonis' UE.
iii. Adonis' UE is able to display an avatar version of Aphrodite and hear it speak (text to voice). To the extent that the avatar configuration and voice generation configuration are well associated with Aphrodite, Adonis can hear and see her speaking, though Aphrodite only provides text as input to the conversation.
Other examples (not further described here) could, for example, take the media provided by Aphrodite (e.g. text, binary, avatar encoding, etc.) and transcode it to video for presentation to Adonis. This would be useful if Adonis' UE did not have support for avatar encoding.
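The following Python sketch illustrates the capture-encode-render pipeline of Figure 5.11.3-2, under the assumption of a FACS-like model in which the face is described by a small set of action-unit intensities. The action-unit names and frame format are illustrative assumptions, not a codec definition.

```python
import json

# Illustrative action units; a real FACS-based codec would define its own set.
ACTION_UNITS = ["brow_raise", "eye_close", "jaw_open", "lip_corner_pull"]

def encode_face(intensities: dict[str, float]) -> bytes:
    """UE side: turn captured facial measurements into a compact media frame."""
    frame = {au: round(intensities.get(au, 0.0), 2) for au in ACTION_UNITS}
    return json.dumps(frame).encode()

def render_avatar(frame: bytes) -> dict[str, float]:
    """Receiving side: decode the action units that drive the avatar rig."""
    return json.loads(frame)

frame = encode_face({"jaw_open": 0.8, "brow_raise": 0.3})
print(len(frame), "bytes per frame")   # tens of bytes, even as verbose JSON
# A binary packing of the same few values, sent a few tens of times per
# second, stays well within the < 5 kbps figure cited in clause 5.11.5.
```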
5.11.4 Post-conditions
In each of the scenarios above, avatar media provides an acceptable interactive choice for a video call experience. The advantages are privacy, efficiency and ease of integration with computer software to animate a simulated conversational partner.
5.11.5 Existing feature partly or fully covering use case functionality
TS 22.228 defines the service requirements for the IMS. The IMS supports different IMS multimedia applications and a wide range of services, notably voice and video calls. There is extensive support for services tightly integrated with the 3GPP system, with extensive support for roaming and integration with both PSTN and ISDN telephony, emergency services and more. The requirements for a 3D avatar application are largely covered by existing requirements in the 5G standard for the IMS. TS 22.173 defines the media handling capabilities of the IMS Multimedia Telephony service.
The specific gaps that are addressed in clause 5.11.6 include: extended feature negotiation, enabling the user to decide whether to present video or avatar communication; the ability to support avatar communication and content efficiently; and the ability to support standardized avatar media in the 5G system.
The following KPIs are easily supported by the 5G system. They are included in order to contrast the requirements of an avatar call with those of a video call.
Use Case | End-to-end latency | Service bit rate: user-experienced data rate
Avatar call | [NOTE 1] | < 5 kbps [45]
Video call | < 150 ms preferred, < 400 ms limit; lip-synch: < 100 ms [46] | 32-384 kbit/s [46]
NOTE 1: The latency requirement for a real-time immersive service experience would be the same as for the video call. For some user experiences (smaller devices or an embedded icon-sized representation in another application, etc.) the latency tolerance could be greater.
NOTE 2: The video call KPIs are from TS 22.105 and have not changed since Rel-99. Actual transactional video call parameters may be higher now.
5.11.6 Potential New Requirements needed to support the use case
[PR 5.11.6-1] The IMS shall allow multimedia conversational communications between two or more users providing real-time conversational transfer of animated user digital representation and speech data.
[PR 5.11.6-2] The 5G system shall support a means for UEs to produce 3D avatar media to be sent uplink, and to receive this media downlink.
NOTE 1: In some scenarios, avatar media transmission entails a significantly lower data transfer rate than video.
[PR 5.11.6-3] The 5G system shall support a means for the production of 3D avatar media to be accomplished on a UE, to support confidentiality of the data used to produce the 3D avatar (e.g. from the UE cameras, etc.).
[PR 5.11.6-4] Subject to user consent, the 5G system shall support a means to provide bidirectional transitioning between video and avatar media for parties of an IMS call.
NOTE 2: An example where an IMS call could transition to an IMS-based 3D avatar call is where the communication performance of one or more parties declines to the extent that video is no longer of sufficient quality, or even possible. In this case, an avatar call between the same parties can replace the video call.
[PR 5.11.6-5] The 5G system shall support a means to enable locally generated media (e.g. text or video) of a party to be transcoded before it is rendered for the receiving party.
NOTE 3: The locally generated media could allow a party to control the appearance of its avatar, e.g. to express behavior, movement, affect, emotions, etc.
NOTE 4: The transcoding of media enables 3D avatar communication to be supported in scenarios in which a UE participating in the IMS call does not support e.g. FACS, encoding avatar media, presenting avatar media, etc.
[PR 5.11.6-6] The 5G system shall support collection of charging information associated with initiating and terminating an IMS-based 3D avatar call.
5.12 Use Case on Virtual humans in metaverse
5.12.1 Description
Virtual humans (or digital representations of humans, also referred to as 'avatars' in this use case) are simulations of human beings on computers [47]. There is a wide range of applications, such as games, film and TV productions, the financial industry (smart advisers) and telecommunications (avatars). In the coming era, the technology of virtual humans is one of the foundations of mobile metaverse services. A virtual human can be a digital representation of a natural person in a mobile metaverse service, driven by the natural person. Alternatively, a virtual human can be a digital representation of a digital assistant driven by an AI model.
Mobile metaverse services offer an important opportunity for socialization and entertainment, where the user experiences of the virtual world and the real world combine. This use case focuses on the scenario of a natural person's digital embodiment in a metaverse as a location-agnostic service experience. A virtual human is customized according to a user's personal characteristics and shape preferences. Users wear motion capture devices, vibrating backpacks, haptic gloves and VR glasses to drive the virtual human in a metaverse space for semi-open exploration. The devices mentioned above are 5G UEs, which need to collaborate with each other to complete the actions of the user and get real-time feedback.
Figure 5.12.1-1: Virtual humans in metaverse (Source: https://vr.baidu.com/product/xirang, https://en.wikipedia.org/wiki/Virtual_humans)
For a smooth experience, the motion-to-photon latency should be less than 20 ms [48]. This means that the latency between the moment that a player makes a movement and the corresponding new video shown in the VR glasses and tactile feedback from the vibrating backpack or haptic gloves should be less than 20 ms. As the asynchrony between different modalities increases, the user experience degrades, because users are able to detect asynchronies. Therefore, synchronisation among the audio, visual and tactile modalities is also very important. The synchronisation thresholds regarding audio, visual and tactile modalities measured by Hirsh and Sherrick are described as follows [49]; the obtained results vary depending on the kind of stimuli, biasing effects of the stimulus range, the psychometric methods employed, etc.
- audio-tactile stimuli: 12 ms when the audio comes first and 25 ms when the tactile comes first, to be perceived as being synchronous.
- visual-tactile stimuli: 30 ms when the video comes first and 20 ms when the tactile comes first, to be perceived as being synchronous.
- audio-visual stimuli: 20 ms when the audio comes first and 20 ms when the video comes first, to be perceived as being synchronous.
NOTE 1: Taking audio-tactile stimuli as an example: when the audio comes first, users are not able to detect asynchronies if the tactile comes within 12 ms. Conversely, when the tactile comes first, users are not able to detect asynchronies if the audio comes within 25 ms.
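The Hirsh-Sherrick thresholds above can be expressed as a simple pairwise check, sketched below in Python; the threshold values are taken directly from the list above, while the function itself is only illustrative.

```python
# Thresholds (ms) from the Hirsh-Sherrick figures quoted above, keyed by
# modality pair: (limit when the first-named modality leads, limit when
# the second-named modality leads).
SYNC_THRESHOLDS_MS = {
    ("audio", "tactile"): (12, 25),
    ("visual", "tactile"): (30, 20),
    ("audio", "visual"): (20, 20),
}

def is_perceived_synchronous(modality_a: str, modality_b: str, skew_ms: float) -> bool:
    """skew_ms > 0 means modality_a was presented first."""
    first_limit, second_limit = SYNC_THRESHOLDS_MS[(modality_a, modality_b)]
    return skew_ms <= first_limit if skew_ms >= 0 else -skew_ms <= second_limit

assert is_perceived_synchronous("visual", "tactile", 25)      # video 25 ms early: OK
assert not is_perceived_synchronous("visual", "tactile", 35)  # 35 ms: detectable
```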
5.12.2 Pre-conditions
Alice's virtual human exists as a digital representation in a mobile metaverse service. Alice's virtual human wants to explore a newly opened area, including both the natural environment and the humanities environment. The equipment Alice wears is all connected to the 5G network. The mobile metaverse service interacts with the 5G network to provide the QoS requirements. The network applies the pre-agreed policy between the mobile metaverse service provider and the operator on the QoS requirements appropriate to each mobile metaverse media data flow.
5.12.3 Service Flows
1. Alice's virtual human digital representation exists as part of the mobile metaverse service.
2. Alice's virtual human digital representation can interact with other virtual humans. These could correspond to virtual humans representing other players or to machine-generated virtual humans. Interactions could include a handshake, shopping, visiting an exhibition together, etc.
3. When someone or something touches Alice's virtual human (e.g. Alice's virtual human's hand or back touches some virtual object or human in the mobile metaverse service), Alice can see the object and feel the temperature and weight of the object at the same time. For example, when a virtual leaf falls on the hand of Alice's virtual human, Alice should see the leaf fall on her hand and sense the presence of the leaf at the same time. This means that the tactile impression from the haptic gloves should come within 30 ms after the video in the VR glasses, if the video media precedes the haptic media. Conversely, the video in the VR glasses should come within 20 ms after the tactile impression from the haptic gloves, if the tactile media precedes the video media.
5.12.4 Post-conditions
Alice can physically experience what is represented in mobile metaverse services. The experience is very realistic and consistent.
5.12.5 Existing features partly or fully covering the use case functionality
3GPP TS 22.261 [6] specifies KPIs for high data rate and low latency interactive services, including Cloud/Edge/Split Rendering, Gaming or Interactive Data Exchanging, and Consumption of VR content via tethered VR headset, as well as audio-video synchronization thresholds. Support of audio-video synchronization thresholds has been captured in TS 22.261:
"Due to the separate handling of the audio and video component, the 5G system will have to cater for the VR audio-video synchronisation in order to avoid having a negative impact on the user experience (i.e. viewers detecting lack of synchronization). To support VR environments, the 5G system shall support audio-video synchronisation thresholds:
- in the range of [125 ms to 5 ms] for audio delayed and
- in the range of [45 ms to 5 ms] for audio advanced."
Existing synchronization requirements in the current SA1 specifications are only for data transmission of one UE. Existing specifications do not contain requirements for the coordination of synchronized transmission of data packets for multiple UEs.
5.12.6 Potential New Requirements needed to support the use case
[PR 5.12.6-1] The 5G system shall provide a mechanism to support coordination and synchronization of multiple data flows transmitted via one UE or different UEs, e.g., subject to synchronization thresholds provided by a 3rd party.
[PR 5.12.6-2] The 5G system shall provide means to achieve low end-to-end round-trip latency (e.g., [20 ms]).
5.13 Use Case on digital asset container information access and certification
5.13.1 Description
Network operators offer digital asset management services for users, with which some information (e.g. IDs) can be certified by the operator. The digital asset management services can also include:
- management of the digital asset container according to the applicable regulations;
- security properties of the digital asset container (it cannot be spoofed, access control with a policy determined by the user, etc.).
In the case of immersive XR media services, the user can choose, in the digital asset container, his/her digital representation and the related information, for example the digital representation of the user (e.g. avatar), electronic money and associated financial services, identity, and purchased items (the format of this information is at the application layer and is not studied in 3GPP). This information can be used when accessing immersive XR media services, or for real-life services as the presentation of identity.
5.13.2 Pre-conditions
Alice has a service subscription with the operator. As part of the service, the network operator provides:
- digital asset container management according to the future applicable regulations;
- security protection options for the digital asset container (e.g. it cannot be spoofed, access control with a policy determined by the user, etc.).
5.13.3 Service Flows
1. Alice accesses the digital asset container data services. The digital asset container is initialized with Alice's information (digital representation (e.g. avatar) profile, IDs, ...). The digital assets are completed and modified over time. The service (allowing information to be stored and updated) can be provided by the network operator or by a third party using an operator's trusted API.
2. Alice wants to dispose of old paint and solvent at a local dump. She has to identify herself as a local resident, authorized to use the dump, and has to provide payment information to pay the fee for disposing of toxic waste. She interacts with the dump's services, and the ID and payment information is shared with the service. The authorities that run the facility then allow Alice to put down the paint and solvent. Alice accesses her digital asset container to select the list of information (local residency, payment information, ID) that she has already configured and saved (e.g. her digital representation (e.g. avatar) and other information like her electronic money and associated financial services, ID, purchased items, ...). The choice of information can be automated (without action on the part of the user). A minimal sketch of such a selective-sharing policy is given after this list.
3. She then connects to the digital service, e.g. a mobile metaverse service, with the information she authorises to share for the successful provision of the service.
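A minimal sketch, assuming a simple attribute-level sharing policy, of the selective information release in steps 2 and 3 is shown below; the attribute names and the policy structure are hypothetical illustrations, since the actual format of this information is at the application layer and not studied in 3GPP.

```python
# Alice's digital asset container (attribute names are hypothetical).
CONTAINER = {
    "avatar_profile": "...",
    "resident_id": "resident-123",
    "payment_token": "tok_abc",
    "purchased_items": ["paint", "solvent"],
}

# Alice's pre-configured sharing policy, keyed by service.
SHARING_POLICY = {
    "local_dump_service": {"resident_id", "payment_token"},
}

def share_with(service: str) -> dict:
    """Release only the attributes Alice has authorised for this service."""
    allowed = SHARING_POLICY.get(service, set())
    return {k: v for k, v in CONTAINER.items() if k in allowed}

print(share_with("local_dump_service"))  # resident_id and payment_token only
```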
5.13.4 Post-conditions
Alice is authorized to access and use the dump.