38.195
7.76.3 Test purpose
The test purpose is to verify the ability of the BS receiver to inhibit the generation of intermodulation products in its non-linear elements caused by the presence of two high-level interfering signals at frequencies with a specific relationship to the frequency of the wanted signal.
7.76.4 Method of test
7.76.4.1 Initial conditions
Test environment: Normal; see annex B.2.
RF channels to be tested for single carrier (SC): M; see clause [4.9.1].
7.76.4.2 Procedure
The minimum requirement is applied to all connectors under test.
1) Connect the connector under test to the measurement equipment as shown in annex D.2.7 for BS type 1-C.
2) Set the signal generator for the wanted signal to transmit as specified in table 7.76.5-1.
3) Set the signal generators for the interfering signals to transmit at the frequency offsets and levels specified in table 7.76.5-1.
4) Measure the BLER performance according to annex A.1.
7.76.5 Test requirements
The BLER performance shall be ≤ 10% for the reference measurement channel as specified in annex A.1, with a wanted signal at the assigned channel frequency and two interfering signals coupled to the BS type 1-C antenna connector, under the conditions specified in table 7.76.5-1 for narrowband intermodulation performance. The reference measurement channel for the wanted signal is identified in table 7.2.2-1 for each BS channel bandwidth and further specified in annex A.1. The characteristics of the interfering signals are further specified in annex D. The receiver intermodulation requirement is applicable outside the Base Station RF Bandwidth or Radio Bandwidth edges. The interfering signal offset is defined relative to the Base Station RF Bandwidth edges or Radio Bandwidth edges.

Table 7.76.5-1: Narrowband intermodulation performance requirement for A-IoT Medium Range BS

| Channel bandwidth of the lowest/highest carrier received [kHz] | Wanted signal mean power [dBm] | Interfering signal mean power [dBm] | Interfering RB centre frequency offset from the lower/upper Base Station RF Bandwidth edge [kHz] | Type of interfering signal |
| 200  | PREFSENS + 6 dB (Note 1) | -53 | ±340 | CW                              |
|      |                          | -53 | ±880 | 5 MHz NR signal, 1 RB (Note 2)  |
| 3520 | PREFSENS + 6 dB (Note 1) | -53 | ±270 | CW                              |
|      |                          | -53 | ±780 | 5 MHz NR signal, 1 RB (Note 2)  |

NOTE 1: PREFSENS depends on the sub-carrier spacing as specified in Table 7.2.2-1.
NOTE 2: Interfering signal consisting of one resource block positioned at the stated offset; the channel bandwidth of the interfering signal is located adjacent to the lower/upper Base Station RF Bandwidth edge.
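The role of the tabulated frequency offsets can be illustrated with a short, purely informative calculation (not part of the requirement): a third-order non-linearity in the receiver produces an intermodulation product at 2·f1 − f2, and the offset pairs above place that product back inside the wanted channel. A minimal Python sketch, where the function name is illustrative:

```python
def im3_offset(cw_offset_khz: float, nb_offset_khz: float) -> float:
    """Offset of the third-order product 2*f1 - f2 of the two interferers,
    relative to the Base Station RF Bandwidth edge (negative values fall
    back inside the band, i.e. onto the wanted signal)."""
    return 2 * cw_offset_khz - nb_offset_khz

# 200 kHz channel bandwidth case: CW at +340 kHz, 1-RB NR signal at +880 kHz
print(im3_offset(340, 880))   # -200.0 -> product lands 200 kHz inside the edge
# 3520 kHz channel bandwidth case: CW at +270 kHz, 1-RB NR signal at +780 kHz
print(im3_offset(270, 780))   # -240.0 -> product lands 240 kHz inside the edge
```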
26.870
1 Scope
This clause shall start on a new page.
The present document studies media-related aspects for 6G mobile networks, for the improvement of existing services and the support of new services, to meet the 6G system requirements as developed in TR 22.870 [22870] and captured by TS 22.abc [22ABC], as well as in alignment with the architecture study documented in TR 23.801-01 [23801].
Editor's note: The above references should be replaced with references to the normative specifications, when available.
This study identifies media-related opportunities and gaps in the context of 6G. Objectives include, but are not limited to:
- support the 6G studies in other working groups with media-related aspects;
- identify media-related industry trends from operators, third-party providers and verticals that may impact 6G media architectures.
The conclusions of this study will form the basis for further detailed studies as well as normative work.
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[2] 3GPP TR 22.870: "Study on 6G Use Cases and Service Requirements".
[3] 3GPP TR 23.801-01: "Study on Architecture for 6G System Stage 2".
[4] 3GPP TS 26.501: "5G Media Streaming (5GMS); General description and architecture".
[5] 3GPP TS 26.506: "5G Real-time Media Communication Architecture (Stage 2)".
[6] 3GPP TS 22.ABC: "6G System Requirements".
…
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905 [1]. example: text used to clarify abstract rules by applying them literally.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 [1]. <ABBREVIATION> <Expansion>
4 Preliminaries: assumptions and requirements
4.1 Assumptions
Editor's note: This clause documents the common architecture assumptions identified for the study. These are primarily based on the decisions in SA2 as well as the existing functions in earlier generations.
4.2 Requirements
Editor's note: This clause defines the architectural and media-related requirements that serve as the foundation for the study. It collects SA1 defined requirements and associated use cases.
4.3 Existing media services
Editor's note: This clause collects existing media services that are already addressed in 4G and 5G, and identifies the status of the services in terms of relevancy and deployments.
5 New trends and expected services related to media
Editor's note: identify media-related industry trends from operators, third-party providers and verticals that may impact 6G media architectures
6 Work topics: Description and discussion
Editor's note: This clause identifies work topics based on the objective of the study item and newly defined work topics.
6.0 Introduction
6.1 Work topic #1: Media delivery architecture
Editor's note: The present Work Task is structured according to the agreed subsection format, including: Description, Key Issues, Context and External Factors, Potential Solutions, Mapping of Issues to Solutions, and Conclusions. The subsection ordering may be adapted as appropriate for the specific content of the Work Task.
6.2 Work topic #2: 6G media
Editor's note: The present Work Task is structured according to the agreed subsection format, including: Description, Key Issues, Context and External Factors, Potential Solutions, Mapping of Issues to Solutions, and Conclusions. The subsection ordering may be adapted as appropriate for the specific content of the Work Task.
6.3 Work topic #3: Media aspects related to SA2 topics
Editor's note: The present Work Task is structured according to the agreed subsection format, including: Description, Key Issues, Context and External Factors, Potential Solutions, Mapping of Issues to Solutions, and Conclusions. The subsection ordering may be adapted as appropriate for the specific content of the Work Task.
6.4 Work topic #4: Media for ubiquitous access
Editor's note: The present Work Task is structured according to the agreed subsection format, including: Description, Key Issues, Context and External Factors, Potential Solutions, Mapping of Issues to Solutions, and Conclusions. The subsection ordering may be adapted as appropriate for the specific content of the Work Task.
6.5 Work topic #5: Trusted and private communication for media
Editor's note: The present Work Task is structured according to the agreed subsection format, including: Description, Key Issues, Context and External Factors, Potential Solutions, Mapping of Issues to Solutions, and Conclusions. The subsection ordering may be adapted as appropriate for the specific content of the Work Task.

6.X Work topic #X:
Editor's note: The present Work Task is structured according to the agreed subsection format, including: Description, Key Issues, Context and External Factors, Potential Solutions, Mapping of Issues to Solutions, and Conclusions. The subsection ordering may be adapted as appropriate for the specific content of the Work Task.
7 Consolidated findings
Editor's note: This clause can be used to consolidate findings based on the considerations in clause 6.
8 Recommendations for follow-up work
Editor's note: This clause will provide recommendations for follow-up work.

Annex A: Additional background on selected work topics
Editor's note: The present annex collects supplementary background information related to selected work topics. The intention is to maintain a consolidated and persistent record of contextual material to support ongoing and future work.

Annex X: Change history

Change history
| Date | Meeting | TDoc | CR | Rev | Cat | Subject/Comment | New version |
| 2026-01-02 | SA4#135 SA4 Ad Hoc group call on FS_6G_MED | S4aP260004 | | | | TR skeleton for FS_6G_MED | 0.0.1 |
6.1.1 Description
6.1.2 Key issues
6.1.3 Context and external factors
6.1.4 Potential solutions and way forward
6.1.5 Conclusions
26.958
1 Scope
This clause shall start on a new page. The present document …
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[aa] Kerbl et al.: "3D Gaussian Splatting for Real-Time Radiance Field Rendering", ACM Transactions on Graphics, vol. 42(4), July 2023.
[ac] 3GPP TR 26.928: "Extended Reality (XR) in 5G".
[ad] Wang et al.: "Image Quality Assessment: From Error Visibility to Structural Similarity", IEEE Transactions on Image Processing, vol. 13, no. 4, April 2004.
[ae] Satish et al.: "Designing Efficient Sorting Algorithms for Manycore GPUs", Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, pp. 1-10.
[af] Porter et al.: "Compositing Digital Images", Computer Graphics (SIGGRAPH '84 Proceedings), 18(3), pp. 253-259.
[ag] J. Wang, M. Chen, N. Karaev, A. Vedaldi, C. Rupprecht, D. Novotny: "VGGT: Visual Geometry Grounded Transformer", arXiv:2503.11651, March 2025.
[ah] H. Xu, H. Yu, S. Peng, R. Pautrat, M. Pollefeys, A. Geiger: "DepthSplat: Connecting Gaussian Splatting and Depth", arXiv:2410.13862, October 2024.
[ai] L. Jiang, Y. Mao, L. Xu, et al.: "AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views", arXiv:2505.23716, May 2025.
[aj] K. Zhang, S. Bi, H. Tan, Y. Xiangli, N. Zhao, K. Sunkavalli, Z. Xu: "GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting", arXiv:2404.19702, April 2024.
[ak] G. Kang, S. Nam, S. Yang, X. Sun, S. Khamis, A. Mohamed, E. Park: "iLRM: An Iterative Large 3D Reconstruction Model", arXiv:2507.23277, July 2025.
[al] W. Lin, Y. Feng, Y. Zhu: "MetaSapiens: Real-Time Neural Rendering with Efficiency-Aware Pruning and Accelerated Foveated Rendering", arXiv:2407.00435, July 2024.
[am] F. Hahlbohm, F. Friederichs, T. Weyrich, et al.: "Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency", arXiv:2410.08129, October 2024 (corresponds to the "HTGS" mention in the text).
[an] Q. Hou, F. Farhadzadeh, H. Le, et al.: "Sort-free Gaussian Splatting via Weighted Sum Rendering", arXiv:2410.18931, October 2024.
[ao] S. Kheradmand, D. Vicini, G. Kopanas, et al.: "StochasticSplats: Stochastic Rasterization for Sorting-Free 3D Gaussian Splatting", arXiv:2503.24366, March 2025.
It is preferred that the reference to 21.905 be the first in the list.
3 Definitions of terms, symbols and abbreviations
This clause and its three subclauses are mandatory. The contents shall be shown as "void" if the TS/TR does not define any terms, symbols, or abbreviations.
3.1 Terms
For the purposes of the present document, the terms given in 3GPP TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in 3GPP TR 21.905 [1].
anisotropic: refers to Gaussians whose shape and orientation vary by direction, allowing them to take ellipsoidal forms instead of uniform spheres. This enables more accurate modelling of local geometry and surface details by adapting each splat's spread and rotation in 3D space.
3D tile: a discrete spatial partition of a massive geospatial dataset, defined by a specific bounding volume, enabling the optimized streaming and progressive rendering of content based on the viewer's proximity and field of view.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in 3GPP TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in 3GPP TR 21.905 [1].
3DGS 3D Gaussian Splatting
LOD Level Of Detail
PLY Polygon file format
SfM Structure from Motion
SH Spherical Harmonic
UE User Equipment
XR eXtended Reality
4 3DGS representation format
[Editor’s note: Placeholder for the description of the 3DGS format and characteristics]
4.1 Introduction
A 3D Gaussian Splatting (3DGS) scene is represented as a set of continuous primitives, anisotropic 3D Gaussians, each carrying geometric parameters and radiometric attributes. It was first introduced in 2023 in the research paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering" from INRIA [aa]. The data model captures a primitive's spatial support (position, orientation, shape) and its appearance (view-dependent color).
4.2 Primitives
A 3DGS primitive is an oriented 3D Gaussian with the following fields. The items below describe data elements, independent of any specific encoding:
- Position: 3D scene position of the primitive, expressed with x, y and z coordinates in the local coordinate system.
- Rotation: primitive orientation, which may be defined with a normalized quaternion, defining the local axes of the Gaussian.
- Scale: per-axis scales that set the primitive's spatial size.
- Opacity: blending weight controlling the splat's contribution during compositing.
- Direct color (DC): per-channel base color (linear RGB) used when no view dependence is applied.
- Spherical harmonics (SH) color: per-channel coefficient set up to a declared order between 0 and 3, enabling view-dependent color. Depending on the order, the number of stored SH coefficients per channel is 0, 3, 8 or 15.
The source 3DGS data is generally stored in a PLY file containing 32-bit floating-point values, but other data types may also be used, such as 16-bit integers or 16-bit floating-point values.
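As an informative illustration (not a normative data layout), the fields above can be sketched as a simple data structure; the type and field names below are hypothetical, and the SH coefficient counts follow from the order as (order + 1)² − 1 per channel:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

def sh_coeff_count(order: int) -> int:
    """Number of stored view-dependent SH coefficients per colour channel,
    excluding the DC term: (order + 1)**2 - 1, i.e. 0, 3, 8, 15 for orders 0-3."""
    return (order + 1) ** 2 - 1

@dataclass
class GaussianPrimitive:
    position: Tuple[float, float, float]   # (x, y, z) in the local coordinate system
    rotation: Tuple[float, float, float, float]  # normalized quaternion
    scale: Tuple[float, float, float]      # per-axis scales
    opacity: float                         # blending weight for compositing
    dc_color: Tuple[float, float, float]   # base linear RGB
    sh_order: int = 0
    # 3 * sh_coeff_count(sh_order) values when view dependence is used
    sh_coeffs: List[float] = field(default_factory=list)

print([sh_coeff_count(n) for n in range(4)])  # [0, 3, 8, 15]
```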
4.3 Camera parameters
To ensure accurate and high-quality rendering, it is important to reuse the position and settings of the cameras used to capture the 3DGS scenes during the rendering process. For each acquired view, complete camera information may be necessary: extrinsic parameters (pose as a matrix, or quaternion and translation, in the scene's coordinate system), intrinsic parameters (models: pinhole, fisheye, etc.), lens parameters, as well as temporal metadata (time stamp, shutter model), photometric parameters (exposure, white balance), and a camera identifier for multi-sensor systems. This information is generally available during training and may be retained with the 3DGS .ply files. This data makes it possible to constrain the user's positions to the correctly acquired position zones, thus limiting low-quality renderings, and allows the reuse of camera information during rendering to update projection parameters and obtain renderings close to the acquired images. [Editor's Note: How those parameters may improve the rendering could be good to study.]
5 Use cases
[Editor’s note: Placeholder for the description of the use cases]
5.1 Introduction
The present clause describes service scenarios illustrating the generation and consumption of 3DGS scenes, as well as associated working assumptions on the service configurations, which serve as the basis for the detailed analysis documented in the following clauses of this technical report.
5.2 On-device capture and sharing of a static 3DGS scene
5.2.1 Description
A user initiates a short capture session on a mobile device (UE) using the rear or front camera(s). Various typical capture patterns may be supported, for example:
- Object/person sweep ("object scan", "3D selfie"): the user moves around a subject at close range, recording multiple viewpoints to ensure good coverage and parallax.
- Panorama-like clip: the user records a brief handheld video from one fixed position to capture a landscape, room, or monument with a small camera motion (panning).
During capture, the application may guide the user (e.g., coverage hints, exposure/focus locks) and may collect auxiliary signals (e.g., estimated pose, depth sensor data, lens parameters, focal length, inertial measurement unit data, device GPS, capture framerate, …).
From the captured frames, the UE generates a static 3DGS model. Depending on device capability and policy:
- Generation may happen locally in the UE (e.g. for small objects and short captured clips).
- Generation may be offloaded to the edge/cloud (e.g. for faster turnaround or higher quality).
- The generation pipeline may be configured to meet the capabilities of the UE, and may include image selection, photometric normalization, background segmentation, UE metadata-based initialization, optimization, decimation, and quantization. Other techniques suitable for mobile devices may also be used.
The 3DGS models may be packaged for exchange and sent to another UE via commonly available channels (e.g., MMS, OTT messaging, or file transfer). Upon reception on a UE, the application loads the 3DGS model and provides an interactive viewing experience:
- 6DoF or constrained-6DoF: the user is offered the ability to rotate the scanned object/person, zoom, and slightly shift the viewpoint.
- For larger scenes or non-360 scanned models, the application may constrain motion to positions around the original capture position(s) to avoid out-of-bound views.
- The 3DGS model may be rendered in AR or VR, depending on device capabilities and user preferences.
- In addition to 2D displays, dedicated AR/VR devices (glasses or headsets) could be used to display the 3DGS models within a 360-degree environment.
3GPP TR 26.928 [ac] outlines the 5G XR framework, including a definition of XR use cases and delivery modes. This use case is aligned with Annex A.2 "3D Image Messaging", which describes a capture, 3D model creation, shared and local viewing flow.
5.2.2 Working assumptions
This section outlines the end-to-end processing chain, from capture to rendering on the receiving UE. It enumerates the key functional blocks and device capability requirements.
- Acquisition and 3DGS content generation
  - Sensor capture (RGB video, potentially depth, position and orientation if available).
  - On-device 3DGS generation (including structure from motion and training), or upload of 2D captures to the edge/cloud for content generation, followed by reception of the generated static 3DGS asset on the UE.
  [Editor's note: both workflows are expected to be documented because they have different uplink/downlink traffic and latency profiles.]
- Compression and packaging
  - The resulting static 3DGS scene is serialized into a delivery format.
  [Editor's note: characterize which Gaussian parameters need to be signalled (position, scale, orientation, color, spherical harmonics, opacity, etc.) and what level of precision and number of Gaussians is required for acceptable quality meeting the UE's performance capabilities. This directly impacts file size and bitrate.]
  [Editor's note: to be considered whether existing 3GPP media delivery frameworks (e.g. MMS, messaging, file transfer) may carry static 3DGS models without new protocol work, or whether new signalling is needed.]
- Transport and delivery
  - For 3DGS model sharing using 3D messaging, the static 3DGS object could be delivered as a discrete file or binary stream using existing 3GPP services (e.g. MMS, messaging, file download, …).
  - Given the amount of data in a static scene and the limits in terms of the number of Gaussians for efficient rendering on a UE, latency is not considered critical.
- Decoding and decompression
  - The UE receiving the 3DGS model parses and decompresses the 3DGS structure and may load it into GPU memory.
- Rendering
  - Real-time splat-based renderer on a mobile device via CPU and/or GPU.
  - Viewpoint movement is locally constrained to the "captured frustum envelope", i.e. viewpoint positions close to where the original cameras were during capture, to avoid visual holes and to respect creative intent.
  - User navigation may be restricted by the application to ensure collision avoidance with the 3DGS object and/or to limit the possible views to the captured areas.
  [Editor's note: how "allowed navigation volume" is expressed to the receiver for safety, privacy, and quality reasons.]
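The viewpoint constraint described above can be illustrated with a minimal, informative sketch. The function name and the simplified rule (a maximum distance from the nearest original capture position, rather than full camera frusta) are assumptions for illustration only:

```python
import math

def clamp_to_capture_envelope(requested, capture_positions, max_dist):
    """Hypothetical navigation constraint: if the requested viewpoint is
    farther than max_dist from every original capture position, pull it
    back toward the nearest capture position so that it lies on the
    boundary of the allowed envelope."""
    nearest = min(capture_positions, key=lambda p: math.dist(p, requested))
    d = math.dist(nearest, requested)
    if d <= max_dist:
        return requested
    t = max_dist / d  # interpolate from nearest toward requested
    return tuple(n + t * (r - n) for n, r in zip(nearest, requested))

poses = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(clamp_to_capture_envelope((5.0, 0.0, 0.0), poses, 1.0))  # (3.0, 0.0, 0.0)
```

A real implementation would also account for camera orientations and fields of view, so that only well-covered viewpoints are reachable.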
5.3 Exploration of a large 3DGS environment
5.3.1 Description
In this scenario the user explores a large 3DGS environment on a UE with responsive 6DoF or constrained-6DoF navigation. A user launches an application and selects a large 3DGS scene (e.g., museum, mall level, outdoor plaza, city, …). The UE requests the visible parts of the scene around the current pose and prefetches likely next regions based on motion prediction. The navigation is expected to be constrained to captured spaces to avoid out-of-bounds views. The environment is delivered as an adaptive set of 3D tiles at various levels of detail (LOD). Edge/cloud assistance may be used for content preparation and low-latency delivery.
At session start, the service negotiates device capabilities and network constraints, then selects an initial set of 3D tiles at various levels of detail to display the 3DGS scene according to the user's position and orientation. The selection of 3D tiles and their level of detail (LOD) is performed based on user movement, device capabilities and bandwidth. A buffering process may be used to mask variations in bandwidth and quality in each region. The selection process must maintain a constant number of 3D Gaussian splats displayed at any given time to ensure good quality and smooth navigation by minimizing variations in visual quality and in the number of frames per second rendered.
According to the 3D tiles loaded, the rendering process may display the 3DGS scene. A smooth transition may be applied by the renderer to limit the visual effect of level-of-detail changes. For safety and quality, motion is restricted to the captured region. Interactive delivery of 3D tiles is used to receive the 3DGS data at various levels of detail. The 3DGS model may be rendered in AR or VR, depending on device capabilities and user preferences. In addition to smartphone devices, dedicated AR/VR devices (glasses or headsets) could be used to display the 3DGS models.
TR 26.928 [ac] alignment (informative): this use case maps to interactive 6DoF streaming with optional split compute/rendering at the edge; downloaded media with local interactivity may apply when scenes are cached for offline revisit.
5.3.2 Working assumptions
This section outlines the end-to-end processing chain, emphasising adaptive delivery and device capability requirements.
- Acquisition and content generation
  - The capture and generation of large 3DGS scenes are not addressed in this use case.
  - Based on the 3DGS models, the region-based parts of the 3DGS scenes are generated for adaptive delivery.
- Compression and packaging
  - 3D tiled LODs are serialized into a delivery format with signalling for spatial and LOD indices and dependencies.
  [Editor's note: the workflow is expected to be documented because it has different uplink/downlink traffic and latency profiles according to the precision per LOD and 3D tiles.]
  [Editor's note: characterize which Gaussian parameters need to be signalled (position, scale, orientation, color, spherical harmonics, opacity, etc.) and what level of precision and number of Gaussians is required for acceptable quality meeting the UE's performance capabilities. This directly impacts file size and bitrate.]
  [Editor's note: to be considered whether existing 3GPP media delivery frameworks (e.g. MMS, messaging, file transfer) may carry static 3DGS models without new protocol work, or whether new signalling is needed.]
- Transport and delivery
  - Interactive delivery and predictive prefetch; edge-assisted content hosting is recommended for latency control.
  - Latency targets are tighter due to interactive navigation; buffering strategies aim to minimise latencies and popping visual artifacts.
- Decoding and decompression
  - The UE parses the 3D tile indices, fetches/decompresses 3DGS chunks, and manages GPU residency for active 3D tiles.
- Rendering
  - Real-time splat-based renderer on mobile GPU with 3D tile/LOD switching and temporal stability safeguards.
  - Navigation is constrained to the allowed-view volume derived from the capture information.
  - Navigation may be further constrained by collision detection with 3DGS objects in the scene, for example by analysing the local splat density, using object bounding boxes, or with a pre-defined authorized navigation area.
  [Editor's note: how "allowed navigation volume" is expressed to the receiver for safety, privacy, and quality reasons.]
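The constant-splat-budget tile/LOD selection described in this use case can be sketched as a greedy procedure. This is purely informative; the function name, the tile representation and the distance-priority heuristic are assumptions for illustration:

```python
def select_lods(tiles, budget):
    """Pick one LOD per 3D tile under a global splat budget.

    Each tile is (distance_to_viewer, [splat counts per LOD, coarse to
    fine]). Every tile starts at its coarsest LOD so the whole scene is
    covered; the remaining budget is then spent refining the closest
    tiles first, which keeps the displayed splat count roughly constant."""
    choice = {i: 0 for i in range(len(tiles))}
    used = sum(lods[0] for _, lods in tiles)
    for i, (_dist, lods) in sorted(enumerate(tiles), key=lambda e: e[1][0]):
        while choice[i] + 1 < len(lods):
            extra = lods[choice[i] + 1] - lods[choice[i]]
            if used + extra > budget:
                break
            choice[i] += 1
            used += extra
    return choice, used

# near tile gets the fine LOD, far tile stays coarse under a 600-splat budget
tiles = [(1.0, [100, 400]), (10.0, [100, 400])]
print(select_lods(tiles, 600))  # ({0: 1, 1: 0}, 500)
```

A deployed selector would additionally weight visibility and hysteresis to avoid LOD oscillation between frames.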
5.4 Dynamic 3DGS content
5.4.1 Description
A UE receives time-varying 3DGS content depicting a dynamic subject or scene (e.g., a performer, dancer, singer, exhibition moment, band, sport action, …). The UE renders the 3DGS content sequence in real time. The delivery and rendering process may also be assisted by the network through mechanisms such as partial delivery or network-assisted rendering. The user is offered the ability to adjust the viewpoint locally within a constrained navigation volume while the subject or scene itself changes dynamically over time. This is analogous to volumetric video streaming, except that the rendering primitive is 3D Gaussian splats rather than textured meshes or voxels. This use case is mainly focused on the delivery, decoding, and real-time rendering of pre-recorded dynamic 3DGS sequences (e.g. on-demand streaming or file download). Depending on feasibility, live dynamic 3DGS capturing and delivery may also be considered at a later stage. This use case aligns with 3GPP TR 26.928 [ac], specifically Use Case 3: Streaming of Immersive 6DoF (non-live/on-demand variant).
5.4.2 Working assumptions
This section outlines the end-to-end processing chain covering the delivery and rendering of dynamic 3DGS content, emphasising adaptive delivery and device capability requirements.
- Acquisition and content generation
  - The capture and generation of dynamic 3DGS models are not the focus of this use case, but the feasibility of such processes using non-professional setups may be considered at a later stage.
- Compression and packaging
  - Sequence serialization with time indices.
  - The 3DGS sequence is compressed to meet the service and bandwidth constraints in terms of bitrate.
  [Editor's note: the scene complexity may impact the feasibility of this use case on mobile platforms and the associated limitations need to be identified.]
  [Editor's note: the workflow is expected to be documented because it has different uplink/downlink traffic and latency profiles.]
  [Editor's note: characterize which Gaussian parameters need to be signalled (position, scale, orientation, color, spherical harmonics, opacity, etc.) and what level of precision and number of Gaussians is required for acceptable quality meeting the UE's performance capabilities. This directly impacts file size and bitrate.]
  [Editor's note: to be considered whether existing 3GPP media delivery frameworks (e.g. MMS, messaging, file transfer) can carry static 3DGS models without new protocol work, or whether new signalling is needed.]
- Transport and delivery
  - On-demand streaming and file delivery are used to transmit the 3DGS compressed data.
  - Partial delivery or streaming of the 3DGS compressed data may also be supported depending on the 3DGS data characteristics as well as UE and network capabilities.
  - The delivery of the UE's pose information is needed when performing partial delivery.
- Decoding and decompression
  - The UE and/or network parses the dynamic scene, fetches/decompresses 3DGS data, and manages GPU residency for visible Gaussian splats.
- Rendering
  - Real-time splat-based renderer.
  - Navigation is constrained to the allowed-view volume derived from the capture information.
  - Navigation may be further constrained by collision detection with 3DGS objects in the scene, for example by analysing the local splat density, using object bounding boxes, or with a pre-defined authorized navigation area.
  - Rendering may also be done by the network when using network-assisted rendering.
  [Editor's note: how "allowed navigation volume" is expressed to the receiver for safety, privacy, and quality reasons.]

5.5 3D Avatar communication
A user is represented by an avatar that combines a rigged mesh and a 3D Gaussian Splat layer for fine detail. The sender transmits a time-aligned animation stream and a static 3DGS as part of the base avatar. The animation stream drives the rig and blendshapes. The renderer composites mesh shading with 3DGS contributions. The approach reduces texture distribution and enables photoreal micro-geometry without heavy UV textures. The following figure depicts the use case:
Figure 1: Use case of a 3DGS avatar in a 3D avatar call.
[Editor's note: We expect refinement on the format later.]
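The splat compositing relied upon by the renderers in this clause is the classical front-to-back "over" blending described by Porter et al. [af]: C = Σ cᵢ·αᵢ·Π_{j<i}(1 − αⱼ) over depth-sorted splats. An informative single-channel sketch (a real renderer operates per pixel on RGB):

```python
def composite_front_to_back(splats):
    """Front-to-back 'over' compositing of depth-sorted (color, alpha)
    pairs: each splat contributes its color weighted by its opacity and
    by the accumulated transmittance of everything in front of it."""
    color = 0.0
    transmittance = 1.0
    for c, a in splats:
        color += c * a * transmittance
        transmittance *= (1.0 - a)
    return color

# a half-opaque near splat dominates the splats behind it
print(composite_front_to_back([(1.0, 0.5), (0.5, 0.5), (0.0, 1.0)]))  # 0.625
```

Sort-free alternatives such as weighted sum rendering [an] and stochastic rasterization [ao] replace the depth sort that this formulation requires.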
66b5fc78a94ec8f2de89f788f52261bb
26.958
6 Quality factors
[Editor’s note: Placeholder for the description of the quality factors]
6.1 Introduction
6.2 Discussion
The objectives of FS_3DGS_MED cover a wide range of aspects in the end-to-end workflow related to 3DGS, including, among the workflow-related aspects:
b. Consistent end-to-end quality across different capturing and rendering systems for 3DGS representations.
In order to approach this objective, it is necessary to have an understanding of the different factors which may affect the quality of a certain 3DGS representation, including those related to capturing, generation, representation formats, as well as rendering. Whilst it is useful to study these different aspects separately in order to identify the different solutions and technologies available, each of these aspects also has significant impacts on the target quality of experience for the end consumer. Table 1 below shows a concise list of different factors which may affect the overall quality of an experience using 3DGS:
Table 1: Factors which may affect the quality of a 3DGS experience
Capture:
- Number of cameras
- Camera placement or capture path(s)
- Depth information
Generation:
- Initialization tools (e.g. SfM)
- Projection tools
- Adaptive density control tools (dropout, overfitting, etc.)
- AI tools
Format:
- Attributes (what and how many)
- Bit depths for attributes
- Compression aspects
Rendering:
- Hardware / software implementations
- Device display type
In order to study the factors at each stage in the workflow, it may be useful to define a set of requirements for each reference scenario to be used in the study. Requirements may further be categorized into content requirements and workflow requirements. Content requirements here may be associated with the nature of the volumetric scene which is to be captured in a 3DGS representation format, for example whether the capture is an object or a scene, the volume of the scene, and the characteristics of the details of interest within the scene.
In contrast, workflow requirements may result from limitations in the capturing, generation or rendering environments of the use case or scenario. A brief example of these requirements is listed in table 2 below.
Table 2: Requirements for consideration associated to reference scenarios
Content requirements:
- Object vs scene
- Static vs dynamic
- Scene volumetric size
- Details of interest (number of persons / objects in the scene) including material properties
Workflow requirements:
- Limitations in capturing (no. cameras, capture placement)
- Limitations in generation (processing power, hardware)
- Live vs on-demand aspects
6.3 Complexity
6.4 Metrics
6.5 Data size
7 Static 3DGS content creation
[Editor’s note: Placeholder for the description of the 3DGS content creation processes]
7.1 3DGS generic workflow
As an example, a 3DGS model construction from the capture of 2D data follows the production workflow illustrated in figure 2:
Figure 2: 3DGS model production workflow from a 2D video
The workflow consists of three parts. First, the capture phase: during this phase, numerous views of a real-world object or scene are acquired from different positions, typically as photos or video frames. Next, a Structure from Motion (SfM) step estimates the cameras' intrinsic parameters and poses and reconstructs a sparse 3D structure, which serves as initialization for a set of Gaussians roughly aligned with the object. If the captures were performed using 3D scanning, this step may use depth information to improve accuracy. Finally, during training, the rendered image of the current Gaussian scene is compared to the corresponding real-world image for each view and the photometric error is minimized. As optimization progresses, the Gaussians are split or pruned, resized, reoriented, and their color/opacity is adjusted, progressively densifying and refining the scene to produce a 3D representation that is accurate and consistent across viewpoints.
7.2 Capture
Content acquisition for 3D Gaussian Splats (3DGS) relies on capturing accurate 3D data from real-world objects and environments. Primary methods include:
- 2D image and video capture: video sequences or sets of 2D images captured from various positions and orientations offer coverage of dynamic or complex environments. Sequential frames may provide dense colour and positional cues for downstream 3D reconstruction and Gaussian splat generation, making this method practical for capturing motion or large-scale scenes efficiently.
- 3D scanning: high-precision laser or LiDAR scanners provide accurate spatial measurements of objects and scenes, capturing dense geometric detail suitable for Gaussian splat representation. These devices enable large-scale and high-fidelity acquisition, though they require specialized equipment and controlled capture conditions.
- Multi-image capture: still images taken from multiple viewpoints allow detailed reconstruction of object surfaces. Overlapping images ensure sufficient coverage and colour information for later conversion to the 3DGS format. This approach is cost-effective and flexible, particularly for small to medium scale scenes.
These capture methods focus on acquiring raw visual and spatial data, forming the foundation for 3DGS content generation.
7.3 Structure from Motion
The Structure from Motion (SfM) step consists of performing feature extraction and matching, and retrieving the camera parameters when they are unknown. After image alignment, the process creates a sparse point cloud that is further densified with depth calculation methods. Camera parameters, often known from the capturing system, include the focal length, the principal point, and the distortion model of the lens. Even if the focal parameters may be estimated from the captured video by the SfM software, ignoring the lens distortion model may result in an anamorphic 3D model.
Given frames with unknown intrinsics/extrinsics:
• Feature detection and matching: detect keypoints (e.g., using Scale-Invariant Feature Transform (SIFT) or Oriented FAST and Rotated BRIEF (ORB)), build tracks and a view graph, then perform geometric verification.
• Relative pose: estimate the essential matrix via a 5-point solver within RANdom SAmple Consensus (RANSAC); decompose it into a rotation and translation (R, t) and select the cheirality-consistent solution.
• Incremental Structure from Motion (SfM) + Bundle Adjustment (BA): grow the reconstruction, triangulate new 3D points, and refine intrinsics/extrinsics/structure with bundle adjustment (BA) using a robust loss and optional radial distortion.
This translates into the following optimization problem, with observations x_ik and projection function π:

min_{ {K_k, R_k, t_k}, {X_i} } Σ_{(i,k)∈V} ρ( ‖ x_ik − π(K_k, R_k, t_k, X_i) ‖² )

This equation minimizes the reprojection error between observed 2D points and projected 3D points across multiple views, where:
• (K_k, R_k, t_k): camera parameters for each view k:
 - K_k: intrinsic calibration matrix (focal length, principal point, skew)
 - R_k: rotation matrix (camera orientation)
 - t_k: translation vector (camera position)
• X_i: 3D coordinates of the i-th point in the scene (the sparse reconstruction)
• V: visibility set, pairs (i, k) where point i is visible in camera k
• x_ik: observed 2D pixel coordinates of point i in image k
• π: projection function that maps 3D point X_i to 2D image coordinates using camera parameters and optional distortion
• ‖·‖²: squared L2 norm for squared Euclidean distance
• ρ: robust loss function (e.g., Huber, Cauchy) to reduce the influence of outliers
The goal is to minimize the difference between where points actually appear in images (x_ik) and where they should appear given the estimated 3D structure and camera parameters (π(K_k, R_k, t_k, X_i)).
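As an illustrative sketch of the reprojection error minimized by bundle adjustment, the following Python/NumPy code projects a 3D point with one camera and evaluates the squared residual. Function names (`project`, `reprojection_error`) and the example camera values are assumptions for illustration, not from the cited SfM tools; distortion and the robust loss ρ are omitted.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X into pixel coordinates with camera (K, R, t)."""
    Xc = R @ X + t                 # world -> camera coordinates
    uvw = K @ (Xc / Xc[2])         # perspective division, then intrinsics
    return uvw[:2]

def reprojection_error(K, R, t, X, x_obs):
    """Squared L2 reprojection residual for a single observation x_obs."""
    r = x_obs - project(K, R, t, X)
    return float(r @ r)

# Hypothetical camera: focal length 500 px, principal point (320, 240), identity pose
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([0.0, 0.0, 5.0])      # a point 5 m straight ahead of the camera
# The point projects exactly onto the principal point, so the residual is zero
err = reprojection_error(K, R, t, X, np.array([320.0, 240.0]))
```

Bundle adjustment repeats this residual evaluation over all (i, k) pairs in the visibility set and optimizes the camera and point parameters jointly.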
7.4 Training
7.4.1 Introduction
The third step is the creation of the Gaussian splats associated with each 3D point, using an iterative optimization process that searches for the splats that match the source video as closely as possible for a given pose (position + orientation), by optimizing the size, shape, colour and transparency of the splats.
7.4.2 Gaussian initialization
From the sparse cloud, instantiate an anisotropic Gaussian per point/patch with mean μ, rotation R, scales s, and covariance
Σ = R S Sᵀ Rᵀ, with S = diag(s).
Each splat has an opacity α and a view-dependent color encoded by SH coefficients, a standard low-order lighting basis.
7.4.3 Projective footprint
In camera k, using the perspective projection with Jacobian J evaluated at μ and the world-to-camera transform W, the screen-space ellipse is given by the 2D covariance
Σ′ = J W Σ Wᵀ Jᵀ.
For pixel p, front-to-back alpha compositing (Porter–Duff) yields [ca]:
C(p) = Σᵢ cᵢ αᵢ Πⱼ<ᵢ (1 − αⱼ)
7.4.4 Loss and optimization
For training, the reconstruction is supervised using observed frames and corresponding masks with the following objective:
L = (1 − λ) L₁ + λ L_D-SSIM
where L_D-SSIM follows the structural similarity metric of [ad]. The 3DGS generation becomes a differentiable optimization problem where the entire rendering pipeline, from 3D Gaussian parameters through projection and alpha blending to final pixel colors, may be backpropagated through to compute gradients. This differentiability enables machine learning techniques to automatically refine all parameters simultaneously: 3D positions {μᵢ}, covariances {Σᵢ}, opacities {αᵢ}, colors {cᵢ}, and camera parameters {Kₖ, Rₖ, tₖ}. Machine learning optimizers such as Adaptive Moment Estimation (Adam) then use gradients to iteratively minimize the error through gradient descent, just as in neural network training.
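The covariance parameterization and the per-splat opacity evaluation above can be sketched in a few lines of Python/NumPy. This is an assumption-level illustration, not the implementation of [aa]; function names and the example values are invented for clarity.

```python
import numpy as np

def covariance_3d(R, scales):
    """Sigma = R S S^T R^T with S = diag(scales), the 3DGS covariance factorization.
    This form keeps Sigma symmetric positive semi-definite during optimization."""
    M = R @ np.diag(scales)
    return M @ M.T

def splat_alpha(delta, sigma2d, opacity):
    """Screen-space contribution: opacity * exp(-0.5 * d^T Sigma'^-1 d),
    where delta is the pixel offset from the projected splat centre."""
    return opacity * np.exp(-0.5 * delta @ np.linalg.solve(sigma2d, delta))

R = np.eye(3)                          # no rotation
scales = np.array([2.0, 1.0, 0.5])     # anisotropic splat extents
sigma = covariance_3d(R, scales)       # diagonal [4, 1, 0.25]
# At the splat centre (delta = 0) the weight is exp(0) = 1, so alpha = opacity
alpha = splat_alpha(np.zeros(2), np.eye(2), opacity=0.8)
```

During training, gradients flow through both functions so that scales, rotation and opacity are refined jointly with the photometric loss.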
7.4.5 Discussion
The original paper on 3D Gaussian Splatting for Real-Time Radiance Field Rendering [aa], published in August 2023, presents a workflow as shown below:
Figure 1: Organization of the workflow [aa]
Whilst the original workflow includes a closed-loop optimization process using convergence against ground truths to output a learned representation, it should be noted that neural networks were not used in any part of the workflow. Since the publication of the original paper, there has been an influx of research in academia to improve all aspects of the workflow, including the adoption of neural network-based techniques. In particular, some methods gaining interest include:
• VGGT: Visual Geometry Grounded Transformer [ag], a feed-forward neural network that directly infers all key 3D attributes of a scene, including camera parameters, point maps, depth maps and 3D point tracks, from one, a few, or hundreds of views. VGGT is quickly replacing the more traditional Structure from Motion technique for the initialization process creating a sparse point cloud, due to its faster and more accurate algorithm, especially when the number of input views is limited.
• Optimized learned representations to create 3DGS representations directly, such as DepthSplat [ah] (which connects depth estimation and 3DGS with a shared architecture) and AnySplat [ai] (which feeds uncalibrated input images into a feed-forward network without the need for known camera poses or per-scene optimization).
• Scalable large reconstruction models for 3DGS, which focus on transformer-based large reconstruction models that predict 3D Gaussian primitives (as opposed to adopting triplane NeRF as the scene representation). Methods such as GS-LRM [aj] and iLRM [ak] have the advantage of supporting scalability whilst being fast and maintaining good visual quality.
8 3DGS rendering
[Editor’s note: Placeholder for the description of the 3DGS rendering processes]
8.1 Pipeline description
The commonly used pipeline to render 3DGS objects is outlined in this clause. It defines the per-frame inputs, configurable options, and outputs of a representative implementation, without prescribing specific algorithms. The inputs are, per 3DGS frame and per rasterized image: • 3DGS data: per-Gaussian attributes as defined in section 4.2 of this document (e.g., position, scale, rotation, opacity, color (DC) and spherical harmonics (SH) coefficients). • Rendering parameters: camera pose (position/orientation), camera intrinsics (focal lengths, principal point), image resolution/viewport, and the view–projection matrix derived from these. • Rendering options (implementation-dependent). - Appearance model: SH degree/order used for view-dependent color (e.g., DC only, order-1, order-3). - Numerical precision: choice of FP16/FP32 for buffers and arithmetic, and tolerances for early termination. - Compositing rule: color blending mode for semi-transparent accumulation. - Parallelization strategy: work decomposition per-tile (screen-space bins), per-primitive (per-Gaussian), or per-pixel, including tile size and binning policy. - Culling policy: frustum and projected-size thresholds. The outputs are: • Primary: an image buffer containing the rasterization of the 3DGS object at the requested resolution and color space. • Optional auxiliaries: alpha, opacity or depth buffers
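The implementation-dependent rendering options listed above can be grouped into a single configuration record. The following Python dataclass is purely illustrative: every field name and default value is an assumption made for this sketch, not a normative API.

```python
from dataclasses import dataclass

@dataclass
class RenderOptions:
    """Illustrative grouping of the configurable rendering options
    (appearance model, precision, parallelization, culling)."""
    sh_degree: int = 3            # 0 = DC only; 1 or 3 = view-dependent colour
    use_fp16_buffers: bool = False  # FP16 storage vs FP32
    tile_size: int = 16           # screen-space bin size for per-tile parallelism
    early_stop_T: float = 1e-4    # transmittance threshold for early termination
    frustum_cull: bool = True     # drop Gaussians outside the view frustum
    min_projected_size: float = 0.5  # projected-size culling threshold (pixels)

# A hypothetical low-cost profile: DC-only colour, half-precision buffers
fast = RenderOptions(sh_degree=0, use_fp16_buffers=True)
```

Grouping the options this way makes it explicit which choices trade quality for speed, independently of the per-frame inputs (camera pose, intrinsics, viewport).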
8.2 Rasterization process
8.2.1 Introduction
This section presents a rendering process used to render a 3DGS model. A 3DGS model is rendered based on the observer's position and orientation and on the Gaussian primitives defined in Section 4.1. Depending on the chosen representation format, the rasterization described below may need to be updated to take the format details into account.
8.2.2 Main stages
The main steps of the rasterization process are:
• Culling: all Gaussians are examined, and those that do not affect the image are eliminated: elements located behind the observer or entirely outside the targeted display area are removed.
• Sorting: the remaining Gaussians are then sorted by distance from the observer, from farthest to closest, so that their semi-transparent contributions stack in a visually correct manner.
• Gaussian color: for each Gaussian thus sorted, its apparent color is calculated from the observer's viewpoint by combining its base color (DC) with its spherical harmonic coefficients to capture viewpoint-related variations.
• Projection: based on the position, scale, and rotation parameters of the Gaussian, the rasterization engine determines its location on the 2D screen and calculates the footprint it covers: an elliptical region centred on its projected position. Precise pixel boundaries are computed in the two 2D directions so that only potentially covered pixels are processed.
• Pixel color: for each pixel within its footprint, the Gaussian's contribution is evaluated based on its opacity and its distance from the projection centre in screen space, with the influence decreasing progressively toward the edges.
• Blending: this processing continues for all Gaussians covering the pixel, from the furthest Gaussian to the nearest, accumulating contributions until the final pixel color is obtained. Very small contributions may be ignored, and the updating of nearly opaque pixels may be stopped to optimize rendering.
8.2.3 Detailed implementation
8.2.3.1 Introduction
This section describes a typical rendering process close to the implementation presented in [aa]. Different variants and optimizations exist and are in use.
8.2.3.2 Optional spatial binning
Each 3D Gaussian projects to a 2D ellipse characterized by its centre μ′ and covariance matrix Σ′. We compute an axis-aligned bounding box for each ellipse by extracting the kσ contour (typically with k = 3) from Σ′ using either eigen-decomposition or direct bounds computation. Each Gaussian is then assigned to all overlapping pixel tiles in the image. For each tile, we store (i, dᵢ) pairs, where i is the Gaussian index and dᵢ is the camera-space depth of the Gaussian's centre, or an occlusion-aware depth proxy [aa]. This step is optional but helps improve computational performance when rendering large GS scenes.
8.2.3.3 Front-to-back ordering
To achieve correct alpha blending, Gaussians within each tile must be sorted by depth in front-to-back order. We quantize the depth values to fixed-width integer keys:
keyᵢ = round(s · dᵢ)
where s is a scaling factor that maps the depth range to the integer key space. We then perform a stable least-significant-digit (LSD) radix sort per tile, typically using either three 8-bit passes or two 12-bit passes [ae].
8.2.3.4 Per-pixel fused evaluation and optional early termination
For each pixel within a tile, we iterate through the depth-sorted Gaussian indices. For each Gaussian i that passes the ellipse membership test, we compute the Gaussian weight and opacity αᵢ in a fused manner to minimize memory bandwidth consumption. We then accumulate the color and transmittance using the over operator [af]:
C ← C + T αᵢ cᵢ(v),  T ← T (1 − αᵢ)
where C is the accumulated color, T is the remaining transmittance, and cᵢ(v) is the view-dependent color of Gaussian i evaluated at viewing direction v. As an optimization, the iteration terminates early when the transmittance T falls below a threshold ε, as subsequent Gaussians contribute negligibly to the final pixel color.
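The depth-key quantization and front-to-back accumulation of clauses 8.2.3.3 and 8.2.3.4 can be sketched as follows. This is an illustrative Python sketch only: function names and the scaling factor are assumptions, and per-tile binning, radix sorting and the ellipse membership test are omitted.

```python
import numpy as np

def depth_key(d, s=2**16):
    """Quantize a camera-space depth d into a fixed-width integer sort key."""
    return int(round(s * d))

def composite(colors, alphas, eps=1e-4):
    """Front-to-back 'over' accumulation with early termination.

    colors: (N, 3) per-Gaussian colours, already sorted front-to-back
    alphas: (N,) per-Gaussian opacities in [0, 1]
    Returns the accumulated colour C and the remaining transmittance T.
    """
    C = np.zeros(3)
    T = 1.0
    for c, a in zip(colors, alphas):
        C += T * a * c               # C <- C + T * alpha_i * c_i
        T *= (1.0 - a)               # T <- T * (1 - alpha_i)
        if T < eps:                  # later Gaussians contribute negligibly
            break
    return C, T

# Two half-opaque splats: a red one in front of a green one
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alphas = np.array([0.5, 0.5])
C, T = composite(colors, alphas)     # C = [0.5, 0.25, 0.0], T = 0.25
```

Because the loop is front-to-back, the transmittance T only decreases, which is what makes the early-termination test safe.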
8.2.3.5 Numerical precision and robustness
All accumulation operations use float32 (32-bit floating point) precision with alpha values clamped to the range [0, 1]. The transmittance is biased away from zero to avoid denormal floating-point numbers, which may significantly degrade performance. The depth scaling factor must be chosen to preserve correct near-to-far ordering throughout the depth range, with special consideration for reverse-Z depth buffer configurations if employed. As an optional optimization, per-tile occlusion culling may terminate processing once the cumulative transmittance for all pixels in the tile drops below a threshold, thereby reducing computational work in densely populated regions of the scene.
8.3 Variants and optimization techniques
As noted in the detailed sections above, visual results and performance may vary significantly depending on implementation choices. Common variations include:
• Appearance simplification: reducing the spherical harmonics order from 3 to 1 or 0 (DC only) to accelerate rendering at the cost of view dependence and anisotropy.
• Sorting strategy: choosing between various sorting strategies: software, hardware, radix sort, by blocks.
• Precision: using FP16 for storage buffers while keeping FP32 for accumulation.
• Culling: adjusting projected-size thresholds or using per-tile occlusion culling to stop processing tiles that are already fully opaque.
Each choice affects sharpness, temporal stability, energy conservation, and GPU cost. It is common to adjust these parameters differently depending on the rasterization objectives: quality, latency, or a real-time compromise. Beyond these parameter adjustments, new 3DGS rendering techniques targeting real-time deployment on heterogeneous platforms have emerged.
Recent work focuses on more efficient splat rasterization processes:
• MetaSapiens introduces an efficiency-aware pruning strategy and foveated point-based neural rendering to reach real-time (>100 FPS) 3DGS on mobile GPUs [al].
• Hybrid Transparency Gaussian Splatting (HTGS) proposes several blending modes, including hybrid schemes that sort only the most important splats and treat the remaining ones with order-independent transparency, greatly reducing the cost of depth sorting while preserving quality [am].
• Other approaches aim for fully sort-free rasterization, such as Weighted-Sum Rendering, which replaces non-commutative alpha blending with commutative weighted sums so that splats can be rendered using standard hardware blending without any explicit depth sorting [an].
• StochasticSplats rasterization uses a Monte-Carlo estimator of the volume-rendering equation to blend overlapping Gaussians correctly while entirely removing the sorting step, achieving more than a four-fold speed-up over sorted rasterization [ao].
9 High level media data workflows
[Editor’s note: Placeholder for the description of the workflows]
9.1 All-in-client configuration
9.2 Client-server configuration
10 Mapping to the 3GPP services
[Editor’s note: Placeholder for the description of the 3GPP services used]
10.1 All in UE configuration
10.2 Client-server configuration
11 Related activities and products and services
[Editor’s note: Placeholder for the description of the products and services]
11.1 Standardization activities
11.2 Services
11.3 Software and products
12 Reference implementation [Editor’s note: Placeholder for the description of the reference implementation]
12.1 Capture
12.2 Transmission
12.4 Rendering Annex <B>: <Informative annex title for a Technical Report> Informative annexes in Technical Reports do not use "(informative") in the title, since all annexes in TRs are informative. Use style "Heading 9" in TRs. B.1 Heading levels in an annex Heading levels within an annex are used as in the main document, but for Heading level selection, the "A.", "B.", etc. are ignored. e.g. B.1.2 is formatted using Heading 2 style. Annex <X> (informative): Change history Change history Date Meeting TDoc CR Rev Cat Subject/Comment New version 2025-11 SA4#134 S4-251765 Technical report skeleton 0.0.1 2025-11 SA4#134 S4-252138 3DGS representation format: S4-252069 Use cases: S4-252029, S4-252033, S4-252034, S4-251737 (2nd use case is agreed) Quality factors: S4-252030 3DGS content generation: S4-252070, S4-25203, S4-252056 3DGS rendering: S4-252073, S4-252056 0.1.0 2025-11 SA4#134 S4-252137 Editorial changes 0.1.1
1a27a93f1820dce4437159b281db5215
28.893
1 Scope
The present document …
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document. - References are either specific (identified by date of publication, edition number, version number, etc.) or non‑specific. - For a specific reference, subsequent revisions do not apply. - For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document. [1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications". … [x] <doctype> <#>[ ([up to and including]{yyyy[-mm]|V<a[.b[.c]]>}[onwards])]: "<Title>".
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905 [1]. example: text used to clarify abstract rules by applying them literally.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 [1]. <ABBREVIATION> <Expansion> Annex <X> : Change history Change history Date Meeting TDoc CR Rev Cat Subject/Comment New version 2026-1 SA5#165 Initial skeleton 0.0.0
dbf6dca34b12a6d0d977b45733f7173e
32.801-01
1 Scope
The present document …
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document. - References are either specific (identified by date of publication, edition number, version number, etc.) or non‑specific. - For a specific reference, subsequent revisions do not apply. - For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document. [1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications". … [x] <doctype> <#>[ ([up to and including]{yyyy[-mm]|V<a[.b[.c]]>}[onwards])]: "<Title>".
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905 [1]. example: text used to clarify abstract rules by applying them literally.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 [1]. <ABBREVIATION> <Expansion> Annex X (informative): Change history Change history Date Meeting TDoc CR Rev Cat Subject/Comment New version 2026-01 SA5#165 Initial version 0.0.0
37da702570fd408d755f579b39acc541
38.742
1 Scope
This technical report addresses the study of GNSS (Global Navigation Satellite System) resilient operation in NR-NTN (Non-Terrestrial Networks), targeting Release 20 enhancements. The objective is to evaluate and define solutions to ensure NR-NTN functionality in scenarios where GNSS signals are unavailable, degraded, or spoofed, while maintaining seamless network operation and user experience. The study specifically investigates the impact of GNSS loss or impairment on initial access and connected mode procedures in NR-NTN deployments. As part of this assessment, the study aims to minimize modifications to physical layer procedures and avoid any changes to physical layer channels and signals. Maintaining backward compatibility with legacy NTN-capable user equipment (UEs) is a key requirement; any proposed solutions must be thoroughly evaluated to prevent issues for existing deployments. The scope encompasses NR-NTN operating over both non-geostationary (NGSO) and geostationary (GSO) satellite constellations, spanning frequency bands below and above 10 GHz, and considering both transparent and regenerative satellite architectures. The findings and recommendations will be applicable to a wide range of NTN deployment scenarios.
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non‑specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.
[1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[2] 3GPP TR 38.811 v15.2.0: "Study on New Radio (NR) to support non-terrestrial networks (Release 15)".
[3] 3GPP TR 38.821: "Solutions for NR to support non-terrestrial networks (NTN) (Release 16)".
[x] <doctype> <#>[ ([up to and including]{yyyy[-mm]|V<a[.b[.c]]>}[onwards])]: "<Title>".
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905 [1].
Feeder link: wireless link between an NTN Gateway and a satellite.
Geostationary Earth Orbit: circular orbit at 35,786 km above the Earth's equator, following the direction of the Earth's rotation. An object in such an orbit has an orbital period equal to the Earth's rotational period and thus appears motionless, at a fixed position in the sky, to ground observers.
GNSS (Global Navigation Satellite System): a general term for satellite-based systems that provide global positioning, navigation, and timing (PNT) services. GNSS systems use a constellation of satellites orbiting the Earth that transmit signals enabling a receiver (such as a smartphone, vehicle, or specialized equipment) to determine its precise location (latitude, longitude, and altitude) and time anywhere on or near the Earth's surface.
Low Earth Orbit: orbit around the Earth with an altitude between 300 km and 1500 km.
Medium Earth Orbit: region of space around the Earth above Low Earth Orbit and below Geostationary Earth Orbit.
Minimum elevation angle: minimum angle under which the satellite or UAS platform can be seen by a terminal.
Non-geostationary satellites: satellites (LEO and MEO) orbiting the Earth with a period that varies approximately between 1.5 hours and 10 hours. A constellation of several non-geostationary satellites, associated with handover mechanisms, is necessary to ensure service continuity.
Non-terrestrial networks: networks, or segments of networks, using an airborne or space-borne vehicle to embark a transmission equipment relay node or base station.
Regenerative payload: payload that transforms and amplifies an uplink RF signal before transmitting it on the downlink. The transformation of the signal refers to digital processing that may include demodulation, decoding, re-encoding, re-modulation and/or filtering.
Round Trip Delay: time required for a signal to travel from a terminal to the sat-gateway, or from the sat-gateway to the terminal, and back. This is especially used for web-based applications.
Satellite: a space-borne vehicle embarking a bent-pipe payload or a regenerative payload telecommunication transmitter, placed into Low Earth Orbit (LEO), Medium Earth Orbit (MEO), or Geostationary Earth Orbit (GEO).
Satellite beam: a beam generated by an antenna on board a satellite.
Service link: radio link between a satellite and a UE.
Transparent payload: payload that changes the frequency carrier of the uplink RF signal, filters and amplifies it before transmitting it on the downlink.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 [1]. GEO Geostationary Earth Orbiting gNB next Generation Node B GW Gateway LEO Low Earth Orbiting MEO Medium Earth Orbiting NGEO Non-Geostationary Earth Orbiting NTN Non-Terrestrial Network RAN Radio Access Network RTD Round Trip Delay SNR Signal-to-Noise Ratio Rx Receiver UE User Equipment
4 Background and motivation
4.1  Review of previous NR-NTN releases
4.2 Needs for GNSS resilience NR NTN operation
4.3 Use cases and deployment scenarios