17ba42c440e050d74c1fc2ffcb25f033
23.180
10.5.3.3 IOPS floor control during silence
If a floor arbitrator does not exist, figure 10.5.3.3-1 shows the successful high-level floor control procedure during periods when there is no detectable talker in an IOPS group call based on the IP connectivity functionality.

NOTE 1: The description also applies to IOPS private calls.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs.
- MCPTT users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality.
- The IOPS MCPTT group ID and its associated IOPS group IP multicast address are pre-configured in the MCPTT clients (for the case of an IOPS group call).
- The IOPS MC connectivity function may have established a broadcast/multicast session and announced it to the MCPTT clients.
- The MCPTT users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality.
- MCPTT clients have retrieved connectivity information from the target MCPTT user (for the case of an IOPS private call).
- An IOPS private call or IOPS group call based on the IP connectivity functionality has been established. No participant is currently talking (i.e. the floor is idle) and no floor arbitrator is identified.

Figure 10.5.3.3-1: Successful floor taken flow in an IOPS group call based on the IP connectivity functionality (no floor contention)

1. The MCPTT client 1 sends the IOPS floor request message to the target IOPS MCPTT group. The MCPTT client 1 transmits the group session packets carrying the IOPS floor request message to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.
2. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions.
3. The IOPS MC connectivity function distributes the group session packets carrying the IOPS floor request to the MCPTT clients from the target IOPS MCPTT group.
4. The MCPTT client 1 does not detect any floor contention. Floor contention occurs when multiple floor requests exist simultaneously.

NOTE 2: The mechanism for detecting floor contention in the IOPS mode of operation is out of scope of the present document.

5. The MCPTT client 1 sends the IOPS floor taken message to the IOPS MCPTT group. The MCPTT client 1 transmits the group session packets carrying the IOPS floor taken message to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.
6. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions.
7. The IOPS MC connectivity function distributes the group session packets carrying the IOPS floor taken message to the MCPTT clients from the target IOPS MCPTT group.
8. The MC user at MCPTT client 1 gets a notification that the IOPS floor request was successful (the floor has been granted).

NOTE 3: Step 8 can also occur prior to steps 6 and 7.

9. The MCPTT client 1 begins voice transmission with the target IOPS MCPTT group based on the IP connectivity functionality.
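The floor-taken flow above can be sketched as a small state machine. This is an illustrative assumption, not normative behaviour: message names (`FLOOR_REQUEST`, `FLOOR_TAKEN`) and the contention check are invented here, since the spec leaves contention detection out of scope (NOTE 2).

```python
# Hypothetical sketch of the clause 10.5.3.3 flow (no floor contention).
# All names are illustrative; the contention-detection mechanism is out of
# scope of the specification, so a simple "heard another request" check is
# used as a stand-in.
from dataclasses import dataclass, field


@dataclass
class IopsFloorClient:
    client_id: str
    group_id: str
    sent: list = field(default_factory=list)          # messages handed to the
                                                      # IOPS MC connectivity function
    heard_requests: set = field(default_factory=set)  # floor requests heard from others

    def request_floor(self) -> str:
        """Steps 1-3: send the IOPS floor request towards the group
        multicast address via the IOPS MC connectivity function."""
        self.sent.append(("FLOOR_REQUEST", self.client_id, self.group_id))
        return "requested"

    def resolve(self) -> str:
        """Steps 4-9: if no competing request was heard during the
        (out-of-scope) contention window, announce floor taken."""
        if self.heard_requests - {self.client_id}:
            return "contention"                # resolution not specified (NOTE 2)
        self.sent.append(("FLOOR_TAKEN", self.client_id, self.group_id))
        return "granted"                       # step 8: user is notified
```

Steps 8 and 9 then follow on the "granted" outcome; the sketch deliberately models only the client side, since the connectivity function merely relays the packets.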
10.6 MCData service
10.6.1 IOPS short data service (IP connectivity functionality)
10.6.1.1 General
The support of the MCData short data service (SDS) based on the IP connectivity functionality in the IOPS mode of operation enables the service to be provided by the MCData clients over the IOPS MC connectivity function. The IOPS MC connectivity function provides IP connectivity for the communication among MCData users.
10.6.1.2 Information flows
10.6.1.2.1 IOPS MCData standalone data request
Table 10.6.1.2.1-1 describes the information flow for the IOPS MCData standalone data request from one MCData client to another MCData client. The packet(s) carrying the IOPS MCData standalone data request are transmitted from the sending MCData client to the IOPS MC connectivity function for distribution to the target MCData client.

Table 10.6.1.2.1-1: IOPS MCData standalone data request

Information element | Status | Description
IOPS MCData ID | M | The identity of the MCData user sending data
IOPS MCData ID | M | The identity of the MCData user towards which the data is sent
Conversation Identifier | M | Identifies the conversation
Transaction Identifier | M | Identifies the MCData transaction
Reply Identifier | O | Identifies the original MCData transaction to which the current transaction is a reply
Disposition Type | O | Indicates the disposition type expected from the receiver (i.e. delivered, read or both)
Payload Destination Type | M | Indicates whether the payload is for application consumption or MCData user consumption
Application identifier (see NOTE) | O | Identifies the application for which the payload is intended (e.g. text string, port address, URI)
Payload | M | SDS content
NOTE: The application identifier shall be included only if the payload destination type indicates that the payload is for application consumption.
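Table 10.6.1.2.1-1 maps naturally onto a typed record whose constructor enforces the mandatory/optional statuses and the NOTE. This is an illustrative encoding only; the field and enum names are assumptions, since the table defines abstract information elements rather than a concrete syntax.

```python
# Illustrative encoding of Table 10.6.1.2.1-1 (IOPS MCData standalone data
# request). Names are assumptions; only the element statuses and the NOTE
# constraint come from the table.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class PayloadDestination(Enum):
    MCDATA_USER = "user"
    APPLICATION = "application"


@dataclass
class IopsStandaloneDataRequest:
    sender_mcdata_id: str                     # IOPS MCData ID of sending user (M)
    target_mcdata_id: str                     # IOPS MCData ID of target user (M)
    conversation_id: str                      # Conversation Identifier (M)
    transaction_id: str                       # Transaction Identifier (M)
    payload_destination: PayloadDestination   # Payload Destination Type (M)
    payload: bytes                            # SDS content (M)
    reply_id: Optional[str] = None            # Reply Identifier (O)
    disposition_type: Optional[str] = None    # "delivered", "read" or "both" (O)
    application_id: Optional[str] = None      # Application identifier (O, see NOTE)

    def __post_init__(self):
        # NOTE of the table: the application identifier shall be included only
        # if the payload is for application consumption.
        if (self.application_id is not None
                and self.payload_destination is not PayloadDestination.APPLICATION):
            raise ValueError("application identifier requires application payload")
```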
10.6.1.2.2 IOPS MCData data disposition notification
Table 10.6.1.2.2-1 describes the information flow for the IOPS MCData data disposition notification from one MCData client to another MCData client. The packet(s) carrying the IOPS MCData data disposition notification are transmitted from the sending MCData client to the IOPS MC connectivity function for distribution to the target MCData client.

Table 10.6.1.2.2-1: IOPS MCData data disposition notification

Information element | Status | Description
IOPS MCData ID | M | The identity of the MCData user towards which the notification is sent
IOPS MCData ID | M | The identity of the MCData user sending the notification
Conversation Identifier | M | Identifies the conversation
Disposition association | M | Identity of the original MCData transaction
Disposition | M | The disposition, i.e. delivered, read or both
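A disposition notification always travels back towards the sender of the original standalone data request and carries that request's transaction identifier as the disposition association. The helper below sketches this pairing; names are illustrative, not from the specification.

```python
# Illustrative encoding of Table 10.6.1.2.2-1 plus a helper showing how a
# notification answers a standalone data request (procedure step 6).
# All names are assumptions.
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    DELIVERED = "delivered"
    READ = "read"
    BOTH = "both"


@dataclass(frozen=True)
class IopsDataDispositionNotification:
    target_mcdata_id: str         # user towards which the notification is sent (M)
    sender_mcdata_id: str         # user sending the notification (M)
    conversation_id: str          # M
    disposition_association: str  # identity of the original MCData transaction (M)
    disposition: Disposition      # M


def notification_for(request_sender: str, request_target: str,
                     conversation_id: str, transaction_id: str,
                     disposition: Disposition) -> IopsDataDispositionNotification:
    """Build the notification answering a disposition request: the original
    sender becomes the notification target, and vice versa."""
    return IopsDataDispositionNotification(
        target_mcdata_id=request_sender,
        sender_mcdata_id=request_target,
        conversation_id=conversation_id,
        disposition_association=transaction_id,
        disposition=disposition)
```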
10.6.1.2.3 IOPS MCData group standalone data request
Table 10.6.1.2.3-1 describes the information flow for the IOPS MCData group standalone data request from one MCData client to other MCData clients. The packet(s) carrying the IOPS MCData group standalone data request are transmitted from the sending MCData client to the IOPS MC connectivity function for distribution to the target MCData clients.

Table 10.6.1.2.3-1: IOPS MCData group standalone data request

Information element | Status | Description
IOPS MCData ID | M | The identity of the MCData user sending data
IOPS MCData group ID | M | The IOPS MCData group ID to which the data is to be sent
Conversation Identifier | M | Identifies the conversation
Transaction Identifier | M | Identifies the MCData transaction
Reply Identifier | O | Identifies the original MCData transaction to which the current transaction is a reply
Disposition Type | O | Indicates the disposition type expected from the receiver (i.e. delivered, read or both)
Payload Destination Type | M | Indicates whether the payload is for application consumption or MCData user consumption
Application identifier (see NOTE) | O | Identifies the application for which the payload is intended (e.g. text string, port address, URI)
Payload | M | SDS content
NOTE: The application identifier shall be included only if the payload destination type indicates that the payload is for application consumption.
10.6.1.3 IOPS one-to-one standalone SDS using signalling control plane
10.6.1.3.1 General
When an MCData user initiates an IOPS standalone SDS data transfer with another MCData user using the signalling control plane based on the IP connectivity functionality, the MCData client retrieves the connectivity information of the target MCData user (i.e. the MCData UE's IP address) from the IOPS connectivity client. Then, the MCData client enables the IOPS SDS data transfer over the IOPS MC connectivity function. The related session packets, i.e. signalling messages, carrying the data are transmitted to the IOPS MC connectivity function addressing the corresponding target MCData UE's IP address.

NOTE: The IOPS connectivity client can only provide connectivity information of the target MCData user if it is already available (see clause 10.3 on IOPS subscription and notification procedures).

The IOPS MC connectivity function distributes the received session packets over unicast transmissions to the target MCData client.
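The lookup-then-forward behaviour described above can be sketched with two small dict-backed components. This is a minimal sketch under stated assumptions: the class and method names (`IopsConnectivityClient`, `connectivity_info`, `distribute`) are illustrative and not defined by the specification.

```python
# Minimal sketch of clause 10.6.1.3.1: the client-side connectivity lookup
# and the connectivity function's unicast distribution. All names are
# assumptions; tables are plain dicts/sets for illustration.

class IopsConnectivityClient:
    def __init__(self, known: dict):
        self._known = known  # IOPS MCData ID -> UE IP address, if already available

    def connectivity_info(self, mcdata_id: str):
        # Only returns an address already obtained via the clause 10.3
        # subscription and notification procedures; otherwise None.
        return self._known.get(mcdata_id)


class IopsMcConnectivityFunction:
    def __init__(self, discovered: set):
        self._discovered = discovered  # IP addresses of discovered MC users
        self.delivered = []            # recorded (dest_ip, packet) unicast deliveries

    def distribute(self, dest_ip: str, packet: bytes) -> bool:
        # Forward over unicast only if the destination address corresponds
        # to a discovered MC user.
        if dest_ip not in self._discovered:
            return False
        self.delivered.append((dest_ip, packet))
        return True
```

A sending client would first call `connectivity_info`, then address its session packets to the returned IP; the connectivity function relays them only for discovered users.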
10.6.1.3.2 Procedure
The procedure in figure 10.6.1.3.2-1 describes the case where an MCData user initiates an IOPS one-to-one MCData communication for sending standalone SDS data over the signalling control plane to another MCData user, with or without disposition request. Standalone refers to sending unidirectional data in one transaction.

Pre-conditions:
- The MCData user profile used for the IOPS mode of operation is pre-provisioned in the MCData UEs.
- MCData users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality.
- The MCData users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality.
- MCData clients have retrieved connectivity information from target MCData users.

Figure 10.6.1.3.2-1: IOPS one-to-one standalone SDS using signalling control plane based on the IP connectivity functionality

1. The MCData user at MCData client 1 would like to initiate an IOPS SDS data transfer with the MCData user at MCData client 2 based on the IP connectivity functionality. The MCData client 1 checks whether the MCData user 1 is authorized to send an IOPS MCData standalone data request.
2. The MCData client 1 retrieves the connectivity information of the target MCData user from the IOPS connectivity client 1 (not shown in figure) and sends an IOPS MCData standalone data request towards the MCData client 2. The MCData client 1 transmits the session packets carrying the IOPS MCData standalone data request to the IOPS MC connectivity function for distribution to the corresponding target MCData UE 2's IP address. The IOPS MCData standalone data request contains the data payload, i.e. the SDS content. The request also contains a conversation identifier for message thread indication and may contain a disposition request if indicated by the user at MCData client 1.
3. The IOPS MC connectivity function receives the session packets addressing the MCData UE 2's IP address. The IOPS MC connectivity function checks whether the MCData UE 2's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCData client over unicast transmissions.
4. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCData standalone data request to the MCData client 2.
5. Upon receipt of the IOPS MCData standalone data request, the MCData client 2 checks whether any policy is to be asserted to limit certain types of message or content to certain members due to, for example, location or user privilege. If the policy assertion is positive and the payload is for MCData user consumption (e.g. it is not application data or command instructions) then the MCData client 2 notifies the target MCData user. The actions taken when the payload contains application data or command instructions are based on the payload content. Payload content received by MCData client 2 which is addressed to a known local non-MCData application that is not yet running shall cause the MCData client 2 to start the local non-MCData application (i.e. remote start application) and shall pass the payload content to the just-started application.

NOTE: If the policy assertion was negative, the MCData client 2 sends an appropriate notification to MCData client 1.

6. If MCData data disposition was indicated (for delivery, read or both) within the request sent by the MCData client 1, then the receiving MCData client 2 initiates the corresponding IOPS MCData data disposition notification(s) towards the MCData client 1, i.e. addressing the MCData UE 1's IP address.
7. The IOPS MC connectivity function receives the session packets addressing the MCData UE 1's IP address. The IOPS MC connectivity function checks whether the MCData UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCData client over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCData data disposition notification to the MCData client 1.
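The receiving client's handling in step 5 (policy assertion, user notification, and remote application start) can be condensed into one decision function. This is a sketch under stated assumptions: the outcome strings and the `policy_ok` flag are illustrative stand-ins for behaviour the specification describes in prose.

```python
# Sketch of the receiving MCData client's step-5 handling. The outcome labels
# and parameter names are assumptions made for illustration.
def handle_payload(policy_ok: bool, payload_destination: str,
                   application_id, running_apps: set, start_app) -> str:
    """Decide what to do with an arriving standalone SDS payload."""
    if not policy_ok:
        # Negative policy assertion: notify the originating client (NOTE).
        return "notify-sender"
    if payload_destination == "user":
        # Payload is for MCData user consumption: notify the target user.
        return "notify-user"
    # Payload is for application consumption; remote-start the local
    # non-MCData application if it is not yet running.
    if application_id not in running_apps:
        start_app(application_id)
    return "deliver-to-application"
```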
10.6.1.4 IOPS group standalone SDS using signalling control plane
10.6.1.4.1 General
IOPS group standalone SDS using signalling control plane based on the IP connectivity functionality can use pre-configured information provided to MCData clients prior to initiating the data service. When an MCData client initiates an IOPS group standalone SDS based on the IP connectivity functionality it uses the pre-configured IOPS group IP multicast address associated to the target IOPS MCData group ID. The related group session packets, i.e. signalling messages, carrying the data are transmitted to the IOPS MC connectivity function for distribution to the corresponding discovered MC users of the target IOPS MCData group. The IOPS MC connectivity function can distribute the group session packets to the discovered MC users over broadcast/multicast sessions as described in clause 10.4.5. The IOPS MC connectivity function can also replicate and distribute the group session packets over unicast transmissions to MCData UEs associated to the target IOPS MCData group. MCData UEs receiving the group session packets are associated to discovered MC users that included the target IOPS MCData group ID within the IOPS discovery request, as described in clause 10.5.2.3.
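The distribution choice described above can be sketched as follows, assuming the connectivity function prefers an established broadcast/multicast session and otherwise replicates the packets over unicast. The data shapes and the either/or policy are simplifying assumptions; the spec allows broadcast/multicast and/or unicast.

```python
# Sketch of the clause 10.6.1.4.1 distribution decision. Simplification:
# use the broadcast/multicast session where one exists, otherwise replicate
# over unicast to each discovered member (the spec permits combining both).
def distribute_group_packets(group_id: str, packet: bytes,
                             multicast_sessions: dict, members: dict) -> list:
    """multicast_sessions: IOPS MCData group ID -> group IP multicast address.
    members: IOPS MCData group ID -> list of discovered member UE IPs
    (users that included the group ID in their IOPS discovery request)."""
    deliveries = []
    if group_id in multicast_sessions:
        # An established broadcast/multicast session carries one copy.
        deliveries.append(("multicast", multicast_sessions[group_id], packet))
    else:
        # Replicate the group session packets over unicast per member.
        for ue_ip in members.get(group_id, []):
            deliveries.append(("unicast", ue_ip, packet))
    return deliveries
```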
10.6.1.4.2 Procedure
The procedure in figure 10.6.1.4.2-1 describes the case where an MCData user initiates an IOPS group MCData communication for sending standalone SDS data over the signalling control plane to an IOPS MCData group, with or without disposition request. Standalone refers to sending unidirectional data in one transaction.

Pre-conditions:
- The MCData user profile used for the IOPS mode of operation is pre-provisioned in the MCData UEs.
- The IOPS MCData group ID and its associated IOPS group IP multicast address are pre-configured in the MCData clients.
- MCData users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality.
- MCData users affiliated to the target IOPS MCData group are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality.
- The IOPS MC connectivity function may have established a broadcast/multicast session and announced it to the MCData clients.
- MCData client 1 may have retrieved group connectivity information from the IOPS connectivity client related to the target IOPS MCData group.
- MCData clients 1, 2 … n are configured within the same IOPS MCData group.

Figure 10.6.1.4.2-1: IOPS group standalone SDS using signalling control plane based on the IP connectivity functionality

1. The MCData user at MCData client 1 would like to initiate an IOPS SDS data transfer with a specific IOPS MCData group based on the IP connectivity functionality. The MCData client 1 checks whether the MCData user 1 is authorized to send an IOPS MCData group standalone data request.
2. The MCData client 1 sends an IOPS MCData group standalone data request to the target IOPS MCData group. The MCData client 1 transmits the group session packets carrying the IOPS MCData group standalone data request to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address. The IOPS MCData group standalone data request contains the data payload, i.e. the SDS content. The request also contains a conversation identifier for message thread indication and may contain a disposition request if indicated by the user at MCData client 1.
3. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCData group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCData clients over a broadcast/multicast session and/or unicast transmissions.
4. The IOPS MC connectivity function distributes the group session packets carrying the IOPS MCData group standalone data request to the discovered MCData clients from the target IOPS MCData group.
5. The MCData clients receiving the IOPS MCData group standalone data request check whether any policy is to be asserted to limit certain types of messages or content to certain members due to, for example, location or user privilege. If the policy assertion is positive and the payload is for MCData user consumption (e.g. it is not application data or command instructions) then the MCData clients notify the target MCData users. The actions taken when the payload contains application data or command instructions are based on the payload content. Payload content received by an MCData client which is addressed to a known local non-MCData application that is not yet running shall cause the MCData client to start the local non-MCData application (i.e. remote start application) and shall pass the payload content to the just-started application.

NOTE: If the policy assertion was negative, the corresponding MCData client sends an appropriate notification to MCData client 1.

6. If MCData data disposition was indicated (for delivery, read or both) within the request sent by the MCData client 1, then the receiving MCData clients initiate the corresponding IOPS MCData data disposition notification(s) towards the MCData client 1, i.e. addressing the MCData UE 1's IP address.
7. The IOPS MC connectivity function receives the session packets addressing the MCData UE 1's IP address. The IOPS MC connectivity function checks whether the MCData UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCData client over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCData data disposition notification to the MCData client 1.
10.7 MC IOPS notification
10.7.1 General
In the IOPS mode of operation, it is assumed that the IOPS MC system does not have connectivity to the primary MC system due to the backhaul failure. Therefore, the primary MC system cannot be aware of the initiation of the IOPS operation and the corresponding activation of an IOPS MC connectivity function within the primary MC system coverage.

When an IOPS MC system is active, MC service UEs can move around and may enter and leave the IOPS MC system coverage, i.e. the MC service users may switch from the active IOPS MC connectivity function to the MC service server of the primary MC system, and vice versa.

In order to notify the primary MC service server about the active IOPS MC connectivity function, when MC service users register to the primary MC service server after being recently registered to the IOPS MC connectivity function, the MC service users can provide information to the primary MC service server about the active IOPS MC connectivity function and optionally include associated dynamic information. The primary MC service server uses the provided information to become aware of the active IOPS MC connectivity function.

NOTE: Dynamic information can be, e.g., information about other available MC service users or active MC service groups that the MC service user identified while on the IOPS MC connectivity function. The primary MC service server can use the dynamic information to determine that affiliated MC service users might be registered on the active IOPS MC connectivity function and might not be reachable on the system.

Upon the receipt of the notification, the primary MC service server may notify other MC service users in the proximity of the IOPS MC system coverage about the corresponding active IOPS MC connectivity function. This information can be used by the MC service users to optimize the user experience, e.g. to improve the switching time between systems and to obtain information about the potential availability of other registered MC service users or active MC service groups on the IOPS MC connectivity function.
10.7.2 Information flows
10.7.2.1 MC IOPS notification
Table 10.7.2.1-1 describes the information flow MC IOPS notification from the MC service client to the primary MC service server.

Table 10.7.2.1-1: MC IOPS notification

Information element | Status | Description
MC service ID | M | The identity of the MC service user providing the notification
IOPS MC system information | M | Information related to the identified active IOPS MC connectivity function (see NOTE)
List of MC service group IDs | O | The list of groups identified by the MC service user as active on the IOPS MC connectivity function
List of MC service IDs | O | The list of other users identified by the MC service user as available on the IOPS MC connectivity function
NOTE: The IOPS MC system information consists of the following elements: IOPS PLMN ID, server URI of the IOPS MC connectivity function, and location information (set of coordinates including altitude, longitude and latitude, time of measurement and optional accuracy) related to the MC service user registration on the IOPS MC connectivity function.

Table 10.7.2.1-2 describes the information flow MC IOPS notification from the primary MC service server to the MC service client.

Table 10.7.2.1-2: MC IOPS notification

Information element | Status | Description
MC service ID (see NOTE 1) | M | The identity of the MC service user receiving the notification
IOPS MC system information | M | Information related to the identified active IOPS MC connectivity function (see NOTE 2)
List of MC service group IDs (see NOTE 3) | O | The list of MC service groups identified as active on the IOPS MC connectivity function
List of MC service IDs (see NOTE 3) | O | The list of MC service users identified as available on the IOPS MC connectivity function
NOTE 1: This information element is not included if the notification is transmitted over a broadcast/multicast session.
NOTE 2: The IOPS MC system information consists of the following elements: server URI of the IOPS MC connectivity function, and location information (set of coordinates including altitude, longitude and latitude) where the IOPS MC connectivity function is identified as active.
NOTE 3: The MC service server may provide information about identified active MC service groups or available MC service users on the IOPS MC connectivity function. This information is only included if the MC service user receiving the notification is authorized, e.g. if the MC service user is a member of the corresponding MC service groups. This information element is not included if the notification is transmitted over a broadcast/multicast session.
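The client-to-server notification of Table 10.7.2.1-1 can be illustrated as a typed record. The field names below are assumptions; only the element statuses and the NOTE's breakdown of the IOPS MC system information come from the tables.

```python
# Illustrative encoding of the MC IOPS notification (client -> primary MC
# service server, Table 10.7.2.1-1). Field names are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class IopsMcSystemInfo:
    iops_plmn_id: str           # IOPS PLMN ID
    server_uri: str             # server URI of the IOPS MC connectivity function
    latitude: float             # location related to the user's registration
    longitude: float
    altitude: float
    time_of_measurement: float  # when the location was measured
    accuracy: Optional[float] = None  # optional per the NOTE


@dataclass
class McIopsNotification:
    mc_service_id: str                 # user providing the notification (M)
    system_info: IopsMcSystemInfo      # IOPS MC system information (M)
    active_group_ids: List[str] = field(default_factory=list)    # O
    available_user_ids: List[str] = field(default_factory=list)  # O
```

The server-to-client variant (Table 10.7.2.1-2) would differ mainly in omitting the PLMN ID and time of measurement, and in dropping the ID lists over broadcast/multicast, per NOTES 1-3.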
10.7.3 MC IOPS notification procedure
Figure 10.7.3-1 describes the MC IOPS notification procedure when an MC service user has left an active IOPS MC system and enters the primary MC system.

Pre-conditions:
- There is an active IOPS MC connectivity function and the neighbouring cells of the IOPS MC system are part of the primary MC system.
- The MC service user 1 is initially registered to the IOPS MC connectivity function for the support of MC services in the IOPS mode of operation. The MC service user 1 is authorized to provide MC IOPS notifications to the primary MC service server.
- The MC service user 2 is registered to the primary MC service server and is in the proximity of the IOPS MC system coverage.
- MC service users 1 and 2 are members of the same MC service group or are authorized to have a one-to-one MC service communication.

Figure 10.7.3-1: MC IOPS notification procedure

1. An IOPS mode of operation is active, and the MC service communication is handled by the IOPS MC connectivity function. The MC service client 1 is registered to the IOPS MC connectivity function.
2. The MC service client 1 moves out of the coverage of the IOPS MC system and registers to the primary MC service server.
3. The MC service client 1 sends an MC IOPS notification to the primary MC service server to provide information about an active IOPS MC connectivity function in the area. This notification includes information about the active IOPS MC connectivity function such as the server URI, the associated IOPS PLMN ID, and location information. Also, the MC service client 1 may indicate which MC service groups and MC service users were identified as active and available on the IOPS MC connectivity function.
4. The primary MC service server becomes aware of the active IOPS MC connectivity function and can use the received information to determine that affiliated MC service users might be registered on the active IOPS MC connectivity function and might not be reachable on the system. If the primary MC service server determines that the IOPS MC connectivity function is active and identifies that affiliated MC service users are in the proximity of the IOPS MC system, the primary MC service server may notify the corresponding MC service users about the IOPS MC connectivity function.

NOTE 1: The primary MC service server can use received information from different MC IOPS notifications (e.g. location information including the time of measurement) and the information obtained from location information subscriptions (as described in 3GPP TS 23.280 [3]) to determine whether the IOPS MC connectivity function might still be active. For instance, if information received from location information subscriptions indicates that MC service users are located within the notified active IOPS MC system coverage, the primary MC service server can determine that the IOPS MC connectivity function is no longer active.

5. If the primary MC service server determines that the IOPS MC connectivity function is active, it sends an MC IOPS notification to the MC service client 2 in proximity of the active IOPS MC system coverage. This information can be used by the MC service user to become aware of the active IOPS MC connectivity function. Hence, the MC service user might decide not to move into the IOPS MC system coverage, or the switching time between the systems can be improved. Also, the MC service user can be aware of the potential availability of MC service users or active MC service groups on the IOPS MC connectivity function.

NOTE 2: The MC IOPS notification can be sent on a broadcast/multicast session configured within the proximity of the active IOPS MC system to target multiple MC service users. In this case, information about the active MC service IDs and MC service group IDs is not included in the MC IOPS notification.
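The reasoning in NOTE 1 can be sketched as a liveness check: recent MC IOPS notifications are evidence that the IOPS function is active, while a user reachable through the primary system's location subscriptions from inside the notified coverage implies the backhaul is restored. The threshold, parameter names, and decision rule below are all assumptions made for illustration.

```python
# Sketch of the NOTE 1 heuristic in clause 10.7.3. The max_age threshold and
# the decision rule are illustrative assumptions, not normative behaviour.
def iops_function_still_active(notification_times: list, location_reports: dict,
                               inside_iops_coverage, now: float,
                               max_age: float = 3600.0) -> bool:
    """notification_times: times of received MC IOPS notifications.
    location_reports: {mc_service_id: report_time} from location subscriptions.
    inside_iops_coverage: predicate telling whether a reporting user is
    located within the notified IOPS coverage area."""
    recent = [t for t in notification_times if now - t <= max_age]
    if not recent:
        return False  # no recent evidence that the IOPS function is active
    # A user reachable via the primary system's location subscriptions while
    # inside the notified coverage implies the backhaul is restored, so the
    # IOPS MC connectivity function is no longer active.
    for user, t in location_reports.items():
        if now - t <= max_age and inside_iops_coverage(user):
            return False
    return True
```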
Annex A (normative): Configuration data for the support of MC services in the IOPS mode of operation

A.1 General

This Annex provides information about the static configuration data needed for the support of MC services in the IOPS mode of operation. The configuration data belong to one of the following categories:
- MC service UE configuration data (see subclause A.2);
- MC service user profile configuration data (see subclause A.3);
- MC service group configuration data (see subclause A.4);
- MC service configuration data (see subclause A.5); and
- Location user profile configuration data (see subclause A.8).

The configuration data in each configuration category corresponds to a single instance of the category type, i.e. the MC service UE, MC service group, MC service user and MC service configuration data refer to the information that will be stored against each MC service UE, MC service group, MC service user and MC service.

NOTE: The configuration data described in this Annex together with corresponding configuration data provided in 3GPP TS 23.280 [3], 3GPP TS 23.289 [12], 3GPP TS 23.379 [5] and 3GPP TS 23.282 [6] represent the complete set of data for each configuration data category element.

The columns in the tables have the following meanings:
- Reference: the reference of the corresponding requirement in 3GPP TS 22.346 [9] and 3GPP TS 22.280 [11] or the corresponding clause from either the present document or the referenced document.
- Parameter description: a short definition of the semantics of the corresponding item of data, including denotation of the level of the parameter in the configuration hierarchy.
- When it is not clear to which functional entities the parameter is configured, then one or more columns indicating this are provided where the following nomenclature is used:
  - "Y" to denote "Yes", i.e. the parameter denoted for the row needs to be configured to the functional entity denoted for the column.
  - "N" to denote "No", i.e. the parameter denoted for the row does not need to be configured to the functional entity denoted for the column.

Parameters within a set of configuration data have a level within a hierarchy that pertains only to that configuration data. The level of a parameter within the hierarchy of the configuration data is denoted by use of the character ">" in the parameter description field within each table, one per level. Parameters that are at the top‑most level within the hierarchy have no ">" character. Parameters that have one or more ">" characters are child parameters of the first parameter above them that has one less ">" character. Parent parameters are parameters that have one or more child parameters. Parent parameters act solely as a "grouping" of their child parameters and therefore do not contain an actual value themselves, i.e. they are just containers for their child parameters.

Each parameter that can be configured online shall only be configured through one online reference point. Each parameter that can be configured offline shall only be configured through one offline reference point. The most recent configuration data made available to the MC service UE shall always overwrite previous configuration data, irrespective of whether the configuration data was provided via the online or offline mechanism.

A.2 MC service UE configuration data

MC service UE configuration data has to be known by an MC service UE after MC service authorization. The CSC-4 reference point, specified in 3GPP TS 23.280 [3], is used for configuration between the configuration management server and the configuration management client on the MC service UE when the MC service UE is on-network. MC service UE configuration data can be configured offline using the CSC-11 reference point specified in 3GPP TS 23.280 [3] and 3GPP TS 23.289 [12]. Within each MC service, the MC service UE configuration data can be the same or different across MC service UEs.
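The ">" hierarchy convention for parameter descriptions in clause A.1 above can be checked mechanically. The sketch below, which is an illustration and not part of the specification, parses ">"-prefixed parameter names into a tree of (name, children) nodes following exactly the rule stated in the convention.

```python
# Illustrative parser for the ">" parameter-level convention of clause A.1:
# a parameter with n ">" characters is a child of the nearest preceding
# parameter with n-1 ">" characters.
def parse_parameter_tree(lines: list) -> list:
    root = ("", [])
    stack = [root]  # stack[d] holds the current parent node at depth d
    for line in lines:
        depth = len(line) - len(line.lstrip(">"))
        name = line.lstrip(">").strip()
        node = (name, [])
        del stack[depth + 1:]          # discard parents deeper than this level
        stack[depth][1].append(node)   # attach to the parent one level up
        stack.append(node)
    return root[1]
```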
The MCPTT UE configuration data specified in table A.2-1 in 3GPP TS 23.379 [5] is also used, as needed, in the IOPS mode of operation for the MCPTT service. The MCData UE configuration data specified in table A.2-1 in 3GPP TS 23.282 [6] is also used, as needed, in the IOPS mode of operation for the MCData service.

A.3 MC service user profile configuration data

The MC service user profile configuration data is stored in the MC service user database. The configuration management server is used to configure the MC service user profile configuration data to the MC service user database (CSC-13) and MC service UE (CSC-4), as specified in 3GPP TS 23.280 [3]. MC service user profile configuration data can be configured offline using the CSC-11 reference point specified in 3GPP TS 23.280 [3].

For the MCPTT service, the MCPTT user profile configuration data specified in table A.3-1 in 3GPP TS 23.379 [5] is also used, as needed, in the IOPS mode of operation, wherein the IOPS MCPTT user identity (IOPS MCPTT ID) can be the MCPTT user identity (MCPTT ID) or a specific ID configured for the IOPS mode of operation. For the MCData service, the MCData user profile configuration data specified in table A.3-1 in 3GPP TS 23.282 [6] is also used, as needed, in the IOPS mode of operation, wherein the IOPS MCData user identity (IOPS MCData ID) can be the MCData user identity (MCData ID) or a specific ID configured for the IOPS mode of operation.

Table A.3-1 described below contains additional MC service user profile configuration required to support MC services in the IOPS mode of operation.
Table A.3-1: MC service user profile data (IOPS)

Reference | Parameter description | MC service UE | Configuration management server | MC service user database
[R-10-001] of 3GPP TS 22.280 [11] | List of IOPS MC service groups for use by an MC service user | Y | Y | Y
 | > IOPS MC service Group ID | | |
 | > Application plane server identity information of group management server where group is defined | | |
 | >> Server URI | Y | Y | Y
 | > Application plane server identity information of identity management server which provides authorization for group (see NOTE 1) | | |
 | >> Server URI | Y | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Authorization for participant to change an IOPS group call in-progress to IOPS emergency group call (see NOTE 2) | Y | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Authorization for MC services in the IOPS mode of operation | Y | Y | Y
Clause 10.2.2.3 | Authorization for participant to indicate availability of connectivity information | Y | Y | Y
Clause 10.2.2.3 | Authorization for participant to request priority state | Y | Y | Y
NOTE 1: If this parameter is not configured, authorization to use the group shall be obtained from the identity management server identified in the initial MC service UE configuration data configured in 3GPP TS 23.280 [3].
NOTE 2: This parameter only applies for the MCPTT service.

A.4 Group configuration data

As specified in 3GPP TS 23.280 [3], the group configuration data is stored in the group management server. The group management server is used to configure the group configuration data to the MC service UE (CSC-2). The group configuration data can be configured offline using the CSC-12 reference point. The common group configuration data specified in table A.4-1 in 3GPP TS 23.280 [3] is also used, as needed, in the IOPS mode of operation. Table A.4-1 described below contains additional group configuration data required to support MC services in the IOPS mode of operation.
Table A.4-1: Group configuration data (IOPS)

Reference | Parameter description | MC service UE | Group management server
[R-10-001] of 3GPP TS 22.280 [11] | List of IOPS MC service groups | Y | Y
 | > IOPS MCPTT Group ID | |
Clause 8.1.3 | >> IOPS group IP multicast address | Y | Y
 | >> Preferred voice codecs for IOPS MCPTT group | Y | Y
 | >> Indication whether emergency group call is permitted on the IOPS MCPTT group | Y | Y
 | > IOPS MCData Group ID | |
Clause 8.1.3 | >> IOPS group IP multicast address | Y | Y
 | >> MCData sub-services and features enabled for the group | |
 | >>> Short data service enabled | Y | Y
 | >>> Whether MCData user is permitted to transmit data in the group | Y | Y
 | >>> Maximum amount of data that the MCData user can transmit in a single request during group communication | Y | Y
 | >>> Maximum amount of time that the MCData user can transmit in a single request during group communication | Y | Y

A.5 MC service configuration data

As specified in 3GPP TS 23.280 [3], the configuration management server is used to configure the MC service configuration data to the MC service UE (CSC-4). The MC service configuration data can be configured offline using the CSC-11 reference point. Tables A.5-1 and A.5-2 describe the configuration data required to support in IOPS the use of the MCPTT service and the MCData service, respectively.

Table A.5-1: MCPTT service configuration data (IOPS)

Reference | Parameter description | MCPTT UE | Configuration management server
[R-10-001] of 3GPP TS 22.280 [11] | Max IOPS private call (with floor control) duration | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Hang timer for private calls in IOPS | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Priority hierarchy for floor control override in IOPS | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Transmit time limit from a single request to transmit in a group or private call | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Configuration of warning time before time limit of transmission is reached in an IOPS call | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Configuration of warning time before hang time is reached in an IOPS call | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Configuration of metadata to log | Y | Y

Table A.5-2: MCData service configuration data (IOPS)

Reference | Parameter description | MCData UE | Configuration management server
[R-10-001] of 3GPP TS 22.280 [11] | Transmission and reception control | |
 | > Time limit for the temporarily stored data waiting to be delivered to a receiving user | Y | Y
 | > Timer for periodic announcement with the list of available recently invited data group communications | Y | Y

A.6 Initial MC service UE configuration data

Initial MC service UE configuration data is essential to the MC service UE to successfully connect to the MC system, as described in 3GPP TS 23.280 [3] and 3GPP TS 23.289 [12]. The configuration data defined in table A.6-1 is additionally provided to the MC service UE's clients to successfully connect to the IOPS MC system in the IOPS mode of operation. The MC service UE's clients (e.g. MC service client, IOPS connectivity client) obtain the data during the bootstrap process (described in clause 10.1.1 in 3GPP TS 23.280 [3]), and can be configured on the MC service UE offline using the CSC-11 reference point or via other means, e.g. as part of the MC service client's provisioning on the UE, using a device management procedure.
Table A.6-1: Initial MC service UE configuration data (IOPS)

Reference | Parameter description
Clause 5.4.3 | PDN connectivity information in IOPS (see NOTE 1)
 | > IOPS HPLMN ID and optionally IOPS VPLMN ID to which the data pertains
 | > MC services PDN in IOPS
 | >> APN
 | >> PDN access credentials
 | PDU session information in IOPS (see NOTE 2)
 | > IOPS HPLMN ID and optionally IOPS VPLMN ID to which the data pertains
 | > MC service PDU in IOPS
 | >> DNN
 | >> PDU access credentials
 | > Default configured slice(s) S-NSSAI(s)
 | Application plane server identity information
 | > Indication of whether the UE shall use IPv4 or IPv6 for the support of MC services in IOPS
 | > IOPS MC connectivity function
 | >> Server URI
NOTE 1: These configurations shall only be used to access via the IOPS EPS system.
NOTE 2: These configurations shall only be used to access via the IOPS 5GS system.

A.7 Location user profile configuration data

The location user profile configuration data defined in annex A.8 of 3GPP TS 23.280 [3] is applicable as needed in the IOPS mode of operation.

Annex B (informative): Change history

Date | Meeting | TDoc | CR | Rev | Cat | Subject/Comment | New version
2019-09 | SA6#33 | | | | | TS skeleton | 0.0.0
2019-09 | SA6#33 | | | | | Implementation of the following pCRs approved by SA6: S6-191814, S6-191821, S6-191815, S6-191816, S6-191817, S6-191818, S6-191851, S6-191820, S6-191822. Editorial changes by the rapporteur. | 0.1.0
2019-11 | SA6#34 | | | | | Implementation of the following pCRs approved by SA6: S6-192226, S6-192227, S6-192228, S6-192229, S6-192230, S6-192231, S6-192233, S6-192349, S6-192350. Editorial changes by the rapporteur. | 0.2.0
2020-01 | SA6#35 | | | | | Implementation of the following pCRs approved by SA6: S6-200280, S6-200185, S6-200186, S6-200187, S6-200097, S6-200098. Editorial changes by the rapporteur. | 0.3.0
2020-04 | SA6#36 BIS-e | | | | | Implementation of the following pCRs approved by SA6: S6-200588, S6-200589, S6-200590, S6-200591, S6-200561, S6-200562, S6-200563. Editorial changes by the rapporteur. | 0.4.0
2020-05 | SA6#37-e | | | | | Implementation of the following pCRs approved by SA6: S6-200928, S6-200933. Editorial changes by the rapporteur. | 0.5.0
2020-06 | SA#88-e | SP-200334 | | | | Presentation for information at SA#88-e | 1.0.0
2020-07 | SA6#38-e | | | | | Implementation of the following pCRs approved by SA6: S6-201084, S6-201085, S6-201087, S6-201088, S6-201108, S6-201109, S6-201110. Editorial changes by the rapporteur. | 1.1.0
2020-09 | SA6#39-e | | | | | Implementation of the following pCRs approved by SA6: S6-201432, S6-201433, S6-201434, S6-201435, S6-201532. Editorial changes by the rapporteur. | 1.2.0
2020-09 | SA#89-e | SP-200826 | | | | Presentation for approval at SA#89-e | 2.0.0
2020-09 | SA#89-e | SP-200826 | | | | MCC Editorial update for publication after TSG SA approval (SA#89) | 17.0.0
2024-05 | | | | | | Update to Rel-18 version (MCC) | 18.0.0
2024-12 | SA#106 | SP-241726 | 0001 | 1 | B | Updates to sections 10.6.1.4 and 10.7 to support generic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0002 | 2 | B | Updates to sections 10.5.1.3, 10.5.1.4, 10.5.3.1, and 10.5.3.3 to support generic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0003 | | B | Clause 4 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0004 | | B | Clause 5 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0005 | | B | Clause 6 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0006 | 1 | B | Clause 7 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0007 | | B | Clauses 10.2.3, 10.3.3 and 10.6.1.3.2 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0008 | 1 | B | Clause A.1, A.2 and A.6 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0010 | 1 | B | Updates to clauses 2 and 3 (References, Definitions, Abbreviations) to support access agnostic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0011 | 1 | B | Updates to clause 1 (Scope) to support access generic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0012 | 1 | B | Update the IOPS network deployment | 19.0.0
2024-12 | SA#106 | SP-241726 | 0013 | 1 | B | Update MBMS transmissions aspect | 19.0.0
2024-12 | SA#106 | SP-241726 | 0014 | 1 | B | Updates to clause 10.2.2.3 (IOPS discovery request) to support access generic IOPS | 19.0.0
5817f2cb61f7e615e7879961cd761fa6
23.203
1 Scope
The present document specifies the overall stage 2 level functionality for Policy and Charging Control that encompasses the following high level functions for IP‑CANs (e.g. GPRS, Fixed Broadband, EPC, etc.):
- Flow Based Charging for network usage, including charging control and online credit control, for service data flows and application traffic;
- Policy control (e.g. gating control, QoS control, QoS signalling, etc.).

The present document specifies the Policy and Charging Control functionality for the Evolved 3GPP Packet Switched domain, including both 3GPP accesses (GERAN/UTRAN/E-UTRAN) and Non-3GPP accesses, according to TS 23.401 [17] and TS 23.402 [18].

The present document specifies functionality for unicast bearers. Broadcast and multicast bearers, such as MBMS contexts for GPRS, are out of scope of the present document.

NOTE: For E-UTRAN access, the usage of functionalities covered in this specification for features such as MBMS, CIoT and V2X is described in TS 23.246 [6], TS 23.682 [42] and TS 23.285 [48], respectively.
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non‑specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.

[1] 3GPP TS 41.101: "Technical Specifications and Technical Reports for a GERAN-based 3GPP system".
[2] Void.
[3] 3GPP TS 32.240: "Telecommunication management; Charging management; Charging architecture and principles".
[4] IETF RFC 4006: "Diameter Credit-Control Application".
[5] 3GPP TS 23.207: "End-to-end Quality of Service (QoS) concept and architecture".
[6] 3GPP TS 23.246: "Multimedia Broadcast/Multicast Service (MBMS); Architecture and functional description".
[7] 3GPP TS 23.125: "Overall high level functionality and architecture impacts of flow based charging; Stage 2".
[8] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[9] 3GPP TS 32.251: "Telecommunication management; Charging management; Packet Switched (PS) domain charging".
[10] 3GPP TS 29.061: "Interworking between the Public Land Mobile Network (PLMN) supporting packet based services and Packet Data Networks (PDN)".
[11] 3GPP TR 33.919: "3G Security; Generic Authentication Architecture (GAA); System description".
[12] 3GPP TS 23.060: "General Packet Radio Service (GPRS); Service description; Stage 2".
[13] Void.
[14] 3GPP TS 23.107: "Quality of Service (QoS) concept and architecture".
[15] "WiMAX End-to-End Network Systems Architecture" (http://www.wimaxforum.org/technology/documents).
[16] 3GPP TS 23.003: "Numbering, addressing and identification".
[17] 3GPP TS 23.401: "General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access".
[18] 3GPP TS 23.402: "Architecture Enhancements for non-3GPP accesses".
[19] 3GPP TS 36.300: "Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2".
[20] 3GPP2 X.S0057-B v2.0: "E UTRAN - HRPD Connectivity and Interworking: Core Network Aspects", July 2014.
[21] 3GPP TS 23.167: "IP Multimedia Subsystem (IMS) emergency sessions".
[22] 3GPP TS 29.213: "Policy and Charging Control signalling flows and QoS parameter mapping".
[23] 3GPP TS 23.261: "IP Flow Mobility and seamless WLAN offload; Stage 2".
[24] 3GPP TS 23.198: "Open Service Access (OSA); Stage 2".
[25] 3GPP TS 23.335: "User Data Convergence (UDC); Technical realization and information flows; Stage 2".
[26] 3GPP TS 29.335: "User Data Convergence (UDC); User Data Repository Access Protocol over the Ud interface; Stage 3".
[27] 3GPP TS 22.115: "Service aspects; Charging and billing".
[28] 3GPP TS 23.216: "Single Radio Voice Call Continuity (SRVCC); Stage 2".
[29] 3GPP TS 23.139: "3GPP-Fixed Broadband Access Interworking".
[30] Broadband Forum TR-203: "Interworking between Next Generation Fixed and 3GPP Wireless Access" (work in progress).
[31] Broadband Forum TR-134: "Policy Control Framework" (work in progress).
[32] 3GPP TS 25.467: "UTRAN architecture for 3G Home Node B (HNB); Stage 2".
[33] Broadband Forum TR-291: "Nodal Requirements for Interworking between Next Generation Fixed and 3GPP Wireless Access" (work in progress).
[34a] Broadband Forum TR-124 issue 2: "Functional Requirements for Broadband Residential Gateway Devices".
[34b] Broadband Forum TR-124 issue 3: "Functional Requirements for Broadband Residential Gateway Devices".
[35] Broadband Forum TR-101: "Migration to Ethernet-Based Broadband Aggregation".
[36] Broadband Forum TR-146: "Internet Protocol (IP) Sessions".
[37] Broadband Forum TR-300: "Nodal Requirements for Converged Policy Management".
[38] 3GPP TS 22.278: "Service requirements for the Evolved Packet System (EPS)".
[39] 3GPP TS 23.228: "IP Multimedia Subsystem (IMS); Stage 2".
[40] Broadband Forum TR-092: "Broadband Remote Access Server (BRAS) Requirements Document".
[41] Broadband Forum TR-134: "Broadband Policy Control Framework (BPCF)".
[42] 3GPP TS 23.682: "Architecture enhancements to facilitate communications with packet data networks and applications".
[43] 3GPP TS 23.161: "Network-based IP flow mobility and Wireless Local Area Network (WLAN) offload; Stage 2".
[44] 3GPP TS 23.303: "Proximity-based services (ProSe); Stage 2".
[45] 3GPP TS 26.114: "Multimedia telephony over IP Multimedia Subsystem (IMS); Multimedia telephony; media handling and interaction".
[46] 3GPP TS 23.179: "Functional architecture and information flows to support mission-critical communication service; Stage 2".
[47] IETF RFC 6066: "Transport Layer Security (TLS) Extensions: Extension Definitions".
[48] 3GPP TS 23.285: "Architecture enhancements for V2X services".
[49] 3GPP TS 22.011: "Service accessibility".
[50] 3GPP TS 24.008: "Mobile radio interface Layer 3 specification; Core network protocols; Stage 3".
[51] 3GPP TS 22.261: "Service requirements for the 5G system; Stage 1".
[52] 3GPP TS 23.272: "Circuit Switched (CS) fallback in Evolved Packet System (EPS); Stage 2".
[53] 3GPP TS 26.238: "Uplink streaming".
[54] 3GPP TR 26.939: "Guidelines on the Framework for Live Uplink Streaming (FLUS)".
[55] 3GPP TS 23.221: "3rd Generation Partnership Project; Technical Specification Group Services and Systems Aspects; Architectural Requirements".
[56] 3GPP TS 23.204: "Support of Short Message Service (SMS) over generic Internet Protocol (IP) access; Stage 2".
3 Definitions, symbols and abbreviations
3.1 Definitions
For the purposes of the present document, the terms and definitions given in TR 21.905 [8] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905 [8].

application detection filter: A logic used to detect packets generated by an application based on extended inspection of these packets, e.g. header and/or payload information, as well as dynamics of packet flows. The logic is entirely internal to a TDF or a PCEF enhanced with ADC and is out of scope of this specification.

application identifier: An identifier referring to a specific application detection filter.

application service provider: A business entity responsible for the application that is being / will be used by a UE, which may be either an AF operator or has an association with the AF operator.

ADC decision: A decision consists of references to ADC rules, associated enforcement actions (for dynamic ADC rules) and TDF session attributes and is provided by the PCRF to the TDF for application detection and control.

ADC rule: A set of information enabling the detection of application traffic and associated enforcement actions. ADC rules are directly provisioned into the TDF and referenced by the PCRF.

authorised QoS: The maximum QoS that is authorised for a service data flow. In case of an aggregation of multiple service data flows within one IP‑CAN bearer (e.g. for GPRS a PDP context), the combination of the "Authorised QoS" information of the individual service data flows is the "Authorised QoS" for the IP‑CAN bearer. It contains the QoS class identifier and the data rate.

binding: The association between a service data flow and the IP‑CAN bearer (for GPRS the PDP context) transporting that service data flow.

binding mechanism: The method for creating, modifying and deleting bindings.
charging control: The process of associating packets, belonging to a service data flow, to a charging key and applying online charging and/or offline charging, as appropriate.

charging key: Information used by the online and offline charging system for rating purposes.

detected application traffic: An aggregate set of packet flows that are generated by a given application and detected by an application detection filter.

dynamic ADC Rule: An ADC rule for which the PCRF can provide and modify some parameters via the Sd reference point.

dynamic PCC Rule: A PCC rule for which the definition is provided to the PCEF via the Gx reference point.

event report: A notification, possibly containing additional information, of an event which occurs that corresponds with an event trigger. Also, an event report is a report from the PCRF to the AF concerning transmission resources or requesting additional information.

event trigger: A rule specifying the event reporting behaviour of a PCEF or BBERF or TDF. Also, it is a trigger for credit management events.

gating control: The process of blocking or allowing packets, belonging to a service data flow / detected application's traffic, to pass through to the desired endpoint.

Gateway Control Session: An association between a BBERF and a PCRF (when GTP is not used in the EPC), used for transferring access specific parameters, BBERF events and QoS rules between PCRF and BBERF.

GBR bearer: An IP‑CAN bearer with reserved (guaranteed) bitrate resources.

GPRS IP‑CAN: This IP‑CAN incorporates GPRS over GERAN and UTRAN, see TS 23.060 [12].

IP‑CAN bearer: An IP transmission path of defined capacity, delay and bit error rate, etc. See TR 21.905 [8] for the definition of bearer.

IP‑CAN session: The association between a UE and an IP network. The association is identified by one IPv4 address and/or an IPv6 prefix, together with UE identity information, if available, and a PDN represented by a PDN ID (e.g. an APN). An IP‑CAN session incorporates one or more IP‑CAN bearers. Support for multiple IP‑CAN bearers per IP‑CAN session is IP‑CAN specific. An IP‑CAN session exists as long as UE IP addresses/prefix are established and announced to the IP network.

non-GBR bearer: An IP‑CAN bearer with no reserved (guaranteed) bitrate resources.

operator-controlled service: A service for which complete PCC rule information, including service data flow filter information, is available in the PCRF through configuration and/or dynamic interaction with an AF.

packet flow: A specific user data flow from and/or to the UE.

Presence Reporting Area: An area defined within the 3GPP Packet Domain for the purposes of reporting of UE presence within that area due to policy control and/or charging reasons. There are two types of Presence Reporting Areas: "UE-dedicated Presence Reporting Areas" and "Core Network pre-configured Presence Reporting Areas". They are further defined in TS 23.401 [17].

PCC decision: A decision consists of PCC rules and IP‑CAN bearer attributes and is provided by the PCRF to the PCEF for policy and charging control and, for PCEF enhanced with ADC, application detection and control.

PCC rule: A set of information enabling the detection of a service data flow and providing parameters for policy control and/or charging control and, for PCEF enhanced with ADC, for application detection and control.

PCEF enhanced with ADC: PCEF, enhanced with the application detection and control feature.

policy control: The process whereby the PCRF indicates to the PCEF how to control the IP‑CAN bearer. Policy control includes QoS control and/or gating control.

predefined PCC Rule: A PCC rule that has been provisioned directly into the PCEF by the operator.

policy counter: A mechanism within the OCS to track spending applicable to a subscriber.

policy counter identifier: A reference to a policy counter in the OCS for a subscriber.
policy counter status: A label whose values are not standardized and that is associated with a policy counter's value relative to the spending limit(s) (the number of possible policy counter status values for a policy counter is one greater than the number of thresholds associated with that policy counter, i.e. policy counter status values describe the status around the thresholds). This is used to convey information relating to subscriber spending from OCS to PCRF. Specific labels are configured jointly in OCS and PCRF.

Packet Flow Description (PFD): A set of information enabling the detection of application traffic provided by a 3rd party service provider. A PFD is further defined in TS 23.682 [42].

QoS class identifier (QCI): A scalar that is used as a reference to a specific packet forwarding behaviour (e.g. packet loss rate, packet delay budget) to be provided to a SDF. This may be implemented in the access network by the QCI referencing node specific parameters that control packet forwarding treatment (e.g. scheduling weights, admission thresholds, queue management thresholds, link layer protocol configuration, etc.) that have been pre-configured by the operator at a specific node(s) (e.g. eNodeB).

QoS rule: A set of information enabling the detection of a service data flow and defining its associated QoS parameters.

Monitoring key: Information used by the PCEF, TDF and PCRF for usage monitoring control purposes as a reference to a given set of service data flows or application(s), that all share a common allowed usage on a per UE and APN basis.

RAN user plane congestion: RAN user plane congestion occurs when the demand for RAN resources exceeds the available RAN capacity to deliver the user data for a prolonged period of time.

NOTE 1: Short-duration traffic bursts are a normal condition at any traffic load level and are not considered to be RAN user plane congestion. Likewise, a high level of utilization of RAN resources (based on operator configuration) is considered a normal mode of operation and might not be RAN user plane congestion.

Redirection: Redirect the detected service traffic to an application server (e.g. redirect to a top-up / service provisioning page).

service data flow: An aggregate set of packet flows carried through the PCEF that matches a service data flow template.

service data flow filter: A set of packet flow header parameter values/ranges used to identify one or more of the packet flows. The possible service data flow filters are defined in clause 6.2.2.2.

service data flow filter identifier: A scalar that is unique for a specific service data flow (SDF) filter (used on Gx and Gxx) within an IP‑CAN session.

service data flow template: The set of service data flow filters in a PCC Rule, or an application identifier in a PCC rule referring to an application detection filter, required for defining a service data flow.

service identifier: An identifier for a service. The service identifier provides the most detailed identification, specified for flow based charging, of a service data flow. A concrete instance of a service may be identified if additional AF information is available (further details to be found in clause 6.3.1).

session based service: An end user service requiring application level signalling, which is separated from service rendering.

spending limit: A spending limit is the usage limit of a policy counter (e.g. monetary, volume, duration) that a subscriber is allowed to consume.

spending limit report: A notification, containing the current policy counter status, generated from the OCS to the PCRF via the Sy reference point.

subscribed guaranteed bandwidth QoS: The per subscriber, authorized cumulative guaranteed bandwidth QoS which is provided by the SPR/UDR to the PCRF.

subscriber category: A means to group the subscribers into different classes, e.g. the gold user, the silver user and the bronze user.

(S)Gi-LAN: The network infrastructure connected to the 3GPP network over the SGi or Gi reference point that provides various IP-based services.

(S)Gi-LAN service function: A function located in the (S)Gi-LAN that provides value-added IP-based services, e.g. NAT, anti-malware, parental control, DDoS protection.

TDF session: An association between an IP-CAN session and the assigned TDF for the purpose of application detection and control.

uplink bearer binding verification: The network enforcement of terminal compliance with the negotiated uplink traffic mapping to bearers.

For the purposes of the present document, the following terms and definitions given in TS 23.401 [17] apply:

Narrowband-IoT: See TS 23.401 [17].

WB-E-UTRAN: See TS 23.401 [17].
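The rule in the policy counter status definition (the number of possible status values is one greater than the number of configured thresholds) can be sketched as follows. The labels and threshold values are illustrative assumptions: the specification leaves status values unstandardized, configured jointly in OCS and PCRF.

```python
# Illustrative sketch (not normative): derive a policy counter status label
# from the counter value and its configured spending thresholds. With N
# thresholds there are N + 1 intervals, hence N + 1 possible status values.

def policy_counter_status(value, thresholds, labels):
    """thresholds: ascending spending thresholds; labels: one label per
    interval, i.e. len(labels) == len(thresholds) + 1."""
    assert len(labels) == len(thresholds) + 1
    for threshold, label in zip(sorted(thresholds), labels):
        if value < threshold:
            return label       # value is below this threshold
    return labels[-1]          # value is at or above every threshold

# Two thresholds give three possible status values (example labels only)
labels = ["low-spend", "mid-spend", "limit-reached"]
```

For example, with thresholds 10 and 20, a counter value of 15 maps to the middle status, matching the "status around the thresholds" wording above.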
3.2 Abbreviations
For the purposes of the present document, the abbreviations given in TR 21.905 [8] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 [8].

ADC Application Detection and Control
AF Application Function
AMBR Aggregated Maximum Bitrate
ARP Allocation and Retention Priority
ASP Application Service Provider
BBERF Bearer Binding and Event Reporting Function
BBF Bearer Binding Function
BBF AAA Broadband Forum AAA
BNG Broadband Network Gateway
BPCF Broadband Policy Control Function
BRAS Broadband Remote Access Server
CSG Closed Subscriber Group
CSG ID Closed Subscriber Group Identity
DRA Diameter Routing Agent
E-UTRAN Evolved Universal Terrestrial Radio Access Network
H-PCEF A PCEF in the HPLMN
H-PCRF A PCRF in the HPLMN
HRPD High Rate Packet Data
HSGW HRPD Serving Gateway
IP‑CAN IP Connectivity Access Network
MPS Multimedia Priority Service
NB-IoT Narrowband IoT
NBIFOM Network-based IP flow mobility
NSWO Non-Seamless WLAN Offload
OAM Operation Administration and Maintenance
OFCS Offline Charging System
OCS Online Charging System
PCC Policy and Charging Control
PCEF Policy and Charging Enforcement Function
PCRF Policy and Charging Rules Function
PFDF Packet Flow Description Function
PRA Presence Reporting Area
QCI QoS Class Identifier
RAN Radio Access Network
RCAF RAN Congestion Awareness Function
RLOS Restricted Local Operator Services
RUCI RAN User Plane Congestion Information
RG Residential Gateway
SCEF Service Capability Exposure Function
vSRVCC video Single Radio Voice Call Continuity
SPR Subscription Profile Repository
TDF Traffic Detection Function
TSSF Traffic Steering Support Function
UDC User Data Convergence
UDR User Data Repository
V-PCEF A PCEF in the VPLMN
V-PCRF A PCRF in the VPLMN
WB-E-UTRAN Wide Band E-UTRAN
4 High level requirements
4.1 General requirements
It shall be possible for the PCC architecture to base decisions upon subscription information.

It shall be possible to apply policy and charging control to any kind of 3GPP IP‑CAN and any non-3GPP accesses connected via EPC complying with TS 23.402 [18]. Applicability of PCC to other IP‑CANs is not restricted. However, it shall be possible for the PCC architecture to base decisions upon the type of IP‑CAN used (e.g. GPRS, etc.).

The policy and charging control shall be possible in the roaming and local breakout scenarios defined in TS 23.401 [17] and TS 23.402 [18].

The PCC architecture shall discard packets that don't match any service data flow of the active PCC rules. It shall also be possible for the operator to define PCC rules, with wild-carded service data flow filters, to allow for the passage and charging for packets that do not match any service data flow template of any other active PCC rules.

The PCC architecture shall allow the charging control to be applied on a per service data flow and on a per application basis, independent of the policy control.

The PCC architecture shall have a binding method that allows the unique association between service data flows and their IP‑CAN bearer.

A single service data flow detection shall suffice for the purpose of both policy control and flow based charging.

A PCC rule may be predefined or dynamically provisioned at establishment and during the lifetime of an IP‑CAN session. The latter is referred to as a dynamic PCC rule.

The number of real-time PCC interactions shall be minimized although not significantly increasing the overall system reaction time. This requires optimized interfaces between the PCC nodes.

It shall be possible to take a PCC rule into service and out of service, at a specific time of day, without any PCC interaction at that point in time.

It shall be possible to take APN-related policy information into service and out of service, once validity conditions specified as part of the APN-related policy information are fulfilled or not fulfilled anymore, respectively, without any PCC interaction at that point in time.

PCC shall be enabled on a per PDN basis (represented by an access point and the configured range of IP addresses) at the PCEF. It shall be possible for the operator to configure the PCC architecture to perform charging control, policy control or both for a PDN access.

PCC shall support roaming users.

The PCC architecture shall allow the resolution of conflicts which would otherwise cause a subscriber's Subscribed Guaranteed Bandwidth QoS to be exceeded.

The PCC architecture shall support topology hiding.

It should be possible to use the PCC architecture for handling IMS-based emergency service and Restricted Local Operator Services.

It shall be possible with the PCC architecture, in real-time, to monitor the overall amount of resources that are consumed by a user and to control usage independently from charging mechanisms, the so-called usage monitoring control.

It shall be possible for the PCC architecture to provide application awareness even when there is no explicit service level signalling.

The PCC architecture shall support making policy decisions based on subscriber spending limits.

The PCC architecture shall support making policy decisions based on RAN user plane congestion status.

The PCC architecture shall support making policy decisions for the multi-access IP flow mobility solution described in TS 23.161 [43].

The PCC architecture shall support making policy decisions for (S)Gi-LAN traffic steering.
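The discard and wild-card requirements above can be sketched as follows. The rule and packet structures are illustrative assumptions, not the Gx-level encoding; a filter field set to None stands in for a wild-carded service data flow filter component.

```python
# Illustrative sketch (not normative): service data flow detection against
# active PCC rules. A rule's filter matches on a simplified header tuple;
# a field set to None is wild-carded, so an all-None filter plays the role
# of the operator-defined wild-carded "catch-all" PCC rule.

def filter_matches(flt, packet):
    """True if every non-wild-carded filter field equals the packet field."""
    return all(flt[k] is None or flt[k] == packet[k] for k in flt)

def classify(packet, active_rules):
    """Return the name of the first matching active PCC rule, or None.
    None models the requirement that unmatched packets are discarded."""
    for rule in active_rules:       # rules assumed ordered by precedence
        if any(filter_matches(f, packet) for f in rule["filters"]):
            return rule["name"]
    return None                     # no match: packet is discarded

# Example rule set: a specific rule plus a wild-carded catch-all rule
rules = [
    {"name": "voice", "filters": [{"dst_port": 5060, "proto": "udp"}]},
    {"name": "default", "filters": [{"dst_port": None, "proto": None}]},
]
```

Without the "default" rule, a packet matching no filter is classified as None, i.e. discarded; with it, unmatched traffic passes under the wild-carded rule and can be charged.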
23.203
4.2 Charging related requirements
4.2.1 General
In order to allow for charging control on a service data flow, the information in the PCC rule identifies the service data flow and specifies the parameters for charging control. The PCC rule information may depend on subscription data.

In order to allow for charging control on detected application traffic identified by an ADC rule for the TDF, the information in the ADC rule contains the application identifier and specifies the parameters for charging control. The ADC rule information may depend on subscription data.

For the purpose of charging correlation between application level (e.g. IMS) and service data flow level, applicable charging identifiers shall be passed along within the PCC architecture, if such identifiers are available.

For the purpose of charging correlation between service data flow level and application level (e.g. IMS), as well as online charging support at the application level, applicable charging identifiers and IP‑CAN type identifiers shall be passed from the PCRF to the AF, if such identifiers are available.
4.2.2 Charging models
The PCC charging shall support the following charging models, both for charging performed by the PCEF and charging performed by the TDF:

- Volume based charging;

- Time based charging;

- Volume and time based charging;

- Event based charging;

- No charging.

NOTE 1: The charging model "No charging" implies that charging control is not applicable.

Shared revenue services shall be supported. In this case settlement for all parties shall be supported, including any third parties that may have been involved in providing the services.

NOTE 2: When developing a charging solution, the PCC charging models may be combined to form the solution. How to achieve a specific solution is however not within the scope of this TS.

NOTE 3: The Event based charging defined in this specification applies only to session based charging as defined by the charging specifications.

4.2.2a Charging requirements

The requirements in this clause apply to both PCC rules based charging and ADC rules based charging, unless exceptions are explicitly mentioned.

It shall be possible to apply different rates and charging models when a user is identified to be roaming from when the user is in the home network. Furthermore, it shall be possible to apply different rates and charging models based on the location of a user, beyond the granularity of roaming.

It shall be possible to apply different rates and charging models when a user consumes network services via a CSG cell or a hybrid cell, according to the user CSG information. User CSG information includes the CSG ID, access mode and CSG membership indication.

It shall be possible to apply a separate rate to a specific service, e.g. to allow the user to download a certain volume of data, reserved for the purpose of one service, for free, and then continue with a rate causing a charge.

It shall be possible to change the rate based on the time of day.
It shall be possible to enforce usage limits per service (identified by a PCC rule) or per application (identified by an ADC rule) for a service data flow using online charging, on a per user basis (this may apply to prepaid and post-paid users).

It shall be possible to apply different rates depending on the access used to carry a service data flow. This also applies to a PDN connection supporting NBIFOM.

It shall be possible for the online charging system to set and send the thresholds (time and/or volume based) for the amount of remaining credit to the PCEF or TDF for monitoring. In case the PCEF or TDF detects that any of the time based or volume based credit falls below the threshold, the PCEF or TDF shall send a request for credit re-authorization to the OCS with the remaining credit (time and/or volume based).

It shall be possible for the charging system to select the applicable rate based on:

- home/visited IP‑CAN;

- user CSG information;

- IP‑CAN bearer characteristics (e.g. QoS);

- QoS provided for the service;

- time of day;

- IP‑CAN specific parameters according to Annex A.

IP‑CAN bearer characteristics are not applicable to charging performed in the TDF.

NOTE 1: The same IP‑CAN parameters related to access network/subscription/location information as reported for service data flow based charging may need to be reported for application based charging at the beginning of the session and following any of the relevant re-authorization triggers.

The charging system maintains the tariff information, determining the rate based on the above input. Thus the rate may change, e.g. as a result of an IP‑CAN session modification that changes the bearer characteristics provided for a service data flow. The charging rate or charging model applicable to a service data flow/detected application traffic may change as a result of events in the service (e.g. insertion of a paid advertisement within a user requested media stream).
The charging model applicable to a service data flow/detected application traffic may change as a result of events identified by the OCS (e.g. after having spent a certain amount of time and/or volume, the user gets to use some services for free).

The charging rate or charging model applicable to a service data flow/detected application traffic may change as a result of having used the service data flow/detected application traffic for a certain amount of time and/or volume.

For online charging, it shall be possible to apply an online charging action upon PCEF or TDF events (e.g. re-authorization upon QoS change). It shall be possible to apply an online charging action for a detected application upon Application Start/Stop events.

It shall be possible to indicate to the PCEF or TDF that interactions with the charging systems are not required for a PCC or ADC rule, i.e. to perform neither accounting nor credit control for the service data flow/detected application traffic, in which case no offline charging information is generated.

This specification supports charging and enforcement being done in either the PCEF or the TDF for a certain IP‑CAN session, but not in both for the same IP‑CAN session (this applies to all IP‑CAN sessions belonging to the same APN).

NOTE 2: The above requirement is to ensure that there is no double charging in both the TDF and the PCEF, or over-charging in case of packets discarded at the PCEF or TDF.
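As a rough illustration of rate selection from the inputs listed above (home/visited network, time of day), the charging system can be thought of as a tariff lookup. The function name and the tariff values below are arbitrary assumptions for illustration only, not part of the specification.

```python
# Illustrative sketch: a charging system selecting a per-MB rate based on
# two of the inputs named above, home/visited IP-CAN and time of day.
# All tariff values are made-up examples.
def select_rate(roaming: bool, hour: int) -> float:
    """Return a hypothetical per-MB rate for the given conditions."""
    if roaming:
        return 0.50                          # visited-network rate, any time
    return 0.02 if 0 <= hour < 6 else 0.10   # cheaper off-peak home rate
```

A real charging system would also consult user CSG information, bearer QoS and IP‑CAN specific parameters, and the tariff table would be operator-configured rather than hard-coded.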
4.2.3 Examples of Service Data Flow Charging
There are many different services that may be used within a network, including both user-user and user-network services. Service data flows from these services may be identified and charged in many different ways. A number of examples of configuring PCC rules for different service data flows are described below.

EXAMPLE 1: A network server provides an FTP service. The FTP server supports both the active (separate ports for control and data) and passive modes of operation. A PCC rule is configured for the service data flows associated with the FTP server for the user. The PCC rule uses a filter specification for the uplink that identifies packets sent to port 20 or 21 of the IP address of the server, and the origination information is wildcarded. In the downlink direction, the filter specification identifies packets sent from port 20 or 21 of the IP address of the server.

EXAMPLE 2: A network server provides a "web" service. A PCC rule is configured for the service data flows associated with the HTTP server for the user. The PCC rule uses a filter specification for the uplink that identifies packets sent to port 80 of the IP address of the server, and the origination information is wildcarded. In the downlink direction, the filter specification identifies packets sent from port 80 of the IP address of the server.

EXAMPLE 3: The same server provides a WAP service. The server has multiple IP addresses, and the IP address of the WAP server is different from the IP address of the web server. The PCC rule uses the same filter specification as for the web server, but with the IP addresses of the WAP server only.

EXAMPLE 4: An operator offers a zero rating for a network provided DNS service. A PCC rule is established setting all DNS traffic to/from the operator's DNS servers as offline charged. The data flow filter identifies the DNS port number and the source/destination address within the subnet range allocated to the operator's network nodes.
EXAMPLE 5: An operator has a specific charging rate for user-user VoIP traffic over the IMS. A PCC rule is established for this service data flow. The filter information to identify the specific service data flow for the user-user traffic is provided by the P‑CSCF (AF).

EXAMPLE 6: An operator is implementing UICC based authentication mechanisms for HTTP based services, utilizing the GAA Framework as defined in TR 33.919 [11], e.g. by using the Authentication Proxy. The Authentication Proxy may appear as an AF and provide information to the PCRF for the purpose of selecting an appropriate PCC rule.
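EXAMPLE 1 and EXAMPLE 2 can be written down as concrete filter specifications. The dictionary layout below is a hypothetical representation (not a format defined by this specification): None denotes a wildcarded component, and the server addresses are placeholder values.

```python
# Hedged sketch of the FTP (EXAMPLE 1) and HTTP (EXAMPLE 2) filter
# specifications: uplink filters identify packets sent TO the server
# ports with wildcarded origination; downlink filters identify packets
# sent FROM the server ports. Addresses are placeholders.
FTP_SERVER_IP = "192.0.2.20"   # hypothetical FTP server address
WEB_SERVER_IP = "192.0.2.10"   # hypothetical HTTP server address

ftp_pcc_rule = {
    "uplink":   {"src_ip": None, "dst_ip": FTP_SERVER_IP, "dst_ports": (20, 21)},
    "downlink": {"src_ip": FTP_SERVER_IP, "src_ports": (20, 21), "dst_ip": None},
}

http_pcc_rule = {
    "uplink":   {"src_ip": None, "dst_ip": WEB_SERVER_IP, "dst_ports": (80,)},
    "downlink": {"src_ip": WEB_SERVER_IP, "src_ports": (80,), "dst_ip": None},
}
```

EXAMPLE 3 would reuse the `http_pcc_rule` shape with the WAP server's addresses substituted.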
4.3 Policy control requirements
4.3.1 General
The policy control features comprise gating control and QoS control. The concept of QoS class identifier and the associated bitrates specify the QoS information for service data flows and bearers on the Gx and Gxx reference points.
4.3.2 Gating control
Gating control shall be applied by the PCEF on a per service data flow basis. To enable PCRF gating control decisions, the AF shall report session events (e.g. session termination, modification) to the PCRF. For example, in gating control, session termination may trigger the blocking of packets, i.e. "closing the gate".
4.3.3 QoS control
4.3.3.1 QoS control at service data flow level
It shall be possible to apply QoS control on a per service data flow basis in the PCEF. QoS control per service data flow allows the PCC architecture to provide the PCEF with the authorized QoS to be enforced for each specific service data flow. Criteria such as the QoS subscription information may be used together with policy rules such as service-based, subscription-based or predefined PCRF internal policies to derive the authorized QoS to be enforced for a service data flow.

It shall be possible to apply multiple PCC rules, without application provided information, using different authorised QoS within a single IP‑CAN session and within the limits of the Subscribed QoS profile.
4.3.3.2 QoS control at IP‑CAN bearer level
It shall be possible for the PCC architecture to support control of QoS reservation procedures (UE-initiated or network-initiated) for IP‑CANs that support such procedures for their IP‑CAN bearers, in the PCEF or the BBERF, if applicable. It shall be possible to determine the QoS to be applied in QoS reservation procedures (QoS control) based on the authorised QoS of the service data flows that are applicable to the IP‑CAN bearer and on criteria such as the QoS subscription information, service based policies and/or predefined PCRF internal policies. Details of QoS reservation procedures are IP‑CAN specific and, therefore, the control of these procedures is described in Annex A and Annex D.

It shall be possible for the PCC architecture to support control of QoS for the packet traffic of IP‑CANs.

The PCC architecture shall be able to provide policy control in the presence of NAT devices. This may be accomplished by providing appropriate address and port information to the PCRF.

The enforcement of the control for QoS reservation procedures for an IP‑CAN bearer shall allow for a downgrading or an upgrading of the requested QoS as part of a UE-initiated IP‑CAN bearer establishment and modification.

The PCC architecture shall be able to provide a mechanism to initiate IP‑CAN bearer establishment and modification (for IP‑CANs that support such procedures for their bearers) as part of the QoS control.

The IP‑CAN shall prevent cyclic QoS upgrade attempts due to failed QoS upgrades.

NOTE: These measures are IP‑CAN specific.

The PCC architecture shall be able to handle IP‑CAN bearers that require a guaranteed bitrate (GBR bearers) and IP‑CAN bearers for which there is no guaranteed bitrate (non-GBR bearers).
4.3.3.3 QoS Conflict Handling
It shall be possible for the PCC architecture to support conflict resolution in the PCRF when the authorized bandwidth associated with multiple PCC rules exceeds the Subscribed Guaranteed bandwidth QoS.
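The conflict condition above amounts to a simple check in the PCRF: does the sum of the guaranteed bitrates authorized across the active PCC rules exceed the subscriber's Subscribed Guaranteed Bandwidth QoS? A minimal sketch, with illustrative names (the resolution strategy itself, e.g. rejecting or downgrading a rule, is operator policy and not modelled here):

```python
# Hypothetical sketch of the conflict detection described above: the PCRF
# compares the total authorized guaranteed bitrate (GBR) of active PCC
# rules against the Subscribed Guaranteed Bandwidth QoS.
def exceeds_subscribed_gbr(rule_gbrs_kbps, subscribed_gbr_kbps):
    """Return True when the authorized bandwidth across PCC rules would
    exceed the subscriber's guaranteed bandwidth, i.e. a conflict exists."""
    return sum(rule_gbrs_kbps) > subscribed_gbr_kbps
```

For example, authorizing two rules of 500 and 600 kbps against a 1000 kbps subscription triggers the conflict; 300 and 600 kbps does not.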
4.3.3.4 QoS control at APN level
It shall be possible for the PCRF to authorize the APN-AMBR to be enforced by the PCEF as defined in TS 23.401 [17]. The APN-AMBR applies to all IP‑CAN sessions of a UE to the same APN and has separate values for the uplink and downlink direction. It shall be possible for the PCRF to provide the authorized APN-AMBR values unconditionally or conditionally, i.e. per IP-CAN type and/or RAT type. It shall be possible for the PCRF to request a change of the unconditional or conditional authorized APN-AMBR value(s) at a specific point in time. The details are specified in clause 6.4b.
4.3.4 Subscriber Spending Limits
It shall be possible to enforce policies based on subscriber spending limits as per TS 22.115 [27]. The OCS shall maintain policy counter(s) to track spending for a subscription. These policy counters must be available in the OCS prior to their use over the Sy interface.

NOTE 1: The mechanism for provisioning the policy counters in the OCS is out of the scope of this document.

NOTE 2: A policy counter in the OCS can represent the spending for one or more services, one or more devices, one or more subscribers, etc. The representation is operator dependent. There is no explicit relationship between Charging-Key and policy counter.

The PCRF shall request information regarding the subscriber's spending from the OCS, to be used as input for dynamic policy decisions for the subscriber, using subscriptions to spending limit reports. The OCS shall make information regarding the subscriber's spending available to the PCRF using spending limit reports.
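The flow above — OCS reports policy counter statuses over Sy, the PCRF uses them as input to a dynamic policy decision — can be sketched as follows. The counter name, status strings and the QCI-downgrade policy are all assumptions for illustration; actual counter identifiers and decision logic are operator dependent.

```python
# Hedged sketch of a PCRF policy decision driven by OCS policy counter
# statuses received via spending limit reports (Sy). The "monthly-spend"
# counter and the downgrade-to-QCI-9 policy are hypothetical examples.
def decide_qci(counter_statuses: dict) -> int:
    """Pick a QoS Class Identifier based on a spending counter status."""
    status = counter_statuses.get("monthly-spend", "below-limit")
    # Example policy: downgrade best-effort traffic once the limit is hit.
    return 9 if status == "limit-reached" else 8
```

In a real deployment the PCRF would subscribe to status changes for the relevant counters and re-evaluate the affected PCC rules when the OCS reports a transition.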
4.4 Usage Monitoring Control
It shall be possible to apply usage monitoring for the accumulated usage of network resources on a per IP‑CAN session and user basis. This capability is required for enforcing dynamic policy decisions based on the total network usage in real time.

The PCRF that uses usage monitoring for making dynamic policy decisions shall set and send the applicable thresholds to the PCEF or TDF for monitoring. The usage monitoring thresholds shall be based either on time or on volume. The PCRF may send both thresholds to the PCEF or TDF. The PCEF or TDF shall notify the PCRF when a threshold is reached and report the accumulated usage since the last report for usage monitoring. If both time and volume thresholds were provided to the PCEF or TDF, the accumulated usage since the last report shall be reported when either the time or the volume threshold is reached.

NOTE: There are reasons other than reaching a threshold that may cause the PCEF/TDF to report accumulated usage to the PCRF, as defined in clauses 6.2.2.3 and 6.6.2.

The usage monitoring capability shall be possible for an individual service data flow, a group of service data flows, or all traffic of an IP‑CAN session in the PCEF. When usage monitoring for all traffic of an IP‑CAN session is enabled, it shall be possible to exclude an individual SDF or a group of service data flow(s) from the usage monitoring for all traffic of this IP‑CAN session. It shall be possible to activate usage monitoring both for service data flows associated with predefined PCC rules and for dynamic PCC rules, including rules with deferred activation and/or deactivation times while those rules are active.

The usage monitoring capability shall be possible for an individual detected application's traffic, a group of detected applications' traffic, or all detected traffic belonging to a specific TDF session.
When usage monitoring for all traffic of a TDF session is enabled, it shall be possible to exclude an individual application or a group of detected application(s) from the usage monitoring for all traffic belonging to this TDF session. It shall be possible to activate usage monitoring both for predefined ADC rules and for dynamic ADC rules, including rules with deferred activation and/or deactivation times while those rules are active.

If service data flow(s)/application(s) need to be excluded from IP‑CAN/TDF session level usage monitoring, and IP‑CAN/TDF session level usage monitoring is enabled, the PCRF shall be able to provide an indication of exclusion from session level monitoring associated with the respective PCC/ADC rule(s).

It shall be possible to apply different usage monitoring depending on the access used to carry a service data flow. This also applies to a PDN connection supporting NBIFOM. IP‑CAN session level usage monitoring is not dependent on the access used to carry a service data flow.
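The threshold behaviour described above — the PCEF/TDF accumulates usage and reports the accumulated amount to the PCRF when either the time or the volume threshold is reached — can be sketched with a small, hypothetical model. Class and method names are illustrative; real reporting also happens on other triggers, as the NOTE points out.

```python
# Simplified, hypothetical model of PCEF/TDF usage monitoring: accumulate
# volume (bytes) and time (seconds); when either configured threshold is
# reached, return the accumulated usage since the last report and reset.
class UsageMonitor:
    def __init__(self, volume_threshold=None, time_threshold=None):
        self.volume_threshold = volume_threshold   # bytes; None = not set
        self.time_threshold = time_threshold       # seconds; None = not set
        self.volume = 0
        self.time = 0

    def record(self, volume, seconds):
        """Accumulate usage; return (volume, time) to report to the PCRF
        when a threshold is reached, else None."""
        self.volume += volume
        self.time += seconds
        hit = ((self.volume_threshold is not None
                and self.volume >= self.volume_threshold)
               or (self.time_threshold is not None
                   and self.time >= self.time_threshold))
        if hit:
            report = (self.volume, self.time)  # usage since last report
            self.volume = self.time = 0        # reset after reporting
            return report
        return None
```

Note that when both thresholds are set, the report carries both accumulated values even though only one threshold triggered it, matching the "accumulated usage since last report" requirement above.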
4.5 Application Detection and Control
The application detection and control feature comprises the request to detect the specified application traffic, the report to the PCRF on the start or stop of application traffic and the application of the specified enforcement and charging actions. Application detection and control shall be implemented either by the TDF or by the PCEF enhanced with ADC.

Two models may be applied, depending on operator requirements: solicited and unsolicited application reporting. Unsolicited application reporting is only supported by the TDF.

Solicited application reporting: The PCRF shall instruct the TDF, or the PCEF enhanced with ADC, on which applications to detect and whether to report start or stop events to the PCRF by activating the appropriate ADC/PCC rules in the TDF/PCEF enhanced with ADC. In addition, reporting notifications of the start and stop of application detection to the PCRF may be muted per specific ADC/PCC rule. The PCRF may, in a dynamic ADC/PCC rule, instruct the TDF or PCEF enhanced with ADC on what enforcement actions to apply to the detected application traffic. The PCRF may activate application detection only if the user profile configuration allows this.

Unsolicited application reporting: The TDF is pre-configured on which applications to detect and report. The PCRF may enable enforcement in the PCEF based on the service data flow description provided to the PCRF by the TDF. It is assumed that a user profile configuration indicating whether application detection and control can be enabled is not required.

The report to the PCRF shall include the same information for solicited and unsolicited application reporting, that is: whether the report is for start or stop, the detected application identifier and, if deducible, the service data flow descriptions for the detected application traffic.
For the application types where service data flow descriptions are deducible, the Start and Stop of the application may be indicated multiple times, including the application instance identifier, to inform the PCRF about the service data flow descriptions belonging to that application instance. The application instance identifier is dynamically assigned by the TDF or by the PCEF enhanced with ADC in order to allow correlation of application Start and Stop events with the specific service data flow description.

NOTE 1: The reporting to the PCRF on the start or stop of application traffic does not depend on any enforcement action of the ADC/PCC rule. Unless the PCRF muted the reporting for the ADC/PCC rule, every detected start or stop event is reported, even if the application traffic is discarded due to enforcement actions of the ADC/PCC rule.

For the TDF operating in the solicited application reporting model:

- When the TDF cannot provide to the PCRF the service data flow description for the detected applications, the TDF shall perform charging, gating, redirection and bandwidth limitation for the detected applications, as defined in the ADC rule. The existing PCEF functionality remains unchanged.

NOTE 2: Redirection may not be possible for all types of detected application traffic (e.g. this may only be performed on specific HTTP based flows).

- When the TDF provides to the PCRF the service data flow description, the PCRF may take control over the actions resulting from application detection, by applying the charging and policy enforcement per service data flow as defined in this document, or the TDF may perform charging, gating, redirection and bandwidth limitation as described above. It is the PCRF's responsibility to coordinate the PCC rules with the ADC rules in order to ensure consistent service delivery.

Usage monitoring as described in clause 4.4 may be activated in conjunction with application detection and control.
The usage monitoring functionality is only applicable to the solicited application reporting model.

For the TDF, ADC rule based charging is applicable. ADC rule based charging, as described in clause 4.2.2a, may be activated in conjunction with application detection and control. The charging functionality is only applicable to the solicited application reporting model.

In order to avoid charging for the same traffic in both the TDF and the PCEF, this specification supports charging and enforcement implemented in either the PCEF or the TDF for a certain IP‑CAN session, but not in both for the same IP‑CAN session. The ADC rules are used to determine the online and offline characteristics for charging. For offline charging, usage reporting over the Gzn interface shall be used. For online charging, credit management and reporting over the Gyn interface shall be used. The PCEF is in this case not used for charging and enforcement (based on active PCC rules and APN-AMBR configuration), but shall still perform bearer binding based on the active PCC rules.

In order to avoid having traffic that is charged in the TDF later discarded by the policing function in the PCEF, the assumption is that no GBR bearers are required when the TDF is the charging and policy enforcement point. In addition, the DL APN-AMBR in the PCEF shall be configured with such high values that it does not result in discarded packets.

NOTE 3: An example of applicability is that the IMS APN, which would require dynamic PCC rules, would be configured such that PCEF based charging and enforcement is employed, but for a regular internet access APN, the network would be configured such that the TDF performs both charging and enforcement.
NOTE 4: An operator may also apply this solution with both the PCEF and the TDF performing enforcement and charging for a single IP‑CAN session, as long as the network is configured in such a way that the traffic charged and enforced in the PCEF does not overlap with the traffic charged and enforced by the TDF.

NOTE 5: The PCEF may still apply enforcement actions on uplink traffic without impacting the accuracy of the charging information produced by the TDF.

If only charging for a service data flow identified by a PCC rule is required for the corresponding IP‑CAN session, the PCEF performs charging and policy enforcement for the IP‑CAN session. The TDF may be used for application detection, for reporting of application start/stop and for enforcement actions on downlink traffic.
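The Start/Stop reporting with application instance identifiers described in this clause can be sketched as follows. The event layout, class and field names are assumptions for illustration; the specification defines the semantics, not a concrete encoding.

```python
# Hypothetical sketch of solicited application reporting: the TDF (or PCEF
# enhanced with ADC) emits Start/Stop events carrying a dynamically
# assigned application instance identifier, so the PCRF can correlate each
# Stop with the Start for the same service data flow description.
import itertools

class AppDetector:
    def __init__(self):
        self._ids = itertools.count(1)   # instance id generator
        self._instances = {}             # (app_id, flow_desc) -> instance id

    def start(self, app_id, flow_desc):
        """Report detection of application traffic start."""
        inst = next(self._ids)
        self._instances[(app_id, flow_desc)] = inst
        return {"event": "start", "app": app_id,
                "instance": inst, "flow": flow_desc}

    def stop(self, app_id, flow_desc):
        """Report detection of application traffic stop, reusing the
        instance id assigned at start for correlation."""
        inst = self._instances.pop((app_id, flow_desc))
        return {"event": "stop", "app": app_id,
                "instance": inst, "flow": flow_desc}
```

Two concurrent instances of the same application get distinct instance identifiers, which is what lets the PCRF attribute each reported service data flow description to the right instance.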
4.6 RAN user plane congestion detection, reporting and mitigation
It shall be possible to transfer RAN user plane congestion information from the RAN to the Core Network in order to mitigate the congestion by measures selected by the PCRF and applied by the PCEF/TDF/AF. The detailed description of this functionality can be found in TS 23.401 [17] and TS 23.060 [12].
4.7 Support for service capability exposure
It shall be possible to transfer information related to service capability exposure between the PCRF and the AF via an SCEF (see TS 23.682 [42]).
4.8 Traffic Steering Control
Traffic Steering Control refers to the capability to activate/deactivate traffic steering policies from the PCRF in the PCEF, the TDF or the TSSF for the purpose of steering the subscriber's traffic to appropriate operator or 3rd party service functions (e.g. NAT, antimalware, parental control, DDoS protection) in the (S)Gi-LAN. The traffic steering control is supported in non-roaming and home-routed scenarios only.
4.9 Management of Packet Flow Descriptions in the PCEF/TDF using the PFDF
Management of Packet Flow Descriptions in the PCEF/TDF using the PFDF refers to the capability to create, update or remove PFDs in the PFDF via the SCEF (as described in TS 23.682 [42]) and the distribution from the PFDF to the PCEF or the TDF or both. This feature may be used when the PCEF or the TDF is configured to detect a particular application provided by an ASP. NOTE 1: A possible scenario for the management of PFDs in the PCEF/TDF is when an application, identified by an application detection filter in the PCEF/TDF, deploys a new server or reconfiguration at the ASP network occurs which impacts the application detection filters of that particular application. NOTE 2: The management of application detection filters in the PCEF/TDF can still be performed by using operation and maintenance procedures. NOTE 3: This feature aims for both: to enable accurate application detection at the PCEF and at the TDF and to minimize storage requirements for the PCEF and the TDF. The management of Packet Flow Descriptions is supported in non-roaming and home-routed scenarios for those ASPs that have a business relation with the home operator.
5 Architecture model and reference points
5.1 Reference architecture
The PCC functionality is comprised of the functions of the Policy and Charging Enforcement Function (PCEF), the Bearer Binding and Event Reporting Function (BBERF), the Policy and Charging Rules Function (PCRF), the Application Function (AF), the Traffic Detection Function (TDF), the Traffic Steering Support Function (TSSF), the Online Charging System (OCS), the Offline Charging System (OFCS) and the Subscription Profile Repository (SPR) or the User Data Repository (UDR).

The UDR replaces the SPR when the UDC architecture, as defined in TS 23.335 [25], is applied to store PCC related subscription data. In this deployment scenario the Ud interface between the PCRF and the UDR is used to access subscription data in the UDR.

NOTE 1: When the UDC architecture is used, SPR and Sp, whenever mentioned in this document, can be replaced by UDR and Ud.

The PCRF can receive RAN User Plane Congestion Information from the RAN Congestion Awareness Function (RCAF).

The PCC architecture extends the architecture of an IP‑CAN, where the Policy and Charging Enforcement Function is a functional entity in the Gateway node implementing the IP access to the PDN. The allocation of the Bearer Binding and Event Reporting Function is specific to each IP‑CAN type and specified in the corresponding Annex. The non-3GPP network's relation to the PLMN is the same as defined in TS 23.402 [18].

Figure 5.1-1: Overall PCC logical architecture (non-roaming) when SPR is used

Figure 5.1-2: Overall PCC logical architecture (non-roaming) when UDR is used

Figure 5.1-3: Overall PCC architecture (roaming with home routed access) when SPR is used

Figure 5.1-4: Overall PCC architecture for roaming with PCEF in visited network (local breakout) when SPR is used

NOTE 2: Similar figures for the roaming cases apply when UDR is used instead of SPR and Ud instead of Sp.

NOTE 3: The PCEF may be enhanced with the application detection and control feature.
NOTE 4: In general, Gy and Gyn do not apply for the same IP‑CAN session, and Gz and Gzn likewise do not apply for the same IP‑CAN session. For the description of the case where simultaneous reports apply, refer to clause 4.5.

NOTE 5: The RCAF also supports the Nq/Nq' interfaces for E-UTRAN and UTRAN, as specified in TS 23.401 [17] and TS 23.060 [12], respectively.

NOTE 6: Use of the TSSF in roaming scenarios is in this release only specified for the home routed access case.

NOTE 7: The SCEF acts as an AF (using Rx) in some service capability exposure use cases, as described in TS 23.682 [42].

NOTE 8: The Gw and Gwn interfaces are not supported in the roaming scenario with the PCEF/TDF in the visited network.
5.2 Reference points
5.2.1 Rx reference point
The Rx reference point resides between the AF and the PCRF.

NOTE 1: The AF may be a third party application server.

This reference point enables transport of application level session information from the AF to the PCRF. Such information includes, but is not limited to:

- IP filter information to identify the service data flow for policy control and/or differentiated charging;

- media/application bandwidth requirements for QoS control;

- in addition, for sponsored data connectivity:

- the sponsor's identification;

- optionally, a usage threshold and whether the PCRF reports these events to the AF;

- information identifying the application service provider and application (e.g. SDFs, application identifier, etc.).

The Rx reference point enables the AF subscription to notifications on IP‑CAN bearer level events (e.g. signalling path status of the AF session) in the IP‑CAN.

In order to mitigate RAN user plane congestion, the Rx reference point enables transport of the following information from the PCRF to the AF:

- Re-try interval, which indicates when service delivery may be retried on Rx.

NOTE 2: Additionally, the existing bandwidth limitation parameters on the Rx interface during Rx session establishment are available in order to mitigate RAN user plane congestion.
5.2.2 Gx reference point
The Gx reference point resides between the PCEF and the PCRF. The Gx reference point enables the PCRF to have dynamic control over the PCC behaviour at a PCEF. The Gx reference point enables the signalling of PCC decisions, which govern the PCC behaviour, and it supports the following functions:

- establishment of the Gx session (corresponding to an IP‑CAN session) by the PCEF;

- request for a PCC decision from the PCEF to the PCRF;

- provision of IP flow mobility routing information from the PCEF to the PCRF; this applies only when IP flow mobility as defined in TS 23.261 [23] is supported;

- provision of the PCC decision from the PCRF to the PCEF;

- reporting of the start and the stop of detected applications and transfer of service data flow descriptions and application instance identifiers for detected applications from the PCEF to the PCRF;

- reporting of the accumulated usage of network resources on a per IP‑CAN session basis from the PCEF to the PCRF;

- delivery of IP‑CAN session specific parameters from the PCEF to the PCRF or, if Gxx is deployed, from the PCRF to the PCEF per corresponding request;

- negotiation of the IP‑CAN bearer establishment mode (UE-only or UE/NW);

- termination of the Gx session (corresponding to an IP‑CAN session) by the PCEF or the PCRF.

NOTE: The PCRF decision to terminate a Gx session is based on operator policies. It should only occur in rare situations (e.g. the removal of a UE subscription) to avoid service interruption due to the termination of the IP‑CAN session.

The information contained in a PCC rule is defined in clause 6.3.
5.2.3 Reference points to subscriber databases
5.2.3.1 Sp reference point
The Sp reference point lies between the SPR and the PCRF. The Sp reference point allows the PCRF to request subscription information related to the IP‑CAN transport level policies from the SPR based on a subscriber ID, a PDN identifier and possible further IP‑CAN session attributes, see Annex A and Annex D. For example, the subscriber ID can be IMSI. The reference point allows the SPR to notify the PCRF when the subscription information has been changed if the PCRF has requested such notifications. The SPR shall stop sending the updated subscription information when a cancellation notification request has been received from the PCRF. NOTE: The details associated with the Sp reference point are not specified in this Release.
5.2.3.2 Ud reference point
The Ud reference point resides between the UDR and the PCRF, acting as an Application Frontend as defined in TS 23.335 [25]. It is used by the PCRF to access PCC related subscription data when stored in the UDR. The details for this reference point are described in TS 23.335 [25] and TS 29.335 [26].
5.2.4 Gy reference point
The Gy reference point resides between the OCS and the PCEF. The Gy reference point allows online credit control for service data flow based charging. The functionalities required across the Gy reference point are defined in TS 32.251 [9] and are based on RFC 4006 [4].
5.2.5 Gz reference point
The Gz reference point resides between the PCEF and the OFCS. The Gz reference point enables transport of service data flow based offline charging information. The Gz interface is specified in TS 32.240 [3].
5.2.6 S9 reference point
The S9 reference point resides between a PCRF in the HPLMN (H‑PCRF) and a PCRF in the VPLMN (V‑PCRF). For roaming with a visited access (PCEF and, if applicable, BBERF in the visited network), the S9 reference point enables the H‑PCRF to (via the V‑PCRF):
- have dynamic PCC control, including the PCEF and, if applicable, BBERF and, if applicable, TDF, in the VPLMN;
- deliver or receive IP‑CAN-specific parameters from both the PCEF and, if applicable, BBERF, in the VPLMN;
- serve Rx authorizations and event subscriptions from an AF in the VPLMN;
- receive the application identifier, service data flow descriptions (if available), application instance identifiers (if available) and application detection start/stop event trigger reports.

For roaming with a home routed access, the S9 reference point enables the H‑PCRF to provide dynamic QoS control policies from the HPLMN, via a V‑PCRF, to a BBERF in the VPLMN.
5.2.7 Gxx reference point
The Gxx reference point resides between the PCRF and the BBERF. This reference point corresponds to Gxa and Gxc, as defined in TS 23.402 [18] and further detailed in the annexes. The Gxx reference point enables a PCRF to have dynamic control over the BBERF behaviour. The Gxx reference point enables the signalling of QoS control decisions and it supports the following functions:
- Establishment of Gxx session by the BBERF;
- Termination of Gxx session by the BBERF or the PCRF;
- Establishment of Gateway Control Session by the BBERF;
- Termination of Gateway Control Session by the BBERF or the PCRF;
- Request for QoS decision from the BBERF to the PCRF;
- Provision of QoS decision from the PCRF to the BBERF;
- Delivery of IP‑CAN-specific parameters from the PCRF to the BBERF or from the BBERF to the PCRF;
- Negotiation of IP‑CAN bearer establishment mode (UE-only and UE/NW).

A QoS control decision consists of zero or more QoS rule(s) and IP‑CAN attributes. The information contained in a QoS rule is defined in clause 6.5.

NOTE: The Gxx session serves as a channel for communication between the BBERF and the PCRF. A Gateway Control Session utilizes the Gxx session and operates as defined in TS 23.402 [18], which includes both the alternatives as defined by cases 2a and 2b in clause 7.1.
5.2.8 Sd reference point
The Sd reference point resides between the PCRF and the TDF. The Sd reference point enables a PCRF to have dynamic control over the application detection and control behaviour at a TDF. The Sd reference point enables the signalling of ADC decisions, which govern the ADC behaviour, and it supports the following functions:
1. Establishment of an Sd session between the PCRF and the TDF;
2. Termination of an Sd session between the PCRF and the TDF;
3. Provision of ADC decisions from the PCRF for the purpose of application traffic detection, enforcement and charging at the TDF;
4. Request for an ADC decision from the TDF to the PCRF;
5. Reporting of the start and the stop of detected applications, and transfer of service data flow descriptions and application instance identifiers for detected applications, from the TDF to the PCRF;
6. Reporting of the accumulated usage of network resources on a per TDF session basis from the TDF to the PCRF;
7. Request and delivery of IP‑CAN session specific parameters between the PCRF and the TDF.

While functions 1-7 are relevant for solicited application reporting, only functions 1, 2 and 5 are relevant for unsolicited application reporting.

When Sd is used for traffic steering control only, the following function is supported:
- Provision of ADC rules from the PCRF for the purpose of application traffic detection and traffic steering control.

The information contained in an ADC rule is defined in clause 6.8.
5.2.9 Sy reference point
The Sy reference point resides between the PCRF and the OCS. The Sy reference point enables transfer of policy counter status information relating to subscriber spending from the OCS to the PCRF and supports the following functions:
- Request for reporting of policy counter status information from the PCRF to the OCS, and subscription to or unsubscription from spending limit reports (i.e. notifications of policy counter status changes).
- Report of policy counter status information upon a PCRF request from the OCS to the PCRF.
- Notification of spending limit reports from the OCS to the PCRF.
- Cancellation of spending limit reporting from the PCRF to the OCS.

Since the Sy reference point resides between the PCRF and the OCS in the HPLMN, roaming with home routed or visited access as well as non-roaming scenarios are supported in the same manner.
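The subscribe/report/notify/cancel functions above can be sketched as a minimal publish-subscribe exchange. This is an illustrative sketch only; class and method names (`Ocs`, `subscribe`, `update_status`) and the status strings are assumptions, not Sy protocol elements (which are specified in stage 3).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Ocs:
    counters: Dict[str, str]                                   # policy counter id -> status
    subscribers: Dict[str, List[Callable]] = field(default_factory=dict)

    def subscribe(self, counter_id: str, notify: Callable) -> str:
        """PCRF requests reporting; the OCS returns the current counter status."""
        self.subscribers.setdefault(counter_id, []).append(notify)
        return self.counters[counter_id]

    def unsubscribe(self, counter_id: str) -> None:
        """PCRF cancels spending limit reporting for this counter."""
        self.subscribers.pop(counter_id, None)

    def update_status(self, counter_id: str, status: str) -> None:
        """A status change on the OCS side triggers a spending limit report."""
        self.counters[counter_id] = status
        for notify in self.subscribers.get(counter_id, []):
            notify(counter_id, status)

reports = []
ocs = Ocs(counters={"monthly-data": "valid"})
initial_status = ocs.subscribe("monthly-data", lambda c, s: reports.append((c, s)))
ocs.update_status("monthly-data", "exhausted")   # spending limit reached -> report to PCRF
```

The PCRF would react to such a report by re-evaluating its policy decisions for the affected subscriber.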
5.2.10 Gyn reference point
The Gyn reference point resides between the OCS and the TDF. The Gyn reference point allows online credit control for charging in the case of ADC rule based charging in the TDF. The functionalities required across the Gyn reference point are defined in TS 32.251 [9] and are based on RFC 4006 [4].
5.2.11 Gzn reference point
The Gzn reference point resides between the TDF and the OFCS. The Gzn reference point enables transport of offline charging information in case of ADC rule based charging in TDF. The Gzn interface is specified in TS 32.240 [3].
5.2.12 Np reference point
The Np reference point resides between the RCAF and the PCRF. The Np reference point enables transport of RAN User Plane Congestion Information (RUCI) sent from the RCAF to the PCRF for all or selected subscribers, depending on the operator's congestion mitigation policy. The Np reference point supports the following functions:
- Reporting of RUCI from the RCAF to the PCRF.
- Sending, updating and removal of the reporting restrictions from the PCRF to the RCAF, as defined in clause 6.1.15.2.
5.2.13 Nt reference point
The Nt reference point enables the negotiation between the SCEF and the PCRF about the recommended time window(s) and the related conditions for future background data transfer. The SCEF is triggered by an SCS/AS (as described in TS 23.682 [42]) which requests this negotiation and provides the necessary information to the SCEF. The SCEF forwards the information received from the SCS/AS to the PCRF, as well as the information received from the PCRF to the SCS/AS. Whenever the SCEF contacts the PCRF, the PCRF shall use the information provided by the SCS/AS via the SCEF to determine the policies belonging to the application service provider (ASP).

NOTE: This interaction between the SCEF and the PCRF over the Nt reference point is not related to any IP-CAN session.
5.2.14 St reference point
The St reference point resides between the TSSF and the PCRF. The St reference point enables the PCRF to provide traffic steering control information to the TSSF. The St reference point supports the following functions:
- Provision, modification and removal of traffic steering control information from the PCRF to the TSSF.
5.2.15 Nu reference point
The Nu reference point resides between the SCEF and the PFDF and enables the 3rd party service provider to manage PFDs in the PFDF as specified in TS 23.682 [42].
5.2.16 Gw reference point
The Gw reference point resides between the PFDF and the PCEF. The Gw reference point enables transport of PFDs from the PFDF to the PCEF for a particular Application Identifier or for a set of Application Identifiers. The Gw reference point supports the following functions:
- Creation, updating and removal of individual PFDs or the whole set of PFDs from the PFDF to the PCEF.
- Confirmation of creation, updating and removal of PFDs from the PCEF to the PFDF.

NOTE: The interaction between the PCEF and the PFDF is not related to any IP-CAN session.
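The provision-and-confirm pattern above can be sketched as a small per-application PFD store on the PCEF side. This is a hedged illustration, not the Gw protocol: the class name, the PFD dictionary shape and the confirmation strings are all assumptions.

```python
class PcefPfdStore:
    """Illustrative PCEF-side store of PFDs, keyed by Application Identifier."""

    def __init__(self):
        self._pfds = {}   # application id -> list of PFDs

    def provision(self, app_id, pfds):
        """Create or fully replace the PFD set for one Application Identifier."""
        self._pfds[app_id] = list(pfds)
        return "confirmed"               # confirmation back to the PFDF

    def remove(self, app_id):
        """Remove the whole PFD set for an Application Identifier."""
        if app_id in self._pfds:
            del self._pfds[app_id]
            return "confirmed"
        return "unknown-application"     # nothing to remove; report back

    def lookup(self, app_id):
        return self._pfds.get(app_id, [])

store = PcefPfdStore()
ack = store.provision("app-1", [{"flow": "permit out tcp to 203.0.113.0/24 443"}])
```

An update is modelled here as a full replacement of the set, matching the "individual or the whole set" wording above in spirit.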
5.2.17 Gwn reference point
The Gwn reference point resides between the PFDF and the TDF. The Gwn reference point enables transport of PFDs from the PFDF to the TDF for a particular Application Identifier or for a set of Application Identifiers. The Gwn reference point supports the following functions:
- Creation, updating and removal of individual PFDs or the whole set of PFDs from the PFDF to the TDF.
- Confirmation of creation, updating and removal of PFDs from the TDF to the PFDF.

NOTE: The interaction between the TDF and the PFDF is not related to any IP-CAN session.
6 Functional description
6.1 Overall description
6.1.0 General
The PCC architecture works on a service data flow level. The PCC architecture provides the functions for policy and charging control as well as event reporting for service data flows.
6.1.1 Binding mechanism
6.1.1.1 General
The binding mechanism is the procedure that associates a service data flow (defined in a PCC rule and a QoS rule, if applicable, by means of the SDF template) to the IP‑CAN bearer deemed to transport the service data flow. For service data flows belonging to AF sessions, the binding mechanism shall also associate the AF session information with the IP‑CAN bearer that is selected to carry the service data flow.

NOTE 1: The relation between AF sessions and rules depends only on the operator configuration. An AF session can be covered by one or more PCC and QoS rules, if applicable (e.g. one rule per media component of an IMS session). Alternatively, a rule could comprise multiple AF sessions.

NOTE 2: The PCRF may authorize dynamic PCC rules for service data flows without a corresponding AF session. Such PCC rules may be statically configured at the PCRF or dynamically filled with the UE provided traffic mapping information.

NOTE 3: For PCC rules with an application identifier and for certain IP-CAN types, up-link traffic may be received on other/additional IP-CAN bearers than the one determined by the binding mechanism (further details provided in clause 6.2.2.2 and the IP-CAN specific annexes).

The binding mechanism creates bindings. The algorithm employed by the binding mechanism may contain elements specific to the kind of IP‑CAN. The binding mechanism includes three steps:
1. Session binding.
2. PCC rule authorization and QoS rule generation, if applicable.
3. Bearer binding.
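The three steps above can be sketched end to end as a small pipeline. This is a hedged, simplified illustration: the dictionary shapes, the single-key session match and the fixed rule construction are assumptions standing in for the full procedures of clauses 6.1.1.2-6.1.1.4.

```python
def bind(af_session, ip_can_sessions, bearers_of):
    """Associate AF session info with a session, a rule and a bearer."""
    # Step 1: session binding - AF session -> one and only one IP-CAN session.
    session = next((s for s in ip_can_sessions
                    if s["ue_ip"] == af_session["ue_ip"]), None)
    if session is None:
        return None, None, None
    # Step 2: PCC rule authorization - select QoS parameters for the rule.
    rule = {"sdf": af_session["sdf"],
            "qci": af_session["qci"], "arp": af_session["arp"]}
    # Step 3: bearer binding - rule -> bearer with matching QCI and ARP.
    bearer = next((b for b in bearers_of(session)
                   if b["qci"] == rule["qci"] and b["arp"] == rule["arp"]),
                  None)
    return session, rule, bearer

sessions = [{"id": "s1", "ue_ip": "198.51.100.7"}]
bearers = {"s1": [{"qci": 5, "arp": 1}, {"qci": 9, "arp": 1}]}
af = {"ue_ip": "198.51.100.7", "sdf": "sip-signalling", "qci": 5, "arp": 1}
session, rule, bearer = bind(af, sessions, lambda s: bearers[s["id"]])
```

In the real architecture step 3 may also establish a new bearer when none matches; the sketch simply returns `None` in that case.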
6.1.1.2 Session binding
Session binding is the association of the AF session information to one and only one IP‑CAN session. The PCRF shall perform the session binding, which shall take the following IP‑CAN parameters into account:
a) The UE IPv4 address and/or IPv6 network prefix;
b) The UE identity (of the same kind), if present;

NOTE 1: In case the UE identity in the IP‑CAN and the application level identity for the user are of different kinds, the PCRF needs to maintain, or have access to, the mapping between the identities. Such mapping is not subject to specification within this TS.

c) The information about the packet data network (PDN) the user is accessing, if present.

For an IP-CAN session to the dedicated APN for UE-to-Network Relay connectivity (as defined in TS 23.303 [44]) and using IPv6 prefix delegation (i.e. the assigned IPv6 network prefix is shorter than 64), the PCRF shall perform session binding based on the IPv6 network prefix only. A successful session binding occurs whenever a longer prefix received from an AF matches the prefix value of the IP-CAN session. The PCRF shall not use the UE identity for session binding for this IP-CAN session.

NOTE 2: For UE-to-Network Relay connectivity, the UE identity that the PCEF has provided (i.e. UE-to-Network Relay UE Identity) and a UE identity provided by the AF (i.e. Remote UE Identity) can be different, while the binding with the IP-CAN session is valid.

NOTE 3: In this Release of the specification the support for policy control of Remote UEs behind a ProSe UE-Network Relay using IPv4 is not available.

The PCRF shall identify the PCC rules affected by the AF session information, including new rules to be installed and existing rules to be modified or removed.
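The prefix-only binding described for the UE-to-Network Relay case can be sketched with the standard library's `ipaddress` module: a longer prefix received from the AF binds to the IP-CAN session whose delegated (shorter) prefix contains it. The session identifiers and prefixes are hypothetical example data.

```python
import ipaddress

def bind_by_prefix(af_prefix: str, session_prefixes: dict):
    """Return the id of the IP-CAN session whose delegated prefix covers af_prefix."""
    af_net = ipaddress.ip_network(af_prefix)
    for session_id, delegated in session_prefixes.items():
        delegated_net = ipaddress.ip_network(delegated)
        if af_net.subnet_of(delegated_net):   # longer prefix inside the shorter one
            return session_id
    return None                               # no session binds -> binding fails

# Two IP-CAN sessions, each with a delegated /56 (shorter than 64).
sessions = {"ipcan-1": "2001:db8:1::/56", "ipcan-2": "2001:db8:2::/56"}
```

A Remote UE's /64 taken from the first delegation, e.g. `2001:db8:1:7::/64`, binds to `ipcan-1`; a prefix outside both delegations binds to nothing.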
6.1.1.3 PCC rule authorization and QoS rule generation
PCC rule authorization is the selection of the QoS parameters (QCI, ARP, GBR, MBR, etc.) for the PCC rules. The PCRF shall perform the PCC rule authorization for complete dynamic PCC rules belonging to AF sessions that have been selected in step 1, as described in clause 6.1.1.2, as well as for PCC rules without corresponding AF sessions. Based on AF instructions (as described in clause 6.1.5), dynamic PCC rules can be authorized even if they are not complete (e.g. due to missing service information regarding QoS or traffic filter parameters).

The PCC rule authorization depends on the IP‑CAN bearer establishment mode of the IP‑CAN session and the mode (UE or NW) of the PCC rule:
- In UE/NW bearer establishment mode, the PCRF shall perform the authorization for all PCC rules that are to be handled in NW mode.
- In UE/NW bearer establishment mode, for PCC rules that are to be handled in UE mode, or when in UE-only bearer establishment mode, the PCRF shall first identify the PCC rules that correspond to a UE resource request and authorize only these.

The PCRF shall compare the traffic mapping information of the UE resource request with the service data flow filter information of the services that are allowed for the user. Each part of the traffic mapping information shall be evaluated separately in the order of its related precedence. Any matching service data flow filter leads to an authorization of the corresponding PCC rule for the UE resource request, unless the PCC rule is already authorized for more specific traffic mapping information or the PCC rule cannot be authorized for the QCI that is related to the UE resource request (the details are described in the next paragraph). Since a PCC rule can contain multiple service data flow filters, the PCRF shall ensure that a service data flow is only authorized for a single UE resource request.
NOTE 1: For example, a PCC rule containing multiple service data flow filters that match traffic mapping information of different UE resource requests could be segmented by the PCRF according to the different matching traffic mapping information. Afterwards, the PCRF can authorize the different PCC rules individually.

The PCRF knows whether a PCC rule can be authorized for a single QCI only or for a set of QCIs (based on SPR information or local configuration). If the processing of the traffic mapping information would lead to an authorization of a PCC rule, the PCRF shall also check whether the PCC rule can be authorized for the QCI that is related to the UE resource request containing the traffic mapping information. If the PCC rule cannot be authorized for this QCI, the PCRF shall reject the traffic mapping information unless otherwise stated in an access-specific annex.

If there is any traffic mapping information not matching any service data flow filter known to the PCRF, and the UE is allowed to request enhanced QoS for traffic not belonging to operator-controlled services, the PCRF shall authorize this traffic mapping information by adding the respective service data flow filter to a new or existing PCC rule. If the PCRF received an SDF filter identifier together with this traffic mapping information, the PCRF shall modify the existing PCC rule if the PCC rule is authorized for a GBR QCI.

NOTE 2: If the PCC rule is authorized for a non-GBR QCI, the PCRF may either create a new PCC rule or modify the existing PCC rule. The PCC rule that needs to be modified can be identified by the service data flow filter that the SDF filter identifier refers to.

The requested QoS shall be checked against the subscription limitations for traffic not belonging to operator-controlled services.
If the PCRF needs to perform the authorization based on incomplete service information, and thus cannot associate a PCC rule with a single IP‑CAN bearer, the PCRF shall generate for the affected service data flow an individual PCC rule per IP‑CAN bearer that could carry that service data flow. Once the PCRF receives the complete service information, the PCC rule on the IP‑CAN bearer with the matching traffic mapping information shall be updated according to the service information. Any other PCC rule(s) previously generated for the same service data flow shall be removed by the PCRF.

NOTE 3: This is required to enable the successful activation or modification of IP‑CAN bearers before knowing the intended use of the IP‑CAN bearers to carry the service data flow(s).

For an IP‑CAN where the PCRF gains no information about the uplink IP flows (i.e. the UE provided traffic mapping information contains no information about the uplink IP flows), the binding mechanism shall assume that, for bi-directional service data flows, both downlink and uplink packets travel on the same IP‑CAN bearer.

Whenever the service data flow template or the UE provided traffic mapping information changes, the existing authorizations shall be re-evaluated, i.e. the authorization procedure specified in this clause is performed. The re-evaluation may, for a service data flow, require a new authorization for different UE provided mapping information.

Based on PCRF configuration or AF instructions (as described in clause 6.1.5), dynamic PCC rules may have to be first authorized for the default QCI/default bearer (i.e. the bearer without UE provided traffic mapping information) until a corresponding UE resource request occurs.

NOTE 4: This is required to enable services that start before dedicated resources are allocated.

A PCC rule for a service data flow that is a candidate for vSRVCC according to TS 23.216 [28] shall have the PS to CS session continuity indicator set.
For the authorization of a PCC rule the PCRF shall take into account the IP‑CAN specific restrictions and other information available to the PCRF. Each PCC rule receives a set of QoS parameters that can be supported by the IP‑CAN.

The authorization of a PCC rule associated with an emergency service or Restricted Local Operator Services shall be supported without subscription information (e.g. information stored in the SPR). The PCRF shall apply the policies configured for the emergency service and Restricted Local Operator Services.

When both a Gx and associated Gxx interface(s) exist for an IP‑CAN session, the PCRF shall generate QoS rules for all the authorized PCC rules in this step. The PCRF shall ensure consistency between the QoS rules and PCC rules authorized for the same service data flow when QoS rules are derived from corresponding PCC rules.

When flow mobility applies for the IP-CAN session, one IP‑CAN session may be associated with multiple Gateway Control Sessions with separate BBERFs. In this case, the PCRF shall provision QoS rules only to the appropriate BBERF, based on the IP flow mobility routing rules received from the PCEF.
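The precedence-ordered matching of UE-provided traffic mapping information against service data flow filters, including the per-request QCI check, can be sketched as follows. All field names and filter strings are illustrative assumptions; the real filter comparison is far richer than simple string membership.

```python
def authorize(traffic_mappings, pcc_rules, requested_qci):
    """Return ids of PCC rules authorized for one UE resource request."""
    authorized = []
    # Evaluate each part of the traffic mapping info in precedence order.
    for tm in sorted(traffic_mappings, key=lambda t: t["precedence"]):
        for rule in pcc_rules:
            if tm["filter"] not in rule["sdf_filters"]:
                continue                       # no matching SDF filter
            if requested_qci not in rule["allowed_qcis"]:
                continue                       # rule not authorizable for this QCI
            if rule["id"] not in authorized:   # a flow binds to one request only
                authorized.append(rule["id"])
    return authorized

rules = [
    {"id": "voice", "sdf_filters": {"udp/5060"}, "allowed_qcis": {1, 5}},
    {"id": "web", "sdf_filters": {"tcp/80", "tcp/443"}, "allowed_qcis": {9}},
]
tms = [{"precedence": 1, "filter": "tcp/443"},
       {"precedence": 2, "filter": "udp/5060"}]
```

A request for QCI 9 would authorize only the `web` rule here; the `voice` rule's traffic mapping is rejected because that rule cannot be authorized for QCI 9.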
6.1.1.4 Bearer Binding
Bearer binding is the association of the PCC rule and the QoS rule (if applicable) to an IP‑CAN bearer within that IP‑CAN session. This function resides in the Bearer Binding Function (BBF).

The Bearer Binding Function is located either at the BBERF or at the PCEF, depending on the architecture (see clause 5.1). The BBF is located at the PCEF if GTP is used as the mobility protocol towards the PCEF; otherwise, the BBF is located at the BBERF. The Bearer Binding Function may also be located in the PCRF as specified in Annex A and Annex D (e.g. for GPRS running UE-only IP‑CAN bearer establishment mode).

NOTE 1: For an IP‑CAN limited to a single IP‑CAN bearer per IP‑CAN session, the bearer is implicit, so finding the IP‑CAN session is sufficient for successful binding.

For an IP‑CAN which allows for multiple IP‑CAN bearers for each IP‑CAN session, the binding mechanism shall use the QoS parameters of the existing IP‑CAN bearers to create the bearer binding for a rule, in addition to the PCC rule and the QoS rule (if applicable) authorized in the previous step. The set of QoS parameters assigned in step 2, as described in clause 6.1.1.3, to the service data flow is the main input for bearer binding. The BBF should not use the same bearer for rules with different settings of the PS to CS session continuity indicator.

NOTE 2: When NBIFOM applies for the IP-CAN session, additional information has to be taken into account as described in clause 6.1.18.1.

The BBF shall evaluate whether it is possible to use one of the existing IP‑CAN bearers and whether to initiate IP‑CAN bearer modification, if applicable. If none of the existing bearers can be used, the BBF should initiate the establishment of a suitable IP‑CAN bearer. The binding is created between service data flow(s) and the IP‑CAN bearer which has the same QoS class identifier and ARP.

NOTE 3: The handling of a rule with MBR>GBR is up to operator policy (e.g. an independent IP‑CAN bearer may be maintained for that SDF to prevent unfairness between competing SDFs).

Requirements specific to each type of IP‑CAN are defined in the IP‑CAN specific annex. Whenever the QoS authorization of a PCC/QoS rule changes, the existing bindings shall be re-evaluated, i.e. the bearer binding procedure specified in this clause is performed. The re-evaluation may, for a service data flow, require a new binding with another IP‑CAN bearer. The BBF should, if the PCRF requests the same change to the ARP/QCI value for all PCC/QoS rules with the bearer binding to the same bearer, modify the bearer ARP/QCI value as requested.

NOTE 4: A QoS change of the default EPS bearer causes the bearer binding for PCC/QoS rules previously bound to the default EPS bearer to be re-evaluated. At the end of the re-evaluation of the PCC/QoS rules of the IP-CAN session, there needs to be at least one PCC rule that successfully binds with the default bearer.
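The core bearer binding rule, reuse an existing bearer with the same QCI and ARP, otherwise establish a new one, can be sketched in a few lines. The data shapes are assumptions; real bearer establishment is an IP-CAN specific signalling procedure, represented here only by appending to a list.

```python
def bind_bearer(bearers, qci, arp):
    """Bind a service data flow to a bearer with matching QCI and ARP."""
    for bearer in bearers:
        if bearer["qci"] == qci and bearer["arp"] == arp:
            return bearer                      # reuse the existing bearer
    new_bearer = {"qci": qci, "arp": arp}      # stand-in for bearer establishment
    bearers.append(new_bearer)
    return new_bearer

session_bearers = [{"qci": 9, "arp": 1}]       # e.g. the default bearer
reused = bind_bearer(session_bearers, 9, 1)    # matches -> reused
created = bind_bearer(session_bearers, 1, 2)   # no match -> new dedicated bearer
```

A re-evaluation after a QoS authorization change would simply run the same matching again, possibly moving the flow to another bearer.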
6.1.2 Reporting
Reporting refers to the differentiated IP‑CAN resource usage information (measured at the PCEF/TDF) being reported to the online or offline charging functions.

NOTE 1: Reporting usage information to the online charging function is distinct from credit management. Hence multiple PCC/ADC rules may share the same charging key, for which one credit is assigned, whereas reporting may be at a higher granularity if service identifier level reporting is used.

The PCEF/TDF shall report usage information for online and offline charging. The PCEF/TDF shall report usage information for each charging key value.

For service data flow charging, in the case of sponsored data connectivity, the reports for offline charging shall report usage for each charging key, Sponsor Identity and Application Service Provider Identity combination if Sponsor Identity and Application Service Provider Identity have been provided in the PCC rules.

NOTE 2: Usage reports for online charging that include Sponsor Identity and Application Service Provider Identity are not within the scope of the specification in this release. Online charging for sponsored data connectivity can be based on the charging key as described in Annex N.

The PCEF shall report usage information for each charging key/service identifier combination if service identifier level reporting is requested in the PCC/ADC rule.

NOTE 3: For reporting purposes when charging is performed by the PCEF: a) the charging key value identifies a service data flow if the charging key value is unique for that particular service data flow; and b) if service identifier level reporting is present, then the service identifier value of the PCC rule together with the charging key identify the service data flow.

The TDF shall report usage information for each charging key/service identifier combination if service identifier level reporting is requested in the ADC rule.
NOTE 4: For reporting purposes in case charging is performed by the TDF: a) the charging key value identifies an application if the charging key value is unique for that application identified by the ADC rule; and b) if service identifier level reporting is present, then the service identifier value of the ADC rule together with the charging key identify the application.

NOTE 5: If the operator applies this solution with both the PCEF and the TDF performing enforcement and charging for a single IP-CAN session, the PCRF is recommended to use different charging keys for the PCEF and for the TDF.

For the case where the BBF is located in the PCEF, charging information shall be reported based on the result from the service data flow detection and measurement on a per IP‑CAN bearer basis. For the case where the BBF is not located in the PCEF, charging information shall be reported based on the result from the service data flow detection and measurement, separately per QCI and ARP combination (used by any of the active PCC rules). In case 2a defined in clause 7.1, the charging ID is provided to the BBERF via the PCRF if charging correlation is needed.

A report may contain multiple containers, each container associated with a charging key, a charging key and Sponsor Identity (in the case of sponsored connectivity), or a charging key/service identifier.
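The container granularity described above, per charging key, or per charging key/service identifier when service identifier level reporting applies, can be sketched as a simple aggregation. The tuple layout and volume counting are assumptions for illustration; real containers carry much more (time stamps, rating groups, etc.).

```python
from collections import defaultdict

def build_report(measurements):
    """Aggregate usage into containers keyed per charging key,
    or per charging key/service identifier when one is present.
    Each measurement: (charging_key, service_id or None, byte_count)."""
    containers = defaultdict(int)
    for charging_key, service_id, volume in measurements:
        key = (charging_key, service_id) if service_id is not None else (charging_key,)
        containers[key] += volume
    return dict(containers)

report = build_report([
    (10, None, 500),     # charging-key-only reporting
    (10, None, 300),     # accumulated into the same container
    (10, "svc-a", 100),  # service identifier level reporting -> own container
])
```

Two rules sharing charging key 10 without service identifier reporting end up in one container, while the service-identified flow gets its own, mirroring NOTE 1 above.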
6.1.3 Credit management
The credit management applies for online charging only and shall operate on a per charging key basis. The PCEF should initiate one credit management session with the OCS for each IP‑CAN session subject to online charging, unless specified otherwise in an IP‑CAN specific annex. Alternatively, the PCEF may initiate one credit management session for each IP‑CAN bearer as defined in the applicable annex. The TDF should initiate one credit management session with the OCS for each TDF session subject to online charging.

NOTE 1: Independent credit control for an individual service/application may be achieved by assigning a unique charging key value in the corresponding PCC/ADC rule.

The PCEF/TDF shall request a credit for each charging key occurring in a PCC/ADC rule. It shall be up to operator configuration whether the PCEF/TDF shall request credit in conjunction with the PCC/ADC rule being activated or when the first packet corresponding to the service/application is detected. The OCS may either grant or deny the request for credit. The OCS shall strictly control the rating decisions.

NOTE 2: The term 'credit' as used here does not imply actual monetary credit, but an abstract measure of resources available to the user. The relationship between this abstract measure, actual money and actual network resources or data transfer is controlled by the OCS.

During IP‑CAN session establishment and modification, the PCEF shall request credit using the information after applying the policy enforcement action (e.g. upgraded or downgraded QoS information), if applicable, even though the PCEF has not signalled it yet in the IP‑CAN.

It shall be possible for the OCS to form a credit pool for multiple (one or more) charging keys, applied at the PCEF/TDF, e.g. with the objective of avoiding credit fragmentation. Multiple pools of credit shall be allowed per IP‑CAN bearer/TDF session. The OCS shall control the credit pooling decisions.
The OCS shall, when credit authorization is sought, either grant a new pool of credit, together with a new credit limit, or give a reference to a pool of credit that is already granted for that IP‑CAN bearer/TDF session. The grouping of charging keys into pools shall not restrict the ability of the OCS to do credit authorisation and provide termination action individually for each charging key of the pool. It shall be possible for the OCS to group service data flows/applications charged at different rates or in different units (e.g. time/volume/event) into the same pool.

For each charging key, the PCEF/TDF may receive credit re-authorisation trigger information from the OCS, which shall cause the PCEF/TDF to perform a credit re-authorisation when the event occurs. If there are events which cannot be monitored in the PCEF/TDF, the PCEF/TDF shall provide the information about the required event triggers to the PCRF. If information about required event triggers is provided to the PCRF, it is an implementation option whether a successful confirmation is required from the PCRF in order for the PCEF/TDF to consider the credit (re-)authorization procedure to be successful. The credit re-authorisation trigger detection shall cause the PCEF/TDF to request re-authorisation of the credit in the OCS. It shall be possible for the OCS to instruct the PCEF/TDF to seek re-authorisation of credit in case of the events listed in table 6.1.

Table 6.1: Credit re-authorization triggers

| Credit re-authorization trigger | Description | Applicable for |
|---|---|---|
| Credit authorisation lifetime expiry | The OCS has limited the validity of the credit to expire at a certain time. | PCEF, TDF |
| Idle timeout | The service data flow identified by a PCC rule or the application identified by an ADC rule has been empty for a certain time. | PCEF, TDF |
| PLMN change | The UE has moved to another operator's domain. | PCEF, TDF |
| QoS changes | The QoS of the IP‑CAN bearer has changed. | PCEF |
| Change in type of IP‑CAN | The type of the IP‑CAN has changed. | PCEF, TDF |
| Location change (serving cell) | The serving cell of the UE has changed. | PCEF, TDF |
| Location change (serving area) (see note 2) | The serving area of the UE has changed. | PCEF, TDF |
| Location change (serving CN node) (see note 3) | The serving core network node of the UE has changed. | PCEF, TDF |
| Change of UE presence in Presence Reporting Area (see note 4) | The UE has entered or left a Presence Reporting Area. | PCEF, TDF |

NOTE 1: This list is not exhaustive. Events specific to each IP‑CAN are specified in Annex A and the protocol description may support additional events.
NOTE 2: A change in the serving area may also result in a change in the serving cell and possibly a change in the serving CN node.
NOTE 3: A change in the serving CN node may also result in a change in the serving cell and possibly a change in the serving area.
NOTE 4: The Presence Reporting Area(s) is provided by the OCS to the PCEF/TDF. The maximum number of PRA(s) per UE per PDN connection is configured in the OCS. The OCS may have independent configuration of the maximum number for Core Network pre-configured PRAs and UE-dedicated PRAs. The exact number(s) should be determined by the operator in deployment.

If the Location change trigger is armed, the PCEF shall activate the relevant IP‑CAN specific procedure which reports any changes in location to the level indicated by the trigger. If credit re-authorization triggers and event triggers require different levels of reporting of location change for a single UE, the location to be reported should be changed to the highest level of detail required. However, no request for credit re-authorization should be triggered to the OCS if the report received is more detailed than requested by the OCS.

NOTE 1: The access network may be configured to report location changes only when transmission resources are established in the radio access network.
The OCS determines at credit management session establishment/modification, based on local configuration, if the UE is located in an access type that supports reporting changes of UE presence in Presence Reporting Area. If the access type supports it, the OCS may subscribe to Change of UE presence in Presence Reporting Area at any time during the life time of the credit management session.

NOTE 2: If Presence Reporting Area reporting is not supported, the OCS may instead activate Location change reporting at cell and/or serving area level but due to the potential increase in signalling load, it is recommended that such reporting is only applied for a limited number of subscribers.

When activating reporting for change of UE presence in Presence Reporting Area, the OCS provides all of the PRA Identifier(s) to be activated for Core Network pre-configured Presence Reporting Area(s) and additionally all of the PRA Identifier(s) and the list(s) of their elements for UE-dedicated Presence Reporting Area(s) (see Table 6.4 in clause 6.4 for details of the PRA Identifier(s) and the list(s) of elements comprising each Presence Reporting Area).

If the OCS is configured with a PRA identifier referring to the list of PRA Identifier(s) within a Set of Core Network predefined Presence Reporting Areas as defined in TS 23.401 [17], it activates the reporting of UE entering/leaving the individual PRAs in the Set of Core Network predefined Presence Reporting Areas without providing the complete set of individual PRAs.

The OCS may change (activate/modify/remove) the Presence Reporting Area(s) to be reported by providing the updated PRA Identifier(s) to the PCEF. For UE-dedicated PRAs, the OCS may also change the list(s) of Presence Reporting Area elements related to the PRA Identifier(s). The OCS may unsubscribe from Change of UE presence in Presence Reporting Area at any time during the life time of the credit management session.
The OCS may be notified during the life time of a credit management session that the UE is located in an access type where local OCS configuration indicates that reporting changes of UE presence in Presence Reporting Area is not supported. If so, the OCS unsubscribes to Change of UE presence in Presence Reporting Area, if previously activated. Some of the re-authorization triggers are related to IP‑CAN bearer modifications. IP‑CAN bearer modifications, which do not match any credit re-authorization trigger (received from the OCS for the bearer) shall not cause any credit re-authorization interaction with the OCS. If the PCRF set the Out of credit event trigger (see clause 6.1.4), the PCEF/TDF shall inform the PCRF about the PCC/ADC rules for which credit is no longer available together with the applied termination action.
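The enter/leave semantics of "Change of UE presence in Presence Reporting Area" can be sketched as follows. This is a non-normative illustration; representing a PRA as a set of area elements (e.g. cells) is an assumption of the example.

```python
def pra_transition(pra_elements, old_location, new_location):
    """Report only transitions: 'ENTERED'/'LEFT' when the UE crosses the
    Presence Reporting Area boundary, None when its presence is unchanged."""
    was_in = old_location in pra_elements
    is_in = new_location in pra_elements
    if is_in and not was_in:
        return "ENTERED"
    if was_in and not is_in:
        return "LEFT"
    return None  # no change of UE presence -> nothing to report
```

Movements that stay inside (or outside) the PRA produce no report, which keeps the signalling load bounded compared to per-cell location reporting.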
5817f2cb61f7e615e7879961cd761fa6
23.203
6.1.4 Event Triggers
The Event Reporting Function (ERF) performs event trigger detection. When an event matching the event trigger occurs, the ERF shall report the occurred event to the PCRF. The Event Reporting Function is located either at the PCEF, at the BBERF (if applicable) or at the TDF for solicited application reporting (if applicable).

The event triggers define the conditions when the ERF shall interact again with the PCRF after an IP‑CAN session establishment. The event triggers that are required in procedures shall be unconditionally reported from the ERF, while the PCRF may subscribe to the remaining events. Whether an event trigger requires a subscription by the PCRF is indicated in column 4 of table 6.2 below.

The PCRF subscribes to new event triggers or removes armed event triggers, unsolicited at any time or upon receiving a request from the AF, an event report or a rule request from the ERF (PCEF, BBERF or TDF), using the Provision of PCC Rules procedure, the Provision of QoS Rules procedure (if applicable) or the Provision of ADC Rules procedure (if applicable). If the provided event triggers are associated with certain parameter values, the ERF shall include those values in the response back to the PCRF.

Event triggers are associated with all rules at the ERF of an IP‑CAN session (ERF located at the PCEF), of a Gateway Control session (ERF located at the BBERF) or of a Traffic Detection session (ERF located at the TDF). Event triggers determine when the ERF shall signal to the PCRF that an IP‑CAN bearer has been modified. It shall be possible for the ERF to react on the event triggers listed in table 6.2.

Table 6.2: Event triggers

Event trigger | Description | Reported from | Condition for reporting
PLMN change | The UE has moved to another operators' domain. | PCEF | PCRF
QoS change | The QoS of the IP‑CAN bearer has changed (note 3). | PCEF, BBERF | PCRF
QoS change exceeding authorization | The QoS of the IP‑CAN bearer has changed and exceeds the authorized QoS (note 3). | PCEF | PCRF
Traffic mapping information change | The traffic mapping information of the IP‑CAN bearer has changed (note 3). | PCEF | Always set
Resource modification request | A request for resource modification has been received by the BBERF/PCEF (note 6). | PCEF, BBERF | Always set
Routing information change | The IP flow mobility routing information has changed (when IP flow mobility as specified in TS 23.261 [23] applies) or the PCEF has received Routing Rules from the UE (when NBIFOM as specified in TS 23.161 [43] applies) (notes 11, 16). | PCEF | Always set (note 15)
Change in type of IP‑CAN (note 1) | The access type of the IP‑CAN bearer has changed. | PCEF | PCRF
Loss/recovery of transmission resources | The IP‑CAN transmission resources are no longer usable/again usable. | PCEF, BBERF | PCRF
Location change (serving cell) (note 10) | The serving cell of the UE has changed. | PCEF, BBERF | PCRF
Location change (serving area) (notes 4, 10) | The serving area of the UE has changed. | PCEF, BBERF | PCRF
Location change (serving CN node) (notes 5, 10) | The serving core network node of the UE has changed. | PCEF, BBERF | PCRF
Change of UE presence in Presence Reporting Area (note 17) | The UE is entering/leaving a Presence Reporting Area. | PCEF, BBERF | PCRF
Out of credit | Credit is no longer available. | PCEF, TDF | PCRF
Enforced PCC rule request | PCEF is performing a PCC rules request as instructed by the PCRF. | PCEF | PCRF
Enforced ADC rule request | TDF is performing an ADC rules request as instructed by the PCRF. | TDF | PCRF
UE IP address change (note 9) | A UE IP address has been allocated/released. | PCEF | Always set
Access Network Charging Correlation Information | Access Network Charging Correlation Information has been assigned. | PCEF | PCRF
Usage report (note 7) | The IP-CAN session or the Monitoring key specific resources consumed by a UE either reached the threshold or needs to be reported for other reasons. | PCEF, TDF | PCRF
Start of application traffic detection and Stop of application traffic detection (note 8) | The start or the stop of application traffic has been detected. | PCEF, TDF | PCRF
SRVCC CS to PS handover | A CS to PS handover has been detected. | PCEF | PCRF
Access Network Information report | Access information as specified in the Access Network Information Reporting part of a PCC rule. | PCEF, BBERF | PCRF
Credit management session failure | Transient/Permanent Failure as specified by the OCS. | PCEF, TDF | PCRF for PCEF, Always set for TDF
Addition / removal of an access to an IP-CAN session (note 11) | The PCEF reports when an access is added or removed. | PCEF | Always set
Change of usability of an access (note 11) | The PCEF reports that an access becomes unusable or usable again (note 14). | PCEF | Always set
UE resumed from suspend state | The PCEF reports to the PCRF when it detects that the UE is resumed from suspend state. | PCEF | PCRF

NOTE 1: This list is not exhaustive. Events specific for each IP‑CAN are specified in clause A.
NOTE 2: A change in the type of IP‑CAN may also result in a change in the PLMN.
NOTE 3: Available only when the bearer binding mechanism is allocated to the PCRF.
NOTE 4: A change in the serving area may also result in a change in the serving cell and a change in the serving CN node.
NOTE 5: A change in the serving CN node may also result in a change in the serving cell and possibly a change in the serving area.
NOTE 6: Available only when the IP‑CAN supports corresponding procedures for bearer independent resource requests.
NOTE 7: Usage is defined as either volume or time of user plane traffic.
NOTE 8: The start and stop of application traffic detection are separate event triggers, but received under the same subscription from the PCRF. For unsolicited application reporting, these event triggers are always set for the TDF.
NOTE 9: If TDF for solicited application reporting is applicable, upon receiving this event report from the PCEF, the PCRF always updates the TDF.
NOTE 10: Due to the potential increase in signalling load, it is recommended that such event trigger subscription is only applied for a limited number of subscribers.
NOTE 11: Used when NBIFOM is supported by the IP-CAN session. Refer to clause 6.1.18 for the description of NBIFOM impacts to PCC. NBIFOM Routing Rules are defined in clause 6.12.
NOTE 12: Void.
NOTE 13: Void.
NOTE 14: Used in Network-initiated NBIFOM mode. Reporting that an access becomes unusable or usable again is based on notifications received from the UE. This may correspond to the procedure "Access becomes Unusable and Usable" and to the procedure "IP flow mobility triggered by RAN Rule indication" defined in TS 23.161 [43].
NOTE 15: This event is always set when IFOM per TS 23.261 [23] applies or when NBIFOM per TS 23.161 [43] applies. In the latter case it applies in both Network-initiated NBIFOM mode and in UE-initiated NBIFOM mode.
NOTE 16: In UE-initiated NBIFOM mode this event indicates that the UE has created, modified or deleted Routing Rules. In Network-initiated NBIFOM mode this event indicates that the UE requests the network to create, modify or delete Routing Rules.
NOTE 17: The maximum number of PRA(s) per UE per PDN connection is configured in the PCRF. The PCRF may have independent configuration of the maximum number for Core Network pre-configured PRAs and UE-dedicated PRAs. The exact number(s) should be determined by the operator in deployment.

If the Location change trigger is armed, the PCEF shall activate the relevant IP‑CAN specific procedure which reports any changes in location to the level indicated by the trigger. If credit-authorization triggers and event triggers require different levels of reporting of location change for a single UE, the location to be reported should be changed to the highest level of detail required.
However, there should be no request being triggered for PCC rules or QoS rules (if applicable) update to the PCRF if the report received is more detailed than requested by the PCRF.

NOTE 1: The access network may be configured to report location changes only when transmission resources are established in the radio access network.

The PCRF determines at IP-CAN session establishment/modification, based on local configuration, if the UE is located in an access type that supports reporting changes of UE presence in Presence Reporting Area. If the access type supports it, the PCRF may subscribe to Change of UE presence in Presence Reporting Area at any time during the life time of the IP-CAN session.

NOTE 2: If Presence Reporting Area reporting is not supported, the PCRF may instead activate Location change reporting at cell and/or serving area level but due to the potential increase in signalling load, it is recommended that such reporting is only applied for a limited number of subscribers.

When activating reporting for change of UE presence in Presence Reporting Area, the PCRF provides all of the PRA Identifier(s) to be activated for Core Network pre-configured Presence Reporting Area(s) and additionally all of the PRA Identifier(s) and list(s) of their elements for UE-dedicated Presence Reporting Area(s) (see Table 6.4 in clause 6.4 for details of the PRA Identifier(s) and the list(s) of elements comprising each Presence Reporting Area). Setting the Change of UE presence in Presence Reporting Area event trigger shall not preclude the PCRF from simultaneously setting another Location change event trigger.

If the PCRF is configured with a PRA identifier referring to the list of PRA Identifier(s) within a Set of Core Network predefined Presence Reporting Areas as defined in TS 23.401 [17], it activates the reporting of UE entering/leaving the individual PRAs in the Set of Core Network predefined Presence Reporting Areas without providing the complete set of individual PRAs.
The PCRF may change (activate/modify/remove) the Presence Reporting Area(s) to be reported by providing the updated PRA Identifier(s) to the PCEF. For UE-dedicated PRAs, the PCRF may also change the list(s) of Presence Reporting Area elements related to the PRA Identifier(s). The PCRF may unsubscribe from Change of UE presence in Presence Reporting Area at any time during the life time of the IP-CAN session.

The PCRF may be notified during the life time of an IP-CAN session that the UE is located in an access type where local PCRF configuration indicates that reporting changes of UE presence in Presence Reporting Area is not supported. The PCRF unsubscribes from Change of UE presence in Presence Reporting Area, if previously activated.

IP‑CAN bearer modifications which do not match any event trigger shall cause no interaction with the PCRF. The QoS change event trigger shall trigger the PCRF interaction for all changes of the IP‑CAN bearer QoS. The QoS change exceeding authorization event trigger shall only trigger the PCRF interaction for those changes that exceed the QoS of the IP‑CAN bearer that has been authorized by the PCRF previously. The ERF shall check the QoS class identifier and the bandwidth.

The Resource modification request event trigger shall trigger the PCRF interaction for all resource modification requests not tied to a specific IP‑CAN bearer received by the PCEF/BBERF. The resource modification request received by the PCEF/BBERF may include a request for guaranteed bit rate changes for a traffic aggregate and/or the association/disassociation of the traffic aggregate with a QCI and/or a modification of the traffic aggregate.

The routing information change event trigger shall trigger the PCRF interaction for any change in how the IP flow is routed. The routing information change received by the PCEF is specified in TS 23.261 [23] (i.e. IP flow mobility routing rules) or TS 23.161 [43] (i.e. Routing Rules).
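As a sketch of the distinction drawn earlier in this clause: "QoS change" fires on any change of the bearer QoS, while "QoS change exceeding authorization" fires only when the changed QoS goes beyond what the PCRF previously authorized, the ERF checking the QoS class identifier and the bandwidth. This is a non-normative illustration; the dict field names and the interpretation of "exceeds" are assumptions of the example.

```python
def qos_change_exceeds_authorization(new_qos, authorized_qos):
    """True only when the changed bearer QoS exceeds the authorized QoS:
    here assumed to mean a different QCI or a higher maximum bitrate."""
    return (new_qos["qci"] != authorized_qos["qci"]
            or new_qos["mbr_kbps"] > authorized_qos["mbr_kbps"])
```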
The enforced PCC rule request event trigger shall trigger a PCEF interaction to request PCC rules from the PCRF for an established IP‑CAN session. This PCEF interaction shall take place within the Revalidation time limit set by the PCRF in the IP‑CAN session related policy information (clause 6.4). The enforced ADC rule request event trigger shall trigger a TDF interaction to request ADC rules from the PCRF for an established TDF session for solicited application reporting. This TDF interaction shall take place within the ADC Revalidation time limit set by the PCRF in the TDF session related policy information (clause 6.4). NOTE 3: The enforced PCC rule request and the enforced ADC rule request mechanisms can be used to avoid signalling overload situations e.g. due to time of day based PCC/ADC rule changes. The UE IP address change event trigger applies to the PCEF only and shall trigger a PCEF interaction with the PCRF in case a UE IPv4 address is allocated or released during the lifetime of the IP‑CAN session. The Access Network Charging Correlation Information event shall trigger the PCEF to report the assigned access network charging identifier for the PCC rules that are accompanied with a request for this event at activation. To activate usage monitoring, the PCRF shall set the Usage report event trigger and provide applicable usage thresholds for the Monitoring key(s) that are subject to usage monitoring in the requested node (PCEF or TDF, solicited application reporting). The PCRF shall not remove the Usage report event trigger while usage monitoring is still active in the PCEF/TDF. If the Usage report event trigger is set and the volume or the time thresholds, earlier provided by the PCRF, are reached, the PCEF or TDF (whichever received the event trigger) shall report this event to the PCRF. 
If both volume and time thresholds were provided and the thresholds, for one of the measurements, are reached, the PCEF or TDF shall report this event to the PCRF and the accumulated usage since last report shall be reported for both measurements. The Start of application traffic detection and Stop of application traffic detection events shall trigger an interaction with PCRF once the requested application traffic is detected (i.e. Start of application traffic detection) or the end of the requested application traffic is detected (i.e. Stop of application traffic detection) unless it is requested within a specific PCC Rule or ADC Rule to mute such a notification for solicited application reporting or unconditionally in case of unsolicited application reporting. The application identifier and service data flow descriptions, if deducible, shall also be included in the report. An application instance identifier shall be included in the report both for Start and for Stop of application traffic detection when service data flow descriptions are deducible. This is done to unambiguously match the Start and the Stop events. The SRVCC CS to PS handover event trigger shall trigger a PCEF interaction with the PCRF to inform that a CS to PS handover procedure has been detected. The PCRF shall ensure, as specified in TS 23.216 [28], to allow voice media over the default bearer during the course of the CS to PS SRVCC procedure. At PCC rule activation, modification and deactivation the ERF shall send, as specified in the PCC/QoS rule, the User Location Report and/or UE Timezone Report to the PCRF. NOTE 4: At PCC rule deactivation the User Location Report includes information on when the UE was last known to be in that location. The PCRF shall send the User Location Report and/or UE Timezone Report to the AF upon receiving an Access Network Information report corresponding to the AF session from the ERF. 
If the event trigger for Access Network Information reporting is set, the ERF shall check the need for access network information reporting after successful installation/modification or removal of a PCC/QoS rule or upon termination of the IP-CAN session/bearer. The ERF shall check the Access Network Information report parameters (User Location Report, UE Timezone Report) of the PCC/QoS rules and report the access network information received in the corresponding IP-CAN bearer establishment, modification or termination procedure to the PCRF. The ERF shall not report any subsequent access network information updates received from the IP‑CAN without any previous updates of related PCC/QoS rules unless the associated IP-CAN bearer or connection has been released. If the ERF receives a request to install/modify or remove a PCC/QoS rule with Access Network Information report parameters (User Location Report, UE Timezone Report) set and there is no bearer signalling related to this PCC/QoS rule (i.e. pending IP-CAN bearer signalling initiated by the UE or bearer signalling initiated by the ERF), the ERF shall initiate a bearer signalling to retrieve the current access network information of the UE and forward it to the PCRF afterwards. If the Access Network Information report parameter for the User Location Report is set and the user location (e.g. cell) is not available to the ERF, the ERF shall provide the serving PLMN identifier to the PCRF which shall forward it to the AF. The Credit management session failure event trigger shall trigger a PCEF or TDF interaction with the PCRF to inform about a credit management session failure and to indicate the failure reason and the affected PCC/ADC rules. NOTE 5: As a result, the PCRF may decide about e.g. TDF session termination, IP-CAN session termination (via PCC rule removal), perform gating of services in the PCEF/TDF, switch to offline charging, rating group change, etc. 
NOTE 6: For the PCEF the Credit management session failure event trigger applies to situations wherein the IP‑CAN session is not terminated by the PCEF due to the credit management session failure. If the UE resumed from suspend state event trigger is set and the UE is resumed from suspend state in EPC, the PCEF shall report this event to the PCRF. The PCEF shall not report any subsequent UE resumed from suspend state updates received from the IP‑CAN to the PCRF. When receiving the event report that the UE is resumed, the PCRF may provision PCC Rules to the PCEF to trigger an IP-CAN Session modification procedure.
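Returning to the usage monitoring rule in this clause: when either the volume or the time threshold is reached, the accumulated usage since the last report is reported for both measurements. A minimal, non-normative sketch:

```python
def usage_report_if_due(volume_used, volume_threshold, time_used, time_threshold):
    """If either armed threshold is reached, report the accumulated usage
    since the last report for BOTH measurements; otherwise report nothing."""
    if volume_used >= volume_threshold or time_used >= time_threshold:
        return {"volume": volume_used, "time": time_used}
    return None
```

Reporting both measurements together lets the PCRF keep its volume and time counters for the Monitoring key consistent after every report.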
6.1.5 Policy Control
Policy control comprises functionalities for:
- Binding, i.e. the generation of an association between a service data flow and the IP‑CAN bearer transporting that service data flow;
- Gating control, i.e. the blocking or allowing of packets, belonging to a service data flow or specified by an application identifier, to pass through to the desired endpoint;
- Event reporting, i.e. the notification of and reaction to application events to trigger new behaviour in the user plane as well as the reporting of events related to the resources in the GW (PCEF);
- QoS control, i.e. the authorisation and enforcement of the maximum QoS that is authorised for a service data flow, an application identified by application identifier, or an IP‑CAN bearer;
- Redirection, i.e. the steering of packets, belonging to an application defined by the application identifier, to the specified redirection address;
- IP‑CAN bearer establishment for IP‑CANs that support network initiated procedures for IP‑CAN bearer establishment.

In case of an aggregation of multiple service data flows (e.g. for GPRS a PDP context), the combination of the authorised QoS information of the individual service data flows is provided as the authorised QoS for this aggregate. The enforcement of the authorized QoS of the IP‑CAN bearer may lead to a downgrading or upgrading of the requested bearer QoS by the GW (PCEF) as part of a UE-initiated IP‑CAN bearer establishment or modification. Alternatively, the enforcement of the authorised QoS may, depending on operator policy and network capabilities, lead to network initiated IP‑CAN bearer establishment or modification. If the PCRF provides authorized QoS for both the IP‑CAN bearer and PCC rule(s), the enforcement of the authorized QoS of the individual PCC rules shall take place first.

QoS authorization information may be dynamically provisioned by the PCRF or, if the conditions mentioned in clause 6.3.1 apply, it can be a predefined PCC rule in the PCEF.
In case the PCRF provides PCC rules dynamically, authorised QoS information for the IP‑CAN bearer (combined QoS) may be provided. For predefined PCC rules within the PCEF the authorized QoS information shall take effect when the PCC rule is activated. The PCEF shall combine the different sets of authorized QoS information, i.e. the information received from the PCRF and the information corresponding to the predefined PCC rules. The PCRF shall know the authorized QoS information of the predefined PCC rules and shall take this information into account when activating them. This ensures that the combined authorized QoS of a set of PCC rules that are activated by the PCRF is within the limitations given by the subscription and operator policies regardless of whether these PCC rules are dynamically provided, predefined or both.

For policy control, the AF interacts with the PCRF and the PCRF interacts with the PCEF as instructed by the AF. For certain events related to policy control, the AF shall be able to give instructions to the PCRF to act on its own, i.e. based on the service information currently available. The following events are subject to instructions from the AF:
- The authorization of the service based on incomplete service information;
NOTE 1: The QoS authorization based on incomplete service information is required for e.g. IMS session setup scenarios with available resources on originating side and a need for resource reservation on terminating side.
- The immediate authorization of the service;
- The gate control (i.e. whether there is a common gate handling per AF session or an individual gate handling per AF session component required);
- The forwarding of IP‑CAN bearer level information or events:
  - Type of IP‑CAN (e.g. GPRS, etc.);
  - Transmission resource status (established/released/lost);
  - Access Network Charging Correlation Information;
  - Credit denied.
NOTE 2: The credit denied information is only relevant for AFs not performing service charging.

To enable the binding functionality, the UE and the AF shall provide all available flow description information (e.g. source and destination IP address and port numbers and the protocol information). The UE shall use the traffic mapping information to indicate downlink and uplink IP flows.

If the PCEF indicates that a PDN connection is carried over satellite access (of WB-E-UTRAN, NB-IoT or LTE-M RAT Types and specific values as defined in TS 23.401 [17]), the PCRF may take this information into account for the policy decision, e.g. together with any delay requirements provided by the AF.
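The combination of per-rule authorized QoS described in this clause can be illustrated with a minimal, non-normative sketch: the bitrates of the individual rules bound to one bearer add up into the combined authorized QoS. The field names are assumptions of the example, and real combination logic also covers QCI/ARP handling not shown here.

```python
def combine_authorized_qos(pcc_rules):
    """Combine the authorized QoS of the service data flows bound to one
    bearer: guaranteed and maximum bitrates of the individual rules add up."""
    return {
        "gbr_kbps": sum(rule.get("gbr_kbps", 0) for rule in pcc_rules),
        "mbr_kbps": sum(rule.get("mbr_kbps", 0) for rule in pcc_rules),
    }
```

Because the PCRF knows the authorized QoS of predefined rules as well, it can check that such a combined value stays within the subscription and operator-policy limits before activating the rules.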
6.1.6 Service (data flow) Prioritization and Conflict Handling
Service pre-emption priority enables the PCRF to resolve conflicts where the activation of all requested active PCC rules for services would result in a cumulative authorized QoS which exceeds the Subscribed Guaranteed bandwidth QoS. For example, when supporting network controlled QoS, the PCRF may use the pre-emption priority of a service, the activation of which would cause the subscriber's authorized QoS to be exceeded. If this pre-emption priority is greater than that of any one or more active PCC rules, the PCRF can determine whether the deactivation of any one or more such rules would allow the higher pre-emption priority PCC rule to be activated whilst ensuring the resulting cumulative QoS does not exceed a subscriber's Subscribed Guaranteed Bandwidth QoS. If such a determination can be made, the PCRF may resolve the conflict by deactivating those selected PCC rules with lower pre-emption priorities and accepting the higher priority service information from the AF. If such a determination cannot be made, the PCRF may reject the service information from the AF. NOTE: Normative PCRF requirements for conflict handling are not defined. Alternative procedures may use a combination of pre-emption priority and AF provided priority indicator.
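One possible realization of this conflict resolution (non-normative, as the NOTE states): deactivate lower pre-emption priority rules, lowest first, until the new rule fits within the Subscribed Guaranteed bandwidth QoS, and reject the service information if that is not possible. In this sketch a larger number is assumed to mean a higher pre-emption priority, and rules are modelled as simple tuples.

```python
def resolve_preemption(active_rules, new_rule, subscribed_gbr_kbps):
    """active_rules/new_rule: (name, preemption_priority, gbr_kbps) tuples.
    Returns the names of rules to deactivate, or None to reject the request."""
    _, new_priority, new_gbr = new_rule
    candidates = sorted(active_rules, key=lambda r: r[1])  # lowest priority first
    total = sum(r[2] for r in candidates) + new_gbr
    to_deactivate = []
    # Only rules with strictly lower pre-emption priority may be deactivated.
    while total > subscribed_gbr_kbps and candidates and candidates[0][1] < new_priority:
        name, _, gbr = candidates.pop(0)
        to_deactivate.append(name)
        total -= gbr
    return to_deactivate if total <= subscribed_gbr_kbps else None
```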
6.1.7 Standardized QoS characteristics
6.1.7.1 General
The service level (i.e. per SDF or per SDF aggregate) QoS parameters are QCI, ARP, GBR and MBR. Each Service Data Flow (SDF) is associated with one and only one QoS Class Identifier (QCI). For the same IP‑CAN session multiple SDFs with the same QCI and ARP can be treated as a single traffic aggregate which is referred to as an SDF aggregate. An SDF is a special case of an SDF aggregate. The QCI is a scalar that is used as a reference to node specific parameters that control packet forwarding treatment (e.g. scheduling weights, admission thresholds, queue management thresholds, link layer protocol configuration, etc.) and that have been pre-configured by the operator owning the node (e.g. eNodeB). When required by operator policy, the eNodeB can be configured to also use the ARP priority level in addition to the QCI to control the packet forwarding treatment in the eNodeB for SDFs having high priority ARPs.
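The grouping rule above (SDFs of one IP‑CAN session with the same QCI and ARP form one SDF aggregate) can be sketched as follows; the dict field names are assumptions of the example.

```python
from collections import defaultdict

def sdf_aggregates(sdfs):
    """Group the SDFs of one IP-CAN session by (QCI, ARP); each group is
    treated as a single traffic aggregate (an SDF aggregate)."""
    groups = defaultdict(list)
    for sdf in sdfs:
        groups[(sdf["qci"], sdf["arp"])].append(sdf["name"])
    return dict(groups)
```

An SDF that shares its (QCI, ARP) pair with no other flow simply forms a one-element aggregate, matching "an SDF is a special case of an SDF aggregate".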
6.1.7.2 Standardized QCI characteristics
This clause specifies standardized characteristics associated with standardized QCI values. The characteristics describe the packet forwarding treatment that an SDF aggregate receives edge-to-edge between the UE and the PCEF (see figure 6.1.7‑1) in terms of the following performance characteristics:
1. Resource Type (GBR or Non-GBR);
2. Priority;
3. Packet Delay Budget;
4. Packet Error Loss Rate;
5. Maximum Data Burst Volume (for some GBR QCIs);
6. Data Rate Averaging Window (for some GBR QCIs).

Figure 6.1.7-1: Scope of the Standardized QCI characteristics for client/server (upper figure) and peer/peer (lower figure) communication

The standardized characteristics are not signalled on any interface. They should be understood as guidelines for the pre-configuration of node specific parameters for each QCI. The goal of standardizing a QCI with corresponding characteristics is to ensure that applications / services mapped to that QCI receive the same minimum level of QoS in multi-vendor network deployments and in case of roaming. A standardized QCI and corresponding characteristics is independent of the UE's current access (3GPP or Non-3GPP).

The one-to-one mapping of standardized QCI values to standardized characteristics is captured in table 6.1.7-A and table 6.1.7-B. The main differences between the two parts are that, in contrast to Part A, Part B of Table 6.1.7 describes QCIs for which the Packet Error Loss Rate calculation includes those packets that are not delivered within the Packet Delay Budget; and, it provides additional information on the Data Rate Averaging Window as well as the Maximum Data Burst Volume that needs to be delivered within the Packet Delay Budget.
Table 6.1.7-A: Standardized QCI characteristics

QCI | Resource Type | Priority Level | Packet Delay Budget (NOTE 13) | Packet Error Loss Rate (NOTE 2) | Example Services
1 (NOTE 3) | GBR | 2 | 100 ms (NOTE 1, NOTE 11) | 10^-2 | Conversational Voice
2 (NOTE 3) | GBR | 4 | 150 ms (NOTE 1, NOTE 11) | 10^-3 | Conversational Video (Live Streaming)
3 (NOTE 3, NOTE 14) | GBR | 3 | 50 ms (NOTE 1, NOTE 11) | 10^-3 | Real Time Gaming; V2X messages; Electricity distribution - medium voltage (e.g. clause 7.2.2 of TS 22.261 [51]); Process automation - monitoring (e.g. clause 7.2.2 of TS 22.261 [51])
4 (NOTE 3) | GBR | 5 | 300 ms (NOTE 1, NOTE 11) | 10^-6 | Non-Conversational Video (Buffered Streaming)
65 (NOTE 3, NOTE 9, NOTE 12) | GBR | 0.7 | 75 ms (NOTE 7, NOTE 8) | 10^-2 | Mission Critical user plane Push To Talk voice (e.g. MCPTT)
66 (NOTE 3, NOTE 12) | GBR | 2 | 100 ms (NOTE 1, NOTE 10) | 10^-2 | Non-Mission-Critical user plane Push To Talk voice
67 (NOTE 3, NOTE 12) | GBR | 1.5 | 100 ms (NOTE 1, NOTE 10) | 10^-3 | Mission Critical Video user plane
75 (NOTE 14) | GBR | 2.5 | 50 ms (NOTE 1) | 10^-2 | V2X messages
71 | GBR | 5.6 | 150 ms (NOTE 1, NOTE 16) | 10^-6 | "Live" Uplink Streaming (e.g. TS 26.238 [53])
72 | GBR | 5.6 | 300 ms (NOTE 1, NOTE 16) | 10^-4 | "Live" Uplink Streaming (e.g. TS 26.238 [53])
73 | GBR | 5.6 | 300 ms (NOTE 1, NOTE 16) | 10^-8 | "Live" Uplink Streaming (e.g. TS 26.238 [53])
74 | GBR | 5.6 | 500 ms (NOTE 1, NOTE 16) | 10^-8 | "Live" Uplink Streaming (e.g. TS 26.238 [53])
76 | GBR | 5.6 | 500 ms (NOTE 1, NOTE 16) | 10^-4 | "Live" Uplink Streaming (e.g. TS 26.238 [53])
5 (NOTE 3) | Non-GBR | 1 | 100 ms (NOTE 1, NOTE 10) | 10^-6 | IMS Signalling
6 (NOTE 4) | Non-GBR | 6 | 300 ms (NOTE 1, NOTE 10) | 10^-6 | Video (Buffered Streaming); TCP-based (e.g. www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.)
7 (NOTE 3) | Non-GBR | 7 | 100 ms (NOTE 1, NOTE 10) | 10^-3 | Voice, Video (Live Streaming), Interactive Gaming
8 (NOTE 5) | Non-GBR | 8 | 300 ms (NOTE 1) | 10^-6 | Video (Buffered Streaming); TCP-based (e.g. www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.)
9 (NOTE 6) | Non-GBR | 9 | 300 ms (NOTE 1) | 10^-6 | Video (Buffered Streaming); TCP-based (e.g. www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.)
10 | Non-GBR | 9 | 1100 ms (NOTE 1, NOTE 17) | 10^-6 | Video (Buffered Streaming); TCP-based (e.g. www, e-mail, chat, ftp, p2p file sharing, progressive video, etc.) and any service that can be used over satellite access with these characteristics
69 (NOTE 3, NOTE 9, NOTE 12) | Non-GBR | 0.5 | 60 ms (NOTE 7, NOTE 8) | 10^-6 | Mission Critical delay sensitive signalling (e.g. MC-PTT signalling, MC Video signalling)
70 (NOTE 4, NOTE 12) | Non-GBR | 5.5 | 200 ms (NOTE 7, NOTE 10) | 10^-6 | Mission Critical Data (e.g. example services are the same as QCI 6/8/9)
79 (NOTE 14) | Non-GBR | 6.5 | 50 ms (NOTE 1, NOTE 10) | 10^-2 | V2X messages
80 (NOTE 3) | Non-GBR | 6.8 | 10 ms (NOTE 10, NOTE 15) | 10^-6 | Low latency eMBB applications (TCP/UDP-based); Augmented Reality

NOTE 1: A delay of 20 ms for the delay between a PCEF and a radio base station should be subtracted from a given PDB to derive the packet delay budget that applies to the radio interface. This delay is the average between the case where the PCEF is located "close" to the radio base station (roughly 10 ms) and the case where the PCEF is located "far" from the radio base station, e.g. in case of roaming with home routed traffic (the one-way packet delay between Europe and the US west coast is roughly 50 ms). The average takes into account that roaming is a less typical scenario. It is expected that subtracting this average delay of 20 ms from a given PDB will lead to desired end-to-end performance in most typical cases. Also, note that the PDB defines an upper bound. Actual packet delays - in particular for GBR traffic - should typically be lower than the PDB specified for a QCI as long as the UE has sufficient radio channel quality.
NOTE 2: The rate of non congestion related packet losses that may occur between a radio base station and a PCEF should be regarded to be negligible. A PELR value specified for a standardized QCI therefore applies completely to the radio interface between a UE and radio base station.
NOTE 3: This QCI is typically associated with an operator controlled service, i.e. a service where the SDF aggregate's uplink / downlink packet filters are known at the point in time when the SDF aggregate is authorized. In case of E-UTRAN this is the point in time when a corresponding dedicated EPS bearer is established / modified.
NOTE 4: If the network supports Multimedia Priority Services (MPS) then this QCI could be used for the prioritization of non real-time data (i.e. most typically TCP-based services/applications) of MPS subscribers.
NOTE 5: This QCI could be used for a dedicated "premium bearer" (e.g. associated with premium content) for any subscriber / subscriber group. Also in this case, the SDF aggregate's uplink / downlink packet filters are known at the point in time when the SDF aggregate is authorized. Alternatively, this QCI could be used for the default bearer of a UE/PDN for "premium subscribers".
NOTE 6: This QCI is typically used for the default bearer of a UE/PDN for non privileged subscribers. Note that AMBR can be used as a "tool" to provide subscriber differentiation between subscriber groups connected to the same PDN with the same QCI on the default bearer.
NOTE 7: For Mission Critical services, it may be assumed that the PCEF is located "close" to the radio base station (roughly 10 ms) and is not normally used in a long distance, home routed roaming situation. Hence a delay of 10 ms for the delay between a PCEF and a radio base station should be subtracted from this PDB to derive the packet delay budget that applies to the radio interface.
NOTE 8: In both RRC Idle and RRC Connected mode, the PDB requirement for these QCIs can be relaxed (but not to a value greater than 320 ms) for the first packet(s) in a downlink data or signalling burst in order to permit reasonable battery saving (DRX) techniques.
NOTE 9: It is expected that QCI-65 and QCI-69 are used together to provide Mission Critical Push to Talk service (e.g. QCI-5 is not used for signalling for the bearer that utilizes QCI-65 as user plane bearer). It is expected that the amount of traffic per UE will be similar or less compared to the IMS signalling.
NOTE 10: In both RRC Idle and RRC Connected mode, the PDB requirement for these QCIs can be relaxed for the first packet(s) in a downlink data or signalling burst in order to permit battery saving (DRX) techniques.
NOTE 11: In RRC Idle mode, the PDB requirement for these QCIs can be relaxed for the first packet(s) in a downlink data or signalling burst in order to permit battery saving (DRX) techniques.
NOTE 12: This QCI value can only be assigned upon request from the network side. The UE and any application running on the UE is not allowed to request this QCI value.
NOTE 13: Packet delay budget is not applicable on NB-IoT or when Enhanced Coverage is used for WB-E-UTRAN (see TS 36.300 [19]).
NOTE 14: This QCI could be used for transmission of V2X messages as defined in TS 23.285 [48].
NOTE 15: A delay of 2 ms for the delay between a PCEF and a radio base station should be subtracted from the given PDB to derive the packet delay budget that applies to the radio interface.
NOTE 16: For "live" uplink streaming (see TS 26.238 [53]), guidelines for PDB values of the different QCIs correspond to the latency configurations defined in TR 26.939 [54]. In order to support higher latency reliable streaming services (above 500 ms PDB), if different PDB and PELR combinations are needed these configurations will have to use non-standardised QCIs.
NOTE 17: The worst case one way propagation delay for GEO satellite is expected to be ~270 ms, ~21 ms for LEO at 1200 km and ~13 ms for LEO at 600 km. The UL scheduling delay that needs to be added is also typically a two way propagation delay, e.g. ~540 ms for GEO, ~42 ms for LEO at 1200 km and ~26 ms for LEO at 600 km.
Based on that, the access network Packet delay budget is not applicable for QCIs that require access network PDB lower than the sum of these values when the specific types of satellite access are used (see TS 36.300 [19]). QCI-10 can accommodate the worst case PDB for GEO satellite type. Table 6.1.7-B: Standardized QCI characteristics QCI Resource Type Priority Level Packet Delay Budget (NOTE B1) Packet Error Loss Rate (NOTE B2) Maximum Data Burst Volume (NOTE B1) Data Rate Averaging Window Example Services 82 (NOTE B6) GBR 1.9 10 ms (NOTE B4) 10-4 (NOTE B3) 255 bytes 2000 ms Discrete Automation (TS 22.278 [38], clause 8 bullet g and TS 22.261 [51], table 7.2.2-1, "small packets") 83 (NOTE B6) 2.2 10 ms (NOTE B4) 10-4 (NOTE B3) 1354 bytes (NOTE B5) 2000 ms Discrete Automation (TS 22.278 [38], clause 8 bullet g and TS 22.261 [51], table 7.2.2-1, "big packets") 84 (NOTE B6) 2.4 30 ms (NOTE B7) 10-5 (NOTE B3) 1354 bytes (NOTE B5) 2000 ms Intelligent Transport Systems (TS 22.278 [38], clause 8, bullet h and TS 22.261 [51], table 7.2.2). 85 (NOTE B6) 2.1 5 ms (NOTE B8) 10-5 (NOTE B3) 255 bytes 2000 ms Electricity Distribution- high voltage (TS 22.278 [38], clause 8, bullet i and TS 22.261 [51], table 7.2.2 and Annex D, clause D.4.2). NOTE B1: The PDB applies to bursts that are not greater than Maximum Data Burst Volume. NOTE B2: This Packet Error Loss Rate includes packets that are not successfully delivered over the access network plus those packets that comply with the Maximum Data Burst Volume and GBR requirements but which are not delivered within the Packet Delay Budget. NOTE B3: Data rates above the GBR, or, bursts larger than the Maximum Data Burst Volume, are treated as best effort and, in order to serve other packets and meet the PELR, this can lead to them being discarded. NOTE B4: A delay of 1 ms for the delay between a PCEF and a radio base station should be subtracted from a given PDB to derive the packet delay budget that applies to the radio interface. 
NOTE B5: This Maximum Data Burst Volume value is set to 1354 bytes to avoid IP fragmentation on an IPv6 based, IPSec protected GTP tunnel to the eNB (the value is calculated as in Annex C of TS 23.060 [12] and further reduced by 4 bytes to allow for the usage of a GTP-U extension header). NOTE B6: This QCI is typically associated with a dedicated EPS bearer. NOTE B7: A delay of 5 ms for the delay between a PCEF and a radio base station should be subtracted from a given PDB to derive the packet delay budget that applies to the radio interface. NOTE B8: A delay of 2 ms for the delay between a PCEF and a radio base station should be subtracted from a given PDB to derive the packet delay budget that applies to the radio interface. The Resource Type determines if dedicated network resources related to a service or bearer level Guaranteed Bit Rate (GBR) value are permanently allocated (e.g. by an admission control function in a radio base station). GBR SDF aggregates are therefore typically authorized "on demand" which requires dynamic policy and charging control. A Non GBR SDF aggregate may be pre-authorized through static policy and charging control. The Maximum Data Burst Volume, if defined for the QCI (see Table 6.1.7-B), is the amount of data which the RAN is expected to deliver within the part of the Packet Delay Budget allocated to the link between the UE and the radio base station as long as the data is within the GBR requirements. If more data is transmitted from the application, delivery within the Packet Delay Budget cannot be guaranteed for packets exceeding the Maximum Data Burst Volume or GBR requirements. The Data Rate Averaging Window, if defined for the QCI (see Table 6.1.7-B), is the 'sliding window' duration over which the GBR and MBR for a GBR SDF aggregate shall be calculated (e.g. in the RAN, PDN-GW and UE). The Packet Delay Budget (PDB) defines an upper bound for the time that a packet may be delayed between the UE and the PCEF. 
For a certain QCI the value of the PDB is the same in uplink and downlink. The purpose of the PDB is to support the configuration of scheduling and link layer functions (e.g. the setting of scheduling priority weights and HARQ target operating points). Except for QCIs 82 and 83, the PDB shall be interpreted as a maximum delay with a confidence level of 98 percent. For services using QCI 82 or 83, a packet delayed by more than the PDB is included in the calculation of the PELR if the packet is within the Maximum Data Burst Volume and GBR requirements.

NOTE 1: The PDB denotes a "soft upper bound" in the sense that an "expired" packet, e.g. a link layer SDU that has exceeded the PDB, does not need to be discarded (e.g. by RLC in E-UTRAN). The discarding (dropping) of packets is expected to be controlled by a queue management function, e.g. based on pre-configured dropping thresholds.

The support for SRVCC requires that QCI=1 is only used for IMS speech sessions, in accordance with TS 23.216 [28].

NOTE 2: Triggering SRVCC will cause service interruption and/or inconsistent service experience when using QCI=1 for non-IMS services.
NOTE 3: Triggering SRVCC for a WebRTC IMS session will cause service interruption and/or inconsistent service experience when using QCI=1. Operator policy (e.g. use of a specific AF application identifier) can be used to avoid using QCI 1 for a voice service, e.g. a WebRTC IMS session.

Services using a Non-GBR QCI should be prepared to experience congestion related packet drops and, except for QCI 80, 98 percent of the packets that have not been dropped due to congestion should not experience a delay exceeding the QCI's PDB. This may for example occur during traffic load peaks or when the UE becomes coverage limited. See Annex J for details. Packets that have not been dropped due to congestion may still be subject to non congestion related packet losses (see PELR below).
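The per-QCI PDB subtractions described in the table notes (20 ms for most QCIs per NOTE 1, 10 ms for the Mission Critical QCIs per NOTE 7, 2 ms for QCI 80 per NOTE 15) can be sketched as a small helper. This is a minimal illustration; the dictionaries and function name are not part of the specification.

```python
# Sketch: derive the radio-interface packet delay budget from the end-to-end
# PDB by subtracting the assumed PCEF-to-radio-base-station delay
# (Table 6.1.7-A, NOTE 1 / NOTE 7 / NOTE 15). Names are illustrative.

# End-to-end PDB in ms for a few standardized QCIs (from Table 6.1.7-A).
PDB_MS = {1: 100, 2: 150, 5: 100, 65: 75, 69: 60, 70: 200, 80: 10}

# PCEF-to-eNB delay assumed per the table notes.
CN_DELAY_MS = {65: 10, 69: 10, 70: 10,   # NOTE 7: Mission Critical QCIs
               80: 2}                    # NOTE 15: low latency eMBB
DEFAULT_CN_DELAY_MS = 20                 # NOTE 1: all other QCIs

def radio_pdb_ms(qci: int) -> int:
    """Packet delay budget that applies to the radio interface."""
    return PDB_MS[qci] - CN_DELAY_MS.get(qci, DEFAULT_CN_DELAY_MS)
```

For example, this yields a radio-interface budget of 80 ms for QCI 1 and 65 ms for QCI 65.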
Owing to its low latency objective, services using QCI 80 should anticipate that more than 2 percent of packets might exceed the PDB of QCI 80.

Except for services using QCI 82 or 83, services using a GBR QCI and sending at a rate smaller than or equal to the GBR can in general assume that congestion related packet drops will not occur, and 98 percent of the packets shall not experience a delay exceeding the QCI's PDB. Exceptions (e.g. transient link outages) can always occur in a radio access system, which may then lead to congestion related packet drops even for services using a GBR QCI and sending at a rate smaller than or equal to the GBR. Packets that have not been dropped due to congestion may still be subject to non congestion related packet losses (see PELR below). For services using QCI 82 or 83, a packet which is delayed by more than the PDB but is within the Maximum Data Burst Volume and GBR requirements is counted as lost when calculating the PELR.

Every QCI (GBR and Non-GBR) is associated with a Priority level (see Table 6.1.7-A and Table 6.1.7-B). The lowest Priority level value corresponds to the highest priority. The Priority levels shall be used to differentiate between SDF aggregates of the same UE, and shall also be used to differentiate between SDF aggregates from different UEs. Via its QCI, an SDF aggregate is associated with a Priority level and a PDB. Scheduling between different SDF aggregates shall primarily be based on the PDB. If the target set by the PDB can no longer be met for one or more SDF aggregate(s) across all UEs that have sufficient radio channel quality, then the QCI Priority level shall be used as follows: a scheduler shall meet the PDB of an SDF aggregate on QCI Priority level N in preference to meeting the PDB of SDF aggregates on the next QCI Priority level greater than N, until the priority N SDF aggregate's GBR (in case of a GBR SDF aggregate) has been satisfied.
Other aspects related to the treatment of traffic exceeding an SDF aggregate's GBR are out of scope of this specification. When required by operator policy, the eNodeB can be configured to use the ARP priority level in addition to the QCI priority level to determine the relative priority of the SDFs in meeting the PDB of an SDF aggregate. This configuration applies only for high priority ARPs as defined in clause 6.1.7.3.

NOTE 4: The definition (or quantification) of "sufficient radio channel quality" is out of the scope of 3GPP specifications.
NOTE 5: In case of E-UTRAN, a QCI's Priority level and, when required by operator policy, the ARP priority level may be used as the basis for assigning the uplink priority per Radio Bearer (see TS 36.300 [19] for details).

The Packet Error Loss Rate (PELR) defines an upper bound for the rate of SDUs (e.g. IP packets) that have been processed by the sender of a link layer protocol (e.g. RLC in E‑UTRAN) but that are not successfully delivered by the corresponding receiver to the upper layer (e.g. PDCP in E‑UTRAN). Thus, the PELR defines an upper bound for a rate of non congestion related packet losses. The purpose of the PELR is to allow for appropriate link layer protocol configurations (e.g. RLC and HARQ in E‑UTRAN). For a certain QCI the value of the PELR is the same in uplink and downlink.

NOTE 6: The characteristics PDB and PELR are specified only based on application / service level requirements, i.e. those characteristics should be regarded as being access agnostic, independent from the roaming scenario (roaming or non-roaming) and independent from operator policies.
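The scheduling preference described above (meet the PDB of SDF aggregates at Priority level N before those at levels greater than N) can be illustrated as a simple ordering. The dataclass and field names below are illustrative only; a real eNodeB scheduler is far more involved and its internals are out of 3GPP scope.

```python
# Sketch of the priority rule in clause 6.1.7.2: when not every PDB target can
# be met, serve SDF aggregates in order of QCI Priority level (lower value =
# higher priority), and within a level by the earliest packet deadline.
# The class and field names are illustrative, not from the specification.
from dataclasses import dataclass

@dataclass
class SdfAggregate:
    name: str
    priority_level: float   # QCI Priority level, e.g. 0.7 for QCI 65
    deadline_ms: float      # time left before the head packet exceeds its PDB

def service_order(aggregates):
    """Order in which a scheduler would try to meet PDBs under congestion."""
    return sorted(aggregates, key=lambda a: (a.priority_level, a.deadline_ms))
```

With this ordering, an MCPTT aggregate (QCI 65, Priority level 0.7) would be served before conversational voice (QCI 1, Priority level 2), even if the voice packet's deadline is nearer.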
5817f2cb61f7e615e7879961cd761fa6
23.203
6.1.7.3 Allocation and Retention Priority characteristics
The QoS parameter ARP contains information about the priority level, the pre-emption capability and the pre-emption vulnerability. The priority level defines the relative importance of a resource request. This allows deciding whether a bearer establishment or modification request can be accepted or needs to be rejected in case of resource limitations (typically used for admission control of GBR traffic). It can also be used to decide which existing bearers to pre-empt during resource limitations.

NOTE 1: The ARP priority level can be used in addition to the QCI to determine the transport level packet marking, e.g. to set the DiffServ Code Point of the associated EPS bearer, as described in TS 23.401 [17].
NOTE 2: When required by operator policy, the eNodeB can be configured to use the ARP priority level in addition to the QCI priority level to control the packet forwarding treatment for SDFs having high priority ARPs.

The range of the ARP priority level is 1 to 15, with 1 as the highest level of priority. The pre-emption capability information defines whether a service data flow can get resources that were already assigned to another service data flow with a lower priority level. The pre-emption vulnerability information defines whether a service data flow can lose the resources assigned to it in order to admit a service data flow with a higher priority level. The pre-emption capability and the pre-emption vulnerability can each be set to either 'yes' or 'no'.

The ARP priority levels 1-8 should only be assigned to resources for services that are authorized to receive prioritized treatment within an operator domain (i.e. that are authorized by the serving network). The ARP priority levels 9-15 may be assigned to resources that are authorized by the home network and thus applicable when a UE is roaming.

NOTE 3: This ensures that future releases may use ARP priority levels 1-8 to indicate e.g. emergency and other priority services within an operator domain in a backward compatible manner. This does not prevent the use of ARP priority levels 1-8 in roaming situations in case appropriate roaming agreements exist that ensure a compatible use of these priority levels.
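As an illustration of how the three ARP components combine in an admission-control decision, the following sketch shows a pre-emption check. The type and function names are hypothetical; the actual decision logic (which bearer to pre-empt, how to handle ties) is implementation- and operator-specific.

```python
# Sketch of an ARP-based pre-emption check (clause 6.1.7.3): priority level
# runs 1..15 with 1 highest; a new request may pre-empt an existing bearer
# only if the request has the pre-emption capability, the existing bearer is
# pre-emption vulnerable, and the request has a strictly higher priority
# (i.e. a numerically lower level). Names are illustrative.
from dataclasses import dataclass

@dataclass
class Arp:
    priority_level: int             # 1 (highest) .. 15 (lowest)
    preemption_capability: bool     # may this flow take resources from others?
    preemption_vulnerability: bool  # may this flow lose its resources?

def may_preempt(request: Arp, existing: Arp) -> bool:
    return (request.preemption_capability
            and existing.preemption_vulnerability
            and request.priority_level < existing.priority_level)
```

For example, a level-2 request with pre-emption capability may displace a vulnerable level-9 bearer, but never the reverse.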
6.1.8 Termination Action
The termination action applies only in case of online charging. The termination action indicates the action which the PCEF/TDF should perform when no more credit is granted. A packet that matches a PCC rule/ADC rule, indicating a charging key for which no credit has been granted, is subject to a termination action. The defined termination actions include:
- Allowing the packets to pass through;
- Dropping the packets;
- The PCEF/TDF Default Termination Action;
- The re-direction of packets to an application server (e.g. defined in the termination action).

NOTE: Such a re-direction may cause an application protocol specific asynchronous close event, and application protocol specific procedures may be required in the UE and/or AF in order to recover, e.g. as specified in RFC 2616 for HTTP.

The Default Termination Action for all charging keys for which no more credit is granted and there is no specific termination action shall be pre-configured in the PCEF/TDF according to operator's policy. For instance, the default behaviour may consist of allowing packets of any terminated service to pass through the PCEF/TDF.

The OCS may provide a termination action for each charging key over the Gy interface. Any previously provided termination action may be overwritten by the OCS. A termination action remains valid and shall be applied by the PCEF/TDF until all the corresponding PCC/ADC rules of that charging key are removed or the corresponding IP‑CAN bearer is removed (for GPRS, the PDP context). The OCS shall provide the termination action to the PCEF/TDF before denying credit; otherwise the PCEF/TDF default termination action will be performed.
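A minimal sketch of the per-charging-key termination-action handling described above, assuming a simple in-memory store; the class, method and action names are illustrative, not a real PCEF/TDF API.

```python
# Sketch of per-charging-key termination actions (clause 6.1.8): when the OCS
# grants no more credit for a charging key, the PCEF/TDF applies the action
# the OCS provided for that key over Gy, falling back to the pre-configured
# default. The dict-based store and names are illustrative.
ALLOW, DROP, REDIRECT = "allow", "drop", "redirect"

class TerminationActions:
    def __init__(self, default_action=ALLOW):
        self.default_action = default_action  # operator pre-configured default
        self.per_charging_key = {}

    def set_action(self, charging_key, action):
        # The OCS may provide or overwrite an action per charging key (Gy).
        self.per_charging_key[charging_key] = action

    def action_for(self, charging_key):
        # Applied until the key's PCC/ADC rules or the bearer are removed.
        return self.per_charging_key.get(charging_key, self.default_action)
```

Because the OCS must supply the action before denying credit, a lookup for a key it never configured simply yields the default action.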
6.1.9 Handling of packet filters provided to the UE by PCEF/BBERF
The network shall ensure that the traffic mapping information negotiated with the UE reflects the bearer binding of PCC/QoS rules, except for those extending the inspection beyond what can be signalled to the UE. The PCC/QoS rules may restrict what traffic is allowed compared to what is explicitly negotiated with the UE. The PCRF may, per service data flow filter, indicate that the PCEF/BBERF is required to explicitly signal the corresponding traffic mapping information to the UE, e.g. for the purpose of IMS precondition handling at the UE. In the absence of that indication, it is a PCEF/BBERF decision whether to signal traffic mapping information that is redundant from a traffic mapping point of view.

NOTE 1: A new/modified PCC/QoS rule can cause previously redundant, and therefore omitted, traffic mapping information to cease being redundant, causing the PCEF/BBERF to signal the corresponding traffic mapping information to the UE.
NOTE 2: In order to signal a specific traffic mapping to a PDP context/EPS bearer without any previous TFT, if the operator policy is to continue allowing previously allowed traffic on that bearer, TFT filters that correspond to the previous traffic mapping need to be introduced as well.
NOTE 3: The PCEF/BBERF can use all SDF filters for the generation of traffic mapping information. However, if the number of SDF filters for an IP-CAN bearer exceeds the maximum number of filters that may be signalled to the UE (e.g. as specified in TS 24.008 [50]), another bearer needs to be established, and a rebinding of PCC rules to bearers (by the PCEF/BBERF) or even the splitting of the SDF template into two or more PCC rules (by the PCRF) may be required.

The traffic mapping information (e.g. TFT filters for GPRS and EPS) that the network provides to the UE shall include the same content as the corresponding SDF filters in the SDF template received over the Gx/Gxx interface. The representation/format of the packet filters provided by the network to the UE is access-system dependent, may vary between accesses and may also be different from the representation/format of the SDF filters in the SDF template on the Gx/Gxx interface.

NOTE 4: After handover from one access-system to another, if the UE needs to determine the QoS provided in the target access to the pre-existing IP flows in the source access, the UE can perform packet filter comparison between the packet filters negotiated in the old access and those provided by the target access during QoS resource activation.
NOTE 5: If UE initiated procedures are supported and handover between access systems is to be supported, the content of the packet filters provided on the Gx/Gxx interface by the PCRF is restricted to the packet filter fields that all the accesses can provide to the UE.

In case traffic mapping information is required for a dedicated bearer and in the PCC/QoS rules corresponding to the bearer there is no SDF filter for the uplink direction having an indication to signal corresponding traffic mapping information to the UE, the PCEF/BBERF derives traffic mapping information based on implementation specific logic (e.g. traffic mapping information that effectively disallows any useful packet flows in the uplink direction, as described in clause 15.3.3.4 of TS 23.060 [12]) and provides it to the UE.

NOTE 6: For GPRS and EPS, the state of TFT packet filters, as defined in TS 23.060 [12], for an IP-CAN session requires that there is at most one bearer with no TFT packet filter for the uplink direction.
NOTE 7: This PCEF behaviour also covers the case where a PCC rule with an application identifier is the only PCC rule that is bound to a dedicated bearer.
NOTE 8: For a default bearer, the PCEF/BBERF will not add, on its own, traffic mapping information that effectively disallows any useful packet flows in the uplink direction. Traffic mapping information is only generated from SDF filters which have an indication to signal corresponding traffic mapping information to the UE.
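The filter-selection behaviour described in this clause can be sketched as follows, assuming a hypothetical per-bearer filter limit (TS 24.008 bounds the number of packet filters that can be signalled in a TFT; the constant and field names below are illustrative).

```python
# Sketch of clause 6.1.9 filter selection: the PCEF/BBERF signals TFT filters
# to the UE for every SDF filter the PCRF flagged as requiring explicit
# signalling; unflagged filters that are redundant from a traffic-mapping
# point of view may be omitted. If the result exceeds what can be signalled,
# another bearer / rule rebinding is needed (NOTE 3). Names are illustrative.
MAX_TFT_FILTERS_PER_BEARER = 16  # assumed limit, in the spirit of TS 24.008

def filters_to_signal(sdf_filters):
    """sdf_filters: list of (filter_id, signalling_required, redundant)."""
    selected = [f_id for f_id, required, redundant in sdf_filters
                if required or not redundant]
    if len(selected) > MAX_TFT_FILTERS_PER_BEARER:
        # Would require another bearer and a rebinding of PCC rules (NOTE 3).
        raise ValueError("TFT filter limit exceeded; rebind rules to bearers")
    return selected
```

Note that a filter flagged as requiring explicit signalling is always included, even if it is redundant for traffic mapping purposes.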
6.1.10 IMS Emergency Session Support
6.1.10.1 Architecture model and Reference points
Emergency bearer services (i.e. IP-CAN session for the IMS emergency services) are provided by the serving network to support IMS emergency when the network is configured to support emergency services. Emergency services are network services provided through an Emergency APN and may not require a subscription depending on operator policies and local regulatory requirements. For emergency services the architecture for the non-roaming case as described in clause 5.1 is the only applicable architecture model. For emergency services, the Sp reference point does not apply. Emergency services are handled locally in the serving network. Therefore the S9 reference point does not apply.
6.1.10.2 PCC Rule Authorization and QoS rule generation
The PCC Rule Authorization and QoS Rule generation function selects QoS parameters that allow prioritization of IMS Emergency sessions. If an IMS Emergency session is prioritized the QoS parameters shall contain an ARP value that is reserved for intra-operator use of IMS Emergency services.
6.1.10.3 Functional Entities
6.1.10.3.1 PCRF
The PCRF shall determine, based on the PDN-id, if an IP-CAN session concerns an IMS emergency session. For an IP-CAN session serving an IMS emergency session, the PCRF makes authorization and policy decisions that restrict the traffic to emergency destinations, IMS signalling and the traffic to retrieve user location information (in the user plane) for emergency services. An IP-CAN session serving an IMS emergency session shall not serve any other service and shall not be converted to/from any IP-CAN session serving other services.

If the UE IP address belongs to an emergency APN, the PCRF does not perform a subscription check; instead it utilizes the locally configured operator policies to make authorization and policy decisions. For IMS, it shall be possible for the PCRF to verify that the IMS service information is associated with a UE IP address belonging to an emergency APN. If the IMS service information does not contain an emergency related indication and the UE IP address is associated with an emergency APN, the PCRF shall reject the IMS service information provided by the P‑CSCF (and thus trigger the release of the associated IMS session), see TS 23.167 [21].

The PCRF performs according to existing procedures:
- If IMS service information containing an emergency related indication is received from the P‑CSCF with a UE IP address associated with an Emergency APN, the PCRF initiates an IP-CAN Session Modification Request for the IP‑CAN session serving the IMS session to the PCEF, to provide PCC Rule(s) that authorize the media flow(s).
- At reception of an indication from the P-CSCF that the IMS emergency session is released, the PCRF removes the PCC rule(s) for that IMS session with an IP‑CAN Session Modification Request.
In addition, upon Rx session establishment the PCRF shall provide the IMEI(SV) (if available) and the EPC-level subscriber identifiers (IMSI, MSISDN) (if available), received from the PCEF at IP-CAN session establishment, if so requested by the P-CSCF.
6.1.10.3.2 PCEF
The PCEF initiates the IP‑CAN Session Termination if the last PCC rule for this IP‑CAN session is removed, according to existing procedures. In addition, at reception of an IP‑CAN Session Modification Request triggered by the PCRF for an IP‑CAN session serving an IMS emergency session that removes all PCC rules with a QCI other than the default bearer QCI and the QCI used for IMS signalling, the PCEF shall start a configurable inactivity timer (e.g. to enable a PSAP callback session). When the configured period of time expires, the PCEF shall initiate an IP‑CAN Session Termination Request for the IP‑CAN session serving the IMS emergency session. If a PCRF-initiated IP‑CAN Session Modification Request is received that provides new PCC rule(s) with a QCI other than the default bearer QCI and the QCI used for IMS signalling, the PCEF shall cancel the inactivity timer.
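The inactivity-timer behaviour above can be sketched as a small state machine. The class and method names are hypothetical; a real PCEF would drive this from actual PCC rule events and platform timers rather than an explicit clock argument.

```python
# Sketch of the PCEF inactivity handling for an IP-CAN session serving an IMS
# emergency session (clause 6.1.10.3.2): start a configurable timer when only
# default-bearer-QCI and IMS-signalling-QCI rules remain (to allow a PSAP
# callback), cancel it when media rules are installed again, and terminate
# the session on expiry. Names are illustrative.
class EmergencySessionPcef:
    def __init__(self, inactivity_limit_s: float):
        self.inactivity_limit_s = inactivity_limit_s
        self.timer_started_at = None   # None => inactivity timer not running
        self.terminated = False

    def on_all_media_rules_removed(self, now_s: float):
        # Only signalling / default-bearer QCI rules remain: start the timer.
        self.timer_started_at = now_s

    def on_media_rules_installed(self):
        # New PCC rule(s) with other QCIs provided: cancel the timer.
        self.timer_started_at = None

    def tick(self, now_s: float):
        # On expiry, initiate the IP-CAN Session Termination Request.
        if (self.timer_started_at is not None
                and now_s - self.timer_started_at >= self.inactivity_limit_s):
            self.terminated = True
```

The key property is that installing media rules at any point before expiry cancels the pending termination.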
6.1.10.3.3 P-CSCF
The P‑CSCF performs according to existing procedures:
- At reception of an indication that an IMS emergency session is established, the P‑CSCF sends IMS service information to the PCRF.
- At reception of an indication that an IMS emergency session is released, the P‑CSCF interacts with the PCRF to revoke the IMS service information.

In addition, the P‑CSCF shall include an emergency related indication when providing IMS service information to the PCRF; see TS 23.167 [21]. Moreover, the P-CSCF, upon Rx session establishment, may request the PCRF to provide the IMEI(SV) and the EPC-level subscriber identifiers (IMSI, MSISDN) corresponding to the Rx session.

NOTE: The IMEI(SV) and the EPC-level subscriber identifiers (IMSI, MSISDN) can be used to support authentication of roaming users in deployments with no IMS-level roaming interfaces, or to support PSAP callback functionality for anonymous IMS emergency sessions, as described in TS 23.167 [21].
6.1.10.4 PCC Procedures and Flows
At an Indication of IP-CAN Session Establishment that includes a PDN-id identifying an Emergency APN, the PCRF ignores subscription information from the SPR. The PCRF uses locally configured operator policies to make authorization and policy decisions. At Indication of IP-CAN Session Establishment and Gateway Control Session Establishment, the user identity (e.g. IMSI) may not be available, or cannot be authenticated. In this case, the IMEI shall be used to identify the UE. An IP-CAN session for an emergency service shall be restricted to the destination address(es) associated with the emergency service only.

6.1.10a Restricted Local Operator Services Support

Restricted Local Operator Services (i.e. an IP-CAN session for the Restricted Local Operator Services, as specified in TS 23.221 [55]) are provided by the serving network when the network is configured to support Restricted Local Operator Services. The PCC handling of Restricted Local Operator Services is very similar to that of emergency services as specified in clause 6.1.10, with the following differences:
- RLOS APN and IMS RLOS session are used for Restricted Local Operator Services.
- Architecture model and Reference points (clause 6.1.10.1): Restricted Local Operator Services do not require a subscription.
- PCC Rule Authorization and QoS rule generation (clause 6.1.10.2): Restricted Local Operator Services are not prioritized services and the ARP can be determined based on operator policy.
- Functional Entity: PCRF (clause 6.1.10.3.1): The PCRF shall determine, based on the RLOS APN, if an IP-CAN session relates to an IMS RLOS session.
- Functional Entity: PCEF (clause 6.1.10.3.2): The duration of a PDN connection for RLOS is controlled through local policies in the PCEF. Handling of the inactivity timer for the emergency PDN connection is not applicable for RLOS.
- Functional Entity: P-CSCF (clause 6.1.10.3.3): An indication of IMS RLOS session is used.
- PCC Procedures and Flows (clause 6.1.10.4): The PDN-id identifies an RLOS APN.
6.1.11 Multimedia Priority Service Support
6.1.11.1 Architecture model and Reference points
Subscription data for MPS is provided to PCC through the Sp reference point. To support MPS service, the PCRF shall subscribe to changes in the MPS subscription data for Priority EPS Bearer Service. Dynamic invocation for MPS is provided from an AF, using the Priority indicator, over Rx. Dynamic invocation for Data Transport Service is provided by sending an MPS for Data Transport Service request to the PCRF over Rx. Dynamic invocation for MPS for Messaging is provided by sending an MPS for Messaging request to the PCRF over Rx.
6.1.11.2 PCC rule authorization and QoS rule generation
For MPS service, the PCRF shall generate the corresponding PCC/QoS rule(s) with the ARP/QCI parameters as appropriate for the prioritized service. For non-MPS service, the PCRF shall generate the corresponding PCC/QoS rule(s) as per normal procedures, without considering whether the MPS Priority EPS Bearer Service is active or not, but shall upgrade the ARP/QCI values to those suitable for MPS when the Priority EPS Bearer Service is invoked. When the Priority EPS Bearer Service is revoked, the PCRF shall change the ARP/QCI values modified for the Priority EPS Bearer Service back to appropriate values.

NOTE 1: The above statements for the Priority EPS Bearer Service are also applicable for the MPS for Data Transport Service and for the MPS for Messaging Service.

Whenever one or more AF sessions of an MPS service are active within the same PDN connection, the PCRF shall ensure that the ARP priority level of the default bearer is at least as high as the highest ARP priority level used by any authorized PCC rules belonging to an MPS service. If the ARP pre-emption capability is enabled for any of the authorized PCC rules belonging to an MPS service, the PCRF shall also enable the ARP pre-emption capability for the default bearer.

NOTE 2: This ensures that services using dedicated bearers are not terminated because of a default bearer with a lower ARP priority level or disabled ARP pre-emption capability being dropped during mobility events.
NOTE 3: This PCRF capability does not cover interactions with services other than MPS services.
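The default-bearer ARP alignment rule above can be sketched as a pure function. The tuple representation of ARP as (priority level, pre-emption capability) is an illustrative simplification; the pre-emption vulnerability is omitted because the rule does not constrain it.

```python
# Sketch of the clause 6.1.11.2 rule: while MPS AF sessions are active in a
# PDN connection, the default bearer's ARP priority level must be at least as
# high (numerically as low) as the highest level used by any MPS PCC rule,
# and its pre-emption capability must be enabled if any MPS rule has it
# enabled. Tuple layout (priority_level, preemption_capable) is illustrative.
def align_default_bearer_arp(default_arp, mps_rule_arps):
    """Return the (priority_level, preemption_capable) the default bearer needs."""
    level, capable = default_arp
    for rule_level, rule_capable in mps_rule_arps:
        level = min(level, rule_level)   # lower value = higher priority
        capable = capable or rule_capable
    return (level, capable)
```

For example, a default bearer at level 9 without pre-emption capability would be upgraded to level 2 with pre-emption capability if an MPS rule uses ARP level 2 with that capability enabled.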
6.1.11.3 Priority EPS Bearer Service
The MPS Priority EPS Bearer Service targets the ARP and/or QCI of bearer(s), enabling the prioritization of all traffic on the same bearer.

The PCRF shall, at the activation of the Priority EPS Bearer Service:
- modify the ARP of the default bearer as appropriate for the Priority EPS Bearer Service, under consideration of the requirement described in clause 6.1.11.2; and
- if modification of the QCI of the default bearer is required, modify the QCI of the default bearer as appropriate for the Priority EPS Bearer Service; and
- modify the ARP of PCC/QoS Rules installed before the activation of the Priority EPS Bearer Service to the ARP as appropriate for the Priority EPS Bearer Service, under consideration of the requirement described in clause 6.1.11.2; and
- if modification of the QCI of the PCC/QoS Rules is required, modify the QCI of the PCC/QoS Rules installed before the activation of the Priority EPS Bearer Service to the QCI as appropriate for the Priority EPS Bearer Service.

The PCRF shall, at the deactivation of the Priority EPS Bearer Service:
- modify the ARP of the default bearer to an appropriate value according to PCRF decision, under consideration of the requirement described in clause 6.1.11.2; and
- if modification of the QCI of the default bearer is required, modify the QCI of the default bearer to an appropriate value according to PCRF decision; and
- for PCC/QoS rules modified due to the activation of the Priority EPS Bearer Service:
  - modify the ARP to an appropriate value according to PCRF decision, under consideration of the requirement described in clause 6.1.11.2; and
  - if modification of the QCI of the PCC/QoS Rules is required, modify the QCI to an appropriate value according to PCRF decision.
6.1.11.4 Bearer priority for IMS Multimedia Priority Services
In addition to the mechanism specified in clause 6.1.11.2, IMS Multimedia Priority Services may require an upgrade of the dedicated IM CN signalling bearer and the default bearer; e.g. in order to mitigate IP-CAN session termination due to resource limitation at a location change, the default bearer and the dedicated IM CN signalling bearer may need an upgraded ARP.

At reception of the indication that the IMS Signalling Priority is set for the IP-CAN session, or at reception of a service authorization from the P-CSCF (AF) including an MPS session indication and the service priority level, the PCRF shall, under consideration of the requirement described in clause 6.1.11.2:

-	modify the ARP of the default bearer as appropriate for the IMS Multimedia Priority Service; and

-	if an upgrade of the dedicated IM CN signalling bearer is required, modify the ARP in all the PCC/QoS rules that describe the IM CN signalling traffic to the value appropriate for IMS Multimedia Priority Services.

When the PCRF detects that the P-CSCF (AF) has released all the MPS sessions and the IMS Signalling Priority is not set for the IP-CAN session, the PCRF shall, under consideration of the requirement described in clause 6.1.11.2:

-	modify the ARP of the default bearer to an appropriate value according to PCRF decision; and

-	modify the ARP in all PCC/QoS rules that describe the IM CN signalling traffic to an appropriate value according to PCRF decision.
6.1.11.5 Bearer priority for MPS for Data Transport Service
MPS for Data Transport Service enables the prioritization of all traffic on the default bearer and other bearers upon AF request. The QoS modification to the default bearer and other bearers is done based on operator policy and regulatory rules by means of local PCRF configuration.

NOTE 1:	If no configuration is provided, MPS for Data Transport Service applies only to the default bearer.

Upon receipt of an MPS for Data Transport Service invocation/revocation request from the UE, the AF or the PCRF authorizes the request. If the UE has an MPS subscription, MPS for Data Transport Service is authorized by the AF or the PCRF, based on AF decision. If the Service User is using a UE that does not have an MPS subscription, the AF authorizes MPS for Data Transport Service.

-	In the case that the AF authorizes the MPS for Data Transport Service request, after successful authorization the AF sends the MPS for Data Transport Service request to the PCRF over Rx for QoS modifications, including an indication that PCRF authorization is not needed. In this case, the PCRF shall not perform a subscription check for MPS for Data Transport Service requests. The AF also indicates to the PCRF whether the request is for invoking or revoking MPS for Data Transport Service.

-	In the case that the AF does not authorize the MPS for Data Transport Service request, the AF sends the request over Rx to the PCRF for authorization and QoS modifications, including an indication that PCRF authorization is needed. In this case, the PCRF shall perform an MPS subscription check for the MPS for Data Transport Service request. The AF also indicates whether the request is for invoking or revoking MPS for Data Transport Service. The PCRF informs the AF when the UE does not have an MPS subscription associated with the request.
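The authorization split described in the two cases above can be sketched as follows. The function name and boolean parameters are illustrative assumptions; the key point is that an AF-authorized request carries an indication that makes the PCRF skip its subscription check.

```python
def authorize_dts_request(authorized_by_af, has_mps_subscription):
    """Authorization decision for an MPS for Data Transport Service
    invocation/revocation request received by the PCRF over Rx.

    - If the AF already authorized the request (indication: PCRF authorization
      not needed), the PCRF performs no subscription check.
    - Otherwise the PCRF performs the MPS subscription check and informs the
      AF when the UE has no associated MPS subscription."""
    if authorized_by_af:
        return True  # PCRF skips the subscription check
    return has_mps_subscription  # PCRF-side MPS subscription check
```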
After successful authorization by either the AF or the PCRF as described above, the PCRF shall, at the activation of MPS for Data Transport Service over Rx, perform the same steps for QoS modifications as described in clause 6.1.11.3 for the activation of the Priority EPS Bearer Service.

NOTE 2:	To keep the PCC rules bound to the default bearer, the PCRF can either modify the ARP/QCI of these PCC rules accordingly or set the Bind to Default Bearer PCC rule attribute.

The PCRF shall inform the AF of the success or failure of the MPS for Data Transport Service invocation/revocation request.

The PCRF shall, at the deactivation of MPS for Data Transport Service over Rx, perform the same steps as described in clause 6.1.11.3 for the deactivation of the Priority EPS Bearer Service. If the bearers are deactivated for reasons other than an AF request, the PCRF shall notify the AF by terminating the Rx session.

The AF may also request an SDF for priority signalling between the UE and the AF, where the AF includes the Priority indicator over Rx, in order to enable the PCRF to set appropriate QoS values for the signalling bearer.