Embodiments of the present disclosure disclose coordination between a Master Cell Group (MCG) and a Secondary Cell Group (SCG). The method comprises transmitting, during a dual connectivity setup procedure for a User Equipment (UE), a Layer 1/Layer 2 Triggered Mobility (LTM) indicator indicating a current configuration state of LTM in a Master Node (MN) to a Secondary Node (SN), through a Secondary Node (S-Node) addition request, to activate a Dual Connectivity (DC) configuration in the UE; and, upon activating the DC configuration in the UE, performing one of transmitting an updated configuration state of LTM in the MN through a Next-Generation Radio Access Network (NG-RAN) node configuration update upon occurrence of an event, or receiving an updated configuration state of LTM in the SN through the NG-RAN node configuration update upon occurrence of the event.
Disclosed herein is an apparatus configured to receive, at a target centralized unit control plane (gNB-CU-CP), a request indicative of a Layer 1/Layer 2 Triggered Mobility (LTM) target cell preparation for a user equipment (UE) from a serving gNB-CU-CP. Further, the apparatus is configured to prepare one or more target centralized unit user planes (gNB-CU-UPs) for the LTM to a target cell and prepare one or more target distributed units (gNB-DUs) after preparing the one or more target gNB-CU-UPs. Further, the apparatus is configured to detect, at a target gNB-DU, a successful LTM cell switch to the target cell, transmit, after detecting the successful LTM cell switch, an LTM cell switch notification message from the target gNB-DU to the one or more target gNB-CU-UPs, and trigger, at the one or more target gNB-CU-UPs based on the received LTM cell switch notification message, downlink data transmission to the UE.
Radio nodes are monitored to obtain radio node coverage information and to obtain radio node status information. Based on the monitoring, coverage information and handover information for the radio nodes are collected, and transport traffic information and subscriber count information for the radio nodes are collected. Based on the radio node coverage information, node clustering is performed to identify one or more batches of source nodes to update based on a compensation capacity of neighbor nodes to the source nodes. Based on the one or more batches of source nodes identified from the node clustering, and on the radio node status information, a scheduling operation is performed to choose a time-slot for executing an update to at least one of the one or more batches of source nodes.
H04W 36/32 - Reselection being triggered by specific parameters by location or mobility data, e.g. speed data
A master node list is filtered into an isolated nodes grouping and a filtered nodes grouping based on a single-coverage threshold, where a node is a network node configured to create, receive, or transmit information. For each node in the filtered nodes grouping, a priority-based sequence of compensating neighbor nodes is generated, with compensating neighbor nodes prioritized in terms of compensating capacity. A collective neighbor compensation performance is determined based on the overall compensation provided by the compensating neighbor nodes for a given node that is to be shut down. Each node from the master node list is distributed into one or more batches based on batch criteria.
A neighbor node compensation estimator and method. A list of performance parameters is received. The list of parameters is processed to determine a Collective Coverage Ratio by Neighbor Nodes, an Average Handover Success (HOS) Ratio, and a Handover Attempt Ratio. In response to the Collective Coverage Ratio, the Handover Success (HOS) Ratio, and the Handover Attempt Ratio, a Collective Neighbor Node Compensation by the Neighbor Nodes is determined.
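For illustration only, the following sketch shows one way the three ratios named in the preceding abstract could be combined into a single compensation figure; the equal weighting, data structure, and field names are assumptions of this example, not the estimator's actual formula.

```python
# Illustrative combination of coverage and handover ratios into a single
# compensation score for a set of neighbor nodes (assumed structures only).
from dataclasses import dataclass

@dataclass
class NeighborStats:
    coverage_ratio: float     # fraction of the source cell covered by this neighbor
    ho_success_ratio: float   # handovers succeeded / handovers attempted
    ho_attempt_ratio: float   # attempts toward this neighbor / total attempts

def collective_compensation(neighbors: list[NeighborStats]) -> float:
    """Estimate how well the neighbors collectively compensate a shut-down node."""
    if not neighbors:
        return 0.0
    collective_coverage = min(1.0, sum(n.coverage_ratio for n in neighbors))
    avg_hos = sum(n.ho_success_ratio for n in neighbors) / len(neighbors)
    attempt_weighted = sum(n.ho_attempt_ratio * n.ho_success_ratio for n in neighbors)
    # Blend the three indicators; equal weights are a placeholder choice.
    return (collective_coverage + avg_hos + attempt_weighted) / 3.0

print(collective_compensation([NeighborStats(0.5, 0.98, 0.6),
                               NeighborStats(0.4, 0.95, 0.4)]))
```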
An aspect of this description relates to a method including receiving a list of radio nodes usable in a wireless network. Each of the radio nodes has a corresponding coverage area. The method includes selecting a first radio node from the list of radio nodes. The method includes creating a first batch in which one or more radio nodes are to be assigned. The method includes assigning the first radio node to the first batch. The method includes selecting a second radio node. The method includes determining whether the corresponding coverage area of the second radio node overlaps with the corresponding coverage area of the first radio node. The method includes assigning the second radio node to the first batch in response to a determination that the corresponding coverage area of the second radio node does not overlap with the corresponding coverage area of the first radio node.
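For illustration only, the following sketch captures the non-overlap batching rule described in the preceding abstract; coverage areas are modelled as circles, and all names are assumptions of this example rather than the claimed method.

```python
# Assign radio nodes to a batch only when their coverage areas do not overlap
# with nodes already in that batch; overlapping nodes start a new batch.
import math
from dataclasses import dataclass

@dataclass
class RadioNode:
    name: str
    x: float
    y: float
    radius: float

def overlaps(a: RadioNode, b: RadioNode) -> bool:
    return math.hypot(a.x - b.x, a.y - b.y) < (a.radius + b.radius)

def build_batches(nodes: list[RadioNode]) -> list[list[RadioNode]]:
    batches: list[list[RadioNode]] = []
    for node in nodes:
        for batch in batches:
            if not any(overlaps(node, member) for member in batch):
                batch.append(node)
                break
        else:
            batches.append([node])   # overlaps every existing batch: start a new one
    return batches
```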
A highest priority batch is selected, based on a hotspot count, from a list of unassigned batches, wherein each batch includes one or more nodes in a telecommunication network. A highest priority timeslot is selected, based on batch traffic and batch connected subscriber count, for the highest priority batch. The highest priority timeslot is assigned to the highest priority batch. Implementation of an action for each of the one or more nodes in the highest priority batch during the highest priority timeslot is automatically scheduled.
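For illustration only, the following sketch shows a possible realization of the two selection steps described above: a batch chosen by hotspot count and a timeslot chosen by combined traffic and subscriber count. The data structures and selection rules are assumptions of this example.

```python
# Pick the highest-priority batch (largest hotspot count) and the
# highest-priority timeslot (lowest combined traffic and subscriber count).
def schedule(batches, timeslot_stats):
    """batches: list of dicts with 'id' and 'hotspot_count';
    timeslot_stats: dict mapping timeslot -> (traffic, subscriber_count)."""
    batch = max(batches, key=lambda b: b["hotspot_count"])
    timeslot = min(timeslot_stats, key=lambda t: sum(timeslot_stats[t]))
    return {"batch": batch["id"], "timeslot": timeslot}

plan = schedule(
    [{"id": "B1", "hotspot_count": 3}, {"id": "B2", "hotspot_count": 7}],
    {"01:00": (120.0, 40), "03:00": (35.0, 12)},
)
print(plan)   # {'batch': 'B2', 'timeslot': '03:00'}
```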
Neighbor Nodes of a Source Node to be upgraded are sequenced by a Neighbor Node Sequencer. A first Neighbor Node sequencing method or a second Neighbor Node sequencing method is applied by the Neighbor Node Sequencer. In applying a first Neighbor Node sequencing method, Neighbor Nodes are iteratively sorted using a first formula until a calculated Gain in Collective Coverage is not greater than a predetermined Coverage Gain Threshold. A second formula is applied based on Machine Learning. In applying a second Neighbor Node sequencing method, Neighbor Nodes are sorted using the second formula based on the Machine Learning. Neighbor Nodes are sorted using a variant of the first formula.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
A device profile provisioning system is provided. A Subscription Management Data Preparation (SM-DP+) server receives a request to download a profile package to an embedded Universal Integrated Circuit Card (eUICC) of a user equipment (UE). The SM-DP+ server sends the profile package to the eUICC. The SM-DP+ server determines whether the profile package is in an error state. The SM-DP+ server resets the profile package to a released state, available for download, in response to determining the error state. The SM-DP+ server sends a notification to the end user informing the end user that the profile package is available to download.
H04W 8/18 - Processing of user or subscriber data, e.g. subscribed services, user preferences or user profiles; Transfer of user or subscriber data
10.
KEY PERFORMANCE INDICATOR (KPI) NORMALIZATION FOR A SMART SERVICE ANALYZER
A Key Performance Indicator (KPI) Normalizer for a Smart Service Analyzer. User Level Qualitative Key Performance Indicators (KPIs) and User Level Quantitative KPIs are received by a KPI Normalizer. The User Level Qualitative KPIs are provided to a Multi Scale Normalizer. The User Level Qualitative KPIs are normalized using the Multi Scale Normalizer based on KPI Performance Thresholds associated with the User Level Qualitative KPIs to produce Normalized Qualitative KPIs. User Level Quantitative KPIs are provided to a Trend Deviation Based KPI Normalizer. The User Level Quantitative KPIs are normalized using the Trend Deviation Based KPI Normalizer to produce Normalized Quantitative KPIs based on a Trend Update.
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 41/5025 - Ensuring fulfilment of the service level agreement by proactively reacting to service quality changes, e.g. by reconfiguration after service quality degradation or upgrade
11.
NORMALIZING A TREND DEVIATION-BASED QUANTITATIVE KEY PERFORMANCE INDICATOR (KPI)
A Trend Deviation-Based Quantitative Key Performance Indicator (KPI) Normalizer and method for a Smart Service Analyzer. The Trend Deviation-Based Quantitative KPI Normalizer receives User Level Quantitative Key Performance Indicators (KPIs) at a Cluster Based Aggregator. Cluster-Based Aggregation of the User Level Quantitative KPIs is performed to generate Network Level Quantitative KPIs. A forecasted trend is determined based on the Network Level Quantitative KPIs. The User Level Quantitative KPIs are processed by comparing against a forecasted Network Level Quantitative KPI to generate Normalized Quantitative KPIs at the User Level based on the forecasted trend, wherein the Normalized Quantitative KPIs at the User Level include a deviation from the forecasted trend.
H04L 41/5009 - Determination of service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence
12.
CUSTOMER EXPERIENCE INDEX ESTIMATOR FOR A SMART SERVICE ANALYZER
A method includes normalizing one or more key performance indicators (KPIs), where the one or more KPIs include user-level quantitative KPIs or user-level qualitative KPIs; determining whether weights are available for each normalized KPI; and in response to the weights being available for each normalized KPI, converting, based on the weights for each normalized KPI, each normalized KPI to a customer experience index (CEI) for each user and for each network service. The method further includes determining whether a trend shift has occurred based on the CEI; and automatically generating an alert in response to a determination that the trend shift occurred.
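For illustration only, the following sketch shows a weighted conversion of normalized KPIs to a CEI and a simple trend-shift check; the weights, KPI names, and tolerance are assumptions of this example.

```python
# Convert normalized per-user KPIs to a Customer Experience Index (CEI) with
# per-KPI weights, then flag a trend shift against a moving baseline.
def to_cei(normalized_kpis: dict[str, float], weights: dict[str, float]) -> float:
    total_weight = sum(weights[k] for k in normalized_kpis)
    return sum(normalized_kpis[k] * weights[k] for k in normalized_kpis) / total_weight

def trend_shift(history: list[float], current: float, tolerance: float = 0.1) -> bool:
    baseline = sum(history) / len(history)
    return abs(current - baseline) > tolerance * baseline

cei = to_cei({"throughput": 0.8, "latency": 0.6}, {"throughput": 0.7, "latency": 0.3})
if trend_shift([0.78, 0.80, 0.79], cei):
    print("ALERT: CEI trend shift detected")
```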
A Smart Service Analyzer obtains Call Detail Record (CDR) Data from Probing Devices of a mobile network. The CDR Data is processed at a Key Performance Indicators (KPI) Generator to generate User Level KPIs. The User Level KPIs are provided to a Customer Experience Index (CEI) Estimator. Machine Learning is applied to the User Level KPIs at the CEI Estimator to generate Generalized User Level CEI Estimates. The Generalized User Level CEI Estimates are provided to a Service Quality Index (SQI) Estimator. Machine Learning is applied to the Generalized User Level CEI Estimates at the SQI Estimator to generate Network Level SQI Estimates.
H04L 41/5009 - Determination of service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04M 3/22 - Arrangements for supervision, monitoring or testing
Usage of a storage volume is monitored, including monitoring a write frequency and an overwrite frequency for the storage volume. The overwrite frequency may be obtained from garbage collection data for the storage volume. The write frequency and overwrite frequency may be used to obtain a growth rate and predict future usage of the storage volume. Where future usage indicates that expansion of storage allocated to the storage volume is needed, affinity requirements, anti-affinity requirements, and rebalancing reduction are evaluated with respect to the expansion. If expansion satisfies these constraints, the storage volume is locally expanded. Otherwise, the storage volume is relocated to a different storage device.
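For illustration only, the following sketch shows the kind of growth projection and expand-or-relocate decision described above; the net-growth formula, the constraint stub, and the units are assumptions of this example.

```python
# Estimate net growth from write and overwrite frequencies, project future
# usage, and either expand the volume locally or mark it for relocation.
def net_growth_rate(write_per_day: float, overwrite_per_day: float) -> float:
    """Overwrites (reported by garbage collection) do not add to net usage."""
    return max(0.0, write_per_day - overwrite_per_day)

def plan_expansion(used, allocated, write_rate, overwrite_rate, horizon_days,
                   constraints_satisfied) -> str:
    projected = used + net_growth_rate(write_rate, overwrite_rate) * horizon_days
    if projected <= allocated:
        return "no-action"
    # Affinity, anti-affinity, and rebalancing checks are stubbed out here.
    return "expand-locally" if constraints_satisfied() else "relocate"

# Example: 10 GiB used of 12 GiB allocated, growing ~0.5 GiB/day net, 7-day horizon.
print(plan_expansion(10, 12, 0.7, 0.2, 7, constraints_satisfied=lambda: True))
```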
A logical host, e.g., pod, receives a request to instantiate a container with allocation of a fraction of a CPU. In response, a hook within the logical host is invoked that modifies the request to remove the request for allocation of a fraction of a CPU. The logical host then invokes a container runtime interface to instantiate the container. The container may execute on a best-effort set of CPUs or may be bound to one or more dedicated shared CPUs.
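For illustration only, the following sketch shows a hook-style modification that removes a fractional CPU request from a container specification before the container runtime interface is called; the spec layout mirrors a Kubernetes-style resource block but is only an assumed shape for this example.

```python
# Strip a fractional CPU request from a container spec so the container falls
# back to best-effort or shared CPUs (assumed spec layout for illustration).
def is_fractional(cpu) -> bool:
    if isinstance(cpu, str) and cpu.endswith("m"):
        return int(cpu[:-1]) < 1000          # e.g. "500m" -> 0.5 CPU
    return float(cpu) < 1

def strip_fractional_cpu(container_spec: dict) -> dict:
    requests = container_spec.get("resources", {}).get("requests", {})
    if "cpu" in requests and is_fractional(requests["cpu"]):
        del requests["cpu"]   # container no longer requests a fraction of a CPU
    return container_spec

spec = {"resources": {"requests": {"cpu": "250m", "memory": "256Mi"}}}
print(strip_fractional_cpu(spec))   # cpu request removed, memory kept
```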
Systems and methods for multi-cluster worker management for speed and proximity use cases. A method includes providing a plurality of tasks to a priority-based backlog queue and provisioning each of the plurality of tasks to one of a plurality of clusters. Provisioning each of the plurality of tasks comprises provisioning based on a proximity-based allocation process. The proximity-based allocation process includes identifying a network element location associated with each of the plurality of tasks, identifying a geographic location for each of the plurality of clusters, and prioritizing a nearest cluster of the plurality of clusters.
Disclosed herein is a system configured to receive one of a mobility configuration indication and a mobility configuration request, from a Secondary Node (SN) Control Unit (CU), to determine one or more candidate Primary cell (Pcell) configuration modifications. The one or more candidate Pcell configuration modifications indicate one or more modifications to at least one candidate Primary Secondary cell (PScell) corresponding to a Pcell change of a User Equipment (UE). The system is configured to determine the one or more candidate Pcell configuration modifications based on an overlap of one or more candidate Pcells of a Master Node (MN) CU with at least one candidate PScell of the SN CU. The system is configured to transmit the one or more candidate Pcell configuration modifications to the SN CU for generating PScell configuration of the at least one candidate PScell based on the one or more candidate Pcell configuration modifications for facilitating mobility of the UE from a serving PScell to the at least one candidate PScell.
Embodiments of the disclosure describe an apparatus (1000) related to dual mode Open Radio Access Network Radio Unit (O-RU) operations for energy efficiency and resource sharing. The apparatus (1000) is configured to obtain at least one of one or more policies, a configuration, and a command associated with a mode of operation of the O-RU (e.g., 304 and/or 704) connected with the apparatus (1000). The mode of operation includes the NES mode and the ULPI mode. The apparatus (1000) is configured to determine whether to switch the mode of operation of the O-RU (e.g., 304 and/or 704) based on the obtained at least one of the one or more policies, the configuration, and the command. The apparatus (1000) is configured to transmit a switching command, to the O-RU (e.g., 304 and/or 704), to switch the mode of operation based on the determination.
H04W 28/16 - Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using several independent spaced-apart antennas at the transmitting station
Systems and methods for efficient batch upgrading of compute nodes within a network computing platform. A method includes identifying a plurality of compute nodes scheduled to undergo an upgrade process and identifying an application executed by one or more of the plurality of compute nodes. The method includes determining a minimum node availability budget for the application and generating a batch upgrade scheme for the plurality of compute nodes, wherein the batch upgrade scheme upgrades a maximum quantity of the plurality of compute nodes in parallel while complying with the minimum node availability budget for the application.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 8/656 - Updates while running
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
H04L 41/0823 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
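For illustration only, the following sketch shows a batching rule consistent with the batch upgrade entry above: upgrade as many nodes in parallel as the minimum node availability budget allows. The sizing rule and inputs are assumptions of this example.

```python
# Build a batch upgrade scheme that keeps at least `min_available` nodes
# serving the application while upgrading the rest in parallel.
def batch_upgrade_scheme(nodes: list[str], min_available: int) -> list[list[str]]:
    parallelism = max(1, len(nodes) - min_available)
    return [nodes[i:i + parallelism] for i in range(0, len(nodes), parallelism)]

nodes = ["node-1", "node-2", "node-3", "node-4", "node-5"]
for step, batch in enumerate(batch_upgrade_scheme(nodes, min_available=3), start=1):
    print(f"step {step}: upgrade {batch}")
# step 1: ['node-1', 'node-2'], step 2: ['node-3', 'node-4'], step 3: ['node-5']
```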
20.
DYNAMIC SCHEDULING MANAGEMENT IN A WIRELESS COMMUNICATION NETWORK
Embodiments disclosed herein provide a method and system for dynamic scheduling management and optimization at the DU (108) for an associated RU (112). The method comprises determining, by a first network entity (108) associated with a wireless communication network, a PRB allocation, pertaining to candidate UEs, for downlink transmission to a second network entity (112), wherein the PRB allocation is associated with a plurality of time-symbols. Further, a set of time-symbols, from the plurality of time-symbols, in which a PRB allocation for downlink transmission is absent, is dynamically determined based on the PRB allocation. Furthermore, a downlink control signal comprising an indication for downlink symbol blanking in the set of time-symbols from the plurality of time-symbols is computed based on the determination. Thereafter, the downlink control signal is transmitted to the second network entity (112). The second network entity (112) deactivates transmission of downlink symbols in the set of time-symbols, based on the indication.
A computer system retrieves provisioning data and observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers. The computer system extracts component identifiers from the provisioning data and the log data. The computer system identifies relationships between components from the provisioning data and the log data, such as environmental variable relationships, network relationships, session relationships, access relationships, and network connection relationships. The computer system generates a topology in which nodes represent components and edges represent relationships. The topology is updated to show changes to components and relationships. A user may interact with a visual representation of the topology to change data displayed or invoke changes to the components and relationships represented by the topology.
The present invention extends to methods, systems, and computer program products for predictively addressing hardware component failures. Network packets can be received over time at a platform. Metrics derived from platform hardware components and derived from one or more workloads utilizing the platform hardware components can be monitored. Model training data can be formulated from the metrics. A health check model can be trained using the model training data. The health check model can be executed to compute a probability that a monitored platform hardware component is on a path to failure. It can be determined that the probability exceeds a threshold. A workload can be relocated from a pod containing the monitored platform hardware component to another pod. Additional network packets can be received over time at the platform. The workload can process data contained in the additional network packets at the other pod.
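For illustration only, the following sketch shows the decision step described above, in which a trained health-check model's failure probability is compared against a threshold to trigger workload relocation; the model interface, threshold, and metric names are assumptions of this example.

```python
# Run a health-check model on the latest hardware/workload metrics and
# relocate the workload when the predicted failure probability is too high.
def maybe_relocate(predict_failure_probability, metrics: dict,
                   threshold: float = 0.8) -> str:
    probability = predict_failure_probability(metrics)
    if probability > threshold:
        return "relocate-workload-to-healthy-pod"
    return "keep-in-place"

# Stand-in model: flag hosts with high correctable-memory-error counts.
stub_model = lambda m: min(1.0, m.get("corrected_mem_errors_per_hour", 0) / 100)
print(maybe_relocate(stub_model, {"corrected_mem_errors_per_hour": 95}))
```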
Systems and methods for generating and configuring network service packages includes multiple clusters for executing cloud native network functions and virtual network functions. A method includes receiving a request to generate a network service package comprising a first cluster and a second cluster. The method includes generating a dependency within the network service package such that the second cluster depends upon the first cluster. The method includes automatically configuring a first router associated with the first cluster and a second router associated with the second cluster such that the first router and the second router can route traffic to each other.
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 45/741 - Routing in networks with a plurality of addressing schemes, e.g. with both IPv4 and IPv6
24.
TECHNOLOGY FOR SEARCHING FOR PROXIMATE TERMINAL DEVICE
An electronic device according to various embodiments of the present disclosure may be configured to receive, from a first terminal device, a request for a list including at least one terminal device close to the first terminal device, a logical address of a communication network to which the first terminal device is connected, and a physical address of a communication device to which the first terminal device is connected, identify the at least one terminal device close to the first terminal device among the one or more second terminal devices, based on the logical address and the physical address of the first terminal device, and logical addresses, physical addresses, and flags respectively related to the one or more second terminal devices, and transmit the list including the at least one identified terminal device to the first terminal device.
A method, an apparatus, and a computer program product for scaling of subscriber capacity in a cloud native radio access network (RAN). A processing capacity assigned to one or more containers in a plurality of containers of a cloud native radio access network for providing communication to at least one user equipment in a plurality of user equipments is determined. The determined processing capacity is compared to at least one predetermined threshold in a plurality of predetermined thresholds. Based on the comparing, a determination is made whether to change an assignment of the processing capacity.
H04W 28/084 - Load balancing or load distribution among network function virtualisation [NFV] entities; Load balancing or load distribution among edge computing entities, e.g. multi-access edge computing
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04W 8/18 - Processing of user or subscriber data, e.g. subscribed services, user preferences or user profiles; Transfer of user or subscriber data
H04W 24/02 - Arrangements for optimising operational condition
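For illustration only, the following sketch shows a threshold comparison consistent with the subscriber-capacity scaling entry above; the threshold values and field names are assumptions of this example.

```python
# Compare the processing capacity assigned to RAN containers against
# utilization thresholds and decide whether the assignment should change.
def scaling_decision(assigned_capacity: float, used_capacity: float,
                     scale_up_at: float = 0.85, scale_down_at: float = 0.30) -> str:
    utilization = used_capacity / assigned_capacity
    if utilization >= scale_up_at:
        return "increase-assignment"
    if utilization <= scale_down_at:
        return "decrease-assignment"
    return "no-change"

print(scaling_decision(assigned_capacity=16.0, used_capacity=14.2))  # increase-assignment
```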
The present invention extends to methods, systems, and computer program products for optimizing resource allocation in view of predicted network traffic patterns and predicted power consumption. Network packets defining a network traffic flow can be received at a platform over time. Metrics can be derived from one or more applications executing on resources of the platform and processing data contained in the network data packets. Model training data can be formulated from the metrics. A resource adjustment model can be trained using the model training data. Executing the model can be automated to adjust resource allocation at the platform. Additional network packets defining an additional network traffic flow can be received at a platform over time. Data contained in the additional network packets can be processed using the adjusted resource allocation.
H04L 41/0897 - Scalability by means of horizontal or vertical resources, or by migrating entities, e.g. by means of virtual resources or entities
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks, characterised by the presence of memory or gates, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence
H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
27.
Cluster Consolidation Using Active and Available Inventory
A computer system pulls observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers, which may be part of a cloud computing platform. The components may be application instances, containers, storage volumes, pods, or other components. The computer system derives a utilization metric for each component and each of one or more types of computing resources: compute, memory, and storage. The utilization metrics are compared to available inventory of computing resources to obtain an active and available inventory (AAI). Components may be redeployed and allocated computing resources reduced based on the AAI. Components may be grouped in clusters and components may be consolidated to a reduced number of clusters based on the AAI.
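For illustration only, the following sketch shows how per-component utilization could be aggregated and compared with available inventory to produce an active and available inventory (AAI); the resource categories follow the abstract, while the data layout is an assumption of this example.

```python
# Sum per-resource utilization across components and subtract it from the
# reported inventory to obtain the active and available inventory (AAI).
def active_available_inventory(components: list[dict], inventory: dict) -> dict:
    used = {"compute": 0.0, "memory": 0.0, "storage": 0.0}
    for component in components:
        for resource in used:
            used[resource] += component.get(resource, 0.0)
    return {resource: inventory[resource] - used[resource] for resource in used}

aai = active_available_inventory(
    [{"compute": 2.0, "memory": 4.0, "storage": 50.0},
     {"compute": 0.5, "memory": 1.0, "storage": 10.0}],
    {"compute": 8.0, "memory": 16.0, "storage": 200.0},
)
print(aai)   # headroom that consolidation or redeployment decisions can use
```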
A system includes a cellular communication network that includes one or more antennas configured to exchange radio signals with a user equipment (UE). One or more computing devices execute a node, such as a gNodeB, configured to manage exchange of data with the UE using the one or more antennas. The node is configured to: receive, from the UE, a first estimate of a timing advance of data transmission between the UE and a cell of the cellular communication network, the first estimate being calculated by the user equipment; evaluate accuracy of the first estimate; and if the first estimate is not accurate, configure the UE to communicate with the cell using a RACH procedure to facilitate acquisition of the timing advance by the cellular communication network. The node may execute on a DU or a CU of the cellular communication network.
H04B 7/26 - Radio transmission systems, i.e. using radiation field, for communication between two or more posts, at least one of which is mobile
H04L 27/26 - Systems using multi-frequency codes
A method is disclosed. The method includes receiving, by a gNB, one or more UE-based Artificial Intelligence/Machine Learning (AI/ML) model output parameters and/or Key Performance Indicators (KPIs) and device capability information from a User Equipment (UE). The method also includes comparing, by the gNB, the one or more UE-based AI/ML model output parameters and/or KPIs with corresponding predefined benchmarked values. Furthermore, the method includes, upon determining that at least one of the one or more UE-based AI/ML model output parameters and/or KPIs is below the corresponding predefined benchmarked value, configuring, by the gNB, the UE to perform non-AI/ML-based operations for the at least one selected functionality or use-case, and one of disabling or continuing AI/ML-based measurements and/or predictions for the at least one selected functionality or use-case based on one or more predefined criteria corresponding to the received device capability information.
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence
The present invention extends to methods, systems, and computer program products for implementing an Infrastructure Management Service (IMS). A selected abstract function workflow defines an order for implementing a plurality of different abstract functions for a use case. A bare metal server of a specified configuration is selected to receive the use case. A bare metal profile pack corresponding to the specified configuration and use case is accessed. A plurality of different concrete functions within the bare metal profile pack and corresponding to the plurality of different abstract functions are identified. A secure shell protocol daemon acting as an IMS agent at the bare metal server receives instructions from worker threads executing the plurality of different concrete functions to implement the use case on the bare metal server.
The present invention extends to methods, systems, and computer program products for optimizing processor core frequency in view of predicted network traffic patterns. Network packets defining a network traffic flow can be received at a platform over time. Metrics can be derived from one or more applications executing at one or more processing units of the platform and processing data contained in the network data packets. Model training data can be formulated from the metrics. A processor unit frequency adjustment model can be trained using the model training data. Executing the model can be automated to adjust the frequency of a processing unit from among the one or more processing units. Additional network packets defining an additional network traffic flow can be received at a platform over time. Data contained in the additional network packets can be processed at the processing unit at the adjusted frequency.
H04L 43/026 - Capturing of monitoring data using flow identification
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks, characterised by the presence of memory or gates, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
A global license server transmits temporary licenses to a scheduling component for controlling access to managed software by a host such as a cluster, one or more servers, or a cloud computing platform. The scheduling component, when functioning normally, periodically transmits heartbeat messages to the global license server. If the global license server fails to receive heartbeat messages, the global license server instructs the scheduling component to expire the current temporary license for the host. The global license server may also blacklist the host such that the global license server will not transmit additional temporary licenses for the host.
Instantiation of a deployment image is accelerated by associating images with fingerprints, including fingerprints of constituent images (e.g., container and application images) of the deployment image. A database is maintained that associates hosts with the fingerprints of images instantiated thereon. When deploying the deployment image, a host associated with at least a portion of the fingerprint of the deployment image is identified and only the portion of the deployment image that is not already present on the host is transmitted to the host. An application image may be loaded into an already-executing container. The container may be restarted and invoke an entrypoint that references an orchestrator agent that retrieves and loads the application image into the container and invokes the entrypoint of the application image.
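For illustration only, the following sketch shows a fingerprint comparison consistent with the abstract above, transferring only constituent images whose fingerprints a host does not already hold; the hashing scheme and data layout are assumptions of this example.

```python
# Fingerprint each constituent image and transmit only the images whose
# fingerprints the target host does not already have on record.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def images_to_transfer(deployment_images: dict[str, bytes],
                       host_fingerprints: set[str]) -> list[str]:
    return [name for name, blob in deployment_images.items()
            if fingerprint(blob) not in host_fingerprints]

deployment = {"base-container": b"...layers...", "app-image": b"...app..."}
already_on_host = {fingerprint(b"...layers...")}
print(images_to_transfer(deployment, already_on_host))   # ['app-image']
```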
A user load specification is received that specifies a number of simulated users, one or more simulated locations of the simulated users, one or more uses (controller or client) of the simulated users, and one or more roles of the simulated users. A user load simulator generates commands from simulated user instances to a computing installation (e.g., cluster), receives responses to the commands, and updates states of the simulated user instances according to the responses. The commands and how the commands are submitted may be in accordance with the location, use, and/or role of each simulated user. The computing installation is monitored and observability data from the computing installation is related to the commands.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
H04L 43/062 - Generation of reports related to network traffic
A cluster includes pods, containers, application instances, and storage volumes. A cluster may be represented with a snapshot object from which the cluster can be recovered. To accelerate recovery, the snapshot object is scanned for security threats upon creation and upon receipt by a remote repository. To restore the cluster, the snapshot object is retrieved and transmitted by the remote repository without scanning. Likewise, when the snapshot object is received, it is used to re-instantiate the cluster without performing a security scan.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Upon failure of a host and in response to a lack of hosts having available processing units, a host is selected and one or more processing units of the selected host are allocated as shared CPUs for use by one or more components of the failed host. The selected host may be selected according to requirements, such as affinity, anti-affinity, and latency. The shared CPUs may have been previously allocated as dedicated CPUs. The shared CPUs may be bound to the one or more components. The one or more components may include a container.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A host receives a request to instantiate a plurality of containers, such as a host of a KUBERNETES pod. The host instantiates application instances of the plurality of containers within a single virtual machine without instantiating the plurality of containers. The CRI for the containers is a container emulator that maintains simulated states for the containers and responds to instructions for the containers. The container emulator performs binding of application instances to processors and the monitoring and reporting of usage information, such as processor time.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
38.
Application Redeployment Using Active and Available Inventory
A computer system pulls observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers, which may be part of a cloud computing platform. The components may be application instances, containers, storage volumes, pods, or other components. The computer system derives a utilization metric for each component and each of one or more types of computing resources: compute, memory, and storage. The utilization metrics are compared to available inventory of computing resources to obtain an active and available inventory (AAI). Components may be redeployed and allocated computing resources reduced based on the AAI. Components may be grouped in clusters and components may be consolidated to a reduced number of clusters based on the AAI.
A critical executable image is used to instantiate an instance on a computing device. The executable image is further saved on a local storage device of the computing device. In response to a failure of the instance, the instance may be restored from the locally stored executable image. The executable image may be of a container and corresponding application image. The computing device may receive a specification, e.g., a container specification, to instantiate the instance, the specification including an annotation instructing the computing device to locally store the executable image after instantiation.
A pod on a host receives a pod specification annotated with network data. The pod is instantiated on the host. The pod calls a container network interface (CNI) and passes the CNI the network data. The CNI creates network interfaces according to the network data. The pod calls a container runtime interface (CRI) to instantiate containers for the pod. The CNI and CRI are implemented by an agent that retains the network data. The CRI extracts environmental variables from the network data and configures the containers to use the environmental variables.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
A method comprising steps for partitioning a GPU and saving partition data to a database. The steps include providing a node comprising one or more applications. The steps further include providing a GPU, dividing the GPU into one or more GPU instances, wherein each GPU instance is associated with at least one of the one or more applications, saving partition data pertaining to the one or more GPU instances to a file, and saving the file to a database.
A computer system retrieves provisioning data and observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers. The computer system extracts component identifiers from the provisioning data and the log data. The computer system identifies relationships between components from the provisioning data and the log data, such as environmental variable relationships, network relationships, session relationships, access relationships, and network connection relationships. The computer system generates a topology in which nodes represent components and edges represent relationships. The topology is updated to show changes to components and relationships. A user may interact with a visual representation of the topology to change data displayed or invoke changes to the components and relationships represented by the topology.
H04L 41/12 - Discovery or management of network topologies
H04L 41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
43.
Implementing a Topology Lock For a Plurality of Dynamically Deployed Components
A computer system pulls observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers, which may be part of a cloud computing platform. The components may be application instances, containers, storage volumes, pods, or other components. The computer system derives a utilization metric for each component and each of one or more types of computing resources: compute, memory, and storage. The utilization metrics are compared to available inventory of computing resources to obtain an active and available inventory (AAI). Components may be redeployed and allocated computing resources reduced based on the AAI. Components may be grouped in clusters and components may be consolidated to a reduced number of clusters based on the AAI. A topology may be generated to represent components and relationships between components. Changes to the topology may be monitored to implement a topology lock.
A first cluster creates a cluster exchange object including identifiers of components of the first cluster and segments of data of the first cluster along with configuration data, such as access points and credentials. The first cluster transmits the object to a second cluster that instantiates copies of the components and retrieves the segments from the first cluster to become a replica of the second cluster. The second cluster may then commence execution upon failure of the primary cluster and restore the primary cluster. The primary cluster may send snapshot objects to the second cluster to communicate changes to the primary cluster and the snapshot objects may also be used to restore the primary cluster following failure. The components of a cluster may be represented in a directory structure and data describing a component may be retrieved in response to user interactions with a representation of the directory structure.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A cluster may include one or more NFS server pods having multiple storage volumes mounted thereto. Due to loading or other need, a new NFS server pod may be instantiated to handle traffic to one or more of the multiple storage volumes. The new NFS server pod is instantiated and an agent of an orchestrator instantiates a container for an NFS server and configures the new NFS server pod to communicate with clients of the NFS server, which may include configuring a source address of the new NFS server pod and configuring an NFS service of the cluster with an association between the source address and the one or more storage volumes of the new NFS server pod.
A primary database and one or more secondary databases are managed by a redundancy manager, e.g., PATRONI, that manages failover to one of the secondary databases upon failure of the primary database. A separate orchestrator monitors status of the host of the primary database and monitors values such as loading, latency, temperature, and/or trends in these values. Upon detecting that the values indicate a risk of failure of the host, the orchestrator preemptively instructs the redundancy manager to perform failover to one of the secondary databases.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
47.
Managing Tenant Users in Coordination with Identity Provider
Systems and methods for mapping users to tenants within a containerized workload management architecture. A method includes identifying a tenant group comprising a plurality of users. The method includes mapping the tenant group to a tenant and adding the tenant to a cluster, wherein the cluster comprises compute resources for executing workloads. The method is such that each of the plurality of users within the tenant group is assigned a same role and same permissions within the cluster.
A computer system pulls observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers, which may be part of a cloud computing platform. The components may be application instances, containers, storage volumes, pods, or other components. The computer system derives a utilization metric for each component and each of one or more types of computing resources: compute, memory, and storage. The utilization metrics are compared to available inventory of computing resources to obtain an active and available inventory (AAI). Components may be redeployed and allocated computing resources reduced based on the AAI. Components may be grouped in clusters and components may be consolidated to a reduced number of clusters based on the AAI. Applications may be provisioned and deployed on clusters in groups of different types (dot, triangle, line, graph) having different runtime requirements based on location, latency, hardware resources, and/or round robin assignment.
A method for organizing and deploying containerized applications within a cloud-network architecture framework. The steps include receiving a plurality of pod requests. The steps include organizing the plurality of pod requests into one or more batches. The steps include, for each of the one or more batches, determining a resource requirement for each pod request in the plurality of pod requests in the batch. The steps further include determining a host availability and a host resource availability of one or more hosts. The steps further include deploying each pod request in the plurality of pod requests in each of the one or more batches to one of the one or more hosts based on the host availability and the host resource availability.
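For illustration only, the following sketch shows a batch placement loop consistent with the abstract above, deploying each pod request to a host with sufficient availability and remaining resources; the structures and first-fit rule are assumptions of this example.

```python
# Place each pod request of each batch onto the first available host with
# enough remaining CPU and memory, decrementing the host's free capacity.
def place_batches(batches: list[list[dict]], hosts: dict[str, dict]) -> dict[str, str]:
    placement = {}
    for batch in batches:
        for pod in batch:
            for host_name, host in hosts.items():
                if (host["available"] and host["free_cpu"] >= pod["cpu"]
                        and host["free_mem"] >= pod["mem"]):
                    host["free_cpu"] -= pod["cpu"]
                    host["free_mem"] -= pod["mem"]
                    placement[pod["name"]] = host_name
                    break
    return placement

hosts = {"h1": {"available": True, "free_cpu": 4, "free_mem": 8},
         "h2": {"available": True, "free_cpu": 2, "free_mem": 4}}
print(place_batches([[{"name": "pod-a", "cpu": 3, "mem": 6},
                      {"name": "pod-b", "cpu": 2, "mem": 2}]], hosts))
# {'pod-a': 'h1', 'pod-b': 'h2'}
```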
Disclosed is an apparatus (121) at a base station (120) configured to monitor one or more Radio Resource Management (RRM) limitations in a set A of beams or a set B of beams associated with a user equipment (UE) (110). The set A of beams comprises predicted beams and the set B of beams comprises measured beams for beamforming. The apparatus (121) is configured to identify the suboptimal beam(s) from among the at least one of the set A of beams and the set B of beams based on the monitored RRM limitation(s) and transmit, to the UE (110), a first assistance information comprising beam identifiers (IDs) and a flag corresponding to the suboptimal beam(s) that are excluded during beam prediction at the UE. The apparatus (121) is configured to receive, from the UE (110), remaining beam IDs corresponding to the predicted set A of beams associated with the beamforming.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using several independent spaced-apart antennas at the transmitting station
H04B 7/08 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using several independent spaced-apart antennas at the receiving station
H04W 24/02 - Arrangements for optimising operational condition
A computer system pulls observability data (metrics, logs, events, alerts, inventory) for a plurality of components from remote servers, which may be part of a cloud computing platform. The components may be application instances, containers, storage volumes, pods, or other components. The computer system derives a utilization metric for each component and each of one or more types of computing resources: compute, memory, and storage. The utilization metrics are compared to available inventory of computing resources to obtain an active and available inventory (AAI). The one or more components may be modified based on the AAI such as by adding a component, deleting a component, or moving a component to a new host.
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or of input/output operations
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters, by checking availability by checking functioning
A first cluster creates a cluster exchange object including identifiers of components of the first cluster and segments of data of the first cluster along with configuration data, such as access points and credentials. The first cluster transmits the object to a second cluster that instantiates copies of the components and retrieves the segments from the first cluster to become a replica of the second cluster. The second cluster may then commence execution upon failure of the primary cluster and restore the primary cluster. The primary cluster may send snapshot objects to the second cluster to communicate changes to the primary cluster and the snapshot objects may also be used to restore the primary cluster following failure. The components of a cluster may be represented in a directory structure and data describing a component may be retrieved in response to user interactions with a representation of the directory structure.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A logical host, e.g., KUBERNETES Kubelet, receives a request for instantiation of containers with shared CPUs that includes an annotation. The logical host adds the shared CPUs to a best-effort set of CPUs available for use by any process. The logical host calls a CRI to instantiate the containers and passes the annotation to the CRI. The CRI is an agent of an orchestrator and binds the shared CPUs to the containers such that the shared CPUs are dedicated to the containers. The CRI adds the shared CPUs to a shared CPU set.
Systems and methods for zero touch provisioning of a bare metal server to run radio access network (RAN) software. A method includes delivering a network boot program to a bare metal server in a preboot execution environment and causing the bare metal server to execute the network boot program. The method includes registering the bare metal server with a data center automation platform and instantiating a radio access network (RAN) application on the bare metal server.
A method comprising identifying one or more application resources associated with one or more applications, wherein the one or more applications is associated with a namespace, identifying a plurality of persistent volume claims, identifying a plurality of storage volumes associated with the namespace, wherein each of the plurality of storage volumes is bound to at least one of the plurality of persistent volume claims, pausing transactions executed on each of the plurality of storage volumes, capturing a snapshot of each of the plurality of storage volumes, creating a copy of the one or more application resources, and capturing a namespace snapshot by capturing the snapshots of each of the plurality of storage volumes and the copy of the one or more application resources.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
56.
Disk Image Dump for Configuring Bare Metal Servers
Systems and methods for capturing a disk image of a bare metal server, and then using the disk image to provision other bare metal servers. A method includes capturing a disk image of a first bare metal server and writing the disk image to a repository manager. The method includes launching a continuous delivery mode on a second bare metal server, where the disk image is fetched from the repository manager and written to the second bare metal server, followed by configuring the unique identity.
Disclosed herein is a User Equipment (UE) (201). The UE is configured to receive a Random-Access Channel (RACH)-less Layer 1/Layer 2 Triggered Mobility (LTM) cell switch command from a source gNB-DU (203). The UE is also configured to transmit, to a target gNB-DU (205), a first UL data packet via one of a configured UL scheduling grant and a dynamic UL scheduling grant to indicate a successful RACH-less LTM cell switch. The UE is configured to transmit, to the target gNB-DU, a Scheduling Request (SR) to receive dynamic grants to transmit a second UL data packet. The UE is configured to receive, from the target gNB-DU, a second UL scheduling grant in response to the transmitted scheduling request. The UE is configured to determine, in response to the received second UL scheduling grant, that the indication of the successful RACH-less LTM cell switch to the target gNB-DU is successfully delivered.
H04W 72/23 - Control channels or signalling for resource management in the downlink direction of a wireless link, i.e. towards a terminal
Systems and methods for remotely configuring a bare metal server located on-premises. A method includes registering a bare metal server with a cloud native platform, wherein the bare metal server is located on-premises at a client location, and remotely installing an operating system on the bare metal server. The method includes causing the bare metal server to install the operating system on a plurality of other bare metal servers located on-premises at the client location.
Provided are an apparatus, method, and device for automatically managing conflicts. According to example embodiments, the apparatus may be configured to: obtain a conflict matrix specifying one or more conflicts between a first application and a second application of a plurality of applications; obtain a conflict policy definition defining priorities of the plurality of applications relative to each other; and determine, based on the conflict matrix and the conflict policy definition, a conflict mitigation schedule for implementation of changes of the plurality of applications using one or more conflict management and coordination mechanisms to resolve the associated one or more conflicts specified in the conflict matrix.
H04L 41/0873 - Checking configuration conflicts between network elements
H04L 41/0894 - Policy-based network configuration management
H04L 41/5009 - Determination of service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 41/0859 - Retrieval of network configuration; Tracking network configuration history by keeping history of different configuration generations or by reverting to previous configuration versions
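For illustration only, the following sketch shows how a conflict matrix and a priority policy could be turned into a mitigation schedule that serializes conflicting changes, consistent with the conflict-management entry above; the wave-based scheduling rule and data structures are assumptions of this example.

```python
# Serialize changes of conflicting applications into waves, scheduling the
# higher-priority application first; non-conflicting changes run in parallel.
def mitigation_schedule(conflict_matrix: set[frozenset], priorities: dict[str, int],
                        pending_changes: list[str]) -> list[list[str]]:
    ordered = sorted(pending_changes, key=lambda app: priorities.get(app, 0), reverse=True)
    waves: list[list[str]] = []
    for app in ordered:
        for wave in waves:
            if all(frozenset({app, other}) not in conflict_matrix for other in wave):
                wave.append(app)     # no conflict with this wave: run in parallel
                break
        else:
            waves.append([app])      # conflicts with every wave: new sequential wave
    return waves

conflicts = {frozenset({"app-a", "app-b"})}
print(mitigation_schedule(conflicts, {"app-a": 10, "app-b": 5, "app-c": 1},
                          ["app-b", "app-c", "app-a"]))
# [['app-a', 'app-c'], ['app-b']]
```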
An orchestrator manages password rotation for bare metal servers that lack an agent for performing password rotation. A user configures parameters for performing password rotation, such as the frequency (e.g., cron command) and password rules for a bare metal server. Parameters may be associated with a theme that can be applied to one or multiple bare metal servers. When indicated by the frequency, the orchestrator runs a workflow with respect to one or more bare metal servers in order to change the passwords and update a vault store recording the passwords.
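For illustration only, the following sketch shows rotation parameters grouped into a theme (a cron-style frequency plus password rules) and a workflow that generates compliant passwords and records them in a vault mapping; the rule fields, theme layout, and vault are assumptions of this example.

```python
# Generate rule-compliant passwords per server and record them in a vault
# mapping; the rotation frequency is expressed as a cron string in the theme.
import secrets
import string

THEME = {"frequency": "0 3 * * 0",          # weekly, Sunday 03:00 (cron syntax)
         "rules": {"length": 20, "require_digit": True, "require_symbol": True}}

def generate_password(rules: dict) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%"
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(rules["length"]))
        if (not rules["require_digit"] or any(c.isdigit() for c in candidate)) and \
           (not rules["require_symbol"] or any(c in "!@#$%" for c in candidate)):
            return candidate

def rotate(servers: list[str], vault: dict[str, str]) -> None:
    for server in servers:
        vault[server] = generate_password(THEME["rules"])
        # a real workflow would also push the new credential to the server here

vault: dict[str, str] = {}
rotate(["bms-01", "bms-02"], vault)
print({s: len(p) for s, p in vault.items()})   # {'bms-01': 20, 'bms-02': 20}
```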
An orchestrator performs user session management of bare metal servers that lack an agent cooperating with the orchestrator. Log data is pulled from the servers, such as using a vector agent. The log data is processed to obtain user session records (e.g., PID, username, start time, and end time). The user session records are processed to detect malicious or suspicious activity or violations of policies. The orchestrator may invoke performance of a workflow to perform session management actions such as blocking, limiting, or logging off users. The workflow executes remote from the server and may communicate instructions to the server through a secure command line interface.
A system includes a radio node connected to a core network through the Internet. The system further includes a load balancer connected to the Internet and the radio node, a mobility management entity connected to the Internet and the core network, and a packet data network gateway connected to the Internet and the core network. The load balancer receives a plurality of Internet connections and consolidates the Internet connections into a single Internet connection to be provided to the radio node. The mobility management entity is configured to verify a core network connection request from a user equipment using a private blockchain network, and the packet data network gateway is configured to establish a virtual private network connection between the user equipment and the core network in response to successful verification of the core network connection request to provide permitted core network services to the user equipment.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 67/1004 - Server selection for load balancing
A method for creating a temporary storage volume for an application bundle in a cloud-network architecture framework. The method includes identifying an application comprising a role. The role is mapped to a pod comprising a container requesting, from a storage provider, an ephemeral storage volume. The method includes mounting the ephemeral storage volume to one or more of the pod or the container.
Systems and methods for mapping users and applications to persistent data volumes. A method includes identifying an application bundle comprising a role, wherein the role is mapped to a pod comprising one or more containers. The method includes generating a persistent data volume referenced by the application bundle, wherein the persistent data volume is created independent of the application bundle. The method includes generating a persistent volume claim for the persistent data volume and mounting the persistent data volume to one or more of the pod or at least one of the one or more containers.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
A method (400) is disclosed. The method (400) includes receiving (402), from an xApp, a data packet at a near Real-Time Radio access network Intelligent Controller (near RT RIC) node. The method (400) also includes comparing (404), by the near RT RIC node, a set of predefined parameters associated with the data packet with a corresponding set of predefined values stored at the near-RT RIC node. The method (400) further includes validating (406), by the near RT RIC node, the data packet based on the comparison.
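For illustration, a minimal validation step of this kind might compare packet fields against stored expected values as sketched below; the parameter names and values are assumed, not drawn from the application.

# Hypothetical stored values the near-RT RIC would hold for an xApp.
EXPECTED = {"api_version": "2.0", "message_type": "RIC_SUBSCRIPTION", "ran_function_id": 1}

def validate_packet(packet: dict, expected: dict = EXPECTED) -> bool:
    """Compare the packet's predefined parameters with the stored values; reject on mismatch."""
    return all(packet.get(key) == value for key, value in expected.items())

good = {"api_version": "2.0", "message_type": "RIC_SUBSCRIPTION", "ran_function_id": 1, "payload": b"..."}
bad = {"api_version": "1.0", "message_type": "RIC_SUBSCRIPTION", "ran_function_id": 7}
print(validate_packet(good), validate_packet(bad))   # True False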
Embodiments of the present disclosure disclose a Network Repository Function (NRF). The NRF is configured to receive, from at least one user plane function (UPF), a profile registration request. The profile registration request comprises a plurality of attributes supported by the at least one UPF. The plurality of attributes at least comprise one or more of: Network Address Translation (NAT) functionality, Distributed Denial-of-Service (DDoS) protection, Domain Name Service (DNS) spoofing, packet inspection functionality, energy saving, one or more hardware configurations, and an operator-specific string. The NRF is further configured to register the profile of the at least one UPF based on the plurality of attributes supported by the UPF network function (NF) of the at least one UPF. The NRF is then configured to transmit a registration response indicating the registration of the UPF.
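A rough sketch, under assumed attribute names and an in-memory registry, of what accepting or rejecting such a profile registration could look like; it is not the NRF's actual interface.

from dataclasses import dataclass, field

@dataclass
class UpfProfile:
    """Illustrative UPF profile carrying the supported-attribute list."""
    upf_id: str
    attributes: dict = field(default_factory=dict)

class Nrf:
    """Minimal in-memory stand-in for the NRF registration behaviour."""
    SUPPORTED = {"nat", "ddos_protection", "dns_spoofing", "packet_inspection",
                 "energy_saving", "hardware_config", "operator_string"}

    def __init__(self):
        self.registry = {}

    def register(self, profile: UpfProfile) -> dict:
        unknown = set(profile.attributes) - self.SUPPORTED
        if unknown:
            return {"status": "REJECTED", "unknown_attributes": sorted(unknown)}
        self.registry[profile.upf_id] = profile
        return {"status": "REGISTERED", "upf_id": profile.upf_id}

nrf = Nrf()
print(nrf.register(UpfProfile("upf-1", {"nat": True, "energy_saving": True})))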
Applications are executed on a host in association with a power profile and a pool of one or more CPUs that are isolated relative to the applications. The power profile includes a lowest power state that is suitable for the applications and achieves required performance for the workload being performed by the applications. The applications may have fractional CPU requirements collectively met by the number of CPUs in the pool. Other components, such as the operating system and one or more agents of one or more orchestrators, may be allocated their own isolated pool of CPUs operating at a highest power state. The implementation of isolated CPUs that are shared by multiple applications may be performed by an agent of an orchestrator that is invoked as the Container Runtime Interface (CRI) when instantiating containers.
G06F 1/3234 - Power saving characterised by the action undertaken
69.
MANAGING INTERACTIONS BETWEEN O-CLOUD RESOURCES MANAGEMENT AND ORCHESTRATION AND RADIO ACCESS NETWORK ORCHESTRATION ADMINISTRATION MAINTENANCE FUNCTIONS
Provided are a method, system, and device for managing interactions between an O-Cloud Resources Management and Orchestration (ORMO) Service Management Orchestration Function (SMOF) and a Radio Access Network Operations Administration Maintenance (RANOAM) function. The method may include receiving, by the ORMO, a service related to an SMOF from a Service Management and Exposure (SME); sending, by the ORMO, a request to the SMOF to drain traffic of at least one network function (NF); and sending, by the ORMO, a request to a Topology Exposure and Inventory Management (TE/IV) to update an inventory.
H04L 41/342 - Signalling channels for communication dedicated to network management between virtual entities, e.g. orchestrators, SDN or NFV
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/122 - Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
70.
OPTIMIZING RAN NOTIFICATION AREA (RNA) UPDATE AND RAN PAGING USING USER EQUIPMENT (UE) MOBILITY HISTORY INFORMATION
Embodiments of the present disclosure disclose determining a priority cell list based on mobility history information of a UE. The RIC receives the mobility history information and a subscriber ID of the UE from a base station. Further, the RIC stores the mobility history information against the subscriber ID of the UE. Thereafter, the RIC determines the priority cell list based on the mobility history information of the UE for optimizing RNA update and RAN paging.
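Purely as an illustration, one simple way to derive such a priority list is to rank the cells the UE has visited most often; the frequency heuristic and names below are assumptions, not the application's algorithm.

from collections import Counter

def priority_cell_list(mobility_history: list[str], max_cells: int = 4) -> list[str]:
    """Rank cells the UE visited most often; the top entries form the priority
    list used for RNA update and RAN paging (simple frequency heuristic)."""
    counts = Counter(mobility_history)
    return [cell for cell, _ in counts.most_common(max_cells)]

# Stored against the subscriber ID when the base station reports history.
history_db = {"sub-001": ["cell-7", "cell-7", "cell-3", "cell-7", "cell-12", "cell-3"]}
print(priority_cell_list(history_db["sub-001"]))   # ['cell-7', 'cell-3', 'cell-12']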
The present disclosure describes a method and a system to identify call flows in a Radio Unit (RU) or Centralized Unit (CU) and Distributed Units (DU) for cell-to-pod remapping. The method comprises receiving call flow traffic data of a plurality of DUs from an associated CU. Subsequently, the method comprises determining a first DU and a second DU among the plurality of DUs based on the call flow traffic data. Thereafter, the method comprises determining a new configuration for each of the first DU and the second DU based on at least one of the call flow traffic data, historical data, and pre-determined rules using a prediction model. Lastly, the method comprises transmitting the new configuration for each of the first DU and the second DU to the associated CU for remapping of a RU.
The present invention extends to methods, systems, and computer program products for assessing cell base station performance. In one aspect, an intelligent quantitative performance index is used for assessing cell base station performance. Assessing cellular base station performance can include identifying underperforming cellular base stations and predicting cellular base station performance. Artificial Intelligence (AI) can be utilized to identify performance similarities among geographically segregated cellular base stations. AI can be used to derive dynamic scores adapting to different cellular traffic patterns and using smart thresholds. AI models can consider the time of a metric degradation when impacting scores/indexes. Aspects can be used to help network operations teams maintain cellular networks and provide upper management with a quantitative view of cellular network performance.
Embodiments of the present disclosure disclose a routing management system comprising a user plane control module, a session management module and a radio control module communicatively coupled with the user plane control module. The user plane control module is configured to receive a downlink data notification (DDN) of a User Equipment (UE) from a user plane system and determine whether the UE is in a Radio Resource Control (RRC)_INACTIVE state or an RRC_IDLE state in response to the DDN. The user plane control module is configured to initiate paging either through the radio control module if the UE is in the RRC_INACTIVE state or through the session management module if the UE is in the RRC_IDLE state. The user plane control module is configured to establish a communication between the UE and a Radio Access Network (RAN) node upon successful paging of the UE.
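A minimal sketch of the branching described above, assuming made-up module names and a simple enum for the RRC state; it is not the routing management system's implementation.

from enum import Enum

class RrcState(Enum):
    INACTIVE = "RRC_INACTIVE"
    IDLE = "RRC_IDLE"
    CONNECTED = "RRC_CONNECTED"

def handle_ddn(ue_id: str, rrc_state: RrcState) -> str:
    """Choose the paging path on a downlink data notification, mirroring the
    branch described in the abstract (module names are illustrative)."""
    if rrc_state is RrcState.INACTIVE:
        return f"page {ue_id} via radio control module (RAN paging)"
    if rrc_state is RrcState.IDLE:
        return f"page {ue_id} via session management module (CN paging)"
    return f"{ue_id} already connected; forward data directly"

print(handle_ddn("ue-42", RrcState.INACTIVE))
print(handle_ddn("ue-42", RrcState.IDLE))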
Example embodiments of the present disclosure are related to provisioning network energy management in a telecommunication system. According to embodiments, an apparatus may include an energy management function (EMF) for a telecommunication network. The apparatus may be configured to execute instructions for implementing the EMF to: receive, from at least one network entity, one or more energy data associated with the at least one network entity; process the one or more energy data to produce a level of energy efficiency associated with the at least one network entity; and perform, based on the level of energy efficiency, one or more operations for managing energy usage of the at least one network entity.
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. quality of service [QoS], energy consumption or environmental parameters, by checking availability, by checking functioning
H04L 41/0833 - Configuration settings characterised by the objectives of a change of parameters, e.g. optimising configuration for enhancing reliability, for reduction of network energy consumption
H04L 41/147 - Network analysis or design for predicting network behaviour
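As a rough illustration of the energy management function described above, the sketch below computes a throughput-per-energy figure and maps it to a coarse level; the metric, thresholds, and units are assumptions rather than the application's definition of energy efficiency.

def energy_efficiency_level(data_volume_mbit: float, energy_kwh: float,
                            thresholds=(50.0, 150.0)) -> tuple[float, str]:
    """Compute an illustrative efficiency metric (Mbit per kWh) and map it to a level."""
    efficiency = data_volume_mbit / energy_kwh if energy_kwh else 0.0
    low, high = thresholds
    if efficiency >= high:
        level = "HIGH"
    elif efficiency >= low:
        level = "MEDIUM"
    else:
        level = "LOW"
    return efficiency, level

eff, level = energy_efficiency_level(data_volume_mbit=4800.0, energy_kwh=40.0)
print(f"{eff:.1f} Mbit/kWh -> {level}")   # a LOW result might trigger e.g. an energy-saving action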
Provided are a method, system, and device for shutting down an identified Open Radio Access Network (O-RAN) Cloud (O-Cloud) node. The method may include: receiving, by a Federated O-Cloud Orchestration and Management (FOCOM), a recommendation from an rApp to shut down an identified O-Cloud node; sending, by the FOCOM, an instruction to a Network Function Orchestrator (NFO) to relocate a workload from the identified O-Cloud node to an alternative O-Cloud node based on the recommendation; and sending, by the FOCOM, an instruction to Infrastructure Management Services (IMS) to shut down the identified O-Cloud node.
H04L 41/0897 - Scalability by means of horizontal or vertical resources, or by means of migrating entities, e.g. virtual resources or entities
H04L 41/0833 - Configuration settings characterised by the objectives of a change of parameters, e.g. optimising configuration for enhancing reliability, for reduction of network energy consumption
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence
H04W 24/02 - Arrangements for optimising operational condition
76.
MANAGING INTEROPERABILITY OF NG-RAN NODES IN UE HANDOVER
Embodiments of the present disclosure disclose managing interoperability of NG-RAN nodes in UE handover. The method comprises transmitting an Xn handover request to hand over a UE from a source NG-RAN node (204) to a target NG-RAN node (206), including a list of AI/ML use cases configured at the source NG-RAN node (204). Further, an Xn handover response is received from the target NG-RAN node. Further, the method comprises determining measurement reconfiguration data (319) for the UE by the source NG-RAN node based on the active list of AI/ML use cases until the UE is handed over to the target NG-RAN node. Thereafter, the method comprises transmitting an RRC reconfiguration message, comprising the measurement reconfiguration data, to the UE (202) for dynamically updating the list of AI/ML use cases. The present disclosure may reduce unnecessary overhead for the NG-RAN nodes by eliminating reception of measurements from the UE.
Example embodiments of the present disclosure relate to Layer 1/Layer 2 (L1/L2) Triggered Mobility (LTM) interworking with Cell Discontinuous Transmission (DTX)/Discontinuous Reception (DRX) at a target cell. According to one or more embodiments, a system may include a distributed unit (DU) that may be configured to determine, based on an active duration of a Cell DTX/DRX cycle associated with an inter-frequency cell, an optimal measurement gap (MG). Further, the DU may be configured to provide, to a user equipment (UE), information of the optimal MG, wherein the information of the optimal MG may be utilized by the UE to perform an inter frequency measurement.
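For illustration only, one simple way to place a measurement gap inside the active duration of a target cell's DTX/DRX cycle is sketched below; the millisecond granularity, the gap length, and the placement rule are assumptions, not the DU's actual procedure.

def measurement_gap_offset(cycle_ms: int, active_start_ms: int,
                           active_len_ms: int, gap_len_ms: int = 6):
    """Pick a gap offset (within one Cell DTX/DRX cycle) so that the whole
    measurement gap lies inside the cell's active duration, if possible."""
    if gap_len_ms > active_len_ms:
        return None                      # no gap fits fully inside the active window
    # Earliest offset that keeps the entire gap inside the active duration.
    return active_start_ms % cycle_ms

offset = measurement_gap_offset(cycle_ms=160, active_start_ms=40, active_len_ms=20)
print(f"configure MG with offset {offset} ms, repeating every 160 ms")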
Embodiments of the present disclosure disclose internet protocol (IP) assignment and secure traffic for network elements deployed over an untrusted transport network. In an embodiment, a base station (101) transmits an Open Cloud (O-Cloud) available registration request to an operator network system (103) through a First Secure Tunnel (FST) (133) established between the operator network system (103) and the base station (101). The FST is terminated upon receiving, through the FST from the operator network system, network information related to each of a plurality of O-Cloud entities of the O-Cloud. Thereafter, the base station (101) transmits a second authentication request to the operator network system for establishing a Second Secure Tunnel (SST) (135) between the operator network system and the base station. Finally, the base station establishes the SST between the operator network system and the base station when the network information is authenticated. The established SST allows bi-directional traffic related to each of the plurality of O-Cloud entities. The present disclosure helps in handling the traffic at the operator network system.
Example embodiments of the present disclosure relate to cloud-native function (CNF) authentication during the instantiation and bootstrapping of the CNF. According to embodiments, a method may be provided, including sending, by a supplicant to an authenticator during an instantiation and bootstrapping stage of a CNF, a message to initiate an Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) protocol sequence, wherein an EAP-TLS authentication is performed with an authentication server based on the message; receiving, by the supplicant, a result of the EAP-TLS authentication from the authenticator, wherein the result of the EAP-TLS authentication originates from the authentication server, and wherein the authenticator is configured to control traffic of the CNF based on the result of the EAP-TLS authentication.
The present disclosure describes a method and a base station for efficient searching of UE context during a base station resume procedure. The base station receives a resume request from a UE. The resume request comprises a base station identifier, a UE identifier, and authentication information. Subsequently, the base station retrieves RAT information of the UE from a lower layer of the base station upon receiving the resume request. Based on at least one of the base station identifier, the UE identifier, the authentication information, and the RAT information, the base station either receives UE context information from one of a plurality of neighbouring base stations or determines the UE context information at the base station. Thereafter, the base station establishes a connection with the UE using the UE context information.
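A minimal sketch of the local-versus-neighbour lookup described above, under assumed identifier fields and in-memory dictionaries standing in for UE context stores; the real procedure would fetch remote context over an inter-node interface.

def resolve_ue_context(resume_request: dict, local_contexts: dict, neighbours: dict):
    """Locate the UE context for a resume request: serve it locally when the
    base-station identifier points at this node, otherwise fetch it from the
    neighbour identified in the request (illustrative identifier split)."""
    local_id = "gnb-17"
    gnb_id, ue_id = resume_request["base_station_id"], resume_request["ue_id"]
    if gnb_id == local_id:
        return local_contexts.get(ue_id)
    return neighbours.get(gnb_id, {}).get(ue_id)   # retrieve-context from the neighbour in practice

local = {"ue-9": {"bearers": 2, "security": "ok"}}
remote = {"gnb-21": {"ue-4": {"bearers": 1, "security": "ok"}}}
print(resolve_ue_context({"base_station_id": "gnb-21", "ue_id": "ue-4"}, local, remote))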
The present disclosure relates to techniques for defining beam associations for efficient beam identification and prediction. Particularly, a network transmits a set of static beams and, based on these, a set of dynamic beams is predicted for a UE. The predicted beams are formed dynamically based on the location of the UE. During beam prediction configuration, a corresponding beam topology identifier is transmitted to the UE, which assists the UE, using a well-defined association between the set of transmitted and predicted beams, in predicting one or more best beams from the set of predicted beams for communicating with a base station. In this way, the beam topology identifier defines a unique association between the set of transmitted beams and the set of predicted beams for efficient beam identification and prediction.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using several independent spaced antennas at the transmitting station
H04W 64/00 - Locating users or terminals for network management, e.g. mobility management
82.
SYSTEM AND METHOD FOR PROVIDING A CLOUD RESOURCE OPTIMIZATION POLICY IN TELECOMMUNICATIONS SYSTEM
A system for implementing an open cloud (O-Cloud) optimization policy by an application hosted in a near real-time radio access network Intelligent Controller (nRT-RIC) of a telecommunications network. The system includes a memory storing instructions and at least one processor configured to implement the nRT-RIC within an open radio access network (O-RAN) architecture, the at least one processor configured to execute the instructions to: receive the O-Cloud optimization policy from a non-real-time radio access network Intelligent Controller (NRT-RIC) within a Service Management and Orchestration (SMO) framework of the telecommunications network; and control to implement the O-Cloud optimization policy in an O-Cloud computing environment within the O-RAN.
H04L 41/0894 - Policy-based network configuration management
H04L 41/0823 - Configuration settings characterised by the objectives of a change of parameters, e.g. optimising configuration for enhancing reliability
83.
DETECTING PHYSICAL CELL IDENTIFIER (PCI) CONFUSION DURING SECONDARY NODE (SN) CHANGE PROCEDURE IN WIRELESS NETWORKS
Embodiments of the present disclosure disclose a first secondary node element (203-1) for detecting Physical Cell Identifier (PCI) confusion during a Secondary Node (SN) change procedure. The first secondary node element (203-1) receives Secondary Cell Group (SCG) failure information from a UE (201) during handover from the first secondary node element (203-1) to a second secondary node element (203-2), indicating the cause of the handover failure as a Random Access Channel failure. Based on the cause, the first secondary node element (203-1) suspects that the failure is due to PCI confusion associated with the PCI of the second secondary node element (203-2), within a pre-determined time period of a timer configured at a primary node element (202) for holding the context of the UE (201) at the first secondary node element (203-1). The first secondary node element (203-1) performs a mitigation action during subsequent handovers to secondary node elements, for UEs in the wireless network, when the PCI of one of the secondary node elements is the same as the suspected PCI.
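A rough sketch of the detection logic, assuming a simple timer window and a set of suspected PCIs; the class, field names, and mitigation hook are illustrative rather than the claimed procedure.

from datetime import datetime, timedelta

class PciConfusionDetector:
    """Track SCG-failure reports and suspect PCI confusion when a RACH failure
    toward a given PCI arrives while the UE context is still held (timer window)."""

    def __init__(self, context_hold: timedelta = timedelta(seconds=5)):
        self.context_hold = context_hold
        self.suspected_pcis: set[int] = set()

    def on_scg_failure(self, cause: str, target_pci: int,
                       handover_started: datetime, now: datetime) -> bool:
        within_timer = (now - handover_started) <= self.context_hold
        if cause == "RACH_FAILURE" and within_timer:
            self.suspected_pcis.add(target_pci)
            return True
        return False

    def needs_mitigation(self, candidate_pci: int) -> bool:
        """Apply a mitigation action on later handovers toward a suspected PCI."""
        return candidate_pci in self.suspected_pcis

det = PciConfusionDetector()
t0 = datetime(2024, 5, 1, 12, 0, 0)
det.on_scg_failure("RACH_FAILURE", target_pci=101, handover_started=t0,
                   now=t0 + timedelta(seconds=2))
print(det.needs_mitigation(101), det.needs_mitigation(250))   # True False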
Example embodiments of the present disclosure relate to Cell Discontinuous Transmission (DTX)/Discontinuous Reception (DRX) interworking with Layer 1/Layer 2 (L1/L2) Triggered Mobility (LTM). According to embodiments, a system may include a serving distributed unit (DU). The serving DU may be configured to: add at least one LTM-specific active duration to at least one Cell DTX/DRX cycle associated with a serving cell; provide, to at least one user equipment (UE), information of the at least one added active duration; and provide, to the at least one UE during the at least one added active duration, a Media Access Control (MAC) Control Element (CE). The MAC CE may include a cell switch command that instructs the UE to perform an LTM cell switch from the serving cell to a target cell.
The present disclosure relates to techniques for predicting the Tx/Rx beam angle of one or more dynamic beams, corresponding to static broadcast beams, with optimized granularity. Particularly, a receiving entity receives at least one of beam information and one or more control parameters. The beam information comprises Tx/Rx beam angles of one or more static broadcast beams. Subsequently, during beam prediction, an optimized Tx/Rx beam angle for one or more dynamic beams corresponding to the one or more static broadcast beams is predicted using a pre-trained learning model, based on the received beam information, the one or more control parameters, and angle granularity information, for effective communication. The angle granularity information defines the variation in beam angle for predicting the optimized Tx/Rx beam angle.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using several independent spaced antennas at the transmitting station
H04B 17/391 - Modelling of the propagation channel
86.
XAPP INSTANCE REGISTRATION IN CLOUD-NATIVE NETWORKS
Provided are a method, system, and device for registering an xApp instance in a cloud-native network. The method may be implemented by a Near Real-Time (RT) Radio Access Network (RAN) Intelligent Controller (RIC), and the method may include: receiving a registration message including details of an xApp instance and an xApp identifier (ID) from Service Management and Orchestration (SMO); generating a certificate signing request (CSR) based on the registration message, wherein the CSR includes the xApp ID registered in the Near RT-RIC; receiving an identification request message from the xApp instance; verifying the xApp instance based on the identification request message; and sending an identification response message to the xApp instance.
Example embodiments of the present disclosure relate to user equipment (UE) Connected mode Discontinuous Reception (C-DRX) interworking with Layer 1/Layer 2 (L1/L2) Triggered Mobility (LTM). According to embodiments, a system may include a distributed unit (DU). The DU may be configured to: provide, to at least one UE, modification information for modifying at least one on-duration of at least one Discontinuous Reception (DRX) cycle associated with the UE; and provide, to the UE during an on-duration modified by the UE based on the modification information, a Media Access Control (MAC) Control Element (CE). The MAC CE may include a cell switch command that instructs the UE to perform an LTM cell switch from a serving cell to a target cell.
Methods to overcome Logical Channel ID (LCID) limitations during bearer type switching when multiple EN-DC bearers are supported. Initially, the method recites determining a configuration mode of a bearer to be added for the UE, which may be either MCG mode or SCG-split mode. Further, the method recites allocating an LCID to the bearer based on the configuration mode and one or more criteria. Upon detecting an addition of a secondary node in the communication network, the method recites checking whether LCIDs are available for allocating to corresponding bearers requiring the SCG-split mode in response to the addition of the secondary node. The method further recites assigning the LCIDs to the corresponding bearers requiring the SCG-split mode when the LCIDs are available for the bearers, and switching a set of bearers, among the bearers, into the SCG-split mode when the LCIDs are available only for that set of bearers.
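As an illustration of the allocation step only, the sketch below grants free LCIDs to bearers that need SCG-split mode and leaves the rest unswitched; the LCID range and data structures are assumptions, not the claimed method.

# LCIDs 1..32 here stand in for the DRB LCID space; the exact range is illustrative.
AVAILABLE_LCIDS = set(range(1, 33))

def allocate_lcids(bearers, allocated: dict[str, int]):
    """Assign LCIDs to bearers that need SCG-split mode after SN addition;
    bearers are switched only if an LCID can actually be granted to them."""
    free = sorted(AVAILABLE_LCIDS - set(allocated.values()))
    switched, kept = [], []
    for bearer in bearers:
        if free:
            allocated[bearer] = free.pop(0)
            switched.append(bearer)
        else:
            kept.append(bearer)          # stays in MCG mode until an LCID frees up
    return switched, kept

in_use = {f"drb-{i}": i for i in range(1, 31)}   # 30 LCIDs already taken
switched, kept = allocate_lcids(["drb-a", "drb-b", "drb-c"], in_use)
print(switched, kept)   # two bearers switched, one kept in MCG mode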
A method of collecting, creating or updating topology and inventory data for a network includes receiving a request through an interface of a service management and orchestrator (SMO), wherein the request contains instructions for topology and inventory information to be collected, created or updated. The method further includes transferring the request to a topology and inventory (TE&IV) management module. The method further includes collecting, creating or updating topology and inventory data for at least a first component of the network based on the request. The topology and inventory data includes a relationship information between the first component and at least a second component of the network, and capability information for the first component.
Provided are a method and system for determining at least one Open Radio Access Network (O-RAN) element which causes a timing drift in an O-RAN network. In particular, the method may include: receiving, by a radio unit (RU), at least one fronthaul (FH) packet comprising a Time of Day (TOD) from a distributed unit (DU); detecting, by the RU, a timing drift in FH based on the received at least one FH packet; and identifying, by the RU, the at least one O-RAN element causing the timing drift in FH based on detecting the timing drift.
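A minimal sketch of one way the RU could flag drift by comparing received ToD values against its local clock; the nanosecond values, threshold, and attribution comment are assumptions for illustration only.

def detect_timing_drift(tod_samples_ns, local_clock_ns, threshold_ns=1000):
    """Compare the ToD carried in successive fronthaul packets with the RU's
    local clock; a growing offset beyond the threshold is reported as drift."""
    offsets = [tod - local for tod, local in zip(tod_samples_ns, local_clock_ns)]
    drifting = abs(offsets[-1] - offsets[0]) > threshold_ns
    # If the offset keeps growing with the DU's ToD, the DU (or its upstream clock)
    # is the likely source; a stable offset would instead point at the RU itself.
    return drifting, offsets

drift, offsets = detect_timing_drift(
    tod_samples_ns=[1_000_000, 2_001_500, 3_003_200],
    local_clock_ns=[1_000_000, 2_000_000, 3_000_000])
print(drift, offsets)   # True [0, 1500, 3200]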
Provided are a method, system, and device for changing the status of an Open Radio Access Network (O-RAN) Cloud (O-Cloud) resource, the method may include: obtaining, by a Service Management and Orchestration Framework (SMO) function, a first request or recommendation to update a functional status of an O-Cloud resource, the first request or recommendation being obtained from an rApp of a Non-Real-Time (Non-RT) RAN Intelligent Controller (RIC), or from an O-Cloud Maintainer, or from the SMO function directly; transmitting, by the SMO function to an Infrastructure Management Services (IMS), a second request to update the functional status of the O-Cloud resource based on the received first request or recommendation; and receiving, by the SMO function from the IMS, a first response as to whether the functional status of the O-Cloud resource was updated.
H04L 67/54 - Presence management, e.g. monitoring or registration for receipt of user log-on information, or the connection status of the users
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. quality of service [QoS], energy consumption or environmental parameters, by checking availability, by checking functioning
Provided are apparatus, method, and device for automatically performing AI/ML model inference. According to embodiments, the apparatus may be configured to: receive an inference request; obtain one or more AI/ML models based on the inference request; and perform an inference at a Service Management and Orchestration (SMO)/Non-RT RIC or Near-RT RIC using the one or more AI/ML models.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence
G06N 5/04 - Inference or reasoning models
H04L 41/082 - Configuration settings characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
H04L 43/20 - Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using virtualisation of network functions or resources, e.g. SDN or NFV entities
Disclosed herein is an apparatus associated with a network controller in an Open-Radio Access Network (O-RAN). The apparatus is configured to receive a request from a RAN function application from among a plurality of RAN function applications associated with the network controller. The request is indicative of a change in one or more parameters associated with the RAN function applications. The apparatus is configured to determine, based on a parameter dependency model, whether a conflict will occur as a result of the change in the one or more parameters. The parameter dependency model defines dependencies among a plurality of parameters associated with the plurality of RAN function applications. The apparatus is further configured to trigger, based on a set of mitigation rules, one or more reconciliation actions upon a determination that the conflict will occur as a result of the change in the one or more parameters.
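For illustration, a parameter dependency model can be held as a simple mapping and checked before a change is applied, as sketched below; the parameter names and the flat dictionary are assumptions, not the apparatus's actual model or reconciliation rules.

# Hypothetical dependency model: parameter -> parameters it depends on.
DEPENDENCIES = {
    "handover_offset": {"a3_threshold"},
    "a3_threshold": set(),
    "tx_power": {"handover_offset"},
}

def find_conflicts(change: dict, model=DEPENDENCIES):
    """Flag a requested change when another parameter depends on the changed one;
    a real controller would then run its mitigation/reconciliation rules."""
    conflicts = []
    for param in change:
        dependants = [p for p, deps in model.items() if param in deps]
        if dependants:
            conflicts.append((param, dependants))
    return conflicts

print(find_conflicts({"handover_offset": 4}))   # tx_power depends on handover_offset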
Provided are an apparatus, method, and device for managing disaster alert detection within a network. According to embodiments, the apparatus may be configured to: retrieve, based on a defined schedule, one or more event-related alerts via a method of procedure (MOP). The apparatus may also send, based on a first request from a user, the retrieved one or more event-related alerts to a management module having a graphical user interface (GUI) portal, wherein the first request may include a request to view the retrieved one or more event-related alerts.
Provided are apparatus, method, and device for managing network neighbor data. According to embodiments, the method may include: receiving, by a configuration manager, updated data of one or more neighboring cells in a telecommunications network; saving, by the configuration manager, the updated data into persistent storage; displaying, by the configuration manager, the updated data in a graphical user interface (GUI); and receiving, by the configuration manager, a first instruction to change a configuration in the one or more neighboring cells.
A node of a cellular communication network is coupled to a beam-forming antenna defining a plurality of beam directions. For each beam direction, the node determines a usage requirement and a priority of each item of user equipment within that beam direction. The node selects a beam direction from at least a portion of the plurality of beam directions according to aggregations of the usage requirement and the priority of each item of user equipment within each beam direction. The aggregation may be of a spectral usage metric that is a combination of the priority and the usage requirement. The aggregation may alternatively be of a shortlist of items of user equipment within each direction selected based on the spectral usage metrics thereof.
H04W 16/28 - Cell structures using beam steering
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using several independent spaced antennas at the transmitting station
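A minimal sketch of the beam-direction selection described above, using priority x usage requirement as an assumed form of the spectral usage metric and picking the direction with the largest aggregate; the beam labels and numbers are illustrative.

def select_beam_direction(beams):
    """Aggregate a per-UE spectral usage metric (priority x usage requirement)
    for every beam direction and pick the direction with the largest total."""
    def direction_score(ues):
        return sum(ue["priority"] * ue["usage_mbps"] for ue in ues)
    return max(beams, key=lambda direction: direction_score(beams[direction]))

beams = {
    "az_030": [{"priority": 2, "usage_mbps": 10}, {"priority": 1, "usage_mbps": 4}],
    "az_090": [{"priority": 3, "usage_mbps": 12}],
    "az_150": [{"priority": 1, "usage_mbps": 2}],
}
print(select_beam_direction(beams))   # 'az_090' (aggregate 36 vs 24 vs 2)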
In general, the current subject matter relates to sounding reference signal analysis based user mobility estimation. In some implementations, sounding reference signal analysis based user mobility estimation includes receiving, via one or more antennas, using at least one processor, a plurality of sounding reference signals from a user equipment that is external to the at least one processor, analyzing, using the at least one processor, the plurality of sounding reference signals during a first time period and a second time period, determining, using the at least one processor, a target correlation value based on analyzing the plurality of sounding reference signals during the first time period and the second time period, and determining, using the at least one processor, based on the target correlation value, a mobility of the user equipment in a geographical area.
H04L 5/00 - Arrangements affording multiple use of the transmission path
H04W 8/02 - Processing of mobility data, e.g. registration information at HLR [Home Location Register] or VLR [Visitor Location Register]; Transfer of mobility data, e.g. between HLR, VLR or external networks
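As an illustration of the SRS-based mobility estimation described above, the sketch below correlates two channel snapshots and maps the correlation to a coarse mobility class; the snapshot representation and thresholds are assumptions, not the claimed analysis.

import math

def correlation(a, b):
    """Normalised correlation between two SRS snapshots (plain Python, no NumPy)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den if den else 0.0

def estimate_mobility(srs_t1, srs_t2, high=0.95, low=0.7):
    """High correlation between the two periods suggests a static UE; low
    correlation suggests the channel, and hence the UE, is changing."""
    rho = correlation(srs_t1, srs_t2)
    if rho >= high:
        return rho, "STATIC"
    if rho >= low:
        return rho, "LOW_MOBILITY"
    return rho, "HIGH_MOBILITY"

print(estimate_mobility([0.9, 0.8, 0.7, 0.6], [0.88, 0.79, 0.72, 0.58]))
print(estimate_mobility([0.9, 0.8, 0.7, 0.6], [0.1, -0.7, 0.2, -0.9]))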
Embodiments of the present disclosure relate to a Control Plane System (CPS) of a Radio Access Network (RAN) node. The CPS is configured to detect a Protocol Data Unit (PDU) session established between a User Equipment (UE) and a data network and generate a session establishment request based on the detection. The session establishment request comprises a first information element (IE) to create a Packet Detection Rule (PDR) to classify one or more IP packets received from a data network addressed to the UE as Down Link (DL) packets and a second IE to create a plurality of Forward Actions corresponding to the PDR to route the DL packets from a User Plane System (UPS) to the UE through a Data Radio Bearer (DRB). The CPS is further configured to transmit the session establishment request to the UPS to facilitate the UPS in routing the downlink packets to the UE through the DRB.
H04W 40/12 - Communication route or path selection, e.g. power-based or shortest path routing, based on transmission quality or channel quality
H04W 40/24 - Connectivity information management, e.g. connectivity discovery or connectivity update
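A rough sketch of the two information elements in the session establishment request described above, built as plain dictionaries; the field names are illustrative and are not the exact IE encodings.

def build_session_establishment_request(ue_ip: str, teid: int, drb_id: int) -> dict:
    """Assemble the two IEs described in the abstract: a PDR that classifies
    packets addressed to the UE as downlink, and a forwarding action that sends
    them towards the RAN node over the bearer (field names are illustrative)."""
    pdr = {
        "pdr_id": 1,
        "match": {"destination_ip": ue_ip, "direction": "downlink"},
    }
    far = {
        "far_id": 1,
        "action": "FORWARD",
        "destination": {"interface": "access", "teid": teid, "drb_id": drb_id},
    }
    return {"session_establishment_request": {"create_pdr": [pdr], "create_far": [far]}}

print(build_session_establishment_request("10.45.0.7", teid=0x1A2B, drb_id=1))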
Carrier aggregation between remote DUs in a cellular communication network is used to increase throughput to user equipment. In response to user equipment being located in cells of remote DUs, a physical link and a logical link are created between the cells of the DUs. The physical link may be an L2 or L3 network connection. Carrier aggregation is performed with transmission of data over the physical link while the latency and status of the physical link are acceptable. A context may be used to facilitate the carrier aggregation, the context including identifiers of the cells as well as addresses (IP, MAC) of the DUs.
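For illustration only, the sketch below models the inter-DU context and the latency/status gate that decides whether aggregation may continue; the class, latency budget, and addresses are assumptions rather than the claimed design.

class InterDuLink:
    """Minimal model of the physical/logical link pair between two remote DUs."""

    def __init__(self, latency_budget_ms: float = 2.0):
        self.latency_budget_ms = latency_budget_ms
        self.context = None

    def setup(self, cell_a: str, cell_b: str, du_a_ip: str, du_b_ip: str):
        # The context keeps the identifiers needed to aggregate the two cells.
        self.context = {"cells": (cell_a, cell_b), "du_addresses": (du_a_ip, du_b_ip)}

    def aggregation_allowed(self, measured_latency_ms: float, link_up: bool) -> bool:
        """Carrier aggregation continues only while the link status and latency
        stay within budget; otherwise traffic falls back to the serving cell."""
        return link_up and measured_latency_ms <= self.latency_budget_ms

link = InterDuLink()
link.setup("cell-101", "cell-202", "10.0.1.10", "10.0.2.10")
print(link.aggregation_allowed(measured_latency_ms=1.4, link_up=True))   # True
print(link.aggregation_allowed(measured_latency_ms=3.1, link_up=True))   # False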