To provide a low-latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high-speed interfaces between these machines. Some or all of these interfaces operate in a non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed when multiple requests cause one or more components to stall. In addition, each of these RIC components has an internal architecture that is designed to operate in a non-blocking manner, so that no one process of a component can block the operation of another process of that component. All of these low-latency features allow the near RT RIC to serve as a high-speed IO between the E2 nodes and the xApps.
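As an illustration of the non-blocking interface idea, the following Python sketch (all names hypothetical, not taken from the embodiments themselves) shows a bounded channel whose producer and consumer sides both return immediately, so a slow xApp cannot stall a datapath process:

```python
from collections import deque

class NonBlockingChannel:
    """Illustrative bounded channel between RIC components: enqueue and
    dequeue never block, so a slow consumer (e.g., an xApp) cannot stall
    the datapath thread that feeds it."""

    def __init__(self, capacity=1024):
        self._queue = deque(maxlen=capacity)  # deque ops are atomic in CPython

    def offer(self, message):
        """Producer side: always returns immediately. With maxlen set,
        the oldest message is dropped rather than blocking the producer."""
        self._queue.append(message)
        return True

    def poll(self):
        """Consumer side: returns the next message or None, never blocks."""
        try:
            return self._queue.popleft()
        except IndexError:
            return None

# The datapath component can push E2 indications without waiting on xApps:
channel = NonBlockingChannel()
channel.offer({"e2_node": "gnb-001", "indication": b"\x01\x02"})
print(channel.poll())
```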
Some embodiments of the invention provide a method for providing flow processing offload (FPO) for a host computer at a physical network interface card (pNIC) connected to the host computer. A set of compute nodes executing on the host computer are each associated with a set of interfaces that are each assigned a locally-unique virtual port identifier (VPID) by a flow processing and action generator. The pNIC includes a set of interfaces that are assigned physical port identifiers (PPIDs) by the pNIC. The method includes receiving a data message at an interface of the pNIC and matching the data message to a stored flow entry that specifies a destination using a VPID. The method also includes identifying, using the VPID, a PPID as a destination of the received data message by performing a lookup in a mapping table storing a set of VPIDs and a corresponding set of PPIDs and forwarding the data message to an interface of the pNIC associated with the identified PPID.
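A minimal sketch of the described lookup chain, assuming illustrative flow_table and vpid_to_ppid structures (the abstract does not specify their layout):

```python
# Hypothetical names (flow_table, vpid_to_ppid, process) illustrate the
# described match -> translate -> forward chain; they are not from the patent.

flow_table = {
    # (src_ip, dst_ip, dst_port) -> VPID naming the destination interface
    ("10.0.0.5", "10.0.1.9", 443): "vpid-7",
}

vpid_to_ppid = {
    # mapping table: virtual port identifier -> physical port identifier
    "vpid-7": "ppid-2",
}

def process(data_message):
    """Match the message to a stored flow entry, translate the VPID it
    names into a PPID, and forward out the matching pNIC interface."""
    key = (data_message["src_ip"], data_message["dst_ip"], data_message["dst_port"])
    entry = flow_table.get(key)
    if entry is None:
        return None  # miss: punt to the flow processing and action generator
    ppid = vpid_to_ppid[entry]
    return ppid  # forward to the pNIC interface associated with this PPID

print(process({"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "dst_port": 443}))
```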
Some embodiments provide a method for establishing multiple virtual service networks over multiple datacenters. The method configures, for each virtual service network of the plurality of virtual service networks, a set of machines distributed across the datacenters to implement an ordered set of network services for the virtual service network. The method configures multiple service network selectors executing within the datacenters to receive a data message, select one of the virtual service networks for the data message based on analysis of contents of the data message, determine a location within the datacenters for a machine implementing a first network service of the ordered set of network services for the selected virtual service network, and transmit the data message to the machine implementing the first network service.
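The selector logic might be sketched as follows; the VirtualServiceNetwork fields and the slice_tag header used for classification are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class VirtualServiceNetwork:
    name: str
    services: list    # ordered set of network services
    locations: dict   # service name -> datacenter location of an implementing machine

def select_vsn(message, vsns):
    """Toy selector: picks a virtual service network by inspecting the
    message contents (here, just a slice tag in the header)."""
    return vsns[message["slice_tag"]]

def handle(message, vsns):
    vsn = select_vsn(message, vsns)
    first_service = vsn.services[0]
    location = vsn.locations[first_service]  # where the implementing machine runs
    return first_service, location           # transmit the message there

vsns = {
    "embb": VirtualServiceNetwork(
        name="embb",
        services=["firewall", "load_balancer"],
        locations={"firewall": "dc-east", "load_balancer": "dc-west"},
    ),
}
print(handle({"slice_tag": "embb", "payload": b""}, vsns))
```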
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Some embodiments provide novel methods for performing services for machines operating in one or more datacenters. For instance, for a group of related guest machines (e.g., a group of tenant machines), some embodiments define two different forwarding planes: (1) a guest forwarding plane and (2) a service forwarding plane. The guest forwarding plane connects to the machines in the group and performs L2 and/or L3 forwarding for these machines. The service forwarding plane (1) connects to the service nodes that perform services on data messages sent to and from these machines, and (2) forwards these data messages to the service nodes. In some embodiments, the guest machines do not connect directly with the service forwarding plane. For instance, in some embodiments, each forwarding plane connects to a machine or service node through a port that receives data messages from, or supplies data messages to, the machine or service node. In such embodiments, the service forwarding plane does not have a port that directly receives data messages from, or supplies data messages to, any guest machine. Instead, in some such embodiments, data associated with a guest machine is routed to a port proxy module executing on the same host computer, and this module has a service plane port. In some embodiments, this port proxy module can indirectly connect more than one guest machine on the same host to the service plane (i.e., can serve as the port proxy module for more than one guest machine on the same host).
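A toy Python model of the two forwarding planes and the per-host port proxy (class and port names are invented for illustration):

```python
class ForwardingPlane:
    """Stand-in for a forwarding plane: just a table of attached ports."""
    def __init__(self, name):
        self.name = name
        self.ports = {}  # port id -> attached endpoint

    def attach(self, port_id, endpoint):
        self.ports[port_id] = endpoint

class PortProxy:
    """One proxy per host: owns the only service-plane port that guest
    traffic on that host can reach, and multiplexes several guests."""
    def __init__(self, service_plane):
        self.guests = set()
        service_plane.attach("proxy-port", self)

    def register(self, guest):
        self.guests.add(guest)

guest_plane = ForwardingPlane("guest")
service_plane = ForwardingPlane("service")
proxy = PortProxy(service_plane)
guest_plane.attach("g1", "guest-vm-1")
proxy.register("guest-vm-1")  # the guest reaches services only via the proxy
# The guest machine itself never appears on the service plane:
assert "guest-vm-1" not in service_plane.ports.values()
```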
Examples can include an optimizer that dynamically determines where to place virtual network functions for a slice in a distributed Telco cloud network. The optimizer can determine a slice path that complies with a service level agreement and balances network load. The virtual network functions of the slice can be provisioned at clouds identified by the optimal slice path. In one example, performance metrics are normalized, and tenant-selected weights can be applied. This can allow the optimizer to prioritize particular SLA attributes in choosing an optimal slice path.
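A minimal sketch of the normalize-then-weight scoring step, assuming two example metrics and tenant-selected weights (metric names, paths, and values are illustrative):

```python
def normalize(values):
    """Scale raw per-path metric values into [0, 1] so different units
    become comparable before weighting."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {path: (v - lo) / span for path, v in values.items()}

def score_paths(metrics, weights):
    """Weighted sum of normalized per-path metrics. Lower is better here,
    so both example metrics are 'badness' measures; a 'goodness' metric
    would be inverted before this step."""
    normalized = {m: normalize(per_path) for m, per_path in metrics.items()}
    paths = next(iter(metrics.values())).keys()
    return {p: sum(weights[m] * normalized[m][p] for m in metrics) for p in paths}

metrics = {
    "latency_ms": {"path-a": 12.0, "path-b": 30.0},
    "load_pct":   {"path-a": 80.0, "path-b": 35.0},
}
weights = {"latency_ms": 0.7, "load_pct": 0.3}  # tenant prioritizes latency
scores = score_paths(metrics, weights)
print(min(scores, key=scores.get), scores)      # path-a wins despite higher load
```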
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 41/0826 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for reduction of network costs
H04L 41/0893 - Assignment of logical groups to network elements
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 41/5025 - Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
H04L 47/2425 - Traffic characterised by specific attributes, e.g. priority or QoS for supporting services specification, e.g. SLA
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
H04L 67/101 - Server selection for load balancing based on network conditions
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
HIERARCHICAL API FOR DEFINING A MULTI-SEGMENTED APPLICATION IN AN SDDC
Some embodiments provide a simplified mechanism to deploy and control a multi-segmented application by using application-based manifests that express how application segments of the multi-segment application are to be defined or modified, and how the communication profiles between these segments are to be specified. In some embodiments, these manifests are application specific. Also, in some embodiments, deployment managers in a software-defined datacenter (SDDC) provide these manifests as templates to administrators, who can use these templates to express their intent when they are deploying multi-segment applications in the datacenter. Application-based manifests can also be used to control previously deployed multi-segmented applications in the SDDC. Such manifests enable administrators to manage fine-grained micro-segmentation rules based on endpoint and network attributes.
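A hypothetical manifest, expressed here as a Python structure since the abstract does not define a concrete format, together with a sketch of expanding its communication profiles into micro-segmentation rules:

```python
# Hypothetical manifest schema, invented for illustration.
manifest = {
    "application": "web-store",
    "segments": [
        {"name": "web", "selector": {"tier": "frontend"}},
        {"name": "db",  "selector": {"tier": "database"}},
    ],
    "communication_profiles": [
        # Only the web segment may open connections to the database segment.
        {"from": "web", "to": "db", "ports": [5432], "action": "allow"},
        {"from": "*",   "to": "db", "ports": "*",    "action": "deny"},
    ],
}

def to_microseg_rules(manifest):
    """Expand segment-level profiles into fine-grained rules keyed on
    endpoint attributes (the per-segment selectors)."""
    selectors = {s["name"]: s["selector"] for s in manifest["segments"]}
    rules = []
    for p in manifest["communication_profiles"]:
        rules.append({
            "src": selectors.get(p["from"], {"any": True}),
            "dst": selectors[p["to"]],
            "ports": p["ports"],
            "action": p["action"],
        })
    return rules

for rule in to_microseg_rules(manifest):
    print(rule)
```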
A virtual network over several public clouds of several public cloud providers and/or in several regions is an overlay network that spans these public clouds to interconnect one or more private networks (e.g., networks within branches, divisions, or departments of the entity or their associated datacenters), mobile users, SaaS (Software as a Service) provider machines, and other web applications of the entity. The virtual network can be configured to optimize the routing of the entity's data messages to their destinations for the best end-to-end performance, reliability, and security, while trying to minimize the routing of this traffic through the Internet. Also, the virtual network can be configured to optimize the layer 4 processing of the data message flows passing through the network.
A system and method for automatic detection of a network incident from real-time network data is disclosed. The method includes: collecting real-time network data; executing performance calculations on the real-time network data to compute performance metrics; and detecting a pattern over a time window, where detecting a pattern includes detecting that the proportion of metric values crossing a threshold exceeds a defined percentage, detecting the presence of a sequence of metric values, detecting a time-ordered stretch of metric values whose length exceeds a defined threshold, detecting the cyclical presence of a sequence of metric values, or combinations thereof.
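Two of the listed patterns are easy to sketch; the thresholds and the sample window below are illustrative only:

```python
def proportion_above(window, threshold, min_fraction):
    """Pattern 1: the fraction of metric values in the window that cross
    the threshold exceeds a defined percentage."""
    hits = sum(1 for v in window if v > threshold)
    return hits / len(window) >= min_fraction

def longest_stretch_above(window, threshold):
    """Pattern 2: length of the longest time-ordered stretch of values
    above the threshold."""
    best = run = 0
    for v in window:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return best

window = [12, 95, 97, 96, 15, 94, 10, 98, 99, 97]  # e.g., latency samples
print(proportion_above(window, threshold=90, min_fraction=0.5))  # True: 7/10
print(longest_stretch_above(window, threshold=90))               # 3
```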
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/0659 - Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities
H04L 41/142 - Network analysis or design using statistical or mathematical methods
Remote desktop servers include a display encoder that maintains a secondary framebuffer containing display data to be encoded and transmitted to a remote client display, along with a list of drawing primitives that effectuate updated display data in the secondary framebuffer. The display encoder submits requests to receive the list of drawing primitives to a video adapter driver that receives and tracks drawing primitives that, when executed, update a primary framebuffer.
Remote desktop servers include a display encoder that maintains a secondary framebuffer that contains display data to be encoded and transmitted to a remote client display. The display encoder submits requests to update the display data in the secondary framebuffer to a video adapter driver that has access to a primary framebuffer whose display data is updated according to drawing commands received from applications running on the remote desktop servers. The video adapter driver utilizes a spatial data structure to track changes made to the display data located in regions of the primary framebuffer and copies the display data in those regions of the primary framebuffer to corresponding regions in the secondary framebuffer.
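A simplified stand-in for the driver's spatial tracking structure, here a fixed tile grid (the actual data structure is not specified in the abstracts):

```python
class DirtyRegionTracker:
    """Simplified stand-in for the video adapter driver's spatial data
    structure: a grid of tiles marking which parts of the primary
    framebuffer changed since the encoder's last request."""

    def __init__(self, width, height, tile=64):
        self.tile = tile
        self.dirty = set()

    def mark(self, x, y, w, h):
        """Called when a drawing primitive updates a rectangle."""
        for ty in range(y // self.tile, (y + h - 1) // self.tile + 1):
            for tx in range(x // self.tile, (x + w - 1) // self.tile + 1):
                self.dirty.add((tx, ty))

    def flush(self):
        """Encoder request: return dirty tiles to copy into the secondary
        framebuffer, then reset the tracking state."""
        regions, self.dirty = self.dirty, set()
        return regions

tracker = DirtyRegionTracker(1920, 1080)
tracker.mark(100, 100, 20, 20)  # a primitive touched a single tile
print(tracker.flush())          # {(1, 1)}
```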
A method creates a distributed virtual switch (DVswitch) and distributed virtual ports (DVports) for the DVswitch. The DVswitch binds virtual switches in a collection of hosts together in a software abstraction. Also, the DVports are available for connection by virtual network interface cards (VNICs) of virtual machines in the collection of hosts. A request is received for a connection of a virtual network interface card (VNIC) of a virtual machine for a host in the collection of hosts to a DVport. If the requested DVport is available, the method provides connection information for the requested DVport to the host to allow the host to connect the requested DVport to the VNIC. The DVport stores a runtime state for a virtual port associated with a virtual switch for the host and the virtual switch forwards network frames between the VNIC and a physical network interface card (NIC).
A method for persisting a state of a virtual port in a virtualized computer system is described. A distributed virtual port (DVport) is stored in a persistent storage location, the DVport comprising a state of a corresponding virtual port and configuration settings of the virtual port. In addition, an association between the virtual port and a virtual network interface card (VNIC) connected to the virtual port is stored. When a virtual machine corresponding to the VNIC is restarted, the state stored in the DVport is restored from the persistent storage location to a new virtual port.
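A compact sketch of a DVport that connects to a VNIC, persists its runtime state and configuration, and restores them onto a new virtual port; the JSON layout is an assumption:

```python
import json, os, tempfile

class DVport:
    """Toy distributed virtual port: holds the runtime state of the backing
    virtual port plus its configuration, so it can outlive restarts."""
    def __init__(self, port_id, config):
        self.port_id = port_id
        self.config = config
        self.runtime_state = {}
        self.vnic = None  # association with a connected VNIC

    def connect(self, vnic):
        if self.vnic is not None:
            raise RuntimeError("DVport already in use")
        self.vnic = vnic

    def persist(self, path):
        with open(path, "w") as f:
            json.dump({"port_id": self.port_id, "config": self.config,
                       "state": self.runtime_state, "vnic": self.vnic}, f)

    @classmethod
    def restore(cls, path):
        with open(path) as f:
            saved = json.load(f)
        port = cls(saved["port_id"], saved["config"])
        port.runtime_state = saved["state"]  # restored onto a new virtual port
        port.vnic = saved["vnic"]
        return port

port = DVport("dvport-12", {"vlan": 100})
port.connect("vnic-of-vm-1")
port.runtime_state["link"] = "up"
path = os.path.join(tempfile.gettempdir(), "dvport-12.json")
port.persist(path)
print(DVport.restore(path).runtime_state)
```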
A server-based desktop-virtual machines architecture may be extended to a client machine. In one embodiment, a user desktop is remotely accessed from a client system. The remote desktop is generated by a first virtual machine running on a server system, which may comprise one or more server computers. During execution of the first virtual machine, writes to a corresponding virtual disk are directed to a delta disk file or redo log. A copy of the virtual disk is created on the client system. When a user decides to 'check out' his or her desktop, the first virtual machine is terminated (if it is running) and a copy of the delta disk is created on the client system. Once the delta disk is present on the client system, a second virtual machine can be started on the client system using the virtual disk and delta disk to provide local access to the user's desktop at the client system. This allows the user to then access his or her desktop without being connected to a network.
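The check-out flow might look like the following sketch, with stand-in files in place of real virtual disk and delta disk images:

```python
import os, shutil, tempfile

def check_out(server_disk, server_delta, client_dir):
    """Sketch of the described check-out flow: the base virtual disk may be
    copied ahead of time; at check-out the first VM is stopped and only the
    (small) delta disk is copied, after which a local VM can run."""
    os.makedirs(client_dir, exist_ok=True)
    client_disk = os.path.join(client_dir, os.path.basename(server_disk))
    if not os.path.exists(client_disk):  # base copy may already exist
        shutil.copy(server_disk, client_disk)
    # ... the server-side VM is terminated here before taking the delta ...
    client_delta = os.path.join(client_dir, os.path.basename(server_delta))
    shutil.copy(server_delta, client_delta)
    return client_disk, client_delta  # start the local VM from these files

base = tempfile.mkdtemp()
server_disk = os.path.join(base, "desktop.vmdk")
server_delta = os.path.join(base, "desktop-delta.vmdk")
for p in (server_disk, server_delta):
    open(p, "wb").close()  # empty stand-ins for real disk files
print(check_out(server_disk, server_delta, os.path.join(base, "client")))
```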
G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs