An example method of handling traffic for an existing connection of a virtual machine (VM) migrated from a source site to a destination site includes: receiving, at an edge server of the destination site, the traffic, the traffic being associated with a network flow; determining, by the edge server of the destination site, that a stateful service of the edge server does not have state for the network flow; sending, by the edge server of the destination site, a threshold number of packets of the traffic to a plurality of sites; receiving, at the edge server of the destination site, an acknowledgement from the source site that the source site has the state for the network flow; and creating, by the edge server of the destination site, a flow mapping to send the traffic associated with the network flow to the source site.
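The flow-state handoff described above can be sketched as follows. This is a minimal illustration under our own assumptions; the class and method names (`EdgeServer`, `handle_packet`, `on_ack`) and the probe threshold are not from the disclosure.

```python
PROBE_THRESHOLD = 3  # assumed threshold number of packets to replicate to peer sites

class EdgeServer:
    """Illustrative edge server that redirects flows whose state lives at another site."""

    def __init__(self, peer_sites):
        self.peer_sites = peer_sites   # other sites that may hold flow state
        self.flow_state = {}           # flows whose state this site's stateful service owns
        self.flow_mappings = {}        # flow -> owning site (redirect table)
        self.probed = {}               # flow -> number of packets already sent to all sites

    def handle_packet(self, flow_id, packet):
        if flow_id in self.flow_state:
            return ("local", packet)   # stateful service has state: process here
        if flow_id in self.flow_mappings:
            # A mapping was created earlier: forward to the site that has the state.
            return ("forward", self.flow_mappings[flow_id])
        # No state and no mapping: replicate up to the threshold number of
        # packets to the plurality of sites and wait for an acknowledgement.
        count = self.probed.get(flow_id, 0)
        if count < PROBE_THRESHOLD:
            self.probed[flow_id] = count + 1
            return ("probe", list(self.peer_sites))
        return ("drop", None)

    def on_ack(self, flow_id, source_site):
        # A peer site acknowledged that it holds the flow state: pin the flow to it.
        self.flow_mappings[flow_id] = source_site
        self.probed.pop(flow_id, None)
```

Once the acknowledgement arrives, subsequent packets of the flow are forwarded to the source site without further probing.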
A computing environment can include a host system that maintains a guest system, and a hardware component configured to implement artificial intelligence (“AI”) methods of processing and analyzing data. The guest system can provide a virtual computing environment that receives a request to implement an AI application, and utilize a framework and a guest library to convert data from the AI application into an intermediate representation (“IR”). The host system can receive the IR with a virtual device (“VD”), and utilize an IR backend to translate the IR into hardware operations for the hardware component. Translated hardware operations can be provided to, and carried out by, the hardware component to provide an implementation of the AI application. Results of the hardware operations can be transmitted from the VD of the host system to a VD driver of the guest system, virtualizing the hardware component relative to the guest system.
A novel method for dynamic network service allocation that maps generic services into specific configurations of service resources in a network is provided. An application that is assigned to be performed by computing resources in the network is associated with a set of generic services, and the method maps the set of generic services to the service resources based on the assignment of the application to the computing resources. The mapping of generic services is further based on a level of service that is chosen for the application, where the set of generic services is mapped to different sets of network resources according to different levels of service.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
H04L 41/5051 - Service on demand, e.g. definition and deployment of services in real time
4.
AUTOMATED METHODS AND SYSTEMS THAT PROVIDE RESOURCE RECOMMENDATIONS FOR VIRTUAL MACHINES
The current document is directed to methods and systems that generate recommendations for resource specifications used in virtual-machine-hosting requests. When distributed applications are submitted to distributed-computer-system-based hosting platforms for hosting, the hosting requestor generally specifies the computational resources that will need to be provisioned for each virtual machine included in a set of virtual machines that correspond to the distributed application, such as the processor bandwidth, memory size, local and remote networking bandwidths, and data-storage capacity needed for supporting execution of each virtual machine. In many cases, the hosting platform reserves the specified computational resources and accordingly charges for them. However, in many cases, the specified computational resources significantly exceed the computational resources actually needed for hosting the distributed application. The currently disclosed methods and systems employ machine learning to provide accurate estimates of the computational resources actually needed by the VMs of a distributed application.
A system for optimizing network traffic management is provided. The system includes a plurality of data processing units (DPUs), each assigned an identifier and configured to process network traffic for associated virtual network interface cards (vNICs). The system also includes a vNIC placement handler configured to receive media access control (MAC) address information from the plurality of DPUs and execute a relocation of at least two vNICs to be directly associated with one of the plurality of DPUs, without passing through an inter-DPU physical network, based on mapping the MAC address information to the identifier. The system further includes a communication framework integrated with the vNIC placement handler to enable transmission of MAC address information from the plurality of DPUs to the vNIC placement handler and offload network traffic from the vNICs to corresponding one or more of the plurality of DPUs.
Some embodiments of the invention provide novel methods for facilitating a distributed SNAT (dSNAT) middlebox service operation for a first network at a host computer in the first network on which the dSNAT middlebox service operation is performed and a gateway device between the first network and a second network. The novel methods enable dSNAT that provides stateful SNAT at multiple host computers, thus avoiding the bottleneck problem associated with providing stateful SNAT at gateways and also significantly reducing the need to redirect packets received at the wrong host, by using a capacity of off-the-shelf gateway devices to perform IPv6 encapsulation for IPv4 packets and assigning, to each host executing a dSNAT middlebox service instance, locally unique IPv6 addresses that are used by the gateway device.
A method for handling system calls during execution of an application over a plurality of nodes, each including processor and memory, and an application monitor and a runtime executed in the processor thereof, includes: establishing first threads in the runtime of a first node and establishing second threads in the runtime of a second node; determining by the application monitor of the first node, in response to a system call made by a first thread, that executing the system call involves resources present on the second node; sending by the application monitor of the first node, the system call and arguments of the system call to the second node for execution thereat; receiving by the application monitor of the first node, results of the system call from the second node; and returning by the application monitor of the first node, the results of the system call to the first thread.
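The cross-node system-call flow above can be illustrated with a small sketch. The `Node` and `ApplicationMonitor` classes, the resource sets, and the string result are our assumptions for illustration; they are not the claimed implementation.

```python
class Node:
    """Illustrative node holding a set of local resources (e.g. open files)."""

    def __init__(self, name, resources):
        self.name = name
        self.resources = resources

    def execute_syscall(self, syscall, args):
        # Stand-in for actually executing the call against local resources.
        return f"{syscall}{args} executed on {self.name}"

class ApplicationMonitor:
    """Decides whether a thread's system call runs locally or is shipped to a peer."""

    def __init__(self, local_node, peers):
        self.local = local_node
        self.peers = peers  # name -> Node

    def handle_syscall(self, syscall, args, resource):
        if resource in self.local.resources:
            return self.local.execute_syscall(syscall, args)
        # The resource lives on another node: send the call and its arguments
        # there for execution, then return the remote result to the caller.
        for peer in self.peers.values():
            if resource in peer.resources:
                return peer.execute_syscall(syscall, args)
        raise FileNotFoundError(resource)
```

The calling thread sees the same return value either way; only the monitor knows whether the call ran locally or remotely.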
The disclosure herein describes deduplicating data chunks using chunk objects. A batch of data chunks is obtained from an original data object and a hash value is calculated for each data chunk. A first duplicate data chunk is identified using the hash value and a hash map. A chunk logical block address (LBA) of a chunk object is assigned to the duplicate data chunk. Payload data of the duplicate data chunk is migrated from the original data object to the chunk object, and a chunk map is updated to map the chunk LBA to a physical sector address (PSA) of the migrated payload data on the chunk object. A hash entry is updated to map to the chunk object and the chunk LBA. An address map of the original data object is updated to map an LBA of the duplicate data chunk to the chunk object and the chunk LBA.
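The hash-map and chunk-map bookkeeping described above can be sketched as follows. The data-structure names mirror the description, but the single chunk object, the sequential LBA/PSA counters, and the simplification of migrating every first-seen chunk are assumptions made for brevity.

```python
import hashlib

class ChunkStore:
    """Illustrative chunk-object deduplicator."""

    def __init__(self):
        self.hash_map = {}     # sha256 digest -> (chunk_object_id, chunk_lba)
        self.chunk_map = {}    # (chunk_object_id, chunk_lba) -> physical sector address
        self.address_map = {}  # original-object LBA -> (chunk_object_id, chunk_lba)
        self.next_lba = 0
        self.next_psa = 0

    def ingest(self, lba, payload):
        """Returns True when the chunk was a duplicate of one already stored."""
        digest = hashlib.sha256(payload).hexdigest()
        if digest in self.hash_map:
            # Duplicate: point the original object's LBA at the shared chunk.
            self.address_map[lba] = self.hash_map[digest]
            return True
        # First occurrence: migrate the payload into the chunk object and
        # record it in the hash map and chunk map.
        chunk_lba, psa = self.next_lba, self.next_psa
        self.next_lba += 1
        self.next_psa += 1
        self.hash_map[digest] = ("chunk-obj-0", chunk_lba)
        self.chunk_map[("chunk-obj-0", chunk_lba)] = psa
        self.address_map[lba] = ("chunk-obj-0", chunk_lba)
        return False
```

After ingestion, two original LBAs holding identical payloads resolve through the address map to the same chunk LBA, and the chunk map resolves that chunk LBA to one physical sector.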
Example methods and systems for virtual tunnel endpoint (VTEP) mapping for overlay networking are described. One example may involve a computer system monitoring multiple VTEPs that are configured for overlay networking. In response to detecting a state transition associated with a first VTEP from a healthy state to an unhealthy state, the computer system may identify mapping information that associates a virtualized computing instance with the first VTEP in the unhealthy state; and update the mapping information to associate the virtualized computing instance with a second VTEP in the healthy state. In response to detecting an egress packet from the virtualized computing instance to a destination, an encapsulated packet may be generated and sent towards the destination based on the updated mapping information. The encapsulated packet may include the egress packet and an outer header identifying the second VTEP to be a source VTEP.
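The health-transition handling can be sketched as below. The `VtepMapper` class, the string health states, and the pick-first remapping policy are illustrative assumptions; the example only shows the mapping update and the outer-header selection.

```python
class VtepMapper:
    """Illustrative VTEP mapping table with health-driven failover."""

    def __init__(self):
        self.health = {}   # vtep -> "healthy" | "unhealthy"
        self.mapping = {}  # virtualized computing instance -> source VTEP

    def on_state_transition(self, vtep, new_state):
        self.health[vtep] = new_state
        if new_state != "unhealthy":
            return
        healthy = [v for v, s in self.health.items() if s == "healthy"]
        if not healthy:
            return
        # Remap every instance associated with the unhealthy VTEP to a healthy one.
        for instance, mapped in self.mapping.items():
            if mapped == vtep:
                self.mapping[instance] = healthy[0]

    def encapsulate(self, instance, egress_packet):
        # The outer header identifies the (possibly remapped) source VTEP.
        src_vtep = self.mapping[instance]
        return {"outer_src_vtep": src_vtep, "inner": egress_packet}
```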
In one set of embodiments, a computer system can receive a request to provision a virtual machine (VM) in a host cluster, where the VM is associated with a virtual graphics processing unit (GPU) profile indicating a desired or required framebuffer memory size of a virtual GPU of the VM. In response, the computer system can execute an algorithm that identifies, from among a plurality of physical GPUs installed in the host cluster, a physical GPU on which the VM may be placed, where the identified physical GPU has sufficient free framebuffer memory to accommodate the desired or required framebuffer memory size, and where the algorithm allows multiple VMs associated with different virtual GPU profiles to be placed on a single physical GPU in the plurality of physical GPUs. The computer system can then place the VM on the identified physical GPU.
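A placement of this kind can be sketched as a first-fit search over free framebuffer memory. The abstract does not disclose the exact algorithm, so first-fit is our assumption; note that a GPU accumulates VMs with differing profile sizes, matching the claim that heterogeneous profiles may share one physical GPU.

```python
def place_vm(physical_gpus, profile_fb_size):
    """First-fit sketch: physical_gpus is a list of dicts with 'free_fb'
    (free framebuffer memory, MB) and 'vms' (placed profile sizes).
    Returns the index of the chosen GPU, or None if no GPU has room."""
    for idx, gpu in enumerate(physical_gpus):
        if gpu["free_fb"] >= profile_fb_size:
            gpu["free_fb"] -= profile_fb_size
            # Heterogeneous virtual GPU profiles may share a single physical GPU.
            gpu["vms"].append(profile_fb_size)
            return idx
    return None
```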
A method for containerized workload scheduling can include monitoring network traffic between a first containerized workload deployed on a node in a virtual computing environment to determine affinities between the first containerized workload and other containerized workloads in the virtual computing environment. The method can further include scheduling, based, at least in part, on the determined affinities between the first containerized workload and the other containerized workloads, execution of a second containerized workload on the node on which the first containerized workload is deployed.
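The affinity-driven scheduling step can be sketched as follows. Using observed traffic volume as the affinity score and picking the maximum-affinity node are our assumptions; the names are illustrative.

```python
def schedule(new_workload, nodes, traffic):
    """Illustrative affinity scheduler.
    nodes: dict mapping node name -> list of workloads deployed on it.
    traffic: dict mapping (workload_a, workload_b) -> observed traffic volume."""

    def affinity(node):
        # Sum observed traffic (in both directions) between the new workload
        # and every workload already deployed on this node.
        return sum(traffic.get((new_workload, w), 0) +
                   traffic.get((w, new_workload), 0)
                   for w in nodes[node])

    # Place the new workload on the node hosting its highest-affinity peers.
    return max(nodes, key=affinity)
```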
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/0897 - Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
H04L 43/0876 - Network utilisation, e.g. volume of load or congestion level
The current document is directed to an improved notification system for distributed applications. The new and improved notification system provides a notification-customization interface and a notification dashboard, accessible to users of the notification system through commonly available web browsers. The customization interface allows users to specify the types of notifications which the user desires, to specify the information to be included in the notifications, to specify the user devices to which notifications are to be transmitted, and to specify time ranges and/or relative times for notification transmission. The notification dashboard provides a dashboard interface for receiving, storing, and accessing stored notifications accessible to Internet-connected devices. In described implementations, the notification system employs an ontology to provide a common language for notification specification.
Some embodiments provide novel methods for performing services for machines operating in one or more datacenters. For instance, for a group of related guest machines (e.g., a group of tenant machines), some embodiments define two different forwarding planes: (1) a guest forwarding plane and (2) a service forwarding plane. The guest forwarding plane connects to the machines in the group and performs L2 and/or L3 forwarding for these machines. The service forwarding plane (1) connects to the service nodes that perform services on data messages sent to and from these machines, and (2) forwards these data messages to the service nodes. In some embodiments, the guest machines do not connect directly with the service forwarding plane.
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 41/0893 - Assignment of logical groups to network elements
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 41/5003 - Managing SLA; Interaction between SLA and QoS
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/302 - Route determination based on requested QoS
H04L 45/586 - Association of routers of virtual routers
H04L 67/563 - Data redirection of data network streams
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
H04L 69/321 - Interlayer communication protocols or service data unit [SDU] definitions; Interfaces between layers
H04L 69/324 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
H04L 69/325 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the network layer [OSI layer 3], e.g. X.25
H04L 101/622 - Layer-2 addresses, e.g. medium access control [MAC] addresses
14.
IMPLEMENTING DEFINED SERVICE POLICIES IN A THIRD-PARTY CONTAINER CLUSTER
Some embodiments provide a method of implementing service rules for a container cluster that is configured by a first SDN controller cluster. The method registers for event notification from an application programming interface (API) server to receive notification regarding events associated with resources deployed in the container cluster. The method forwards to a second SDN controller cluster resource identifiers collected through the registration for resources of the container cluster. The second SDN controller cluster defines service policies that are not defined by the first SDN controller cluster. The method receives, from the second SDN controller cluster, service policies defined by the second SDN controller cluster based on the resource identifiers. The method distributes service rules defined based on the service policies to network elements in the container cluster to enforce on data messages associated with machines deployed in the container cluster configured by the first SDN controller cluster.
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some examples, GPUs are identified in a computing environment. Graphics processing requests are received, each including a GPU memory requirement. The graphics processing requests are processed using a graphics processing request placement model that minimizes the number of GPUs utilized to accommodate the requests. Virtual GPUs (vGPUs) are created to accommodate the graphics processing requests according to the graphics processing request placement model. The utilized GPUs divide their GPU memories to provide a subset of the plurality of vGPUs.
An example method of hypervisor lifecycle management in a virtualized computing system having a cluster of hosts includes: receiving, from a user at a lifecycle manager executing in the virtualized computing system, identification of a seed host; obtaining, by the lifecycle manager, a software specification from the seed host, the software specification describing a running image of the hypervisor executing on the seed host; generating, by the lifecycle manager, a software image from metadata and payloads stored on the seed host; setting, by the lifecycle manager, a host desired state for the cluster based on the software specification; and storing, by the lifecycle manager, the software image in a software depot in association with the host desired state.
Some embodiments provide a method for configuring an edge computing device to implement a logical router belonging to a logical network. The method configures a datapath executing on the edge computing device to use a first routing table associated with the logical router for processing data messages routed to the logical router. The method configures a routing protocol application executing on the edge computing device to (i) use the first routing table for exchanging routes with a network external to the logical network and (ii) use a second routing table for exchanging routes with other edge computing devices that implement the logical router.
H04L 67/289 - Intermediate processing functionally located close to the data consumer application, e.g. in same machine, in same home or in same sub-network
H04L 101/622 - Layer-2 addresses, e.g. medium access control [MAC] addresses
Some embodiments provide a method for configuring a gateway router of a virtual datacenter. The method is performed at a network management component of a virtual datacenter that is defined in a public cloud and comprises a set of network management components and a set of network endpoints connected by a logical network managed by the network management components of the virtual datacenter. The method receives a set of network addresses of the network endpoints. The method aggregates at least a subset of the network addresses into a single subnet address that encompasses all of the aggregated network addresses. The method provides an aggregated route for the subset of network addresses to a gateway router that connects the virtual datacenter to a public cloud underlay network in order for the router to route data messages directed to the network endpoints to the logical network of the virtual datacenter.
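The aggregation step (collapsing multiple endpoint addresses into a single covering subnet) can be sketched with Python's standard `ipaddress` module. The stdlib has no single call that summarizes arbitrary addresses to one prefix, so this sketch shortens the prefix until every address fits; it is an illustration, not the claimed method.

```python
import ipaddress

def aggregate(addresses):
    """Return the longest-prefix subnet that encompasses all given addresses.
    addresses: iterable of strings such as '10.0.1.5/32'."""
    nets = [ipaddress.ip_network(a) for a in addresses]
    prefix = nets[0].prefixlen
    while prefix >= 0:
        # Candidate supernet of the first address at this prefix length.
        candidate = nets[0].supernet(new_prefix=prefix)
        if all(n.subnet_of(candidate) for n in nets):
            return candidate  # first (longest) prefix covering everything
        prefix -= 1
    raise ValueError("no common supernet")
```

The resulting single subnet address is what would be advertised as the aggregated route to the gateway router.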
Some embodiments provide a method for dynamically deploying a managed forwarding element (MFE) in a software-defined wide-area network (SD-WAN) for a particular geographic region across which multiple SaaS applications are distributed. The method determines, based on flow patterns for multiple flows destined for the multiple SaaS applications distributed across the particular geographic region, that an additional MFE is needed for the particular geographic region. The method configures the additional MFE to deploy at a particular location in the particular geographic region for forwarding the multiple flows to the multiple SaaS applications. The method provides, to a particular set of MFEs that connect a set of branch sites to the SD-WAN, a set of forwarding rules to direct the particular set of MFEs to use the additional MFE for forwarding subsequent data messages belonging to the multiple flows to the multiple SaaS applications.
Some embodiments provide a method for a first smart NIC of multiple smart NICs of a host computer. Each of the smart NICs executes a smart NIC operating system that performs virtual networking operations for a set of data compute machines executing on the host computer. The method receives a data message sent by one of the data compute machines executing on the host computer. The method performs virtual networking operations on the data message to determine that the data message is to be transmitted from a port of a second smart NIC of the multiple smart NICs. The method passes the data message to the second smart NIC via a private communication channel connecting the multiple smart NICs.
Disclosed are various examples of provisioning a data processing unit (DPU) management operating system (OS). A management hypervisor installer executed on a host device launches or causes a server component to provide a management operating system (OS) installer image at a particular URI accessible over a network internal to the host device. A baseboard management controller (BMC) transfers the DPU management OS installer image to the DPU device. A volatile memory based virtual disk is created using the DPU management OS installer image. The DPU device is booted to a DPU management OS installer on the volatile memory based virtual disk. The DPU management OS installer installs a DPU management operating system to a nonvolatile memory of the DPU device on reboot of the DPU device.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
22.
DISTRIBUTED BRIDGING BETWEEN PHYSICAL AND OVERLAY NETWORK SEGMENTS
Some embodiments provide a method for configuring a network to bridge data messages between a logical overlay network layer 2 (L2) segment and a physical L2 segment. The method identifies each host computer in the network on which at least one logical network endpoint connected to the logical overlay network L2 segment executes. For each identified host computer, the method configures a forwarding element executing on the identified host computer to bridge (i) data messages sent from the logical network endpoints executing on the identified host computer to network endpoints connected to the physical L2 segment and (ii) data messages sent from network endpoints connected to the physical L2 segment, executing on the identified host computer and on other host computers in the network, to the logical network endpoints executing on the identified host computer.
Some embodiments provide a novel method for defining a set of policies for a set of applications executing on a host computer of a software-defined network (SDN). The method configures, on a physical network interface card (PNIC) connected to the host computer, a network adapter to create a logical port that connects an interface of the host computer to a virtual distributed switch (VDS) executing on the PNIC. The method defines the set of policies based on the logical port for the VDS to apply to data message flows sent from the set of applications on the host computer to one or more other host computers of the SDN.
The present disclosure is related to devices, systems, and methods for a cloud scheduler. An example method can include receiving a schedule associated with an automation task to be performed in a virtualized environment via a REST API, wherein the task is associated with a target, associating the schedule with a partition, storing the schedule in a cache store responsive to determining that the schedule is to be invoked within a threshold time period, and receiving the schedule from the cache store and invoking the target responsive to the schedule becoming overdue.
Managing cloud snapshots in a development platform is described herein. One example method includes creating a snapshot of a virtual computing instance (VCI), provided by a cloud provider, using a development platform, receiving a request to revert to the snapshot, and performing a revert operation responsive to receiving the request. The revert operation can include creating a new boot disk on the cloud provider to replace a current boot disk in the development platform, creating a new data disk to replace a current data disk associated with the VCI, powering off the VCI and detaching the boot disk and the data disk, attaching the new boot disk and the new data disk to the VCI, powering on the VCI, and deleting the detached boot disk and the detached data disk.
System and computer-implemented method for detecting and reconciling moved workloads for a management component in a computing environment determines workloads that have moved as moved workloads based on received data at the management component. For a first moved workload with an associated workload, workload metadata is swapped with the associated workload and the first moved workload is updated in the management component. For a second moved workload without an associated workload, the second moved workload is preserved for further processing.
System and computer-implemented method for reconciling moved workloads for a management component in a computing environment determines whether an updated workload has a tracking marker that moves with the workload and requires remediation. When the tracking marker is found in an inventory database of the management component, the metadata of the workload is reconciled in the management component.
Methods and apparatus to implement post-provisioning tasks are disclosed. An example apparatus comprises memory, instructions, and programmable circuitry to be programmed by the instructions to: obtain metadata associated with a post-provisioning task, the post-provisioning task to modify a plugin, the plugin to provide a capability to a cloud resource of a computing platform provider, the metadata represented in accordance with a first file format; transform the metadata from the first file format to a second file format, the second file format compatible with the plugin; and register the post-provisioning task in a deployment environment provided by the computing platform provider.
Systems, apparatus, articles of manufacture, and methods are disclosed to analyze resource dependencies, the apparatus comprising: interface circuitry; machine readable instructions; and programmable circuitry to at least one of instantiate or execute the machine readable instructions to: generate a self-contained dependency descriptor property based on a dependency between a first cloud resource and a second cloud resource; receive a resource allocation request, the resource allocation request indicative of a first cloud resource account type, the resource allocation request not specific to a cloud provider; based on the self-contained dependency descriptor property, determine a second cloud resource account type to satisfy the resource allocation request; determine the cloud provider based on a property associated with a first resource type and a second resource type; and determine a cloud resource based on the cloud provider, the cloud resource to be allocated in response to the resource allocation request.
Example apparatus disclosed includes at least one memory, machine readable instructions, and programmable circuitry to at least one of instantiate or execute the machine readable instructions to generate a local state for a first resource, the first resource obtained from a cloud service model associated with a registered cloud account, the first resource including a first identifier; identify a second resource from the cloud service model, the second resource including a second identifier; and catalog the second resource when the second identifier is different from the first identifier.
A computer system comprises a machine-learning (ML) system at which alerts are received from endpoints, wherein the ML system is configured to: upon receiving a first alert and a second alert, apply an ML model to the first and second alerts; based at least in part on the first alert being determined to belong to a first cluster of the ML system, classify the first alert into one of a plurality of alert groups, wherein alerts classified into a first alert group of the alert groups are assigned a higher priority for security risk evaluation than alerts classified into a second alert group of the alert groups; and based on the second alert being determined to not belong to any cluster of the ML system, analyze a chain of events that triggered the second alert to determine whether there is suspicious activity associated with the second alert.
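The two-path alert handling (clustered alerts classified into priority groups, unclustered alerts routed to event-chain analysis) can be sketched as below. The centroid-distance membership test, the radius, and the group labels are our assumptions; the abstract does not specify the clustering model.

```python
def handle_alert(alert_vector, clusters, radius=1.0):
    """Illustrative dispatcher for incoming alerts.
    clusters: list of (centroid, priority_group) pairs, where a 'high' group
    is evaluated for security risk before a 'low' group."""

    def dist(a, b):
        # Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    for centroid, group in clusters:
        if dist(alert_vector, centroid) <= radius:
            # Alert belongs to a known cluster: classify into a priority group.
            return ("classified", group)
    # Alert belongs to no cluster: analyze the triggering chain of events
    # for suspicious activity instead.
    return ("analyze_chain", None)
```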
An example method of identifying resources deployed in clouds in a computing system includes: receiving, at an asset scanner executing in a data center, billing artifacts from the clouds, the billing artifacts relating resources deployed in the clouds with identification and usage information; transforming, by the asset scanner, the billing artifacts into transformed billing artifacts, each transformed billing artifact having entries that relate one of the resources to a selected portion of the identification and usage information; generating, by the asset scanner, a plurality of jobs to process the resources; and processing, by the asset scanner, the plurality of jobs to update a database that relates the resources and the selected portion of the identification and usage information.
Systems, apparatus, articles of manufacture, and methods are disclosed for detection and reconciliation of configuration drift. An example apparatus includes example programmable circuitry to identify a resource associated with a request to reconcile an updated configuration with the resource, the resource to be identified based on a first identifier of the resource included with the request. Additionally, the example programmable circuitry is to identify a finite state machine corresponding to the updated configuration based on a second identifier of the updated configuration included with the request. The example programmable circuitry is also to initiate the finite state machine corresponding to the updated configuration to reconcile the updated configuration with the resource.
A method of issuing one or more commands for a management appliance of a software-defined data center (SDDC) to perform an operation, includes the steps of: retrieving the operation to be performed by the management appliance; transmitting a request to the management appliance for a first token, wherein the first token is associated with permissions for issuing commands to the management appliance, and wherein the request for the first token includes a second token that is associated with the initiator of the operation and that has a longer time-to-live period than the first token has; and upon receiving the first token from the management appliance, transmitting the first token and a command to the management appliance, wherein the command is for the management appliance to execute at least one task of the operation.
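The two-token exchange can be sketched as follows. The token field names, the 60-second lifetime of the short token, and the appliance API are illustrative assumptions; the abstract specifies only that the initiator's token has a longer time-to-live than the command token.

```python
import time

class ManagementAppliance:
    """Illustrative SDDC management appliance issuing short-lived command tokens."""

    def issue_short_token(self, long_token, now=None):
        now = now or time.time()
        assert long_token["expires"] > now, "initiator token expired"
        # Short-lived token carrying permissions for issuing commands.
        return {"expires": now + 60, "scope": "issue-commands"}

    def run_command(self, short_token, command, now=None):
        now = now or time.time()
        assert short_token["expires"] > now, "command token expired"
        return f"executed: {command}"

def perform_operation(appliance, long_token, command):
    # Exchange the long-lived initiator token for a short-lived command token,
    # then present that token alongside the command itself.
    short = appliance.issue_short_token(long_token)
    return appliance.run_command(short, command)
```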
Certificate management as-a-service for software-defined datacenters is described herein. One method includes receiving an indication of an expiry of a first certificate of a virtual appliance in a virtualized environment via a certificate management agent of a gateway device in communication with the appliance, and performing a certificate replacement process responsive to determining that the expiry of the first certificate exceeds a threshold, wherein the certificate replacement process includes sending a request to the appliance via an agent associated with the appliance, receiving, from the appliance, a certificate signing request (CSR), sending the CSR to an external certificate authority, receiving a second certificate from the certificate authority, and replacing the first certificate with the second certificate.
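The threshold check that triggers replacement can be sketched in a few lines. Interpreting the threshold as "within N days of expiring" and the 30-day default are our assumptions; the abstract leaves the threshold semantics unspecified.

```python
import datetime

def needs_replacement(not_after, now=None, threshold_days=30):
    """Return True when the certificate's expiry (not_after) is within the
    threshold, signaling the agent to start the replacement process."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return (not_after - now) <= datetime.timedelta(days=threshold_days)
```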
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
36.
LIFECYCLE MANAGEMENT OF HETEROGENEOUS CLUSTERS IN A VIRTUALIZED COMPUTING SYSTEM
An example method of hypervisor lifecycle management in a virtualized computing system having a cluster of hosts includes: obtaining, by a lifecycle manager (LCM) agent executing in a host of the hosts, a desired state document, the desired state document defining a desired state of software in the host, the software including a hypervisor; comparing selection criteria in the desired state document against hardware information obtained from a hardware platform of the host to select an image of a plurality of images defined in the desired state document; and applying, by the LCM agent, the selected image to the host.
An example method of hypervisor lifecycle management in a virtualized computing system having a cluster of hosts includes: obtaining, by a lifecycle manager (LCM) agent executing in a host of the hosts, a desired state document, the desired state document defining a desired state of software in the host, the software including a hypervisor, the desired state including a plurality of images; comparing selection criteria in a software policy of the desired state document against hardware information obtained from a hardware platform of the host to select an image of the plurality of images defined in the desired state document; and applying, by the LCM agent, the selected image to the host.
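The image-selection step in both variants reduces to matching each image's criteria against the host's hardware facts. A minimal sketch, with illustrative field names rather than the actual desired-state schema:

```python
def select_image(desired_state, hardware_info):
    """Return the first image whose selection criteria all match the host hardware."""
    for image in desired_state["images"]:
        criteria = image.get("selection-criteria", {})
        if all(hardware_info.get(key) == value for key, value in criteria.items()):
            return image["name"]
    raise LookupError("no image in the desired state matches this host")
```

An image with empty criteria acts as a catch-all, since `all()` over no items is true; placing it last makes it a default for hosts no specific image claims.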
System and computer-implemented method for processing operation requests in a computing environment uses an intent for an operation request received at a service instance, which is submitted to an intent valet platform to process the operation request. The intent is queued in an intent table of intents and then retrieved for processing. The requested operation for the retrieved intent is delegated from the intent valet platform to the service for execution. When a completion signal from the service is received at the intent valet platform, the intent is marked as being in a terminal state.
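The queue-and-delegate cycle can be sketched as follows; the class name, states, and method names are hypothetical, not taken from the described platform:

```python
from collections import deque

class IntentValet:
    """Queues intents, delegates them to a service, and records terminal state."""
    def __init__(self):
        self._intent_table = deque()   # intent table of queued intents
        self.state = {}                # intent id -> processing state

    def submit(self, intent_id, operation):
        self._intent_table.append((intent_id, operation))
        self.state[intent_id] = "QUEUED"

    def process_next(self, service):
        # Retrieve the next intent and delegate its operation to the service.
        intent_id, operation = self._intent_table.popleft()
        self.state[intent_id] = "DELEGATED"
        service(operation)                   # service executes the operation
        self.state[intent_id] = "TERMINAL"   # completion signal received
        return intent_id
```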
Some embodiments provide a method for configuring a network to bridge data messages between a hardware-implemented L2 overlay network segment and a software-implemented L2 overlay network segment. The method identifies a host computer on which a logical network endpoint connected to the software-implemented overlay executes. The hardware-implemented L2 overlay connects at least a first set of network endpoints located in a first physical network zone and connected to a first L2 network segment and a second set of network endpoints located in a second physical network zone and connected to a second L2 network segment. The identified host computer is located in the first physical network zone. The method configures a forwarding element executing on the host computer to bridge data messages between the logical network endpoint and (i) the first set of network endpoints and (ii) the second set of network endpoints.
Provisioning cloud-agnostic resource instances by sharing cloud resources is described herein. One example method includes creating a blueprint using a development platform, wherein the blueprint includes a definition of a resource, and wherein provisioning the resource includes provisioning a first cloud resource and a second cloud resource provided by a cloud provider, provisioning a first instance of the resource of the blueprint by provisioning a first instance of the first cloud resource and a first instance of the second cloud resource, and provisioning a second instance of the resource of the blueprint, wherein provisioning the second instance of the resource includes provisioning a second instance of the first cloud resource and sharing the first instance of the second cloud resource.
Cloning a cloud-agnostic deployment is described herein. One example method includes receiving modifications to an existing deployment created using a blueprint in a virtualized environment, and performing a deployment clone operation responsive to receiving a request to clone the deployment. The deployment clone operation can include creating an image associated with a virtual computing instance (VCI) of the deployment, creating a snapshot associated with a disk of the deployment, generating a clone blueprint based on the image and the snapshot, and deploying the clone blueprint in the virtualized environment.
An asynchronous mechanism for processing synchronous operation flows is described herein. One example method includes receiving a request from an orchestrator engine to determine a state of a cloud resource of a cloud automation platform, propagating the request to the cloud automation platform, caching a task identifier received from the cloud automation platform responsive to the request, receiving data indicative of the state of the cloud resource from the cloud automation platform, wherein the data is associated with the task identifier, and providing the data to the orchestrator engine.
System and computer-implemented method for reconciling moved workloads for a management component in a computing environment uses a remediation queue to enqueue a remediation entry for a workload that has moved within the computing environment. The remediation entry for the workload is dequeued from the remediation queue, and a remediation service is executed on the remediation entry for the workload to update metadata for the workload in the management component. A processing status of the remediation entry for the workload is stored at the management component.
Described herein are a system and method for forming a container image. The system and method include obtaining a first layer of a plurality of layers of the container image. The contents of the first layer are stored in a directory such that a first disk image layer file is mounted to the directory. A second layer of the plurality of layers is obtained, and the contents of the second layer are stored in the directory so that the first disk image layer includes contents of the first layer and the second layer. The first disk image layer is saved and is mountable and includes files of the container image.
Some embodiments provide a novel method for forwarding data messages between first and second host computers. To send, to a first machine of the first host, a second flow from a second machine of the second host in response to a first flow from the first machine, the method identifies from a set of tunnel endpoints (TEPs) of the first host a TEP that is a source TEP of the first flow. The method uses the identified TEP to identify one non-uniform memory access (NUMA) node of a set of NUMA nodes of the first host as the NUMA node associated with the first flow. The method selects, from a subset of TEPs of the first host that is associated with the identified NUMA node, one TEP as a destination TEP of the second flow. The method sends the second flow to the selected TEP of the first host.
Some embodiments provide a novel method for forwarding data messages between first and second host computers. To send, to a first machine executing on the first host computer, a flow from a second machine executing on the second host computer, the method identifies a destination network address of the flow. The method uses the identified destination network address to identify a particular tunnel endpoint group (TEPG) including a particular set of one or more tunnel endpoints (TEPs) associated with a particular non-uniform memory access (NUMA) node of a set of NUMA nodes of the first host computer. The particular NUMA node executes the first machine. The method selects, from the particular TEPG, a particular TEP as a destination TEP of the flow. The method sends the flow to the particular TEP of the particular NUMA node of the first host computer to send the flow to the first machine.
Some embodiments of the invention provide a method of using routing tables of GSLB DNS servers to perform path selection in response to DNS requests from client devices. At a first GSLB DNS server that operates in a first region and that maintains a first routing table, the method receives, from a client device, a DNS request for accessing a set of resources provided by a first server in the first region and a second server in a second region. The method determines, based on the first routing table and a second routing table associated with the second region, that a first path from the client device to the first server is shorter than a second path from the client device to the second server, and provides a network address associated with the first path to the client device for reaching the first server to access the set of resources.
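The routing-table comparison amounts to answering the DNS request with the address whose path from the client is shortest. A sketch under illustrative assumptions, where a `path_cost` table stands in for the two regions' routing tables:

```python
def resolve(client, candidates, path_cost):
    """Answer a DNS request with the address of the shortest-path server.

    `candidates` maps server name -> network address; `path_cost` maps
    (client, server name) -> path length drawn from the regional routing tables.
    """
    best = min(candidates, key=lambda server: path_cost[(client, server)])
    return candidates[best]
```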
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directory access protocols using domain name system [DNS]
48.
METHODS AND SYSTEMS FOR PERFORMING APPLICATION DIAGNOSTICS VIA DISTRIBUTED TRACING WITH ENHANCED OBSERVABILITY
Methods and systems are directed to performing application diagnostics via distributed tracing with enhanced observability. Methods are executed by an operations manager that collects spans of microservices of a distributed application executing in a cloud infrastructure. The operations manager forms traces from the spans for each request for services from the application. The operations manager reduces the dimensionality of the traces by generating a behavioral map of points in a two-dimensional space, each point representing one of the traces. The behavioral map is displayed in a graphical user interface having functionalities that enable a user to investigate properties of the traces by trace type and duration, investigate erroneous traces or clusters of traces, and determine which optimization tasks to execute.
Some embodiments provide a method for performing secure frame capture for an application executing on a data compute node (DCN). At the application, the method receives and parses a frame for a particular L7 protocol. The method identifies an action to perform within the application based on the parsed frame. Based on secure frame capture being enabled for the application, the method writes information regarding the frame to a capture file stored at the DCN. The information regarding the frame omits (i) any L2-L4 information and (ii) any payload data carried by the frame.
In an example, a method for provisioning a cell site in a 5G RAN may include receiving a plurality of steps involved in provisioning the cell site for the 5G RAN. In an example, provisioning the cell site may include provisioning of a physical infrastructure layer, a container orchestration platform on the physical infrastructure layer, and a containerized network function (CNF) instance associated with the 5G RAN in the container orchestration platform. Further, the method may include converting the plurality of steps into a dependency graph of tasks. The dependency graph may represent workflows and relationships between the tasks. Furthermore, based on feeding the dependency graph as an input to an orchestrator, the method may include provisioning the cell site by executing the tasks in an order according to the dependency graph.
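Executing tasks in the order the dependency graph dictates, as the orchestrator does, can be sketched with the standard library's topological sorter; the task names are illustrative:

```python
from graphlib import TopologicalSorter

def provision_order(dependency_graph):
    """Return an execution order for cell-site provisioning tasks.

    `dependency_graph` maps each task to the set of tasks it depends on,
    so every task appears after all of its prerequisites.
    """
    return list(TopologicalSorter(dependency_graph).static_order())
```

`TopologicalSorter` also raises `CycleError` on a cyclic graph, which surfaces an ill-formed provisioning workflow before any task runs.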
An example method for managing a cell site in a 5G RAN may include determining a physical infrastructure layer, a container orchestration platform on the physical infrastructure layer, and a CNF instance associated with the 5G RAN in the container orchestration platform based on a site identifier associated with the cell site. Based on the physical infrastructure layer, the container orchestration platform, and the CNF instance, the method may include building a logical site resource map representing topological information of the cell site. Further, the method may include monitoring and/or managing the cell site using the logical site resource map.
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 41/122 - Discovery or management of network topologies of virtualised topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
Some embodiments of the invention provide a method of performing end-user monitoring. At a health monitor that executes on a first host computer along with a client machine and a load balancer, to monitor health of a set of two or more servers that are candidate servers for processing packets from the client machine, the method exchanges health monitoring messages with each server in the set of servers to assess health of the servers in the set. At the health monitor, the method provides health data expressing health of the servers to the load balancer to use in determining how to distribute packets from the client machine between the servers in the set of servers.
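A minimal sketch of the monitor-then-distribute loop; the probe and selection logic are illustrative assumptions, not the described embodiment's mechanism:

```python
def probe(servers, is_healthy):
    """Health monitor: exchange a health-monitoring message with each candidate."""
    return {server: is_healthy(server) for server in servers}

def pick_backend(packet_seq, health_data):
    """Load balancer: spread packets across the servers reported healthy."""
    healthy = sorted(server for server, ok in health_data.items() if ok)
    if not healthy:
        raise RuntimeError("no healthy candidate servers")
    return healthy[packet_seq % len(healthy)]
```

Because the monitor runs on the same host as the client machine and the load balancer, the health data it provides reflects reachability from that host specifically.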
System and method for scaling flexible cloud namespaces (FCNs) in a software-defined data center (SDDC) uses resource utilizations in resource capacity profiles of the FCNs in the SDDC, which are compared with resource utilization thresholds set for the resource capacity profiles. Based on these comparisons, resource capacities in the resource capacity profiles of the FCNs are scaled.
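The compare-and-scale step can be sketched as threshold checks per resource capacity profile; the threshold values and step size below are illustrative assumptions:

```python
def scale_capacity(capacity, utilization, high=0.8, low=0.2, step=0.25):
    """Scale a profile's resource capacity based on its utilization ratio."""
    ratio = utilization / capacity
    if ratio > high:
        return capacity * (1 + step)   # over the high threshold: scale up
    if ratio < low:
        return capacity * (1 - step)   # under the low threshold: scale down
    return capacity                    # within thresholds: leave as-is
```

Keeping a dead band between the two thresholds avoids oscillating scale-up/scale-down decisions when utilization hovers near a single cutoff.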
Systems, apparatus, articles of manufacture, and methods are disclosed for converting enforcement policy information into provisioning template information by instantiating or executing machine-readable instructions to determine a type of a first placeholder of a provisioning template with a plurality of placeholders, copy enforcement policy data corresponding to the determined type of the first placeholder, fill the first placeholder of the provisioning template with the copied enforcement policy data, and save the provisioning template.
Methods and apparatus to manage infrastructure as code (IaC) implementations are disclosed. A disclosed example system to manage a shared computing resource includes programmable circuitry; and machine-readable instructions to cause the programmable circuitry to: determine an IaC type associated with a request corresponding to the shared computing resource; select a template from a plurality of IaC templates based on the IaC type; and service the request based on the template.
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
56.
METHODS AND APPARATUS TO ISOLATE STATE MANAGEMENT IN INFRASTRUCTURE AS CODE ENVIRONMENTS
Systems, apparatus, articles of manufacture, and methods are disclosed to isolate state management in infrastructure as code environments. Disclosed is an apparatus comprising programmable circuitry to: monitor a security infrastructure to determine a first state of the security infrastructure, the security infrastructure to control a function based on the first state, the function defined by an operating protocol; determine that the security infrastructure has transitioned to a second state, the second state associated with an alteration to the security infrastructure; determine whether the alteration of the security infrastructure associated with the second state is undesired, wherein the alteration being undesired corresponds to the function of the security infrastructure deviating from the operating protocol; and modify the security infrastructure by replacing the second state with a third state to counteract the deviation from the operating protocol corresponding to the second state.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
57.
METHODS AND APPARATUS TO DETERMINE TEMPLATES FOR USE WITH CLOUD ACCOUNTS
An example apparatus comprises memory, first instructions, and programmable circuitry to be programmed by the first instructions to associate a first portion of metadata with a first category, the metadata corresponding to a cloud resource of a cloud account, associate a second portion of the metadata with a second category, and determine a template based on the first portion being greater than the second portion, the template associated with the first category, the template including second instructions to define a target state to be enforced on the cloud account.
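The portion comparison can be sketched as counting which category covers the larger share of the metadata and selecting that category's template; all names below are illustrative:

```python
from collections import Counter

def determine_template(categorized_metadata, template_by_category):
    """Pick the template for the category covering the largest metadata portion.

    `categorized_metadata` maps each metadata item to its category;
    `template_by_category` maps a category to its target-state template.
    """
    counts = Counter(categorized_metadata.values())
    top_category, _ = counts.most_common(1)[0]
    return template_by_category[top_category]
```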
A method for network address management is provided. Embodiments include determining a creation of a namespace associated with a cluster of computing devices, wherein a subset of computing resources of the cluster of computing devices is allocated to the namespace. Embodiments include assigning, to the namespace, a network address pool comprising a plurality of network addresses in a subnet, wherein the assigning causes the plurality of network addresses to be reserved exclusively for the namespace. Embodiments include receiving an indication that a pod is added to the namespace. Embodiments include, in response to the receiving of the indication, assigning a network address from the network address pool to the pod.
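The pool-per-namespace assignment can be sketched with the standard `ipaddress` module; the class and method names are hypothetical:

```python
import ipaddress

class NamespaceIpam:
    """Reserves a subnet's addresses exclusively for each namespace."""
    def __init__(self):
        self._pools = {}   # namespace -> iterator over its free addresses

    def create_namespace(self, namespace, subnet):
        # On namespace creation, reserve every host address in the subnet.
        self._pools[namespace] = iter(ipaddress.ip_network(subnet).hosts())

    def add_pod(self, namespace):
        # When a pod is added, hand it the next free address from the pool.
        return str(next(self._pools[namespace]))
```

Because each pool is carved from its own subnet, a pod's address alone identifies its namespace, which simplifies per-namespace network policy.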
An example method for encrypting clusters during deployment may include retrieving, from a blueprint, resource information required to deploy a cluster including a host computing system and a virtual cluster manager node to manage the host computing system. The resource information may include host information and disk information required to deploy the virtual cluster manager node, and encryption information associated with a key provider. Based on the host information and the disk information, a clustered datastore may be created on the host computing system. Further, the virtual cluster manager node may be deployed on the clustered datastore. Based on the encryption information associated with the key provider, the virtual cluster manager node and associated disks may be encrypted. Upon encrypting the virtual cluster manager node, a cluster may be created and the host computing system may be added to the cluster.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
A method of managing a virtual machine (VM) image for deployment of the VM across a plurality of software-defined data centers (SDDCs) includes the steps of: separately uploading parts of the VM image to a cloud storage, forming a complete image of the VM from the separately uploaded parts of the VM image, and downloading from the cloud storage the complete VM image to each of the SDDCs in which the VM is to be deployed.
Systems, apparatus, articles of manufacture, and methods are disclosed that include programmable circuitry to: detect an installation script, the installation script including a second version of software in system storage of a first cluster of a plurality of clusters, a first version of the software being installed in the first cluster; and, after execution of the first version of the software by a first cluster control plane (CCP) pod is stopped, start execution of a second CCP pod, the second CCP pod instantiated with the second version of the software; and interface circuitry to direct an application programming interface (API) operation request received at the first cluster to the second CCP pod without directing the API operation request to the first CCP pod.
Methods and apparatus to manage cloud computing resources are disclosed. An example apparatus includes network interface circuitry; computer readable instructions; and programmable circuitry to instantiate: allocation candidate circuitry to determine allocation candidates for a first allocation resource and a second allocation resource, respectively; iteration circuitry to generate a first candidate set based on the first allocation candidate and the second allocation candidate; filter circuitry to determine whether the allocation candidates are incompatible; skipping circuitry to determine to skip, after a determination that the allocation candidates are incompatible, a second candidate set based on the incompatibility between the allocation candidates present in the second candidate set; and the filter circuitry to determine whether allocation candidates of a third candidate set are compatible, the allocation candidate circuitry to, after the third candidate set is determined as compatible, cause assignment of the third candidate set.
The disclosure provides a method for monitoring a disconnected component in a container orchestration system. The method generally includes obtaining, by an abstraction layer, monitoring data associated with the component while the component is connected to a network, wherein the abstraction layer comprises at least one proxy component acting as a proxy for the component in the container orchestration system; intercepting, by the abstraction layer, a query intended for the component, wherein the query originated from one or more monitoring agents configured to continuously monitor at least the component; and responding to the query based on the monitoring data obtained by the abstraction layer for the component.
An example method of managing tenant networks in a data center includes: obtaining, by tenant network topology discovery software executing in the data center, inventory data for a tenant network deployed in the data center from a network manager, the tenant network comprising a software-defined network managed by the network manager; generating, by the tenant network topology discovery software, a tenant network model based on the inventory data, the tenant network model including objects representing components of the tenant network and relationships between the components; storing, by the tenant network topology discovery software, the tenant network model in a database; and updating, by the tenant network topology discovery software, the tenant network model in response to monitoring the tenant network.
H04L 41/12 - Discovery or management of network topologies
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
65.
Dynamic Resource Placement in Multi-Cloud Environments
The disclosure provides a method for deploying resources in a multi-cloud environment. The method includes receiving, by a dynamic resource placement system, a request to generate a resource placement configuration for one or more resources to be deployed in a multi-cloud environment; obtaining, by the dynamic resource placement system, a custom resource placement logic; obtaining, by the dynamic resource placement system, a cloud context comprising details of available cloud environments in the multi-cloud environment; generating, by the dynamic resource placement system and based on analyzing the cloud context using the custom resource placement logic, a resource placement configuration specifying one or more target cloud environments for deploying the one or more resources; and providing, by the dynamic resource placement system, the resource placement configuration to a cloud infrastructure management platform for deploying the one or more resources.
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions, triggered by the network
Methods and apparatus to configure virtual machines (VMs) are disclosed. An example system to manage a plurality of virtual machines of a shared computing resource includes interface circuitry, programmable circuitry, and machine-readable instructions to cause the programmable circuitry to at least one of scan or monitor the plurality of virtual machines, determine whether a master application corresponding to the virtual machines has accepted a minion application corresponding to a first one of the virtual machines, and in response to the determination that the master application has not accepted the minion application, cause the master application to accept the minion application.
Systems, apparatus, articles of manufacture, and methods are disclosed for association of cloud accounts by instantiating or executing machine-readable instructions to in response to a linking request, associate a first cloud account and a second cloud account, where the association causes changes made to the first cloud account to be propagated to the second cloud account, store the association in a database, monitor a configuration of the first cloud account, and after a change in the configuration information of the first cloud account, apply the configuration information corresponding to the first cloud account to the second cloud account.
Some embodiments of the invention provide a method for a PIM (passive intermodulation interference) detection RAN (radio access network) application deployed across one or more RICs (RAN intelligent controllers) for detecting PIM in a RAN including multiple RAN base stations for servicing multiple users located across multiple regions, each region including at least one RAN base station. The method is performed for a particular region serviced by a particular RAN base station. The method detects (1) high uplink noise for the particular region and (2) antenna imbalance for the particular region. Based on said detection, the method determines whether high KPI (key performance indicator) impact is detected for the particular region. When high KPI impact is detected for the particular region, the method generates a PIM alert to notify an operator of the particular RAN base station that services the particular region that PIM is detected for the particular region.
H04B 7/08 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using two or more spaced independent antennas at the receiving station
H04B 17/382 - Monitoring; Testing of propagation channels for resource allocation, admission control or handover
69.
RAN INTELLIGENT CONTROLLER MESSAGE TRACING AND QUERY SERVICE
Some embodiments of the invention provide a method for performing packet tracing in a RAN. The method receives a trace request that includes a tracing specification for performing a trace operation for a subset of messages exchanged between a base station component and a RAN application via a RIC that operates as an interface between the base station component and the RAN application, the trace operation for collecting state data associated with the RIC. Based on the tracing specification, the method performs the trace operation for the subset of messages. The method receives a set of trace information that was collected during the trace operation and that includes state data associated with the RIC. The method processes the set of trace information to generate multiple sets of state data associated with the RIC for use in responding to queries for state data associated with the RIC.
The current document is directed to distributed computer systems and, in particular, to the management of distributed applications and cloud infrastructure using artificial-life agents. The artificial-life agents are organized into a population, the size of which is stabilized as individual artificial-life agents predict system-control parameters, receive rewards based on the predictions, thrive and propagate as they learn to provide better predictions while adapting to a constantly changing environment, and expire when they fail to provide useful predictions over periods of time. The predictions output by individual artificial-life agents are used to provide consensus predictions by the artificial-life-agent population to a cloud-infrastructure-management or distributed-application-management controller.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
71.
DETERMINATION OF ACTIVE AND STANDBY SMART NICS THROUGH DATAPATH
Some embodiments provide a method for a first smart NIC of multiple smart NICs of a host computer. Each of the smart NICs executes a smart NIC operating system that performs networking operations for a set of data compute machines executing on the host computer. When the first smart NIC identifies itself as an active smart NIC for the host computer, the first smart NIC sends a first message through a datapath to a second smart NIC to verify whether the second smart NIC identifies as an active smart NIC or a standby smart NIC. If the second smart NIC sends a reply second message to the first smart NIC through the datapath, the first smart NIC (i) determines that the second smart NIC identifies as a standby smart NIC and (ii) operates to process data traffic sent to and from the host computer as the active smart NIC.
Components of a distributed data object are synchronized using streamlined tracking metadata. A target component of the distributed data object is detected as it becomes available and stale. A source component that is up-to-date and that mirrors the address space of the detected target component is identified. A set of mapped address ranges and a set of unmapped address ranges of the identified source component are obtained. A mapped address range of the target component that corresponds with an unmapped address range of the source component is identified. The identified mapped address range of the target component is then synchronized with the corresponding unmapped address range of the source component. Thus, unmapped address ranges are synchronized without using tracking metadata of the source component.
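The key step above, finding mapped ranges of the target that line up with unmapped ranges of the source, is an interval intersection. A sketch, with ranges as half-open `(start, end)` address offsets:

```python
def ranges_to_resync(target_mapped, source_unmapped):
    """Intersect the target's mapped ranges with the source's unmapped ranges.

    Each returned range is mapped on the stale target but unmapped on the
    up-to-date source, so the target must be synchronized there.
    """
    overlaps = []
    for t_start, t_end in target_mapped:
        for s_start, s_end in source_unmapped:
            start, end = max(t_start, s_start), min(t_end, s_end)
            if start < end:
                overlaps.append((start, end))
    return sorted(overlaps)
```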
In some embodiments, a method receives a set of packets for a flow and determines a set of features for the flow from the set of packets. A classification of an elephant flow or a mice flow is selected based on the set of features. The classification is selected before assigning the flow to a network resource in a plurality of network resources. The method assigns the flow to a network resource in the plurality of network resources based on the classification for the flow and a set of classifications for flows currently assigned to the plurality of network resources. Then, the method sends the set of packets for the flow using the assigned network resource.
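One way to sketch the classify-then-assign sequence; the byte-estimate feature, the threshold, and the least-elephants placement rule are illustrative assumptions, not the embodiment's actual criteria:

```python
def classify_flow(features, elephant_bytes=1_000_000):
    """Label a flow from its first packets' features, before assignment."""
    return "elephant" if features["bytes_estimate"] >= elephant_bytes else "mice"

def assign_flow(flow_class, resources, assignments):
    """Place the flow on the resource currently carrying the fewest elephants."""
    def elephant_load(resource):
        return sum(1 for c in assignments.get(resource, []) if c == "elephant")
    resource = min(resources, key=elephant_load)
    assignments.setdefault(resource, []).append(flow_class)
    return resource
```

Classifying before assignment, rather than after an elephant is detected mid-flow, avoids migrating an already-placed heavy flow off an overloaded resource.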
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
74.
Backup and Restore of Containers Running in Virtual Machines
One or more embodiments provide a method for data storage. For example, the method may include adding a second virtual disk to a virtual machine (VM) running one or more containers, the VM having a first virtual disk, the second virtual disk backed by a second virtual disk file. The method may also include creating one or more volumes configured to store container data of the one or more containers, the one or more volumes using storage from the second virtual disk and not the first virtual disk. The method may furthermore include mounting the one or more volumes in the one or more containers. The method may in addition include backing up the second virtual disk file independent from the first virtual disk file to create a copy of the second virtual disk file.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
75.
Centralized monitoring of containerized workloads in a multi-tenant, multi-cloud environment
The disclosure provides a method for monitoring tenant workloads in a multi-cloud environment. The method generally includes determining a first new workload for a first tenant is deployed on a first data plane associated with a first cloud platform in the multi-cloud environment; configuring a monitoring stack on a second data plane associated with a second cloud platform in the multi-cloud environment to collect first metrics data for the first new workload; and creating a network policy allow list including a source internet protocol (IP) address associated with the monitoring stack, wherein the network policy allow list is to be used by an ingress controller deployed on the first data plane to control ingress traffic to the first new workload, including at least ingress traffic from the monitoring stack intended for the first new workload.
H04L 41/0813 - Configuration setting characterised by the conditions triggering a change of settings
H04L 41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
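The allow-list mechanism in entry 75 reduces to a simple membership check at the ingress controller. The sketch below is illustrative only; the field names and IP addresses are invented, not an actual API.

```python
# Hypothetical sketch of the network policy allow list described above:
# the monitoring stack's source IP must be on the list so its
# metric-collection traffic can reach the newly deployed workload.
def build_allow_list(monitoring_stack_ip, extra_sources=()):
    return {monitoring_stack_ip, *extra_sources}

def admit_ingress(packet_src_ip, allow_list):
    # The ingress controller on the first data plane admits traffic to
    # the tenant workload only if the source IP is on the allow list.
    return packet_src_ip in allow_list
```

A usage example: with `build_allow_list("10.0.2.5")`, traffic from the monitoring stack at 10.0.2.5 is admitted while traffic from any other source is dropped.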
76.
METHODS AND SYSTEMS THAT AUTOMATICALLY GENERATE PARAMETERIZED CLOUD-INFRASTRUCTURE TEMPLATES
The current document is directed to an infrastructure-as-code (“IaC”) cloud-infrastructure-management service or system that automatically generates parameterized cloud-infrastructure templates that represent cloud-based infrastructure, including virtual networks, virtual machines, load balancers, and connection topologies. The IaC cloud-infrastructure manager automatically transforms cloud-infrastructure-specification-and-configuration files into a set of parameterized cloud-infrastructure-specification-and-configuration files and a parameters file that together comprise a parameterized cloud-infrastructure template.
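The transformation in entry 76 amounts to hoisting literal values out of the specification files into a parameters file and replacing them with references. A minimal sketch, with an invented `${param:...}` reference syntax:

```python
# Illustrative parameterization step: selected literal values in a
# cloud-infrastructure specification are moved to a parameters file and
# replaced by parameter references. The reference syntax is hypothetical.
def parameterize(spec, parameter_keys):
    params, templated = {}, {}
    for key, value in spec.items():
        if key in parameter_keys:
            params[key] = value                    # goes to the parameters file
            templated[key] = f"${{param:{key}}}"   # reference left in template
        else:
            templated[key] = value
    return templated, params                       # template + parameters file
```

Together, the templated specification and the parameters file form the parameterized cloud-infrastructure template; re-instantiating the infrastructure with different parameter values requires editing only the parameters file.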
The disclosure provides a method for authenticating a network agent deployed in a networking environment. The method generally includes receiving, by a network controller in the networking environment, a name of an external node where the network agent is running and a token associated with the external node; in response to receiving the name of the external node, obtaining, by the network controller, a secret associated with the token; parsing, by the network controller, the secret to determine an expected external node name corresponding to the token; comparing the expected external node name with the received external node name; and trusting the network agent when the expected external node name and the received external node name match.
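The authentication flow above can be sketched in a few lines. Everything here is hypothetical (the secret format, token store, and field names are invented); it only illustrates the resolve-parse-compare sequence.

```python
# Hypothetical sketch: the controller resolves the token to a secret,
# parses the expected external node name out of it, and trusts the
# agent only on an exact name match.
SECRET_STORE = {                       # illustrative token -> secret mapping
    "tok-123": "node=edge-node-7;role=agent",
}

def parse_expected_node(secret):
    fields = dict(kv.split("=", 1) for kv in secret.split(";"))
    return fields.get("node")

def authenticate_agent(node_name, token):
    secret = SECRET_STORE.get(token)
    if secret is None:
        return False                   # unknown token: never trust
    return parse_expected_node(secret) == node_name
```

The key property is that the node name presented by the agent is never trusted directly; it is checked against the name bound to the token's secret on the controller side.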
The disclosure provides an approach for gateway optimization. Embodiments include receiving, at a first gateway associated with a first tenant within a data center, a packet directed to a first public network address of an endpoint associated with a second tenant within the data center. Embodiments include performing, by the first gateway, network address translation (NAT) to translate the first public network address to a private network address of the endpoint. Embodiments include forwarding, by the first gateway, the packet to an edge gateway of the data center. Embodiments include forwarding, by the edge gateway, the packet to a second gateway associated with the second tenant within the data center without sending the packet to a public interface of the edge gateway. Embodiments include forwarding, by the second gateway, the packet to the endpoint.
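The forwarding path in the gateway-optimization abstract can be traced with a toy model. All addresses, table contents, and gateway names below are made up; the point is that the packet is NATed at the first tenant's gateway and then hairpinned internally at the edge gateway rather than leaving via its public interface.

```python
# Illustrative sketch of the optimized path between two tenants.
NAT_TABLE = {"203.0.113.10": "10.1.0.5"}       # public -> private (tenant 2)
TENANT_GATEWAYS = {"10.1.": "gw-tenant-2"}     # private prefix -> gateway

def tenant1_gateway(packet):
    # DNAT at the sending tenant's gateway: public address of the
    # destination endpoint becomes its private address.
    packet["dst"] = NAT_TABLE.get(packet["dst"], packet["dst"])
    return edge_gateway(packet)

def edge_gateway(packet):
    # The destination is now private, so the edge gateway forwards the
    # packet to the owning tenant's gateway without ever sending it to
    # the edge gateway's public interface.
    hops = ["edge-gw-internal"]
    for prefix, gw in TENANT_GATEWAYS.items():
        if packet["dst"].startswith(prefix):
            hops.append(gw)
    return packet, hops
```

Running a packet destined to 203.0.113.10 through `tenant1_gateway` yields the private destination 10.1.0.5 and a hop list that stays entirely inside the data center.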
METHODS AND SYSTEMS THAT AUTOMATICALLY GENERATE SCHEMA FOR CLOUD-INFRASTRUCTURE-SPECIFICATION-AND-CONFIGURATION FILES THAT ARE USED FOR AUTOCOMPLETION AND VALIDATION
The current document is directed to an IaC cloud-infrastructure-management service or system that automatically generates schema for cloud-infrastructure-specification-and-configuration files used by integrated development environments (“IDEs”), associated with the IaC cloud-infrastructure-management service or system, for autocompletion and validation. The IaC cloud-infrastructure management service or system accesses cloud-provider plug-ins to collect information and then encodes collected information regarding resource types, resource-type-associated functions and function arguments, cloud-infrastructure-specification-and-configuration data-file syntax, and other relevant information into cloud-infrastructure-specification-and-configuration data-file schemas. The schemas are input to integrated IDEs to control autocompletion and cloud-infrastructure-specification-and-configuration data-file validation.
Site reliability engineering (SRE) may be provided as a service to software products, such as an on-premises software product residing at a first computing environment. A SRE service site may be hosted at a second computing environment that is remote and separate from the first computing environment. A SRE agent resides at the first computing environment to monitor the software product, and provides information, such as metric data or log information pertaining to the software product, to the SRE service site. A SRE service of the SRE service site performs analysis of the information to identify an issue with the software product, performs diagnosis to determine a cause of the issue, and identifies a remediation that may be applied by the SRE agent to address the issue.
Example methods and systems for connection establishment in a global server load balancer (GSLB) environment are described. In one example, a computer system may establish a first connection with a first entity and a second connection with a second entity. The first connection may be established based on first parameter information that includes a shared certificate and a first identifier (ID). The second connection may be established based on second parameter information that includes the shared certificate and a second ID. The shared certificate may be shared by multiple entities that include the first entity and the second entity. In response to receiving a first request, a first response may be generated and sent towards the first entity via the first connection. In response to receiving a second request, a second response may be generated and sent towards the second entity via the second connection.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directory access protocols using domain name system [DNS]
82.
DYNAMIC SITE SELECTION IN GLOBAL SERVER LOAD BALANCING (GSLB) ENVIRONMENT
Example methods and systems for dynamic site selection in a global server load balancer (GSLB) environment are described. In one example, a computer system may obtain first health information from a first entity and second health information from a second entity. The first health information may be generated based on multiple first traffic flows between (a) multiple first client devices and (b) a first pool of backend servers. The second health information may be generated based on multiple second traffic flows between (a) multiple second client devices and (b) a second pool of backend servers. In response to receiving a request to access a service, the computer system may select a site based on the first health information and/or the second health information. A response may be generated and sent to cause a third client device to access the service by directing a third traffic flow towards the selected site.
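Health-based site selection, as in entry 82, can be sketched as scoring each site from its observed flows and picking the best. The scoring function below (fraction of successful flows) is an invented placeholder; the abstract does not disclose how health is computed.

```python
# Hypothetical sketch of dynamic site selection from per-site health
# information derived from observed traffic flows.
def health_score(flows):
    # Invented metric: fraction of flows to that site's backend pool
    # that completed successfully.
    ok = sum(1 for f in flows if f["status"] == "ok")
    return ok / len(flows) if flows else 0.0

def select_site(site_flows):
    # site_flows maps a site name to the flow records observed there;
    # the GSLB directs the next client to the healthiest site.
    return max(site_flows, key=lambda s: health_score(site_flows[s]))
```

With site A at 50% success and site B at 100%, the third client's traffic flow would be directed towards site B.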
Some embodiments provide a novel method for emulating a local storage for a host computer comprising a network interface card (NIC). On the NIC, a storage emulator program is deployed to emulate a local virtual volume (vVol) storage, from several external storages accessed through the NIC, for a set of processes executing on the host computer. The several external storages include at least one external storage that is not a native vVol storage. An interface of a bus is configured on the NIC to connect the NIC to the host computer to provide the emulated local vVol storage for the set of processes.
Some embodiments provide a novel method for configuring a network interface card (NIC) that is connected to a host computer and that emulates a local non-volatile memory express (NVMe) storage device, using external storages, for a set of processes executing on the host. The method configures, on the NIC's operating system (OS), a storage emulator program to present the external storages to the host processes as the local NVMe storage device. The method configures, on the NIC's OS, a disk device to exchange NVMe requests and responses between the host processes and the external storages by exchanging the NVMe requests and responses (1) with a virtual NVMe (vNVMe) controller of the NIC through a storage stack of the OS, or (2) directly with the vNVMe controller such that the disk device bypasses the storage stack. Exchanging NVMe requests and responses directly with the vNVMe controller optimizes the NIC's performance.
A system and computer-implemented method for analyzing software-defined data centers (SDDCs) uses resource utilization metrics for an SDDC to determine a plurality of scale-in and scale-out events in the SDDC for a period of time using a particular defined combination of resource scale-in and scale-out thresholds, and to compute a total cost of the SDDC during the period of time based on a changing number of hosts being used in the SDDC due to the plurality of scale-in and scale-out events during the period of time. Parameters of a resource manager in the SDDC are set according to the particular combination of resource scale-in and scale-out thresholds in response to user input.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
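The cost computation described above can be illustrated with a toy simulation: replay a utilization series against one threshold combination, adjust the host count on each crossing, and accumulate host-hours as cost. The scaling policy and cost rate here are invented for clarity.

```python
# Illustrative cost model for one combination of scale-in/scale-out
# thresholds; one utilization sample per hour, cost in host-hours.
def simulate_cost(utilization, scale_out_at, scale_in_at,
                  start_hosts=2, cost_per_host_hour=1.0):
    hosts, total = start_hosts, 0.0
    for u in utilization:
        if u > scale_out_at:
            hosts += 1                    # scale-out event adds a host
        elif u < scale_in_at and hosts > 1:
            hosts -= 1                    # scale-in event removes a host
        total += hosts * cost_per_host_hour
    return total
```

Evaluating this for many threshold combinations lets the system compare their total costs and set the resource manager's parameters to the combination a user selects.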
86.
Methods and Systems that Generate Random Numbers Based on Nondeterministic Phenomena that are Computationally Initiated and Computationally Accessed Within a Computer System
The current document is directed to methods and systems that generate sequences of random numbers. Unlike many currently available random-number generators that continuously measure a physical apparatus or other signal source, the currently disclosed methods and systems employ nondeterministic phenomena that are computationally initiated and computationally accessed within a computer system. The nondeterministic phenomena are often produced by multiple simultaneously executing, asynchronous threads or other computational entities, with the unpredictability arising from multiple different types and sources of nondeterministic behavior within the computer system. Unlike pseudorandom-number generators, statistics and metrics computed from sequences of random numbers produced by the currently disclosed random-number generators have values close to those expected for a random-selection process. Unlike random-number generators that depend on specialized circuitry or single signal sources, the currently disclosed random-number generators use standard components and provide significant redundancy and robustness to single-point failures.
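A minimal sketch of the underlying idea, not the disclosed design: several asynchronous threads race to append to a shared list, and the scheduler-dependent interleaving order is hashed into random output. The parameters and hashing step are illustrative choices.

```python
# Hypothetical sketch: harvest entropy from the nondeterministic
# interleaving of asynchronous threads, then condition it with a hash.
import hashlib
import threading

def entropy_from_interleaving(n_threads=8, iters=200):
    trace = []
    def worker(tid):
        for i in range(iters):
            trace.append((tid, i))        # append order depends on scheduling
    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    digest = hashlib.sha256(repr(trace).encode()).digest()
    return int.from_bytes(digest[:8], "big")   # 64-bit value
```

This uses only standard components, matching the document's contrast with generators that need specialized circuitry; a production design would draw on multiple independent sources of nondeterminism rather than a single thread race.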
An example method may include executing, using an integration plugin installed on a first cloud-based automation platform running in a first management node, a schedule job to obtain an API response from a second management node executing a second cloud-based automation platform. The API response may include a custom form schema of a first form associated with the second cloud-based automation platform in a first defined data format. Further, the method may include parsing the custom form schema to determine form fields and dependency of the form fields. Furthermore, the method may include translating the custom form schema into a second defined data format supported by the first cloud-based automation platform based on the parsed custom form schema. Further, the method may include persisting the translated custom form schema in a database associated with the first cloud-based automation platform.
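The parse-and-translate step above can be sketched as converting one form-schema shape into another while preserving field dependencies. Both data formats below are invented for illustration; the abstract does not specify them.

```python
# Hypothetical translation of a custom form schema from the second
# platform's format into a format the first platform can persist.
def translate_schema(remote_schema):
    fields, deps = [], {}
    for f in remote_schema["form"]["elements"]:       # invented source format
        fields.append({"name": f["id"], "type": f["kind"]})
        if "visibleWhen" in f:
            deps[f["id"]] = f["visibleWhen"]          # field dependency
    return {"fields": fields, "dependencies": deps}   # invented target format
```

The translated schema, including the dependency map, is what would then be persisted in the first platform's database.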
System and computer-implemented method for updating applications running in a distributed computing system uses an update agent associated with an existing application to make a request for update information regarding the existing application to a service to receive a response that includes a target version of the existing application and an update window of time, which is based on information contained in the request for update information. A deployment of the target version of the existing application within the update window of time is coordinated by the update agent when the target version is newer than a current version of the existing application.
Automated computer-implemented methods and systems for troubleshooting and resolving problems with objects of a cloud infrastructure are described herein. In response to detecting abnormal behavior of an object running in the cloud infrastructure based on a key performance indicator (“KPI”) of the object, a graphical user interface (“GUI”) is displayed to enable a user to select KPIs of components of the object. For each of the components, a separate rule learning engine is deployed to generate rules for detecting a problem with the component based on the KPI of the object and the KPIs of the component. The rules are subsequently used to detect a runtime problem with the object and display in the GUI remedial measures for resolving the problem. Remedial measures are executed automatically via the GUI to resolve the problem with the object.
Described herein are systems, methods, and software to manage the assignment of hosts to host clusters and the assignment of virtual endpoints to the host clusters. In one implementation, a management service identifies a host to be added to a computing environment and identifies physical resources available on the host. The management service further determines a host cluster for the host from a plurality of host clusters in the computing environment based on the physical resources available on the host and assigns the host to the host cluster.
A framework for implementing reinforcement learning (RL)-based dynamic aggregation for distributed learning (DL) and federated learning (FL) is provided. In one set of embodiments, the framework includes an RL agent that interacts with the parameter server and clients of a DL/FL system and periodically receives two inputs from the system while the system is executing a training run: a “state” comprising information regarding the current runtime properties of the system and a “reward” comprising information pertaining to one or more training metrics to be optimized. In response to these inputs, the RL agent generates an “action” comprising information for modifying the parameter server's aggregation function in a manner that maximizes future cumulative rewards expected from the DL/FL system based on the state.
Some embodiments of the invention provide, for a RAN (radio access network), a method of rapidly upgrading multiple machines distributed across multiple cell sites, each particular machine of the multiple machines executing one or more base station applications. The method downloads a second boot disk for each of the multiple machines at each of the multiple cell sites, the second boot disk including an upgraded version of a first boot disk currently used by each of the multiple machines. For each particular machine, the method (1) powers off the particular machine, (2) creates a copy of data stored by a data disk of the particular machine to preserve data stored currently on the data disk, (3) replaces the first boot disk of the particular machine with the second boot disk that is the upgraded version of the first boot disk, and (4) powers on the particular machine.
Some embodiments provide a novel method for configuring edge routers in a first network. The method configures on a first compute node of the first network (1) a first higher-level edge router and (2) a set of lower-level edge routers. Each lower-level edge router is configured for a different set of subnetworks defined in the first network and is connected to an external second network through the first higher-level edge router. The method detects a condition that requires a particular lower-level edge router for a particular subnetwork to be moved to another compute node. The method configures the particular lower-level edge router to operate on a second compute node below a second higher-level edge router operating on the second compute node to connect the particular lower-level edge router to the external second network.
An example method for provisioning data volume for a virtual compute instance may include receiving a request to provision a data volume for a virtual compute instance. The request may specify a size of the data volume, a type of the data volume, and a first input/output operations per second (IOPS) value for the data volume. Further, the method may include determining a recommended IOPS value for the data volume by applying a logic to the specified size, the specified type, and the first IOPS value. Furthermore, the method may include provisioning the data volume for the virtual compute instance with the specified size, the specified type, and the recommended IOPS value.
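The abstract above does not disclose the recommendation logic, so the sketch below substitutes an invented placeholder: the requested IOPS value is clamped to a per-GB ceiling that depends on the volume type. The limits and type names are hypothetical.

```python
# Hypothetical IOPS recommendation: clamp the requested value to a
# ceiling derived from the volume's size and type. Limits are made up.
IOPS_PER_GB = {"ssd": 50, "hdd": 5}

def recommend_iops(size_gb, volume_type, requested_iops):
    ceiling = size_gb * IOPS_PER_GB[volume_type]
    return min(requested_iops, ceiling)
```

For example, a 100 GB HDD volume requested at 3000 IOPS would be provisioned at the 500 IOPS ceiling, while the same request on SSD would be honored in full.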
Described herein are systems, methods, and software to manage applications, databases, data centers, and personnel during mergers and acquisitions. In one implementation, a management service migrates one or more applications from one or more data centers associated with a first company to one or more data centers associated with a second acquiring company. The management service further monitors resource usage associated with the one or more applications and determines a configuration for deploying the one or more applications based on the resource usage. The configuration defines at least execution locations in a set of data centers for the second company.
One example method to perform storage capacity planning in a hyper-converged infrastructure (HCI) environment is disclosed. The method includes obtaining historical storage capacity usage data of a set of virtual storage area network (vSAN) clusters, processing the historical storage capacity usage data to generate processed historical storage capacity usage data, training a machine learning model with the processed historical storage capacity usage data to generate a first trained machine learning model, and in response to a first vSAN cluster being newly deployed in the HCI environment, dispatching the first trained machine learning model to the first vSAN cluster.
Example methods and systems for implementing a process-aware identity firewall are described. In one example, a computer system may detect a request for a virtualized computing instance to access a resource. The computer system may obtain (a) identity information identifying a user or a user device associated with the virtualized computing instance, (b) network event information associated with the request, and (c) process information associated with a process that initiates the request to access the resource. The computer system may map the identity information, the network event information, and the process information to an identity firewall rule that includes at least (a) a first parameter that is mappable to the identity information, (b) a second parameter that is mappable to the network event information, and (c) a third parameter that is mappable to the process information. The identity firewall rule may be applied to allow or block the request to access the resource.
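The three-way mapping in the identity-firewall abstract can be sketched as matching the observed attributes against rule parameters. Rule fields, attribute values, and the default-deny policy below are all illustrative assumptions.

```python
# Hypothetical sketch: an identity firewall rule carries one parameter
# per observed attribute (identity, network event, process), and the
# first matching rule's action decides the request.
def match_rule(rule, identity, network_event, process):
    return (rule["user"] == identity
            and rule["event"] == network_event
            and rule["process"] == process)

def apply_firewall(rules, identity, network_event, process):
    for rule in rules:
        if match_rule(rule, identity, network_event, process):
            return rule["action"]          # "allow" or "block"
    return "block"                         # assumed default-deny
```

The process parameter is what makes the firewall process-aware: the same user performing the same network event is treated differently depending on which process initiated the request.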
Some embodiments provide a method for using a first SDN controller as a Network Controller as a Service (NCaaS). The first SDN controller receives a first set of network attributes regarding network elements in a first container cluster configured by a second SDN controller, and a second set of network attributes regarding network elements in a second container cluster configured by a third SDN controller. These container clusters do not have a controller for defining particular network policies. Based on the sets of network attributes, the first SDN controller defines the particular network policies to control forwarding data messages between the first and second container clusters. The first SDN controller distributes at least a subset of the particular network policies to the first container cluster in order for network elements at the first container cluster to enforce on data messages exchanged between the first and second container clusters.
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
Some embodiments of the invention provide a method for providing a functions as a service (FaaS). The method is performed at a FaaS framework executing in a first cloud. The method receives a first API (Application Programming Interface) call invoking a particular function that is stored by the FaaS framework. The method selects, from multiple cloud providers, a particular cloud provider for executing the particular function. The method sends executable code for the particular function in a format compatible with the selected particular cloud provider to the particular cloud provider to use to instantiate and execute the particular function.
Some embodiments of the invention provide a method of implementing a FaaS (functions as a service) framework executing in a first cloud for multiple applications operating on multiple machines in the first cloud. The method provides to the FaaS framework (1) multiple sets of credentials for accessing multiple cloud providers and (2) a set of selection rules for selecting cloud providers from the multiple cloud providers to execute multiple functions for the multiple applications. For each particular function in the multiple functions, the method configures the FaaS framework to use the set of selection rules to select a particular cloud provider from the multiple cloud providers to execute the particular function, and configures the FaaS framework to use a particular set of credentials associated with the selected particular cloud provider from the multiple sets of credentials to forward the particular function to the selected particular cloud provider for execution.
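The two FaaS abstracts above share one pattern: selection rules pick a cloud provider for each function, and the framework forwards the function using that provider's credentials. A minimal sketch with invented rule and credential structures:

```python
# Hypothetical sketch: ordered selection rules choose a provider for a
# function, and dispatch pairs the function with that provider's
# credentials. Provider names and credential values are made up.
CREDENTIALS = {"cloud-a": "key-a", "cloud-b": "key-b"}

def select_provider(function_name, rules):
    # rules: ordered list of (predicate, provider); first match wins.
    for predicate, provider in rules:
        if predicate(function_name):
            return provider
    raise LookupError("no provider matches " + function_name)

def dispatch(function_name, rules):
    provider = select_provider(function_name, rules)
    return {"provider": provider, "credential": CREDENTIALS[provider]}
```

With a rule routing image-processing functions to one provider and a catch-all rule for the rest, each invocation is forwarded with the matching credential set, keeping per-provider authentication out of the applications themselves.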