A computer system comprises a plurality of endpoints at which security agents generate security alerts and a machine-learning (ML) system that receives the security alerts from the endpoints and that separates the security alerts into a plurality of clusters, wherein the ML system is configured to execute on a processor of a hardware platform to: determine that a group of first alerts of the security alerts belongs to a first cluster of the clusters; create a first representative alert from metadata of the first alerts belonging to the first cluster; and in response to a security analytics platform evaluating the first representative alert as being harmless to the computer system, store information indicating that all of the first alerts are harmless.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
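The clustering-and-representative workflow above can be sketched in Python; the alert schema (`id`, `cluster_id`, `metadata`), the shared-metadata-intersection rule for building the representative, and the `evaluate` callback standing in for the security analytics platform are illustrative assumptions, not the patented implementation:

```python
from collections import defaultdict

def representative_alert(alerts):
    """Build a representative alert from metadata fields shared by every alert in a cluster."""
    shared = dict(alerts[0]["metadata"])
    for alert in alerts[1:]:
        shared = {k: v for k, v in shared.items() if alert["metadata"].get(k) == v}
    return {"metadata": shared}

def triage(alerts, evaluate):
    """Group alerts by cluster; if the representative is judged harmless,
    record every member of that cluster as harmless."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[alert["cluster_id"]].append(alert)
    verdicts = {}
    for cid, members in clusters.items():
        rep = representative_alert(members)
        if evaluate(rep) == "harmless":  # call out to the analytics platform
            verdicts.update({a["id"]: "harmless" for a in members})
    return verdicts
```

The payoff is that one analytics-platform evaluation settles a whole cluster instead of one alert.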
System and computer-implemented method for detecting and reconciling moved workloads for a management component in a computing environment determines workloads that have moved as moved workloads based on received data at the management component. For a first moved workload with an associated workload, workload metadata is swapped with the associated workload and the first moved workload is updated in the management component. For a second moved workload without an associated workload, the second moved workload is preserved as a preserved workload for further processing.
An example method of identifying resources deployed in clouds in a computing system includes: receiving, at an asset scanner executing in a data center, billing artifacts from the clouds, the billing artifacts relating resources deployed in the clouds with identification and usage information; transforming, by the asset scanner, the billing artifacts into transformed billing artifacts, each transformed billing artifact having entries that relate one of the resources to a selected portion of the identification and usage information; generating, by the asset scanner, a plurality of jobs to process the resources; and processing, by the asset scanner, the plurality of jobs to update a database that relates the resources and the selected portion of the identification and usage information.
Managing cloud snapshots in a development platform is described herein. One example method includes creating a snapshot of a virtual computing instance (VCI), provided by a cloud provider, using a development platform, receiving a request to revert to the snapshot, and performing a revert operation responsive to receiving the request. The revert operation can include creating a new boot disk on the cloud provider to replace a current boot disk in the development platform, creating a new data disk to replace a current data disk associated with the VCI, powering off the VCI and detaching the boot disk and the data disk, attaching the new boot disk and the new data disk to the VCI, powering on the VCI, and deleting the detached boot disk and the detached data disk.
A computer system comprises a machine-learning (ML) system at which alerts are received from endpoints, wherein the ML system is configured to: upon receiving a first alert and a second alert, apply an ML model to the first and second alerts; based at least in part on the first alert being determined to belong to a first cluster of the ML system, classify the first alert into one of a plurality of alert groups, wherein alerts classified into a first alert group of the alert groups are assigned a higher priority for security risk evaluation than alerts classified into a second alert group of the alert groups; and based on the second alert being determined to not belong to any cluster of the ML system, analyze a chain of events that triggered the second alert to determine whether there is suspicious activity associated with the second alert.
System and computer-implemented method for reconciling moved workloads for a management component in a computing environment determines whether an updated workload has a tracking marker that moves with the workload and requires remediation. When the tracking marker is found in an inventory database of the management component, the metadata of the workload is reconciled in the management component.
System and computer-implemented method for reconciling moved workloads for a management component in a computing environment uses a remediation queue to enqueue a remediation entry for a workload that has moved within the computing environment. The remediation entry for the workload is dequeued from the remediation queue and a remediation service on the remediation entry for the workload is executed to update metadata for the workload in the management component. A processing status of the remediation entry for the workload is stored at the management component.
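A minimal sketch of the queue-driven remediation loop, assuming dict-shaped remediation entries and a `remediate` callable standing in for the remediation service; the returned status map plays the role of the processing status stored at the management component:

```python
from collections import deque

def process_remediations(queue, remediate):
    """Drain the remediation queue: run the remediation service on each
    dequeued entry and record a per-workload processing status."""
    status = {}
    while queue:
        entry = queue.popleft()
        try:
            remediate(entry)  # updates workload metadata in the management component
            status[entry["workload"]] = "done"
        except Exception:
            status[entry["workload"]] = "failed"
    return status
```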
Some embodiments provide a method for configuring a network to bridge data messages between a hardware-implemented L2 overlay network segment and a software-implemented L2 overlay network segment. The method identifies a host computer on which a logical network endpoint connected to the software-implemented overlay executes. The hardware-implemented L2 overlay connects at least a first set of network endpoints located in a first physical network zone and connected to a first L2 network segment and a second set of network endpoints located in a second physical network zone and connected to a second L2 network segment. The identified host computer is located in the first physical network zone. The method configures a forwarding element executing on the host computer to bridge data messages between the logical network endpoint and (i) the first set of network endpoints and (ii) the second set of network endpoints.
The disclosure provides a method for assigning new load to an application instance in a public cloud. The method generally includes calculating, for each application instance of a plurality of application instances running in the public cloud, a respective resource utilization score, wherein for each application instance: the respective score is calculated by applying, for each of two or more resource utilization metrics associated with the application instance, a respective weight to a respective resource usage value for the resource utilization metric, and wherein, for each of the two or more resource utilization metrics, the respective weight is a function of the respective resource usage values for the two or more resource utilization metrics; identifying an application instance having a highest respective score among the respective scores calculated for the application instances; and determining whether the application instance having the highest respective score is capable of handling the new load.
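The usage-dependent weighting can be illustrated as follows; normalizing each metric's usage value into its own weight (so hotter metrics count more) is one plausible reading of "the respective weight is a function of the respective resource usage values", not the disclosed formula:

```python
def utilization_score(usages):
    """Weighted utilization score in which each metric's weight is derived
    from the usage values themselves (here: share of total usage)."""
    total = sum(usages.values())
    if total == 0:
        return 0.0
    weights = {m: v / total for m, v in usages.items()}
    return sum(weights[m] * usages[m] for m in usages)

def pick_instance(instances):
    """Return the instance id with the highest score plus all scores; a
    separate capacity check would then decide if it can take the new load."""
    scores = {iid: utilization_score(u) for iid, u in instances.items()}
    return max(scores, key=scores.get), scores
```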
This disclosure is directed to automated computer-implemented methods for application discovery from log messages generated by event sources of applications executing in a cloud infrastructure. The methods are executed by an operations manager that constructs a data frame of probability distributions of event types of the log messages generated by the event sources in a time period. The operations manager executes clustering techniques that are used to form clusters of the probability distributions in the data frame, where each of the clusters corresponds to one of the applications. The operations manager displays the clusters of the probability distributions in a two-dimensional map of applications in a graphical user interface that enables a user to select one of the clusters in the map of applications that corresponds to one of the applications and launch clustering of probability distributions of the user-selected cluster to discover two or more instances of the application.
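The data frame of event-type probability distributions can be sketched like this; the per-source log representation and the plain-dict "data frame" are assumptions, and a clustering step over the rows (e.g. k-means) would follow to group sources into applications:

```python
from collections import Counter

def event_type_distribution(event_types):
    """Probability distribution over event types for one event source
    within a time period."""
    counts = Counter(event_types)
    total = sum(counts.values())
    return {etype: n / total for etype, n in counts.items()}

def build_data_frame(sources):
    """One row of probabilities per event source, over a shared, sorted
    vocabulary of event types; similar rows indicate the same application."""
    etypes = sorted({t for logs in sources.values() for t in logs})
    return {src: [event_type_distribution(logs).get(t, 0.0) for t in etypes]
            for src, logs in sources.items()}
```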
Some embodiments of the invention provide a method for an interference detection RAN application deployed across one or more RICs for detecting and identifying external interference in a RAN that includes multiple RAN base stations servicing users located across multiple regions, each region including at least one RAN base station. The method is performed for a particular region serviced by a particular RAN base station. The method detects an interference incident associated with the particular region. The method analyzes a pattern of spectrum interference associated with the particular region. Based on said analysis, the method determines whether the pattern of spectrum interference matches a first signature pattern associated with internal interference or a second signature pattern associated with external interference. When the pattern of spectrum interference matches the second signature pattern, the method generates an alert to notify an operator of the particular RAN base station of the external interference.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
Some embodiments of the invention provide a method of implementing a virtualization software-based service mesh for a network that includes multiple host computers, each host computer including a set of virtualization software executing a set of application instances. For each host computer, the method deploys, to the set of virtualization software, an application service agent and an application service data plane that includes a set of data plane service mesh levels. The method configures the application service agent to apply policy rules defined for flows associated with the set of application instances to the flows on the application service data plane, and configures the application service data plane to forward the flows for the set of application instances to and from services provided at each data plane service mesh level in the set of data plane service mesh levels according to the policy rules applied by the application service agent.
A method of managing desired states of software-defined data centers (SDDCs), includes the steps of: in response to a user selection of a first modular template that includes a first set of desired configurations and a user selection of a second modular template that includes a second set of desired configurations, creating a composite template that includes desired configurations from the first and second modular templates; and in response to a user selection to assign the composite template to an SDDC, creating a desired state document that includes desired configurations from the composite template and then transmitting an instruction to update actual configurations of the SDDC to match corresponding desired configurations from the desired state document.
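A minimal sketch of composing modular templates into a desired-state document; the flat key-value template shape and the later-template-wins conflict rule are assumptions:

```python
def compose_templates(*templates):
    """Merge desired configurations from modular templates into a composite
    template; on key conflicts, the later template wins."""
    composite = {}
    for template in templates:
        composite.update(template)
    return composite

def desired_state_document(composite, sddc_id):
    """Desired-state document that an SDDC's actual configuration is then
    driven to match."""
    return {"sddc": sddc_id, "desired": dict(composite)}
```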
The current document is directed to an infrastructure-as-code (“IaC”) cloud-infrastructure-management service or system that allows users and upstream management systems to define and deploy infrastructure, such as virtual networks, virtual machines, load balancers, and connection topologies, within cloud-computing systems. The IaC cloud-infrastructure-management service or system includes a service frontend, a task manager, an event-processing component, and multiple Idem-service workers. The task manager manages execution of commands and requests received from the service frontend, using multiple queues, provides for prioritization of command-and-request execution by the multiple Idem-service workers, and provides for preemption of long-running executing commands and requests. The IaC cloud-infrastructure-management service or system enforces specified states of the cloud infrastructure using enforced-state identifiers and enforced-state versions supplied in state commands and enforce requests.
System and computer-implemented method for managing placements of software components in host computers of a computing environment uses placement rules for a software component, which are automatically generated based on user input received at a managed entity in the computing environment. The placement rules are transmitted to a management entity that controls placement and migration of software components in the computing environment using a resource scheduler. The placement rules are then provided to the resource scheduler to be applied to the software component for placement operations.
Disclosed are various embodiments relating to a security framework for media playback. In one embodiment, a client device has a decryption module, a streaming module, and a playback module. The playback module may be configured to request media data from the streaming module and render the media data on an output device. The streaming module may be configured to obtain the media data from the decryption module by a request that specifies a size of the media data. The size may be dynamically determined based at least in part on an amount of available temporary data storage. The decryption module may be configured to decrypt a portion of an encrypted media file based at least in part on the specified size to produce the media data.
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser; using a touch-screen or digitiser, e.g. input of commands through traced gestures; for inputting data by handwriting, e.g. gesture or text
G06F 3/04842 - Selection of displayed objects or displayed text elements
H04N 21/2347 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs; involving video stream encryption
H04N 21/4405 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs; involving video stream decryption
17.
ADAPTIVE PRIVILEGE ADJUSTMENT FOR LEAST PRIVILEGE ACCESS
Techniques associated with adaptive privilege adjustment are disclosed. A least privilege access role for an entity can be received from an access control system that provides the entity a plurality of privileges to access a plurality of resources in a data center. Access by the entity to one or more resources of the data center can be monitored, and based on the access by the entity, it can be determined that the entity does not access at least one resource of the plurality of resources. The least privilege access role can be updated subsequently for the entity to remove at least one privilege of the plurality of privileges for accessing the at least one resource. The least privilege access role for the entity can be applied to the access control system to remove access to the at least one resource for the entity.
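The monitor-then-tighten step can be sketched as a set filter; the flat resource-to-privilege role shape is an assumed simplification:

```python
def tighten_role(role_privileges, accessed_resources):
    """Update a least-privilege role: drop privileges for resources the
    entity never accessed during the monitoring window."""
    return {resource: privilege
            for resource, privilege in role_privileges.items()
            if resource in accessed_resources}
```

The returned role would then be applied back to the access control system, revoking access to the unused resources.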
Some embodiments provide a novel method for dynamically processing data message flows using different non-uniform memory access (NUMA) nodes of a processing system. Each NUMA node includes a memory and processors that can access data from the memories of other NUMA nodes. A load balancing application associated with a first NUMA node receives flows destined for an endpoint application. The flows are assigned to the first NUMA node to be forwarded to the endpoint application. The load balancing application monitors the central processing unit (CPU) usage of the first NUMA node to determine whether it exceeds a particular threshold. When the CPU usage of the first NUMA node exceeds the particular threshold, the load balancing application reassigns at least a subset of the flows to a second NUMA node for processing.
Some embodiments provide a novel method for processing data message flows using several non-uniform memory access (NUMA) nodes of a processing system. Each NUMA node includes a local memory and a set of processors that can access data from local memories of other NUMA nodes. A load balancing application associated with a first NUMA node receives a data message flow destined for an endpoint application. The load balancing application determines whether the first NUMA node should perform a middlebox service operation on the data message flow that is destined to the endpoint application. Based on a determination that the first NUMA node should not process the data message flow, the load balancing application directs the data message flow to a second NUMA node for performing the middlebox service operation.
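The threshold-based spillover described in the two NUMA abstracts above reduces to a simple decision; the 80% threshold and the two-node topology are illustrative:

```python
def assign_numa_node(cpu_usage, threshold=0.8, nodes=("numa0", "numa1")):
    """Keep work on the first NUMA node until its CPU usage crosses the
    threshold, then spill the flow over to the second node."""
    return nodes[0] if cpu_usage[nodes[0]] < threshold else nodes[1]
```

Keeping a flow on one node avoids the cross-node memory accesses that NUMA penalizes; the spillover only pays that cost once the local node is saturated.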
The disclosure provides a method for determining a target configuration for a container-based cluster. The method generally includes determining, by a virtualization management platform configured to manage components of the cluster, a current state of the cluster, determining, by the virtualization management platform, at least one of performance metrics or resource utilization metrics for the cluster based on the current state of the cluster, processing, with a model configured to generate candidate configurations recommended for the cluster, the current state and at least one of the performance metrics or the resource utilization metrics and thereby generate the candidate configurations, calculating a reward score for each of the candidate configurations, selecting the target configuration as a candidate configuration from the candidate configurations based on the reward score of the target configuration, and adjusting configuration settings for the cluster based on the target configuration to alter the current state of the cluster.
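The reward-scored selection loop might look like this; the reward function (performance minus a weighted resource cost) is purely illustrative, since the disclosure leaves the scoring function open:

```python
def reward(config, perf, util):
    """Illustrative reward: favor predicted performance, penalize predicted
    resource utilization."""
    name = config["name"]
    return perf.get(name, 0.0) - 0.5 * util.get(name, 0.0)

def select_target(candidates, perf, util):
    """Score every candidate configuration and pick the highest-reward one
    as the target configuration for the cluster."""
    scores = {c["name"]: reward(c, perf, util) for c in candidates}
    best = max(candidates, key=lambda c: scores[c["name"]])
    return best, scores
```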
Chunks of data are identified and deduplication is performed on the chunks of data using associated cyclic redundancy check (CRC) values. A plurality of CRC values is obtained that is associated with consecutive data blocks stored in a payload data store. Cut point CRC values are identified in the plurality of CRC values and CRC chunks are identified based on those cut point CRC values, wherein each CRC chunk is bounded by two consecutive cut point CRC values. A CRC chunk hash value is generated for each CRC chunk. A pair of duplicate CRC chunks is identified using the CRC chunk hash values and a deduplication operation is performed in association with the identified pair of duplicate CRC chunks. Using existing CRC values during the identification of chunk cut points reduces the computing resource costs associated with performing that process using the data blocks.
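A sketch of CRC-based cut-point chunking and duplicate detection, assuming CRC32 values per fixed-size block and a low-bits-zero cut-point test (the mask width, which controls average chunk size, is a tunable assumption):

```python
import zlib

def crc_chunks(blocks, mask=0x7):
    """Split a sequence of data blocks into CRC chunks: a block's CRC32 is a
    cut point when its low bits are all zero, and each chunk is bounded by
    two consecutive cut points."""
    crcs = [zlib.crc32(b) for b in blocks]
    cuts = [i for i, crc in enumerate(crcs) if crc & mask == 0]
    return [tuple(crcs[a:b]) for a, b in zip(cuts, cuts[1:])]

def find_duplicates(chunks):
    """Hash each CRC chunk; matching hashes (confirmed by comparison) flag
    duplicate chunks as deduplication candidates."""
    seen, dups = {}, []
    for idx, chunk in enumerate(chunks):
        h = hash(chunk)
        if h in seen and chunks[seen[h]] == chunk:
            dups.append((seen[h], idx))
        else:
            seen[h] = idx
    return dups
```

Because the CRCs already exist for integrity checking, the cut-point scan never has to touch the payload data itself, which is the cost saving the abstract points to.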
An example method of accessing object data managed by virtual infrastructure (VI) services of virtualization management software that manages a cluster of hosts in a data center and a virtualization layer executing in the cluster of hosts includes: receiving, from a client at a unified data service executing in the virtualization management software, a request for accessing the object data; planning, in response to the request, an operation to access the object data that targets a first VI service of the VI services; invoking, in response to the operation, an application programming interface (API) of the first VI service to access the object data, the API being exposed by a unified data library integrated with the first VI service; and forwarding, from the unified data service to the client, a result of accessing the object data.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
23.
METHOD AND SYSTEM TO PERFORM COMPLIANCE AND AVAILABILITY CHECK FOR INTERNET SMALL COMPUTER SYSTEM INTERFACE (ISCSI) SERVICE IN DISTRIBUTED STORAGE SYSTEM
One example method for a host in a virtual storage area network (vSAN) cluster to support vSAN Internet small computer system interface (iSCSI) target services in a distributed storage system of a virtualization system is disclosed. The method includes obtaining ownership information of a target and determining, from the ownership information, whether the host is an owner of the target. In response to determining that the host is the owner of the target, the method further includes determining whether the host commits to a policy provided by the vSAN to support the vSAN iSCSI target services. In response to determining that the host fails to commit to the policy, the method includes reporting a warning message.
A machine-learning (ML) platform at which alerts are received from endpoints and divided into a plurality of clusters, wherein a plurality of alerts in each of the clusters is labeled based on metrics of maliciousness determined at a security analytics platform, the plurality of alerts in each of the clusters representing a population diversity of the alerts, and wherein the ML platform is configured to execute on a processor of a hardware platform to: select an alert from a cluster for evaluation by the security analytics platform; transmit the selected alert to the security analytics platform, and then receive a determined metric of maliciousness for the selected alert from the security analytics platform; and based on the determined metric of maliciousness, label the selected alert and update a rate of selecting alerts from the cluster for evaluation by the security analytics platform.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
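The feedback loop that updates the per-cluster selection rate can be sketched as follows; the multiplicative boost/decay factors and the maliciousness threshold are illustrative assumptions:

```python
def update_selection_rate(rate, maliciousness, threshold=0.5,
                          boost=1.5, decay=0.8, max_rate=1.0, min_rate=0.01):
    """Adapt how often alerts are sampled from a cluster for evaluation by
    the security analytics platform: a malicious verdict raises the
    cluster's sampling rate, a benign verdict lowers it."""
    if maliciousness >= threshold:
        return min(max_rate, rate * boost)
    return max(min_rate, rate * decay)
```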
25.
DYNAMIC SCALING OF EDGE FORWARDING ELEMENTS BASED ON PREDICTED METRICS OF MACHINE AND UPLINK INTERFACE USAGE IN A SOFTWARE DEFINED DATACENTER
Some embodiments provide a novel method for preemptively deploying gateways in a first network to one or more external networks. The first network of some embodiments includes a first gateway connecting to the one or more external networks. The method collects a set of statistics for the first gateway associated with bandwidth usage of the first gateway. The method determines that a second gateway needs to be deployed in the first network (1) by using the collected set of statistics to perform predictive modeling computations to predict a future load on the first gateway, and (2) by determining that the predicted future load exceeds a particular threshold. The method distributes a set of one or more forwarding rules to forward data message flows from a subset of machines in the first network to a particular external network through the second gateway.
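A minimal stand-in for the predictive modeling step: a least-squares linear trend over recent bandwidth samples, extrapolated a few steps ahead and compared against the deployment threshold (the real embodiments may use any predictive model):

```python
def predict_load(samples, horizon=5):
    """Fit a least-squares line to recent bandwidth samples (needs at least
    two) and extrapolate `horizon` steps past the last sample."""
    n = len(samples)
    xs = range(n)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / denom
    return mean_y + slope * ((n - 1) + horizon - mean_x)

def needs_second_gateway(samples, threshold):
    """Deploy a second gateway when the predicted future load exceeds the threshold."""
    return predict_load(samples) > threshold
```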
This disclosure is directed to methods and system for intelligent roaming of user equipment (“UE”) of a home network onto a visited network. The methods and systems monitor performance of voice and data services for UEs in coverage areas of edge cell sites of the home network. The methods and systems determine which UEs in the coverage areas of the edge cell sites to roam on the visited network based on decreases in voice and data services. The UEs in the coverage areas selected for roaming are pushed to roam on the visited network by sending a signal that instructs selected UEs to switch into roaming mode while the UEs are still in the coverage areas of the edge cell sites.
Custom resource schema modification is described herein. One example method includes providing an interface for modifying a schema of a custom resource in a virtualized environment. The interface can include a first portion configured to receive modifications to summary information corresponding to the custom resource and a second portion configured to receive modifications to properties corresponding to the schema of the custom resource. The method can include validating the modified schema, and saving the modified schema of the custom resource responsive to the validation being successful.
Disclosed are examples of accountable decentralized anonymous payment systems and methods. One such method comprises storing, in a digital wallet, a digital coin that has been signed by a bank computing device; rerandomizing the digital coin and a coin signature to produce a new version of the digital coin that is anonymous with respect to an owner of the digital coin; sending the new version of the digital coin to a recipient computing device; computing a nullifier for the new version of the digital coin using a pseudorandom function over a serial number of the digital coin; sending the nullifier for the new version of the digital coin to the bank computing device; and providing the bank computing device a zero knowledge proof that a value of the nullifier for the new version of the digital coin is correct and is the same as a nullifier of the digital coin.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
29.
PROVISIONING IMAGES TO DEPLOY CONTAINERIZED WORKLOADS IN A VIRTUALIZED ENVIRONMENT
A method for provisioning images to deploy containerized workloads in a virtualized environment can include bringing up a containerized workload in a virtualized computing environment responsive to receiving a request to run the containerized workload. Bringing up the containerized workload can include: creating a virtual machine disk (VMDK) that includes a container image in shared storage of an image registry, responsive to authenticating with the image registry; attaching the VMDK to a virtual computing instance (VCI); responsive to receiving a request, made by a container running in the VCI, for a file of the container image in the attached VMDK, retrieving the file from the shared storage; and bringing up the containerized workload using the file.
Example methods and systems for priority-based network bandwidth allocation are described. In one example, a first computer system may detect an event indicating that network bandwidth allocation is required for a virtualized computing instance. The first computer system may identify, from multiple priority levels, a first priority level that is associated with (a) the virtualized computing instance, (b) a logical network element to which the virtualized computing instance is attached, or (c) a group that includes the virtualized computing instance or the logical network element. The first computer system may obtain network bandwidth capacity information associated with physical network adapter(s) capable of forwarding traffic associated with the virtualized computing instance. Based on the first priority level and the network bandwidth capacity information, the first computer system may configure a priority-based network bandwidth allocation policy that includes parameter(s) applicable to the traffic associated with the virtualized computing instance.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
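One way the priority level and physical NIC capacity could combine into policy parameters; the share table and the 25% reservation fraction are invented for illustration:

```python
def bandwidth_policy(priority, capacity_mbps, shares=None):
    """Derive priority-based allocation parameters from a physical network
    adapter's capacity: reservation and limit scale with the priority's
    share of the total shares."""
    shares = shares or {"high": 4, "normal": 2, "low": 1}
    weight = shares[priority] / sum(shares.values())
    return {
        "shares": shares[priority],
        "reservation_mbps": round(capacity_mbps * weight * 0.25, 3),
        "limit_mbps": round(capacity_mbps * weight, 3),
    }
```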
31.
MANAGING CRYPTOGRAPHIC COMPLIANCE ON A COMPUTING DEVICE USING A DISTRIBUTED LEDGER
Disclosed are various embodiments for binding the configuration state of client devices to the blockchain and utilizing the binding for managing cryptographic compliance. A management agent can send a request to a smart contract hosted by a blockchain network for a zero-knowledge proof (ZKP) of a configuration state for a computing device, the state including cryptographic policies. Cryptographic operations performed by the client device can be performed by complying with the policies stored on the blockchain network.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Example methods and systems for security threat analysis are described. One example may involve a first computer system configuring a test packet that includes malicious content for forwarding along a network path between (a) a first network element that is connected with a first virtualized computing instance and (b) a second network element that is connected with a second virtualized computing instance. The test packet may be injected at the first network element and forwarded towards the second network element. In response to a security checkpoint detecting the test packet, the security checkpoint may apply one or more security policies on the test packet; and generate and send report information towards a management entity. The report information may indicate whether the malicious content in the test packet is detectable based on the one or more security policies.
Systems and methods are described for providing ways to optimize client performance during screen data streaming. Periods of time a client takes to render frames can be tracked and the frame rendering times can be analyzed to determine when the client performance is insufficient. If the client rendering performance is determined to be insufficient, the encoding method can be dynamically modified in ways preferable for improving the client rendering performance. Different approaches can be utilized for calculating metrics and determining when to modify the encoding method, such as linear interpolation or by taking averages of frame rendering times in a moving sample window.
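The moving-window averaging approach mentioned above can be sketched as follows; the window size and the roughly 60 fps frame budget are illustrative defaults:

```python
from collections import deque

class RenderMonitor:
    """Track client frame render times in a moving sample window; when the
    window average exceeds the frame budget, signal that the encoding
    method should be switched to a cheaper mode."""

    def __init__(self, window=30, budget_ms=16.7):
        self.times = deque(maxlen=window)  # old samples drop out automatically
        self.budget_ms = budget_ms

    def record(self, render_ms):
        self.times.append(render_ms)

    def should_reduce_quality(self):
        if not self.times:
            return False
        return sum(self.times) / len(self.times) > self.budget_ms
```

Averaging over a window rather than reacting to single frames keeps one slow frame from triggering an unnecessary encoder switch.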
To populate an entropy pool with entropy from external sources, a computer system transmits, to multiple entropy sources, a request to receive entropy. At least one of the multiple entropy sources is an external source that is external and operatively connected to the computer system. The computer system receives entropy from the external source. The computer system stores the entropy received from the external source in an entropy storage medium. The computer system receives, from a client computer system, a request for entropy to be used by the client computer system to implement a random number generation algorithm. In response to receiving the request, the computer system provides a portion of the stored entropy. The portion of the stored entropy provided in response to receiving the request includes the entropy received from the external source.
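A simplified entropy-pool sketch, assuming SHA-256 mixing of externally sourced bytes; a production pool would also estimate the entropy content and refuse or stretch output when the pool runs low:

```python
import hashlib

class EntropyPool:
    """Pool entropy gathered from external sources and hand out portions to
    client systems seeding their random number generators."""

    def __init__(self):
        self._pool = b""

    def add(self, entropy: bytes):
        # mix new bytes into the pool via a hash rather than storing them raw
        self._pool = hashlib.sha256(self._pool + entropy).digest() + self._pool

    def get(self, n: int) -> bytes:
        # serve a portion of stored entropy; it is consumed, not reused
        portion, self._pool = self._pool[:n], self._pool[n:]
        return portion
```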
Some embodiments of the invention provide a method for providing automated admission control services for a RAN system. The method receives a trigger alert that includes an application identifier for an application, a dRIC identifier associated with a dRIC to which the application is to be deployed, and a set of configurations for the application that are in a first format. The method converts the set of configurations from the first format to a second format and sends the set of configurations in the second format to an FCAPS management pod deployed to the dRIC. Upon receiving positive acknowledgment indicating successful implementation of the set of configurations from the FCAPS management pod, the method updates a configuration table stored in a database of the RAN with a set of admissions control information for the application. The method sends a notification to an API server for the RAN indicating the set of configurations have been successfully implemented for the application.
Some embodiments use one or more CRDs (custom resource definitions), and Custom Resource (CR) instances based on these CRDs, to dynamically generate a unified user interface (UI) to display information (e.g., operational metrics) regarding different applications (xApps, rApps, etc.) in the O-RAN system. Using such CRDs and CR instances frees application developers from having to define the UI programs for generating the UIs for their RAN applications. It also allows the O-RAN system to provide one unified approach for generating a UI to display information about O-RAN applications developed by different application developers.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/0233 - Object-oriented techniques, for representation of network management data, e.g. common object request broker architecture [CORBA]
37.
AGGREGATING METRICS OF NETWORK ELEMENTS OF A SOFTWARE-DEFINED NETWORK FOR DIFFERENT APPLICATIONS BASED ON DIFFERENT AGGREGATION CRITERIA
Some embodiments provide a novel method of providing operational data for network elements in a software-defined network (SDN). The method deploys a framework for collecting operational data for a set of network elements in the SDN. The framework of some embodiments includes an interface for different client applications to use in order to configure the framework to collect and aggregate the operational data based on different collection and aggregation criteria that satisfy different requirements of the different client applications. The method also deploys data collectors in the SDN that the framework configures to collect operational data from the set of network elements in the SDN.
Some embodiments of the invention provide a method for WAN (wide area network) optimization for a WAN that connects multiple sites, each of which has at least one router. At a gateway router deployed to a public cloud, the method receives, from at least two routers at at least two sites, multiple data streams destined for a particular centralized datacenter. The method performs a WAN optimization operation to aggregate the multiple streams into one outbound stream that is WAN optimized for forwarding to the particular centralized datacenter. The method then forwards the WAN-optimized data stream to the particular centralized datacenter.
Some embodiments provide a method for deploying network management services for a plurality of tenants. The method is performed at a multi-tenant service executing in a container cluster implemented in a public cloud. For a first tenant, the method deploys a first set of network management services in the container cluster for managing a first group of datacenters of the first tenant. For a second tenant, the method deploys a second set of network management services in the container cluster for managing a second group of datacenters of the second tenant.
The disclosure provides an approach for load balancing requests among data centers based on one or more environmental impact factors of the data centers. A method of load balancing requests among data centers is provided. The method includes receiving, at a load balancer from a client, a service request. The method further includes selecting, by the load balancer, a first data center of a plurality of data centers based on one or more environmental impact factors associated with each of the plurality of data centers. The method further includes causing the service request to be serviced by the selected first data center.
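As a rough illustration of selecting a data center by environmental impact, the sketch below scores each data center on illustrative factors and picks the lowest; the factor names and the summing policy are assumptions, not from the disclosure:

```python
def select_data_center(data_centers):
    """Pick the data center with the lowest combined environmental impact score.

    `data_centers` maps a data center name to a dict of normalized impact
    factors (illustrative names and weighting).
    """
    return min(data_centers, key=lambda dc: sum(data_centers[dc].values()))

dcs = {
    "dc-east": {"carbon_intensity": 0.7, "water_usage": 0.2},  # total 0.9
    "dc-west": {"carbon_intensity": 0.3, "water_usage": 0.4},  # total 0.7
}
choice = select_data_center(dcs)  # the service request is routed to "dc-west"
```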
An example virtualized computing system includes a host cluster having a virtualization layer directly executing on hardware platforms of hosts, the virtualization layer supporting execution of virtual machines (VMs), the VMs including pod VMs, the pod VMs including container engines supporting execution of containers in the pod VMs; an orchestration control plane integrated with the virtualization layer, the orchestration control plane including a master server and pod VM controllers, the pod VM controllers executing in the virtualization layer external to the VMs, the pod VM controllers configured as agents of the master server to manage the pod VMs; pod VM agents, executing in the pod VMs, configured as agents of the pod VM controllers to manage the containers executing in the pod VMs.
Some embodiments of the invention provide a method for enabling inter-gateway connectivity in an SD-WAN (software-defined wide area network) that connects multiple sites. The method deploys to the SD-WAN a floating hub gateway router that (1) connects to multiple gateway routers each of which is deployed in a cloud and connects to at least one edge router in at least one site, and (2) does not connect to edge routers at any site. The method provides a network address associated with the floating hub gateway router to the multiple gateway routers deployed in one or more clouds for the SD-WAN. The method configures the floating hub gateway router to establish a tunnel with each gateway router in the multiple gateway routers to enable inter-gateway connectivity between the multiple gateway routers.
Described herein are systems, methods, and software to manage multi-type storage in a cluster computing environment. In one example, a host can identify health and performance information at a first time for each local data store on the host and a hyperconverged data store available to the host. The host can further identify health and performance information associated with the data stores at a second time and can compare the health and performance information at the first time and the second time to identify differences in the information. The host then communicates the differences to a second host in the computing environment.
Some embodiments provide a method for evaluating a network correctness requirement at an evaluation program instance assigned to evaluate a particular network correctness requirement. The method identifies data message properties associated with the particular network correctness requirement. The method evaluates the particular network correctness requirement by (i) determining a path through a set of network devices for a data message having the identified data message properties and (ii) from a data storage that stores data message processing rules for a plurality of network devices including the set of network devices and additional network devices, retrieving and storing in memory data specifying data message processing rules for the set of network devices to use in evaluating the particular network correctness requirement.
The present disclosure relates to extending workload provisioning using a low-code development platform. Some embodiments include a medium having instructions to provide an interface for creating a custom resource in a virtualized environment, the interface including a first portion configured to receive summary information corresponding to the custom resource, and a second portion configured to receive a schema corresponding to the custom resource. Some embodiments include creating the custom resource according to the summary information and the schema.
An example method of automatically deploying a containerized workload on a hypervisor based device is provided. The method generally includes booting the device running a hypervisor, in response to booting the device: automatically obtaining, by the device, one or more intended state configuration files from a server external to the device, the one or more intended state configuration files defining a control plane configuration for providing services for at least deploying and managing the containerized workload and workload configuration parameters for the containerized workload; deploying a control plane pod configured according to the control plane configuration; deploying one or more worker nodes based on the control plane configuration, and deploying one or more workloads identified by the workload configuration parameters on the one or more worker nodes.
Some embodiments of the invention provide a method for implementing a software-defined private mobile network (SD-PMN) for an entity. At a physical location of the entity, the method deploys a first set of control plane components for the SD-PMN, the first set of control plane components including a security gateway, a user-plane function (UPF), an AMF (access and mobility management function), and an SMF (session management function). At an SD-WAN (software-defined wide area network) PoP (point of presence) belonging to a provider of the SD-PMN, the method deploys a second set of control plane components for the SD-PMN that includes a subscriber database that stores data associated with users of the SD-PMN. The method uses an SD-WAN edge router located at the physical location of the entity and an SD-WAN gateway located at the SD-WAN PoP to establish a connection from the physical location of the entity to the SD-WAN PoP.
H04W 84/04 - Large scale networks; Deep hierarchical networks
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
H04W 24/02 - Arrangements for optimising operational condition
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
Anomalies are detected in a distributed application that runs on a plurality of nodes to execute at least first and second workloads. The method of detecting anomalies includes collecting first network traffic data of the first workload and second network traffic data of the second workload during a first period of execution of the first and second workloads, collecting third network traffic data of the first workload and fourth network traffic data of the second workload during a second period of execution of the first and second workloads, and detecting an anomaly in the distributed application based on a comparison of the third network traffic data against the first network traffic data or a comparison of the fourth network traffic data against the second network traffic data. Anomalies may also be detected by comparing network traffic data of two groups of containers executing the same workload.
H04L 67/1029 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
H04L 67/1031 - Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
H04L 43/062 - Generation of reports related to network traffic
H04L 47/783 - Distributed allocation of resources, e.g. bandwidth brokers
H04L 43/04 - Processing captured monitoring data, e.g. for logfile generation
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
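The anomaly-detection abstract above compares traffic data collected for the same workloads in two periods. A minimal sketch of that comparison follows; the relative-change metric and the 50% threshold are illustrative assumptions, not taken from the publication:

```python
def detect_anomalies(baseline, current, threshold=0.5):
    """Flag workloads whose traffic deviates from the baseline period by more
    than `threshold` (relative change). Metric and threshold are illustrative."""
    anomalies = []
    for workload, base_bytes in baseline.items():
        cur = current.get(workload, 0)
        if base_bytes and abs(cur - base_bytes) / base_bytes > threshold:
            anomalies.append(workload)
    return anomalies

period1 = {"workload-a": 1000, "workload-b": 500}   # first collection period
period2 = {"workload-a": 1050, "workload-b": 2000}  # second collection period
flagged = detect_anomalies(period1, period2)  # workload-b's traffic quadrupled
```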
49.
OFFLOADING STATEFUL SERVICES FROM GUEST MACHINES TO HOST RESOURCES
Some embodiments of the invention provide a method for offloading one or more data message processing services from a machine executing on a host computer. The method is performed by the machine. The method uses a set of virtual resources allocated to the machine to perform a set of services for a first set of data messages belonging to a particular data message flow. The method determines that for a second set of data messages belonging to the particular data message flow, the set of services should be performed by a virtual network interface card (VNIC) that executes on the host computer and is attached to the machine. Based on the determination, the method directs the VNIC to perform the set of services for the second set of data messages. The VNIC uses resources of the host computer to perform the set of services for the second set of data messages.
Some embodiments of the invention provide a method for defining a telecommunications network deployment for a particular geographic region that includes a set of sub-regions. The telecommunications network includes an access network, an edge network, and a core network. The method is performed for each sub-region in the set of sub-regions. The method determines population density of UEs (user equipment) within the sub-region. Based on the determined population density, the method identifies an area type for the sub-region from a set of area types. The method simulates performance of the telecommunications network to explore, based on the identified area type, multiple configurations for access nodes that connect the UEs to the telecommunications network, each configuration in the multiple configurations indicating (1) a number of access nodes to be included in the telecommunications network deployment and (2) locations at which each access node is to be deployed. The method selects a particular configuration for access nodes from the multiple configurations for use in defining the telecommunications network deployment.
Disclosed are aspects of workload selection and placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some aspects, workloads are assigned to virtual graphics processing unit (vGPU)-enabled graphics processing units (GPUs). A number of vGPU placement neural networks are trained to maximize a composite efficiency metric based on workload data and GPU data for the plurality of vGPU placement models. A combined neural network selector is generated using the vGPU placement neural networks, and utilized to assign a workload to a vGPU-enabled GPU.
Computer-implemented methods, media, and systems for automating secured deployment of containerized workloads on edge devices are disclosed. One example computer-implemented method includes receiving, by a software defined wide area network (SD-WAN) edge device and from a remote manager, resource quotas for a compute service to be enabled at the SD-WAN edge device. Pre-deployment sanity checks are performed by confirming availability of resources satisfying the resource quotas, where the resources are at the SD-WAN edge device. In response to the confirmation of the availability of resources satisfying the resource quotas, one or more security constructs are set up to isolate SD-WAN network functions at the SD-WAN edge device from the compute service at the SD-WAN edge device. The compute service is attached to an SD-WAN network by the SD-WAN edge device. An acknowledgement that the compute service is enabled at the SD-WAN edge device is sent to the remote manager.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 41/5051 - Service on demand, e.g. definition and deployment of services in real time
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
Some embodiments provide a novel method for performing services on a host computer that executes several data compute nodes (DCNs). The method receives, at a module executing on the host, a data message associated with a DCN executing on the host. The method supplies the data message to a service virtual machine (SVM) that executes on the host and on which several service containers execute. One or more of the service containers then perform a set of one or more services on the data message. The method then receives an indication from the SVM that the set of services has been performed on the data message.
A system for private networking within a virtual infrastructure is presented. The system includes a virtual machine (VM) in a first host, the VM being associated with a first virtual network interface card (VNIC), a second VM in a second host, the second VM being associated with a second VNIC, the first and second VNICs being members of a fenced group of computers that have exclusive direct access to a private virtual network, wherein VNICs outside the fenced group do not have direct access to packets on the private virtual network, a filter in the first host that encapsulates a packet sent on the private virtual network from the first VNIC, the encapsulation adding to the packet a new header and a fence identifier for the fenced group, and a second filter in the second host that de-encapsulates the packet to extract the new header and the fence identifier.
A scanner redirection method includes the steps of: receiving from an application running on a host server, a request for scanner properties; acquiring properties of the physical scanner; converting the properties of the physical scanner that are described according to a first scanning protocol to properties of the physical scanner that are described according to a second scanning protocol; transmitting the properties of the physical scanner that are described according to the second scanning protocol to the application; in response to detecting a user selection made on an image of a user interface, transmitting the user selection to the application; and in response to the user selection, receiving from the application, a request for a scanned image, and transmitting a request to an image capture core to acquire the scanned image from the physical scanner.
Disclosed herein is a system and method for controlling network traffic among namespaces in which various entities, such as virtual machines, pod virtual machines, and a container orchestration system, such as Kubernetes, reside and operate. The entities have access to a network that includes one or more firewalls. The traffic that is permitted to flow over the network among and between the namespaces is defined by a security policy definition. The security policy definition is posted to a master node in a supervisor cluster that supports and provisions the namespaces. The master node invokes a network manager to generate a set of firewall rules and program the one or more firewalls in the network to enforce the rules.
A novel algorithm for packet classification that is based on a novel search structure for packet classification rules is provided. Addresses from all the containers are merged and maintained in a single Trie. Each entry in the Trie has additional information that can be traced back to the container from where the address originated. This information is used to keep the Trie in sync with the containers when the container definition dynamically changes.
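The merged-Trie idea above — a single trie holding addresses from all containers, with each entry traceable back to its originating containers — can be sketched roughly as follows; the class names and binary-prefix representation are assumptions for illustration only:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.containers = set()  # containers this address/prefix originated from

class AddressTrie:
    """Single trie merging addresses from all containers; each entry records
    which containers contributed it, so the trie can be kept in sync when a
    container definition changes."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix_bits: str, container: str) -> None:
        node = self.root
        for bit in prefix_bits:
            node = node.children.setdefault(bit, TrieNode())
        node.containers.add(container)

    def lookup(self, addr_bits: str):
        # Longest-prefix match: containers of the deepest matching entry.
        node, best = self.root, set()
        for bit in addr_bits:
            if bit not in node.children:
                break
            node = node.children[bit]
            if node.containers:
                best = node.containers
        return best

trie = AddressTrie()
trie.insert("10", "all-servers")
trie.insert("1010", "web-servers")
match = trie.lookup("101011")  # deepest match is "1010"
```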
Disclosed are various examples for controlling and managing data access to increase user privacy and minimize intentional or inadvertent misuse of accessed information. Upon detecting a request for an administrator review of a user client device, permission for administrator access can be obtained from a user associated with the user client device. The client device identifier can be obfuscated such that the administrator accessing the data is not provided the actual device identifier. An administrator review session between the user client device and an administrator client device can be established to allow the administrator client device access to the permitted client device data.
Some embodiments provide a method for one of multiple shared API processing services in a container cluster that implements a network policy manager shared between multiple tenants. The method receives a configuration request from a particular tenant to modify a logical network configuration for the particular tenant. Configuration requests from the multiple tenants are balanced across the multiple shared API processing services. Based on the received configuration request, the method posts a logical network configuration change to a configuration queue in the cluster. The configuration queue is dedicated to the logical network of the particular tenant. Services are instantiated separately in the container cluster for each tenant to distribute configuration changes from the respective configuration queues for the tenants to datacenters that implement the tenant logical networks such that configuration changes for one tenant do not slow down processing of configuration changes for other tenants.
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/342 - Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
60.
METHOD FOR MODIFYING AN SD-WAN USING METRIC-BASED HEAT MAPS
Some embodiments provide a method for using a heat map to modify an SD-WAN (software-defined wide-area network) deployed for a set of geographic locations. From a set of managed forwarding elements (MFEs) that forward multiple data message flows through the SD-WAN to a set of destination clusters, the method collects multiple metrics associated with the multiple data message flows. Based on the collected multiple metrics, the method generates a heat map that accounts for (1) the multiple data message flows, (2) locations of the set of MFEs, and (3) locations of the one or more destination clusters. The method uses the generated heat map to identify at least one modification to make to the SD-WAN to improve forwarding of the multiple data message flows.
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 43/026 - Capturing of monitoring data using flow identification
Example methods are provided for a first switch to perform congestion-aware load balancing in a data center network. The method may comprise: receiving probe packets from multiple next-hop second switches that connect the first switch with a third switch via multiple paths. The method may also comprise: processing congestion state information in each probe packet to select a selected next-hop second switch from the multiple next-hop second switches, the selected next-hop second switch being associated with a least congested path from the first switch to the third switch. The method may further comprise: in response to receiving data packets from a fourth switch that are destined for a destination connected with the third switch, sending the data packets to the selected next-hop second switch such that the data packets travel to the third switch along the least congested path.
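The probe-processing step above reduces to choosing the next-hop second switch whose probe reports the least congested path. A minimal sketch, where the congestion metric (e.g., maximum link utilization along the path) and the switch names are illustrative assumptions:

```python
def select_next_hop(probes):
    """Choose the next-hop switch whose probe packet reports the least
    congested path. `probes` maps a next-hop switch id to the congestion
    state carried in its probe (lower is less congested; illustrative)."""
    return min(probes, key=probes.get)

# Congestion state extracted from probe packets of three next-hop switches.
probes = {"s2-a": 0.82, "s2-b": 0.35, "s2-c": 0.61}
next_hop = select_next_hop(probes)  # data packets are sent via this switch
```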
Disclosed are various embodiments for coordinating the rollback of installed operating systems to an earlier, consistent state. In response to determining that a data processing unit (DPU) installed on a computing device has failed to successfully boot a first time, the computing device can be power cycled for a first time. In response to determining that the DPU has successfully booted a second time, a first version of a host operating system can be booted. A DPU operating system (DPU OS) is then booted from a DPU alternate boot image. In response to determining that the first version of the host operating system fails to match an executing version of the DPU OS, the computing device can be power cycled a second time and the host operating system is then booted from a host alternate boot image.
Some embodiments provide a method for detecting a failure of a layer 2 (L2) bump-in-the-wire service at a device. In some embodiments, the device sends heartbeat signals to a second device connected to L2 service nodes in order to detect failure of the L2 service (e.g., a failure of all the service nodes). In some embodiments, the heartbeat signals are unidirectional heartbeat signals (e.g., a unidirectional bidirectional-forwarding-detection (BFD) session) sent from each device to the other. The heartbeat signals, in some embodiments, use a broadcast MAC address in order to reach the current active L2 service node in the case of a failover (i.e., an active service node failing and a standby service node becoming the new active service node). The unidirectional heartbeat signals are also used, in some embodiments, to decrease the time between a failover and data messages being forwarded to the new active service node.
H04L 43/0805 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
H04L 43/10 - Active monitoring, e.g. heartbeat, ping or trace-route
Some embodiments provide a method for handling failure at one of several peer centralized components of a logical router. At a first one of the peer centralized components of the logical router, the method detects that a second one of the peer centralized components has failed. In response to the detection, the method automatically identifies a network layer address of the failed second peer. The method assumes responsibility for data traffic to the failed peer by broadcasting a message on a logical switch that connects all of the peer centralized components and a distributed component of the logical router. The message instructs recipients to associate the identified network layer address with a data link layer address of the first peer centralized component.
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
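The failover step above binds the failed peer's network-layer address to the surviving peer's data-link address via a broadcast on the shared logical switch. A rough sketch of composing such a message (field names are hypothetical; the pattern resembles a gratuitous ARP):

```python
def build_takeover_message(failed_peer_ip, surviving_peer_mac):
    """Sketch of the broadcast that rebinds the failed peer's network-layer
    address to the surviving peer's data-link-layer address."""
    return {
        "type": "address-binding-update",
        "ip": failed_peer_ip,        # network layer address of the failed peer
        "mac": surviving_peer_mac,   # data link layer address of the first peer
        "dst": "ff:ff:ff:ff:ff:ff",  # broadcast on the connecting logical switch
    }

msg = build_takeover_message("10.0.0.2", "00:50:56:aa:bb:01")
# Recipients (the other peers and the distributed component) update their
# mappings so traffic for 10.0.0.2 now reaches the surviving peer.
```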
A network system that includes a first set of network hosts in a first domain and a second set of network hosts in a second domain. Within each of the domains, the system includes several edge switching elements (SEs) that each couple to the network hosts and forward network data to and from the set of network hosts. Within the first domain, the system includes (i) an interior SE that couples to a particular edge SE in order to receive network data for forwarding from the edge SE when the edge SE does not recognize a destination location of the network data and (ii) an interconnection SE that couples to the interior SE, the edge SE, and the second domain through an external network. When the edge SE receives network data with a destination address in the second domain, it forwards the network data directly to the interconnection SE.
Some embodiments provide a method for performing data message processing at a smart NIC of a computer that executes a software forwarding element (SFE). The method determines whether a received data message matches an entry in a data message classification cache stored on the smart NIC based on data message classification results of the SFE. When the data message matches an entry, the method determines whether the matched entry is valid by comparing a timestamp of the entry to a set of rules stored on the smart NIC. When the matched entry is valid, the method processes the data message according to the matched entry without providing the data message to the SFE executing on the computer.
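The cache-validity check above — comparing an entry's timestamp against the rule set on the smart NIC — can be sketched as follows; the invalidation policy (entry is stale if older than the last rule update) and all names are assumptions for illustration:

```python
class FlowCache:
    """Sketch of a smart-NIC classification cache: an entry is treated as
    valid only if it postdates the last rule-set change pushed to the NIC."""

    def __init__(self):
        self.entries = {}          # flow key -> (action, timestamp)
        self.rules_updated_at = 0  # time of last rule-set change on the NIC

    def lookup(self, flow_key):
        entry = self.entries.get(flow_key)
        if entry is None:
            return None                 # miss: punt the message to the SFE
        action, ts = entry
        if ts < self.rules_updated_at:
            del self.entries[flow_key]  # stale entry: revalidate via the SFE
            return None
        return action                   # valid hit: process on the NIC

cache = FlowCache()
cache.entries["flow-1"] = ("forward:port2", 100)
cache.rules_updated_at = 50
hit = cache.lookup("flow-1")   # entry newer than the rules: handled on the NIC
cache.rules_updated_at = 200
miss = cache.lookup("flow-1")  # entry now stale: punted back to the SFE
```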
A version control interface provides for time travel with metadata management under a common transaction domain as the data. Examples generate a time-series of master branch snapshots for data objects stored in a data lake, with the snapshot comprising a tree data structure such as a hash tree and associated with a time indication. Readers select a master branch snapshot from the time-series, based on selection criteria (e.g., time) and use references in the selected master branch snapshot to read data objects from the data lake. This provides readers with a view of the data as of a specified time.
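The reader-side selection described above — picking a master-branch snapshot from the time series by a time criterion — can be sketched with a binary search over timestamped snapshots; the pair representation and function name are illustrative assumptions:

```python
from bisect import bisect_right

def snapshot_as_of(snapshots, query_time):
    """Return the latest master-branch snapshot taken at or before
    `query_time`. `snapshots` is a time-ordered list of
    (timestamp, tree_root) pairs, e.g. hash-tree roots."""
    times = [t for t, _ in snapshots]
    i = bisect_right(times, query_time)
    if i == 0:
        raise ValueError("no snapshot exists at or before the requested time")
    return snapshots[i - 1]

series = [(100, "root-a"), (200, "root-b"), (300, "root-c")]
ts, root = snapshot_as_of(series, 250)  # the reader sees the data as of t=200
```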
Some embodiments provide a method of implementing context-aware routing for a software-defined wide-area network, at an SD-WAN edge forwarding element (FE) located at a branch network connected to the SD-WAN. The method receives, from an SD-WAN controller, geolocation route weights for each of multiple cloud datacenters across which a set of application resources is distributed. The application resources are all reachable at a same virtual network address. For each of the cloud datacenters, the method installs a route for the virtual network address between the branch network and the cloud datacenter. The routes have different total costs based at least in part on the geolocation route weights received from the SD-WAN controller. The SD-WAN edge FE selects between the routes to establish connections to the set of application resources.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
Systems, apparatus, articles of manufacture, and methods are disclosed to manage a deployment of virtual machines in a cluster by: in a first host of a plurality of hosts, monitoring, with first control plane services, an availability of second control plane services at a second host of the plurality of hosts, wherein the first control plane services and the second control plane services support implementation of application programming interface (API) requests in association with managing the cluster; after a determination that the second control plane services at the second host are not available, assigning the first control plane services at the first host to operate in place of the second control plane services at the second host; and, in the first host, assigning, via the first control plane services, resources of one or more hosts in the cluster to support the API requests.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A version control interface provides for accessing a data lake with transactional semantics. Examples generate a plurality of tables for data objects stored in the data lake. The tables each comprise a set of name fields and map a space of columns or rows to a set of the data objects. Transactions read and write data objects and may span a plurality of tables with properties of atomicity, consistency, isolation, durability (ACID). Performing the transaction comprises: accumulating transaction-incomplete messages, indicating that the transaction is incomplete, until a transaction-complete message is received, indicating that the transaction is complete. Upon this occurring, a master branch is updated to reference the data objects according to the transaction-incomplete messages and the transaction-complete message. Tables may be grouped into data groups that provide atomicity boundaries so that different groups may be served by different master branches, thereby improving the speed of master branch updates.
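The accumulate-then-commit pattern described above — buffering transaction-incomplete messages until the transaction-complete message arrives, then updating the master branch in one step — can be sketched roughly as follows; class and field names are hypothetical:

```python
class TransactionLog:
    """Sketch: buffer transaction-incomplete messages until the matching
    transaction-complete message arrives, then apply the buffered writes to
    the master branch atomically (all become visible together)."""

    def __init__(self):
        self.pending = {}  # transaction id -> list of buffered writes
        self.master = {}   # committed state: table/row name -> object ref

    def incomplete(self, txn, write):
        # A transaction-incomplete message carries one write; just buffer it.
        self.pending.setdefault(txn, []).append(write)

    def complete(self, txn):
        # Transaction-complete: fold every buffered write into the master
        # branch in a single update, giving the transaction atomicity.
        for name, obj_ref in self.pending.pop(txn, []):
            self.master[name] = obj_ref
        return self.master

log = TransactionLog()
log.incomplete("txn-1", ("orders/row42", "obj-001"))
log.incomplete("txn-1", ("customers/row7", "obj-002"))
state = log.complete("txn-1")  # both writes become visible together
```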
Some embodiments provide a method for sending data messages at a network interface controller, NIC, (100) of a computer (135). From a network stack executing on the computer (135), the method receives (i) a header for a data message to send and (ii) a logical memory (155) address of a payload for the data message. The method translates the logical memory address into a memory address for accessing a particular one of multiple devices (115, 140, 150) connected to the computer. The method reads payload data from the memory address of the particular device (115, 140, 150). The method sends the data message with the header received from the network stack and the payload data read from the particular device (115, 140, 150).
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 49/901 - Buffering arrangements using storage descriptor, e.g. read or write pointers
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 3/06 - Digital input from, or digital output to, record carriers
72.
Minimizing traffic drop when rekeying in a distributed security group
Exemplary methods, apparatuses, and systems include a central controller receiving a request to generate a new encryption key for a security group to replace a current encryption key for the security group. The security group includes a plurality of hosts that each encrypt and decrypt communications using the current encryption key. In response to receiving the request, the central controller determines that a threshold period following generation of the current encryption key has not expired. In response to determining that the threshold period has not expired, the central controller delays execution of the request until the expiration of the threshold period. In response to the expiration of the threshold period, the central controller executes the request by generating the new encryption key, storing a time of creation of the new encryption key, and transmitting the new encryption key to the plurality of hosts.
H04L 9/12 - Transmitting and receiving encryption devices synchronised or initially set up in a particular manner
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
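The delay-until-threshold behavior in the rekeying abstract can be illustrated with a small sketch. All names and the threshold value here are assumptions for the example, not from the patent text: the controller compares elapsed key lifetime against a minimum period, delays the request when the period has not expired, and records the creation time of each new key.

```python
REKEY_THRESHOLD = 300  # assumed minimum key lifetime in seconds

class KeyController:
    """Toy central controller enforcing a minimum period between rekeys."""

    def __init__(self, now):
        self.key_created_at = now  # time of creation of the current key
        self.key_version = 1

    def request_rekey(self, now):
        elapsed = now - self.key_created_at
        if elapsed < REKEY_THRESHOLD:
            # Threshold not expired: delay, reporting remaining wait time.
            return ("delayed", REKEY_THRESHOLD - elapsed)
        # Threshold expired: generate the new key and store its creation time.
        self.key_version += 1
        self.key_created_at = now
        return ("rekeyed", self.key_version)

ctl = KeyController(now=0)
assert ctl.request_rekey(now=120) == ("delayed", 180)
assert ctl.request_rekey(now=300) == ("rekeyed", 2)
```

Deferring the request rather than rejecting it matches the abstract's goal of minimizing traffic drop: hosts keep using the current key until the scheduled rekey executes.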
73.
IN-MEMORY SCANNING FOR FILELESS MALWARE ON A HOST DEVICE
The disclosure herein describes the processing of malware scan requests from virtual computing instances (VCIs) by an anti-malware scanner (AMS) on a host device. A malware scan request is received by the AMS from a VCI, the malware scan request including script data of a script from a memory buffer of the VCI. The AMS scans the script data of the malware scan request, outside of the VCI, and determines that the script includes malware. The AMS notifies the VCI that the script includes malware, whereby the VCI is configured to prevent execution of the script or take other mitigating action. The AMS provides scanning for fileless malware to VCIs on a host device without consuming or otherwise affecting resources of the VCIs.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
Some embodiments provide a method for generating a multi-layer network map from network configuration data. The method receives network configuration data that defines network components and connections between the network components for a network that spans one or more datacenters. Based on the received network configuration data, the method generates multiple data layers for a multi-layer interactive map of the network. Different data layers include different network components and connections. The method generates a visual representation of the network for each data layer. Each visual representation includes a map of the network at a different level of hierarchy.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 43/045 - Processing captured monitoring data, e.g. for logfile generation for graphical visualisation of monitoring data
H04L 41/12 - Discovery or management of network topologies
75.
AUTHENTICATION ORCHESTRATION ACROSS REMOTE APPLIANCES
Bootstrapping a new remote appliance, based on a request received at a main appliance and on established trust between the two appliances, can be implemented as computer-implemented methods, media, and systems. A request is received at an authentication orchestrator at the main appliance to perform an operation requested by a user for execution on a remote appliance. The authentication orchestrator at the main appliance obtains an authentication token issued by an identity provider at the main appliance for the user associated with the request. The authentication orchestrator requests to exchange the authentication token issued by the identity provider at the main appliance for a new authentication token that is issued by an identity provider at the remote appliance. The authentication orchestrator at the main appliance initiates an authentication of the user at an appliance manager at the remote appliance based on providing the new authentication token.
G06F 21/41 - User authentication where a single sign-on provides access to a plurality of computers
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Techniques for delivering remote applications to servers in an on-demand fashion (i.e., as end-users need them) are provided. In one set of embodiments, these techniques include packaging the installed contents (e.g., executable code and configuration data) of the remote applications into containers, referred to as application packages, that are placed on shared storage and dynamically attaching (i.e., mounting) an application package to a server at a time an end-user requests access to a remote application in that package, thereby enabling the server to launch the application.
Disclosed are various examples of hosting a data processing unit (DPU) management operating system using an operating system software stack of a preinstalled DPU operating system. The preinstalled DPU operating system of the DPU is leveraged to provide a virtual machine environment. A DPU management operating system is executed within the virtual machine environment of the preinstalled DPU operating system. A third-party DPU function or a management service function is provided using the DPU hardware resources accessed through the DPU management operating system and the virtual machine environment.
A method for opening unknown files in a malware detection system is provided. The method generally includes receiving a request to open a file classified as an unknown file, opening the file in a container, collecting at least one of a log of events carried out by the file or observed behavior traces of the file while open in the container, transmitting, to a file analyzer, at least one of the file, the log of events, or the behavior traces for static analysis, determining a final verdict for the file based on at least one of the file, the log of events, or the behavior traces, wherein the final verdict for the file is based on the static analysis or dynamic analysis of the file, and taking one or more actions based on a policy configured for the first endpoint and the final verdict.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
79.
AUTOMATED DISCOVERY OF VULNERABLE ENDPOINTS IN AN APPLICATION SERVER
The disclosure provides an approach for discovering vulnerable application server endpoints. Embodiments include retrieving, from an application server, an object representing a front controller of the application server. Embodiments include extracting, from the object, values for a plurality of variables. Embodiments include constructing, based on the values for the plurality of variables, one or more universal resource locators (URLs) corresponding to one or more methods of the front controller. Embodiments include sending one or more unauthenticated requests to one or more resources indicated by the one or more URLs. Embodiments include determining, based on a given response to a given unauthenticated request of the one or more unauthenticated requests, whether a given URL of the one or more URLs is vulnerable. Embodiments include performing one or more actions based on the determining of whether the given URL is vulnerable.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
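The endpoint-discovery flow above can be sketched in a few lines. This is a hypothetical illustration: the structure of the front-controller metadata, the probe callback, and all names are assumptions for the example, not details from the patent. It shows candidate URLs being constructed from extracted variables and flagged when an unauthenticated request succeeds.

```python
def build_urls(base, controller):
    """Construct candidate URLs from front-controller metadata.

    `controller` is an assumed shape: a path prefix plus exposed method names.
    """
    return [f"{base}/{controller['prefix']}/{m}" for m in controller["methods"]]

def find_vulnerable(urls, probe):
    """Return URLs that answer an unauthenticated request with HTTP 200.

    `probe(url)` returns the status code of an unauthenticated request.
    """
    return [u for u in urls if probe(u) == 200]

urls = build_urls("https://app.example", {"prefix": "admin", "methods": ["list", "export"]})
# Stand-in for real HTTP probing: a dict of canned responses.
fake_responses = {"https://app.example/admin/list": 401,
                  "https://app.example/admin/export": 200}
vulnerable = find_vulnerable(urls, fake_responses.get)
```

In this sketch, an endpoint that returns 401 to an unauthenticated probe is treated as protected, while a 200 marks the URL as a candidate for the mitigating actions the abstract mentions.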
VMWARE INFORMATION TECHNOLOGY (CHINA) CO., LTD. (China)
VMWARE, INC. (USA)
Inventor
Shen, Jianjun
Gu, Ran
Jiang, Caixia
Fauser, Yves
Abstract
Some embodiments of the invention provide a method for adding routable subnets to a logical network that connects multiple machines and is implemented by a software defined network (SDN). The method receives an intent-based API that includes a request to add a routable subnet to the logical network. The method defines (i) a VLAN (virtual local area network) tag associated with the routable subnet, (ii) a first identifier associated with a first logical switch to which at least a first machine in the multiple machines that executes a set of containers belonging to the routable subnet attaches, and (iii) a second identifier associated with a second logical switch designated for the routable subnet. The method generates an API call that maps the VLAN tag and the first identifier to the second identifier. The method provides the API call to a management and control cluster of the SDN to direct the management and control cluster to implement the routable subnet.
A method for locating malware in a malware detection system is provided. The method generally includes storing, at a first endpoint, a mapping of a first file hash and a first file path for a first file classified as an unknown file, opening, at the first endpoint, the first file prior to determining whether the first file is benign or malicious, determining, at the first endpoint, a first verdict for the first file, the first verdict indicating the first file is benign or malicious, locating the first file using the mapping of the first file hash and the first file path, and taking one or more actions based on a policy configured for the first endpoint and the first verdict indicating the first file is benign or malicious.
Methods, systems, and computer-readable media for managing cloud computing environments are described. A pool manager creates a pool of cloud computing environments according to a pool specification specifying a headroom threshold of the pool. The pool manager receives, from a requester computer, a request to claim a cloud computing environment. The pool manager determines that one or more cloud computing environments are available. In response, the pool manager provides to the requester computer credentials for accessing the cloud computing environment. The pool manager designates the cloud computing environment as claimed and unavailable to other requester computers until receiving a notification indicating that the cloud computing environment is unclaimed. The pool manager ensures that the correct number of environments are available on a pre-determined schedule.
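A minimal sketch of the claim/replenish cycle, with all names assumed for the example: the pool manager tops the pool up to the headroom threshold, hands out an environment on claim, and returns it to the pool on release.

```python
class PoolManager:
    """Toy pool manager maintaining a headroom of ready environments."""

    def __init__(self, headroom):
        self.headroom = headroom
        self.available = []
        self.claimed = set()

    def replenish(self, factory):
        # Create environments until the available count meets the headroom;
        # in practice this would run on the pre-determined schedule.
        while len(self.available) < self.headroom:
            self.available.append(factory())

    def claim(self):
        # Hand out one environment; credentials would be returned here.
        env = self.available.pop()
        self.claimed.add(env)
        return env

    def release(self, env):
        # The "unclaimed" notification makes the environment available again.
        self.claimed.discard(env)
        self.available.append(env)

counter = iter(range(100))
pool = PoolManager(headroom=2)
pool.replenish(lambda: f"env-{next(counter)}")
env = pool.claim()
pool.replenish(lambda: f"env-{next(counter)}")  # refill after the claim
```

The headroom threshold is what keeps claims fast: a requester receives an already-provisioned environment instead of waiting for one to be created.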
Provisioning a data processing unit (DPU) management operating system (OS). A management hypervisor installer executed on a host device launches or causes a server component to provide a management operating system (OS)installer image at a particular URI accessible over a network internal to the host device. A baseboard management controller (BMC) transfers the DPU management OS installer image to the DPU device. A volatile memory based virtual disk is created using the DPU management OS installer image. The DPU device is booted to a DPU management OS installer on the volatile memory based virtual disk. The DPU management OS installer installs a DPU management operating system to a nonvolatile memory of the DPU device on reboot of the DPU device.
The present disclosure provides example computer-implemented method, medium, and system for managing IP addresses for DPDK enabled network interfaces for cloud native pods. One example method includes creating a pod of one or more containers, where the pod connects to multiple networks through multiple network interfaces. A poll mode driver (PMD) is attached to a first network interface of the multiple network interfaces, where the PMD enables one or more data plane development kit (DPDK) applications inside the pod to manage the first network interface. A first container network interface (CNI) is created to handle the DPDK enabled first network interface. A first Internet protocol (IP) address is allocated to the first network interface using the first CNI. The first IP address is passed to the one or more DPDK applications using the first CNI.
Some embodiments provide a method that identifies a first number of requests received at a first application. Based on the first number of requests received at the first application, the method determines that a second application that processes requests after processing by the first application requires additional resources to handle a second number of requests that will be received at the second application. The method increases the amount of resources available to the second application prior to the second application receiving the second number of requests.
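The predictive scaling described above reduces to a simple calculation, sketched here with assumed names and capacity figures: the request count observed at the first application, times an assumed per-request fanout, predicts the downstream load, and the second application is sized for that load before the traffic arrives.

```python
import math

REQS_PER_REPLICA = 100  # assumed capacity of one downstream replica

def replicas_needed(upstream_requests, fanout=1.0):
    """Replicas the downstream app needs for traffic the upstream app has seen.

    `fanout` is an assumed ratio of downstream requests generated per
    upstream request; it is not a figure from the patent text.
    """
    expected = upstream_requests * fanout
    return max(1, math.ceil(expected / REQS_PER_REPLICA))

# 450 requests at the first app, each producing ~2 downstream calls:
assert replicas_needed(450, fanout=2.0) == 9
```

Scaling on the upstream count rather than on downstream load gives the second application lead time to add resources before the requests reach it.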
Described herein are systems, methods, and software to manage the identification of control packets in an encapsulation header. In one implementation, a computing system may receive a Geneve packet at a network interface and determine that the Geneve packet includes an Operations and Management (OAM) flag. Once the OAM flag is identified, the computing system can select a processing queue from a plurality of processing queues for a main processing system of the computing system based on the OAM flag and assign the Geneve packet to the processing queue.
A combined data processing unit (DPU) and server solution with DPU operating system (OS) integration is described. A DPU OS is executed on a DPU or other computing device, where the DPU OS exercises secure calls provided by a DPU's trusted firmware component, which may be invoked by DPU OS components to abstract DPU vendor-specific and server vendor-specific integration details. An invocation of one of the secure calls made on the DPU to communicate with its associated server computing device is identified. In an instance in which the one of the secure calls is invoked, the secure call invoked is translated into a call or request specific to an architecture of the server computing device and the call is performed, which may include sending a signal to the server computing device in a format interpretable by the server computing device.
VMWARE INFORMATION TECHNOLOGY (CHINA) CO., LTD. (China)
VMWARE, INC. (USA)
Inventor
Tang, Qiang
Xiao, Zhaoqian
Abstract
Some embodiments of the invention provide a method of sending data in a network that includes multiple worker nodes, each worker node executing at least one set of containers, a gateway interface, and a virtual local area network (VLAN) tunnel interface. The method configures the gateway interface of each worker node to associate the gateway interface with multiple subnets. Each subnet is associated with a namespace, a first worker node executes a first set of containers of a first namespace, and a second worker node executes a second set of containers of the first namespace and a third set of containers of a second namespace. The method sends data between the first set of containers and the second set of containers through a VLAN tunnel between the first and second worker nodes. The method sends data between the first set of containers and the third set of containers through the gateway interface.
Systems and methods are described for providing a virtual machine ("VM") as a service. A user device can install a VM to enable itself as an edge node. The user device can then use a portion of its computing resources to provide the service to the endpoint device by running the VM. In an example, an edge node can directly receive a request for a service from an endpoint device. The edge node can determine that it needs assistance from another device to jointly provide the service. Then, another user device that is available to operate as an edge node can join the edge team.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
90.
METHOD OF MANAGING STORAGE SPACE BASED ON ROBUST DETERMINATIONS OF DURATIONS FOR DELETING SNAPSHOTS
A method of managing storage space of a storage device, wherein the storage device includes a plurality of snapshots of a file, includes the steps of: in response to a request to delete a first snapshot, determining a first amount of time that elapsed between a creation of the first snapshot and a creation of a second snapshot that is a child snapshot of the first snapshot; and after determining the first amount of time, executing a first process to delete the first snapshot over a first time interval, wherein the first time interval is based on the first amount of time.
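The relationship between the two time quantities in the abstract can be sketched in one function. The scaling factor here is an assumption for illustration only; the patent says only that the deletion interval is *based on* the elapsed time between the snapshot and its child, not how.

```python
def deletion_interval(created, child_created, scale=0.5):
    """Derive the interval over which to pace deletion of a snapshot.

    `created` / `child_created`: creation timestamps (seconds) of the
    snapshot and its child. `scale` is an assumed tuning factor.
    """
    elapsed = child_created - created  # the "first amount of time"
    return elapsed * scale             # the "first time interval"

# Snapshot created at t=0, its child at t=3600: pace deletion over ~1800 s.
assert deletion_interval(0, 3600) == 1800.0
```

The intuition is that the gap between a snapshot and its child bounds how much unique data the snapshot can hold, so pacing deletion by that gap keeps the deletion workload proportional to the work required.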
In some embodiments, a method receives data for a block in a blockchain during a recovery process in which a recovering replica is recovering the block for a first instance of the blockchain being maintained by the recovering replica. The block is received from a second instance of the blockchain being maintained by a source replica. The method splits the data for the block into a plurality of chunks. Each chunk includes a portion of the data for the block. It is determined whether the recovering replica can recover a chunk in the plurality of chunks using a representation of the chunk. In response to determining that the recovering replica can recover the chunk, sending the representation of the chunk to the recovering replica. In response to determining that the recovering replica cannot recover the chunk, sending the data for the chunk to the recovering replica.
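The chunked-recovery decision can be sketched as follows. This is an assumption-laden illustration: it uses a SHA-256 digest as the "representation" of a chunk (the patent does not specify the representation), and all function names are invented for the example.

```python
import hashlib

def chunk_digest(data):
    """A compact representation of a chunk; here, a SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

def plan_transfer(block, chunk_size, recovering_digests):
    """Decide, per chunk, whether to send a representation or full data.

    `recovering_digests`: digests of chunks the recovering replica can
    already reconstruct locally.
    """
    plan = []
    for i in range(0, len(block), chunk_size):
        chunk = block[i:i + chunk_size]
        d = chunk_digest(chunk)
        if d in recovering_digests:
            plan.append(("digest", d))    # replica recovers it locally
        else:
            plan.append(("data", chunk))  # full bytes must be sent
    return plan

block = b"aaaabbbbcccc"
known = {chunk_digest(b"bbbb")}  # the replica already holds this chunk
plan = plan_transfer(block, 4, known)
```

Sending a short representation instead of full chunk data whenever possible is what makes recovery cheaper than re-transmitting every block wholesale.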
A version control interface for data provides a layer of abstraction that permits multiple readers and writers to access data lakes concurrently. An overlay file system, based on a data structure such as a tree, is used on top of one or more underlying storage instances to implement the interface. Each tree node is identified and accessed by means of a universally unique identifier. Copy-on-write with the tree data structure implements snapshots of the overlay file system. The snapshots support a long-lived master branch, with point-in-time snapshots of its history, and one or more short-lived private branches. As data objects are written to the data lake, the private branch corresponding to a writer is updated. The private branches are merged back into the master branch using any merging logic, and conflict resolution policies are implemented. Readers read from the updated master branch or from any of the private branches.
Some embodiments provide a method for a first smart NIC of multiple smart NICs of a host computer. Each of the smart NICs executes a smart NIC operating system that performs virtual networking operations for a set of data compute machines executing on the host computer. The method receives a data message sent by one of the data compute machines executing on the host computer. The method performs virtual networking operations on the data message to determine that the data message is to be transmitted from a port of a second smart NIC of the multiple smart NICs. The method passes the data message to the second smart NIC via a private communication channel connecting the plurality of smart NICs.
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
A method of managing configurations of a plurality of data centers that are each managed by one or more management servers, includes the steps of: in response to a change made to the configurations of one of the data centers, updating a desired state document that specifies a desired state of each of the data centers, the updated desired state document including the change; and instructing each of the data centers to update the configurations thereof according to the desired state specified in the updated desired state document. The management servers include a virtual infrastructure management server and a virtual network management server and the configurations include configurations of software running in the virtual infrastructure management server and the virtual network management server, and configurations of the data center managed by the virtual infrastructure management server and the virtual network management server.
H04L 41/0266 - Exchanging or transporting network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols using meta-data, objects or commands for formatting management information, e.g. using eXtensible markup language [XML]
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/085 - Retrieval of network configuration; Tracking network configuration history
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
Some embodiments provide a method for forwarding multicast data messages at a forwarding element on a host computer. The method receives a multicast data message from a routing element executing on the host computer along with metadata appended to the multicast data message by the routing element. Based on a destination address of the multicast data message, the method identifies a set of recipient ports for a multicast group with which the multicast data message is associated. For each recipient port, the method uses the metadata appended to the multicast data message by the routing element to determine whether to deliver a copy of the multicast data message to the recipient port.
Some embodiments provide a method for providing redundancy and fast convergence for modules operating in a network. The method configures modules to use a same anycast inner IP address, anycast MAC address, and to associate with a same anycast VTEP IP address. In some embodiments, the modules are operating in an active-active mode and all nodes running modules advertise the anycast VTEP IP addresses with equal local preference. In some embodiments, modules are operating in active-standby mode and the node running the active module advertises the anycast VTEP IP address with higher local preference.
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
H04L 45/586 - Association of routers of virtual routers
H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 45/00 - Routing or path finding of packets in data switching networks
A method of upgrading an application executing in a software-defined data center (SDDC) includes: expanding a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicating the database of the first version to a database of the second version of the application while the services of the second version are inactive; and contracting, in response to activation of the services of the second version and deactivation of the services of the first version, the database of the second version, while the services of the second version are active, to generate a contracted database, the contracted database supporting the services of the second version.
This disclosure relates generally to configuring an application or service with reconfigurable cryptographic features taking the form of cryptographic algorithms, protocols or functions. The application or service can be configured with a cryptographic provider configured to receive abstracted cryptographic API calls and retrieve specific cryptographic features based on established cryptographic policies. This configuration allows for rapid updates to the cryptographic framework and for the cryptographic framework to be managed remotely in enterprise environments.
This relates generally to configuring and automatically selecting a cipher solution for secure communication. An example method includes, at an electronic device, receiving a request initiated by a requestor for one or more cryptographic operations, determining contextual information associated with the requestor, selecting a cipher solution for processing the request based on the contextual information and a policy engine, and processing the request for the one or more cryptographic operations by executing one or more cryptographic algorithms in accordance with the selected cipher solution.
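The context-driven cipher selection above can be sketched as an ordered policy table. The rules, context keys, and cipher names here are invented examples, not from the disclosure; the sketch only shows the shape of a policy engine that maps requestor context to a cipher solution before any cryptographic operation runs.

```python
POLICY = [
    # (predicate over context, cipher solution) -- assumed example rules,
    # evaluated in order; the first matching rule wins.
    (lambda ctx: ctx.get("classification") == "secret", "AES-256-GCM"),
    (lambda ctx: ctx.get("device") == "embedded",       "ChaCha20-Poly1305"),
    (lambda ctx: True,                                  "AES-128-GCM"),  # default
]

def select_cipher(context):
    """Pick a cipher solution from the requestor's contextual information."""
    for predicate, cipher in POLICY:
        if predicate(context):
            return cipher

assert select_cipher({"classification": "secret"}) == "AES-256-GCM"
assert select_cipher({"device": "embedded"}) == "ChaCha20-Poly1305"
assert select_cipher({}) == "AES-128-GCM"
```

Keeping the rules in a table rather than in application code is the point of such a policy engine: cipher choices can be updated centrally without touching the requesting application.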
The disclosure provides an approach for cryptographic agility. Embodiments include receiving, by a cryptographic agility system associated with an application, a request to establish a secure communication session. Embodiments include, prior to establishing the secure communication session, selecting, by the cryptographic agility system, a first cryptographic technique and a second cryptographic technique for the secure communication session. Embodiments include, during the secure communication session, utilizing the first cryptographic technique for securely communicating a first set of data. Embodiments include determining that a condition has been met for switching from the first cryptographic technique to the second cryptographic technique. Embodiments include, based on the determining that the condition has been met, utilizing the second cryptographic technique for securely communicating a second set of data.