Various embodiments of the present technology generally relate to systems and methods for managing configuration data in a virtual or containerized software environment. A configuration data management system may enable ConfigMaps to be added to an application pod of a virtual software environment without restarting the application pod, a ConfigMap including a data object containing configuration data. The configuration data management process may monitor for creation of a first ConfigMap in the virtual software environment, append a name of the first ConfigMap to a data element name from the first ConfigMap to produce an appended data element, and store the appended data element to a super ConfigMap, the super ConfigMap including a specialized ConfigMap configured to contain data elements from multiple ConfigMaps.
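A minimal sketch of the monitor-append-store loop described above, using the Kubernetes Python client. The super ConfigMap's name, the namespace, and the `name.key` key format are illustrative assumptions rather than the claimed implementation.

```python
# Sketch: watch for new ConfigMaps and fold their entries into a "super ConfigMap".
# Assumes the `kubernetes` Python client and a reachable cluster context.
from kubernetes import client, config, watch

SUPER_NAME = "super-configmap"   # illustrative name, not from the source
NAMESPACE = "default"

def merge_into_super(cm: client.V1ConfigMap, api: client.CoreV1Api) -> None:
    """Prefix each data key with the source ConfigMap's name and store it."""
    appended = {f"{cm.metadata.name}.{key}": value
                for key, value in (cm.data or {}).items()}
    api.patch_namespaced_config_map(SUPER_NAME, NAMESPACE, {"data": appended})

def main() -> None:
    config.load_kube_config()            # or config.load_incluster_config()
    api = client.CoreV1Api()
    for event in watch.Watch().stream(api.list_namespaced_config_map, NAMESPACE):
        # React only to newly created ConfigMaps, skipping the super ConfigMap itself.
        if event["type"] == "ADDED" and event["object"].metadata.name != SUPER_NAME:
            merge_into_super(event["object"], api)

if __name__ == "__main__":
    main()
```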
Systems and methods are disclosed for implementing a cloud based network function. In certain embodiments, a method may comprise operating a custom operator in a containerized software environment such as Kubernetes to manage a virtual network interface controller (Vnic) on an application pod, the Vnic being reachable directly from a network external to the containerized software environment. The method may include identifying the application pod to which to add the Vnic, determining a worker node in the containerized software environment on which the application pod is running, creating the Vnic on the worker node, and executing a job on the worker node to inject the Vnic into the application pod.
Systems and methods are disclosed for implementing a virtual IP for a container pod. In certain embodiments, a method may comprise operating a cloud based network system in a containerized software environment to assign a virtual internet protocol (VIP) address to an application pod of a containerized software environment, the VIP being directly reachable from a network external to the containerized software environment. The method may include reserving a range of internet protocol (IP) addresses for use as VIP addresses, assigning a first fixed IP address to a first application pod, assigning a first VIP address from the range of IP addresses to the first application pod, and routing traffic directed to the first VIP address to the first fixed IP address.
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
Systems and methods are disclosed for implementing cloud network service management. In certain embodiments, a method may comprise operating a cloud native application (CnApp) custom operator in a containerized software environment to dynamically manage cloud native network service on a target application pod via a persistent network interface to an external network. The method may include obtaining a first resource definition data, for a first custom resource, to define attributes for a bundle of resources used to implement the cloud native network service, and creating the first custom resource based on the first resource definition data, including initializing the target application pod. The method may include generating a second resource definition data, derived from the first resource definition data, to define attributes for a virtual network interface to associate with the target application pod, and applying the second resource definition data to initialize creation of a second custom resource.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
5.
MACHINE LEARNING MODEL GENERATION FOR TIME DEPENDENT DATA
Embodiments generate a machine learning ("ML") model. Embodiments receive training data, the training data including time dependent data and a plurality of dates corresponding to the time dependent data. Embodiments date split the training data by two or more of the plurality of dates to generate a plurality of date split training data. For each of the plurality of date split training data, embodiments split the date split training data into a training dataset and a corresponding testing dataset using one or more different ratios to generate a plurality of train/test splits. For each of the train/test splits, embodiments determine a difference of distribution between the training dataset and the corresponding testing dataset. Embodiments then select the train/test split with a smallest difference of distribution and train and test the ML model using the selected train/test split.
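The split-selection step lends itself to a short sketch. Below, Wasserstein distance stands in for the abstract's unspecified "difference of distribution" measure, and the split dates and ratios are arbitrary; all of these are assumptions.

```python
# Sketch: choose the date split and train/test ratio whose train and test
# target distributions differ least.
import numpy as np
import pandas as pd
from scipy.stats import wasserstein_distance

def best_split(df: pd.DataFrame, date_col: str, target: str,
               split_dates, ratios=(0.7, 0.8)):
    candidates = []
    for split_date in split_dates:                # "date split" the training data
        window = df[df[date_col] <= split_date].sort_values(date_col)
        for ratio in ratios:                      # several train/test ratios
            cut = int(len(window) * ratio)
            train, test = window.iloc[:cut], window.iloc[cut:]
            if len(train) == 0 or len(test) == 0:
                continue
            dist = wasserstein_distance(train[target], test[target])
            candidates.append((dist, split_date, ratio, train, test))
    return min(candidates, key=lambda c: c[0])    # smallest difference wins

# Usage (synthetic data):
dates = pd.date_range("2023-01-01", periods=200)
df = pd.DataFrame({"date": dates, "y": np.random.randn(200).cumsum()})
dist, split_date, ratio, train, test = best_split(
    df, "date", "y", split_dates=[dates[120], dates[160]])
print(f"picked split at {split_date.date()}, ratio {ratio}, distance {dist:.3f}")
```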
A key management service (KMS) in a cloud computing environment has an internal vault for cryptographic operations by an internal cryptographic key within the cloud environment and a proxy key vault communicatively coupled to an external key manager (EKM) that stores an external cryptographic key. The KMS uses a provider-agnostic application program interface (API) that permits the cloud service customer to use the same interface request and format for cryptographic operation requests regardless of whether the request is for an operation directed to an internal vault or to an external vault and regardless of the particular vendor of the external key management service operating on the external hardware device.
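A sketch of the provider-agnostic dispatch this abstract describes: one request shape, routed to either an internal vault or a proxy vault fronting a vendor EKM. Class and field names (`CryptoRequest`, `ProxyKeyVault`, `ekm_client`) are assumptions for illustration.

```python
# Sketch: one request format for cryptographic operations, regardless of
# whether the vault is internal or backed by an external key manager.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CryptoRequest:           # same shape for every vault type
    vault_id: str
    key_id: str
    operation: str             # "encrypt" | "decrypt" | ...
    payload: bytes

class Vault(ABC):
    @abstractmethod
    def handle(self, req: CryptoRequest) -> bytes: ...

class InternalVault(Vault):
    def handle(self, req: CryptoRequest) -> bytes:
        return b"ciphertext-from-internal-key"   # stand-in for the internal op

class ProxyKeyVault(Vault):
    """Forwards the same request to whatever EKM vendor backs this vault."""
    def __init__(self, ekm_client):
        self.ekm_client = ekm_client             # vendor adapter (assumed)
    def handle(self, req: CryptoRequest) -> bytes:
        return self.ekm_client.do(req.operation, req.key_id, req.payload)

class Kms:
    def __init__(self):
        self.vaults: dict[str, Vault] = {}
    def submit(self, req: CryptoRequest) -> bytes:
        # The caller never branches on vault type; the KMS routes internally.
        return self.vaults[req.vault_id].handle(req)
```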
A system may display a Graphical User Interface (GUI) including a source region presenting a plurality of source data-serialization elements and a destination region presenting a plurality of destination data-serialization elements. The system may receive a user input associating a first destination data-serialization element, of the plurality of destination data-serialization elements, and a first source data-serialization element of the plurality of source data-serialization elements. Responsive to receiving the user input, the system may generate and store a mapping expression that defines a mapping association between the first source data-serialization element and the first destination data-serialization element. The system may present, in a mapping region of the GUI displayed concurrently with the source region and the destination region, a mapping element representing the mapping association between the first source data-serialization element and the first destination data-serialization element.
In an embodiment, a method may include accessing, by a computing system, a multi-node problem. The multi-node problem may include a plurality of nodes, each respective node having one or more node features. The method may include providing, by the computing system, each respective node with each respective node feature to a machine learning model. The method may include determining, by the computing system using the machine learning model, a subset of nodes of the plurality of nodes based at least in part on the respective node features. The method may include calculating, by the computing system, one or more solutions to the multi-node problem based at least in part on the subset of nodes. The method may include storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer tenancy, where the traffic is generated by a multi-tenancy service. The traffic can be destined to the target service. The traffic can be tagged by the multi-tenancy service with information indicating that the traffic is egressing therefrom on behalf of the customer tenancy. The customer tenancy can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network). The customer network can be associated with the egress policy. The customer can define the egress policy at different granularity levels by using different attributes. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy, based on the customer-defined attributes, on the traffic that the target service is receiving.
Techniques are disclosed for rotating network addresses following the installation of a prefab region network at a destination site. A manager service executing within a distributed computing system can allocate a rotation network address pool to a root allocator service that may be configured to provide network addresses from network address pools to dependent nodes within the distributed computing system, with each dependent node associated with a corresponding first network address of the network address pools. The manager service can receive an indication that a second network address of the rotation network address pool is associated with a dependent node. In response, the manager service can execute a migration operation for the dependent node to redirect network traffic within the distributed computing system from the first network address to the second network address.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network, or by a multi-tenancy service on behalf of the customer. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network) or by the multi-tenancy service. The customer network can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network). The customer network can be associated with the egress policy. The customer can define the egress policy at different granularity levels by using different attributes. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy, based on the customer-defined attributes, on the traffic that the target service is receiving.
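The tag-then-enforce flow shared by the four abstracts above can be sketched briefly; the header name `x-egress-tenancy` and the policy shape are invented for illustration, not taken from the source.

```python
# Sketch: the target service reads the tenancy tag attached to egressing
# traffic and applies that tenancy's egress policy.
from dataclasses import dataclass, field

@dataclass
class Packet:
    destination: str
    headers: dict = field(default_factory=dict)

def tag_egress(packet: Packet, tenancy_id: str) -> Packet:
    # Done by the multi-tenancy service or customer gateway before egress.
    packet.headers["x-egress-tenancy"] = tenancy_id
    return packet

# Policies keyed by tenancy; attributes give per-customer granularity.
POLICIES = {"tenancy-a": {"allowed_destinations": {"object-storage"}}}

def enforce_at_target(packet: Packet, target: str) -> bool:
    tenancy = packet.headers.get("x-egress-tenancy")
    policy = POLICIES.get(tenancy)
    if policy is None:
        return False                      # unknown tenancy: reject
    return target in policy["allowed_destinations"]

pkt = tag_egress(Packet(destination="object-storage"), "tenancy-a")
assert enforce_at_target(pkt, "object-storage")
```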
Techniques for generating high-precision localization of a moving object on a trajectory are provided. In one technique, a particular image that is associated with a moving object is identified. A set of candidate images is selected from a plurality of images that were used to train a neural network. For each candidate image in the set of candidate images: (1) output from the neural network is generated based on inputting the particular image and said each candidate image to the neural network; (2) a predicted position of the particular image is determined based on the output and a position that is associated with said each candidate image; and (3) the predicted position is added to a set of predicted positions. The set of predicted positions is aggregated to generate an aggregated position for the particular image.
Techniques for layout-aware multi-modal networks for document understanding are provided. In one technique, word data representations that were generated based on words that were extracted from an image of a document are identified. Based on the image, table features of one or more tables in the document are determined. One or more table data representations that were generated based on the table features are identified. The word data representations and the one or more table data representations are input into a machine-learned model to generate a document data representation for the document. A task is performed based on the document data representation. In a related technique, instead of the one or more table data representations, one or more layout data representations that were generated based on a set of layout features, of the document, that was determined based on the image are identified and input into the machine-learned model.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Embodiments relate to generating time-series energy usage forecast predictions for energy consuming entities. Machine learning model(s) can be trained to forecast energy usage for different energy consuming entities. For example, a local coffee shop location and a large grocery store location are both considered retail locations; however, their energy usage over days or weeks may differ significantly. Embodiments organize energy consuming entities into different entity segments and store trained machine learning models that forecast energy usage for each of these individual entity segments. For example, a given machine learning model that corresponds to a given entity segment can be trained using energy usage data for entities that match the given entity segment. A forecast manager can generate a forecast prediction for an energy consuming entity by matching the entity to a given entity segment and generating the forecast prediction using the entity segment's trained machine learning model.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
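A sketch of the segment-matching flow from the energy-forecasting abstract above; the segmentation key (industry plus size band) and the stand-in model are assumptions.

```python
# Sketch: route an entity to its segment's trained model for forecasting.
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    industry: str      # e.g. "retail"
    size_band: str     # "small" coffee shop vs "large" grocery store

class ForecastManager:
    def __init__(self):
        self.models = {}                      # Segment -> trained model

    def register(self, segment: Segment, model) -> None:
        self.models[segment] = model

    def forecast(self, entity_features: dict, horizon_hours: int):
        # Match the entity to a segment, then use that segment's model.
        segment = Segment(entity_features["industry"],
                          entity_features["size_band"])
        return self.models[segment].predict(horizon_hours)

class ConstantModel:                          # stand-in for a trained model
    def __init__(self, kwh):
        self.kwh = kwh
    def predict(self, horizon_hours):
        return [self.kwh] * horizon_hours

mgr = ForecastManager()
mgr.register(Segment("retail", "small"), ConstantModel(12.0))
print(mgr.forecast({"industry": "retail", "size_band": "small"}, 24))
```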
Techniques for providing machine-learned (ML)-based artificial intelligence (AI) capabilities are described. In one technique, multiple AI capabilities are stored in a cloud environment. While the AI capabilities are stored, a request for a particular AI capability is received from a computing device of a user. Also, in response to receiving training data based on input from the user, the training data is stored in a tenancy, associated with the user, in the cloud environment. In response to receiving the request, the particular AI capability is accessed, a ML model is trained based on the particular AI capability and the training data to produce a trained ML model, and an endpoint, in the cloud environment, is generated that is associated with the trained ML model. The endpoint is provided to the tenancy associated with the user.
Systems, methods, and other embodiments associated with concurrently joining voice channels and web channels are described. In one embodiment, a method includes establishing a voice session to communicate over an audio channel, wherein a live agent communicates audio voice signals with a user. In response to identifying an issue from the user, a navigation link is transmitted, wherein the navigation link, when activated, navigates a browser to a web page associated with the issue. A web session is established to communicate between the browser and the web page. The voice session and the web session associated with the user are linked together. Because the two sessions are connected, a call controller may then communicate simultaneously with both channels, allowing the live agent to disconnect from the audio channel.
Network entities associated with a virtual cloud network are transitioned through a certificate bundle distribution process for distributing new certificate authority certificates to the network entities. Operations may include executing, in relation to each of the network entities, a first operation associated with a first phase of the process; obtaining, for each particular network entity, individual entity information associated with a progress of a particular network entity in relation to the first phase; computing, based on the individual entity information, an aggregate metric indicative of an aggregate progress of the network entities in relation to the first phase; determining, based on the aggregate metric, that one or more transition criteria are satisfied for transitioning the network entities from the first phase to a second phase of the process; and executing, in relation to each of the network entities, a second operation associated with the second phase of the process.
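The aggregate-metric gate described above might look like the following sketch; the 99% completion threshold and the per-entity fields are assumptions.

```python
# Sketch: aggregate per-entity progress for the current phase and transition
# to the next phase when a threshold criterion is met.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    phase_done: bool    # progress of this entity in the current phase

def aggregate_progress(entities: list[Entity]) -> float:
    return sum(e.phase_done for e in entities) / len(entities)

def maybe_transition(entities: list[Entity], current_phase: int,
                     threshold: float = 0.99) -> int:
    metric = aggregate_progress(entities)     # aggregate metric over entities
    if metric >= threshold:                   # transition criteria satisfied
        for e in entities:
            e.phase_done = False              # reset for the next phase's op
        return current_phase + 1              # execute the next-phase operation
    return current_phase

fleet = [Entity("vcn-node-1", True), Entity("vcn-node-2", True)]
print(maybe_transition(fleet, current_phase=1))   # -> 2
```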
Embodiments permit secure information exchange using lightweight data and near-field communication (NFC). A user can transmit lightweight data, such as one or more indicators (e.g., user indicator, scope indicator(s), documents indicator(s), etc.), to a receiving computing system via the user's wireless device and an NFC protocol. Because NFC transmissions are performed by co-located devices, this lightweight data transmission can trigger and/or continue a sophisticated workflow. For example, the receiving computing system can be associated with a product or service provider, and the lightweight data transmission can progress a workflow related to a particular product and/or service. The workflow progression can include accessing secure user information via the indicator(s) received over the NFC transmission.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
21.
TRACKING DATA CENTER BUILD DEPENDENCIES WITH CAPABILITIES AND SKILLS
A cloud-computing service (e.g., a "Puffin Service") is described. The service may maintain backward and forward compatibility between skills and capabilities. Skills may be configured to enable improved tracking of a process for building a data center. There may be occasions in which an orchestrator may use both skills and capabilities to drive build operations. To enable both constructs to be utilized, the Puffin Service maintains associations between skills and capabilities. These associations enable skills to be published when published capabilities are identified and corresponding capabilities to be published for published skills, which in turn allows the orchestrator to drive build operations based on any suitable combination of capabilities and/or skills. Previously published capabilities may be identified and system-generated skills ("shadow skills") may be used to represent the previously published capabilities, further enabling compatibility between constructs while avoiding burdensome data entry.
A cloud infrastructure orchestration service may maintain a service plan and manifest (SPAM) corresponding to a service to be bootstrapped (e.g., provisioned and deployed) to a cloud computing environment (e.g., to a data center). The service plan may specify a deterministic order of releases for performing a process to fully bootstrap the service using one or more build milestones and one or more execution units, each execution unit specifying ordered steps for transitioning between build milestones. Each step may reference one or more execution target checkpoint transitions, which in turn reference an alias of a configuration file that defines a release. A manifest may be used to identify the configuration files and artifacts to be used by the releases and to validate the service plan. A SPAM may be used to reduce/eliminate nondeterministic behavior of previous orchestration systems and to provide visualizations of the bootstrapping process at different granularities.
A cloud infrastructure orchestration service (CIOS) may be used to create a service plan and manifest (SPAM) that defines a deterministic order of releases for bootstrapping a service (e.g., provisioning and deploying resources of the service) to a cloud computing environment (e.g., to a data center). A corresponding manifest may be used to identify the configuration files and artifacts to be used by the releases. The manifest may be used to validate the service plan. The CIOS may be configured to validate the SPAM. If compatible, the SPAM may be added to a SPAM set. A SPAM set (a collection of SPAMs corresponding to respective services) may be used to derive a version set (identifying configuration file and artifact versions) with which a directed acyclic graph may be generated. CIOS may bootstrap various services within the data center based at least in part on traversing the directed acyclic graph.
A cloud infrastructure orchestration service (CIOS) may track build progress. A service plan may define a first execution order of releases for bootstrapping a service (e.g., provisioning and deploying resources of the service) to an execution target (ET) (e.g., a set of devices of a data center). The first execution order may be defined using transitions between ET checkpoints, with each transition and checkpoint being associated with a corresponding release. A directed acyclic graph (DAG) may be generated from any suitable number of service plans associated with various services to define a second execution order for the releases needed to bootstrap the services. At build time, CIOS may track release execution by updating the state of an ET to correspond to an ET checkpoint when the release is successful. ET states may be used by CIOS to enforce the second execution order.
A cloud infrastructure orchestration service (CIOS) may track build progress made by any suitable number of regional orchestrators. An orchestrator control plane may be configured to generate a region build plan for bootstrapping a plurality of services within a data center. The orchestrator control plane may instruct a region orchestrator to execute a build according to the build plan. The region orchestrator may be configured to update an execution state corresponding to the execution of the region build plan as it executes the ordered steps of the region build plan. At any suitable time (e.g., when executing one of the steps fails), intervention data may be received with which a new region build plan may be generated. The new region build plan may be used for subsequent execution of the region build. This may enable run-time corrections to be made.
A cloud-computing service (e.g., a "Puffin Service") is described. The service may maintain service and skill catalogs corresponding to various services to be deployed to a region (e.g., during a region build). The service may host numerous user interfaces with which various service and skill metadata may be provided. In some embodiments, such data may include one or more dependencies between skills. The data managed by the cloud-computing service may be utilized to build a dependency graph. Navigation of the dependency graph may be performed via one or more user interfaces hosted by the cloud-computing service. An orchestration service (e.g., a Multi-Flock Orchestrator) may manage bootstrapping efforts for any suitable number of services during a region build based at least in part on dependencies between skills.
Skills and skills metadata may be used to define a process for building a data center. Skills of one service may depend on skills corresponding to the same or different service. A dependency graph may be generated based on these dependencies. The graph may specify an order by which orchestration operations are to be performed to build the services, thereby building the data center. During execution of the process for building the data center, health states corresponding to the skills may be tracked (based at least in part on alarms and/or namespaces associated with the skills). When an unhealthy skill is identified, the system may traverse the dependency graph to identify a root cause (e.g., failed operations corresponding to a skill on which the unhealthy skill directly/indirectly depends). A notification and/or various options may be provided to address the unhealthy state of one or both skills.
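The root-cause traversal described above reduces to a graph walk; a sketch follows, with the skill names, health map, and "deepest unhealthy dependency" rule as illustrative assumptions.

```python
# Sketch: walk skill dependencies from an unhealthy skill toward a root cause.
def find_root_causes(depends_on: dict[str, list[str]],
                     health: dict[str, str], start: str) -> set[str]:
    """Return the deepest unhealthy skills the start skill depends on."""
    roots, stack, seen = set(), [start], set()
    while stack:
        skill = stack.pop()
        if skill in seen:
            continue
        seen.add(skill)
        unhealthy_deps = [d for d in depends_on.get(skill, [])
                          if health.get(d) == "unhealthy"]
        if unhealthy_deps:
            stack.extend(unhealthy_deps)     # keep digging toward the cause
        elif health.get(skill) == "unhealthy":
            roots.add(skill)                 # unhealthy, but its deps are healthy

    return roots

depends_on = {"dns.resolve": ["compute.launch"],
              "compute.launch": ["identity.auth"]}
health = {"dns.resolve": "unhealthy", "compute.launch": "unhealthy",
          "identity.auth": "healthy"}
print(find_root_causes(depends_on, health, "dns.resolve"))  # {'compute.launch'}
```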
Techniques are described for data management. An example method, performed by a computing system, can include processing a first message indicating that an intermediate computing system managed by a first data center in a first region has received data from a second data center in a second region. The method can further include transmitting first control instructions to the intermediate computing system to validate the data based at least in part on first criteria. The method can further include processing validation results from the intermediate computing system. The method can further include processing a second message indicating to release the data from a first isolated environment of the intermediate computing system. The method can further include processing, by the computing system, a third message indicating that the second message originated from a computing device located in the first region. The method can further include causing the data to be released from the first isolated environment.
A cloud infrastructure orchestration service (CIOS) may track build progress made by any suitable number of regional orchestrators. The cloud infrastructure orchestration system may include any suitable number of regional orchestrators, each regional orchestrator executing in an isolated hosting environment (e.g., a service cell isolated from other service cells). An orchestrator control plane may be configured to generate a build plan for bootstrapping a plurality of services within a data center, the build plan may be generated based at least in part on a service build definition of a plurality of service build definitions, the service build definition specifying a deterministic process for bootstrapping a service of the plurality of services. The orchestrator control plane may instruct a regional orchestrator to perform bootstrapping operations according to the build plan and may track the progress of the bootstrapping operations on an ongoing basis.
A variety of testing environments and techniques are disclosed. An orchestrator control plane may generate a build plan comprising a plurality of ordered steps for bootstrapping one or more services. The build plan may be generated based at least in part on one or more service plans and manifests that individually specify a deterministic process for bootstrapping a service. The orchestrator control plane may instruct a region orchestrator executing within an isolated testing environment to execute a test build of the one or more services according to the build plan. The region orchestrator may execute, as part of executing the test build, a subset of steps from the plurality of ordered steps of the build plan utilizing resources of the isolated testing environment and in an order identified by the build plan. At any suitable time, the isolated testing environment may be reset to enable subsequent test build executions.
Techniques are provided for creating a "ubiquitous search index," which allows for full-text as well as value range-based search across all columns from multiple database tables, multiple user-defined unmaterialized views, and external sources. In one implementation, the data is indexed in a specially constructed schema-based JSON format without duplicating data. The techniques maintain eventual consistency with the normalized source-of-truth database tables and do not have a significant impact on the performance of transactional Data Manipulation Language (DML) operations.
32.
SCALABLE HUB AND SPOKE TOPOLOGY FOR ROUTING USING COMPUTE INSTANCES
Techniques are described for creating a network link between a first customer virtual network in a first cloud environment and a second customer virtual network in a second cloud environment. The first customer virtual network in the first cloud environment is created to enable a user associated with a customer tenancy in the second cloud environment to access one or more services provided in the first cloud environment. The network link is created based on one or more link-enabling virtual networks that are deployed in the first cloud environment and the second cloud environment.
Techniques are described for creating a network link between a first customer virtual network in a first cloud environment and a second customer virtual network in a second cloud environment. The first customer virtual network in the first cloud environment is created to enable a user associated with a customer tenancy in the second cloud environment to access one or more services provided in the first cloud environment. The network link is created based on one or more link-enabling virtual networks that are deployed in the first cloud environment and the second cloud environment.
Techniques for enabling the building of general input data ML flows using a serverless data-representation-as-a-service (DRaaS) are provided. In one technique, in response to receiving a first data representation (DR) generation request from a first calling entity, first input data is retrieved based on the first DR generation request, a first set of DRs is generated (by a DR generator) based on the first input data, and the first set of DRs is made available to the first calling entity. In response to receiving a second DR generation request from a second calling entity that is different than the first calling entity, second input data is retrieved based on the second DR generation request, a second set of DRs is generated based on the second input data, and the second set of DRs is made available to the second calling entity.
Techniques described herein are directed toward a univariate series truncation policy using change point detection. An example method can include a device determining a first time series comprising a first set of data points indexed over time. The device can determine a first and a second change point of the first time series based on a relative position and a category of the change points. The device can generate a first and a second truncated time series based on the change points. The device can generate a first forecasted value using a first forecasting technique and a second forecasted value using a second forecasting technique. The device can compare the first forecasted value and the second forecasted value using a second time series. The device can select one of the forecasting techniques to generate a final forecasted value based on the comparison. The device can generate, using the selected forecasting technique, the final forecasted value.
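A compact sketch of the truncate-forecast-compare-select loop above; naive and mean forecasts stand in for the two unspecified forecasting techniques, and the change points are taken as given rather than detected.

```python
# Sketch: truncate a series at two change points, forecast from each
# truncation with a different technique, and keep whichever tracks a
# holdout value from a second series better.
import numpy as np

def truncate_at(series: np.ndarray, change_point: int) -> np.ndarray:
    return series[change_point:]              # keep data after the change

def naive_forecast(series: np.ndarray) -> float:
    return float(series[-1])                  # technique 1: last value

def mean_forecast(series: np.ndarray) -> float:
    return float(series.mean())               # technique 2: window mean

series = np.concatenate([np.full(50, 10.0), np.full(50, 25.0)])
holdout = 25.0                                 # value from the second series

cp1, cp2 = 50, 80                              # change points (taken as given)
f1 = naive_forecast(truncate_at(series, cp1))
f2 = mean_forecast(truncate_at(series, cp2))

# Compare against the holdout, select the better technique, then generate
# the final forecasted value with the selected technique.
use_naive = abs(f1 - holdout) <= abs(f2 - holdout)
chosen, cp = (naive_forecast, cp1) if use_naive else (mean_forecast, cp2)
print("final forecast:", chosen(truncate_at(series, cp)))
```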
Systems and methods provide tiered assessment of use of services in a cloud environment. The system includes an operator cloud environment running on computers including microprocessors, wherein the operator cloud environment is deployed within a first realm owned by an operator tenant of the realm; a set of software products provided to the first realm from a cloud infrastructure provider of the cloud environment for access via the first realm by a plurality of end users as vendor cloud services; and a metering service. Usage data that records usage of services in the realm, including identification data associating user entities with their usage of the services, is provided to the operator tenant associated with control of the realm. A second set of data is generated by processing the usage data to remove or convert the identification data and is provided to the cloud infrastructure provider associated with control of the cloud environment.
Techniques for controlling resource deployments in a cloud partition of a cloud environment are disclosed. A cloud service provider (CSP) operates the cloud environment where its customers can specify constraints on deployments to their respective partitions (i.e., regions or realms). A partition-specific deployment constraint is a rule that constrains the changes/updates that can be made to one or more specific partitions. A partition-specific deployment constraint applies to at least one partition but may apply to multiple partitions. For example, a partition-specific deployment constraint may apply to one or more regions in a realm. A partition-specific deployment constraint is evaluated at deployment time using the most recent state, or a curated subset thereof, for at least one specific partition. A global deployment orchestrator conditions a deployment, at least in part, on whether the deployment satisfies the partition-specific constraint(s) in the target partition.
Techniques for controlling resource deployments in a cloud partition of a cloud environment are disclosed. A cloud service provider (CSP) operates the cloud environment where its customers can specify two-tiered constraints on deployments to their respective partitions (i.e., regions or realms). A first deployment constraint may be a global constraint set by the CSP and a second deployment constraint may be a partition-specific deployment constraint set by a customer of the CSP. Each deployment constraint applies to the changes/updates that can be made to one or more specific partitions. A global deployment orchestrator conditions a deployment, at least in part, on whether the deployment satisfies both tiers of deployment constraints.
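The two-tier gate above can be sketched as two predicates that must both pass at deployment time; the constraint bodies and partition names are assumptions.

```python
# Sketch: a deployment proceeds only if both the CSP's global constraint and
# the customer's partition-specific constraint pass against current state.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Deployment:
    artifact: str
    target_partition: str     # a region or realm

# Constraints are predicates over (deployment, latest partition state).
Constraint = Callable[[Deployment, dict], bool]

global_constraint: Constraint = (
    lambda d, state: state.get("maintenance_window_open", False))
partition_constraints: dict[str, Constraint] = {
    "realm-eu": lambda d, state: d.artifact.startswith("approved/"),
}

def may_deploy(d: Deployment, partition_state: dict) -> bool:
    # Evaluated at deployment time using the most recent partition state.
    tier1 = global_constraint(d, partition_state)
    tier2 = partition_constraints.get(d.target_partition,
                                      lambda *_: True)(d, partition_state)
    return tier1 and tier2

d = Deployment("approved/service-v2", "realm-eu")
print(may_deploy(d, {"maintenance_window_open": True}))   # True
```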
Techniques for monitoring the health of services of a system are disclosed. A system determines that a detected alarm is associated with a service feature, and the service feature is associated with a service of a cloud environment. The system computes a health metric for the service based at least on the detected alarm that is associated with the service feature. Additionally, the system generates a visual representation that includes the health metric for display on a service health interface.
Techniques for enabling a customer operator of a cloud service provider (CSP) to disable operator access to resources in a customer cloud environment are disclosed. Operator access may be disabled or suspended by operators of the CSP customer initiating a disable command. Disabling operator access includes (a) terminating existing sessions that provide operators access to the resources, (b) rejecting new requests for credentials to establish sessions that provide operator access, and/or (c) revoking existing credentials used to establish sessions that provide operator access. Disabling operator access may apply to all resources in the customer cloud environment or to a subset of resources and/or may apply to some operators but not to other operators. The operators may be of the same or different categories of operators. At the conclusion of a designated period of time, the ability of operators to access the customer cloud environment may be restored.
Techniques for providing user access to cloud environments through an administrative tenancy to comply with sovereignty requirements are disclosed. The administrative tenancy is one of multiple tenancies in the cloud environment. The administrative tenancy includes tools for communicating with services running outside of the administrative tenancy. The user may only be able to access these services through the administrative tenancy. User access to the administrative tenancy requires the user to satisfy one or more sovereignty requirements. After determining that the user satisfies the sovereignty requirements for the cloud environment, the system grants the user access to the tools within the administrative tenancy to communicate with services outside the administrative tenancy.
Embodiments described herein are generally related to systems and methods for providing access to software products or services in a cloud computing or other computing environment. Dynamic rate card management allows organizations to optimize the number of rate cards to a manageable level wherein, for example, rate cards can be associated with the type of contract policy. In accordance with an embodiment, in order to accommodate the use of dynamic rate cards, a migration service or process can be used to convert/migrate subscriptions that were originally created under a first, legacy or former subscription pricing model, to conform instead with a subscription pricing service model, for use with the various systems, methods, and features described herein.
Systems and methods described herein provide for a customizable console, for use with providing cloud environments. Cloud computing offerings enable access within the context of a cloud environment by third-party operators acting as resellers of products or services owned or managed by a cloud provider. An operator provides access to their customers via consoles that are customizable by the operators to enable greater control over their cloud-based products and services.
Embodiments described herein are generally related to systems and methods for providing cloud environments, for use by tenants of a cloud infrastructure environment in accessing software products, services, or other offerings associated with the environment, including methods for defining and enforcing service control policies directed to services and service features. In accordance with an embodiment, the system comprises a service control repository or service catalog that provides a definition of the services and service features, together with service control policies or rules that define availability or access to the service features. A service control policy framework, comprising a feature management service, determines, by reference to a hierarchy of entities defining the service control policies, which different entities can control the availability of particular services or service features to end users.
Techniques for enhancing software extensibility in cloud environments are disclosed. One or more embodiments receive instructions to define services and workers within the cloud, categorize workers based on predefined types, apply corresponding predefined functionality, and instantiate corresponding cloud-based infrastructure. Additionally, one or more embodiments facilitate communication and integration with third-party services. One or more embodiments further generate canary components to help ensure service operationality. An approval process may be used to certify compliance with cloud standards. Such techniques enhance cloud service deployment, scalability, and interoperability while maintaining security and reliability.
Techniques for monitoring the health of services of a system are disclosed. A system determines a health metric for a service in a cloud environment. Additionally, the system determines a first service feature of the service and a plurality of downstream service features that depend on the first service feature. The system determines an impact weight for the first service feature based on the plurality of downstream service features. Additionally, the system computes a weighted health metric for the service at least by applying the impact weight to the health metric. The system generates a visual representation that includes the weighted health metric for display on a service health interface.
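A toy sketch of the impact-weighting step above; the linear weighting formula is an assumption, since the abstract does not specify how downstream features translate into a weight.

```python
# Sketch: weight a service's health metric by how many downstream service
# features depend on its feature.
def impact_weight(downstream_features: list[str]) -> float:
    return 1.0 + 0.1 * len(downstream_features)   # more dependents, more impact

def weighted_health(health_metric: float, downstream: list[str]) -> float:
    return health_metric * impact_weight(downstream)

downstream = ["login", "checkout", "billing"]     # features depending on ours
print(weighted_health(0.72, downstream))          # 0.72 * 1.3 ≈ 0.936
```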
Techniques for consent-driven access management include: receiving, from a requestor, a request for consent for an actor to access a target set of resources in a cloud environment; identifying a consent workflow that specifies a name and/or an attribute of a set of one or more users from which to obtain respective approvals of the consent request; traversing the consent workflow to obtain the respective approvals from the set of one or more users; determining that one or more access policies, separate from the consent workflow, permit the actor to access the target set of resources; where access by the actor to the target set of resources is conditioned on both (a) obtaining the respective approvals from the set of one or more users and (b) determining that the one or more access policies, separate from the consent workflow, permit the actor to access the target set of resources.
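Conditions (a) and (b) above combine as a simple conjunction; in the sketch below, the workflow and policy shapes are assumptions.

```python
# Sketch: access requires BOTH every approval named by the consent workflow
# AND an independent access-policy check.
def all_approved(workflow: list[dict], approvals: set[str]) -> bool:
    # Each workflow step names a user (or attribute) whose approval is needed.
    return all(step["approver"] in approvals for step in workflow)

def policy_permits(policies: list[dict], actor: str, resource: str) -> bool:
    return any(p["actor"] == actor and resource in p["resources"]
               for p in policies)

workflow = [{"approver": "security-lead"}, {"approver": "data-owner"}]
policies = [{"actor": "operator-7", "resources": {"db-prod"}}]

def may_access(actor: str, resource: str, approvals: set[str]) -> bool:
    return (all_approved(workflow, approvals)               # condition (a)
            and policy_permits(policies, actor, resource))  # condition (b)

print(may_access("operator-7", "db-prod", {"security-lead", "data-owner"}))
```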
Techniques for presenting information indicating infrastructure security and compliance of services of a customer cloud environment to a customer-facing dashboard are disclosed. The information presented to the customer-facing dashboard is a subset of information available to operators associated with a cloud service provider (CSP). A tier-one dashboard service obtains information indicating infrastructure security and compliance of services in the customer cloud infrastructure environment. The tier-one dashboard service presents the information indicating infrastructure security and compliance to CSP operators on a CSP-facing dashboard. The CSP-facing dashboard is not accessible by customer operators. A tier-two dashboard service obtains the infrastructure security and compliance information from the tier-one dashboard service and filters the infrastructure security information to create a subset of information indicating the security and compliance of the services. The subset of infrastructure security and compliance information is presented to operators associated with a customer of the CSP on a customer-facing dashboard.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 21/50 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
Techniques for deploying artifacts to a computing environment are disclosed. A system includes an artifact deployment tool. The artifact deployment tool determines that an artifact is available for deployment to a target computing environment. The artifact deployment tool obtains a deployment token representing verification that a set of one or more customer designated conditions are satisfied to deploy the artifact to the target computing environment. The artifact deployment tool generates a deployment request to deploy the artifact to the target computing environment. The deployment request includes the deployment token. The artifact deployment tool directs the deployment request to a deployment service for deploying artifacts to the target computing environment. The deployment service obtains validation of the deployment token and, responsive to obtaining validation of the deployment token, deploys the artifact to the target computing environment.
Techniques for responding to a trigger event that threatens an operability of at least a portion of a cloud infrastructure of a cloud environment are disclosed. In response to detecting the occurrence of the trigger event, a system executes a mitigation process for mitigating an effect of the trigger event. The mitigation process includes determining a set of candidate services as candidates for stopping execution of operations in the cloud environment. In addition, the mitigation process generates a ranking of the set of candidate services based on weighting metrics associated with respective service features of the set of candidate services. Further, based on the ranking, the mitigation process selects a service of the set of candidate services and stops execution of operations of the service to at least partially mitigate the effect of the trigger event.
A blockchain system is enabled to participate in distributed transactions that use a two-phase commit protocol ("2PC"). In a 2PC, a computer system, such as a DBMS or blockchain system, commits a transaction that changes data (e.g., database, world state) using two phases. To participate in a distributed transaction using 2PC, a blockchain system executes a "staged transaction". A staged transaction transitions through the 2PC phases. In the prepare phase, the new values for world state records are staged in staging records as staged values. In the second phase, if the distributed transaction is to be committed, the world state records are set to the staged values.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
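As a concrete illustration of the staged-transaction flow in the blockchain abstract above, here is a sketch of a world state that stages values during prepare and applies them on commit; class and key names are assumptions.

```python
# Sketch: a world state participating in 2PC by staging new values in the
# prepare phase and only applying them to world state records on commit.
class StagedWorldState:
    def __init__(self):
        self.world = {}       # committed world state records
        self.staging = {}     # txn_id -> {key: staged value}

    def prepare(self, txn_id: str, new_values: dict) -> bool:
        # Phase 1: stage the new values without touching world state.
        self.staging[txn_id] = dict(new_values)
        return True           # vote "prepared" to the coordinator

    def commit(self, txn_id: str) -> None:
        # Phase 2: set world state records to their staged values.
        self.world.update(self.staging.pop(txn_id))

    def rollback(self, txn_id: str) -> None:
        self.staging.pop(txn_id, None)   # discard staged values

ws = StagedWorldState()
ws.prepare("txn-1", {"balance:alice": 90, "balance:bob": 110})
ws.commit("txn-1")
print(ws.world)
```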
52.
EXECUTING UNSUPERVISED PRE-TRAINING TASKS WITH A MACHINE LEARNING MODEL TO PREDICT DOCUMENT GRAPH ATTRIBUTES
Techniques for multi-layer training of a machine learning model are disclosed. A system pre-trains a machine learning model on training data obtained from unlabeled document graph data by executing unsupervised pre-training tasks on the unlabeled document graph data to generate a labeled pre-training data set. The system modifies document graphs to change attributes of nodes in the document graphs. The system pre-trains the machine learning model with a data set including the modified document graphs and un-modified document graphs to generate predictions associated with the modifications to the document graphs. Subsequent to pre-training, the system fine-tunes the machine learning model with a set of labeled training data to generate predictions associated with a specific attribute of a document graph.
Systems, methods, and other embodiments associated with automatic clustering of signals including added ambient signals are described. In one embodiment, a method includes receiving time series signals (TSSs) associated with a plurality of sensors measuring physical properties of a plurality of machines. The TSSs are automatically separated into a plurality of clusters corresponding to the plurality of the machines. A group of ambient TSSs is identified that overlaps more than one of the clusters. The group of the ambient TSSs is added into one cluster of the clusters that corresponds to one machine of the machines. A machine learning model is then trained to detect an anomaly of a non-nominal physical property based on the one cluster to generate a trained machine learning model that is specific to the one machine, without using the TSSs not included in the one cluster.
Techniques for making network path performance measurements by utilizing multi-layer tunneling are described. In a distributed environment that includes one or more nodes configured to inject network traffic (compute nodes) and one or more nodes that are not configured to inject network traffic (router nodes), techniques are disclosed that allow for the measurement of performance metrics across network segments that include at least one router node. In certain implementations, with one or more router nodes configured with a tunnel termination endpoint and/or a locally-relevant label-to-port mapping, performance metrics between router nodes or between router nodes and compute nodes can be measured. Performance metrics that may be measured using the techniques disclosed herein include network latency, packet loss, and jitter. In addition, the techniques may be used for fault isolation.
A technique may include receiving, by a management service, a plurality of instance configurations from a client device. The technique may then include receiving, by the management service, information identifying a launch request for a compute instance. The technique may include determining, by the management service, one or more candidate shapes for the compute instance based at least in part on the plurality of instance configurations. The technique may include selecting, by the management service and from the one or more candidate shapes, a launch shape for the compute instance and launching the compute instance using the launch shape. The technique may then include providing the client device access to the compute instance launched based on the launch shape.
Embodiments permit scope limited access to a user's secure information using blockchain backed credential(s). A user can register with a secure information manager and control the scope with which the user's secure information is shared. For example, the user can permit a vetted entity access to the user's secure information via a portable access point. The user can select scope definitions that control how the user's secure information is shared with the vetted entity. The vetted entity can scan the user's portable access point and request a credential. The credential can be a blockchain backed credential that is assigned access privileges that correspond to the user's selections. The vetted entity can then issue data access request(s) using the credential. The secure information manager can permit the vetted entity scope limited access to the user's secure information that corresponds to the access privileges assigned to the credential.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
57.
ACCESS MANAGER THAT LIMITS ACCESS TO USER INFORMATION USING AUTHENTICATION AND VERIFICATION
Embodiments permit scope limited access to a user's secure information using credential authentication and user information verification. Certain information sharing protocols can require an explicit grant to share a user's secure information with a requesting entity. In some scenarios such an explicit grant may be impractical, such as when the user is not available to provide such an explicit grant. Embodiments of a secure information manager can permit a vetted entity scope and time limited access to a user's secure information in such scenarios, for example when the vetted entity provides an assertion that the user is unable to provide an explicit grant. For example, in scenario(s) with exigent circumstances, the secure information manager can permit the vetted entity to access a limited scope of user information that corresponds to the vetted entity's relationship to the user, role in a workflow, or other suitable characteristics of the vetted entity.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
58.
MANAGER FOR INGESTING SECURE USER INFORMATION AND PERMITTING SCOPE LIMITED ACCESS
Embodiments permit scope limited access to a user's secure information using blockchain backed credential(s). A user can register with a secure information manager and control the scope with which the user's secure information is shared. For example, the user can permit a vetted entity access to the user's secure information via a portable access point. The user can select scope definitions that control how the user's secure information is shared. The vetted entity can scan the user's portable access point and request a credential. The vetted entity can then issue data access request(s) using the credential. The secure information manager can permit the vetted entity scope limited access to the user's secure information that corresponds to the access privileges assigned to the credential. The secure user information managed by the secure information manager can be received or retrieved from multiple sources and ingested/organized according to a multidimensional data schema.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
59.
DIGITAL TWIN FOR DISTRIBUTING DECENTRALIZED COMPUTE RESOURCES
The present disclosure relates to systems and methods for distributing decentralized compute resources. Compute resource metadata that identifies a set of decentralized compute resources can be received. A request to use one or more compute resources can be received. A digital twin can be generated. The digital twin can facilitate identification of a particular compute resource, and the digital twin can be representative of potential interactions between a receiver entity and a set of provider entities. An interaction can be initiated between the receiver entity and a particular provider entity. The interaction may involve allocating the particular compute resource from the particular provider entity to the receiver entity in response to the request.
Techniques are disclosed for a mobile prefab factory for building region data centers. The mobile prefab factory can include a containment enclosure configured to mount physical computing resources of a data center, a networking device, a power supply electrically connected to the networking device, and a plurality of computing devices of the physical computing resources communicatively connected to the networking device and electrically connected to the power supply. A manager service can configure the computing devices for transmission to the destination site by implementing a seed server device of the plurality of computing devices and implementing a software resource repository at the seed server device. While the containment enclosure is in transit, the seed server device can deploy software resources to the plurality of computing devices.
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
61.
TECHNIQUES FOR A CABLE TERMINATION PROTECTION APPARATUS IN A PREFAB FACTORY
A cable termination protection apparatus and methods of use in a prefab factory are disclosed. The cable termination protection apparatus can include a frame having ports arranged on a face of the frame. Each of the ports can be configured to accept a cable termination connector of a networking cable of a static network fabric in a prefab factory. A computing device can generate instructions usable to disconnect a networking cable from the cable termination protection apparatus and reconnect the networking cable at a networking port of a networking device of a region data center rack. The computing device can receive a build request for the region data center rack. In response, the computing device can obtain physical configuration parameters for computing devices on the data center rack and cabling specification information. The computing device can generate the instructions using the physical configuration parameters and the cabling specification information.
Techniques are disclosed for validating a cloud region built at a prefab factory. A computing device of the cloud region can receive a network configuration from a manager service. The network configuration can correspond to a network topology of physical resources in the cloud region and can include a first identifier associated with a computing device, a second identifier associated with a neighboring computing device, and information associating the computing device with the neighboring computing device. The computing device can be configured for transmitting to a second data center and can boot into a test mode at the second data center and receive a new identifier from a server device. The computing device can verify the new identifier and send a validation request to the neighboring computing device. The computing device can validate a network connection to the neighboring computing device based on a response to the validation request.
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
63.
TECHNIQUES FOR BUILDING CLOUD REGIONS AT A PREFAB FACTORY
Techniques are disclosed for building a region at a prefab factory. A manager service can receive a build request. The manager service can generate, based on the build request, a physical build request for building physical resources within the prefab factory. The manager service can receive an indication that the physical resources corresponding to the physical build request have been built. In response, the manager service can implement a virtual bootstrap environment at a second data center communicatively connected to the prefab factory. The manager service can deploy software resources to the physical resources using the virtual bootstrap environment. The manager service can configure the physical resources for transmission to a destination site by at least generating an inventory of the physical resources and generating a network configuration corresponding to a network topology of the physical resources in the prefab factory.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
Techniques are disclosed for a networking fabric in a data center for a prefab factory. The networking fabric can include a plurality of networking cables routed through the data center characterized by a static network fabric topology, with a set of networking cables of the plurality of networking cables configured to terminate at a location in the data center. A plurality of computing devices can be positioned at the location and configured to form a region network when communicatively connected to the set of networking cables according to a connection plan. The connection plan can be generated by a network service using a physical build request. The network service can determine the configuration of the plurality of computing devices and the static network fabric topology. The network service can generate the connection plan using the configuration and the static network fabric topology.
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
65.
SYSTEM SELECTED FUNGIBLE CONFIGURABLE ATTRIBUTES FOR A COMPUTE INSTANCE
Techniques for configuring and creating a compute instance are disclosed. A system may receive a request to launch a compute instance, where the compute instance is defined by a configurable attribute. The request comprises one or more user-specified criteria for the configurable attribute without including a specific value for the configurable attribute. The system determines a set of candidate values for the configurable attribute. The system selects the specific value for the configurable attribute from the set of candidate values, based on the one or more user-specified criteria. The system stores the specific value in association with the configurable attribute and launches the compute instance based on the system-selected specific value for the configurable attribute of the compute instance.
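To make the selection step concrete, here is a minimal Python sketch (all names and the criteria format are hypothetical; the abstract does not prescribe an implementation) that filters a set of candidate values against the user-specified criteria and returns the system-selected value:

```python
from dataclasses import dataclass

@dataclass
class LaunchRequest:
    attribute: str   # the configurable attribute, e.g. "availability_domain"
    criteria: dict   # user-specified criteria; no specific value is given

def select_attribute_value(request: LaunchRequest, candidates: list[dict]) -> str:
    """Return the first candidate value satisfying every user-specified criterion."""
    for candidate in candidates:
        if all(candidate.get(k) == v for k, v in request.criteria.items()):
            return candidate["value"]
    raise ValueError("no candidate value satisfies the user-specified criteria")

# Usage: the user asks for any value with low capacity pressure.
request = LaunchRequest(attribute="availability_domain",
                        criteria={"capacity": "low_pressure"})
candidates = [{"value": "AD-1", "capacity": "high_pressure"},
              {"value": "AD-2", "capacity": "low_pressure"}]
print(select_attribute_value(request, candidates))  # AD-2, stored and used at launch
```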
In an embodiment, a computer generates, from an input, an inference that contains multiple probabilities respectively for multiple mutually exclusive classes that contain a first class and a second class. The probabilities contain (e.g. due to overfitting) a probability for the first class that is higher than the probability for the second class. In response to a threshold exceeding the higher probability, the input is automatically and more accurately classified as the second class. One, some, or almost all classes may have a respective distinct threshold that can be concurrently applied for acceleration. Data parallelism may simultaneously apply a threshold to a batch of multiple inputs for acceleration.
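A minimal numpy sketch of the per-class thresholding (threshold values and the two-class fallback are illustrative assumptions): when the top class's own threshold exceeds its probability, the input falls to the runner-up class, and the batch dimension provides the data parallelism mentioned above.

```python
import numpy as np

def classify_with_thresholds(probs: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """probs: (batch, classes) inference outputs; thresholds: (classes,) cutoffs."""
    order = np.argsort(probs, axis=1)[:, ::-1]      # classes by descending probability
    top, runner_up = order[:, 0], order[:, 1]
    top_prob = probs[np.arange(len(probs)), top]
    demote = top_prob < thresholds[top]             # threshold exceeds the higher probability
    return np.where(demote, runner_up, top)         # demoted inputs take the second class

probs = np.array([[0.55, 0.45],    # top class 0 falls below its 0.60 threshold: class 1
                  [0.90, 0.10]])   # top class 0 clears its threshold: class 0
thresholds = np.array([0.60, 0.50])
print(classify_with_thresholds(probs, thresholds))  # [1 0]
```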
A computer stores a reference corpus that consists of many reference points, each of which has a respective class. Later, an expected class and a subject point (i.e. instance to explain) that does not have the expected class are received. Multiple reference points that have the expected class are selected as starting points. Based on the subject point and the starting points, multiple discrete interpolated points are generated that have the expected class. Based on the subject point and the discrete interpolated points, multiple continuous interpolated points are generated that have the expected class. A counterfactual explanation of why the subject point does not have the expected class is directly generated based on continuous interpolated point(s) and, thus, indirectly generated based on the discrete interpolated points. For acceleration, neither way of interpolation (i.e. counterfactual generation) is iterative. Generated interpolated points can be reused to amortize resources consumed while generating counterfactuals.
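The continuous interpolation step might look like the following sketch (the classifier is a stub, the one-pass grid sweep is one illustrative non-iterative strategy, and the discrete interpolation stage is omitted): a fixed grid of convex combinations between the subject point and a starting point is evaluated in a single pass, keeping the point nearest the subject that still has the expected class.

```python
import numpy as np

def stub_predict(x: np.ndarray) -> int:
    """Hypothetical stand-in for the model whose decision is being explained."""
    return int(x.sum() > 1.0)

def continuous_counterfactual(subject, start, expected, predict, steps=32):
    """One-pass grid of convex combinations from the subject toward a starting point."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    grid = (1 - alphas) * subject + alphas * start   # subject toward starting point
    labels = np.array([predict(p) for p in grid])
    hits = np.nonzero(labels == expected)[0]         # grid points with the expected class
    return grid[hits[0]] if hits.size else start     # nearest such point to the subject

subject = np.array([0.2, 0.3])   # classified 0, but class 1 is expected
start = np.array([0.9, 0.9])     # reference point that has the expected class
print(continuous_counterfactual(subject, start, expected=1, predict=stub_predict))
```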
In some implementations, techniques described herein may include identifying text in a visually rich document and determining a sequence for the identified text. The techniques may include selecting a language model based at least in part on the identified text and the determined sequence. Moreover, the techniques may include assigning each word of the identified text to a respective token to generate textual features corresponding to the identified text. The techniques may include extracting visual features corresponding to the identified text. The techniques may include determining positional features for each word of the identified text. The techniques may include generating a graph representing the visually rich document, each node in the graph representing each of the visual features, textual features, and positional features of a respective word of the identified text. The techniques may include training a classifier on the graph to classify each respective word of the identified text.
Techniques for generating, simulating, and optimizing one or more provider-specific cloud-based architectures from a provider-independent architecture definition are disclosed. An architecture generator maps provider-independent service definitions to provider-specific service components for one or more specific cloud service providers. An architecture simulator simulates execution of a set of operations on the provider-specific cloud-based architectures to determine one or more performance and cost metrics. An architecture optimizer varies one or more design choices or parameters of a provider-specific service component to suggest which variant is optimal with respect to an optimization objective.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
70.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROTECTING AGAINST UNAUTHORIZED USE OF CERTIFICATE MANAGEMENT PROTOCOL (CMP) CLIENT IDENTITY PRIVATE KEYS AND PUBLIC KEY CERTIFICATES ASSOCIATED WITH NETWORK FUNCTIONS
A method for protecting against unauthorized use of CMP client identity private keys and CMP public key certificates associated with NFs includes receiving, by a CMP certificate authority (CA) proxy, a first CMP certificate request for renewing a security certificate associated with a first NF, the first CMP certificate request including a public key certificate associated with the first NF and being protected by a CMP client identity private key associated with the first NF. The method further includes determining that the first NF is registered with the NF repository function (NRF) and, in response to determining that the first NF is registered with the NRF, checking, by the CMP CA proxy, whether the first CMP certificate request includes an NRF-issued access token for the first NF, determining that the first CMP certificate request does not include the NRF-issued access token for the first NF, and, in response to that determination, performing a network security action regarding the first CMP certificate request.
G06F 21/33 - User authentication using certificates
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
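The proxy's decision flow in the CMP abstract above reduces to a registration check followed by a token check; a minimal sketch (the NRF lookup, request fields, and dispositions are hypothetical stand-ins, and the handling of a present token is assumed) might be:

```python
class StubNrf:
    """Hypothetical NRF registration lookup."""
    def __init__(self, registered_nfs):
        self.registered_nfs = set(registered_nfs)

    def is_registered(self, nf_instance_id: str) -> bool:
        return nf_instance_id in self.registered_nfs

def handle_cmp_renewal(request: dict, nrf: StubNrf) -> str:
    """Return the proxy's disposition for a certificate renewal request."""
    if not nrf.is_registered(request["nf_instance_id"]):
        return "forward"              # the token check applies only to registered NFs
    if "nrf_access_token" not in request:
        return "security_action"      # e.g. reject the request and raise an alarm
    return "forward"                  # token present: pass the request to the CA

nrf = StubNrf(registered_nfs={"nf-1"})
print(handle_cmp_renewal({"nf_instance_id": "nf-1"}, nrf))  # security_action
```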
Techniques for modeling Java source code in a symbolic description language are disclosed, including: obtaining a set of Java source code; determining that the set of Java source code includes a user-defined type; determining that the set of Java source code includes a loop; generating, based on the set of Java source code, a symbolic description language (SDL) model including an SDL representation of the user-defined type and an SDL representation of the loop.
Techniques for transforming Java source code using a symbolic description language are disclosed, including: obtaining a set of Java source code corresponding to a Java program; generating a symbolic description language (SDL) model of the set of Java source code; generating, based on the SDL model, a transformed program including at least one transformation relative to the Java program.
Systems, methods, and other embodiments associated with generating a stream of ML estimates from a stream of observations in real-time using a circular double buffer are described. In an example method, observations are received from the stream of observations. The observations are loaded in real time into a circular buffer. The circular buffer includes a first buffer and a second buffer that are configured together in a circular configuration. Estimates of what the observations are expected to be are generated by a machine learning model from the observations that are in the circular buffer. The generation of estimates alternates between generating the estimates from observations in the first buffer in parallel with loading the second buffer, and generating the estimates from observations in the second buffer in parallel with loading the first buffer. The estimates are written to the stream of estimates in real time upon generation.
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
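A compact single-threaded Python sketch of the circular double buffer described above (buffer size and the estimator are placeholders; in the described system, estimation on one buffer would run in parallel with loading the other):

```python
import numpy as np

class CircularDoubleBuffer:
    """Two fixed-size buffers in a ring: one loads while the other is estimated."""
    def __init__(self, size: int):
        self.buffers = [np.empty(size), np.empty(size)]
        self.active = 0    # index of the buffer currently being loaded
        self.count = 0

    def load(self, observation: float):
        """Store one observation; return a full buffer when one is ready."""
        self.buffers[self.active][self.count] = observation
        self.count += 1
        if self.count == len(self.buffers[self.active]):
            ready = self.buffers[self.active]
            self.active ^= 1           # swap: loading continues in the other buffer
            self.count = 0
            return ready
        return None

def estimate(batch: np.ndarray) -> np.ndarray:
    """Hypothetical ML stand-in producing estimates of expected observations."""
    return batch * 0.99

buf = CircularDoubleBuffer(size=4)
for obs in np.random.rand(12):
    full = buf.load(obs)
    if full is not None:
        print(estimate(full))   # estimates written to the output stream on generation
```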
74.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR USING OPTIMIZED TOKEN BUCKET ALGORITHM FOR INGRESS MESSAGE RATE LIMITING ACROSS DISTRIBUTED PRODUCER NETWORK FUNCTION (NF) APPLICATIONS
A method for using an optimized token bucket algorithm for ingress message rate limiting across distributed producer network function (NF) applications includes implementing a producer NF instance as distributed producer NF applications and implementing distributed ingress gateways (IGWs) for performing ingress message rate limiting for the distributed producer NF applications. The method further includes maintaining, for each of the distributed IGWs, a local token bucket for rate limiting of ingress service-based interface (SBI) request messages received by each of the distributed IGWs and maintaining a distributed token bucket for refilling the local token buckets. The method further includes receiving ingress SBI request messages at the distributed IGWs and consuming, by the distributed IGWs, tokens from the local token buckets to allow processing of the ingress SBI request messages by the distributed producer NF applications and refilling the local token buckets with tokens from the distributed token bucket when the number of tokens in a local token bucket falls below a threshold level.
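A single-process Python sketch of the two-tier bucket scheme (a lock-guarded object stands in for the distributed token bucket, and the bucket sizes and threshold are illustrative):

```python
import threading

class DistributedBucket:
    """Shared token pool that refills local buckets (the lock stands in for a
    distributed store shared by all IGWs)."""
    def __init__(self, tokens: int):
        self.tokens = tokens
        self.lock = threading.Lock()

    def draw(self, n: int) -> int:
        with self.lock:
            granted = min(n, self.tokens)
            self.tokens -= granted
            return granted

class LocalBucket:
    """Per-IGW bucket: consume locally, refill from the shared pool when low."""
    def __init__(self, shared: DistributedBucket, refill_size=10, threshold=3):
        self.shared, self.refill_size, self.threshold = shared, refill_size, threshold
        self.tokens = shared.draw(refill_size)

    def try_consume(self) -> bool:
        if self.tokens < self.threshold:            # below threshold: refill
            self.tokens += self.shared.draw(self.refill_size)
        if self.tokens > 0:
            self.tokens -= 1                        # token consumed: admit the request
            return True
        return False                                # rate-limit this ingress request

shared = DistributedBucket(tokens=100)
igws = [LocalBucket(shared) for _ in range(3)]
accepted = sum(igw.try_consume() for igw in igws for _ in range(50))
print(f"accepted {accepted} of 150 requests")       # capped near the shared pool size
```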
Techniques for multi-layer forecasting of computational workloads are disclosed. A system identifies a level of granularity associated with a request to forecast a computational workload for a particular entity. The system obtains attribute data of computational resources at the specified level of granularity. The system determines whether computational resources, not specified in the request, should be included in a workload forecast. The system applies a time-series forecast model to time-series data obtained from computational resources associated with the request. The system presents one or more workload forecasts for computational workloads associated with the request.
Systems, methods, and other embodiments associated with frequency-domain resampling of time series are described. An example method includes generating a power spectrum for a first time series signal that is sampled inconsistently with a target sampling rate. Prominent frequencies are selected from the power spectrum. Sets of first phase factors that map the prominent frequencies to a frequency domain at first time points are generated. Coefficients are identified that relate the sets of first phase factors to values of the first time series signal at the first time points. Sets of second phase factors that map the prominent frequencies to a frequency domain at second time points are generated. A second time series signal that is resampled at the target sampling rate is generated by generating new values at the second time points from the coefficients and sets of second phase factors.
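One way to realize this pipeline in numpy is sketched below. The spectrum step uses a plain FFT, which assumes roughly uniform spacing (an irregular-sampling periodogram such as Lomb-Scargle would match truly inconsistent sampling more closely), and the number of prominent frequencies is an arbitrary choice:

```python
import numpy as np

def resample(t_old, values, t_new, n_freqs=3):
    """Resample by fitting prominent sinusoids and evaluating them at new times."""
    # 1. Power spectrum of the signal on its original time base.
    dt = np.median(np.diff(t_old))
    freqs = np.fft.rfftfreq(len(values), dt)
    power = np.abs(np.fft.rfft(values - values.mean())) ** 2
    prominent = freqs[np.argsort(power)[-n_freqs:]]          # most prominent frequencies

    # 2. Phase factors mapping the prominent frequencies to given time points.
    def phase_factors(t):
        cols = [np.ones_like(t)]
        for f in prominent:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        return np.column_stack(cols)

    # 3. Coefficients relating the first phase factors to the observed values.
    coeffs, *_ = np.linalg.lstsq(phase_factors(t_old), values, rcond=None)

    # 4. New values at the second time points from coefficients and new phase factors.
    return phase_factors(t_new) @ coeffs

t_old = np.sort(np.random.uniform(0, 10, 80))           # sampled inconsistently
x = np.sin(2 * np.pi * 0.5 * t_old) + 0.1 * np.random.randn(80)
resampled = resample(t_old, x, np.arange(0, 10, 0.1))   # target sampling rate
```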
Techniques for presenting a user with instructions for completing tasks based on monitoring images of user actions are disclosed. A system monitors user actions to identify a next operation in a set of operations to present to a user. The system presents to the user instructions for completing the next operation. The system monitors user actions and may also monitor a manufacturing component status or operating equipment status to determine whether an operation has been completed. The system may reorder a sequence of operations for a particular task based on user input, on identifying a different sequence of operations associated with a superior task execution rating, or on both.
Techniques are disclosed herein for extending a cloud service's reach into on- or off-premises environments and other cloud platforms to enable migration and multi-cloud use cases. In one aspect, a computer-implemented method is provided that includes deploying a remote agent appliance with a discovery plugin in an external environment of a user, creating an asset source specifying a location of the external environment from which external assets and associated asset metadata should be discovered, generating a discovery job to retrieve the asset metadata for the external assets discovered within the asset source, executing, using the discovery plugin, the discovery job to discover and retrieve the external assets and the associated asset metadata within the asset source, and providing a collection of assets that includes the asset metadata for the external assets as at least part of an inventory to the user.
A secure private network connectivity system (SNCS) within a cloud service provider infrastructure (CSPI) is described that provides secure private network connectivity between external resources residing in a customer's on-premise environment and the customer's resources residing in the cloud. The SNCS provides secure private bi-directional network connectivity between external resources residing in a customer's external site representation and resources and services residing in the customer's VCN in the cloud without a user (e.g., an administrator) of the enterprise having to explicitly configure the external resources, advertise routes or set up site-to-site network connectivity. The SNCS provides a highly performant, scalable, and highly available site-to-site network connection for processing network traffic between a customer's on-premise environment and the CSPI by implementing a robust infrastructure of network elements and computing nodes that are used to provide the secure site-to-site network connectivity.
Techniques are provided for automated migration replication. A method can include creating a volume group including an initial snapshot of a virtual machine (VM) residing in an initial environment and in an initial configuration (VM1). The method can include generating a terraform stack based on the initial snapshot, execution of which in an environment causes replication of VM1 in that environment. The method can include providing the terraform stack to a user, generating a subsequent snapshot of the VM in a subsequent configuration (VM2), generating a delta file characterizing the difference between the initial snapshot stored in the volume group and the subsequent snapshot, generating a delta terraform stack based on the delta file, wherein execution of the delta terraform stack causes previously replicated VM1 to update to replicate VM2, and providing the delta terraform stack to the user.
Systems, methods, and other embodiments associated with detecting impairment using a vibration fingerprint that characterizes gait dynamics are described. An example method includes receiving measurements of a gait of a being from a sensor. The measurements of the gait are converted into a time series of observations for each frequency bin in a set of frequency bins. A time series of residuals is generated for each frequency bin of the set by pointwise subtraction between the time series of observations and a time series of references for that frequency bin. An impairment metric is generated based on the time series of residuals. The impairment metric is compared to a threshold for the impairment. In response to the impairment metric satisfying the threshold, the being is indicated to be impaired.
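The residual computation lends itself to a very short sketch (the array shapes, the aggregation into a single metric, and the threshold value are all assumptions; the abstract leaves them open):

```python
import numpy as np

def impairment_metric(observed: np.ndarray, references: np.ndarray) -> float:
    """observed, references: (bins, time) arrays, one time series per frequency bin.
    Residuals are pointwise differences; here they aggregate to a mean |residual|."""
    residuals = observed - references
    return float(np.abs(residuals).mean())

THRESHOLD = 0.25                       # hypothetical cutoff calibrated on unimpaired gait
observed = np.random.rand(8, 100)      # stand-in for sensor-derived observations
references = np.random.rand(8, 100)    # stand-in for ML reference estimates
if impairment_metric(observed, references) >= THRESHOLD:
    print("impairment indicated")
```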
A novel overlay network DDoS mitigation system (ONDMS) is described for performing DDoS attack mitigation in a virtual network environment. Network traffic received by network resources in overlay networks is monitored. When a potential DDoS attack is detected, ONDMS may initiate a protected mode for a network resource. This may involve creating one or more shadow VNICs for the network resource being protected. While in protected mode, as a result of the one or more shadow VNICs, packets that would otherwise be received by the network resource being protected are instead redirected to one or more alternative destinations (e.g., to a DDoS scrubber system within ONDMS) that are configured to filter and analyze the packets and take appropriate mitigation actions, as needed. This protects the network resource being protected from the potential DDoS attack.
Techniques are provided (e.g., a method, a system, non-transitory computer-readable medium storing code or instructions executable by one or more processors) herein for migration planning, assessment, and launch. A method includes identifying at least one asset in a source environment, adding the asset to a migration project, the migration project designating the source environment and a destination environment for replication of the asset, receiving a request for generation of a recommended shape, the recommended shape designating a shape of the replicated asset in the destination environment, and generating the recommended shape based on metadata characterizing attributes of the source environment, a replication strategy provided by a user, and rules designating compatibility requirements for the recommended shape. In some embodiments, the metadata characterizes at least a central processing unit (CPU) related attribute and a network related attribute of the source environment.
Techniques are described herein for merging multiple smart card application files into a single, consolidated file that may be used by a smart card runtime environment to execute multiple applications. The techniques may reduce the load size of installed application code on a smart card by bundling applications and libraries together into an optimized file. As a result, smart card platforms may have more space available to execute the applications at runtime and/or to install additional applications. Embodiments herein may further provide flexibility on defining access controls over resources for which the code is not known. When application files are merged, packages and libraries that were previously public may be made private within the merged application file to restrict external access to unknown code in the bundle.
According to another aspect of the subject matter described herein, a method includes receiving or generating a service-based interface (SBI) request message. The method further includes identifying a next-hop network function (NF) of the SBI request message. The method further includes determining, from a registered profile of the next-hop NF, whether the next-hop NF supports handling of encrypted SBI request message parameters. The method further includes, in response to determining that the next-hop NF supports handling of encrypted SBI request message parameters: encrypting selected SBI request message parameters; adding one or more headers to the SBI request message, or updating one or more headers in the SBI request message, to facilitate identification and decryption of the encrypted SBI request message parameters; and transmitting the SBI request message to the next-hop NF.
Methods, systems, and computer readable media for detecting stolen access tokens are disclosed. One example method for detecting stolen access tokens comprises: at a network function (NF) comprising at least one processor: receiving, via a transport layer security (TLS) connection and from a sender, a service request comprising an access token, wherein the access token includes ownership information indicating a TLS parameter for verifying an owner of the access token; determining, using the ownership information of the access token and TLS information in a TLS certificate obtained from the sender, whether the ownership information and the TLS information match; and in response to determining that the ownership information and the TLS information do not match, rejecting the service request.
G06F 21/33 - User authentication using certificates
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
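The match test at the core of the stolen-token check above can be sketched as follows (field names are illustrative; the abstract only requires that a TLS parameter bound into the token match what the TLS layer observed from the sender):

```python
def ownership_matches(access_token: dict, tls_cert: dict) -> bool:
    """Compare the token's ownership claim against the sender's TLS certificate."""
    return access_token.get("cnf_tls_param") == tls_cert.get("tls_param")

def handle_service_request(request: dict, tls_cert: dict) -> str:
    token = request["access_token"]
    if not ownership_matches(token, tls_cert):
        raise PermissionError("ownership/TLS mismatch: rejecting service request")
    return "process request"

# A token minted for one TLS identity is only honored over that identity.
request = {"access_token": {"cnf_tls_param": "cert-fingerprint-A"}}
print(handle_service_request(request, {"tls_param": "cert-fingerprint-A"}))  # processed
```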
Disclosed techniques relate to orchestrating power consumption reductions across a number of hosts. Power consumption of power-drawing devices (e.g., hosts, servers, etc.) may be monitored with respect to a power threshold. When the current power consumption corresponding to those devices breaches the power threshold, or at any suitable time, the system may identify a set of reduction actions configured to reduce aggregate power consumption. The power threshold may be updated dynamically based on the operational status of related systems and environmental factors. A number of response levels may be utilized, each having an association to a corresponding set of reduction actions. The impact to customers, hosts, and/or workloads can be computed at run time based on current conditions and workloads, and a particular response level can be selected based on the computed impact. These techniques enable a sufficient, but least impactful response to be employed.
Disclosed techniques relate to orchestrating power consumption reductions across a number of hosts. A number of response levels may be utilized, each having an association to a corresponding set of reduction actions. The impact to customers, hosts, and/or workloads can be computed at run time based on current and/or predicted conditions and workloads, and a particular response level can be selected based on the computed impact. These techniques enable a sufficient, but least impactful response to be employed.
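Selecting the "sufficient, but least impactful" response described in the two abstracts above might be sketched like this (the level definitions, reduction estimates, and units are invented for illustration):

```python
def select_response_level(levels: list[dict], consumption: float, threshold: float):
    """Pick the lowest-impact level whose actions shed at least the deficit.
    `levels` must be ordered by ascending impact."""
    deficit = consumption - threshold
    if deficit <= 0:
        return None                        # threshold not breached: no action
    for level in levels:                   # least impactful sufficient response
        if level["estimated_reduction_kw"] >= deficit:
            return level
    return levels[-1]                      # fall back to the strongest response

levels = [
    {"name": "cap_gpu_clocks",    "estimated_reduction_kw": 5,  "impact": "low"},
    {"name": "pause_batch_jobs",  "estimated_reduction_kw": 15, "impact": "medium"},
    {"name": "evict_preemptible", "estimated_reduction_kw": 40, "impact": "high"},
]
print(select_response_level(levels, consumption=112.0, threshold=100.0))
# pause_batch_jobs: sheds the 12 kW deficit with the least customer impact
```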
Techniques for detecting and remediating anomalous intervals in time-series data of a monitored device are disclosed. A system trains a machine learning model on a combination of real data obtained from the monitored device and false data generated by adding noise to the real data. The model predicts operating values for the device at individual intervals of a time-series data set. The system identifies anomalies in the time-series data based on differences between the predicted values and the real values. If the difference between a predicted value generated by the machine learning model and the real value exceeds a threshold, the system identifies a particular data point, such as a meter reading, as anomalous. The system ranks anomalies and performs remediation operations based on the ranking.
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
Disclosed techniques relate to orchestrating power consumption reductions across a number of hosts. A current value for an aggregate power threshold of a plurality of hosts may be identified. During a first time period, an aggregate power consumption of the plurality of hosts may be managed using the current value for the aggregate power threshold. A triggering event indicating a modification to the aggregate power threshold is needed may be detected. A new value for the aggregate power threshold may be determined based on the triggering event. During a second time period, the aggregate power consumption of the plurality of hosts may be managed using the new value for the aggregate power threshold.
A plurality of GPU clusters are communicatively coupled with one another via a plurality of network devices arranged in a hierarchical structure, wherein the GPU clusters include at least a first GPU cluster operating at a first speed and a second GPU cluster operating at a second speed that is different than the first speed. A routing policy is configured for each network device, wherein the configuring includes establishing a mapping of each incoming port-link of the network device to a unique outgoing port-link of the network device. For a packet transmitted by a GPU of a host machine and received by a first network device, an incoming port-link of the first network device on which the packet was received is determined and, based on the configuring, an outgoing port-link that corresponds to the incoming port-link is identified. The packet is forwarded on the outgoing port-link of the network device.
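The routing policy amounts to a bijection from incoming to outgoing port-links; a toy sketch follows (the rotation used to keep the mapping one-to-one is an arbitrary illustrative choice, not the patent's assignment rule):

```python
def configure_routing_policy(port_links: list[str]) -> dict[str, str]:
    """Map each incoming port-link to a unique outgoing port-link (bijective)."""
    n = len(port_links)
    return {port_links[i]: port_links[(i + 1) % n] for i in range(n)}

def forward(policy: dict[str, str], incoming: str) -> str:
    """A packet is forwarded on the outgoing port-link mapped to its ingress link."""
    return policy[incoming]

policy = configure_routing_policy(["p0", "p1", "p2", "p3"])
print(forward(policy, "p2"))   # deterministic egress, here p3
```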
Described herein is a network fabric including a plurality of graphical processing unit (GPU) clusters that are communicatively coupled with one another via a plurality of switches arranged in a hierarchical structure including a first tier of switches, a second tier of switches, and a third tier of switches. One or more switches are selected from the third tier of switches to form a set of target switches, where each target switch receives address information of each GPU included in the plurality of GPU clusters. Each target switch generates a plurality of sets of address information by filtering received address information based on a condition and transmits the plurality of sets of address information to each switch included in the first tier of switches, wherein each receiving switch stores a subset of the plurality of sets of address information in accordance with the condition.
Described herein is a network fabric including a plurality of graphical processing unit (GPU) clusters. The plurality of GPU clusters includes at least a first GPU cluster operating at a first speed and a second GPU cluster operating at a second speed that is different than the first speed. The network fabric includes a plurality of blocks, wherein each block includes: (a) one or more racks that host a GPU cluster, and (b) a plurality of switches arranged in a hierarchical structure that communicatively couple the block to other blocks included in the network fabric. Responsive to receiving a request to execute a workload, one or more GPUs from the plurality of GPU clusters are allocated to execute the workload.
Each host machine of a plurality of host machines stores hierarchical locality information for the host machine that identifies at least a rack comprising the host machine and a block, of a plurality of blocks, hosting the rack. The host machine is associated with one or more graphical processing units (GPUs); GPUs included in a first block operate at a first speed and GPUs included in a second block operate at a second speed that is different than the first speed. Responsive to receiving a request requesting execution of a workload, one or more host machines are identified as being available for executing the workload, and the hierarchical locality information and linkage information of the one or more host machines are provided in response to the request.
Systems, methods, and other embodiments associated with computer deepfake detection are described. In one embodiment, a method includes converting audio-visual content of a person delivering a speech into a set of time series signals. Residual time series signals of residuals that indicate an extent to which the time series signals differ from machine learning estimates of authentic delivery of the speech by the person are generated. Residual values from one synchronous observation of the residual time series signals are placed into an array of residual values for a point in time. A sequential analysis of the residual values of the array is performed to detect an anomaly in the residual values for the point in time. In response to detection of the anomaly, an alert that deepfake content is detected in the audio-visual content is generated.
Herein is database query acceleration from dynamic discovery of whether contents of a persistent column can be stored in an accelerated representation in storage-side memory. In an embodiment, based on data type discovery, a storage server detects that column values in a persistent column have a particular data type. Based on storage-side metadata including a frequency of access of the persistent column as an offload input column for offload computation requests on a certain range of memory addresses, the storage server autonomously decides to generate and store, in storage-side memory in the storage server, an accelerated representation of the persistent column that is based on the particular data type. The storage server receives a request to perform an offload computation for the offload input column. Based on the accelerated representation of the persistent column, execution of the offload computation is accelerated.
Embodiments permit scope-limited access to a user's secure information using non-fungible tokens (NFTs). A user can register with a secure information manager and control the scope with which the user's secure information is shared. For example, the user can permit a vetted entity access to the user's secure information via a portable access point. The user can select scope definitions that control how the user's secure information is shared with the vetted entity. The vetted entity can scan the user's portable access point and request a credential. The credential can be an NFT that is assigned access privileges corresponding to the user's selections. The vetted entity can then issue data access request(s) using the credential. The secure information manager can permit the vetted entity scope-limited access to the user's secure information that corresponds to the access privileges assigned to the NFT.
In one or more embodiments, a software service allows software providers to implement machine learning (ML) features into products offered by the software providers. Each ML feature may be referred to as an encapsulated ML application, which may be defined and maintained in a central repository, while also being provisioned for each user of the software provider on an as-needed basis. Advantageously, embodiments allow for a central definition for an ML application that encapsulates data science and processing capabilities and routines of the software provider. This central ML application delivers a ML deployment pipeline template that may be replicated multiple times as separate, tailored runtime pipeline instances on a per-user basis. Each runtime pipeline instance accounts for differences in the specific data of each user, resulting in user-specific ML models and predictions based on the same central ML application.
Techniques for lazy compaction are disclosed, including: selecting, by a garbage collector, multiple regions of a memory for inclusion in a relocation set; populating, by the garbage collector, a lazy free list (LFL) with the multiple regions selected for inclusion in the relocation set; subsequent to populating the LFL: determining, by an allocator, that an ordinary free list managed by the garbage collector is depleted; responsive to determining that the ordinary free list is depleted: selecting a region in the LFL; executing one or more load barriers associated respectively with one or more objects marked as live in the region, each respective load barrier being configured to relocate the associated object from the region if the associated object is still live; subsequent to executing the one or more load barriers: allocating the region.
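A toy model of the allocator-side behavior described above (object relocation is reduced to list removal and all structure is invented; it only illustrates deferring relocation work until the ordinary free list is depleted):

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    live_objects: list = field(default_factory=list)   # objects still marked live

class LazyCompactor:
    """Regions in the relocation set sit on a lazy free list (LFL); the allocator
    pays the relocation cost only when the ordinary free list runs dry."""
    def __init__(self, ordinary_free: list, lfl: list):
        self.ordinary_free = ordinary_free
        self.lfl = lfl

    def allocate_region(self) -> Region:
        if self.ordinary_free:
            return self.ordinary_free.pop()
        region = self.lfl.pop()                  # ordinary list depleted: take from LFL
        for obj in list(region.live_objects):    # load barrier per live object:
            self.relocate(obj, region)           # relocate it out if still live
        return region                            # region is now empty and allocatable

    def relocate(self, obj, region):
        region.live_objects.remove(obj)          # stand-in for copying the object out

gc = LazyCompactor(ordinary_free=[], lfl=[Region(live_objects=["a", "b"])])
print(gc.allocate_region().live_objects)         # []: evacuated before reuse
```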