Techniques for disintermediating a network path between a source and a destination are described. In an example, the source sends a first packet destined to a destination. A network node on the network path between the source and the destination performs a network operation on this packet and generates a set of instructions indicating the network operation and parameters used for performing the network operations. This set of instructions is sent to the source as a flow update. When the source needs to send a second packet to the destination, the source applies the instructions to the second packet. As such, a similar network operation is performed on the second packet at the source, thereby avoiding the need to send the second packet on the same network path that includes the network node. Accordingly, the second packet is sent on a different network path that bypasses the network node.
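As an illustrative sketch (not part of the disclosed claims), the following Python models how a source might cache a flow update and replay the indicated network operation locally; the FlowUpdate layout, field names, and the "encap" operation are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowUpdate:
    """Instructions a network node reports back to the source (hypothetical layout)."""
    flow_key: tuple      # (src, dst) identifying the flow
    operation: str       # e.g. "encap"
    params: dict         # parameters the node used, e.g. outer header fields

flow_table: dict[tuple, FlowUpdate] = {}

def on_flow_update(update: FlowUpdate) -> None:
    # Cache the node's instructions so later packets can be rewritten locally.
    flow_table[update.flow_key] = update

def send(packet: dict) -> dict:
    key = (packet["src"], packet["dst"])
    update = flow_table.get(key)
    if update is None:
        return packet  # first packet: traverses the path through the network node
    if update.operation == "encap":
        # Perform the same operation at the source so the packet can take a
        # path that bypasses the network node.
        return {"outer": dict(update.params), "inner": packet}
    return packet

on_flow_update(FlowUpdate(("10.0.0.1", "10.0.0.9"), "encap",
                          {"outer_src": "192.0.2.1", "outer_dst": "192.0.2.9"}))
print(send({"src": "10.0.0.1", "dst": "10.0.0.9", "payload": "..."}))
```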
A network environment comprises a plurality of host machines that are communicatively coupled to each other via a network fabric comprising a plurality of switches that in turn include a plurality of ports. Each host machine comprises one or more GPUs that execute customer workloads. Described herein are approaches for handling network overlay encapsulation without adversely impacting the performance of workloads executed on the GPU clusters.
A network environment comprises a plurality of host machines that are coupled to each other via a network fabric comprising a plurality of switches that in turn include a plurality of ports. Each host machine comprises one or more GPUs. A first subset of ports from the plurality of ports is associated with a first virtual plane, wherein the first virtual plane identifies a first collection of resources to be used for communicating packets from/to host machines associated with the first virtual plane. A second subset of ports is associated with a second virtual plane that is different from the first virtual plane. A first host machine and a second host machine are associated with the first virtual plane and the second virtual plane, respectively. A packet is communicated from the first host machine to the second host machine using ports from the first subset of ports and the second subset of ports.
A network environment comprises a plurality of host machines that are communicatively coupled to each other via a network fabric comprising a plurality of switches that in turn include a plurality of ports. Each host machine comprises one or more GPUs. A first subset of ports from the plurality of ports is associated with a first virtual plane, wherein the first virtual plane identifies a first collection of resources to be used for communicating packets from and to host machines associated with the first virtual plane. A second subset of ports is associated with a second virtual plane that is different from the first virtual plane. A first host machine and a second host machine are associated with the first virtual plane. A packet originating at the first host machine and destined for the second host machine is communicated using only ports from the first subset of ports.
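A minimal sketch of the plane-restricted path selection described above; the port names, plane labels, and dict-based bookkeeping are illustrative assumptions, not the disclosed data structures.

```python
# Hypothetical port/plane bookkeeping: each switch port is tagged with the
# virtual plane it serves, and path selection filters on that tag.
ports = {
    "sw1/p1": "plane-1", "sw1/p2": "plane-2",
    "sw2/p1": "plane-1", "sw2/p2": "plane-2",
}
host_plane = {"host-a": "plane-1", "host-b": "plane-1"}

def eligible_ports(src_host: str, dst_host: str) -> list[str]:
    if host_plane[src_host] != host_plane[dst_host]:
        raise ValueError("hosts are on different virtual planes")
    plane = host_plane[src_host]
    # Only ports associated with the shared plane may carry the packet.
    return [p for p, pl in ports.items() if pl == plane]

assert eligible_ports("host-a", "host-b") == ["sw1/p1", "sw2/p1"]
```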
The present disclosure relates generally to establishing a connection between a client and an endpoint in a manner that reduces network latency. In an example, a network layer proxy receives a request from a client to establish a connection to an endpoint, the request including endpoint information. The network layer proxy sends the endpoint information to an application layer proxy using a connection-less protocol. Thereafter, the network layer proxy receives, from the application layer proxy, a network address of an endpoint selected by the application layer proxy based on the endpoint information and application layer information. The network layer proxy sends a response to the client such that a connection is established to the endpoint using a connection-based protocol and such that the connection bypasses the application layer proxy.
Discussed herein is a mechanism for constructing a network fabric for a cluster of GPUs. A plurality of sets of GPUs are created, wherein each set of GPUs is created by selecting one GPU from each host machine in the plurality of host machines. Each set of GPUs is coupled to a different group of switches in a plurality of groups of switches. The coupling includes: (i) coupling each GPU in the set of GPUs to a unique ingress port of a first switch included in a corresponding group of switches that is associated with the set of GPUs, and (ii) virtually mapping each ingress port of the first switch to a unique egress port of a plurality of egress ports of the first switch. A packet originating at a source GPU and destined for a destination GPU is communicated via the network fabric.
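A minimal sketch of the described wiring loop, assuming H host machines with G GPUs each and G switch groups; the identifier scheme ("hostN/gpuM", "groupN/sw0") is hypothetical.

```python
# Steps (i) and (ii) above: couple each GPU set to its group's first switch,
# then virtually map each ingress port to a unique egress port.
H, G = 4, 2  # 4 hosts, 2 GPUs per host -> 2 GPU sets, 2 switch groups

gpu_sets = [[f"host{h}/gpu{g}" for h in range(H)] for g in range(G)]

ingress_map = {}   # GPU -> (first switch, ingress port)
egress_map = {}    # (switch, ingress port) -> (switch, egress port)

for g, gpu_set in enumerate(gpu_sets):
    first_switch = f"group{g}/sw0"
    for i, gpu in enumerate(gpu_set):
        ingress_map[gpu] = (first_switch, f"in{i}")           # (i)
        egress_map[(first_switch, f"in{i}")] = (first_switch, f"out{i}")  # (ii)

print(ingress_map["host0/gpu1"], egress_map[ingress_map["host0/gpu1"]])
```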
Techniques for securely accessing a computer network are described. An access provider sends network access credentials to an access management device. Upon receiving the credentials, the access management device generates an image key that embeds the credentials. The access management device then presents the image key to a client device. The client device receives the image key and extracts the credentials from within the image key. The client device transmits the credentials to the access provider with an authentication request. Based on the credentials included with the authentication request, the access provider attempts to authenticate the client device. If authentication is successful, the access provider grants the client device access to the wireless network and resources accessible via the wireless network.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
Embodiments optimize hotel room reservations for a hotel. For a first day of a plurality of future days, embodiments automatically determine, based on an objective function, an overbooking limit for each category of hotel rooms for the hotel, where the hotel includes a plurality of different room categories. Embodiments receive a first reservation request for the first day for a first category room. When the determined overbooking limit for the first category room has not been reached, embodiments accept the first reservation request. When the accepted first reservation request is checked in to the hotel on the first day, embodiments automatically determine, based on the objective function, whether to reject the first reservation request, accept the first reservation request, or upgrade the first reservation request to a higher category room.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
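A minimal sketch of the overbooking decision described in the abstract above; the objective function here (expected revenue minus expected walk cost) is a stand-in assumption, not the disclosed objective.

```python
# Search for the overbooking limit that maximizes a simple expected-value
# objective for one room category; all parameters are illustrative.
def overbooking_limit(capacity: int, p_show: float, rate: float, walk_cost: float) -> int:
    best_limit, best_value = capacity, float("-inf")
    for limit in range(capacity, 2 * capacity + 1):
        expected_shows = limit * p_show
        overflow = max(0.0, expected_shows - capacity)
        value = min(expected_shows, capacity) * rate - overflow * walk_cost
        if value > best_value:
            best_limit, best_value = limit, value
    return best_limit

def accept(bookings: int, capacity: int, p_show: float, rate: float, walk: float) -> bool:
    # Accept a reservation request while the limit has not been reached.
    return bookings < overbooking_limit(capacity, p_show, rate, walk)

# e.g. 100 rooms in the category, 90% show-up rate
print(overbooking_limit(100, 0.9, 150.0, 400.0))  # -> 111
```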
Techniques are disclosed to establish trust in a cluster of edge devices. An edge device cloud service can associate a first cloud-computing edge device with a fleet of cloud-computing edge devices and provision the first cloud-computing edge device with a master encryption key. The edge device cloud service can associate a second cloud-computing edge device with the fleet and provision the second cloud-computing edge device with the master encryption key and the first public encryption key. The first cloud-computing edge device can receive, from the second cloud-computing edge device, encrypted message data comprising the second public encryption key. The first cloud-computing edge device can decrypt the encrypted message data using the master encryption key stored in the first key store and update the first key store with the second public encryption key.
Systems, methods, and other embodiments associated with clustering of time series signals based on frequency domain analysis are described. In one embodiment, an example method includes accessing time series signals to be separated into clusters. The example method also includes determining similarity in the frequency domain among the time series signals. The example method further includes extracting a cluster of similar time series signals from the time series signals based on the similarity in the frequency domain. The example method then includes training a machine learning model to detect anomalies based on the cluster.
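A minimal sketch of frequency-domain similarity clustering, assuming equal-length signals; the normalized-spectrum correlation and the 0.9 threshold rule are illustrative choices, not the disclosed method.

```python
import numpy as np

def spectrum(signal: np.ndarray) -> np.ndarray:
    # Unit-norm magnitude spectrum, so phase shifts do not affect similarity.
    mag = np.abs(np.fft.rfft(signal - signal.mean()))
    return mag / (np.linalg.norm(mag) + 1e-12)

def cluster(signals: list[np.ndarray], threshold: float = 0.9) -> list[list[int]]:
    spectra = [spectrum(s) for s in signals]
    clusters: list[list[int]] = []
    for i, sp in enumerate(spectra):
        for members in clusters:
            # Join the first cluster whose representative is spectrally similar.
            if float(sp @ spectra[members[0]]) >= threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters

t = np.linspace(0, 1, 512)
sigs = [np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t + 1.0), np.sin(2 * np.pi * 40 * t)]
print(cluster(sigs))  # the two 5 Hz signals land in one cluster: [[0, 1], [2]]
```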
Technology is disclosed herein for generating a visualization of data based on an AI-generated data object. In an implementation, an application, such as a data analytics application, receives a natural language input from a user which relates to a table of data in the application. The table includes data organized according to table columns. The application generates a prompt for a large language model (LLM) service which includes the names of the table columns. The prompt tasks the LLM service with selecting columns for the visualization based on the natural language input and the names of the table columns. The prompt tasks the LLM service with generating a response in a JSON format. The application populates the JSON object, which describes the visualization, according to the response. The application then creates the visualization based on the JSON object.
A system is disclosed that includes capabilities by which a nested sub-resource residing in a service tenancy can access a customer-owned resource residing in a customer tenancy without the use of a cross-tenant policy. The disclosed system provides the ability for a nested sub-resource residing in a service tenancy to obtain the resource principal identity of a higher-level resource residing in the customer tenancy and use the identity of the higher-level resource to access a customer-owned resource residing in the customer tenancy. Using the resource principal identity of its higher-level resource, the sub-resource can access a customer-owned resource that resides in a customer tenancy in a seamless way without having to write a cross-tenancy policy statement that provides permission to the sub-resource to access the customer-owned resource.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
Techniques are disclosed herein for implementing digital assistants using generative artificial intelligence. An input prompt comprising a natural language utterance and candidate agents and associated actions can be constructed. An execution plan can be generated using a first generative artificial intelligence model based on the input prompt. The execution plan can be executed to perform actions included in the execution plan using agents indicated by the execution plan. A response to the natural language utterance can be generated by a second generative artificial intelligence model using one or more outputs from executing the execution plan.
Techniques are disclosed for storage and retrieval mechanisms for knowledge artifacts acquired and applicable across conversations to enrich user interactions with a digital assistant. In one aspect, a method includes receiving a natural language utterance from a user during a session between the user and the digital assistant and obtaining a topic context instance for the utterance. The obtaining includes executing a search, determining whether the utterance satisfies a threshold of similarity with one or more topics, identifying the topic context instance associated with the topics, and associating the utterance with the topic context instance. A first generative artificial intelligence model can then be used to generate a list of executable actions. An execution plan is then created, and the topic context instance is updated with the execution plan. The execution plan is then executed, and an output or communication derived from the output is sent to the user.
Operations of a certificate bundle validation service may include receiving a first certificate bundle that includes a first set of one or more digital certificates, and a digital signature, associated with the first certificate bundle; determining, using a public key of an asymmetric key pair associated with a second set of one or more digital certificates, that the digital signature is generated using a private key of the asymmetric key pair; and responsive to determining that the digital signature is generated using the private key, storing the first certificate bundle in a certificate repository as a trusted certificate bundle.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
Techniques are described for performing packet level data centric protection enforcement. Instead of being restricted to perimeter-based security and defining and creating rules that are difficult to maintain, techniques described herein allow users to create data-centric, intent-based policies that are enforced at different enforcement points within one or more networks. In some examples, a method comprises receiving a packet at an enforcement point (EP) within one or more networks that include a plurality of enforcement points (EPs); accessing enforcement data that indicates allowed communications between the EP and one or more other EPs, wherein the data are generated from a policy that specifies how traffic flows through the one or more networks and a determination of possible data movements between at least two EPs in the plurality of EPs; and enforcing the flow of the packet at the EP based on the data.
Techniques are disclosed for automatically generating prompts. A method comprises accessing first prompts, wherein each of the first prompts is a prompt for generating a portion of a SOAP note using a machine-learning model. For each respective first prompt of the first prompts: (i) using the respective first prompt to obtain a first result from a first machine-learning model, (ii) using the respective first prompt and the first result to obtain a second result from a second machine-learning model, the second result including an assessment of the first result, (iii) using the second result to obtain a third result from a third machine-learning model, the third result including a second prompt, (iv) setting the second prompt as the respective first prompt, (v) repeating steps (i)-(iv) a number of times to obtain a production prompt, (vi) adding the production prompt to a collection of prompts; and storing the collection of prompts.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
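A minimal sketch of the iterative prompt-refinement loop (i)-(v) from the abstract two entries above, with the three model calls stubbed out; `generate`, `critique`, and `rewrite` are hypothetical stand-ins for the first, second, and third machine-learning models.

```python
from typing import Callable

def refine_prompt(
    prompt: str,
    generate: Callable[[str], str],        # model 1: draft a SOAP note portion
    critique: Callable[[str, str], str],   # model 2: assess the draft
    rewrite: Callable[[str, str], str],    # model 3: produce an improved prompt
    rounds: int = 3,
) -> str:
    for _ in range(rounds):
        draft = generate(prompt)                 # (i) first result
        assessment = critique(prompt, draft)     # (ii) second result
        prompt = rewrite(prompt, assessment)     # (iii)-(iv) new prompt replaces old
    return prompt                                # (v) production prompt

# No-op stubs keep the sketch runnable; real model calls would replace them.
production_prompts = [
    refine_prompt(p, generate=str.strip,
                  critique=lambda p, d: "ok",
                  rewrite=lambda p, a: p)
    for p in ["Summarize the subjective findings...", "List the assessment..."]
]
print(len(production_prompts))  # collection of prompts to be stored
```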
Techniques for preparing data for high-precision absolute localization of a moving object along a trajectory are provided. In one technique, a sequence of points is stored, where each point corresponds to a different set of Cartesian coordinates. A curve is generated that approximates a line that passes through the sequence of points. Based on the curve, a set of points is generated on the curve, where the set of points is different than the sequence of points. New Cartesian coordinates are generated for each point in the set of points. After generating the new Cartesian coordinates, Cartesian coordinates of a position of a moving object are determined.
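A minimal sketch of fitting a curve through the stored points and generating a new, evenly spaced point set on it; the polynomial degree and arc-length resampling rule are illustrative assumptions rather than the disclosed procedure.

```python
import numpy as np

def resample_track(xs: np.ndarray, ys: np.ndarray, n_out: int) -> np.ndarray:
    t = np.linspace(0.0, 1.0, len(xs))
    # Approximate the line through the points with low-order polynomials.
    px = np.polynomial.Polynomial.fit(t, xs, deg=5)
    py = np.polynomial.Polynomial.fit(t, ys, deg=5)
    dense = np.linspace(0.0, 1.0, 50 * len(xs))
    pts = np.column_stack([px(dense), py(dense)])
    # Resample the curve at uniform arc length: a set of points different
    # from the stored sequence, with new Cartesian coordinates for each.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], n_out)
    return np.column_stack([np.interp(targets, s, pts[:, 0]),
                            np.interp(targets, s, pts[:, 1])])

track = resample_track(np.array([0, 1, 2, 3, 4, 5.0]),
                       np.array([0, 0.8, 1.1, 0.9, 0.2, -0.7]), n_out=20)
print(track.shape)  # (20, 2)
```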
Techniques are disclosed herein for routing an utterance to action for a digital assistant with generative artificial intelligence. An input query comprising particular data can be received from a user. An action and a set of input argument slots within a schema associated with the action can be identified based on the input query. The input argument slots can be filled by determining whether one or more parameters are derivable from the particular data and filling each input argument slot with a version of the parameters that conforms to the schema. An execution plan that comprises the action that includes the set of filled input argument slots can be sent to an execution engine configured to execute the action for generating a response to the input query.
In an embodiment, a computer generates a respective original inference from each of many records. Permuted values are selected for a feature from original values of the feature. Based on the permuted values for the feature, a permuted inference is generated from each record. Fairness and accuracy of the original and permuted inferences are measured. For each of many features, the computer measures a respective impact on fairness of a machine learning model, and a respective impact on accuracy of the machine learning model. A global explanation of the machine learning model is generated and presented based on, for multiple features, the impacts on fairness and accuracy. Based on the global explanation, an interactive indication to exclude or include a particular feature is received. The machine learning model is (re-)trained based on the interactive indication to exclude or include the particular feature, which may increase the fairness of the model.
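A minimal sketch of the per-feature permutation measurement described above; the fairness metric (demographic parity gap), the stand-in classifier, and the data layout are assumptions for illustration.

```python
import numpy as np

def permutation_impacts(model, X: np.ndarray, y: np.ndarray, group: np.ndarray):
    rng = np.random.default_rng(0)

    def accuracy(pred): return float((pred == y).mean())
    def parity_gap(pred):  # |P(pred=1 | group 0) - P(pred=1 | group 1)|
        return abs(float(pred[group == 0].mean()) - float(pred[group == 1].mean()))

    base = model.predict(X)  # original inferences
    impacts = {}
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # permuted values for feature j
        pred = model.predict(Xp)               # permuted inferences
        impacts[j] = {
            "accuracy_impact": accuracy(base) - accuracy(pred),
            "fairness_impact": parity_gap(pred) - parity_gap(base),
        }
    return impacts  # basis for the global explanation presented to the user

class ThresholdModel:                          # trivial stand-in classifier
    def predict(self, X): return (X[:, 0] > 0).astype(int)

X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
group = (X[:, 1] > 0).astype(int)
print(permutation_impacts(ThresholdModel(), X, y, group))
```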
In accordance with an embodiment, described herein is a system and method for providing a chat-to-visualization user interface for use with a data analytics workbook assistant. A data analytics system or environment can be integrated with a digital assistant system or environment which provides natural language processing, for purposes of leveraging a user's text or speech input while generating, modifying, or interacting with data visualizations. The user can interact with the system using a chat-like conversation. Based upon a received input from the user as part of the conversation, the system can generate data comprising a resolved intent and entities, and locate an appropriate dataset. The system supports complex follow-up interactions or questions that pertain to previous responses combined with the curated data. The user can use modifiers to further enhance their questioning or analysis of the data, and incorporate resulting insights into their visualization project.
The present disclosure relates to secure deployment of model weights from a generative artificial intelligence (GenAI) platform to a cloud service. The method includes accessing the model metadata and a set of weights of a GenAI model associated with a GenAI platform. These model weights may be encrypted using a first encryption key that may be provided in the model metadata. These encrypted model weights may be decrypted by utilizing the first encryption key from the model metadata. Each key may be associated with a specific type of GenAI model. Before the model weights are moved from the GenAI platform cloud tenancy to cloud storage in the GenAI home region, the model weights may be encrypted again by utilizing a second encryption key. This encryption by the cloud may enable independent control over the sensitive information during transit and storage.
Techniques for a unified relational database framework for hybrid vector search are provided. In one technique, multiple documents are accessed and a vector table and a text table are generated. For each accessed document, data within the document is converted to plaintext, multiple chunks are generated based on the plaintext, an embedding model generates a vector for each of the chunks, the vectors are stored in the vector table along with a document identifier that identifies the accessed document, tokens are generated based on the plaintext, and the tokens are stored in the text table along with the document identifier. Such processing may be performed in a database system in response to a single database statement to create a hybrid index. In response to receiving a hybrid query, a vector query and a text query are generated and executed and the respective results may be combined.
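An in-memory sketch of the two-table hybrid index and the combined query; the embedding (character trigrams), tokenization (word sets), and weighted score fusion are illustrative stand-ins for what the database would do internally.

```python
import math

def embed(text: str) -> list[float]:
    # Toy trigram-hash embedding standing in for a real embedding model.
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

vector_table: list[tuple[str, list[float]]] = []   # (doc_id, chunk vector)
text_table: list[tuple[str, set[str]]] = []        # (doc_id, tokens)

def index(doc_id: str, plaintext: str, chunk_size: int = 200) -> None:
    for i in range(0, len(plaintext), chunk_size):  # chunk, embed, store
        vector_table.append((doc_id, embed(plaintext[i:i + chunk_size])))
    text_table.append((doc_id, set(plaintext.lower().split())))

def hybrid_query(query: str, k: int = 5) -> list[tuple[str, float]]:
    qv, qt = embed(query), set(query.lower().split())
    scores: dict[str, float] = {}
    for doc_id, v in vector_table:                  # vector branch: best chunk
        sim = sum(a * b for a, b in zip(qv, v))
        scores[doc_id] = max(scores.get(doc_id, 0.0), 0.6 * sim)
    for doc_id, toks in text_table:                 # text branch: token overlap
        overlap = len(qt & toks) / (len(qt) or 1)
        scores[doc_id] = scores.get(doc_id, 0.0) + 0.4 * overlap
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

index("doc-1", "relational databases support vector search")
index("doc-2", "recipes for sourdough bread")
print(hybrid_query("vector search in databases"))
```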
Techniques are described for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different from the first assertion and corresponds to the first assertion.
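A minimal sketch of the correction loop; `llm`, `split_assertions`, and `verify_assertion` are hypothetical hooks for the model call, assertion extraction, and fact check, and the retry bound is an assumption.

```python
from typing import Callable

def correct_hallucinations(
    question: str,
    llm: Callable[[str], str],
    split_assertions: Callable[[str], list[str]],
    verify_assertion: Callable[[str], bool],
    max_rounds: int = 3,
) -> str:
    output = llm(question)  # first output
    for _ in range(max_rounds):
        false_claims = [a for a in split_assertions(output) if not verify_assertion(a)]
        if not false_claims:
            return output
        # Build a prompt indicating which assertions are false.
        prompt = (
            f"{question}\n\nYour previous answer contained false statements:\n"
            + "\n".join(f"- {claim}" for claim in false_claims)
            + "\nRewrite the answer with these statements corrected."
        )
        output = llm(prompt)  # second output replaces the false assertions
    return output
```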
Cyber-security techniques are described for monitoring a cloud environment and identifying potential problems, including malicious threats, to the monitored cloud environment using operational telemetry. Techniques are described for monitoring and collecting data related to reverse or recursive DNS (rDNS) traffic associated with a monitored cloud environment. The rDNS traffic includes rDNS requests originating from the cloud environment and responses to those requests received from DNS resolvers. This collected data is then analyzed to identify potential threats to the monitored cloud environment. The collected data may be analyzed to identify potential sources of threats and to identify one or more portions of the cloud environment that are the targets of the threats. The analysis may trigger alerts to be generated, actions to be performed (e.g., protective measures), reports to be generated, patterns to be recognized, etc.
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories or standardised directory access protocols; using domain name system [DNS]
Techniques are disclosed herein for configuring agents for use by digital assistants that use generative artificial intelligence. An agent may be in the form of a container that is configured to have one or more actions that can be executed by a digital assistant. The agent may be configured by initially defining specification parameters for the agent based on natural language input from a user. Configuration information for the one or more assets can be imported into the agent. One or more actions may then be defined for the agent based on importing of the configuration information, the natural language input from the user, or both. A specification document can be generated for the agent and can comprise various description metadata, such as agent, asset, or action metadata, or combinations thereof. The specification document may be stored in a data store that is communicatively coupled to the digital assistant.
Techniques are disclosed for stream orchestration for variable-length message streams, including routes specified using an implementation-independent stream orchestration language (SOL). In an example method, a computing system receives a variable-length message, the variable-length message including context information and a payload. The computing system determines, from the context information, routing information that identifies at least one consumer of the variable-length message. The computing system outputs the variable-length message to the consumer.
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for automatic SOAP note generation using task decomposition. A text transcript is accessed and segmented into portions. The text transcript can correspond to an interaction between a first entity and a second entity. Machine-learning model prompts are used to extract entities and facts for the respective portions and generate SOAP note sections based at least in part on the facts. A SOAP note is generated by combining the SOAP note sections. The SOAP note can be stored in a database in association with at least one of the first entity and the second entity.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
33.
EXECUTING AN EXECUTION PLAN WITH A DIGITAL ASSISTANT AND USING LARGE LANGUAGE MODELS
Techniques are disclosed herein for executing an execution plan for a digital assistant with generative artificial intelligence (genAI). A first genAI model can generate a list of executable actions based on an utterance provided by a user. An execution plan can be generated to include the executable actions. The execution plan can be executed by performing an iterative process for each of the executable actions. The iterative process can include identifying an action type, invoking one or more states, and executing, by the one or more states, the executable action using an asset to obtain an output. A second prompt can be generated based on the output obtained from executing each of the executable actions. A second genAI model can generate a response to the utterance based on the second prompt.
Techniques are disclosed for providing an authenticated model customization for a machine-learning model. A cloud service provider platform accesses a message including, at least, timestamp data and user identification data. A training group of data entities is identified based on the data in the message. A training dataset is determined based on the training group of data entities. A machine-learning model is modified based on the training dataset. The modified machine-learning model is provided during an authenticated network session associated with the user identification data. In some embodiments, the modification of the machine-learning model is removed based on a determination that the authenticated network session has ended.
A system receives a configuration request comprising an infrastructure definition that defines a set of resources, to be selected from a set of tenant-managed resources implemented on a tenant's premises, for implementing a compute target entity. The system generates the compute target entity associated with an addressable identifier. The compute target entity corresponds to the set of resources selected from the set of tenant-managed resources. The system receives an execution request for execution of a set of operations, where the execution request specifies the addressable identifier associated with the compute target entity for execution of the set of operations. The system maps the addressable identifier of the compute target entity to the set of resources. The system causes execution of the set of operations on the set of resources on the tenant's premises via the compute target entity.
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
Techniques for maintaining state and context of conversations between a user and digital assistant using threads. In one aspect, a method includes receiving a natural language utterance from a user during a session, obtaining a topic context instance for the natural language utterance, and generating, by a GenAI model, a list comprising an executable action based on candidate actions associated with the topic context instance. The executable action is then executed to produce an output. The executing includes determining there is no thread running within the session that is associated with the topic context instance, the executable action, or both, and responsive to determining there is no thread running, creating a thread associated with the topic context instance, the executable action, or both, and executing, using the thread, the executable action to obtain the output. The output or a communication derived from the output is then sent to the user.
In accordance with an embodiment, described herein are systems and methods for providing a data analytics workbook assistant and integration with data analytics environments. A data analytics system or environment can be integrated with a provider operating as an implementation of a provider framework which provides natural language processing, for purposes of leveraging a user's text or speech input within a data analytics or data visualization project, for example while generating, modifying, or interacting with data visualizations. Upon receiving the input, the method can process, by the selected provider, a text input or a speech input, to generate, modify, or interact with data analytics information or a visualization.
A system and computer-implemented method include receiving a request for allocating graphical processing unit (GPU) resources for performing an operation. The request includes metadata identifying a client identifier (ID) associated with a client, throughput, and latency of the operation. A resource limit is determined for performing the operation based on the metadata. Attributes associated with each GPU resource of a plurality of GPU resources available for assignment are obtained. The attributes associated with each GPU resource are analyzed with respect to the resource limit. A set of GPU resources is identified from the plurality of GPU resources based on the analysis. A dedicated AI cluster is generated by patching the set of GPU resources within a single cluster. The dedicated AI cluster reserves a portion of a computation capacity of a computing system for a period of time and the dedicated AI cluster is allocated to the client associated with the client ID.
Techniques for providing a transactionally-consistent Hierarchical Navigable Small Worlds (HNSW) index are described. In one technique, an HNSW index for a plurality of vectors is stored. In response to receiving a set of changes to the plurality of vectors, the set of changes is stored in a shared journal instead of being applied to the HNSW index. In response to receiving a vector query that includes a query vector, a subset of the set of changes in the shared journal is identified based on the query vector. Also, based on the query vector and the HNSW index, a subset of the plurality of vectors is identified. A result of the vector query is generated based on the subset of the set of changes and the subset of the plurality of vectors.
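A minimal sketch of answering a query from the HNSW index plus the shared journal; the journal entry layout, the `hnsw_search` callable, and the brute-force overlay are illustrative assumptions.

```python
import heapq

def query(qvec, hnsw_search, journal: list[dict], k: int = 10):
    # Index results reflect the last index build...
    candidates = {vid: dist for vid, dist in hnsw_search(qvec, k)}
    # ...so overlay the unapplied journal changes for a consistent result.
    for entry in journal:
        if entry["op"] == "delete":
            candidates.pop(entry["id"], None)
        elif entry["op"] in ("insert", "update"):
            candidates[entry["id"]] = distance(qvec, entry["vector"])
    return heapq.nsmallest(k, candidates.items(), key=lambda kv: kv[1])

def distance(a, b) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

journal = [{"op": "insert", "id": "v9", "vector": [0.1, 0.2]},
           {"op": "delete", "id": "v3"}]
# Stubbed index search standing in for the real HNSW traversal:
print(query([0.1, 0.2], lambda q, k: [("v3", 0.05), ("v7", 0.4)], journal, k=2))
```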
Described herein is a token exchange framework between two different cloud services providers. A multi-cloud infrastructure included in a first cloud environment that is provided by a first cloud services provider (CSP) receives a first request from a user associated with an account in a second cloud environment that is provided by a second CSP. The first request corresponds to use of a service provided by the first cloud environment and includes a first token issued by the second CSP. The multi-cloud infrastructure obtains a second token issued by the first CSP based on validating the first token with respect to a trust configuration corresponding to the second CSP. The trust configuration is previously generated and maintained by the first CSP in the first cloud environment. The multi-cloud infrastructure transmits the second token to the service to enable the user to utilize the service provided by the first cloud environment.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
G06F 21/41 - User authentication where a single sign-on provides access to a plurality of computers
The technology disclosed herein streamlines the design and provision of communication services supplied by a communication service provider. In a particular example, a method includes receiving service design parameters for a communication service provided by the communication service provider and identifying a root node in a service design catalog corresponding to at least a first parameter of the service design parameters. The method further includes creating a service design by traversing from the root node through one or more subsequent nodes defined by the service design catalog corresponding to one or more second parameters of the service design parameters until one or more leaf nodes indicated by the service design catalog are reached. The method also includes configuring resources corresponding to nodes traversed in the service design in accordance with the service design and providing the communication service via the resources.
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
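A minimal sketch of the root-to-leaf catalog walk described in the abstract above; modeling the catalog as a dict keyed by (node, parameter) pairs is an assumption about its shape, and the node names are hypothetical.

```python
catalog = {
    ("root:5g-service", "bandwidth:1g"): "node:access-1g",
    ("node:access-1g", "redundancy:dual"): "leaf:dual-homed-1g",
}

def create_service_design(root: str, parameters: list[str]) -> list[str]:
    # Start at the root node matching the first parameter, then traverse
    # subsequent nodes for the remaining parameters until a leaf is reached.
    path, node = [root], root
    for param in parameters:
        node = catalog[(node, param)]
        path.append(node)
    assert node.startswith("leaf:"), "traversal must end at a leaf node"
    return path  # nodes whose resources get configured for the service

print(create_service_design("root:5g-service", ["bandwidth:1g", "redundancy:dual"]))
```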
Techniques are described for using taints. According to some configurations, a method includes associating data with one or more taints, wherein the one or more taints include a first taint; accessing one or more assertions that specify constraints on the data that affect how the data flows through one or more networks based at least in part on the one or more taints, wherein enforcement points within the one or more networks enforce the assertions; determining, based at least in part on the one or more taints and the one or more assertions, that a first enforcement point is authorized to access at least a portion of the data; allowing the first enforcement point to access the at least the portion of the data; and responsive to the first enforcement point accessing the at least the portion of the data, associating the first enforcement point with the one or more taints.
CONTROLLING PLACEMENT OF RESOURCES WITHIN A CLOUD INFRASTRUCTURE OF A FIRST CLOUD SERVICE PROVIDER FOR A CLOUD SERVICE OFFERED BY A SECOND CLOUD SERVICE PROVIDER
Techniques are disclosed for dynamically managing access to cross-cloud services. Provided are access control mechanisms for controlling and/or managing access to cross-cloud services offered by and between one or more cloud service providers. The techniques include detecting that a request for a cloud service has been received by a first component of a first cloud environment of a first cloud service provider and receiving an indication that deployment of the cloud service is permitted. In response to receiving the indication, a second component of the first cloud environment generates an instruction for implementing the cloud service within a second cloud environment and causes deployment of the cloud service within the second cloud environment based on the instruction.
Embodiments are directed to a firewall for a virtual machine ("VM") application. Embodiments initiate event monitoring of the VM application. Embodiments receive an event and compare the event to a plurality of events stored in a baseline profile of the VM application. When the event does not match any of the plurality of events, embodiments automatically generate an alert and/or perform an action corresponding to the VM application.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR EXCHANGING OUTBOUND REGISTRATION COUNT INFORMATION AMONG INTERROGATING CALL SESSION CONTROL FUNCTIONS (I-CSCFS) AND USING THE OUTBOUND REGISTRATION COUNT INFORMATION FOR SERVING CSCF (S-CSCF) SELECTION
A method for exchanging outbound registration count information among I-CSCFs and using the outbound registration count information for S-CSCF selection includes receiving, at a first I-CSCF of a cluster of I-CSCFs and from each S-CSCF in a cluster of S-CSCFs, a value indicating a registration capacity of the S-CSCF. The method further includes receiving, at the first I-CSCF and from at least one other I-CSCF in the cluster of I-CSCFs, outbound registration counts indicating numbers of outbound registrations that the at least one other I-CSCF has with the S-CSCFs in the cluster of S-CSCFs. The method further includes calculating, by the first I-CSCF and using the values indicating the registration capacities of the S-CSCFs and the outbound registration counts, values indicating updated registration capacities of the S-CSCFs. The method further includes using, by the first I-CSCF, the values indicating the updated registration capacities to select an S-CSCF for at least one outbound registration message.
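A minimal sketch of the capacity arithmetic: each I-CSCF subtracts the outbound registration counts reported by its peers (and its own) from the advertised S-CSCF capacities, then selects the S-CSCF with the most remaining headroom; the most-headroom selection rule is an illustrative policy.

```python
def updated_capacities(
    advertised: dict[str, int],            # S-CSCF -> advertised capacity
    peer_counts: list[dict[str, int]],     # per peer I-CSCF: S-CSCF -> count
    own_counts: dict[str, int],            # this I-CSCF's own registrations
) -> dict[str, int]:
    remaining = dict(advertised)
    for counts in peer_counts + [own_counts]:
        for scscf, n in counts.items():
            remaining[scscf] -= n
    return remaining  # values indicating updated registration capacities

def select_scscf(remaining: dict[str, int]) -> str:
    # Route the next outbound registration to the least-loaded S-CSCF.
    return max(remaining, key=remaining.get)

caps = updated_capacities({"scscf-1": 1000, "scscf-2": 800},
                          [{"scscf-1": 400}], {"scscf-2": 100})
print(select_scscf(caps))  # scscf-1 has 600 left, scscf-2 has 700 -> scscf-2
```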
Using a cloud orchestration platform, a data set is identified that corresponds to task processes associated with a microservice. A set of pods is identified that includes a master pod and worker pods. Each worker pod replicates data from the master pod. The master pod is accessed by calling a container orchestration platform. Upon detecting an input that triggers an upgrade for at least part of the set of pods, a default pod-replacement protocol of the container orchestration platform is interrupted by transmitting a custom script from the cloud orchestration platform to the container orchestration platform. Execution of the script causes polling the microservice for pod status information and determining, based on the status information, whether a condition for iteration advancement for the upgrade is satisfied. An iteration advancement of the upgrade to a next pod occurs upon determining that the condition is satisfied.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
49.
PROVISIONING AND MANAGING RESOURCES WITHIN A CLOUD INFRASTRUCTURE OF A FIRST CLOUD SERVICE PROVIDER FOR A CLOUD SERVICE OFFERED BY A SECOND CLOUD SERVICE PROVIDER
Techniques are disclosed for provisioning and managing resources within a cloud infrastructure of a first cloud service provider for a cloud service offered by a second cloud service provider. Cross-cloud services can be provisioned and managed by and between private clouds of cloud service providers. The techniques include receiving a request for a cloud service by a component of a first private cloud within a first cloud environment and from a component of a second private cloud within a second cloud environment. The techniques further include the component of the first private cloud performing one or more operations to establish network connectivity prerequisites for network connectivity between the first private cloud and the second private cloud and causing one or more components of the first private cloud to provision the cloud service in the second private cloud.
An architecture for offering a service of a first cloud service provider via a second cloud service provider is disclosed. A first cloud service provider infrastructure includes a first infrastructure and a second infrastructure. The first infrastructure is physically connected to a third infrastructure of a second cloud service provider infrastructure based on a first protocol. The first infrastructure is also physically connected to the second infrastructure based on a second protocol that is different from the first protocol. Using the first and second infrastructures, low latency high-bandwidth cross-cloud services can be provisioned and managed between private clouds of different cloud service providers.
Techniques are disclosed herein for transforming natural language conversations into a visual output. In one aspect, a computer-implemented method includes generating an input string by concatenating a natural language utterance with a schema representation comprising a set of entities for visualization actions, generating, by a first encoder of a machine learning model, one or more embeddings of the input string, encoding, by a second encoder of the machine learning model, relations between elements in the schema representation and words in the natural language utterance based on the one or more embeddings, generating, by a grammar-based decoder of the machine learning model and based on the encoded relations and the one or more embeddings, an intermediate logical form that represents at least the query, the one or more visualization actions, or the combination thereof, and generating, based on the intermediate logical form, a command for a computing system.
Techniques for generating a retail forecasting model from product-cluster-based estimated elasticity values to forecast the effects of price changes on the demand for a set of products are disclosed. A system generates cluster-based price-elasticity values for a set of products by applying a set of regressive elasticity-estimation algorithms to a set of product data and clustering products based on product descriptions and estimated price-elasticity values. The system uses the cluster-based price-elasticity values for the products to generate the retail forecasting model.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 30/0202 - Market predictions or forecasting for commercial activities
G06Q 30/0201 - Market modellingMarket analysisCollecting market data
53.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR USING SERVICE COMMUNICATION PROXY (SCP) TO AUTOMATICALLY CONFIGURE SERVICE-BASED INTERFACE (SBI) TRANSACTION TIMEOUTS
A method for automatically configuring service-based interface (SBI) timeouts includes determining, by a service communication proxy (SCP), latency measurements for SBI interfaces with producer network functions (NFs). The method further includes maintaining, by the SCP, a database of the latency measurements for the SBI interfaces with the producer NFs. The method further includes communicating, by the SCP, the latency measurements to an element management system (EMS) for automatically configuring, at consumer NFs, timeouts for the SBI interfaces associated with the producer NFs.
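A minimal sketch of deriving a per-interface timeout from the latency database; the percentile-plus-margin rule is an illustrative policy, not the disclosed configuration logic.

```python
def sbi_timeout_ms(latencies_ms: list[float], percentile: float = 95.0,
                   margin: float = 1.5) -> int:
    # Take a high percentile of observed latency and add safety headroom.
    ordered = sorted(latencies_ms)
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100.0))
    return int(ordered[idx] * margin)

# Latency database maintained per producer NF, consumed by the EMS to
# configure timeouts at consumer NFs.
latency_db = {"nf-producer-1": [12.0, 15.5, 14.2, 90.0, 13.1]}
timeouts = {nf: sbi_timeout_ms(ms) for nf, ms in latency_db.items()}
print(timeouts)  # {'nf-producer-1': 135}
```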
Techniques are disclosed for providing services based on infrastructure distributed between multiple cloud service providers. Low-latency high-bandwidth cross-cloud services can be provisioned and managed by and between private clouds of cloud service providers. The techniques include forming a cloud network between a first set of compute resources of a first infrastructure of a first cloud environment and a second set of compute resources of a second infrastructure of a second cloud environment. The first cloud environment is provided by a first cloud service provider and the second cloud environment is provided by a second cloud service provider different from the first cloud service provider.
Techniques are disclosed for provisioning a cloud service of a first cloud service provider using a control plane of a second cloud service provider. The techniques include detecting that a request for a cloud service provided by the first cloud service provider has been received from the second cloud environment of a second cloud service provider different from the first cloud service provider. The techniques further include, after detecting that the request for the cloud service has been received, provisioning a first set of resources within the first cloud environment and linking the first set of resources to a second set of resources within the second cloud environment. Linking the first set of resources to the second set of resources enables data pertaining to the cloud service to be transferred from the second cloud environment to the first cloud environment.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
56.
PROVISIONING AND MANAGING RESOURCES WITHIN A CLOUD INFRASTRUCTURE OF A FIRST CLOUD SERVICE PROVIDER FOR CLOUD SERVICES OFFERED BY A SECOND CLOUD SERVICE PROVIDER
Techniques are disclosed for provisioning and managing resources within a cloud infrastructure of a first cloud service provider for cloud services offered by a second cloud service provider. Cross-cloud services can be provisioned and managed by and between private clouds of cloud service providers. The techniques include receiving a request for a cloud service by a component of a first private cloud within a first cloud environment and from a component of a second private cloud within a second cloud environment. The techniques further include the component of the first private cloud performing one or more operations to establish network connectivity prerequisites for network connectivity between the first private cloud and the second private cloud and causing one or more components of the first private cloud to provision the cloud service in the second private cloud.
Techniques for deriving an optimal traversal path on a racetrack are disclosed. The system partitions a track into straight and curved segments. The system identifies optimal traversals through each segment from historical traversal data. The system stitches the optimal traversals together and smooths them at the transition points between track segments. The system verifies that the smoothed traversals meet one or more kinematic criteria before outputting the optimal traversal path.
Techniques for escalating a service ticket between two service providers include receiving an initial service ticket at an initial service provider for resolution of an issue from an affected entity that is affected by the issue. The initial service ticket comprises an access-restricted set of attributes of the affected entity. Based on the initial service ticket, the system generates an escalated service ticket at the initial service provider. The escalated service ticket identifies the issue and omits the access-restricted attributes. The system transmits the escalated service ticket to a higher-tiered service provider, which is not authorized to access the access-restricted set of attributes comprised in the initial service ticket. In response to transmitting the escalated service ticket, the system receives information corresponding to the resolution of the issue from the higher-tiered service provider and processes the initial service ticket based on the information corresponding to the resolution of the issue.
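A minimal sketch of building the escalated ticket by omitting the access-restricted attributes; the field names and the restricted-attribute set are hypothetical.

```python
ACCESS_RESTRICTED = {"customer_name", "tenant_id", "contact_email"}

def escalate(initial_ticket: dict) -> dict:
    # The escalated ticket identifies the issue but omits attributes the
    # higher-tiered service provider is not authorized to access.
    escalated = {k: v for k, v in initial_ticket.items()
                 if k not in ACCESS_RESTRICTED}
    escalated["escalated_from"] = initial_ticket["ticket_id"]
    return escalated

ticket = {"ticket_id": "T-100", "issue": "db connect failures",
          "tenant_id": "t-42", "customer_name": "Acme",
          "contact_email": "x@acme.test"}
print(escalate(ticket))  # only the issue and ticket linkage leave the tier
```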
The technology disclosed herein enables optimization of the performance of Quality-of-Service (QoS) flows in a network slice of a fifth-generation (5G) network. In a particular example, a method includes creating network slices in the 5G network. The network slices comprise logically separated networks running on the 5G network. The method further includes retrieving network information indicative of whether a service level is being met for a QoS flow of a slice of the network slices and determining, based on the network information, that the service level for the QoS flow is not being met. In response to determining the service level for the QoS flow is not being met, the method includes directing network control functions of the 5G network to reconfigure resource usage for the slice to achieve the service level for the QoS flow.
H04L 41/0897 - Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
H04L 41/5025 - Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
Operations of a digital signature manager may include detecting, in a certificate repository on a first virtual cloud network, a set of one or more new certificate authority (CA) certificates; transmitting, to a key management service hosted on a second virtual cloud network, a CA dataset that includes the set of one or more new CA certificates; receiving, from the key management service, a digital signature of the CA dataset generated based at least on a global private key stored on the second virtual cloud network in a private key repository associated with the key management service; and storing the digital signature in the certificate repository in a data structure that associates the digital signature with the CA dataset.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols; including means for verifying the identity or authority of a user of the system
Techniques are described herein for performing thread-local garbage collection. The techniques include automatic profiling and separation of private and shared objects, allowing for efficient reclamation of memory local to threads. In some embodiments, threads are assigned speculatively-private heaps within memory. Unless there is a prior indication that an allocation site yields shared objects, a garbage collection system may assume and operate as if such allocations are private until proven otherwise. Object allocations in a private heap may violate the speculative state of the heap when reachable outside of the thread. When violations to the speculative state are detected, an indication may be generated to notify the garbage collection system, which may prevent thread-local memory reclamation operations until the speculative state is restored. The garbage collection system may learn from the violations to reduce the allocation of invalidly private objects and increase the efficiency of the garbage collection system.
Disclosed techniques relate to instrumenting applications. In an example, a method involves providing a web page application with a tracer application. The method further involves accessing a source of the web page application. The method further involves detecting a reference to an element of the web page application in the source. The method further involves detecting a user interaction with the web page application. The method further involves automatically logging a start of a span based on the detection of the user interaction. The logging includes associating the span with the tracer application. The method further involves executing operations relating to the element. The method further involves determining that the element is ready for additional user interactions. The method further involves automatically logging an end of the span based upon the determining.
Techniques are described herein for authenticating a pod. A method can include a manager instance receiving a first request for a first token to access a computing resource. The manager instance can determine an identity of the service account and generate a second request for the first token based at least in part on the authentication. The manager instance can transmit the second request to a token issuance service of the computing system. The token issuance service can generate a third request for the first token, the third request comprising the identity of the service account and a token issuance service signature. The token issuance service can transmit the third request to an identity service of the computing system. The identity service can generate the first token based at least in part on determining whether to generate the first token.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
64.
MODEL AUGMENTATION FRAMEWORK FOR DOMAIN ASSISTED CONTINUAL LEARNING IN DEEP LEARNING
Techniques are described herein for generating a block extender model. An example method includes a system accessing a base model trained for identifying a base class. The system can access an extender comprising block extenders, the extender being associated with an extender class distinct from the base class. The system can connect the extender with the base model to generate an augmented model. The system can input training data to the augmented model, the training data being provided to both the base model and the extender and comprising the extender class. The system can train the extender to identify the extender class based at least in part on the training data and a signal received from the base model. The system can generate a trained extender based at least in part on the training, the trained extender identifying an object associated with the extender class.
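One plausible realization of the augmented model, sketched in PyTorch under the assumption that the base model is frozen and the extender trains on an intermediate signal from it; layer sizes and names are illustrative.

```python
# Frozen base model plus a trainable extender head for the new class.
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))  # base classes
for p in base.parameters():
    p.requires_grad = False  # base model stays fixed

extender = nn.Linear(64, 3)  # block extender for 3 extender classes


def augmented_forward(x: torch.Tensor) -> torch.Tensor:
    features = base[1](base[0](x))  # signal received from the base model
    return extender(features)       # extender predicts the extender class


optimizer = torch.optim.Adam(extender.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)              # training data containing the extender class
y = torch.randint(0, 3, (8,))
loss = loss_fn(augmented_forward(x), y)
loss.backward()                     # gradients flow only into the extender
optimizer.step()
```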
Embodiments detect one or more events on an electrical grid. Embodiments use a sensor installed at an edge of the electrical grid to generate a sensor waveform at a first sampling rate corresponding to current and/or voltage signals. Embodiments transform the sensor waveform into multiple frequency bands and digitize the multiple frequency bands at a second sampling rate that is lower than the first sampling rate. Embodiments receive, by a pattern recognition machine learning (ML) algorithm at the edge, the digitized multiple frequency bands for events and predict, using the ML algorithm, an occurrence of the one or more events.
H02J 3/00 - Circuit arrangements for ac mains or ac distribution networks
H02J 13/00 - Circuit arrangements for providing remote indication of network conditions, e.g. an instantaneous record of the open or closed condition of each circuit breaker in the network; Circuit arrangements for providing remote control of switching means in a power distribution network, e.g. switching in and out of current consumers by using a pulse code signal carried by the network
66.
FAILOVER HANDLING FOR PODS EXECUTING AN APPLICATION IN A HIGH AVAILABILITY MODE
The technology disclosed herein enables a service manager of a container orchestration platform to handle failovers of pods executing an application in a high availability mode. In a particular example, a method includes receiving pod information including unique application identifiers generated by the application and indications of which of the pods are active and which are standby. The method further includes configuring service objects provided by the container orchestration platform to each correspond to respective ones of the pods based on the unique application identifiers. The method also includes receiving updated pod information indicating that a first pod of the pods, which was on standby, is now active and has a first application identifier, of the unique application identifiers, that was previously assigned to a second pod that failed. Additionally, the method includes reconfiguring a service object associated with the first application identifier to correspond to the first pod instead of the second pod.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
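The reconfiguration step can be modeled abstractly as follows; this toy sketch keys service objects by application identifier and repoints one on failover, with all names illustrative.

```python
# Service objects track the application-generated identifier, not the pod,
# so a failover is just a repoint rather than a recreation.
from dataclasses import dataclass


@dataclass
class ServiceObject:
    app_id: str
    target_pod: str


class ServiceManager:
    def __init__(self) -> None:
        self.services: dict[str, ServiceObject] = {}

    def configure(self, pod_info: dict[str, str]) -> None:
        # pod_info maps unique application identifier -> currently active pod.
        for app_id, pod in pod_info.items():
            self.services[app_id] = ServiceObject(app_id=app_id, target_pod=pod)

    def handle_failover(self, app_id: str, new_active_pod: str) -> None:
        # The standby pod took over app_id from the failed pod, so the
        # matching service object is repointed at the newly active pod.
        self.services[app_id].target_pod = new_active_pod


mgr = ServiceManager()
mgr.configure({"app-1": "pod-a", "app-2": "pod-b"})
mgr.handle_failover("app-1", "pod-c")  # pod-a failed; pod-c now serves app-1
```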
67.
SYNCHRONIZING DOCUMENT OBJECT MODEL TREES RESPECTIVELY MAINTAINED BY A SERVER AND A BROWSER
A system synchronizes a server-side DOM tree and a browser-side DOM tree with one another. The server may receive, from a browser, a hash value of the browser-side DOM tree and a server-side update instruction for applying a first server-side update to the server-side DOM tree to synchronize with a first browser-side update made by the browser to the browser-side DOM tree. The server may identify the server-side DOM tree based on the hash value. The server may execute, upon the server-side DOM tree, the first server-side update and a second server-side update that is triggered by the first server-side update. The server may compute a browser-side update instruction for applying a second browser-side update to the browser-side DOM tree to synchronize with the server-side DOM tree. The server may transmit the browser-side update instruction to the browser, and the browser may apply the second browser-side update to the browser-side DOM tree.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
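A schematic sketch of the handshake, assuming the tree is identified by a hash over its JSON form; the update format and all names are illustrative.

```python
# The browser sends a hash identifying its tree plus an update instruction;
# the server applies that update (and any update it triggers) and answers
# with the instruction the browser needs to catch up.
import hashlib
import json


def tree_hash(tree: dict) -> str:
    # Stable hash over the DOM tree's JSON form, used to identify the tree.
    return hashlib.sha256(json.dumps(tree, sort_keys=True).encode()).hexdigest()


class Server:
    def __init__(self) -> None:
        self.trees: dict[str, dict] = {}  # hash -> server-side DOM tree

    def register(self, tree: dict) -> str:
        h = tree_hash(tree)
        self.trees[h] = tree
        return h

    def handle_update(self, h: str, update: dict) -> dict:
        tree = self.trees.pop(h)                              # identify by hash
        tree[update["node"]] = update["value"]                # first server-side update
        tree["derived"] = f"derived-from-{update['value']}"   # triggered second update
        self.trees[tree_hash(tree)] = tree
        # Browser-side instruction to apply the triggered (second) update.
        return {"node": "derived", "value": tree["derived"]}


server = Server()
h = server.register({"title": "home"})
instruction = server.handle_update(h, {"node": "title", "value": "cart"})
```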
Techniques for selecting medical items for presentation using an artificial intelligence architecture are provided. In one technique, summary note data that is composed by a physician for a patient is received. A machine-learned (ML) language model generates, based on the summary note data, a set of feature values. A profile of the patient and a profile of the physician are identified. An ML recommendation model determines, based on the profile of the patient, the profile of the physician, and the set of feature values, a plurality of candidate medical items. An ML reinforcement learning model generates a ranking of the plurality of candidate medical items. A subset of the plurality of candidate medical items is caused to be presented on a screen of a computing device based on the ranking.
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
69.
UPDATING DIGITAL CERTIFICATES ASSOCIATED WITH A VIRTUAL CLOUD NETWORK
Techniques for updating certificate bundles may include receiving, at an entity associated with a virtual cloud network, a certificate bundle that includes an updated set of certificate authority (CA) certificates. The techniques may include applying a validation process to an entity certificate based on the certificate bundle, with the entity certificate having been issued to the entity prior to the entity receiving the certificate bundle. The validation process may include validating, by the entity, a certificate chain that includes the entity certificate and a CA certificate included in the updated set of CA certificates. The techniques may include, responsive to validating the certificate chain, installing the certificate bundle in a storage medium associated with the entity, and utilizing, by the entity, the certificate bundle to authenticate at least one additional entity associated with the virtual cloud network.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
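An abstract sketch of the validate-then-install flow; a real deployment would verify X.509 signatures with a certificate library, whereas here chain validation is reduced to an issuer lookup purely for illustration.

```python
# The entity certificate, issued before the bundle arrived, must chain to a
# CA certificate in the updated bundle before the bundle is installed.
from dataclasses import dataclass


@dataclass(frozen=True)
class Certificate:
    subject: str
    issuer: str


class Entity:
    def __init__(self, entity_cert: Certificate) -> None:
        self.entity_cert = entity_cert  # issued before the new bundle arrives
        self.installed_bundle = None    # becomes the bundle once validated

    def receive_bundle(self, bundle: list[Certificate]) -> bool:
        # Validation: the chain from the entity certificate must terminate
        # in a CA certificate contained in the updated bundle.
        chain_ok = any(ca.subject == self.entity_cert.issuer for ca in bundle)
        if chain_ok:
            self.installed_bundle = bundle  # install only after validation
        return chain_ok


entity = Entity(Certificate(subject="svc.example", issuer="root-ca-v2"))
ok = entity.receive_bundle([Certificate(subject="root-ca-v2", issuer="root-ca-v2")])
```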
Systems and methods are disclosed for implementing a cloud based network function. In certain embodiments, a method may comprise operating a custom operator in a containerized software environment such as Kubernetes to manage a virtual network interface controller (Vnic) on an application pod, the Vnic being reachable directly from a network external to the containerized software environment. The method may include identifying the application pod to which to add the Vnic, determining a worker node in the containerized software environment on which the application pod is running, creating the Vnic on the worker node, and executing a job on the worker node to inject the Vnic into the application pod.
Various embodiments of the present technology generally relate to systems and methods for managing configuration data in a virtual or containerized software environment. A configuration data management system may enable ConfigMaps to be added to an application pod of a virtual software environment without restarting the application pod, a ConfigMap including a data object containing configuration data. The configuration data management process may monitor for creation of a first ConfigMap in the virtual software environment, append a name of the first ConfigMap to a data element name from the first ConfigMap to produce an appended data element, and store the appended data element to a super ConfigMap, the super ConfigMap including a specialized ConfigMap configured to contain data elements from multiple ConfigMaps.
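The name-spacing idea above can be sketched in a few lines; appending the ConfigMap name to each element name keeps elements from different ConfigMaps from colliding in the super ConfigMap. All names are illustrative.

```python
# Merge each ConfigMap's data into a single "super" ConfigMap, qualifying
# every element name with the ConfigMap it came from.
def merge_into_super(super_cm: dict[str, str], cm_name: str, cm_data: dict[str, str]) -> None:
    for key, value in cm_data.items():
        # Append the ConfigMap name to the data element name so elements
        # from different ConfigMaps cannot collide inside the super ConfigMap.
        super_cm[f"{key}.{cm_name}"] = value


super_configmap: dict[str, str] = {}
merge_into_super(super_configmap, "db-config", {"url": "db:5432"})
merge_into_super(super_configmap, "cache-config", {"url": "cache:6379"})
# super_configmap == {"url.db-config": "db:5432", "url.cache-config": "cache:6379"}
```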
Systems and methods are disclosed for implementing a virtual IP for a container pod. In certain embodiments, a method may comprise operating a cloud based network system in a containerized software environment to assign a virtual internet protocol (VIP) address to an application pod of a containerized software environment, the VIP being directly reachable from a network external to the containerized software environment. The method may include reserving a range of internet protocol (IP) addresses for use as VIP addresses, assigning a first fixed IP address to a first application pod, assigning a first VIP address from the range of IP addresses to the first application pod, and routing traffic directed to the first VIP address to the first fixed IP address.
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
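A toy routing table for the VIP scheme described above, using only the Python standard library; the address ranges and addresses are illustrative.

```python
# Reserve a range for VIPs, hand them out, and forward VIP traffic to the
# pod's fixed IP address.
import ipaddress

VIP_RANGE = ipaddress.ip_network("10.200.0.0/24")  # reserved for VIP use
_vip_iter = VIP_RANGE.hosts()

vip_to_fixed: dict[str, str] = {}  # VIP address -> pod's fixed IP address


def assign_vip(fixed_ip: str) -> str:
    vip = str(next(_vip_iter))    # take the next free VIP from the range
    vip_to_fixed[vip] = fixed_ip  # route VIP traffic to the fixed IP
    return vip


def route(dest_ip: str) -> str:
    # Traffic addressed to a VIP is forwarded to the pod's fixed IP;
    # anything else is delivered unchanged.
    return vip_to_fixed.get(dest_ip, dest_ip)


pod_vip = assign_vip("192.168.1.17")  # first application pod's fixed IP
assert route(pod_vip) == "192.168.1.17"
```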
Systems and methods are disclosed for implementing cloud network service management. In certain embodiments, a method may comprise operating a cloud native application (CnApp) custom operator in a containerized software environment to dynamically manage cloud native network service on a target application pod via a persistent network interface to an external network. The method may include obtaining a first resource definition data, for a first custom resource, to define attributes for a bundle of resources used to implement the cloud native network service, and creating the first custom resource based on the first resource definition data, including initializing the target application pod. The method may include generating a second resource definition data, derived from the first resource definition data, to define attributes for a virtual network interface to associate with the target application pod, and applying the second resource definition data to initialize creation of a second custom resource.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
74.
MACHINE LEARNING MODEL GENERATION FOR TIME DEPENDENT DATA
Embodiments generate a machine learning ("ML") model. Embodiments receive training data, the training data including time dependent data and a plurality of dates corresponding to the time dependent data. Embodiments date split the training data by two or more of the plurality of dates to generate a plurality of date split training data. For each of the plurality of date split training data, embodiments split the date split training data into a training dataset and a corresponding testing dataset using one or more different ratios to generate a plurality of train/test splits. For each of the train/test splits, embodiments determine a difference of distribution between the training dataset and the corresponding testing dataset. Embodiments then select the train/test split with a smallest difference of distribution and train and test the ML model using the selected train/test split.
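A compact sketch of the split-selection loop, assuming "date splitting" selects the data up to a split date and approximating the difference of distribution by a difference of means; the dates, ratios, and data are illustrative.

```python
# Try several date splits and train/test ratios, keep the split whose train
# and test sets have the most similar distribution.
from statistics import mean

rows = [  # (date, value) pairs standing in for time dependent training data
    (f"2024-01-{d:02d}", float(d)) for d in range(1, 29)
]

candidates = []
for split_date in ("2024-01-10", "2024-01-15", "2024-01-20"):  # date splits
    dated = [v for (dt, v) in rows if dt <= split_date]
    for ratio in (0.7, 0.8):  # different train/test ratios
        cut = int(len(dated) * ratio)
        train, test = dated[:cut], dated[cut:]
        if not train or not test:
            continue
        # Difference of distribution, approximated here by difference of means.
        diff = abs(mean(train) - mean(test))
        candidates.append((diff, split_date, ratio, train, test))

best = min(candidates, key=lambda c: c[0])  # smallest distribution difference wins
_, date, ratio, train_set, test_set = best
# train_set / test_set would now be used to train and test the ML model.
```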
A key management service (KMS) in a cloud computing environment has an internal vault for cryptographic operations by an internal cryptographic key within the cloud environment and a proxy key vault communicatively coupled to an external key manager (EKM) that stores an external cryptographic key. The KMS uses a provider-agnostic application program interface (API) that permits the cloud service customer to use the same request interface and format for cryptographic operation requests regardless of whether the request is for an operation directed to an internal vault or to an external vault, and regardless of the particular vendor of the external key management service operating on the external hardware device.
A system may display a Graphical User Interface (GUI) including a source region presenting a plurality of source data-serialization elements and a destination region presenting a plurality of destination data-serialization elements. The system may receive a user input associating a first destination data-serialization element, of the plurality of destination data-serialization elements, and a first source data-serialization element of the plurality of source data-serialization elements. Responsive to receiving the user input, the system may generate and store a mapping expression that defines a mapping association between the first source data-serialization element and the first destination data-serialization element. The system may present in a mapping region of the GUI displayed concurrently with the source region and the destination region, a mapping element representing the mapping association between the first source data-serialization element and the first destination data-serialization element.
In an embodiment, a method may include accessing, by a computing system, a multi-node problem. The multi-node problem may include a plurality of nodes, each respective node having one or more node features. The method may include providing, by the computing system, each respective node with each respective node feature to a machine learning model. The method may include determining, by the computing system using the machine learning model, a subset of nodes of the plurality of nodes based at least in part on the respective node features. The method may include calculating, by the computing system, one or more solutions to the multi-node problem based at least in part on the subset of nodes. The method may include storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer tenancy, where the traffic is generated by a multi-tenancy service. The traffic can be destined to the target service. The traffic can be tagged by the multi-tenancy service with information indicating that the traffic is egressing therefrom on behalf of the customer tenancy. The customer tenancy can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network). The customer network can be associated with the egress policy. The customer can define the egress policy at different granularity levels by using different attributes. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy, based on the customer-defined attributes, on the traffic that the target service is receiving.
Techniques are disclosed for rotating network addresses following the installation of a prefab region network at a destination site. A manager service executing within a distributed computing system can allocate a rotation network address pool to a root allocator service that may be configured to provide network addresses from network address pools to dependent nodes within the distributed computing system, with each dependent node associated with a corresponding first network address of the network address pools. The manager service can receive an indication that a second network address of the rotation network address pool is associated with a dependent node. In response, the manager service can execute a migration operation for the dependent node to redirect network traffic within the distributed computing system from the first network address to the second network address.
H04L 41/08 - Configuration management of networks or network elements
H04L 41/12 - Discovery or management of network topologies
H04L 61/4511 - Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network, or by a multi-tenancy service on behalf of the customer. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network) or by the multi-tenancy service. The customer network can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network). The customer network can be associated with the egress policy. The customer can define the egress policy at different granularity levels by using different attributes. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy, based on the customer-defined attributes, on the traffic that the target service is receiving.
Techniques for generating high-precision localization of a moving object on a trajectory are provided. In one technique, a particular image that is associated with a moving object is identified. A set of candidate images is selected from a plurality of images that were used to train a neural network. For each candidate image in the set of candidate images: (1) output from the neural network is generated based on inputting the particular image and said each candidate image to the neural network; (2) a predicted position of the particular image is determined based on the output and a position that is associated with said each candidate image; and (3) the predicted position is added to a set of predicted positions. The set of predicted positions is aggregated to generate an aggregated position for the particular image.
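The aggregation step might look like the following sketch, under the illustrative assumption that the network predicts an offset from each candidate image's known position.

```python
# Each candidate contributes one predicted position; the predictions are
# averaged into a single localization for the query image.
from statistics import mean

candidates = [  # (candidate position, offset predicted by the network)
    ((10.0, 4.0), (0.4, -0.1)),
    ((10.5, 4.2), (-0.2, -0.3)),
    ((9.8, 3.7), (0.5, 0.2)),
]

predicted_positions = [
    (cx + dx, cy + dy) for ((cx, cy), (dx, dy)) in candidates
]

aggregated = (
    mean(p[0] for p in predicted_positions),
    mean(p[1] for p in predicted_positions),
)
# aggregated is the final high-precision position for the query image.
```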
Techniques for layout-aware multi-modal networks for document understanding are provided. In one technique, word data representations that were generated based on words that were extracted from an image of a document are identified. Based on the image, table features of one or more tables in the document are determined. One or more table data representations that were generated based on the table features are identified. The word data representations and the one or more table data representations are input into a machine-learned model to generate a document data representation for the document. A task is performed based on the document data representation. In a related technique, instead of the one or more table data representations, one or more layout data representations that were generated based on a set of layout features, of the document, that was determined based on the image are identified and input into the machine-learned model.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Embodiments relate to generating time-series energy usage forecast predictions for energy consuming entities. Machine learning model(s) can be trained to forecast energy usage for different energy consuming entities. For example, a local coffee shop location and a large grocery store location are both considered retail locations; however, their energy usage over days or weeks may differ significantly. Embodiments organize energy consuming entities into different entity segments and store trained machine learning models that forecast energy usage for each of these individual entity segments. For example, a given machine learning model that corresponds to a given entity segment can be trained using energy usage data for entities that match the given entity segment. A forecast manager can generate a forecast prediction for an energy consuming entity by matching the entity to a given entity segment and generating the forecast prediction using the entity segment's trained machine learning model.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
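The forecast-manager dispatch can be sketched as a segment lookup; the segment keys and the trivial stand-in "models" are illustrative.

```python
# Match an entity to its segment, then forecast with that segment's model.
from typing import Callable

# One trained model per entity segment (stand-ins for real ML models).
segment_models: dict[str, Callable[[list[float]], float]] = {
    "retail/small": lambda history: sum(history[-7:]) / 7 * 1.02,
    "retail/large": lambda history: sum(history[-7:]) / 7 * 1.10,
}


def match_segment(entity: dict) -> str:
    # Segment matching reduced to two features for illustration.
    size = "large" if entity["floor_area_m2"] > 1000 else "small"
    return f"{entity['category']}/{size}"


def forecast(entity: dict, usage_history: list[float]) -> float:
    model = segment_models[match_segment(entity)]
    return model(usage_history)


coffee_shop = {"category": "retail", "floor_area_m2": 120}
print(forecast(coffee_shop, [31.0] * 14))  # kWh forecast from the segment's model
```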
Techniques for providing machine-learned (ML)-based artificial intelligence (AI) capabilities are described. In one technique, multiple AI capabilities are stored in a cloud environment. While the AI capabilities are stored, a request for a particular AI capability is received from a computing device of a user. Also, in response to receiving training data based on input from the user, the training data is stored in a tenancy, associated with the user, in the cloud environment. In response to receiving the request, the particular AI capability is accessed, a ML model is trained based on the particular AI capability and the training data to produce a trained ML model, and an endpoint, in the cloud environment, is generated that is associated with the trained ML model. The endpoint is provided to the tenancy associated with the user.
Systems, methods, and other embodiments associated with concurrently joining voice channels and web channels are described. In one embodiment, a method includes establishing a voice session to communicate over an audio channel, wherein a live agent communicates audio voice signals with a user. In response to identifying an issue from the user, a navigation link is transmitted that, when activated, navigates a browser to a web page associated with the issue. A web session is established to communicate between the browser and the web page. The voice session and the web session associated with the user are linked together. Because the two sessions are linked, a call controller may communicate simultaneously over both channels, allowing the live agent to disconnect from the audio channel.
Network entities associated with a virtual cloud network are transitioned through a certificate bundle distribution process for distributing new certificate authority certificates to the network entities. Operations may include executing, in relation to each of the network entities, a first operation associated with a first phase of the process; obtaining, for each particular network entity, individual entity information associated with a progress of a particular network entity in relation to the first phase; computing, based on the individual entity information, an aggregate metric indicative of an aggregate progress of the network entities in relation to the first phase; determining, based on the aggregate metric, that one or more transition criteria are satisfied for transitioning the network entities from the first phase to a second phase of the process; and executing, in relation to each of the network entities, a second operation associated with the second phase of the process.
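The phase-gating logic above reduces to an aggregate metric and a threshold check, as in this sketch; the 99% criterion is an illustrative assumption.

```python
# Roll per-entity progress up into an aggregate metric and transition the
# fleet to the next phase only once the transition criterion is met.
def aggregate_progress(entity_progress: dict[str, bool]) -> float:
    # Fraction of network entities that completed the current phase.
    return sum(entity_progress.values()) / len(entity_progress)


def maybe_transition(entity_progress: dict[str, bool], threshold: float = 0.99) -> bool:
    # Transition criteria: enough entities finished the first phase.
    return aggregate_progress(entity_progress) >= threshold


progress = {f"entity-{i}": True for i in range(995)}
progress.update({f"entity-{i}": False for i in range(995, 1000)})
if maybe_transition(progress):
    # Execute the second-phase operation for every network entity here.
    pass
```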
Embodiments permit secure information exchange using lightweight data and near-field communication (NFC). A user can transmit lightweight data, such as one or more indicators (e.g., user indicator, scope indicator(s), documents indicator(s), etc.), to a receiving computing system via the user's wireless device and an NFC protocol. Because NFC transmissions are performed by co-located devices, this lightweight data transmission can trigger and/or continue a sophisticated workflow. For example, the receiving computing system can be associated with a product or service provider, and the lightweight data transmission can progress a workflow related to a particular product and/or service. The workflow progression can include accessing secure user information via the indicator(s) received over the NFC transmission.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
90.
TRACKING DATA CENTER BUILD DEPENDENCIES WITH CAPABILITIES AND SKILLS
A cloud-computing service (e.g., a "Puffin Service") is described. The service may maintain backward and forward compatibility between skills and capabilities. Skills may be configured to enable improved tracking of a process for building a data center. There may be occasions in which an orchestrator may use both skills and capabilities to drive build operations. To enable both constructs to be utilized, the Puffin Service maintains associations between skills and capabilities. These associations enable skills to be published when published capabilities are identified, and corresponding capabilities to be published for published skills, which in turn allows the orchestrator to drive build operations based on any suitable combination of capabilities and/or skills. Previously published capabilities may be identified, and system-generated skills ("shadow skills") may be used to represent the previously published capabilities, further enabling compatibility between constructs while avoiding burdensome data entry.
A cloud infrastructure orchestration service may maintain a service plan and manifest (SPAM) corresponding to a service to be bootstrapped (e.g., provisioned and deployed) to a cloud computing environment (e.g., to a data center). The service plan may specify a deterministic order of releases for performing a process to fully bootstrap the service using one or more build milestones and one or more execution units, each execution unit specifying ordered steps for transitioning between build milestones. Each step may reference one or more execution target checkpoint transitions, which in turn reference an alias of a configuration file that defines a release. A manifest may be used to identify the configuration files and artifacts to be used by the releases and to validate the service plan. A SPAM may be used to reduce/eliminate nondeterministic behavior of previous orchestration systems and to provide visualizations of the bootstrapping process at different granularities.
A cloud infrastructure orchestration service (CIOS) may be used to create a service plan and manifest (SPAM) that defines a deterministic order of releases for bootstrapping a service (e.g., provisioning and deploying resources of the service) to a cloud computing environment (e.g., to a data center). A corresponding manifest may be used to identify the configuration files and artifacts to be used by the releases. The manifest may be used to validate the service plan. The CIOS may be configured to validate the SPAM. If compatible, the SPAM may be added to a SPAM set. A SPAM set (a collection of SPAMs corresponding to respective services) may be used to derive a version set (identifying configuration file and artifact versions) with which a directed acyclic graph may be generated. CIOS may bootstrap various services within the data center based at least in part on traversing the directed acyclic graph.
A cloud infrastructure orchestration service (CIOS) may track build progress. A service plan may define a first execution order of releases for bootstrapping a service (e.g., provisioning and deploying resources of the service) to an execution target (ET) (e.g., a set of devices of a data center). The first execution order may be defined using transitions between ET checkpoints, with each transition and checkpoint being associated with a corresponding release. A directed acyclic graph (DAG) may be generated from any suitable number of service plans associated with various services to define a second execution order for the releases needed to bootstrap the services. At build time, CIOS may track release execution by updating the state of an ET to correspond to an ET checkpoint when the release is successful. ET states may be used by CIOS to enforce the second execution order.
A cloud infrastructure orchestration service (CIOS) may track build progress made by any suitable number of regional orchestrators. An orchestrator control plane may be configured to generate a region build plan for bootstrapping a plurality of services within a data center. The orchestrator control plane may instruct a region orchestrator to execute a build according to the build plan. The region orchestrator may be configured to update an execution state corresponding to the execution of the region build plan as it executes steps of the ordered steps of the region build plan. At any suitable time (e.g., when executing one of the steps fails), intervention data may be received with which a new region build plan may be generated. The new region build plan may be used for subsequent execution of the region build. This may enable run-time corrections to be made.
A cloud-computing service (e.g., a "Puffin Service") is described. The service may maintain service and skill catalogs corresponding to various services to be deployed to a region (e.g., during a region build). The service may host numerous user interfaces with which various service and skill metadata may be provided. In some embodiments, such data may include one or more dependencies between skills. The data managed by the cloud-computing service may be utilized to build a dependency graph. Navigation of the dependency graph may be performed via one or more user interfaces hosted by the cloud-computing service. An orchestration service (e.g., a Multi-Flock Orchestrator) may manage bootstrapping efforts for any suitable number of services during a region build based at least in part on dependencies between skills.
Skills and skills metadata may be used to define a process for building a data center. Skills of one service may depend on skills corresponding to the same or different service. A dependency graph may be generated based on these dependencies. The graph may specify an order by which orchestration operations are to be performed to build the services, thereby building the data center. During execution of the process for building the data center, health states corresponding to the skills may be tracked (based at least in part on alarms and/or namespaces associated with the skills). When an unhealthy skill is identified, the system may traverse the dependency graph to identify a root cause (e.g., failed operations corresponding to a skill on which the unhealthy skill directly/indirectly depends). A notification and/or various options may be provided to address the unhealthy state of one or both skills.
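The root-cause walk over the dependency graph can be sketched as a depth-first traversal toward dependencies; the skills and graph here are illustrative.

```python
# Starting from an unhealthy skill, walk toward its dependencies; the
# deepest unhealthy skill reached is reported as the likely root cause.
depends_on = {  # skill -> skills it depends on (illustrative graph)
    "dns.resolve": [],
    "identity.auth": ["dns.resolve"],
    "storage.write": ["identity.auth"],
}
healthy = {"dns.resolve": False, "identity.auth": False, "storage.write": False}


def root_cause(skill: str) -> str:
    # An unhealthy dependency is a better explanation than the skill that
    # merely inherits the failure, so recurse until none remains.
    for dep in depends_on[skill]:
        if not healthy[dep]:
            return root_cause(dep)
    return skill  # no unhealthy dependency left: this skill is the root cause


print(root_cause("storage.write"))  # -> "dns.resolve"
```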
Techniques are described for data management. An example method can include processing a first message indicating that an intermediate computing system managed by the first data center has received data from a second data center in a second region. The method can further include transmitting first control instructions to the intermediate computing system to validate the data based at least in part on a first criteria. The method can further include processing validation results from the intermediate computing system. The method can further include processing a second message indicating to release the data from the first isolated environment of the intermediate computing system. The method can further include processing, by the computing system, a third message indicating that the second message originated from a computing device located in the first region. The method can further include causing the data to be released from the first isolated environment.
A cloud infrastructure orchestration service (CIOS) may track build progress made by any suitable number of regional orchestrators. The cloud infrastructure orchestration system may include any suitable number of regional orchestrators, each regional orchestrator executing in an isolated hosting environment (e.g., a service cell isolated from other service cells). An orchestrator control plane may be configured to generate a build plan for bootstrapping a plurality of services within a data center. The build plan may be generated based at least in part on a service build definition of a plurality of service build definitions, the service build definition specifying a deterministic process for bootstrapping a service of the plurality of services. The orchestrator control plane may instruct a regional orchestrator to perform bootstrapping operations according to the build plan and may track the progress of the bootstrapping operations on an ongoing basis.
A variety of testing environments and techniques are disclosed. An orchestrator control plane may generate a build plan comprising a plurality of ordered steps for bootstrapping one or more services. The build plan may be generated based at least in part on one or more service plans and manifests that individually specify a deterministic process for bootstrapping a service. The orchestrator control plane may instruct a region orchestrator executing within an isolated testing environment to execute a test build of the one or more services according to the build plan. The region orchestrator may execute, as part of executing the test build, a subset of steps from the plurality of ordered steps of the build plan utilizing resources of the isolated testing environment and in an order identified by the build plan. At any suitable time, the isolated testing environment may be reset to enable subsequent test build executions.
Techniques are provided for creating a "ubiquitous search index" which allows for full-text as well as value range-based search across all columns from multiple database tables, multiple user-defined unmaterialized views, and external sources. In one implementation, the data is indexed in a specially constructed schema-based JSON format without duplicating data. The techniques maintain eventual consistency with the normalized source-of-truth database tables and do not have a significant impact on the performance of transactional Data Manipulation Language (DML) operations.