Techniques for updating a sentiment analysis model in response to feedback for operations that are executed based on an output of the sentiment analysis model are disclosed. The sentiment analysis model analyzes a chat conversation to determine user sentiment. The system executes an operation based on the user sentiment determined by the sentiment analysis model. The operation may be the transfer of the chat conversation from a chatbot to a human agent or the generation of an outbound message by the chatbot. The system receives positive or negative feedback regarding the appropriateness and/or timeliness of the operation. The positive or negative feedback is attributed to the sentiment analysis model. The system generates training data using the feedback. The system then retrains the sentiment analysis model based on the training data.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
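The feedback-attribution loop described in the abstract above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the function name and the two-class sentiment labels are assumptions.

```python
# Hypothetical sketch: turning operation feedback into sentiment-model
# training data. Positive feedback confirms the predicted label; negative
# feedback flips it so retraining can correct the model.

def make_training_example(conversation_text, predicted_sentiment, feedback_positive):
    """Attribute feedback on an executed operation back to the sentiment model."""
    if feedback_positive:
        label = predicted_sentiment          # the operation was appropriate
    else:
        # e.g. "negative" sentiment triggered a premature transfer to an agent
        label = "positive" if predicted_sentiment == "negative" else "negative"
    return (conversation_text, label)

examples = [
    make_training_example("I am fed up with this!", "negative", True),
    make_training_example("Thanks, that helps.", "negative", False),
]
```

The resulting labeled pairs would then feed a conventional retraining step for the sentiment analysis model.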
Generative artificial intelligence-based techniques are disclosed herein to conduct conversations of various types with multiple Large Language Model Services at once. In one aspect, a method is provided that includes identifying large language model services that qualify to take part in the conversation based, at least in part, on a conversation type and profile information provided by a user, user-selected terms selected by the user specific to the conversation, or both, rendering a conversation screen in a graphical user interface, receiving a prompt input into a dialog box of the conversation screen, communicating the prompt input to each of the large language model services, receiving responses from the large language model services based on the prompt input, and rendering the responses in a dialog box of the conversation screen with an indication of which of the large language model services provided each of the responses.
A shared-nothing database system is provided in which parallelism and workload balancing are increased by assigning the rows of each table to “slices”, and storing multiple copies (“duplicas”) of each slice across the persistent storage of multiple nodes of the shared-nothing database system. When the data for a table is distributed among the nodes of a shared-nothing system in this manner, requests to read data from a particular row of the table may be handled by any node that stores a duplica of the slice to which the row is assigned. For each slice, a single duplica of the slice is designated as the “primary duplica”. All DML operations (e.g. inserts, deletes, updates, etc.) that target a particular row of the table are performed by the node that has the primary duplica of the slice to which the particular row is assigned. The changes made by the DML operations are then propagated from the primary duplica to the other duplicas (“secondary duplicas”) of the same slice.
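The slice-and-duplica routing described above can be sketched as follows. This is an illustrative toy; the hash-based row-to-slice mapping, slice count, and node names are assumptions rather than details from the disclosure.

```python
import hashlib

NUM_SLICES = 8
NODES = ["node0", "node1", "node2"]

def slice_of(row_key: str) -> int:
    """Assign a row to a slice by hashing its key (illustrative mapping)."""
    h = int(hashlib.sha256(row_key.encode()).hexdigest(), 16)
    return h % NUM_SLICES

# Each slice has one primary duplica; copies on the remaining nodes
# are its secondary duplicas.
placement = {
    s: {"primary": NODES[s % len(NODES)],
        "secondaries": [n for n in NODES if n != NODES[s % len(NODES)]]}
    for s in range(NUM_SLICES)
}

def node_for_read(row_key, replica_index=0):
    """Reads may be served by any node that stores a duplica of the slice."""
    p = placement[slice_of(row_key)]
    return ([p["primary"]] + p["secondaries"])[replica_index]

def node_for_dml(row_key):
    """All DML on a row is performed by the node holding the primary duplica."""
    return placement[slice_of(row_key)]["primary"]
```

Changes applied at the primary would then be propagated asynchronously to the secondary duplicas of the same slice.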
Systems and methods are disclosed for implementing a virtual IP for a container pod. In certain embodiments, a method may comprise operating a cloud-based network system in a containerized software environment to assign a virtual internet protocol (VIP) address to an application pod, the VIP being directly reachable from a network external to the containerized software environment. The method may include assigning a first VIP address to route traffic to a first fixed IP address assigned to a first application pod, and in response to the first application pod becoming unavailable, switching a second application pod having a second fixed IP address from a standby role to an active role, and assigning the first VIP address to route traffic to the second fixed IP address, enabling continued access to a service offered by the first application pod and the second application pod through the first VIP address.
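The VIP failover flow described above can be sketched as follows. The class name, pod records, and IP addresses are illustrative assumptions; the point is only that the stable VIP is repointed at a standby pod's fixed IP on failure.

```python
# Minimal sketch of VIP-to-pod failover in a containerized environment.

class VipRouter:
    def __init__(self, vip, active_pod, standby_pod):
        self.vip = vip
        self.active = active_pod      # e.g. {"name": "pod-a", "ip": "..."}
        self.standby = standby_pod

    def resolve(self):
        """Return the fixed pod IP the VIP currently routes to."""
        return self.active["ip"]

    def on_pod_failure(self):
        """Promote the standby pod and repoint the VIP at its fixed IP."""
        self.active, self.standby = self.standby, self.active
        return self.resolve()

router = VipRouter("10.0.0.100",
                   {"name": "pod-a", "ip": "192.168.1.10"},
                   {"name": "pod-b", "ip": "192.168.1.11"})
```

External clients keep addressing the VIP (`10.0.0.100` here), so the failover is transparent to them.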
A computer system performs tasks in an access restricted environment. Data is logged in diagnostic files about logical resources in use by the computer system as the computer system attempts to perform the tasks. Occasionally, a problem may prevent the computer system from correctly performing a task. A machine authenticates a user in the access-restricted environment and receives error metadata to initiate an automated process for generating a troubleshooting signature. The automated process involves selecting a metadata extraction policy based on a category of error and using the metadata extraction policy to extract metadata from a diagnostic file. The extracted metadata is analyzed to determine troubleshooting components including the problem, a source of the problem, and/or a version of software that encountered the problem. These troubleshooting components are combined in the troubleshooting signature, which is consumed in a diagnostic tool environment that is separate from the access-restricted environment.
A system grants access for a computing entity to execute a requested operation upon a target resource based on a set of one or more access policies associated with a different computing entity. The access control service receives a surrogate access request from a first computing entity. The surrogate access request represents a request for the first computing entity to execute a requested operation upon a target resource based on a set of one or more access policies corresponding to a principal associated with a second computing entity. The system obtains a set of one or more access policies respectively, including a set of one or more authorized operations associated with the principal, and determines whether the requested operation corresponds to at least one authorized operation. Responsive to determining that the requested operation corresponds to at least one authorized operation, the system authorizes execution of the requested operation.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
An efficient join processing technique is provided that improves multi-level hash join performance by decomposing join operations into a set of opcodes that describe any complex hash join, pipelining execution of these opcodes across join levels, and sharing join operand metadata across opcodes. Complex multi-level joins are easier to describe and execute when decomposed into opcodes. The join technique decomposes multi-level join operations into a minimal set of opcodes such that the join work at each node of the multi-level join can be fully described as an execution of a sequence of opcodes. Operand metadata is shared across the opcodes of all join levels that reference the operand, thereby obviating the need to copy or transmit rows between the join nodes.
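The opcode decomposition described above can be illustrated with a toy interpreter. The opcode names (BUILD, PROBE) and the plan shape are assumptions for illustration only; the disclosed opcode set is not specified in the abstract.

```python
# Illustrative two-level hash join expressed as a sequence of opcodes.
# Operand rows live in one shared dict, so opcodes at different join
# levels reference the same rows without copying them between nodes.

def execute(opcodes, operands):
    """Run (opcode, operand_name, key_fn) steps, pipelining across levels."""
    hash_tables = {}
    result = None
    for op, name, key in opcodes:
        if op == "BUILD":
            table = {}
            for row in operands[name]:
                table.setdefault(key(row), []).append(row)
            hash_tables[name] = table
        elif op == "PROBE":
            # probe input is the prior level's output, or a base operand
            probe_input = result if result is not None else operands[name[0]]
            table = hash_tables[name[1]]
            result = [left + right
                      for left in probe_input
                      for right in table.get(key(left), [])]
    return result

operands = {
    "orders":    [(1, "o1"), (2, "o2")],
    "customers": [(1, "alice"), (2, "bob")],
    "regions":   [("alice", "eu"), ("bob", "us")],
}
plan = [
    ("BUILD", "customers", lambda r: r[0]),
    ("BUILD", "regions",   lambda r: r[0]),
    ("PROBE", ("orders", "customers"), lambda r: r[0]),  # level 1
    ("PROBE", (None, "regions"),       lambda r: r[3]),  # level 2, pipelined
]
```

Running `execute(plan, operands)` joins orders to customers and then to regions in one pipelined pass.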
A unified schema, such as a common metrics schema, is provided that can universally cater to different kinds of ML metrics generated by different ML pipelines and platforms. In certain implementations, a metrics management system is provided. The metrics management system is based upon the unified schema and provides a repository for storing ML-related metrics in which the metrics may be generated by different disparate pipelines or platforms. The metrics management system may include adapters, converters, layers, libraries, or combinations thereof that can receive metric data and can provide generalized data that can be consumed by various different types of downstream systems. The generalized data may be provided to a downstream system, such as a user interface, an adjustment module, etc.
G06F 16/25 - Integrating or interfacing systems involving database management systems
9.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR DETECTING AND PROCESSING INTER-PUBLIC LAND MOBILE NETWORK (PLMN) SERVICE-BASED INTERFACE (SBI) MESSAGES WITHOUT 3GPP-SBI-ORIGINATING-NETWORK-ID HEADERS
A method for detecting and processing egress inter-PLMN SBI request messages without 3gpp-Sbi-Originating-Network-Id headers includes receiving, by a proxy NF serving a plurality of PLMNs, an egress inter-PLMN SBI request message without a 3gpp-Sbi-Originating-Network-Id header. The method further includes determining an originating network identifier from the message, from DNS, or from a database record. The method further includes adding a 3gpp-Sbi-Originating-Network-Id header to the message, populating the header with the originating network identifier, and forwarding the message to or towards a target PLMN.
The present disclosure relates to a custom framework for fine-grained human activity recognition. One or more input videos may be accessed, where the one or more input videos comprise one or more frames depicting one or more actors and one or more objects. A plurality of object-pose interaction graphs may be generated for individual frames from the one or more input videos based at least in part on one or more objects of interest from the one or more objects and on one or more joint keypoints of the one or more actors. A first graph neural network may be trained based at least in part on the plurality of object-pose interaction graphs to identify spatial information for the one or more actors, the one or more objects of interest, and one or more interactions between the one or more actors and the one or more objects of interest. A second graph neural network may be trained based at least in part on the plurality of object-pose interaction graphs and one or more keyframes from the plurality of frames to identify temporal information for the one or more actors, the one or more objects of interest, and the one or more interactions between the one or more actors and the one or more objects of interest. A classifier may be trained to identify one or more actions in the one or more input videos based at least in part on the spatial information and the temporal information.
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
The technology disclosed herein enables resiliency of routing between NFs when degraded 5G NF topology information is provided to an SCP by an NRF. In a particular example, a method includes transmitting requests for NRF status from a Service Communications Proxy (SCP) to NRFs in a 5G network. The NRFs exchange messages with each other to determine whether Network Function (NF) topology information is available from the NRFs. The method further includes receiving responses to the requests in the SCP. The responses indicate a number of the NRFs from which the NF topology information is available. The method also includes identifying one or more failed NRFs of the NRFs that are in a failed state based on the responses. The method further includes aggregating the NF topology information from the operative NRFs when at least one of the NRFs remains operative.
Operations include identifying a cryptographic Application Programming Interface (API) call corresponding to a Cryptography Architecture API. A cryptographic security ruleset may be applied to match one or more rules based on attributes of a cryptographic operation identified by the cryptographic API call. The system may perform operations associated with the one or more matched rules. As an example, an operation may include generating a risk analysis metric for the cryptographic API call. The system may generate a cryptographic health report based on the risk analysis metric.
Techniques are provided for optimizing resources (e.g., CPU, memory, IO) allocated to a database server using one or more machine learning models. A database management system executes a database workload for the database server. During execution of the workload, a monitoring service collects metrics for the database workload and sends the metrics to a resource allocation prediction service. The resource allocation prediction service implements one or more machine learning models to generate optimized resource allocation predictions. A generated resource allocation prediction is sent to a change recommendation generation service that generates change instructions for updating the resources allocated to the database server in order to align the current resource allocation of the database server with the resource allocation prediction.
Systems and methods for converting data flow to data processing code. One example system includes an electronic processor configured to receive a data flow for processing a set of source data on a target runtime, determine a characteristic associated with the set of source data, determine a target configuration of the target runtime, generate data processing code at least by adding an operator to the data flow at a point based at least on the characteristic associated with the set of source data and the target configuration of the target runtime, and output the data processing code to a compiler for generation of machine executable code.
Systems, methods, and other embodiments associated with automatic configuration of a circular buffer for ingesting a stream and generating ML estimates in real-time are described. In one embodiment, an example method includes loading a stream of multivariate time series observations into a circular buffer at a real-time pace of input from a target asset. The circular buffer is configured with a buffer configuration that specifies buffer length and choice of arrangement as a single-buffer or dual-buffer. The method then adjusts the buffer configuration until generation of machine learning estimates of the multivariate time series observations that are in the circular buffer satisfies a threshold test for generation at the real-time pace. Then, at the real-time pace, the method loads additional multivariate time series observations into the circular buffer in the adjusted configuration and generates additional machine learning estimates of the additional multivariate time series observations.
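The adjust-until-real-time step described above can be sketched with a simulated timing model. The halving strategy and cost model are illustrative assumptions; the disclosure also adjusts the single-buffer/dual-buffer arrangement, which this sketch omits.

```python
# Illustrative tuning loop: shrink the circular buffer until generating
# ML estimates for a full buffer fits within one real-time arrival interval.

def tune_buffer(initial_length, pace_seconds, estimate_cost_per_obs):
    """Halve the buffer length until a full pass over the buffer
    (length * per-observation cost) meets the real-time pace threshold."""
    length = initial_length
    while length > 1 and length * estimate_cost_per_obs > pace_seconds:
        length //= 2
    return length

# e.g. 1 ms of estimate cost per observation, new observations every 0.05 s:
tuned = tune_buffer(initial_length=1000, pace_seconds=0.05,
                    estimate_cost_per_obs=0.001)
```

After tuning, ingestion and estimate generation proceed at the real-time pace with the adjusted configuration.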
Techniques are disclosed herein for resolving date/time expressions while transforming natural language to a logical form such as a meaning representation language. A class label for a token in a natural language utterance and a meaning representation for the natural language utterance can be predicted. The class label can be associated with a date/time expression. The meaning representation can include an operator and a value. When the value associated with the class label matches a predetermined value type or the operator matches a predetermined operator, the value and/or the operator can be modified, and an executable statement can be generated for the meaning representation. A query on a computing system can be executed using the executable statement.
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
17.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR OUT-OF-BAND TRANSPORT LAYER SECURITY (TLS) VERSION AND PARAMETER NEGOTIATION USING A NETWORK FUNCTION REPOSITORY FUNCTION (NRF)
An example method includes registering, at a network function repository function (NRF) of a telecommunications network, a producer network function, including receiving a first transport layer security (TLS) version for the producer network function; providing, by the NRF, the first TLS version of the producer network function to a consumer network function in a network function discovery response; and establishing, by the consumer network function, a service based interface (SBI) communication with the producer network function based on the first TLS version and a second TLS version for the consumer network function.
Techniques are disclosed for the querying, retrieval, and presentation of data. A data analytic system can enable a user to provide input, through a device, to query data. The data analytic system can identify the semantic meaning of the input and perform a query based on the semantic meaning. The data analytic system can crawl multiple different sources to determine a logical mapping of data for an index. The index may include one or more subject areas, terms defining those subject areas, and attributes for those terms. The index may enable the data analytic system to perform techniques for matching terms in the query to determine a semantic meaning of the query. The data analytic system can determine a visual representation best suited for displaying results of a query determined by semantic analysis of an input string by a user.
An electronic form management system is programmed to: (i) provide a planning UI configured to enable a planning user to assign conditions of approval to a planning application during a planning phase, wherein each condition of approval includes a completion status and one or more conditions to which a permit application is subject during a permitting phase; (ii) provide a permitting UI configured to enable a permitting user to administer the permit application during the permitting phase; (iii) update a completion status of at least one condition of approval data element of the plurality of conditions of approval data elements in response to a condition being satisfied; (iv) calculate an aggregate completion status of a set of conditions of approval data elements; and (v) cause to be displayed at least one graphical interface element representing the calculated aggregate completion status of the set of conditions of approval data elements.
A method may include receiving a first series of frames of image data that includes a representation of a vehicle. The method may include, for each frame of the first series of frames of the image data, identifying a wheel based at least in part on at least a portion of the frame of the image data. The method may include determining a set of coordinates indicating a position of the wheel within the frame of the image data. The method may include generating a graph based at least in part on the set of coordinates indicating the position of each wheel identified in each frame of the first series of frames of the image data. The method may include determining whether the wheel is associated with the vehicle. The method may include generating an axle count of the vehicle.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
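The wheel-to-axle aggregation described in the abstract above can be sketched as a simple per-frame clustering of wheel coordinates. The gap threshold and the side-view assumption (one visible wheel per axle) are illustrative assumptions, not details from the disclosure.

```python
# Simplified sketch of axle counting: merge per-frame wheel x-coordinates
# into distinct wheel tracks, then count the tracks as axles.

def count_axles(wheel_positions_per_frame, gap=50):
    """wheel_positions_per_frame: one inner list of wheel x-coordinates
    per frame. Detections closer than `gap` pixels to an existing track
    are treated as the same physical wheel."""
    tracks = []  # running mean x-position per wheel track
    for frame in wheel_positions_per_frame:
        for x in frame:
            for i, tx in enumerate(tracks):
                if abs(x - tx) < gap:
                    tracks[i] = (tx + x) / 2   # refine the track position
                    break
            else:
                tracks.append(x)               # a new wheel track appears
    return len(tracks)

frames = [[100, 420], [102, 423], [101, 419, 700]]  # third wheel seen late
```

Here three wheel tracks emerge across the frames, yielding an axle count of three.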
21.
SYSTEM AND METHOD FOR SUPPORTING A QUOTA POLICY LANGUAGE IN A CLOUD INFRASTRUCTURE ENVIRONMENT
Systems and methods described herein support a quota policy language in a cloud infrastructure environment. A quota policy can be configured by setting a statement syntax. To set a quota, the quota statement can establish a number of factors. Each quota can be unique within a service family (e.g., compute), and as such, each quota can define a target service along with a quota name. Next, a quota can define the value to set the quota to, as well as a compartment the quota targets. There can also be a set of conditions that determine when the quota is applied.
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 41/5006 - Creating or negotiating SLA contracts, guarantees or penalties
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
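A quota statement carrying the factors listed in the abstract above (service family, quota name, value, compartment, optional conditions) can be parsed as follows. The concrete grammar is an assumption inferred from the abstract, not the disclosed syntax.

```python
import re

# Illustrative parser for a quota statement of the assumed form:
#   set <family> quota <name> to <value> in compartment <compartment>
#       [where <condition>]

STMT = re.compile(
    r"set (?P<family>\w+) quota (?P<name>[\w-]+) to (?P<value>\d+)"
    r" in compartment (?P<compartment>[\w-]+)"
    r"(?: where (?P<condition>.+))?$"
)

def parse_quota(statement: str) -> dict:
    """Extract the quota's factors from a single policy statement."""
    m = STMT.match(statement.strip())
    if not m:
        raise ValueError(f"not a valid quota statement: {statement!r}")
    d = m.groupdict()
    d["value"] = int(d["value"])
    return d

q = parse_quota("set compute quota vm-count to 10 in compartment dev "
                "where request.region = 'us-phoenix-1'")
```

The optional `where` clause captures the conditions that determine when the quota is applied.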
A computer-implemented method includes receiving a first input for a deterministic model and generating, in accordance with the model, a first output corresponding to the first input; the generating is performed in a first computation time. The method also includes storing the first input and the first output as a first input-output pair in a cache; the first input-output pair has a priority in the cache according to the first computation time. The method further includes subsequently receiving a second input for the model that is a duplicate of the first input; and generating a second output corresponding to the second input by retrieving the first output from the cache.
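The computation-time-prioritized cache described above can be sketched as follows. The class name and the evict-cheapest policy are illustrative assumptions; the abstract states only that cache priority follows computation time.

```python
import time

# Sketch of a cache for a deterministic model where eviction priority is
# the recomputation cost: entries that were cheap to compute are evicted
# first, since re-deriving them is inexpensive.

class ComputeTimeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # input -> (output, computation_time)

    def get_or_compute(self, x, model):
        if x in self.store:
            return self.store[x][0]          # duplicate input: reuse output
        start = time.perf_counter()
        y = model(x)                         # deterministic, so cacheable
        cost = time.perf_counter() - start
        if len(self.store) >= self.capacity:
            # evict the entry that is cheapest to recompute
            cheapest = min(self.store, key=lambda k: self.store[k][1])
            del self.store[cheapest]
        self.store[x] = (y, cost)
        return y
```

Because the model is deterministic, retrieving the cached output for a duplicate input is guaranteed to equal recomputation.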
Techniques are described herein that provide machine learning-augmented report summarization. One or more embodiments train and apply a machine learning model to generate a summary report for an entity that is associated with a particular hierarchical level in an organization utilizing base reports from entities at another hierarchical level in the organization. A training data set used for training the machine learning model includes base reports at a particular hierarchical level in the organization and identification of content from the base reports that is to be used for generating a summary report. The machine learning model may then be applied to any set of base reports to generate a corresponding summary report.
Techniques for natural language processing include accessing an input string comprising a natural language utterance and a database schema representation for a database; providing the natural language utterance to a first encoder to generate one or more embeddings of the natural language utterance; providing the database schema representation to the first encoder to generate one or more embeddings of the database schema representation; encoding, by a second encoder, relations between elements in the database schema representation and words in the natural language utterance based on the one or more embeddings of the natural language utterance and the one or more embeddings of the database schema representation; and generating a logical form for the natural language utterance based on the encoded relations, the one or more embeddings of the natural language utterance, and the one or more embeddings of the database schema representation.
A method may include receiving a first series of frames of image data that includes a representation of a vehicle. The method may include, for each frame of the first series of frames of the image data, identifying a wheel based at least in part on at least a portion of the frame of the image data. The method may include determining a set of coordinates indicating a position of the wheel within the frame of the image data. The method may include generating a graph based at least in part on the set of coordinates indicating the position of each wheel identified in each frame of the first series of frames of the image data. The method may include determining whether the wheel is associated with the vehicle. The method may include generating an axle count of the vehicle.
Techniques are described herein for efficiently managing and transmitting changes to a standard object within a data processing system while preserving the integrity of customer-defined custom views on that standard object. The proposed systems and methods introduce a novel approach to handling Create, Update, and Destroy (hereinafter “CUD”) operations on a standard object, allowing customers to define custom views that encapsulate specific fields of interest. The systems and methods involve the creation of a replication object, triggered by changes to the standard object, which captures these alterations without storing actual data. This mechanism, coupled with the generation of change logs and transmission to a message bus, enables changes to the standard object to be performed without impacting the customer-defined views.
A data management system receives updates to records of a source dimension. Some records of the source dimension reference target dimensions. The data management system identifies template records from existing records in the source dimension for modeling changes to connections with the target dimensions based on the updated records in the source dimension. The template records are discovered using rules-driven processes, AI-driven processes, or serial or parallel hybrid processes combining rules and AI. These processes use ancestor information from the updated records to find best-matching template records. The rules-driven processes additionally rely on matching fields, and the AI-driven processes additionally rely on vector embeddings and optionally clustering. Updates are made to the target records in the target dimensions, including any roll-up structures indicated for data propagation, identified using the template records, and downstream applications using the target records may consume the updates.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
28.
Copy-Or-Generate Model With Semi-Sandboxed Or Fully Sandboxed Decoding To Handle Text Generation Tasks In Accurate And Secure Manner
A copy-or-generate model architecture is provided that generates a generation distribution obtained from the outputs of the last decoder layer and a copy distribution built from the cross-attention scores of the last decoder layer. The model applies copy weights to the generation distribution and the copy distribution to determine whether to generate a next token or to copy a token from the prompt. The model provides better security by ensuring that the input values from the prompt are directly copied to the output when appropriate, such that the model is blind to the original values being copied. In a semi-sandboxed configuration, additional information may be input to the model to help the model adapt the output based on the context of those input fields.
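The mixing of the two distributions via a copy weight can be illustrated with a toy example. The vocabulary, the probabilities, and the scalar `p_copy` formulation are made-up assumptions; in the disclosed architecture the distributions come from the decoder outputs and cross-attention scores.

```python
# Toy illustration of combining a generation distribution with a copy
# distribution using a copy weight p_copy.

def copy_or_generate(gen_dist, copy_dist, p_copy):
    """Final token distribution: (1 - p_copy) * generate + p_copy * copy.

    gen_dist is over the vocabulary; copy_dist assigns cross-attention
    mass to prompt tokens, mapped here onto the same vocabulary.
    """
    vocab = set(gen_dist) | set(copy_dist)
    return {tok: (1 - p_copy) * gen_dist.get(tok, 0.0)
                 + p_copy * copy_dist.get(tok, 0.0)
            for tok in vocab}

gen_dist  = {"the": 0.6, "cat": 0.3, "42": 0.1}
copy_dist = {"42": 1.0}      # a prompt value that should be copied verbatim
final = copy_or_generate(gen_dist, copy_dist, p_copy=0.9)
next_token = max(final, key=final.get)
```

With a high copy weight, the prompt token dominates the final distribution, so the value is reproduced exactly rather than regenerated.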
Techniques for processing a large dataset to extract information from the dataset, where the processing is performed by multiple systems and where the entire dataset, due to its size, cannot be loaded into the memory of any one of the multiple systems. In certain implementations, the information extracted from the dataset is in the form of a set of one or more statistical metrics computed for the dataset. For example, the dataset may include datapoints related to a machine-learning (ML) model, and a metric value can be computed for the ML model based upon the dataset datapoints. The metric value may, for example, be a metric that measures the performance of the ML model.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
30.
SYSTEM AND METHOD FOR GENERATING DATA VISUALIZATION SCORES FOR USE WITH A DATA ANALYTICS ENVIRONMENT
Embodiments described herein are generally related to systems and methods for generating data visualization scores, for use with data analytics environments. In accordance with an embodiment, the system can operate in the manner of an expert system, or according to a series of processes or rules, to examine a data visualization of interest, compare a list of found elements with element types specified by an analytics data visualization score matrix, and generate, based on matching found elements with the analytics data visualization matrix, a data visualization score associated with the data visualization. In accordance with an embodiment, the system can operate as a data visualization advisor, during preparation of a data visualization, to provide a user with recommendations or score values indicative of a quality or complexity of their data visualization, which may be helpful in improving their data visualization, for example from a beginner-level to a more advanced-level.
A distributed computing system is described that leverages a nearline storage layer to minimize the downtime required for bootstrapping a new computing cluster in the distributed computing system. The system executes a computing cluster comprising a set of computing nodes and determines a set of one or more data segments to be written to a nearline storage system. The system writes the data segments to the nearline storage system. In certain examples, the system receives a request to create a second computing cluster and responsive to the request, bootstraps the second computing cluster using the set of data segments stored on the nearline storage system. The system additionally leverages the nearline storage layer to accelerate query processing by the computing nodes of a computing cluster.
A sharded, permissioned, distributed ledger may reduce the amount of work and communication required by each participant, thus possibly avoiding scalability bottlenecks that may be inherent in previous distributed ledger implementations and possibly enabling the use of additional resources to translate to increased throughput. A sharded, permissioned, distributed ledger may be made up of multiple shards, each of which may also be a distributed ledger and which may operate in parallel. Participation within a sharded, permissioned, distributed ledger may be allowed only with permission of an authority. A sharded, permissioned, distributed ledger may include a plurality of nodes, each including a dispatcher configured to receive transaction requests from clients and to forward received requests to verifiers configured to append transactions to individual ones of the shards.
G06Q 20/06 - Private payment circuits, e.g. involving electronic currency used only among participants of a common payment scheme
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
33.
SYSTEM AND METHOD FOR GENERATING DATA VISUALIZATION SCORES FOR USE WITH A DATA ANALYTICS ENVIRONMENT
Embodiments described herein are generally related to systems and methods for generating data visualization scores, for use with data analytics environments. In accordance with an embodiment, the system can operate in the manner of an expert system, or according to a series of processes or rules, to examine a data visualization of interest, compare a list of found elements with element types specified by an analytics data visualization score matrix, and generate, based on matching found elements with the analytics data visualization matrix, a data visualization score associated with the data visualization. In accordance with an embodiment, the system can operate as a data visualization advisor, during preparation of a data visualization, to provide a user with recommendations or score values indicative of a quality or complexity of their data visualization, which may be helpful in improving their data visualization, for example from a beginner-level to a more advanced-level.
Techniques are disclosed herein for focused training of language models and end-to-end hypertuning of the framework. In one aspect, a method is provided that includes obtaining a machine learning model pre-trained for language modeling, and post-training the machine learning model for various tasks to generate a focused machine learning model. The post-training includes: (i) training the machine learning model on an unlabeled set of training data pertaining to a task that the machine learning model was pre-trained for as part of the language modeling, and the unlabeled set of training data is obtained with respect to a target domain, a target task, or a target language, and (ii) training the machine learning model on a labeled set of training data that pertains to another task that is an auxiliary task related to a downstream task to be performed using the machine learning model or output from the machine learning model.
Techniques for concurrent lazy reference tracking in an old garbage collection generation are disclosed, including: encountering, by a mutator thread during a first garbage collection epoch, a first instruction to write a first value to a field; responsive to encountering the first instruction to write the first value to the field: entering a slow-path write barrier; performing, by the slow-path write barrier, a first one or more reference counting operations with respect to the field; encountering, by the mutator thread during the first garbage collection epoch and subsequent to encountering the first instruction to write the first value to the field, a second instruction to write a second value to the field; responsive to encountering the second instruction to write the second value to the field: entering a fast-path write barrier; wherein the fast-path write barrier does not perform any reference counting operations with respect to the field.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
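The epoch-based fast/slow write-barrier split described above can be sketched by remembering, per field, the epoch in which the slow path last ran: the first write in an epoch does the reference-counting work, and later writes in the same epoch skip it. The per-field bookkeeping structure is an assumption for illustration.

```python
class LazyRefTrackingHeap:
    """Sketch: the first write to a field in a GC epoch takes the slow-path
    barrier (reference-counting work); subsequent writes in the same epoch
    take the fast path and do no reference counting."""
    def __init__(self):
        self.epoch = 0
        self.fields = {}           # field -> current value
        self.last_slow_epoch = {}  # field -> epoch of last slow-path barrier
        self.refcount_ops = 0      # counts slow-path reference-counting operations

    def new_epoch(self):
        self.epoch += 1

    def write(self, field, value):
        if self.last_slow_epoch.get(field) != self.epoch:
            # Slow-path write barrier: perform reference counting once per epoch.
            self.refcount_ops += 1
            self.last_slow_epoch[field] = self.epoch
        # Fast path: just store the value, no reference-counting work.
        self.fields[field] = value
```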
36.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PRESERVING NETWORK BANDWIDTH DURING NETWORK ADDRESS TRANSLATION (NAT) DEVICE UNAVAILABILITY OR AFTER NAT DEVICE REBOOT
A method for preserving network bandwidth during NAT device unavailability or after a NAT device reboot includes receiving, by a session initiation protocol (SIP) proxy, messages from SIP endpoints, determining, by the SIP proxy, that at least some of the SIP endpoints are located behind a NAT device. The method further includes determining, by the SIP proxy, that the NAT device is potentially unavailable or has potentially rebooted, testing, by the SIP proxy, reachability of at least some of the SIP endpoints located behind the NAT device and determining that the at least some of the SIP endpoints are unreachable, classifying, by the SIP proxy, all of the SIP endpoints located behind the NAT device as unreachable, and rejecting, by the SIP proxy, SIP messages directed towards the SIP endpoints located behind the NAT device.
A virtual assistant may engage in a first conversation with a target user. Based on the first conversation, the virtual assistant may identify a need for a set of information to be provided to the target user. Upon determining that the set of information cannot be obtained from a knowledge base that is accessible to the virtual assistant, the virtual assistant may initiate a second conversation with an informed user. The virtual assistant may request the set of information from the informed user. The request directed to the informed user may be different from a request received by the virtual assistant from the target user in the first conversation. Upon receiving the set of information, the virtual assistant may alter the set of information. The virtual assistant may subsequently provide the set of information to the target user.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
38.
MANAGING CONTENT ITEMS TO FILL A CONTENT SLOT ON A WEBPAGE
The present invention describes a computer-implemented method for managing content to be rendered in a webpage. A web server receives a request to access a webpage. A user identifier or a device identifier is extracted from the request. The web server transmits a request for a Self Sovereign Identity (SSI) to a gatekeeper server using the identifier. The web server receives the SSI, generated based on a propensity score indicating the extent to which the user has a propensity for releasing the user's data to a digital platform. The web server extracts data about content preferences, user characteristics, or browsing history from the SSI based on data related to the privacy of a user associated with the user identifier. Using the extracted data, a content item is selected to fill a content slot on the webpage. The web server configures the webpage such that the selected content item fills the content slot.
During pretraining, a computer generates three trainable and untrained machine learning models that are a token sequence encoder, a token predictor, and a path predictor. A sequence of lexical tokens is generated that represents a lexical text in a training corpus. A graph is generated that represents the lexical text. In the graph, a next traversal path is selected that corresponds to a next lexical token that is adjacent to a sliding subsequence of the sequence of lexical tokens. From the subsequence, the token sequence encoder infers an encoded sequence that represents the subsequence. The path predictor and token predictor accept the encoded sequence as input for respective inferencing for which respective training losses are measured. Both training losses are combined into a combined loss that is used to increase the accuracy of the three machine learning models by, for example, backpropagation of the combined loss.
Techniques are described for creating and enforcing network policies using a zero trust packet routing (ZPR) policy language (ZPL). Generally, ZPL allows users to create data-centric, intent-based policies that are evaluated and enforced at different enforcement points within one or more networks to control data flow. According to some configurations, ZPL is used to define ZPR policy statements that specify who/what (e.g., users, computing resources) can access data and how traffic flows throughout one or more networks. Generally, when packets are transmitted/received, the enforcement points evaluate the ingress or egress rules associated with the policy. In this way, packets are not transmitted from an enforcement point to a next hop until the rules are evaluated by the enforcement point and the enforcement point determines that the transmission is authorized by the policy.
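The enforcement-point evaluation described above can be sketched as a tag-based rule lookup: a packet is forwarded to the next hop only if some rule allows its source tag to access its data tag. The rule and packet shapes here are assumptions for illustration, not the disclosed ZPL syntax.

```python
def authorized(packet: dict, rules: list) -> bool:
    """Sketch of an enforcement point's check: return True only if a rule
    explicitly allows this (source tag, data tag) combination.
    Packets and rules are plain dicts in this hypothetical model."""
    return any(
        rule["source_tag"] == packet["source_tag"]
        and rule["data_tag"] == packet["data_tag"]
        and rule["action"] == "allow"
        for rule in rules
    )
```

A real enforcement point would evaluate both ingress and egress rules and drop the packet (rather than return False) when no rule authorizes the transmission.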
Techniques are described for performing zero trust packet routing (ZPR) in one or more networks. ZPR allows users to create data-centric, intent-based policies that are evaluated and enforced at different enforcement points within one or more networks to control data flow. The policy statements specify who/what (e.g., users, computing resources) can access data and where that data is allowed to travel throughout one or more networks. ZPL policy statements allow or deny tagged resources (users, compute instances, . . . ) access to tagged data. Generally, when packets are transmitted/received, the enforcement points evaluate the ingress or egress rules associated with the policy. In this way, packets are not transmitted from an enforcement point to a next hop until the rules are evaluated by the enforcement point and the enforcement point determines that the transmission is authorized by the policy.
The present embodiments relate to systems and methods for automatic sign-in upon account signup. Particularly, the present embodiments can utilize a federated login approach for automatic sign-in upon account signup for a cloud infrastructure. Specifically, the signup and sign-in service (also known as SOUP) and an identity provider portal can be configured such that the nodes are aware of each other as Security Assertion Markup Language (SAML) partners. After new account registration, the signup service can redirect the user browser to a cloud infrastructure console to start a federated login flow, where a sign-in service can issue a SAML authentication request and redirect it to the signup service. Responsive to validating the browser using a SAML authentication process, the browser can be automatically signed into the new account and allowed to access the account relating to the cloud infrastructure service.
A computer comprising multiple processors and non-uniform memory implements multiple threads that perform a lock operation using a shared lock structure that includes a pointer to a tail of a first-in-first-out (FIFO) queue of threads waiting to acquire the lock. To acquire the lock, a thread allocates and appends a data structure to the FIFO queue. The lock is released by selecting and notifying a waiting thread to which control is transferred, with the thread selected executing on the same processor socket as the thread controlling the lock. A secondary queue of threads is managed for threads deferred during the selection process and maintained within the data structures of the waiting threads such that no memory is required within the lock structure. If no threads executing on the same processor socket are waiting for the lock, entries in the secondary queue are transferred to the FIFO queue preserving FIFO order.
Techniques are described for identifying successful adversarial attacks for a black box reading comprehension model using an extracted white box reading comprehension model. The system trains a white box reading comprehension model that behaves similar to the black box reading comprehension model using the set of queries and corresponding responses from the black box reading comprehension model as training data. The system tests adversarial attacks, involving modified informational content for execution of queries, against the trained white box reading comprehension model. Queries used for successful attacks on the white box model may be applied to the black box model itself as part of a black box improvement process.
G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
G06F 18/214 - Generating training patternsBootstrap methods, e.g. bagging or boosting
G06F 18/22 - Matching criteria, e.g. proximity measures
Techniques are described for performing zero trust packet routing (ZPR). A method comprises accessing a policy that specifies how traffic flows through one or more networks, wherein policy statements of the policy reference tags associated with resources of the one or more networks, wherein the tags include one or more of a first tag that identifies first data, a second tag that identifies an identity of a user, a third tag that identifies an identity of a computing resource, a fourth tag that identifies an identity of a network; determining, based on the policy, rules to enforce at enforcement points (EPs) within the one or more networks; distributing the rules to the EPs; and enforcing the rules associated with the policy at individual ones of the EPs, wherein enforcing the rules includes evaluating one or more layer 4 attributes and one or more layer 7 attributes.
Techniques are described for using a zero trust packet routing (ZPR) architecture to enforce ZPR policy language (ZPL) statements. Instead of being restricted to perimeter-based security and defining and creating rules that are difficult to maintain, techniques described allow users to create data-centric, intent-based policies using ZPL that are enforced using a ZPR architecture at different enforcement points within one or more networks. According to some configurations, a method comprises receiving a packet at an enforcement point within one or more networks that include a plurality of enforcement points; accessing one or more rules associated with a policy that specifies how traffic flows through the enforcement point and other enforcement points of the one or more networks, wherein the policy includes one or more layer 4 rules and one or more layer 7 rules; and enforcing the one or more rules associated with the policy at the enforcement point.
Techniques for extracting tables from images using a Language Model. The techniques include detecting, within an image, an area that includes a table. The techniques further include extracting, from the area of the image, tabular data for the table, the extracted tabular data comprising a plurality of content items in the table and structural information for the table. The techniques further include generating a prompt that includes the plurality of content items and the structural information. The techniques further include providing the prompt as input to a language model. The techniques further include responsive to providing the prompt as input to the language model, generating, by the language model, a parsable representation of the table, wherein the parsable representation is in a format and includes the plurality of content items of the table and the structural information of the table in the image.
Techniques are described for using a zero trust packet routing (ZPR) architecture to enforce ZPR policy language (ZPL) statements. Instead of being restricted to perimeter-based security and defining and creating rules that are difficult to maintain, techniques described allow users to create data-centric, intent-based policies using ZPL that are enforced using a ZPR architecture at different enforcement points within one or more networks. According to some configurations, ZPL is used to define the policy statements that specify who/what (e.g., users, computing resources) can access data and where that data is allowed to travel throughout one or more networks. In this way, packets are not transmitted from an enforcement point to a next hop until the rules are evaluated by the enforcement point and the enforcement point determines that the transmission is authorized by the policy.
A computing device may receive a plurality of scanning requests, with at least one scanning request in the plurality identifying a target address of a target network. For at least a subset of the plurality of scanning requests, the computing device may generate a scanner instance and a virtual network interface card (VNIC) in response to the scanning request. The scanner instance and the VNIC communicate with a routing namespace that can communicate with two or more scanner instances simultaneously. Until the target address has been scanned, one or more packets can be sent from the scanner instance to the target address via the routing namespace and VNIC. The one or more packets can be wrapped in one or more packet wrappers identifying the target address and the target network. In response to the target address being scanned, the scanner instance and VNIC can be decommissioned.
Operations of a certificate bundle distribution service may include: detecting a trigger condition to distribute a certificate bundle that includes a set of one or more certificate authority certificates; partitioning each particular network entity of a plurality of network entities associated with a computer network into one of a plurality of certificate distribution groups based on an entity identifier of the particular network entity, in which each particular certificate distribution group includes a particular subset of network entities from the plurality of network entities; selecting a particular certificate distribution group, of the plurality of certificate distribution groups, for distribution of the certificate bundle; and transmitting the certificate bundle to the particular subset of network entities in the particular certificate distribution group.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
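The partitioning step of the certificate bundle distribution service above can be sketched as a deterministic hash of each entity identifier into one of a fixed number of distribution groups, so that the bundle can be rolled out group by group. The group count and hash choice are assumptions for illustration.

```python
import hashlib

NUM_GROUPS = 8  # assumed number of certificate distribution groups

def distribution_group(entity_id: str, num_groups: int = NUM_GROUPS) -> int:
    """Assign a network entity to a distribution group deterministically,
    based only on its entity identifier."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_groups

def partition_entities(entity_ids, num_groups: int = NUM_GROUPS) -> dict:
    """Partition all entities into groups; the service would then select one
    group at a time and transmit the certificate bundle to its members."""
    groups = {g: [] for g in range(num_groups)}
    for eid in entity_ids:
        groups[distribution_group(eid, num_groups)].append(eid)
    return groups
```

Deterministic assignment means a re-run of the partitioning (e.g. after a new trigger condition) places every entity in the same group as before.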
51.
Performing Security Protocol Transitions While Executing An Execution Environment Of A Virtual Cloud Network
A system determines a trigger condition for executing a security protocol transition with respect to an execution environment of a virtual cloud network. In response to determining the trigger condition, the system executes the security protocol transition while executing the execution environment. The security protocol transition includes terminating execution of a first security protocol and initiating execution of a second security protocol. The first security protocol includes utilizing a first authorization process to authorize a set of network entities to access a set of target resources. The second security protocol includes utilizing a second authorization process to authorize the set of network entities to access the set of target resources. The trigger condition indicates that one or more parameters associated with the virtual cloud network meets a set of transition criteria for executing the security protocol transition.
A target entity-specific API document generation system is disclosed that includes capabilities for generating an API document in accordance with the API functionality specified by a target entity. The system obtains an API document that is relevant to a source entity and converts the API document into a common interface format (CIF) application programming interface (API) document. The system then identifies one or more relevant portions within the CIF API document for further processing. The system identifies one or more component types within a relevant portion. The system identifies content related to a component type and then transforms the content related to the component type based on a set of transformation rules associated with the component type. The system then generates a target entity-specific API document for a target entity where the target entity-specific API document includes the transformed content related to the component type identified in the relevant portion.
Techniques are described for creating and enforcing network policies using a zero trust packet routing (ZPR) policy language (ZPL). Generally, ZPL allows users to create data-centric, intent-based policies that are evaluated and enforced at different enforcement points within one or more networks to control data flow. According to some configurations, a method comprises accessing policy statements defined according to a Zero Trust Packet Routing (ZPR) Policy Language (ZPL) that supports layer 4 policy statements and layer 7 policy statements that are used to define how traffic flows through the one or more networks; and enforcing rules associated with the policy statements at enforcement points within the one or more networks.
A computer-implemented method includes receiving a graph pattern matching query, determining that the query is an unbounded recursive path query (RPQ), and initializing an unbounded root vertex match operator (UBRM) using a starting vertex as input to the UBRM; the UBRM is configured to compute multiple hops between vertices on the graph. The method also includes performing a first reachability search using the UBRM; the UBRM traverses the graph to identify all vertices reachable from the starting vertex. The method further includes generating first level vertices of a path pattern that computes the RPQ; and adding the first level vertices to a first level context for processing by a successor match operator; the successor match operator comprises an unbounded intermediate neighbor match operator (UBNM), the UBNM is configured to compute multiple hops between vertices on the graph and to identify neighbors for each of the first level vertices.
Techniques are provided for generating and maintaining Precomputed Result Tables (“PRTs”) that are used by a database server to improve performance of SPARQL queries that target an RDF graph set. Each PRT corresponds to a particular pattern, which may be simple-triple, star, or chain. The techniques support the creation of PRTs for chain patterns that contain multiple instances of the same property, and PRTs for star and chain patterns that include reverse properties. Techniques are also described for maintaining such PRTs as Data Manipulation Language (DML) operations make changes to the RDF graph tables that belong to the RDF graph set associated with the PRTs.
Techniques for constructing padding-sensitive batches for text recognition are provided. In one technique, a first bounding box from a list of bounding boxes is added or included into a batch of zero or more bounding boxes. Each bounding box in the list of bounding boxes surrounds different detected text in a digital image. A second bounding box is identified from the list. The second bounding box is wider than the first bounding box. A difference between (1) a width of the second bounding box and (2) a particular width is determined. The particular width is based on a width of a bounding box in the batch. Based on the difference and a threshold value, it is determined whether to include the second bounding box in the batch. The batch is then input into a text recognition model.
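The width-difference test described above can be sketched as a greedy pass over width-sorted boxes: a wider box joins the current batch only while the padding it would force on the narrowest member stays under the threshold. This is a simplified, hypothetical rendering of the technique.

```python
def build_batches(widths, threshold):
    """Greedily group bounding-box widths into batches so that the width
    difference between the widest and narrowest box in a batch (i.e. the
    worst-case padding) never exceeds `threshold`."""
    batches = []
    current = []
    for width in sorted(widths):
        if current and width - current[0] > threshold:
            # Padding the narrowest box up to this width would exceed the
            # budget, so close the current batch and start a new one.
            batches.append(current)
            current = []
        current.append(width)
    if current:
        batches.append(current)
    return batches
```

Keeping per-batch padding small avoids wasting text-recognition compute on blank padded pixels.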
A system analyzes a set of static source code to identify source code segments that may be implemented using parallel multi-thread processing at run-time. The system determines whether source code segments meet multi-threading criteria, including lacking operational data dependencies and having particular run time or computing resource consumption characteristics. Based on determining that the source code meets multi-threading criteria, the system modifies the static source code to utilize parallel multi-thread processing at run-time. The system generates a recommendation for a user based on the source code modification.
Techniques including receiving, by a cloud environment of a cloud service provider, a requested resource usage for a resource associated with a subscription of a user account with the cloud service provider. The techniques further include determining, as part of managing resources at a subscription level independent of managing resources at a user account level, a resource usage by the subscription. The techniques further include determining that a usage limit for the resource is greater than the resource usage. Based on the usage limit being greater than the resource usage, the techniques further include determining that a remaining resource allocation for the subscription is greater than or equal to the requested resource usage. Based on the remaining resource allocation being greater than or equal to the requested resource usage, the techniques further include providing the resource to the user account and updating the resource usage.
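The two-stage check above (usage limit versus current usage, then remaining allocation versus the request) can be sketched as a small admission function. The return shape is an assumption for illustration.

```python
def authorize_usage(requested, current_usage, usage_limit):
    """Sketch of the subscription-level admission check: grant the request
    only if the limit exceeds current usage AND the remaining allocation
    covers the requested amount. Returns (granted, updated_usage)."""
    if usage_limit <= current_usage:
        return False, current_usage
    remaining = usage_limit - current_usage
    if remaining < requested:
        return False, current_usage
    # Provide the resource and update the subscription's recorded usage.
    return True, current_usage + requested
```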
A sampling procedure is performed for paths on a multi-hop distributed graph that includes vertices partitioned on a plurality of machines; a sampled path includes first and second vertices hosted on first and second machines respectively. The sampling procedure includes communicating, by the first machine to the second machine, first path information comprising an identifier for the second vertex and an identifier for a target vertex of the sampled path; the target vertex is hosted on a target host machine. The procedure further includes communicating edge information by the first machine to the target host machine, and communicating feature information by the second machine to the target host machine. Communication of the edge information and communication of the feature information are deferred relative to communication of the first path information.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database systemDistributed database system architectures therefor
A system executes an authorization process for initiating a session with a computing entity. Executing the authorization process includes determining an identity associated with the computing entity, identifying a current set of access policies associated with the identity, and determining, based on the current set of access policies, a first set of actions that the computing entity is authorized to perform. While executing the session, the system executes a first action in accordance with the current set of access policies. Subsequent to executing the first action, the set of access policies is modified. The system detects an occurrence of a trigger condition, and in response, re-executes the authorization process for the session, including determining, based on the modified set of access policies, a second set of actions the computing entity is authorized to perform that differs from the first set of actions.
Techniques may include receiving an asynchronous event message at an inbound adapter of a document-based monitoring system. In addition, the techniques may include accessing a document containing a hierarchical log of event entries. The techniques may include adding an event entry to the hierarchical log based at least in part on the asynchronous event message, and where the event entry may include information identifying the asynchronous event message. Moreover, the techniques may include identifying a corrupted event entry in the hierarchical log. Also, the techniques may include comparing the timestamp fields of the event entries in the hierarchical log to the timestamp field of the corrupted event entry. The techniques may include determining a parent entry of the corrupted event entry based at least in part on the comparison. In addition, the techniques may include updating the parent ID field of the corrupted event entry to identify the parent entry.
G06F 16/17 - Details of further file system functions
G06F 16/185 - Hierarchical storage management [HSM] systems, e.g. file migration or policies thereof
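The timestamp comparison used above to repair a corrupted event entry can be sketched as picking, among entries that do not occur after the corrupted one, the entry with the latest timestamp and writing its ID into the corrupted entry's parent ID field. The dict-based entry schema (`id`, `timestamp`, `parent_id`) is an assumption for illustration.

```python
def find_parent(entries, corrupted):
    """Sketch: determine the parent of a corrupted event entry by comparing
    timestamps, then repair the corrupted entry's parent ID field."""
    candidates = [
        e for e in entries
        if e["id"] != corrupted["id"] and e["timestamp"] <= corrupted["timestamp"]
    ]
    if not candidates:
        return None
    parent = max(candidates, key=lambda e: e["timestamp"])
    # Update the parent ID field of the corrupted entry to identify the parent.
    corrupted["parent_id"] = parent["id"]
    return parent
```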
62.
Consensus Protocol For Asynchronous Database Transaction Replication With Fast, Automatic Failover, Zero Data Loss, Strong Consistency, Full SQL Support And Horizontal Scalability
A consensus protocol-based replication approach is provided. For each change operation performed by a leader server on a copy of the database, the leader server creates a replication log record and returns a result to the client. The leader does not wait for consensus for the change operation from the followers. For a commit, the leader creates a commit log record and waits for consensus. Thus, the leader executes database transactions asynchronously, performs replication of change operations asynchronously, and performs replication of transaction commits synchronously.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Systems and methods for SaaS/PaaS resource usage and allocation in an analytic applications environment. An exemplary method can provide an analytic applications environment; a control plane comprising a server, the control plane further comprising a provisioning component and a console interface; a data warehouse; and a monitoring agent. The method can provision an instance of the data warehouse in the context of a tenant, the provisioned instance of the data warehouse having an initial size. Upon provisioning the instance of the data warehouse, the method can add an entry to a metrics repository of the monitoring agent, the added entry indicative of the initial size of the provisioned instance of the data warehouse, the added entry being tagged, the tag being indicative of the tenant. The method can monitor, by the monitoring agent, an amount of data stored at the provisioned instance of the data warehouse.
H04L 47/762 - Admission controlResource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 47/80 - Actions related to the user profile or the type of traffic
64.
SYSTEM AND METHOD FOR AUTOMATIC GENERATION OF BI MODELS USING DATA INTROSPECTION AND CURATION
In accordance with an embodiment, described herein are systems and methods for automatic generation of business intelligence (BI) data models using data introspection and curation, as may be used, for example, with enterprise resource planning (ERP) or other enterprise computing or data analytics environments. The described approach uses a combination of manually-curated artifacts, and automatic generation of a model through data introspection, of a source data environment, to derive a target BI data model. For example, a pipeline generator framework can evaluate the dimensionality of a transaction type, degenerate attributes, and application measures; and use the output of this process to create an output target model and pipeline or load plan. The systems and methods described herein provide a technical improvement in the building of new subject areas or a BI data model within much shorter periods of time.
A system executes an authorization process for initiating a session with a computing entity. Executing the authorization process includes determining an identity associated with the computing entity, identifying a current set of access policies associated with the identity, and determining, based on the current set of access policies, a first set of actions that the computing entity is authorized to perform. While executing the session, the system executes a first action in accordance with the current set of access policies. Subsequent to executing the first action, the set of access policies is modified. The system detects an occurrence of a trigger condition, and in response, re-executes the authorization process for the session, including determining, based on the modified set of access policies, a second set of actions the computing entity is authorized to perform that differs from the first set of actions.
Systems and methods are disclosed for determining representative samples from a large, imperfectly labeled dataset to support data processing and inferences for machine-learning applications. The method includes accessing data samples that may be processed to generate embedded vectors along with a set of reference labels. For each label, clustering is performed to group at least some of the embedded vectors together into clusters based on the associated inherent patterns, followed by a refinement process to select relevant clusters from the clustered patterns. One or more embedded vectors from the selected clusters are passed to a statistical technique to generate representative embedded vectors for each label. The statistical technique is configured such that the weights of selected embedded vectors within each cluster are the same. These representative embedded vectors may be further fed into a machine-learning model to predict a label from the set of reference labels for a given prompt.
An audiovisual compilation tool is provided for generating a customized and personalized audiovisual compilation. The audiovisual compilation tool receives a request that specifies a purpose and team. Based on the request, the audiovisual compilation personalization tool accesses a candidate viewer data structure to identify candidate viewers on the specified team. The audiovisual compilation tool creates a customized audiovisual compilation based on items of audiovisual content labeled for the specified purpose and based on aggregate characteristics of the candidate viewers. The customized audiovisual compilation is personalized by substituting audible names for different candidate viewers in marked sections of audio from the selected audiovisual content, and blending the marked sections with surrounding audio content. Tracked feedback questions specific to content in the audiovisual compilation may be automatically generated and inserted into the audiovisual compilation, and overlaid graphical elements may be added to trigger external functionality from within the audiovisual compilation.
Techniques for augmenting training data include accessing training data comprising a plurality of training examples comprising a first training example comprising a first natural language utterance and a first logical form for the first natural language utterance. A second natural language utterance is generated by adding or replacing one or more values in the first natural language utterance. A logical form for the second natural language utterance is generated. A second training example is generated, comprising the second natural language utterance and the logical form for the second natural language utterance. The training data is augmented by adding the second training example to the plurality of training examples to generate an augmented training data set. A machine learning model is trained to generate logical forms for utterances using the augmented training data set.
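As a minimal sketch (the value-substitution strategy and names here are illustrative assumptions, not the disclosed method), augmentation of this kind swaps a value consistently in both the utterance and its paired logical form:

```python
def augment(example, substitutions):
    """Derive new training examples by replacing a value consistently in
    both the natural language utterance and its logical form."""
    utterance, logical_form = example
    derived = []
    for old, new in substitutions:
        # only substitute when the value appears in both halves of the pair
        if old in utterance and old in logical_form:
            derived.append((utterance.replace(old, new),
                            logical_form.replace(old, new)))
    return derived
```

The derived pairs are then appended to the original examples to form the augmented training set.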
Systems, methods, and other embodiments associated with characteristics-based selection of time series forecast algorithms are described. In one embodiment, a method includes analyzing each time series in a training set of time series to yield characteristics vectors for the time series. The method trains an auto-encoder to minimize error: (1) between a bottleneck layer and the characteristics vectors, and (2) between an input layer and an output layer. The method generates new characteristics vectors that fill gaps between neighboring characteristics vectors. The method inputs the new characteristics vectors to the bottleneck layer to generate a testing set of time series. The method tests forecasting algorithms using the testing set to find forecasting error. The method trains a ranking function to assign a rank to each forecasting algorithm based on a provided characteristics vector. And, the method automatically selects one of the forecasting algorithms to monitor an additional time series based on the ranks.
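The gap-filling step can be illustrated in isolation (the auto-encoder itself is omitted; linear interpolation between consecutive characteristics vectors is used here as an assumed, simplified stand-in for whatever interpolation the disclosed method performs):

```python
import numpy as np

def fill_gaps(char_vectors, n_new=1):
    """Insert n_new linearly interpolated vectors between each pair of
    consecutive characteristics vectors, densifying the region of the
    characteristics space from which test series will be generated."""
    filled = []
    for a, b in zip(char_vectors[:-1], char_vectors[1:]):
        filled.append(a)
        for t in np.linspace(0.0, 1.0, n_new + 2)[1:-1]:
            filled.append((1 - t) * a + t * b)  # point t of the way from a to b
    filled.append(char_vectors[-1])
    return np.array(filled)
```

Each new vector would then be fed to the bottleneck layer to synthesize a test time series with those characteristics.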
Embodiments of the present technology relate to systems and methods for running tasks in parallel against a variable number of databases stored on the same server. In an embodiment, a server hosting multiple databases receives a request to perform a job on the multiple databases. In response to the request, an agent on the server connects to each of the databases and stores pointers to the connection information in an array. The agent then passes the array to a service written in Go that initializes multiple worker threads to complete the job on the multiple databases in parallel and passes each of the worker threads a reference to the array of pointers. In some examples, the job includes a health check and the workers, to complete the job, perform a health check on each of the multiple databases. The server then replies to the requesting entity with results of the job.
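The fan-out pattern can be sketched with a Python thread pool as an analogue of the described Go worker service (the connection objects and the trivial `health_check` below are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def health_check(conn):
    """Stand-in per-database check; a real agent would open the
    connection referenced by the pointer and run diagnostics."""
    return conn["name"], conn.get("healthy", False)

def run_job(connections, workers=4):
    """Fan the same job out to every database in parallel and collect
    one result per database as the reply to the requesting entity."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(health_check, connections))
```

Because the worker count is fixed while the connection array is variable-length, the same pool handles any number of databases.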
Various embodiments of the present technology generally relate to systems and methods for performing health checks on network functions. In certain embodiments, a network function (NF) health monitoring system may comprise one or more processors, and a memory having stored thereon instructions. The instructions, upon execution, may cause the one or more processors to obtain health check details for a producer NF, the health check details including a health check endpoint configured to receive health check probes, send a health check probe to the health check endpoint requesting a health status of the producer NF, update the health status of the producer NF in a locally stored list of producer NFs based on a response to the health check probe, and select a target producer NF to send traffic to based on the health status of the locally stored list of producer NFs.
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
H04L 43/20 - Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
Various embodiments of the present technology generally relate to systems and methods for performing network session processing based on slice load. In certain embodiments, a policy control function (PCF) system may comprise one or more processors, and a memory having stored thereon instructions. The instructions, upon execution, may cause the one or more processors to receive a network session request corresponding to a specified network slice, where a network slice includes a particular portion of a network's capacity, determine a load level of the specified network slice, and reject the network session request when the load level is above a selected threshold.
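The admission decision reduces to a threshold comparison, sketched here with assumed field names (`slice_id`, a 0–1 load fraction, and an arbitrary 0.8 threshold):

```python
def admit_session(request, slice_loads, threshold=0.8):
    """Admit a session only when the requested slice's current load is
    at or below the configured threshold; otherwise reject it."""
    load = slice_loads.get(request["slice_id"], 0.0)
    return load <= threshold
```

A real PCF would of course consult live slice metrics rather than a static dictionary.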
The present disclosure relates to a framework that provides execution of serverless functions in a cloud environment based on occurrence of events/notifications from services in an entirely different cloud environment. A target agent obtains a notification from a source agent, where the target agent is deployed in a target cloud environment and the source agent is deployed in a source cloud environment that is different than the target cloud environment. The target agent determines a function that is to be invoked based on the notification. Upon successfully verifying whether the target agent is permitted to invoke the function that is deployed in a target customer tenancy of the target cloud environment, the target agent invokes the function in the target customer tenancy of the target cloud environment.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
74.
SYSTEMS AND METHODS FOR AUTOMATIC FORECASTING ALGORITHM SELECTION BASED ON TIME SERIES CHARACTERISTICS
Systems, methods, and other embodiments associated with characteristics-based selection of time series forecast algorithms are described. In one embodiment, a method includes analyzing each time series in a training set of time series to yield characteristics vectors for the time series. The method trains an auto-encoder to minimize error: (1) between a bottleneck layer and the characteristics vectors, and (2) between an input layer and an output layer. The method generates new characteristics vectors that fill gaps between neighboring characteristics vectors. The method inputs the new characteristics vectors to the bottleneck layer to generate a testing set of time series. The method tests forecasting algorithms using the testing set to find forecasting error. The method trains a ranking function to assign a rank to each forecasting algorithm based on a provided characteristics vector. And, the method automatically selects one of the forecasting algorithms to monitor an additional time series based on the ranks.
Techniques discussed herein manage backups of a service cell (SC). Each SC may include a data plane that is isolated from other SCs and comprises a distributed computing cluster (a cluster). A manifest that specifies one or more backup policies may be used to generate a full backup or a partial backup of a data set stored by the cluster. In accordance with the manifest, a signal may be sent to nodes of the cluster. In response, the nodes may transmit locally-stored data (e.g., data segments) to specified locations at a remote storage. The system may maintain a mapping of which segments correspond to data that was stored in the cluster at a time corresponding to a full or partial backup.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
76.
Container Orchestration Framework Aware Port Scanning
A scanner service can be configured to scan one or more nodes associated with a container management service. The container management service can be configured to manage a set of services by allocating managed containers associated with the set of services to the one or more nodes. The scanner service can be configured to identify vulnerabilities of processes running on the one or more nodes. The vulnerabilities can be attributed to the containers and/or the associated services rather than to the nodes. The scanner service is aware of the container management service and communicates vulnerabilities of associated containers.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
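The attribution step above can be sketched as a re-mapping from process-level findings to the container and service that own each process (the field names and the two lookup tables are hypothetical):

```python
def attribute_findings(findings, proc_to_container, container_to_service):
    """Re-attribute process-level vulnerability findings from the node
    to the managed container (and its service) that owns the process."""
    attributed = []
    for finding in findings:
        container = proc_to_container.get(finding["pid"])
        attributed.append({**finding,
                           "container": container,
                           "service": container_to_service.get(container)})
    return attributed
```

In practice the two mappings would come from the container management service's own scheduling metadata, which is what makes the scanner "orchestration aware".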
77.
ISSUING DELEGATE CREDENTIALS FOR ACCESSING TARGET RESOURCES
A system provides a delegate credential to access a target resource based on one or more access policies associated with a delegate principal. The system receives a credential request for the delegate credential from a computing entity associated with a recipient principal. The system transmits an approval request to an approval service associated with the target resource for the approval service to approve issuance of the delegate credential to the recipient principal. The system receives an approval confirmation from the approval service and generates the delegate credential responsive to receiving the approval confirmation. The system transmits the delegate credential to a computing entity associated with the recipient principal. The computing entity accesses the target resource by presenting the delegate credential to a resource service associated with the target resource.
A breadth first search (BFS) algorithm is provided that uses out-of-core external storage in a memory constrained system. Memory resources are used as long as they are available and external storage is used when necessary due to memory pressure. The BFS algorithm uses a disk-spilling hash-table (DSH) as the visited set and disk-spilling queues (DSQs) as the BFS frontier queue. To get the most out of the DSH, subsequent inserts and lookups must happen in the same DSH partition. To ensure that consecutive lookups happen in the same DSH partition, the BFS frontier queue is partitioned in a manner similar to the DSH partitions.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
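The partition-aligned BFS can be sketched in memory (ordinary sets and deques stand in for the disk-spilling hash-table and queues; the point of the sketch is that the frontier and the visited set share one hash scheme, so all visited-set lookups while draining a frontier partition hit the same visited-set partition):

```python
from collections import deque

NUM_PARTITIONS = 4

def part(v):
    """One hash scheme shared by the visited set and the frontier."""
    return hash(v) % NUM_PARTITIONS

def bfs(graph, source):
    visited = [set() for _ in range(NUM_PARTITIONS)]     # stand-in for the DSH
    frontier = [deque() for _ in range(NUM_PARTITIONS)]  # stand-ins for the DSQs
    frontier[part(source)].append(source)
    order = []
    while any(frontier):
        next_frontier = [deque() for _ in range(NUM_PARTITIONS)]
        for p in range(NUM_PARTITIONS):
            # Every visited-set lookup in this inner loop hits partition p,
            # because the frontier was partitioned with the same hash.
            while frontier[p]:
                v = frontier[p].popleft()
                if v in visited[p]:
                    continue            # duplicate discovery; skip
                visited[p].add(v)
                order.append(v)
                for w in graph.get(v, []):
                    next_frontier[part(w)].append(w)
        frontier = next_frontier
    return order
```

Deferring the visited check to drain time (rather than at discovery) is what keeps consecutive lookups inside a single partition, at the cost of enqueueing some duplicates.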
79.
PARAMETRIC DEFINITION GENERATION OF MULTI-DIMENSIONAL STRUCTURES FROM DIGITAL IMAGES
Techniques for generating parametric definitions of multi-dimensional structures from digital images are provided. In one technique, for each image in a set of images, a set of parameter values is stored for a set of parameters of a first function that describes an object in the image. A neural network is trained based on the set of images and the set of parameter values of each image. After training the neural network, an image is input into the neural network. Based on inputting the image into the neural network, an output is generated that comprises a set of output parameter values of a particular object depicted in the image.
A system provides a delegate credential to access a target resource based on one or more access policies associated with a delegate principal. The system receives a credential request for the delegate credential from a computing entity associated with a recipient principal. The system transmits an approval request to an approval service associated with the target resource for the approval service to approve issuance of the delegate credential to the recipient principal. The system receives an approval confirmation from the approval service and generates the delegate credential responsive to receiving the approval confirmation. The system transmits the delegate credential to a computing entity associated with the recipient principal. The computing entity accesses the target resource by presenting the delegate credential to a resource service associated with the target resource.
A system receives an approval request from an access agent, requesting that the system obtain an approval for the access agent to access a first resource. The system determines, based on a dependency attribute associated with the first resource, a resource dependency between the first resource and a second resource. The system generates, based at least in part on the resource dependency between the first resource and the second resource, an approval requisition for requesting the approval to access the first resource based on an approval workflow corresponding to the second resource. The system traverses the approval workflow corresponding to the second resource to obtain, based on the approval requisition, the approval to access the first resource. Upon obtaining the approval to access the first resource, the access agent accesses the first resource based at least in part on the approval.
Techniques are disclosed for providing, by a first computing resource of a first cloud service provider (CSP), a first service to a user having a first user account with a second CSP, the first service provided via a first private network of the user with the second CSP. The techniques further include transmitting data between a second private network communicatively coupled with the first private network, the second private network associated with a second user account of the user with the first CSP. The techniques further include collecting, by a second computing resource of the first CSP, usage data associated with at least one of the first private network or the second private network. The techniques further include transmitting, by the second computing resource of the first CSP, the usage data to a second service of the second CSP.
Systems and methods are disclosed for implementing entity-relationship privacy for machine learning models. Raw data may be used to fine-tune a large language model that has been pre-trained on publicly available data. The raw data is first modified to generate training data that preserves privacy for sensitive relationships between entities. The raw data is first analyzed to identify sensitive entity relationships, where each sensitive entity relationship includes a first entity and a second entity. Then, for each sensitive entity relationship, at least one of the first and second entities is replaced with a non-sensitive entity generated by a reference model. The resulting training data may then be used to further train, or fine-tune, the large language model that has been pre-trained with publicly available data.
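The substitution step can be sketched as follows; in this illustrative assumption, a record is sensitive when it mentions both entities of a flagged relationship, and the second entity is swapped for a generated stand-in (a real system would use a reference model to produce the replacement):

```python
def anonymize(records, sensitive_pairs, replacement_for):
    """Where a record mentions both entities of a sensitive relationship,
    replace the second entity with a generated non-sensitive stand-in."""
    sanitized = []
    for text in records:
        for first, second in sensitive_pairs:
            if first in text and second in text:
                text = text.replace(second, replacement_for(second))
        sanitized.append(text)
    return sanitized
```

Note that the second entity survives on its own elsewhere in the data; only its co-occurrence with the first entity is hidden, which is the "relationship privacy" idea.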
Techniques are disclosed for providing metrics associated with a private cloud network available from a first cloud service provider via a second cloud service provider. Cross-cloud services can be provisioned and managed by and between private clouds of cloud service providers. First observability data associated with a cloud service provisioned by a first cloud environment in a second cloud environment is obtained. The first observability data is processed into second observability data that is compatible with an observability data format of the second cloud environment, and the second observability data is provided to the second cloud environment.
Techniques are described for invoking and switching between chatbots of a chatbot system. In some embodiments, the chatbot system is capable of routing an utterance received while a user is already interacting with a first chatbot in the chatbot system. For instance, the chatbot system may identify a second chatbot based on determining that (i) such an utterance is an invalid input to the first chatbot or (ii) that the first chatbot is attempting to route the utterance to a destination associated with the first chatbot. Identifying the second chatbot can involve computing, using a predictive model, separate confidence scores for the first chatbot and the second chatbot, and then determining that a confidence score for the second chatbot satisfies one or more confidence score thresholds. The utterance is then routed to the second chatbot based on the identifying of the second chatbot.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Input specifying a first parent worksheet of a descendant worksheet of a spreadsheet workbook of a client-side spreadsheet is received. The descendant worksheet includes a first data dimension to be displayed to a user. The first parent worksheet and the received input are used to access a hierarchical structure characterizing the spreadsheet workbook and a server-side data model to determine one or more non-direct ancestor worksheets of the descendant worksheet. Data is selectively retrieved from the one or more non-direct ancestor worksheets. The data represents one or more non-direct ancestor data dimensions of the one or more non-direct ancestors. The retrieved data is displayed in the descendant worksheet in combination with data of the first data dimension.
Embodiments determine a final occupancy prediction for a check-in date for a plurality of hotel rooms. Embodiments receive historical reservation data including a plurality of booking curves for the hotel rooms corresponding to a plurality of reservation windows, the historical reservation data including a plurality of features. Based on the historical reservation data, embodiments generate a first occupancy prediction for the check-in date using a first model and generate a second occupancy prediction for the check-in date using a second model. Embodiments determine a best performing model from at least the first model and the second model and use the occupancy prediction corresponding to the best performing model as the final occupancy prediction for the check-in date.
G06Q 10/02 - Reservations, e.g. for tickets, services or events
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
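The model-selection idea above can be sketched as a simple hold-out comparison (the models here are toy callables over a list of past occupancies; the real embodiments use booking-curve features):

```python
def select_and_predict(history, holdout_actual, models):
    """Score each model by its error on a held-out check-in date, then
    produce the final prediction with the best performer."""
    errors = {name: abs(fn(history[:-1]) - holdout_actual)
              for name, fn in models.items()}
    best = min(errors, key=errors.get)
    return best, models[best](history)  # final prediction uses full history
```

Scoring on a held-out date lets the better-calibrated model win regardless of which one it is for a given hotel.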
Techniques are disclosed for rotating resource identifiers within a region network. An identities service can receive a first request for a first identifier of a software resource within the region network from a client node. The identities service can generate the first identifier based at least in part on first attributes and send the first identifier and a first caching instruction to the client node. The identities service can receive an identity rotation instruction that includes information usable by the identities service to provide a second caching instruction in response to requests for software resource identifiers. The identities service can receive a second request for a second identifier of the software resource. The identities service can generate the second identifier based at least in part on second attributes and send the second identifier and the second caching instruction to the client node.
Techniques are disclosed for providing, by a first computing resource of a first cloud service provider (CSP), a first service to a user having a first user account with a second CSP, the first service provided via a first private network of the user with the second CSP. The techniques further include transmitting data between a second private network communicatively coupled with the first private network, the second private network associated with a second user account of the user with the first CSP. The techniques further include collecting, by a second computing resource of the first CSP, usage data associated with at least one of the first private network or the second private network. The techniques further include transmitting, by the second computing resource of the first CSP, the usage data to a second service of the second CSP.
Described herein is a mechanism of constructing a cluster placement group in a cloud environment. A request is received from a first customer of a cloud environment, where the request corresponds to creating a cluster placement group (CPG). The CPG identifies a first set of requested resources comprising a first type of resource and a second type of resource requested by the first customer, wherein the first type of resource is different than the second type of resource. An availability domain is identified in the cloud environment that includes a second set of available resources comprising all resources included in the first set of requested resources. From the second set of available resources in the availability domain, a set of resources corresponding to the first set of requested resources to the first customer are allocated. The set of resources allocated to the first customer is associated with the CPG.
Techniques for inventory optimization using a simulation service model are provided. In one technique, a first optimization technique is used to generate, based on demand data, first output that comprises a first plurality of output values, each value corresponding to a node in a multi-echelon system. While using the first optimization technique, a plurality of variable values, each variable value corresponding to a node in the multi-echelon system, is generated. Then, a second optimization technique that is different than the first optimization technique is used to generate, based on the demand data and the plurality of variable values, second output that comprises a second plurality of output values, each value corresponding to a node in the multi-echelon system.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
92.
SUBSCRIPTION TO A SERVICE PROVIDED BY A FIRST CLOUD SERVICE PROVIDER VIA A SECOND CLOUD SERVICE PROVIDER
Techniques are disclosed for a method including providing, by a first computing resource of a first cloud service provider (CSP), a first service to a user having a first user account with a second CSP, the first service provided via a first private network of the user with the second CSP. The method further includes transmitting data between a second private network communicatively coupled with the first private network, the second private network associated with a second user account of the user with the first CSP. The method further includes associating, by a second computing resource of the first CSP, the first private network and the second private network with a subscription of the second user account with the first CSP, the subscription available via the second CSP.
Techniques are disclosed herein for managing resource locks within a cloud environment of a cloud service provider offering a cloud service to a second cloud service provider. A request for a cloud service is received and a set of operations associated with provisioning the cloud service is performed. At least one operation can include designating resources associated with the cloud service as locked resources in a first cloud environment. The cloud service is provisioned, which causes access to a locked resource of the locked resources to be restricted from within the first cloud environment and permitted from within a second cloud environment. The provisioned cloud service enables data pertaining to the cloud service to be transferred from the second cloud environment to the first cloud environment.
Techniques are disclosed for augmenting data sets used for training machine learning models and for generating predictions by trained machine learning models. These techniques may increase the number and diversity of examples within an initial training dataset of sentences by extracting a subset of words from the existing training dataset of sentences. The techniques may conserve scarce sample data in few-shot situations by training a data generation model using general data obtained from a general data source.
Techniques are disclosed for aggregating received data from a data stream. Data is received from a particular stream partition to which a device is subscribed, and subsets of the data from the particular stream partition are aggregated based on respective keys associated with the subsets of the data. The device determines whether one or more subsets of data associated with a particular key meet at least one processing criterion, such as a threshold amount of data, and refrains from processing the aggregated data when the processing criterion is not met. Once additional subsets of data associated with the particular key are received, they are aggregated with the one or more subsets of data. When the processing criterion is satisfied, the device processes the aggregated subsets of data associated with the particular key.
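A minimal sketch of the keyed buffer-until-threshold behavior (class and method names are illustrative; only a size criterion is modeled):

```python
from collections import defaultdict

class Aggregator:
    """Buffer per-key subsets from a stream partition; a key's data is
    handed off for processing only once it meets the size criterion."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.buffers = defaultdict(list)

    def ingest(self, key, subset):
        self.buffers[key].extend(subset)
        if len(self.buffers[key]) >= self.threshold:
            return self.buffers.pop(key)  # criterion met: process this batch
        return None                       # refrain: keep aggregating
```

Popping the buffer on hand-off restarts aggregation for that key, so subsequent subsets begin a fresh batch.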
Techniques are described for monitoring the health of services in a computing environment such as a data center. More particularly, the present disclosure describes techniques for monitoring the health and availability of capabilities in a computing environment such as a data center by enabling alarms to be associated with the capabilities. A capability refers to a set of resources in a data center. By providing the ability to associate an alarm with a capability, the health or availability of the associated capability can be monitored or ascertained by tracking the state of the alarm associated with the capability. For example, if the alarm associated with a particular capability is triggered, it may indicate that the particular capability and the one or more resources corresponding to the particular capability are not in a healthy state. Accordingly, by monitoring alarms associated with capabilities, the health of the associated capabilities can be ascertained.
The technology disclosed herein enables 5G wireless monitoring and analysis using an enhanced feed of 5G SBI traffic. In a particular example, a method includes receiving a 5G SBI message in a Service Communication Proxy (SCP), extracting information from protocols used to transmit the 5G SBI message, and including the information in a mirror message with a copy of a payload of the 5G SBI message. The method further includes transmitting the mirror message to a monitoring system.
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
98.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR DETECTING AND MITIGATING SECURITY ATTACKS ON PRODUCER NETWORK FUNCTIONS
A method for detecting and mitigating security attacks on producer network functions (NFs) (202) using access token to non-access-token parameter correlation at a proxy NF (126B) includes receiving an inter-PLMN SBI request message, obtaining, from an access token transmitted with the inter-PLMN SBI request message, at least one network- or service-identifying parameter, and obtaining, externally from the access token, at least one network- or service-identifying parameter. The method further includes comparing the at least one network- or service-identifying parameter obtained from the access token and the at least one network- or service-identifying parameter obtained externally from the access token and performing a network security action when the at least one network- or service-identifying parameter obtained from the access token does not match the at least one network- or service-identifying parameter obtained externally from the access token.
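The correlation check can be illustrated with an unsigned demo token in JWT-like `header.payload.signature` form (a real proxy would verify the signature first, and the `plmn` claim name here is an assumption for the sketch):

```python
import base64
import json

def token_claims(access_token):
    """Decode the payload of an unsigned demo token (header.payload.sig)."""
    payload = access_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def request_allowed(access_token, outer_plmn_id):
    """Allow the request only when the PLMN id inside the token matches
    the one carried outside it; a mismatch triggers the security action."""
    return token_claims(access_token).get("plmn") == outer_plmn_id
```

An attacker who replays a legitimate token with forged outer message parameters fails this comparison even though the token itself is valid.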
Novel techniques are described for routing of overlay packets within overlay networks in a cloud environment. A network device, located in the data path between a compute instance in an overlay network that is the source of a packet and a compute instance in the overlay network that is the intended destination of the packet, is able to route the packet using only special encoded information included in the packet's header when the packet is received by the network device. The special encoded information is in the form of a special encoded address (e.g., an encoded IP address) that is included in a field of the packet's header. The special encoded address encodes various different pieces of information that are used by the network devices in the data path from the source compute instance to the destination compute instance to route the packet in the overlay network.
H04L 45/64 - Routing or path finding of packets in data switching networks using an overlay routing layer
H04L 45/645 - Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
H04L 45/655 - Interaction between route computation entities and forwarding entities, e.g. for route determination or for flow table update
H04L 45/76 - Routing in software-defined topologies, e.g. routing between virtual machines
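The self-describing address idea can be sketched as bit-packing: the split into a 12-bit overlay-network id and a 20-bit host id below is an arbitrary illustrative choice, not the disclosed encoding.

```python
VCN_BITS, HOST_BITS = 12, 20  # illustrative split of a 32-bit address

def encode_address(vcn_id, host_id):
    """Pack overlay-network and host identifiers into one 32-bit value
    that a router can decode without consulting a mapping table."""
    assert vcn_id < (1 << VCN_BITS) and host_id < (1 << HOST_BITS)
    return (vcn_id << HOST_BITS) | host_id

def decode_address(addr):
    """Recover the fields a device in the data path needs for routing."""
    return addr >> HOST_BITS, addr & ((1 << HOST_BITS) - 1)
```

Because everything needed to forward the packet is carried in the header field itself, intermediate devices avoid per-packet lookups into overlay mapping state.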
100.
SUBSCRIPTION TO A SERVICE PROVIDED BY A FIRST CLOUD SERVICE PROVIDER VIA A SECOND CLOUD SERVICE PROVIDER
Techniques are disclosed for a method including providing, by a first computing resource of a first cloud service provider (CSP), a first service to a user having a first user account with a second CSP, the first service provided via a first private network of the user with the second CSP. The method further includes transmitting data between a second private network communicatively coupled with the first private network, the second private network associated with a second user account of the user with the first CSP. The method further includes associating, by a second computing resource of the first CSP, the first private network and the second private network with a subscription of the second user account with the first CSP, the subscription available via the second CSP.