METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR NETWORK ANALYTICS DATA DIRECTOR (NADD)-ASSISTED DYNAMIC CONFIGURATION OF HYPERTEXT TRANSFER PROTOCOL (HTTP) PARAMETER SETTINGS AT NETWORK FUNCTIONS (NFS)
A method for NADD-assisted dynamic configuration of HTTP parameter settings at NFs includes receiving, at the NADD, SBI message feeds from a plurality of producer NFs. The method further includes determining, by the NADD and from at least one of the SBI message feeds, an HTTP parameter setting for one of the producer NFs. The method further includes communicating, by the NADD, the HTTP parameter setting to the producer NF. The method further includes receiving, by the producer NF and from the NADD, the HTTP parameter setting. The method further includes using, by the producer NF, the HTTP parameter setting to control traffic flow from a consumer NF to the producer NF.
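The NADD-to-producer flow described above can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the class names (Nadd, ProducerNf), the choice of HTTP/2 max-concurrent-streams as the parameter, and the load threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

# Toy sketch: the NADD derives an HTTP parameter setting from observed SBI
# message rates and pushes it to a producer NF, which then uses the setting
# to gate traffic arriving from consumer NFs.

@dataclass
class ProducerNf:
    name: str
    max_concurrent_streams: int = 100
    active_streams: int = 0

    def apply_setting(self, setting: dict) -> None:
        # Producer NF receives the HTTP parameter setting from the NADD.
        self.max_concurrent_streams = setting["max_concurrent_streams"]

    def admit_request(self) -> bool:
        # Use the setting to control traffic flow from a consumer NF.
        if self.active_streams >= self.max_concurrent_streams:
            return False
        self.active_streams += 1
        return True

class Nadd:
    def derive_setting(self, sbi_msgs_per_sec: float) -> dict:
        # Toy heuristic: tighten the stream limit as observed SBI load grows.
        limit = 100 if sbi_msgs_per_sec < 500 else 20
        return {"max_concurrent_streams": limit}

nadd = Nadd()
nf = ProducerNf("udm-1")
nf.apply_setting(nadd.derive_setting(sbi_msgs_per_sec=800.0))
```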
Improved network traffic flow processing techniques are described. In a network device providing multiple processing planes, different processing resources can be allocated to effect efficient and rapid packet processing. This allocation of resources can be upset by receipt of a configuration update. When a configuration update is received, a previously programmed flow can be provisionally invalidated. To prevent the overwhelming of slow path resources, a provisionally invalid flow can continue to be processed according to previous programming by a fast path.
H04L 45/00 - Routing or path finding of packets in data switching networks
3.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING ACCESS TO COMMUNICATION NETWORK HEALTH INFORMATION USING COMMUNICATION-NETWORK-AWARE GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RETRIEVAL AUGMENTED GENERATION (RAG) MODEL AND NETWORK FUNCTION (NF)
A method for providing access to communication network health information using a communication-network-aware generative AI RAG model includes receiving, as a first input to the RAG model, a query for communication network health information and receiving, as a second input to the RAG model, at least one feed of communication network health information regarding at least one NF. The method further includes using the query to extract, from the communication network health information regarding the at least one network function, context information for the query for communication network health information, providing the query and the context information as inputs to a base LLM component of the RAG model, and generating, as output, a query response including an indication of the communication network health information requested by the query and in a natural language format.
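The retrieve-then-generate flow described above can be illustrated with a minimal sketch, assuming a keyword-overlap retriever in place of a real embedding search and with the base-LLM call left out; the feed entries and function names are invented for the example.

```python
# Minimal RAG-style retrieval sketch: extract context entries from a health
# feed that best overlap the query, then assemble the prompt for a base LLM.
def retrieve_context(query: str, feed: list[str], k: int = 2) -> list[str]:
    q = set(query.lower().split())
    # Rank feed entries by word overlap with the query (toy relevance score).
    scored = sorted(feed, key=lambda entry: -len(q & set(entry.lower().split())))
    return scored[:k]

feed = [
    "AMF amf-1 heartbeat missed 3 intervals",
    "SMF smf-2 CPU utilization nominal",
    "UPF upf-1 packet drop rate elevated",
]
query = "what is the heartbeat status of amf-1"
context = retrieve_context(query, feed)
prompt = f"Context: {context}\nQuestion: {query}\nAnswer in natural language."
```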
Disclosed is an improved approach to manage hierarchical metadata for a database system. A hierarchical metadata structure pertaining to a hierarchical object structure of a multitenant database architecture for a plurality of tenants may be maintained, where the multitenant database architecture comprises a container database (CDB) that includes a pluggable database (PDB). A request for access to a metadata object in the hierarchical metadata structure for one or more database objects in the container database may be received. In response to the request for access to the metadata, access to at least a portion of the hierarchical metadata structure is provisioned.
A rack level cage and components thereof are disclosed herein. The rack level cage can be a physical security system. The physical security system can include a rack cage that can include at least one top opening. The system can also include a blocking plate secured to the rack cage to at least partially obstruct the top opening.
A rack level cage physical security system with magnetic sensor shield is described herein. The rack level can be a physical security system that can include a rack cage, a body defining an internal volume that can contain at least one server, a door coupled to the body and moveable between an open position and a closed position, and a magnetic securement system that can prevent an external magnetic field from affecting a magnetic switch. The internal volume of the body can be accessible via the door when the door is in the open position.
Embodiments described herein are generally directed to computer-based data analytics and the processing of enterprise data, including the generation and use of data models for determining inferred characteristics associated with candidates. In accordance with an embodiment, the system utilizes data-processing pipelines and machine learning models to process structured, semi-structured, and/or unstructured sets of data, received from various sources; generate a multi-dimensional ontology and a taxonomy associated with the characteristics of open positions or potential candidates; identify, based on the data models, one or more additional or inferred characteristics associated with the candidates; and present the output by way of an analytics dashboard, scorecard, or other data visualization.
Systems, methods, and computer-readable media are provided for generating natural language project summaries via large language models including deterministically derived data value narratives. A computer-implemented method includes processing a first input configuring data stored in association with a plurality of fields, generating a narrative for a project, and causing display of the narrative in a report for the project. The narrative is generated by applying one or more deterministic operations to derive one or more values for the project based at least in part on at least one field of the plurality of fields, based at least in part on the configured data, generating a prompt, prompting a large language model with the prompt to generate a result, and storing the result as the narrative for the project. The prompt includes the one or more derived values and a context comprising the project for which a narrative is being generated.
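The two-stage approach described above, deriving values deterministically first and then handing them to the model, can be sketched as follows. The field names, the derivation formulas, and the prompt wording are illustrative assumptions; the LLM call itself is omitted.

```python
# Sketch: values are computed deterministically from the project's fields,
# then embedded in the prompt so the LLM narrates rather than computes.
def derive_values(project: dict) -> dict:
    spent = sum(t["amount"] for t in project["transactions"])
    return {
        "budget": project["budget"],
        "spent": spent,
        "remaining": project["budget"] - spent,
        "pct_used": round(100 * spent / project["budget"], 1),
    }

def build_prompt(project: dict, values: dict) -> str:
    return (
        f"Write a short status narrative for project '{project['name']}'. "
        f"Use exactly these derived values: {values}"
    )

project = {"name": "Bridge Retrofit", "budget": 200_000.0,
           "transactions": [{"amount": 45_000.0}, {"amount": 30_000.0}]}
values = derive_values(project)
prompt = build_prompt(project, values)
```

Because the numbers in the prompt are derived outside the model, the narrative the LLM produces can only restate them, which is the point of the deterministic step.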
Systems, methods, and computer-readable media are provided for generating a prompt that specifies a plurality of fields and corresponding values of record(s). The prompt specifies a data structure to use for filling in components of a change order and includes a particular natural language description of a particular issue that caused the change order. A large language model is prompted with the prompt to generate a result based at least in part on the corresponding values of the record(s). The result from the large language model includes a particular data structure comprising particular values of a particular change order, which may then be displayed on a user interface along with an option to save the particular change order. Information from the record(s) and/or result(s) from the large language model may indicate whether or not manual labor, financial resources, and/or other resources are impacted by the change, and an impact may be stored in association with the change order reflecting a corresponding type of impact. The user interface may display another option to provide natural language input to modify the particular change order, causing the large language model to be re-prompted to generate another result to trigger change order creation.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
10.
LEVERAGING LARGE LANGUAGE MODELS TO CRAFT MEANINGFUL SYNTHESIS OF THE UNDERLYING TRENDS AND PATTERNS IN CERTAIN SEGMENTS
Systems, articles, and computer-implemented methods are provided for generating summaries of a plurality of insights in multi-dimensional data to describe underlying trends using a large language model. A data structure is generated describing the plurality of insights, where the data structure encapsulates, for each insight of the plurality of insights to be included: a member of a data hierarchy that fits a descendant dimension that includes the insight, a value of the descendant dimension that fits the insight, and a characteristic of the insight. The data structure is included within a prompt to a large language model to summarize the plurality of insights. The prompt may also include data representing a relationship between the plurality of insights, such as how a first insight of the plurality of insights contributes to a second insight of the plurality of insights.
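One plausible concrete shape for the insight data structure described above is shown below. The field names ("member", "dimension_value", "characteristic") and the relationship encoding are assumptions made for illustration, not the patent's own schema.

```python
import json

# Sketch: each entry carries the hierarchy member, the descendant-dimension
# value, and a characteristic; relationships between insights (e.g. one
# contributing to another) ride alongside, and the whole structure is
# serialized into the summarization prompt.
insights = [
    {"member": "North America", "dimension_value": "Q3 Sales",
     "characteristic": "largest increase"},
    {"member": "Canada", "dimension_value": "Q3 Sales",
     "characteristic": "contributes 60% of the increase"},
]
payload = {
    "insights": insights,
    "relationships": [{"from": 1, "to": 0, "kind": "contributes_to"}],
}
prompt = ("Summarize the underlying trend in these insights:\n"
          + json.dumps(payload, indent=2))
```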
Systems, methods, and computer-readable media are provided for using generative AI enriched with metadata about historical document characteristics to transform documents of various formats, including images, to the fields and values they represent. A prompt template may be selected in association with a type of document. The prompt template indicates field definition(s) of field(s) to be detected in the document and location(s) in which the field(s) have been detected in prior documents. A large language model is prompted with a prompt generated using the prompt template to generate a result that assigns value(s) to the field(s). Output from the language model is used for identifying the field-to-value mapping for the document, such that data detected from the document may be stored in appropriate database structures of a database. Metadata stored in association with the prompt template is updated based on location(s) in the document in which the field(s) were detected, and the value(s) of the field(s) are stored in a database. Outbound documents may be similarly translated to detect values of corresponding fields requested by third parties, even if those values are not stored in the database. In this scenario, values for fields may be detected in outbound documents using the prompt templates enriched with metadata as processed by the large language model before such information is prepared to be sent to a third party.
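The location-metadata feedback loop described above can be sketched as follows. The template structure, the "most common prior location" heuristic, and all field names are illustrative assumptions.

```python
# Sketch: the prompt template keeps a running record of where each field was
# detected in prior documents; the most common location is injected into the
# prompt as a hint, and the record is updated after each new detection.
template = {
    "doc_type": "invoice",
    "fields": {
        "total": {"definition": "grand total amount",
                  "prior_locations": ["bottom-right", "bottom-right"]},
    },
}

def most_common(locations: list[str]) -> str:
    return max(set(locations), key=locations.count)

def render_prompt(template: dict) -> str:
    lines = []
    for name, meta in template["fields"].items():
        hint = most_common(meta["prior_locations"])
        lines.append(f"Field '{name}' ({meta['definition']}), "
                     f"usually found at: {hint}")
    return "Extract these fields:\n" + "\n".join(lines)

def record_detection(template: dict, name: str, location: str) -> None:
    # Update template metadata with where the field was found this time.
    template["fields"][name]["prior_locations"].append(location)

record_detection(template, "total", "bottom-left")
prompt = render_prompt(template)
```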
Techniques discussed herein relate to generating and utilizing snapshots (also referred to as "service images") of a cloud-based service. A snapshot may be generated within a source environment (e.g., one compartment and/or region) and re-instantiated in a target environment (e.g., a different compartment and/or region, the same compartment/region as would be the case in a recovery scenario). The snapshot may include serialized data of any suitable combination of resource metadata, images, block/boot volume content, runtime state data, environmental variables, and the like of the service of the source environment, at a time at which the snapshot was generated. The snapshot may be deserialized in the target environment and used to perform infrastructure and/or artifact/software releases to bring the control plane and/or data plane resources of the target environment to a desired state corresponding to the state of the service in the source environment when the snapshot was generated.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/08 - Configuration management of networks or network elements
13.
SYSTEM AND METHOD FOR USE WITH A DATA ANALYTICS ENVIRONMENT TO ENABLE USE OF AI IN PROVIDING CUSTOMER SUPPORT
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to enable use of AI in providing customer support. Machine learning AI models are trained based on one or more previous service request lifecycles of service requests of a customer to determine latent emotions of the customer based on determined customer problem data. A customer service prioritization signal related to a current service request of the customer is generated by a predictive analytics application that includes the models. The customer service prioritization signal is indicative of a need to prioritize a current service request of the customer based on the determined latent emotions of the customer and is generated during and prior to the end of the lifecycle of the current service request whereby escalation of the current service request may be deferred or prevented.
G06Q 30/016 - Providing customer assistance, e.g. assisting a customer within a business location or via a help desk
An interactive digital assistant action interface includes a computer including processors that provide access to a data analytics environment, a chat-assistance service or application, and a large language model (LLM). The chat-assistance service or application delivers to the LLM a prompt corresponding to a received query and a desired task is determined based on the LLM receiving the prompt. One or more processes, steps, and/or APIs of the determined desired task are executed at the data analytics environment, and results of the one or more processes, steps, and/or APIs of the determined desired task being executed at the data analytics environment are provided.
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to provide hi-query AI for use with the data analytics environment. Systems and methods disclosed can provide for query processing and semantic analysis. The system can take a user's natural language question and run a semantic search to discern the query's intent and find tables relevant to the question, and generate a query to run against a data store or data warehouse.
Embodiments described herein are generally related to data analytics environments, and to systems and methods for providing aggregated summaries and aspect scores associated with unstructured textual data. In accordance with an embodiment, the system uses a key-based or batch approach that assesses factors associated with an unstructured textual dataset, such as, for example, a total number of text entries per key, or the character length of each text entry. Based on a consideration of such factors, the system sends batches of text entries, and a prompt, to a large language model processor, to collect intermediate batch results. The intermediate batch results can be used first to develop a numerical score or summary for each key, directed to various aspects of interest within the data; and subsequently to generate aggregated summaries and/or aspect scores associated with the textual dataset, for use in displaying visualizations or returning additional analytical information.
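The batching step described above, splitting a key's text entries into LLM-sized batches based on factors like entry count and character length, can be sketched as follows. The character budget and the sample entries are illustrative assumptions; the LLM call and aggregation of intermediate batch results are omitted.

```python
# Sketch: split text entries for one key into batches bounded by a character
# budget, so each batch (plus a prompt) fits a single LLM request.
def make_batches(entries: list[str], max_chars: int = 50) -> list[list[str]]:
    batches, current, size = [], [], 0
    for entry in entries:
        if current and size + len(entry) > max_chars:
            batches.append(current)   # close the full batch
            current, size = [], 0
        current.append(entry)
        size += len(entry)
    if current:
        batches.append(current)
    return batches

entries = ["great service", "slow delivery but good support",
           "would order again", "packaging was damaged"]
batches = make_batches(entries)
```

Each batch would then be sent with a prompt to the LLM, and the intermediate results combined into per-key scores or summaries as the abstract describes.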
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to provide an AI-based assistant for use in software development. In accordance with an embodiment, an exemplary method can provide access to a data analytics environment by a computer including one or more processors. The method can provide a first agent operating on the computer, wherein the first agent monitors an application running at an application server. The method can provide a second agent operating on the computer, wherein the second agent comprises a connection to one or more large language models. The method can, upon detection by the first agent, of an error or exception associated with the application running at the application server, utilize, by the second agent, the LLM to generate a fix responsive to the detected error or exception.
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for augmenting large language models with graph knowledge generated by universal modelling of datasets. In accordance with an embodiment, a method for augmenting large language models with graph knowledge generated by universal modeling of datasets, is provided. The method can provide, by a computer including one or more processors, access to a data analytics environment. The method can create a graph schema associated with a dataset of the data analytics environment. The method can receive a query associated with the dataset of the data analytics environment. The method can receive, at a large language model, a parsed version of the query, together with the graph schema. The method can, based upon the received parsed query and the graph schema, generate, by the large language model, a graph query.
Embodiments described herein are generally related to cloud computing, cloud infrastructure, or data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment or other cloud computing environment to provide an open data share for data formats with Delta Sharing. The systems and methods described herein allow a data sharing service to share data with a client regardless of the format of the data. In accordance with an embodiment, a data share server generates a data log associated with a data table at a data source. The data share server can receive a request from a data sharing client. Based upon the created data log associated with the data source, the data share server can share the data table, together with the generated data log associated therewith, with the data sharing client.
In accordance with an embodiment, described herein is a system and method for automated data warehouse creation and extension from user natural language requests. A data augmentation system, operating on one or more computers, can receive a natural language input from a user, including an instruction to augment a set of data, for example to create a fact/dimension, or to extend an existing data entity by bringing additional columns from a source data and publishing the combined data to a target data warehouse instance. The system determines an understanding associated with the user instruction in plain-language terms (for example, "extend sales order transactions with approval status"), and determines and performs a corresponding course of actions to create, extend, or otherwise augment the set of data, without requirement for the user to have a detailed knowledge of the data warehouse, its schemas, or other data dependencies.
In accordance with an embodiment, described herein are systems and methods for use of an in-memory data grid as a vector database, with linearly-scalable data ingestion, for use in generative artificial intelligence (AI), data visualization, or other applications that include the use of a large language model (LLM) or a retrieval-augmented generation (RAG) process. In accordance with an embodiment, where AI-related tasks or processes, such as content ingestion and vectorization, or vector similarity searches, can be performed in parallel, the in-memory data grid provides efficient scaling and execution of such processes. When tasked with large amounts of content to be vectorized, for example in a cloud environment or as part of an on-premise solution, the system can scale its processing of the content, in parallel where indicated, to achieve optimal utilization of available computing hardware resources and expeditiously perform required tasks or processes.
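The parallel ingestion-and-vectorization idea above can be sketched with standard-library primitives. This is a toy under stated assumptions: a hash-like deterministic "embedding" stands in for a real embedding model, a plain dict stands in for the distributed in-memory grid, and a thread pool stands in for the grid's parallel execution.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy deterministic "embedding"; a real system would call an embedding model.
def vectorize(text: str, dim: int = 4) -> list[float]:
    return [float(sum(ord(c) for c in text[i::dim])) for i in range(dim)]

# Stand-in for the distributed in-memory data grid.
grid: dict[str, list[float]] = {}

docs = {"d1": "network latency report", "d2": "disk usage summary",
        "d3": "quarterly revenue notes"}

# Vectorize content in parallel and ingest results into the "grid".
with ThreadPoolExecutor(max_workers=3) as pool:
    for key, vec in zip(docs, pool.map(vectorize, docs.values())):
        grid[key] = vec
```

In a real deployment the grid itself would both store the vectors and execute the vectorization and similarity-search tasks near the data, which is what makes the ingestion linearly scalable.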
A resource analytics system (RAS) is disclosed that creates a single, centralized, and trusted source of cloud resource inventory that provides near real-time visibility for a user into cloud resources deployed across different geographical regions within a cloud environment. The RAS obtains resource metadata related to a set of resources deployed in a cloud environment and provides the resource metadata in a source data model. The RAS extracts user-specific resource metadata from the source data model and populates a target data model with the user-specific resource metadata. The target data model is created in a user tenancy associated with a user. The RAS receives a request to query the user-specific resource metadata in the target data model and obtains a query result related to execution of the query. The RAS causes display of the query result via one or more user interfaces.
Techniques for a unified data format that may be used across memory tiers are provided. In one technique, a compression unit is generated that comprises a plurality of data blocks. The compression unit stores tabular data in a columnar format. The plurality of data blocks includes (1) a primary header block that represents a first set of rows of the tabular data and (2) a secondary header block that represents a second set of rows, of the tabular data, that is different than the first set of rows. The compression unit is stored in persistent storage.
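The compression-unit layout described above can be made concrete with a small sketch. The block fields and the reassembly method are illustrative assumptions, not the patent's on-disk format; the point is that one unit stores columnar data across two header blocks covering disjoint row ranges.

```python
from dataclasses import dataclass

@dataclass
class HeaderBlock:
    row_start: int
    row_end: int                 # exclusive upper bound of this row range
    columns: dict[str, list]     # column name -> values for this row range

@dataclass
class CompressionUnit:
    # Primary header block covers the first set of rows; the secondary
    # header block covers a different, non-overlapping set of rows.
    primary: HeaderBlock
    secondary: HeaderBlock

    def column(self, name: str) -> list:
        # Reassemble one column across both row ranges.
        return self.primary.columns[name] + self.secondary.columns[name]

unit = CompressionUnit(
    primary=HeaderBlock(0, 2, {"id": [1, 2], "city": ["Oslo", "Lima"]}),
    secondary=HeaderBlock(2, 4, {"id": [3, 4], "city": ["Pune", "Kiel"]}),
)
```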
Techniques are described that persist and restore in-memory neighbor graph vector indexes, which include an index of vertex identifiers between layers of a plurality of layers, for a graph-based approximate nearest neighbor search in a vector database. The plurality of layers include a higher layer and a lower layer that includes more vertices than the higher layer. A checkpoint is generated based on the neighbor graph vector index. The checkpoint can include a plurality of unit entries. Each unit entry can include vertex data that identifies vertices in respective subsets of a plurality of subsets of vertices in a lower layer of the neighbor graph vector index.
A size of a neighbor graph vector index can be estimated. The neighbor graph vector index can be an index of neighbor vertices for a graph-based approximate nearest neighbor search in a vector database. A vector memory pool size of a vector memory pool in a database instance memory can be determined based on the estimated size of the neighbor graph vector index. The database instance memory contains data and control information for a database instance. The neighbor graph vector index can be stored in the vector memory pool. An operation affecting available space in the database instance memory can be detected. The vector memory pool size can be automatically adjusted in response to detecting the operation.
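The sizing-and-adjustment logic described above can be sketched as follows. The estimation formula (float32 vector components plus int32 neighbor links), the 20% headroom factor, and the class names are all assumptions made for the example.

```python
# Toy estimate of a neighbor graph vector index's memory footprint.
def estimate_index_bytes(n_vectors: int, dim: int, neighbors: int = 32) -> int:
    vector_bytes = n_vectors * dim * 4          # float32 components
    edge_bytes = n_vectors * neighbors * 4      # int32 neighbor ids per vertex
    return vector_bytes + edge_bytes

class InstanceMemory:
    """Stand-in for database instance memory holding a vector memory pool."""

    def __init__(self, total: int, index_bytes: int):
        self.total = total
        # Size the vector memory pool from the estimated index size,
        # with illustrative 20% headroom.
        self.pool = int(index_bytes * 1.2)

    def on_memory_pressure(self, needed: int) -> None:
        # Automatically shrink the vector pool when another operation
        # reduces available space in the database instance memory.
        self.pool = max(self.pool - needed, 0)

idx = estimate_index_bytes(n_vectors=1000, dim=128, neighbors=32)
mem = InstanceMemory(total=10_000_000, index_bytes=idx)
mem.on_memory_pressure(needed=100_000)
```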
Various embodiments of the present technology generally relate to systems and methods for providing an inter-domain engine for masking communications exchanged between a visitor network and a home network. In an aspect, the inter-domain engine may be part of a home network and determine a service request from a visitor consumer NF. Based on the service request, the inter-domain engine may determine a service response containing NF topology information for furnishing the service request within the home network. Responsive to determining the service response, the inter-domain engine may generate a mask NF profile based on the service response and generate a mask service response based on the mask NF profile and the service request. Once generated, the inter-domain engine may provide the mask service response to the visitor network.
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, by routing a service request depending on the content or context of the request
H04W 12/02 - Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
H04W 48/18 - Selecting a network or a telecommunication service
27.
CONNECTING ZERO TRUST PACKET ROUTING ENABLED NETWORKS
Techniques are described for enforcing the flow of traffic through one or more gateways using ZPR policy. A method includes accessing a ZPR policy; identifying, from the ZPR policy, one or more ZPR statements that specify one or more gateways and a connection between one or more first endpoints in a first virtual cloud network (VCN) and one or more second endpoints that are external to the first VCN; generating rules to enforce the flow of traffic; and distributing one or more first rules of the rules to at least one of the one or more gateways to enforce the flow of traffic, one or more second rules of the rules to a first enforcement point (EP) associated with the first VCN, and one or more third rules of the rules to a second EP associated with the one or more second endpoints.
Systems, methods, and computer-readable media are provided for accessing a stored data structure representing a decision tree, determining a plurality of rows of text representing leaf nodes of the decision tree and a plurality of conditions that describe paths to the leaf nodes along with a label for the corresponding leaf node, generating a prompt including the plurality of rows of text and a request to generate a result comprising a natural language summary column, executing the prompt against a large language model, receiving a result comprising a natural language summary column, storing a first natural language summary of a first path from the natural language summary column in association with a first leaf node in the stored data structure, and storing a second natural language summary of a second path from the natural language summary column in association with a second leaf node in the stored data structure.
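The path-extraction step described above, turning each leaf of a stored decision tree into a row of text listing its conditions and label, can be sketched as follows. The nested-dict node layout and condition strings are illustrative assumptions; the LLM execution that fills in the summary column is omitted.

```python
# Toy stored decision tree: internal nodes carry a condition, leaves a label.
tree = {
    "cond": "credit_score < 600",
    "yes": {"label": "deny"},
    "no": {"cond": "income > 50000",
           "yes": {"label": "approve"},
           "no": {"label": "review"}},
}

def leaf_rows(node: dict, path: list[str]) -> list[str]:
    # Emit one row of text per leaf: the conjunction of conditions along
    # the path to that leaf, plus the leaf's label.
    if "label" in node:
        return [f"IF {' AND '.join(path) or 'TRUE'} THEN {node['label']}"]
    return (leaf_rows(node["yes"], path + [node["cond"]])
            + leaf_rows(node["no"], path + [f"NOT ({node['cond']})"]))

rows = leaf_rows(tree, [])
prompt = ("For each row, add a natural language summary column:\n"
          + "\n".join(rows))
```

The model's result, one summary per row, would then be stored back against the corresponding leaf nodes in the stored data structure.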
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
29.
GENERATING COHESIVE EXPLANATIONS THAT COMMUNICATE INSIGHTS AND PATTERNS ON MULTI-DIMENSIONAL FINANCIAL PLANNING DATA
Systems, articles, and computer-implemented methods are disclosed for generating natural language summaries of a multi-dimensional analysis of a detected anomaly within a member of multi-dimensional data by prompting a LLM with a prompt generated to include data about the anomaly in a manner understandable by the LLM. The prompt to the LLM includes a path to a member of the hierarchy containing an anomaly with a delimiter between the member and ancestor nodes. The delimiter allows the ancestral context of the member of the hierarchy to be understood by the LLM. The prompt also includes a metric defining a magnitude of the anomaly in relation to another value, such as an average, a value of the anomaly, a time corresponding to the anomaly, and one or more examples of other anomalies with included data about those anomalies matching the type of data provided for the detected anomaly.
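The delimiter-based prompt construction described above can be sketched as follows. The specific delimiter, the magnitude metric (ratio to the average), and the field wording are assumptions made for the example.

```python
# The delimiter lets the LLM recover the member's ancestral context
# from a single line of text.
DELIM = " > "

def anomaly_prompt(path: list[str], value: float, avg: float,
                   when: str) -> str:
    # Magnitude metric: the anomalous value relative to an average.
    magnitude = round(value / avg, 2) if avg else float("inf")
    return (
        f"Member: {DELIM.join(path)}\n"
        f"Anomalous value: {value} at {when} "
        f"({magnitude}x the average of {avg}).\n"
        "Explain this anomaly in plain language."
    )

p = anomaly_prompt(["Total Expenses", "EMEA", "Travel"],
                   value=90_000.0, avg=30_000.0, when="2024-Q2")
```

Few-shot examples of other anomalies, formatted the same way, would be appended to the prompt as the abstract describes.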
Systems, methods, and computer-readable media are provided for detecting user-specific context for a receipt and embedding the user-specific context in a prompt to provide a hint that helps a large language model detect value(s) for field(s) from the receipt. The receipt may then be integrated with an expense management system.
Systems, methods, and computer-readable media are provided for accessing a user request received in a user session with an application. Item(s) of data may be selected in the user session in association with the user request. A data management system determines that the request is for or otherwise relevant to item(s) of application functionality such as financial exceptions, account balances, and/or operations detail. The data management system generates a prompt by adding structured data return template(s) for triggering the item(s) of application functionality, the user request, and, if applicable, structured text representing the item(s) of data to a prompt template. The data management system prompts a large language model with the prompt and receives a result. The result includes a data structure conforming to the structured data return template(s) and that is based at least in part on the item(s) of data. The data management system triggers the relevant item(s) of application functionality based at least in part on the result and causes display of information indicating the item(s) of application functionality have been triggered.
Systems, methods, and computer-readable media provide a context-specific prompt to answer a user query. The systems, methods, and computer-readable media determine a context based on content of a natural language request and/or determine a role of a user who submitted the natural language request. Additionally or alternatively, templates or RAG sources that will be used for prompt generation may include financials domain-specific knowledge or other domain-specific knowledge or insights. Inclusion of this additional information in the prompt enhances the context to promote more accurate results from a large language model. In one embodiment, the prompt templates are created from various RAG sources, such as payables, general ledger, receivables, and asset management, containing structured data and information specific to the financial domain, enterprise, or other domain, which helps craft accurate prompts. A prompt is generated that identifies a subset of available fields and other selected information based on the role or other context. The prompt template may contain domain-specific knowledge and use relevant domain or enterprise information to drive relevant results, and an executable query is generated by a large language model based on the prompt. The executable query causes data to be retrieved from a database to generate a result, and information is displayed based at least in part on the result.
Systems, methods, and computer-readable media are provided for providing access, via a read-optimized database service, to functionally oriented, pre-built, metadata-based logical objects. Each logical object provides access to a set of resources defined by a logical schema relevant to a functional area. Each logical schema is determined based at least in part on a read-optimized, synchronized version of one or more underlying related database structures stored to a read-optimized database accessible via the logical schema using the read-optimized database service, with the logical schema being different than the database schema of the underlying database structures. Requests to the read-optimized database service from a consumer of a particular functional area are evaluated against a particular set of logical resources associated with the functional area and translated to map to relevant underlying database structures, thus eliminating the requirement for consumers to understand complex underlying database structures and shielding consumers from underlying database structure changes in the future. Further, some of the key text data in reference logical objects can be vectorized for use in LLM-RAG use cases to assist in semantic/similarity search of user queries. An attribute defaulting configuration interface and process is also described.
Systems, methods, and computer-readable media are provided for determining matches between records of different systems based on aggregate record data, and graphically marking potentially matched groups of data along with predicted confidence levels. Preliminary matching tools may allow users to define various rules based on which a majority of the transactions can be matched and reconciled. However, remaining transactions are disposed of in an interactive matching process. The matches may be determined unidirectionally from a source transaction to transactions from a target ledger, or bidirectionally from transactions in the target ledger to transactions other than the source transaction. Transactions may be matched many-to-many, one-to-many, or many-to-one, and a proposed order of match selections may be presented in a user interface. Match metadata or insights may be displayed to show a confidence of the match, reasons for the confidence, and/or a confidence of other matches that may be more beneficial than a match with a source transaction. The confidence and match insights may be generated by a machine learning model with access to transactions from a source transaction ledger and a target transaction ledger. The machine learning model may be trained on manual activity for prior matches that have been made. Matches may be performed using a hybrid machine learning model that accounts for random forests, decision trees, neural networks, a naïve Bayes algorithm, and/or a generalized linear model. Machine learning models also incorporate ongoing feedback from the users, who can either accept or reject suggested matches; hence the models undergo an evolution process and constantly update from user patterns.
A system accesses a code module that includes one or more units of code and instructs a machine learning model to generate a test suite based on a specification for the code module. The test suite includes tests for testing the code module to verify that one or more units of code successfully execute in accordance with the specification. The specification includes preconditions that precede successful execution of the one or more units of code and postconditions that exist following successful execution of the one or more units of code. The machine learning model generates the test suite. The system receives the test suite from the machine learning model. The system stores and/or transmits the test suite for use in testing the code module.
A multi-cloud control plane of a source cloud environment receives from a control plane of a target cloud environment, a first request for accessing a service provided in the source cloud environment, the first request including a plurality of identifiers that enable identifying a first set of resources in the target cloud environment that are allocated to a customer. A first identifier is extracted from the plurality of identifiers included in the first request. Responsive to validating the first identifier, the multi-cloud control plane obtains a resource principal session token (RSPT), and information related to a second set of resources in the source cloud environment that are allocated to the customer. The multi-cloud control plane triggers the service provided in the source cloud environment based on the RSPT, wherein the service deploys service-based resources based on the second set of resources in the source cloud environment.
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to determine a probability of failure or downtime in work orders. In accordance with an embodiment, an example method can provide access to a work order application at a data analytics environment, the work order application providing a work order canvas at which a work order comprising an instance of a work order asset is identified. The method can generate, by a prediction engine of the data analytics environment, an indication of a likelihood of success of the work order, wherein the prediction engine utilizes data associated with the instance of the work order asset to provide the indication of the likelihood of success. The method can provide the indication of the likelihood of success of the work order via an interface.
G06Q 10/0635 - Risk analysis related to the activities of enterprises or organisations
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target for an organisation; planning actions based on goals; analysis or evaluation of the effectiveness of goals
G06Q 10/20 - Administration of product repair or maintenance
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to provide enrichment of data from external sources via a large language model. In accordance with an embodiment, the systems and methods can utilize a high-powered LLM to suggest data-related factors that may not be explicitly represented in the user's dataset. For example, upon a user's selection of a pair of datapoints, the systems and methods can utilize an LLM to provide suggestions for root causes of those datapoints, even though such root causes are not explicitly represented in the dataset.
In accordance with an embodiment, described herein are systems and methods for use of an in-memory data grid as a vector database, with linearly-scalable data ingestion, for use in generative artificial intelligence (AI), data visualization, or other applications that include the use of a large language model (LLM) or a retrieval-augmented generation (RAG) process. In accordance with an embodiment, the in-memory data grid provides functionality to represent content as document chunks containing text, embedding, and metadata, which allows the system to support a variety of RAG framework integrations in a consistent manner. To further support the use of RAG processes, the system can support document ingestion via various types of document sources, such as the use of HTTP URLs that allow retrieval of documents using HTTP GET calls; or, for example in cloud environments, the use of object storage and/or other cloud provider storage services as appropriate.
A method and apparatus for offloading compute-intensive workloads is provided. A database system compiles an execution plan to generate an offload-enabled plan by identifying a candidate offloading region in the execution plan, generating and adding an offloading branch in the offload-enabled plan, corresponding to the candidate offloading region, for execution by a compute offload runtime, wherein the compute offload runtime comprises a compute offload runtime library executing on the database system and on each node of a compute offload server, and adding the candidate offloading region as a fallback branch in the offload-enabled plan. The database system executes the offload-enabled plan by executing the offloading branch using one or more compute nodes in the database server or the compute offload server using the offload runtime or by executing the fallback branch using one or more compute nodes in the database server.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
Systems, methods, and machine-readable media may facilitate programmable data trimming. One or more request instructions may be received from an application. The one or more request instructions may include a request length specifying a response size expected for a response from a responder. The one or more request instructions may further include a trim length specifying a portion of the response to be retained. A request may be configured based at least in part on the request length and the trim length. The request may be transmitted to the responder via a network. The response may be received from the responder. The response may be trimmed to retain only the portion of the response specified by the trim length. Storage of only the portion of the response in a memory location may be caused.
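The request/trim mechanics described in this abstract can be sketched as follows; the class and function names are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of programmable data trimming: the request carries
# both the expected response size and the portion to be retained.
from dataclasses import dataclass

@dataclass
class TrimmedRequest:
    request_length: int  # response size expected from the responder
    trim_length: int     # portion of the response to retain

def trim_response(request: TrimmedRequest, response: bytes) -> bytes:
    """Retain only the leading trim_length bytes of the response."""
    if len(response) != request.request_length:
        raise ValueError("response size does not match request_length")
    return response[:request.trim_length]

req = TrimmedRequest(request_length=8, trim_length=3)
print(trim_response(req, b"ABCDEFGH"))  # b'ABC'
```

Only the trimmed portion would then be written to the memory location, which is the storage saving the abstract describes.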
G06F 15/173 - Communication between processors using an interconnection network, e.g. matrix, shuffle, pyramid, star or tree network
43.
EFFECTIVE LLM PROMPT CREATION FOR MULTI-DIMENSIONAL DATA ANALYSIS
Systems, methods, and computer-readable media are provided for detecting an anomaly involving multiple dimensions, and generating a summary of the anomaly at least in part by prompting an LLM with key-value pairs relevant to the anomaly. The key-value pairs provided may be determined by drilling down into dimensional members most relevant to the anomaly (e.g., Top N and/or Bottom N members) to provide context for the LLM to summarize the anomaly and account for various levels in a multidimensional hierarchy. The key-value pairs may additionally or alternatively be determined by comparing values from different times relevant to the anomaly to provide context for the LLM to summarize the anomaly and account for relevant time variances. The key-value pairs of the Top N and/or Bottom N members and/or time variant comparison values may be included to enrich the LLM's summary to account for the multidimensional hierarchy and/or relevant time variances without overwhelming the LLM with extraneous information.
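The Top-N key-value prompt enrichment described above can be sketched as follows; the prompt wording, member names, and selection heuristic are assumptions chosen for illustration, not the patented method.

```python
# Illustrative sketch: build an anomaly-summary prompt from key-value pairs
# for the Top N dimensional members, so the LLM gets relevant context
# without being overwhelmed with extraneous dimension values.

def top_n(members: dict[str, float], n: int) -> list[tuple[str, float]]:
    # Keep only the n largest members by value.
    return sorted(members.items(), key=lambda kv: kv[1], reverse=True)[:n]

def build_prompt(anomaly: str, members: dict[str, float], n: int = 3) -> str:
    pairs = [f"{name}={value}" for name, value in top_n(members, n)]
    return f"Summarize the anomaly '{anomaly}' given these members:\n" + "\n".join(pairs)

regions = {"US": 340.0, "EU": 120.0, "APAC": 90.0, "LATAM": 15.0}
print(build_prompt("Q3 revenue spike by region", regions))
```

A Bottom-N selection or a time-variance comparison could feed the same prompt template in the same key-value form.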
Systems, methods, and computer-readable media are provided for triggering functionality on data to be generated in a user interface and/or data shown or visualized in a user interface based on a natural language request that references actions to be performed and data items to use in performing the actions. The user interface actions are triggered based on a structured object generated by a large language model (LLM), which may then be processed, validated, and used to carry out the actions. The structured object may cause generation, on the user interface, of a representation of slice(s) of data across different dimensions with different filter(s) applied to include one or more dimensions and exclude one or more other dimensions. The slice(s) of data may be determined based on available dimension(s) and/or dimension value(s) specified, in the prompt, for the data schema. If the representation is a grid, a shape of a grid may be recommended for showing the slice(s) of data determined.
Systems, methods, and computer-readable media are provided for triggering functionality on data to be generated in a user interface and/or data shown or visualized in a user interface based on a natural language request that references actions to be performed and data items to use in performing the actions. The user interface actions are triggered based on a structured object generated by a large language model (LLM), which may then be processed, validated, and used to carry out the actions. The LLM may be instructed to use control(s) of a displayed representation of a set of data, and the structured object generated by the LLM may cause updating, on the user interface, the displayed representation to reflect change(s) requested (e.g., to adjust filters, change a visualization or view, or zoom in or out on a set of multidimensional data). The control(s) may be selected from among representation transformation action(s) that are also available to be performed against the displayed representation via direct user input.
Systems, methods, and other embodiments associated with generative AI assistance that is integrated into a command line interface (CLI) to a cloud platform are described. In one embodiment, an example method includes intercepting, in a command line interface to a cloud platform, a malformed command to the cloud platform. The method records the malformed command in a conversation history and passes the malformed command to the cloud platform to execute. The method intercepts a response to the malformed command that was returned from the cloud platform to the command line interface. The method passes the response to a generative artificial intelligence model to initiate generation of an enhanced response that includes a correction to the malformed command based at least in part on the response and context from the conversation history. Finally, the method presents the enhanced response in the command line interface.
G06F 9/451 - Execution arrangements for user interfaces
G06F 9/455 - Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Techniques for processing "Sessionless" transactions are provided. In one technique, a commit transaction instruction (that includes a transaction identifier) is received from a requesting entity at a first instance of a database server. In response, a prepare message is transmitted to a second instance of the database server. A response indicating that the second instance successfully performed an action in response to the prepare message is received from the second instance. After the response is received, a transaction that is identified by the transaction identifier is committed. A response, to the commit transaction instruction, that indicates that the transaction is committed is transmitted to the requesting entity. Other techniques include piggybacking transaction instructions with database operation instructions and allowing different database server instances to perform work for a single transaction.
Systems, methods, and computer-readable media are provided for triggering functionality on data to be generated in a user interface and/or data shown or visualized in a user interface based on a natural language request that references actions to be performed and data items to use in performing the actions. The user interface actions are triggered based on a structured object generated by a large language model (LLM), which may then be processed, validated, and used to carry out the actions. The LLM may be further instructed based on available interface functionality control(s) and which content has been selected on the user interface. The structured object may be used to generate output content that is based on the selected content, such as a summary or other text transformation, a targeted visualization, output document for consumption by another application, or other consumable content. The output content may be stored in association with a content consumer for display in a user interface.
Systems and methods for implementing a secure multiparty protocol for fine-tuning of language models are disclosed. An end-to-end privacy-preserving protocol using secure multi-party computation (MPC) and executed on a plurality of computing nodes enables fine-tuning a language model targeting classification tasks using private, sensitive data while providing secure protection of the training data and without sacrificing model accuracy.
A database query processing method includes receiving a natural language request for information contained within a database from a user in an application session, prompting a large language model to generate a SQL request, and receiving a particular SQL request from the large language model that is parsed to identify a command to access one or more database structures. A security predicate is appended to the command, creating a modified SQL request, to enforce, on the user-authenticated client device that submitted the request, one or more database access constraints that are not enforced in a database session between the application and a database. The modified SQL request is used to access data in the database session, and a visualization of the accessed data is caused to be displayed in the application session.
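The predicate-appending step can be sketched as follows; the predicate shape and column name (`tenant_id`) are illustrative assumptions, not the patented access-constraint mechanism.

```python
# Hedged sketch: append a security predicate to LLM-generated SQL so that
# the end user's access constraints are enforced in the database session.

def append_security_predicate(sql: str, tenant_id: str) -> str:
    predicate = f"tenant_id = '{tenant_id}'"
    # Attach to an existing WHERE clause, or add one if absent.
    if " where " in sql.lower():
        return f"{sql} AND {predicate}"
    return f"{sql} WHERE {predicate}"

llm_sql = "SELECT name, total FROM orders WHERE total > 100"
print(append_security_predicate(llm_sql, "t-42"))
# SELECT name, total FROM orders WHERE total > 100 AND tenant_id = 't-42'
```

A production system would apply the predicate at the parse-tree level rather than by string manipulation, which this toy sketch uses only for brevity.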
The techniques described herein relate to a first infrastructure provided by a first cloud service provider, wherein the first infrastructure is connected, using an overlay bridge, to a second infrastructure of a second cloud service provider that is different from the first cloud service provider, wherein the first infrastructure comprises a first set of compute resources and the second infrastructure comprises a second set of compute resources; the first infrastructure is configured to form a cloud network between the first set of compute resources and a second set of compute resources; and the cloud network is configured to provide a cloud service of the second cloud service provider to a customer of the first cloud service provider using the first set of compute resources and the second set of compute resources.
A multi-cloud control plane of a source cloud environment receives, from a control plane of a target cloud environment, first information included in a first metadata instance and second information included in a second metadata instance. The first information represents a logical organization of resources, and the second information indicates one or more networking resources that are to be created in the source cloud environment. The multi-cloud control plane creates the logical organization of resources associated with the first metadata instance and the one or more networking resources associated with the second metadata instance in the source cloud environment. Responsive to receiving a request from a customer of the target cloud environment for accessing a service provided by the source cloud environment, one or more service-based resources are deployed based on at least one of the first metadata instance and the second metadata instance.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/455 - Emulation; interpretation; software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/04 - Network management architectures or arrangements
H04L 41/0894 - Policy-based network configuration management
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements, characterised by the time relationship between creation and deployment of a service
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
53.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING STREAM CONTROL TRANSMISSION PROTOCOL (SCTP) MULTIHOMING BETWEEN A KUBERNETES ENVIRONMENT AND A NON-KUBERNETES ENVIRONMENT
A method for providing stream control transmission protocol (SCTP) multihoming between a Kubernetes environment and a non-Kubernetes environment includes receiving, at an SCTP multihoming router (SMR) deployed as a pod within the Kubernetes environment and from a client in the non-Kubernetes environment, an SCTP INIT message for establishing a multihomed SCTP association between first and second Internet protocol (IP) addresses of the client and first and second local IP addresses of the SMR. The first and second local IP addresses of the SMR are added to an SCTP header of the SCTP INIT message. Source and destination network address translations (NATs) are performed to change a source IP address and a destination IP address in an IP header of an IP datagram carrying the SCTP INIT message to a third local IP address of the SMR and a service IP address of a service in the Kubernetes environment, respectively.
Techniques for assessing security risk at scale for a computing environment are disclosed. In an example method, a computing system accesses a risk model specified for a computing environment including at least a set of individual risk factors, a set of composite risk factors, and a final composite function for computing an overall risk score. The computing system receives a set of one or more inputs. The computing system computes an individual risk score for each individual risk factor using at least one input. The computing system computes a composite risk score for each composite risk factor. The computing system computes the overall risk score by applying the final composite function to at least two composite risk scores and outputs the overall risk score.
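The hierarchical risk-model structure described above can be sketched as follows; the factor names, the mean-based composite function, and the max-based final composite function are assumptions for illustration only.

```python
# Illustrative sketch of a hierarchical risk model: individual risk factors
# feed composite risk factors, which feed a final composite function.

def score_individual(factors: dict[str, float]) -> dict[str, float]:
    # Clamp each raw input into a 0..1 individual risk score.
    return {name: max(0.0, min(1.0, value)) for name, value in factors.items()}

def score_composite(individual: dict[str, float],
                    groups: dict[str, list[str]]) -> dict[str, float]:
    # Each composite score is the mean of its member individual scores.
    return {g: sum(individual[m] for m in members) / len(members)
            for g, members in groups.items()}

def overall_risk(composites: dict[str, float]) -> float:
    # Final composite function: take the worst (maximum) composite risk.
    return max(composites.values())

individual = score_individual({"patch_lag": 0.8, "open_ports": 0.4, "mfa": 0.1})
composites = score_composite(individual,
                             {"config": ["open_ports", "mfa"], "hygiene": ["patch_lag"]})
print(overall_risk(composites))  # 0.8
```

Any monotone aggregation (weighted sum, percentile, and so on) could be substituted at either level without changing the overall shape of the model.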
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
55.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR INCREASING RESILIENCE OF NETWORK TOPOLOGY HIDING ACROSS GEO-REDUNDANT SECURITY EDGE PROTECTION PROXIES (SEPPs)
A method for increasing resilience of network topology hiding across geo-redundant SEPPs includes subscribing, with an NRF, to receive notification of network topology updates of producer NFs configured to send inter-PLMN messages. The method includes receiving a notification including updated network topology information regarding one of the producer NFs. The method further includes generating, based on the updated network topology information, updated network topology hiding information for the producer NF. The method further includes receiving an inter-PLMN SBI request message requiring network topology recovery. The method further includes performing, using the updated network topology hiding information, the network topology recovery and forwarding the inter-PLMN SBI request message to the producer NF.
Techniques for facilitating a connection between a governing tenancy and a subject tenancy are disclosed. A configuration request is received by an intermediary from a service that is associated with a governing tenancy. The configuration request is a request to configure a feature associated with a subject tenancy. The intermediary determines whether any active tenancy link has been established between the governing tenancy and the subject tenancy. In response to confirming that an active tenancy link has been established between the governing tenancy and the subject tenancy, the intermediary a) issues a resource principal token that forms a basis for authorization for the service within the governing tenancy to initiate actions, associated with the feature, within the subject tenancy and b) responds to the configuration request with the resource principal token.
H04L 9/32 - Arrangements for secret or secure communications; network security protocols, including means for verifying the identity or authority of a user of the system
57.
ZERO TRUST PACKET ROUTING AGGREGATION AND LOG INGESTION
Techniques are described for visualizing enforcement of ZPR policy. A method includes aggregating log data associated with a flow of traffic within one or more networks; accessing rules associated with a policy that specifies how the flow of traffic is enforced between enforcement points within the one or more networks, wherein the policy includes one or more layer 4 rules and one or more layer 7 rules; determining, based on the rules associated with the policy, an enforcement of the flow of traffic; generating a visualization of the enforcement of the flow of traffic between different enforcement points; and presenting the visualization for display within a user interface.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
Systems and methods for page load timing with visible element detection are disclosed herein. In some embodiments, a method includes detecting an outgoing communication from a browser, detecting a change in one or more document object models (DOMs) visible to a user, automatically logging a start of a span based on the detection of both the outgoing communication and the change in the one or more DOMs, executing operations relating to the one or more DOMs, determining at least one of: attaining a calm state of the one or more DOMs; or a user interaction causing an additional change to one or more DOMs, and automatically logging an end of the span based upon the determining.
Disclosed herein are techniques related to load-aware selection of query execution databases. Techniques may include receiving a query to be executed on a database selected from a plurality of databases. Each database of the plurality of databases may have a different data layout. The techniques may also include generating, for the received query, one or more query features and one or more current load features. The one or more current load features may indicate availability of one or more computing resources for query execution on at least one database of the plurality of databases. Additionally, the techniques may include generating an output based on applying a model to the one or more generated query features. The techniques may further include selecting, from the plurality of databases and based on the output, a database for executing the query. Thus, the query may be executed on the selected database.
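The query-feature plus load-feature selection described above can be sketched as a toy cost model; the feature set and scoring function here are assumptions for illustration, not the model the abstract refers to.

```python
# Toy sketch of load-aware database selection: query features and current
# load features feed a scoring model that picks the execution database.

def query_features(query: str) -> dict[str, float]:
    # A single illustrative feature: does the query contain a join?
    return {"has_join": float("JOIN" in query.upper())}

def pick_database(query: str, current_load: dict[str, float]) -> str:
    qf = query_features(query)
    # Toy model: join-heavy queries penalize a busy database more strongly.
    def cost(db: str) -> float:
        return current_load[db] * (1.0 + qf["has_join"])
    return min(current_load, key=cost)

load = {"columnar": 0.9, "rowstore": 0.2}  # fraction of capacity in use
print(pick_database("SELECT * FROM t JOIN u ON t.id = u.id", load))  # rowstore
```

A learned model (e.g., a regressor trained on historical query latencies) would replace the hand-written cost function in practice.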
Herein is privacy for a smart contract that contains chaincode that sends chaincode events. In a configurable and backwards-compatible way, broadcast of a chaincode event can be restricted. Before committing a transaction to a blockchain, and without storing a newly generated private event into the transaction, the following are stored into the private event: an event payload, a hash of the payload, and, unlike the state of the art, an identifier of a subscriber or organization that can receive the private event. After the transaction is committed, it is asynchronously detected that the identifier of the subscriber is associated with the event and, responsively, the payload of the event is sent to the subscriber.
Techniques are disclosed herein for providing and using a natural language to logical form model having execution and semantic error correction capabilities. In one aspect, a method is disclosed that includes: accessing a set of training examples and generating a set of error correction training examples via an iterative process performed for each training example. The iterative process includes generating an inferred logical form, executing the inferred logical form on a database, and, when executing the inferred logical form on the database fails, obtaining an execution error message corresponding to the failure, recording the inferred logical form and the execution error message as part of an execution error example, and populating an error correction prompt template with the execution error example to generate an error correction training example. A machine learning model may then be trained with at least the set of error correction training examples.
Techniques are described for enforcing the flow of traffic between VNICs using ZPR policy. A method includes accessing a ZPR policy that specifies how a flow of traffic is enforced between endpoints within the one or more networks, wherein the policy includes one or more layer 4 rules and one or more layer 7 rules; identifying from the ZPR policy, a ZPR statement that specifies a connection between a first virtual network interface card (VNIC) and an endpoint; generating, based on the ZPR statement, one or more network security group (NSG) rules; and distributing at least one of the one or more NSG rules to a first NSG associated with the first VNIC.
A cloud infrastructure system is provided for storing cross-tenancy authorization policies for authorizing different users from different tenancies to have different levels of access to bastion functionality that impacts the different tenancies. Stored cross-tenancy authorization policies include, for a first tenancy, policies that authorize a first set of users for bastion service creation and a second set of users for bastion service access, and, for a second tenancy, policies that authorize the second set of users for bastion session creation and the first set of users for deleting a portion of the second tenancy from which bastion sessions may be accessed. Based on the policies, the system authorizes creation of a bastion service that is configured to use a recording destination that is not modifiable by the second user, and then uses the bastion service to create a bastion session for securely accessing resource(s) of the first tenancy. Bastion session activity for the bastion session is logged to the recording destination.
Techniques are described for enabling uninterrupted services by computing resources on nodes of a consistent hash ring (CHR) while adding or removing nodes (i.e., making changes) of the CHR. In some embodiments, a duplicate of the existing CHR (i.e., the old version) is created to become a new version for performing the changes. Two versions of consistent hash rings (CHRs) co-exist during the transition period of making changes. In some embodiments, computing resources on the nodes of these CHRs perform version upgrades and data migration while continuing to service client requests without interruption.
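The two-version transition described above can be sketched with a standard sorted-hash-point ring; this construction is a common consistent-hashing technique and not necessarily the patented variant, and the node names are illustrative.

```python
# Minimal sketch: a duplicate of the old ring becomes the new version with
# the added node, and keys whose owner differs between the two co-existing
# versions are exactly the keys that need migration.
import bisect
import hashlib

def _point(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes: list[str]):
        self.points = sorted((_point(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        # Walk clockwise to the first node point at or after the key's hash.
        i = bisect.bisect(self.points, (_point(key), ""))
        return self.points[i % len(self.points)][1]

old_ring = Ring(["node-a", "node-b"])
new_ring = Ring(["node-a", "node-b", "node-c"])  # old ring plus one node
moved = [k for k in ("k1", "k2", "k3")
         if old_ring.owner(k) != new_ring.owner(k)]
print(moved)  # only these keys need migration during the transition
```

While both versions co-exist, reads can consult the old ring until each differing key has been migrated to its new owner.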
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
G06F 16/907 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
65.
MULTI-LEVEL AUTHENTICATION FOR ACCESSING CLOUD RESOURCES
Techniques for multi-level authentication within a cloud environment are disclosed. A first authentication request is received by an identity and access management (IAM) service from a device associated with a user. The first request is to authenticate the user for accessing one or more cloud resources through a gateway of the cloud environment. Responsive to the first authentication request, a redirection is performed to submit a second authentication request to an identity provider (IdP) to authenticate the user. A first token, indicating a first identity for the user based on the first authentication, is received by the IAM service, where the first authentication was performed by the IdP. The IAM service performs a second authentication of the user. The IAM service issues a second token indicating a second identity for the user. The device gains access to the one or more cloud resources based on the second token.
G06F 21/40 - User authentication by quorum, i.e. whereby the intervention of at least two security controllers is required
Techniques are disclosed for transmitting data across a data diode of a cross-domain system using an encoding algorithm. A sender node of the cross-domain system can receive data for transmission across the cross-domain system. The data can include a first number of data segments. The sender node can generate a datagram using the data and according to the encoding algorithm. The datagram can include a second number of data segments greater than the first number of data segments. The sender node can transmit the datagram to a receiver node of the cross-domain system using a data diode. The receiver node can recover the data using at least a portion of the second number of data segments of the datagram.
H03M 13/15 - Cyclic codes, i.e. cyclic shifts of codewords producing other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
H03M 13/35 - Unequal or adaptive error protection, e.g. by providing a different level of protection according to the significance of the source information or by adapting the coding according to changes in the transmission channel characteristics
H03M 13/37 - Decoding methods or techniques, not specific to the particular type of coding provided for in groups
Described herein is database replication that, for consistency and security, is based on an innovative unidirectional gateway that modifies and annotates a sequence of human-readable database transaction files. On a communication network, the unidirectional gateway receives a batch of change entries in a database transaction extensible markup language (XML) file. The unidirectional gateway modifies one or more change entries. The unidirectional gateway generates and inserts, into the database transaction XML file, metadata that describes the modifications of the change entries. The modified database transaction XML file is converted into a binary file format for a downstream database and, before or after that conversion, downstream processing of the database transaction file is based on the metadata that describes the modifications of the change entries.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
68.
DETECTION OF UNACCOUNTED TENANCIES IN A CLOUD ENVIRONMENT
Techniques for detecting an unaccounted tenancy within a cloud environment are disclosed. The following are stored within a first database: (i) identifiers of a set of active cloud resources, and (ii) for each active cloud resource, an identifier of a corresponding tenancy. The following are stored within a second database: (i) identifiers of a set of tenancies, and (ii) for each tenancy within the set of tenancies, a corresponding tenancy status. Within the second database, each of a first subset of the set of tenancies has an active status, and each of a second subset of the set of tenancies has a terminated status. The first and second databases are queried, to identify a first active cloud resource within a first tenancy, such that the first tenancy has a terminated tenancy status. The first tenancy is tagged with an unaccounted tag, and mitigating actions are undertaken for the first tenancy.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06Q 30/0283 - Price estimation or determination
G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
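The cross-database check described above amounts to a join between the resource store and the tenancy store, flagging any active resource whose tenancy is already terminated. A minimal sketch with illustrative table and column names (assumptions, not from the disclosure):

```python
# Sketch of unaccounted-tenancy detection: query the first database
# (active resources with their tenancy) against the second database
# (tenancies with statuses) and tag hits for mitigation.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE resources (resource_id TEXT, tenancy_id TEXT);
    CREATE TABLE tenancies (tenancy_id TEXT, status TEXT);
    INSERT INTO resources VALUES ('vm-1', 't-100'), ('vm-2', 't-200');
    INSERT INTO tenancies VALUES ('t-100', 'active'), ('t-200', 'terminated');
""")

# An active resource inside a terminated tenancy is "unaccounted".
unaccounted = db.execute("""
    SELECT r.resource_id, r.tenancy_id
    FROM resources r
    JOIN tenancies t ON r.tenancy_id = t.tenancy_id
    WHERE t.status = 'terminated'
""").fetchall()

# Tag the offending tenancies so mitigating actions can be undertaken.
tags = {tenancy: "unaccounted" for _, tenancy in unaccounted}
assert unaccounted == [('vm-2', 't-200')]
assert tags == {'t-200': 'unaccounted'}
```

In practice the two stores would be separate services, so the "join" is performed by querying each and correlating on the tenancy identifier.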
Systems and methods are described for creating, managing, and transmuting containerized applications to build machine-neutral applications that can run on different machines with different processor architectures. An input container is received comprising a machine-neutral application layer, a metadata layer, or one or more machine-dependent layers. An instruction set architecture (ISA) is determined, and one or more modified input containers are generated based on the identified ISA. One or more machine-dependent layers are dynamically built and inserted in the one or more modified input containers. A dynamic-composition metadata layer is created in the modified input containers, wherein the dynamic-composition metadata includes execution instructions, environment variables, or machine-specific attributes. The matching machine-dependent layer of the one or more machine-dependent layers is selected and inserted using the dynamic-composition metadata in the modified input containers. One or more modified input containers that are ISA-agnostic are returned as the output result.
Techniques are disclosed for imaging computing components of a scalable footprint data center in a prefab factory. A host device in a host region data center can execute a region replicator. The region replicator can obtain configuration information for the plurality of computing devices for the scalable footprint data center. The configuration information can include connection information for a management controller of a computing device of the plurality of computing devices. The region replicator can configure, using the configuration information, the management controller to execute an imaging process on the computing device. The imaging process can be configured to perform an imaging operation for a storage device of the computing device. The region replicator can receive an indication that the imaging operation is complete.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
71.
STATIC RESOURCE IDENTIFIERS IN PRE-FABRICATED SCALABLE FOOTPRINT DATA CENTERS
Techniques are disclosed for providing static identifiers for software resources within a scalable footprint data center at various stages of the prefab process. During a region build process, a computing node in a region network can receive a static identifier for a software resource within the region network. The static identifier can include a static string corresponding to a region identifier for the region network. After the region network is configured in a scalable footprint data center, a service executing in the region network can receive a request that includes the static identifier. The service can determine whether the static identifier includes the static string. Based at least in part on a determination that the static identifier includes the static string, the service can obtain the region identifier for the region network from a datastore in the scalable footprint data center.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
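The static-identifier mechanism above can be sketched as a placeholder-substitution step: the identifier carries a fixed string during the prefab stages, and the service swaps in the real region identifier from its datastore once the region is configured. The placeholder string, datastore layout, and identifier format are illustrative assumptions:

```python
# Sketch of static-identifier resolution. The static string is a fixed
# placeholder baked in during the region build; the service detects it
# and resolves it against the region identifier in the local datastore.
STATIC_STRING = "REGION_PLACEHOLDER"
datastore = {"region_id": "us-phoenix-7"}   # set once the region is configured

def resolve(static_identifier: str) -> str:
    if STATIC_STRING in static_identifier:       # identifier is still static
        region = datastore["region_id"]          # obtain the real region id
        return static_identifier.replace(STATIC_STRING, region)
    return static_identifier                     # already concrete: pass through

assert resolve("dns.REGION_PLACEHOLDER.example") == "dns.us-phoenix-7.example"
assert resolve("dns.us-ashburn-1.example") == "dns.us-ashburn-1.example"
```

The benefit is that artifacts built in the prefab factory never need to be re-stamped per region; only the datastore entry changes.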
Techniques are disclosed for building a scalable footprint data center using a virtual bootstrap environment. A computing system can implement a virtual bootstrap environment in a host region data center. The computing system can deploy a first service in the virtual bootstrap environment. The first service can have a dependency on a second service. The computing system can then deploy an instance of the second service in the virtual bootstrap environment. The instance of the second service can be configured to receive service traffic from the first service after the instance of the second service is deployed in the virtual bootstrap environment. The first service can be configured to send the service traffic to a corresponding instance of the second service in the host region prior to the instance of the second service being deployed in the virtual bootstrap environment.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 3/06 - Digital input from, or digital output to, record carriers
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
Techniques for transmitting data within a cloud environment are disclosed. Data is received by a transmission service, which transmits the data to a signature and encryption (SE) service, along with an encryption key. Signed and encrypted data, which is received from the SE service, is (i) encrypted using the encryption key and (ii) signed by the SE service. In an example, the SE service maintains a log of the data. The signed and encrypted data is transmitted to an intermediate zone, to facilitate the intermediate zone to verify a signature of the signed and encrypted data, and allow passage of the signed and encrypted data to a reception service. The transmission and reception services are within a first tenancy and a second tenancy, respectively, of a cloud environment; and the intermediate zone is within one of the first tenancy or a third tenancy of the cloud environment.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
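The encrypt-sign-verify-pass pipeline above can be sketched end to end. HMAC stands in for the signature scheme and a trivial XOR keystream stands in for real encryption; both are illustrative simplifications, not the disclosed cryptography:

```python
# Simplified flow sketch: the SE service (i) encrypts with the supplied
# key and (ii) signs the result; the intermediate zone verifies the
# signature before allowing passage to the reception service.
import hmac, hashlib, itertools

SIGNING_KEY = b"se-service-signing-key"   # illustrative shared verification key

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (self-inverse) standing in for real encryption.
    return bytes(d ^ k for d, k in zip(data, itertools.cycle(key)))

def se_service(data: bytes, enc_key: bytes):
    ciphertext = xor_encrypt(data, enc_key)                            # (i)
    sig = hmac.new(SIGNING_KEY, ciphertext, hashlib.sha256).digest()   # (ii)
    return ciphertext, sig

def intermediate_zone(ciphertext: bytes, sig: bytes) -> bool:
    # Verify the signature; only verified payloads pass to reception.
    expected = hmac.new(SIGNING_KEY, ciphertext, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

ct, sig = se_service(b"tenant payload", b"enc-key")
assert intermediate_zone(ct, sig)               # verified: allowed to pass
assert not intermediate_zone(ct + b"x", sig)    # tampered: blocked
assert xor_encrypt(ct, b"enc-key") == b"tenant payload"  # reception decrypts
```

Note that the intermediate zone verifies the signature without ever holding the encryption key, so it can gate passage between tenancies without reading the data.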
74.
PREVENTION OF INFORMATION LEAKAGE THROUGH SIGNATURE
Techniques for preventing leakage of sensitive information through a signature are disclosed. Data is received, and the data is transmitted to a signature and encryption (SE) service, along with an encryption key. Signed and encrypted data is received from the SE service, where the signed and encrypted data is (i) encrypted using the encryption key and (ii) signed by the SE service. A verification is performed to verify that sensitive information (such as the encryption key) is not leaked through a side channel of a signature of the signed and encrypted data, such as by (i) determining a length of the side channel of the signature, and (ii) verifying that the length of the side channel of the signature does not exceed a threshold length. Responsive to verifying that the sensitive information is not leaked through the side channel of the signature, the signed and encrypted data is transmitted.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
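The length-based leak check above is simple to sketch: measure how many bytes of the signature fall outside the expected signature length, and refuse to transmit when that side channel is long enough to smuggle out a key. The threshold value and the channel layout are assumptions for illustration:

```python
# Sketch of the side-channel length check before transmission. Anything
# beyond the expected signature length is treated as side-channel bytes;
# if it exceeds the threshold, the payload is blocked.
MAX_SIDE_CHANNEL_BYTES = 16   # illustrative threshold length

def side_channel_ok(signature: bytes, expected_sig_len: int) -> bool:
    side_channel_len = len(signature) - expected_sig_len
    return 0 <= side_channel_len <= MAX_SIDE_CHANNEL_BYTES

sig = b"\x00" * 64                       # a bare 64-byte signature
assert side_channel_ok(sig, 64)          # no side channel: transmit
leaky = sig + b"A" * 32                  # 32 extra bytes could hide a key
assert not side_channel_ok(leaky, 64)    # exceeds threshold: block
```

The check is deliberately structural rather than cryptographic: it bounds how much extra information the signature encoding could carry, regardless of content.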
75.
RESOURCE RECONFIGURATION IN PRE-FABRICATED SCALABLE FOOTPRINT DATA CENTERS
Techniques are disclosed for reconfiguring resources (e.g., software resources) within the scalable footprint data center at various stages during the prefab process. A service executing on a computing device of a scalable footprint data center can receive, from a reconfiguration service, stage information for a reconfiguration process of services in the scalable footprint data center. The stage information can include a status of the reconfiguration process. The service can execute a reconfiguration operation of the reconfiguration process to update a service resource of the service. Responsive to executing the reconfiguration operation, the service can send to the reconfiguration service a status of the reconfiguration operation.
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
G06F 3/06 - Digital input from, or digital output to, record carriers
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
76.
UPDATING SERVICES IN PRE-FABRICATED SCALABLE FOOTPRINT DATA CENTERS
Techniques are disclosed for updating services within the scalable footprint data center during the prefab process. A first cumulative upgrade can be deployed to a first service executing on a computing device for a scalable footprint data center. The first cumulative upgrade can include a first update to a software resource of the first service. The first update to the software resource can produce an updated software resource. A second cumulative upgrade can be deployed to a second service executing on another computing device for the scalable footprint data center. The second service can have a dependency on the first service. The second cumulative upgrade can be deployed in parallel with the first cumulative upgrade.
G06F 8/658 - Incremental updates; Differential updates
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 3/06 - Digital input from, or digital output to, record carriers
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
Techniques are disclosed for enabling cross-realm communications using a virtual bootstrap environment. A computing system can deploy a cross-realm proxy service in a virtual bootstrap environment. The cross-realm proxy service can receive, from a first service in a target region data center, a first request that includes a first credential of the first service. The first credential can be associated with a first namespace of the target region data center. The cross-realm proxy service can authenticate the first credential and, based at least in part on the authentication of the first credential, send a second request to a second service in a host region data center. The second request can include a second credential of the cross-realm proxy service. The second credential can be associated with a second namespace of the host region data center.
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
78.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING END-TO-END MESSAGE INTEGRITY CHECKING FOR SERVICE-BASED INTERFACE (SBI) MESSAGES COMMUNICATED VIA A SERVICE COMMUNICATION PROXY (SCP)
A method for checking end-to-end SBI message integrity includes receiving a first traffic feed from a first NF, the first traffic feed including copies of SBI messages transmitted from the first NF to an SCP. The method further includes receiving a second traffic feed from a second NF, the second traffic feed including copies of SBI messages received by the second NF from the SCP. The method further includes identifying, from the first traffic feed, a copy of a first SBI message transmitted by the first NF to the SCP. The method further includes identifying, from the second traffic feed, a copy of a second SBI message received by the second NF from the SCP and that is associated with the copy of the first SBI message. The method further includes performing, using the message copies, an end-to-end SBI message integrity check for the first SBI message.
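The end-to-end check above pairs a message copy from the producer-side feed with its counterpart from the consumer-side feed and compares their payloads. A minimal sketch, assuming a hypothetical correlation identifier for matching copies (the disclosure does not name the matching key):

```python
# Sketch of the SBI end-to-end integrity check: correlate copies from the
# two traffic feeds and compare payload hashes; a mismatch means the
# message was modified in transit through the SCP, absence means it was
# dropped.
import hashlib

def integrity_check(sent_feed, received_feed):
    received = {m["correlation_id"]: m for m in received_feed}
    results = {}
    for msg in sent_feed:
        peer = received.get(msg["correlation_id"])
        if peer is None:
            results[msg["correlation_id"]] = "missing"
            continue
        same = (hashlib.sha256(msg["payload"]).digest()
                == hashlib.sha256(peer["payload"]).digest())
        results[msg["correlation_id"]] = "intact" if same else "modified"
    return results

sent = [{"correlation_id": "c1", "payload": b'{"imsi":"001"}'},
        {"correlation_id": "c2", "payload": b'{"imsi":"002"}'}]
recv = [{"correlation_id": "c1", "payload": b'{"imsi":"001"}'},
        {"correlation_id": "c2", "payload": b'{"imsi":"999"}'}]
assert integrity_check(sent, recv) == {"c1": "intact", "c2": "modified"}
```

Hashing rather than storing full payloads keeps the checker's state small when the feeds carry high message volumes.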
Techniques for establishing trust between entities in a cross-domain solution (CDS) are disclosed. In some embodiments, a high-side entity in a CDS transmits, to an intermediate entity in the CDS, a first version of a control message that comprises (a) a first public key associated with the high-side entity and (b) a first signature generated using a first private key associated with the high-side entity. The intermediate entity validates the first signature using the first public key. Responsive to validating the first signature, the intermediate entity generates a second version of the control message that comprises a second signature generated using a second private key associated with the intermediate entity. The intermediate entity transmits the second version of the control message to a low-side entity in the CDS.
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Embodiments described herein are generally related to data analytics, and computer-based methods of providing business intelligence data, and are particularly related to systems and methods for evaluation, implementation, and refinement of key performance indicators, dashboards, or scorecards, for use in analytics-based decision-making. In accordance with an embodiment, a data analytics environment can join several data sets, including an area of responsibility data, in order to determine one or more representatives responsible for particular organization units, during particular periods of time; and identify key measures or metrics under the purview of, or otherwise associated with those representatives, for use in generating a key performance indicator scorecard reflecting such relationships.
Techniques are described for securing data stored on a non-volatile storage medium from unauthorized access using improved network-bound data security techniques. The data is secured using network-bound security techniques without the entities involved in the processing (e.g., clients and servers) having to exchange any client-specific or server-specific keys, secrets, or other secret data with each other. The techniques disclosed herein provide the network-bound data security functionality using a sequence of Message Authentication Codes (MACs) generated using Hash-based Message Authentication Code (HMAC) generation techniques.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
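The HMAC-sequence idea above can be sketched with Python's stdlib: the client sends only a nonce, the server answers with an HMAC over it using a secret that never leaves the server, and the client chains a further HMAC to derive the data-wrapping key. The chaining order and labels are illustrative assumptions, not the patented construction:

```python
# Sketch of network-bound key derivation via a sequence of HMACs.
# Neither side hands its secret to the other, yet the wrapping key can
# only be re-derived while the server is reachable on the network.
import hmac, hashlib

def hmac_step(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

client_nonce = b"client-nonce"
server_secret = b"server-only-secret"     # never leaves the server

# Client -> server: the nonce. Server -> client: HMAC(secret, nonce).
server_reply = hmac_step(server_secret, client_nonce)

# Client chains one more HMAC locally, binding the key to this medium.
wrap_key = hmac_step(server_reply, b"storage-medium-id")

# Re-derivation later (same nonce, same reachable server) is identical,
# so data sealed under wrap_key opens only on this network.
assert wrap_key == hmac_step(hmac_step(server_secret, client_nonce),
                             b"storage-medium-id")
```

Because only HMAC outputs cross the wire, an attacker who steals the storage medium but cannot reach the server cannot reconstruct the wrapping key.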
A method for implementing a proof-of-work challenge for transmission of data to a non-authenticated gateway is disclosed. The method includes receiving, by the gateway and from a device, a challenge request; and transmitting a proof-of-work challenge to the device. The method further includes receiving, from the device, a solution to the challenge, wherein the solution to the challenge accompanies data. The method further includes verifying a validity of the solution to the challenge; and storing and/or processing the data, responsive at least in part to the solution being valid for the challenge. In an example, the solution to the challenge is to be derived by the device, without an intervention by a user of the device. In an example, the challenge request and the solution to the challenge are received from a library that is packaged with a mobile application being executed within the device.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
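The challenge/solution round above follows the standard proof-of-work pattern: the gateway issues a challenge with a difficulty target, and the device searches for a nonce whose hash meets it, without user intervention. The difficulty level and hash encoding below are illustrative choices:

```python
# Sketch of the proof-of-work exchange: the device solves, the gateway
# verifies cheaply, and data accompanying a valid solution is accepted.
import hashlib

DIFFICULTY = 2  # required leading zero bytes (toy value; tune in practice)

def solve(challenge: bytes, data: bytes) -> int:
    # Derived by the device, no user intervention: brute-force a nonce.
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + data + str(nonce).encode()).digest()
        if digest[:DIFFICULTY] == b"\x00" * DIFFICULTY:
            return nonce
        nonce += 1

def verify(challenge: bytes, data: bytes, nonce: int) -> bool:
    # Verification is a single hash, far cheaper than solving.
    digest = hashlib.sha256(challenge + data + str(nonce).encode()).digest()
    return digest[:DIFFICULTY] == b"\x00" * DIFFICULTY

challenge, data = b"gateway-challenge", b"telemetry-payload"
nonce = solve(challenge, data)
assert verify(challenge, data, nonce)   # gateway stores/processes the data
```

The asymmetry (expensive to solve, one hash to verify) is what lets a non-authenticated gateway throttle abusive senders without maintaining accounts.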
83.
ARCHITECTURE AND COMPUTING ENVIRONMENT FOR ISOLATED AND CONTROLLED CODE REVIEW
Cloud computing architecture is described for implementing test modules in a communication-controlled cloud environment with access to private data. The test modules perform synchronous tests on the private data and export test results to an analytic environment subject to data export policies. An analytic application is used to asynchronously analyze the test results in the analytic environment. The cloud computing architecture alternatively or additionally includes an interface for deploying investigation-bound cloud environments in restricted subnets. A collection of software is instantiated in the investigation-bound cloud environment, and the investigation-bound cloud environment may be accessed with remote access credentials using a remote access protocol for testing the collection of software. Information about the investigation-bound cloud environment is displayed in the analytic application, and the analytic application and the restricted subnet are forcibly deleted when the investigation is complete.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 11/3604 - Software analysis for verifying properties of programs
G06F 11/3698 - Environments for analysis, debugging or testing of software
Cloud computing architecture is described for implementing test modules in a communication-controlled cloud environment with access to private data. The test modules perform synchronous tests on the private data and export test results to an analytic environment subject to data export policies. An analytic application is used to asynchronously analyze the test results in the analytic environment. The cloud computing architecture alternatively or additionally includes an interface for deploying investigation-bound cloud environments in restricted subnets. A collection of software is instantiated in the investigation-bound cloud environment, and the investigation-bound cloud environment may be accessed with remote access credentials using a remote access protocol for testing the collection of software. Information about the investigation-bound cloud environment is displayed in the analytic application, and the analytic application and the restricted subnet are forcibly deleted when the investigation is complete.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A switch included in a compute fabric receives an authentication request message from a GPU associated with a customer. The switch transmits the authentication request message to an authentication server. Responsive to the GPU associated with the customer being successfully authenticated, the switch receives an authentication response message including metadata associated with the customer. The switch configures an address for the GPU associated with the customer by: (i) configuring a first portion of the address prior to receiving the authentication request message, and (ii) configuring a second portion of the address based on the authentication response message. The switch transmits the address including the first portion of the address and the second portion of the address to the GPU associated with the customer.
A method includes: accessing text in which spans are identified, the spans including one or more pairs of target spans and one or more mid-context spans; generating embedding representations of tokens associated with each target span, of tokens associated with the entity types of each target span, and of tokens associated with each mid-context span; generating, for each target span, an entity-focused span embedding representation based on the embedding representations of the tokens associated with the target span and of the tokens associated with the entity type of the target span; generating, for each mid-context span, a mid-context embedding representation based on the embedding representations of the tokens associated with the mid-context span; and generating a probability distribution over each relation of a set of relations based on the entity-focused span embedding representations of the subject span and object span included in each target pair and the mid-context embedding representation for the mid-context span appearing between the subject span and object span.
Techniques are disclosed for converting a dataset from an optimized format to a target format. The system determines a hierarchy of types that are included within the dataset. Based on the hierarchy of types, the system identifies occurrences of the same method in the dataset. The same name is assigned to occurrences of the same method. Optimized opcodes included in the bytecode instructions of methods are translated into target opcodes. For each method, the system simulates executing the target opcodes that replace the optimized opcodes in the method to determine if the target opcodes affect the configuration of local variables differently than the optimized opcodes. Based on simulating the execution of the target opcodes, the system alters local variable references in the method to reflect the differing configurations of local variables that result from replacing the optimized opcodes with the target opcodes.
A central text repository may maintain, track, update, and modify text centrally that may then be distributed to applications to be used at runtime. The central text repository allows anyone involved in the software design process or lifecycle to edit, update, and/or correct text strings that are used in various applications. This allows updates to be rapidly pushed out to runtime applications without requiring the code bases of those applications to be accessed at all. Instead, a change may be made centrally, and new resource bundles of text strings may be made available for runtime download and usage by these applications. This effectively separates the storage and maintenance of text strings from the underlying applications. Hierarchies and modifiers may be used to override and inherit different text usages, languages, and so forth.
Techniques are disclosed for enforcing isolation in a cluster of computing nodes configured for executing containerized applications. The system receives a request from a requesting entity for access to a target namespace. The request is accompanied by a token. Based on the token, the system identifies a namespace that corresponds to an isolation namespace. To determine if the request is attempting to breach isolation, the system compares the target namespace to the corresponding namespace. If the target namespace is not the corresponding namespace, the system concludes that the request is attempting to breach isolation, and, therefore, denies the request. If the request is not attempting to breach isolation, the system determines if the request is allowed by any permissions that have been granted to the requesting entity. If the request is not allowed by a permission granted to the requesting entity, the system denies the request.
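The two-step gate described above (isolation check first, then ordinary permission check) can be sketched directly. The token layout, namespace names, and permission verbs are illustrative assumptions:

```python
# Sketch of isolation enforcement: a request whose token-bound namespace
# differs from the target namespace is denied as a breach attempt before
# any permission lookup; otherwise normal grants are consulted.
token_namespaces = {"tok-a": "tenant-a", "tok-b": "tenant-b"}
grants = {("tok-a", "tenant-a"): {"read"}}

def authorize(token: str, target_ns: str, verb: str) -> bool:
    # Step 1: isolation check against the namespace bound to the token.
    if token_namespaces.get(token) != target_ns:
        return False                    # breach attempt: deny outright
    # Step 2: ordinary permission check.
    return verb in grants.get((token, target_ns), set())

assert authorize("tok-a", "tenant-a", "read")       # same namespace + grant
assert not authorize("tok-a", "tenant-b", "read")   # breach attempt denied
assert not authorize("tok-a", "tenant-a", "write")  # no permission granted
```

Ordering matters: doing the isolation comparison before the permission lookup means a cross-namespace request is rejected even if a misconfigured grant would otherwise allow it.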
For database high availability and for accelerated recovery of a failed replica of a database, a storage computer is dynamically allocated and temporarily persists database content modifications until the database replica is ready to receive the modifications. The storage computer is not allocated storage that stores the database. The storage computer persists a recent portion of the database and later receives a request to synchronize the recovering replica. During recovery, the storage computer responsively sends the portion of the database to the recovering replica. For acceleration, recovery herein does not entail content interpretation such as replay of a redo log. For horizontally scaled acceleration involving two distinct storage computers per recovering replica, multiple replicas are concurrently recovered by respective storage computers that each receives recovered database content only from a respective distinct other storage computer.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
91.
SYSTEM AND METHOD FOR MANAGING SECURE SHELL PROTOCOL ACCESS IN CLOUD INFRASTRUCTURE ENVIRONMENTS
Techniques for creating, managing, and using SSH certificates with one or more target-specific principals are disclosed. A certificate authority receives a certificate signing request that includes both a user identifier and a resource identifier. The user identifier identifies a user, and the resource identifier represents one or more target hosts. The certificate authority forms a target-specific principal for use in creating the certificate. The target-specific principal indicates both the user and the resource identifier representing the resource(s) for which access is requested. The resource identifier may represent a host class associated with more than one host. Once the certificate authority verifies that the user is entitled to access the requested resource(s), it generates the certificate, signs it, and returns it to the requesting device.
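Forming the target-specific principal amounts to binding the user identifier and resource identifier into a single certificate principal that hosts can match against. The separator, field names, and host-class syntax below are illustrative assumptions:

```python
# Sketch of target-specific principal formation and host-side matching:
# the principal encodes both WHO (user) and WHERE (resource or host
# class), so a certificate is useless against hosts outside its target.
def target_specific_principal(user_id: str, resource_id: str) -> str:
    return f"{user_id}@{resource_id}"     # bind user AND target together

def host_accepts(cert_principals, user: str, host_class: str) -> bool:
    # A host admits the user only if the combined principal is present.
    return target_specific_principal(user, host_class) in cert_principals

csr = {"user": "alice", "resource": "hostclass:db-fleet"}
principal = target_specific_principal(csr["user"], csr["resource"])
assert principal == "alice@hostclass:db-fleet"
assert host_accepts([principal], "alice", "hostclass:db-fleet")
assert not host_accepts([principal], "alice", "hostclass:web-fleet")
```

Compared with a bare user principal, the combined form means a leaked certificate cannot be replayed against hosts in a different class.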
Techniques are described for processing packets and enforcing network policies/rules across different network layers. Instead of having to create rules and polices for each of the different network layers and manually specifying where and what devices should enforce the rules/polices, techniques described herein are directed at allowing users to create a simple policy that integrates the different network layers. In some examples, the different network layers are defined by the Open Systems Interconnection (OSI) Model.
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 47/20 - Flow control; Congestion control; Traffic policing
Embodiments determine demand transference for an item assortment of a retailer. Embodiments receive historical sales data for a category of items corresponding to the retailer and receive hierarchy data for the category of items corresponding to the retailer. Based on the historical sales data and the hierarchy data, embodiments estimate first variables of a multinomial logit ("MNL") model. Based on the historical sales data and the hierarchy data, embodiments estimate second variables of a log linear retail sales model.
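The demand-transference computation behind an MNL model can be illustrated compactly: estimated item utilities yield softmax choice shares, and delisting an item transfers its share to the remaining items in proportion to their own shares. The utility values below are illustrative, not estimated from data:

```python
# Sketch of the multinomial logit (MNL) share step used in demand
# transference: P(i) = exp(v_i) / sum_j exp(v_j). Removing an item from
# the assortment redistributes its demand across the survivors.
import math

def mnl_shares(utilities: dict) -> dict:
    expu = {item: math.exp(u) for item, u in utilities.items()}
    total = sum(expu.values())
    return {item: e / total for item, e in expu.items()}

full = mnl_shares({"A": 1.0, "B": 1.0, "C": 0.0})
reduced = mnl_shares({"A": 1.0, "B": 1.0})      # item C delisted

# C's demand transfers to A and B; shares always sum to one.
assert abs(sum(full.values()) - 1.0) < 1e-9
assert reduced["A"] > full["A"] and reduced["B"] > full["B"]
assert abs(reduced["A"] - 0.5) < 1e-9
```

The hierarchy data mentioned in the abstract would enter through the utility estimates (the "first variables"); this sketch only shows how fitted utilities turn into transference.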
Techniques described herein include receiving, by a first cloud environment and from a second cloud environment, a request to provision a cloud service from among a plurality of cloud services provided by a cloud service provider associated with the first cloud environment. The techniques further include, performing a set of operations associated with provisioning the cloud service in the second cloud environment, wherein at least one operation of the set of operations comprises identifying one or more resource locations of a plurality of private clouds of the first cloud environment for executing the cloud service. The techniques further include, provisioning the cloud service in the plurality of private clouds, wherein the provisioning enables data pertaining to the cloud service to flow between a resource location of a first private cloud and a resource location of one or more second private clouds of the plurality of private clouds.
A system for controlling access to data. The system includes an electronic processor configured to receive, from a first computing device, a first resource request using a uniform resource locator (URL) and identify a first data record, from a plurality of data records, corresponding to the URL. The electronic processor is also configured to identify, from the first data record, a first resource and a first user and verify access rights of the first user identified from the first data record to the first resource identified from the first data record. The electronic processor is further configured to, in response to verifying the access rights of the first user to the first resource, execute a first query identified from the first data record on the first resource to generate a first set of query results and transmit, to the first computing device, the first set of query results.
G06F 16/955 - Retrieval in the Web using information identifiers, e.g. uniform resource locators [URL]
A unified security agent manager plugin within a virtual machine compute instance manages at least one agent installed within the compute instance of a cloud environment. The plugin periodically receives agent inventory information, where the agent inventory information identifies (i) a plurality of platform types of the agent, and (ii) for each platform type, one or more deployable versions of the agent. The plugin selects a platform type from the plurality of platform types. The plugin compares the one or more deployable versions corresponding to the selected platform type with a version of the agent currently installed in the compute instance. If the version currently installed in the compute instance is older than the one or more deployable versions, the plugin fetches an agent object corresponding to a deployable version from an object storage repository, and updates the version currently installed in the compute instance to the fetched deployable version.
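The plugin's update decision above reduces to a version comparison against the inventory entry for the instance's platform type. The inventory layout and version syntax below are illustrative assumptions:

```python
# Sketch of the agent-update check: select the platform type, compare the
# installed version against the deployable versions, and name the agent
# object to fetch when the installed copy is older.
def parse(v: str):
    # "1.5.2" -> (1, 5, 2) so tuples compare numerically, not lexically.
    return tuple(int(p) for p in v.split("."))

def pick_update(installed: str, inventory: dict, platform: str):
    deployable = inventory.get(platform, [])
    newest = max(deployable, key=parse, default=None)
    if newest is not None and parse(installed) < parse(newest):
        return newest        # plugin fetches this version from object storage
    return None              # already current: nothing to do

inventory = {"linux-x86_64": ["1.4.0", "1.5.2"], "windows": ["1.5.0"]}
assert pick_update("1.4.0", inventory, "linux-x86_64") == "1.5.2"
assert pick_update("1.5.2", inventory, "linux-x86_64") is None
```

Parsing into integer tuples avoids the classic string-comparison bug where "1.10.0" sorts before "1.9.0".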
A method for issuing one or more certificates to a substrate instance of a cloud environment is disclosed. The method includes performing a first fetch to obtain one or more of: (i) an identifier of a compartment that includes the substrate instance, or (ii) an identifier of the substrate instance. The method further includes performing a second fetch to obtain an identifier of a tenancy that includes the substrate instance, based at least in part on one or more of: (i) the identifier of the compartment identified from the first fetch, or (ii) the identifier of the substrate instance identified from the first fetch. The method further includes issuing a principal certificate to the substrate instance, the principal certificate including the identifier of the tenancy that includes the substrate instance.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
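The two-fetch resolution chain in the certificate-issuance method above can be sketched as follows. The lookup tables and identifier values are hypothetical placeholders for the control-plane fetches; the certificate here is a plain record, not a real X.509 artifact.

```python
from dataclasses import dataclass

@dataclass
class PrincipalCertificate:
    instance_id: str
    tenancy_id: str  # the certificate embeds the tenancy that includes the instance

# Hypothetical stand-ins for the two control-plane fetches.
INSTANCE_TO_COMPARTMENT = {"inst-1": "comp-9"}
COMPARTMENT_TO_TENANCY = {"comp-9": "tenancy-42"}

def issue_principal_certificate(instance_id: str) -> PrincipalCertificate:
    # First fetch: the identifier of the compartment that includes the instance.
    compartment_id = INSTANCE_TO_COMPARTMENT[instance_id]
    # Second fetch: the tenancy identifier, resolved from the first fetch's result.
    tenancy_id = COMPARTMENT_TO_TENANCY[compartment_id]
    # Issue a principal certificate that includes the tenancy identifier.
    return PrincipalCertificate(instance_id=instance_id, tenancy_id=tenancy_id)
```

The point of the chain is that the substrate instance need not know its tenancy up front; the tenancy is derived from the compartment (or instance) identifier obtained in the first fetch.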
Techniques discussed herein relate to image-based operational health detection, which may involve obtaining one or more images depicting an aerial view of a physical structure. The method may include obtaining one or more attributes associated with the physical structure and identifying a surface temperature corresponding to a portion of the physical structure from the one or more images depicting the aerial view of the physical structure. The method may include determining that a temperature control system associated with the physical structure is likely operating at a reduced capacity based at least in part on the surface temperature and the one or more attributes associated with the physical structure. One or more operations may be executed based at least in part on determining that the temperature control system associated with the physical structure is likely operating at the reduced capacity.
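The core determination can be sketched as a simple check of the observed surface temperature against what the structure's attributes predict. The threshold scheme, attribute key, and function name below are illustrative assumptions, not taken from the source.

```python
def likely_reduced_capacity(surface_temp_c: float, attrs: dict, ambient_c: float) -> bool:
    """Flag a temperature control system as likely operating at reduced
    capacity when the roof-surface temperature deviates from ambient by
    more than the structure's attributes predict (assumed heuristic)."""
    # Assumed attribute: how far from ambient a healthy system keeps the
    # surface over it, for this roof material and structure type.
    max_delta_c = attrs.get("expected_delta_c", 8.0)
    return abs(surface_temp_c - ambient_c) > max_delta_c
```

A real system would derive the surface temperature from thermal imagery and the expected delta from a model over the structure's attributes; this sketch only shows the shape of the final comparison.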
Comparison and sorting of data stored according to a flexible schema data type is disclosed. A data object is accessed, the data object stored according to a set of one or more datatypes, wherein the set comprises a flexible schema datatype and data in the data object pertains to a plurality of domains. Data within the data object is translated to a sortable intermediate format that is configured to allow local ordering among elements in the respective plurality of domains while allowing global ordering among the plurality of domains based on a pre-selected convention. The translated data is stored in a storage system accessible by a database management system (DBMS). Structured query language (SQL) operations are performed on the stored translated data.
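The sortable intermediate format can be illustrated with a two-part key: a domain tag that fixes the global ordering across domains, paired with the value itself for local ordering within a domain. The particular domain convention below (null < boolean < number < string) and the `sort_key` name are assumptions for the sketch, not the patent's encoding.

```python
# Domain tags chosen so that tuple comparison yields the assumed global
# convention: null < boolean < number < string. int and float share a tag
# so numbers order locally by value regardless of representation.
DOMAIN_ORDER = {type(None): 0, bool: 1, int: 2, float: 2, str: 3}

def sort_key(value):
    """Translate a flexibly-typed value into an intermediate key that allows
    local ordering within its domain and global ordering across domains."""
    tag = DOMAIN_ORDER[type(value)]
    if value is None:
        return (tag, 0)  # nulls have no local order; use a constant
    return (tag, value)

values = ["apple", 3, None, True, 2.5, "banana", False]
ordered = sorted(values, key=sort_key)
# nulls sort first, then booleans, then numbers, then strings
```

Because tuples compare element-by-element, values from different domains are ordered by tag alone and never compared directly, which is what lets a single SQL `ORDER BY` (or comparison predicate) operate over the translated data despite the flexible schema.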
Techniques are disclosed for scaling a reduced footprint data center. A networking device can be implemented at the reduced footprint data center and can include a plurality of networking ports. The networking device can be connected to a plurality of reduced footprint server racks using a first networking port of the plurality of networking ports. The plurality of reduced footprint server racks can be connected in a ring network and host a cloud service. An additional server rack can be connected to the networking device using a second networking port of the plurality of networking ports. A computing device of the additional server rack can be provisioned to host a portion of a data plane of the cloud service.
H04B 10/00 - Transmission systems employing electromagnetic waves other than radio waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
H04L 65/00 - Network arrangements, protocols or services in data packet communication networks for supporting real-time applications