A system is disclosed that includes capabilities by which a nested sub-resource residing in a service tenancy can access a customer-owned resource residing in a customer tenancy without the use of a cross-tenant policy. The disclosed system provides the ability for a nested sub-resource residing in a service tenancy to obtain the resource principal identity of a higher-level resource residing in the customer tenancy and use the identity of the higher-level resource to access a customer-owned resource residing in the customer tenancy. Using the resource principal identity of its higher-level resource, the sub-resource can access a customer-owned resource that resides in a customer tenancy in a seamless way without having to write a cross-tenancy policy statement that provides permission to the sub-resource to access the customer-owned resource.
Techniques for managing session lifecycles through custom scripts in a network environment are provided. In one technique, a container of a virtual machine receives a termination signal that is associated with a command to delete or deactivate a session of the container. In response, and prior to terminating the session, the container identifies and executes a script that is associated with the command. After the script completes executing, the session is deleted or deactivated. In another technique, a cloud system receives reference data that identifies a storage location of a script. A virtual machine is created in the cloud system. Based on the reference data, the script is downloaded from the storage location into storage that is local to the virtual machine. The script is executed and a session within a container of the virtual machine is initiated.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
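The pre-termination flow described in the abstract above can be sketched as follows. This is a minimal illustration of the control flow only; the function and field names are hypothetical and not taken from the patent.

```python
# Sketch of the lifecycle hook: on a delete/deactivate command, the script
# associated with that command runs before the session is torn down.

def handle_lifecycle_command(command, session, scripts):
    """Run the script associated with `command`, then apply the command to the session."""
    script = scripts.get(command)          # identify the script for this command
    if script is not None:
        script(session)                    # execute it prior to terminating the session
    if command == "delete":
        session["state"] = "deleted"
    elif command == "deactivate":
        session["state"] = "inactive"
    return session

if __name__ == "__main__":
    log = []
    session = {"id": "s1", "state": "active"}
    scripts = {"delete": lambda s: log.append("cleanup:" + s["id"])}
    handle_lifecycle_command("delete", session, scripts)
    print(session["state"], log)   # deleted ['cleanup:s1']
```

The ordering is the point of the technique: the custom script observes the session while it still exists, and only afterwards is the session deleted or deactivated.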
3.
SYSTEM AND METHOD FOR CHAT-TO-VISUALIZATION USER INTERFACE FOR USE WITH A DATA ANALYTICS WORKBOOK ASSISTANT
In accordance with an embodiment, described herein is a system and method for providing a chat-to-visualization user interface for use with a data analytics workbook assistant. A data analytics system or environment can be integrated with a digital assistant system or environment which provides natural language processing, for purposes of leveraging a user's text or speech input while generating, modifying, or interacting with data visualizations. The user can interact with the system using a chat-like conversation. Based upon a received input from the user as part of the conversation, the system can generate data comprising a resolved intent and entities, and locate an appropriate dataset. The system supports complex follow-up interactions or questions that pertain to previous responses combined with the curated data. The user can use modifiers to further enhance their questioning or analysis of the data, and incorporate resulting insights into their visualization project.
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
The present disclosure relates to utilizing large language models (LLMs) to facilitate generation of incident reports or similar documents. One or more initial inputs may be received from a user, and one or more example incident reports may be identified. The one or more example incident reports and the one or more initial inputs may be sent to an LLM. A reviewable version of an incident report may be accessed that is based on output that the LLM generated based on the example incident reports and the one or more initial inputs. The reviewable version of the incident report may be presented in a human readable format via a graphical user interface (GUI). A modification corresponding to the reviewable version of the incident report may be received via the GUI. The modification and the reviewable version of the incident report may be sent to the LLM to cause the LLM to generate an updated version of the incident report.
The present disclosure relates to LLM orchestration with vector store generation. An embeddings model may be selected to generate an embedding for a digital artifact. Metadata for the digital artifact may also be generated and stored in a vector store in association with the embedding. A user query may be received and categorized. One of a plurality of machine learning models may be selected based on the categorization of the user query. A prompt may be generated based at least in part on the user query, and the selected machine learning model may generate a response to the user query based at least in part on the prompt.
A system and computer-implemented method include accessing a request for allocating graphical processing unit (GPU) resources for performing an operation. The request includes metadata identifying a client identifier associated with a client, a throughput, and a latency of the operation. A predicted resource limit for performing the operation is determined based on the metadata. A parameter of the GPU resources is obtained. The parameter includes a status indicating whether a GPU resource is occupied for performing another operation. A GPU resource utilization value is determined for each node based on the status. The GPU resource utilization value indicates the amount of utilization of the GPU resources of the corresponding node. The GPU resource utilization value of each node is compared with a pre-defined resource utilization threshold value. The GPU resources are re-scheduled based on the predicted resource limit. Further, a set of GPU resources is selected from the re-scheduled GPU resources for performing the operation.
A system and computer-implemented method include receiving a request for allocating graphical processing unit (GPU) resources for performing an operation. The request includes metadata identifying a client identifier (ID) associated with a client, throughput, and latency of the operation. A resource limit is determined for performing the operation based on the metadata. Attributes associated with each GPU resource of a plurality of GPU resources available for assignment are obtained. The attribute is analyzed that is associated with each GPU resource with respect to the resource limit. A set of GPU resources is identified from the plurality of GPU resources based on the analysis. A dedicated AI cluster is generated by patching the set of GPU resources within a single cluster. The dedicated AI cluster reserves a portion of a computation capacity of a computing system for a period of time and the dedicated AI cluster is allocated to the client associated with the client ID.
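The selection step common to the two abstracts above can be sketched as a simple eligibility filter: given a resource limit derived from the request metadata, pick unoccupied GPUs whose utilization sits below a pre-defined threshold. The field names and the least-loaded-first ordering are assumptions for illustration.

```python
def select_gpu_set(resource_limit, gpus, threshold=0.8):
    """Return ids of `resource_limit` eligible GPUs, least-loaded first,
    or None if the pool cannot satisfy the request."""
    chosen = []
    for gpu in sorted(gpus, key=lambda g: g["utilization"]):
        if gpu["occupied"] or gpu["utilization"] >= threshold:
            continue  # skip GPUs serving another operation or above threshold
        chosen.append(gpu["id"])
        if len(chosen) == resource_limit:
            return chosen
    return None
```

A dedicated AI cluster, in the second abstract's terms, would then be formed by patching the returned set of GPUs into a single cluster reserved for the client.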
Adaptive data collections may include various types of data arrays, sets, bags, maps, and other data structures. A simple interface for each adaptive collection may provide access via a unified API to adaptive implementations of the collection. A single adaptive data collection may include multiple, different adaptive implementations. A system configured to implement adaptive data collections may include the ability to adaptively select between various implementations, either manually or automatically, and to map a given workload to differing hardware configurations. Additionally, hardware resource needs of different configurations may be predicted from a small number of workload measurements. Adaptive data collections may provide language interoperability, such as by leveraging runtime compilation to build adaptive data collections and to compile and optimize implementation code and user code together. Adaptive data collections may also provide language independence, such that implementation code may be written once and subsequently used from multiple programming languages.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
A processor may implement position-independent memory addressing by providing load and store instructions that include position-independent addressing modes. A memory address may contain a normalized pointer, where the memory address stores a normalized memory address that, when added to an offset previously determined for the memory address, defines another memory address. The position-independent addressing mode may also support invalid memory addresses using a reserved value, where a load instruction providing the position-independent addressing mode may return a NULL value or generate an exception when determining that the stored normalized memory address is equal to the reserved value and where a store instruction providing the position-independent addressing mode may store the reserved value when determining that the memory address is an invalid or NULL memory address.
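The encoding described in the abstract above can be modeled in a few lines, under the assumption that the "offset previously determined for the memory address" is the address of the pointer word itself (i.e., a self-relative pointer). Memory is simulated as a dict of integer addresses; the reserved value and helper names are illustrative, not the patent's.

```python
RESERVED_NULL = -(2**63)  # reserved encoding meaning "invalid/NULL pointer"

def store_pointer(memory, addr, target):
    """Store a normalized (self-relative) pointer, or the reserved value for NULL."""
    memory[addr] = RESERVED_NULL if target is None else target - addr

def load_pointer(memory, addr):
    """Decode the pointer; return None where a real ISA might return NULL or trap."""
    norm = memory[addr]
    return None if norm == RESERVED_NULL else addr + norm
```

The payoff of the normalized form is position independence: copying an entire region to a new base address leaves every intra-region pointer valid, because each stored word encodes only a relative distance.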
Techniques are disclosed herein for routing an utterance to action for a digital assistant with generative artificial intelligence. An input query comprising particular data can be received from a user. An action and a set of input argument slots within a schema associated with the action can be identified based on the input query. The input argument slots can be filled by determining whether one or more parameters are derivable from the particular data and filling each input argument slot with a version of the parameters that conforms to the schema. An execution plan that comprises the action that includes the set of filled input argument slots can be sent to an execution engine configured to execute the action for generating a response to the input query.
The present disclosure relates to a “no code” framework for creating adapters to integrate a cloud service or other platform with a third-party service. User interface component metadata representing a user interface may be generated based at least in part on a metadata document. The metadata document may comprise metadata specifying a plurality of fields and one or more dependencies of one or more of the plurality of fields on one or more other of the plurality of fields. The user interface component metadata may comprise metadata representing a first field of the plurality of fields. An interaction with the first field in the user interface may be detected, and a second field of the plurality of fields having a dependency on the first field may be identified based at least in part on the metadata document. The user interface component metadata may be updated to comprise data representing the second field.
Techniques are disclosed herein for contextual query rewriting. The techniques include inputting a first user utterance and a conversation history to a first language model. The first language model identifies an ambiguity in the first user utterance and one or more terms in the conversation history to resolve the ambiguity, modifies the first user utterance to include the one or more terms identified to resolve the ambiguity to generate a modified utterance, and outputs the modified utterance. The computing system provides the modified utterance as input to a second language model. The second language model performs a natural language processing task based on the input modified utterance and outputs a result. The computing system outputs a response to the first user utterance based on the result.
The techniques described herein provide a novel medication order pipeline that may be used to facilitate medication orders by identifying the medication ordering intent from a natural language utterance and using a FHIR-compliant data structure to generate medication order information to fulfill medication orders through an electronic health record (EHR) system. The medication order information may be a concise search phrase containing the medical entities extracted from the data structure, or EHR system-specific medical codes converted from the standard medical codes in the data structure.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 20/10 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to drugs or medications, e.g. for ensuring correct administration to patients
A summary generation system is disclosed that is configured to generate a summary for content to be summarized by identifying relevant chunks of information from the content to be summarized using a large language model (LLM) and a set of questions. The set of questions enable the system to identify and retrieve relevant chunks of information. Each question undergoes a translation or transformation process to generate multiple question variants for each question. The multiple question variants are used by the system to optimize the search to obtain relevant chunks of information. Then, using the multiple question variants and an LLM, the system extracts information (i.e., answers) from the relevant chunks of information. The summary generation system then collates the answers to create an accurate and comprehensive summary for the content to be summarized.
G06F 40/166 - Editing, e.g. inserting or deleting
G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for mining medical data, e.g. analysing previous cases of other patients
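The question-variant pipeline in the summary-generation abstract above can be sketched as follows. The variant generation, retrieval, and answer extraction here are naive keyword stand-ins for the LLM calls the abstract describes; every name is hypothetical.

```python
def variants_of(question):
    # stand-in for the LLM translation/transformation step
    return [question, question.lower(), question.rstrip("?")]

def retrieve_chunks(question_variants, chunks):
    """Naive retrieval: a chunk is relevant if it shares any word with a variant."""
    words = {w.lower().strip("?") for q in question_variants for w in q.split()}
    return [c for c in chunks if words & {w.lower() for w in c.split()}]

def summarize(questions, chunks):
    answers = []
    for q in questions:
        relevant = retrieve_chunks(variants_of(q), chunks)
        answers.extend(relevant)       # stand-in for LLM answer extraction
    # collate: deduplicate while preserving order, then join into a summary
    return " ".join(dict.fromkeys(answers))
```

The structure mirrors the abstract: each question fans out into variants to widen the search, answers are pulled only from the chunks those variants retrieve, and the collation step assembles the final summary.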
Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
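The embedding-similarity check described above can be sketched with cosine similarity; the threshold value and the toy embedding used in testing are illustrative assumptions, not the patent's.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def drop_repetitions(subcomponents, embed, threshold=0.9):
    """Keep each sub-component only if it is not a near-duplicate of one kept earlier."""
    kept, kept_vecs = [], []
    for text in subcomponents:
        vec = embed(text)
        if any(cosine(vec, v) >= threshold for v in kept_vecs):
            continue  # repetitious with respect to an earlier sub-component: remove
        kept.append(text)
        kept_vecs.append(vec)
    return kept
```

Comparing embeddings rather than raw strings is what lets the filter catch paraphrased repetition, not just verbatim duplicates.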
Techniques are disclosed herein for managing ambiguous date mentions in natural language utterances in transforming natural language utterances to logical forms by encoding the uncertainties of the ambiguous date mentions and including the encoded uncertainties in the logical forms. In a training phase, training examples including natural language utterances, logical forms, and database schema information are automatically augmented and used to train a machine learning model to convert natural language utterances to logical form. In an inference phase, input database schema information is augmented and used by the trained machine learning model to convert an input natural language utterance to logical form.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
Techniques for standardizing text data are disclosed. The system may identify, within a content item, a target phrase that is to be standardized. A subset of characters of a verb in the target phrase may be selected for comparison to a list of nouns. The subset of characters may be compared to a list of nouns identified in a data corpus. A noun in the list of nouns may be added to a candidate subset of nouns to replace the verb if the noun includes a sequence of characters that matches the subset of characters. A particular noun to replace the verb may be selected from the candidate subset of nouns based on a frequency associated with the particular noun occurring within the data corpus. The system may convert the target phrase to generate a standard phrase at least by replacing the verb with the particular noun.
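The replacement step described above can be sketched under two stated assumptions: that the "subset of characters" is a prefix of the verb, and that a noun matches when it contains that character sequence. The function and parameter names are illustrative.

```python
def standardize_phrase(phrase, verb, nouns, corpus_freq, k=6):
    """Replace `verb` in `phrase` with the corpus-most-frequent noun that
    contains the verb's first `k` characters; leave the phrase unchanged
    when no candidate noun matches."""
    stem = verb[:k]                                   # subset of the verb's characters
    candidates = [n for n in nouns if stem in n]      # nouns containing that sequence
    if not candidates:
        return phrase
    best = max(candidates, key=lambda n: corpus_freq.get(n, 0))
    return phrase.replace(verb, best)
```

Ranking the candidate nouns by corpus frequency is the tie-breaking step the abstract describes for choosing among several plausible replacements.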
Techniques for using a large language model (LLM) to detect out-of-scope (OOS) and out-of-domain (OOD) utterances. In one aspect, a method includes routing an utterance to a skill bot. The skill bot is configured to execute an action for completing a task associated with the utterance, and a workflow associated with the action includes a GenAI component state configured to facilitate completion of at least part of the task. The method further includes inputting a prompt into a GenAI model for processing. The prompt includes the utterance and scope-related elements that teach the GenAI model to output an invalid input variable when the utterance is OOS or OOD. When the GenAI model determines the utterance is OOS or OOD as part of the processing, a response is generated to include the invalid input variable, and the GenAI component state is caused to transition to a different state or workflow based on the response.
In an embodiment, a computer generates a respective original inference from each of many records. Permuted values are selected for a feature from original values of the feature. Based on the permuted values for the feature, a permuted inference is generated from each record. Fairness and accuracy of the original and permuted inferences are measured. For each of many features, the computer measures a respective impact on fairness of a machine learning model, and a respective impact on accuracy of the machine learning model. A global explanation of the machine learning model is generated and presented based on, for multiple features, the impacts on fairness and accuracy. Based on the global explanation, an interactive indication to exclude or include a particular feature is received. The machine learning model is (re-)trained based on the interactive indication to exclude or include the particular feature, which may increase the fairness of the model.
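The per-feature measurement described above can be sketched as classic permutation importance: the impact of a feature on a metric is the drop observed after permuting that feature's values across records. Accuracy is used here; a fairness metric plugs into the same slot. All names are illustrative.

```python
import random

def permutation_impact(records, labels, feature, predict, metric, seed=0):
    """Measure how much `metric` degrades when `feature` is permuted."""
    baseline = metric([predict(r) for r in records], labels)
    shuffled = [r[feature] for r in records]
    random.Random(seed).shuffle(shuffled)              # permuted values for the feature
    permuted = [dict(r, **{feature: v}) for r, v in zip(records, shuffled)]
    return baseline - metric([predict(r) for r in permuted], labels)
```

Running this once per feature, once with a fairness metric and once with an accuracy metric, yields the pairs of impacts that the global explanation in the abstract is built from.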
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for identifying entities for automatic SOAP note generation. A text transcript is accessed and segmented into portions. The text transcript can correspond to an interaction between a first entity and a second entity. Entities for the respective portions are identified using one or more machine-learning models. Facts are derived from the text transcript based at least in part on the entities, and a SOAP note is generated using the one or more machine-learning models. The SOAP note can be stored in a database in association with at least one of the first entity and the second entity.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G06N 20/20 - Ensemble learning
21.
UNIFIED RDBMS FRAMEWORK FOR HYBRID VECTOR SEARCH ON DIFFERENT DATA TYPES VIA SQL AND NOSQL
Techniques for a unified relational database framework for hybrid vector search are provided. In one technique, multiple documents are accessed and a vector table and a text table are generated. For each accessed document, data within the document is converted to plaintext, multiple chunks are generated based on the plaintext, an embedding model generates a vector for each of the chunks, the vectors are stored in the vector table along with a document identifier that identifies the accessed document, tokens are generated based on the plaintext, the tokens are stored in the text table along with the document identifier. Such processing may be performed in a database system in response to a single database statement to create a hybrid index. In response to receiving a hybrid query, a vector query and a text query are generated and executed and the respective results may be combined.
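The dual-table build step described above can be illustrated with an in-memory stand-in: each document's plaintext is chunked, each chunk is embedded into the vector table, and the document's tokens go into the text table, both keyed by document identifier. The chunking, embedding, and tokenization here are naive placeholders for the database's real components.

```python
def build_hybrid_index(documents, embed, chunk_size=20):
    """documents: {doc_id: plaintext}. Returns (vector_table, text_table)."""
    vector_table, text_table = [], []
    for doc_id, plaintext in documents.items():
        words = plaintext.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            vector_table.append((doc_id, embed(chunk)))          # one vector per chunk
        text_table.append((doc_id, [w.lower() for w in words]))  # tokens per document
    return vector_table, text_table
```

At query time, the hybrid query in the abstract would fan out into a nearest-neighbor search over `vector_table` and a keyword search over `text_table`, with the two result sets combined by document identifier.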
Techniques for managing events that record occurrences in a computing environment are disclosed. The system identifies events, and the system applies event processing mechanisms to the events. The event processing mechanisms generate incidents to represent the events. The system presents an interface that demonstrates how the events are mapped to the incidents. A user may interact with the interface to modify the event processing mechanisms and/or define new event processing mechanisms. Furthermore, the system may identify a group of uncompressed events, and the system may determine a candidate compression policy that would generate a single incident to represent the group of uncompressed events. The system may generate the candidate compression policy by applying a trained machine learning model to the group of uncompressed events. The system may simulate applying the candidate compression policy, and the system may present the results of the simulated application to the user on the interface.
Techniques are described for performing packet level data centric protection enforcement. Instead of being restricted to perimeter-based security and defining and creating rules that are difficult to maintain, techniques described herein allow users to create data-centric, intent-based policies that are enforced at different enforcement points within one or more networks. In some examples, a method comprises receiving a packet at an enforcement point (EP) within one or more networks that include a plurality of enforcement points (EPs); accessing enforcement data that indicates allowed communications between the EP and one or more other EPs, wherein the data are generated from a policy that specifies how traffic flows through the one or more networks and a determination of possible data movements between at least two EPs in the plurality of EPs; and enforcing the flow of the packet at the EP based on the data.
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for identifying entities for automatic SOAP note generation. A text transcript is accessed and segmented into portions. The text transcript can correspond to an interaction between a first entity and a second entity. One or more entities for the respective portions are identified using one or more machine-learning models. Facts are derived from the respective portions using the one or more machine-learning models based at least in part on the context of the respective portions. A SOAP note is generated using the one or more machine-learning models and based at least in part on the facts. The SOAP note can be stored in a database in association with at least one of the first entity and the second entity.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Techniques for user gesture recording are provided. In one technique, while recording user actions with respect to a website, it is detected that a user entered text within a text field of a webpage of the website. An action pane is presented that includes a value text field and a test value text field. In response to the detection, the text is inserted into the test value text field of the action pane. An association between the text, the text field, and the test value text field is stored as part of a workflow. In a related technique, user input is received through the action pane, where the user input selects a reference, to a source of input, to include in the text field during execution of the workflow.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
26.
Generating Enhanced Queries Using Machine Learning Models
Techniques for generating terms to replace an initial set of search terms for a query are disclosed. A system generates a training data set for training a machine learning model. Generating the training data set includes generating search value vectors for each of a set of labels based on sets of search values associated respectively with the labels in the set of labels. The system trains a machine learning model to predict a target label for a target search vector based on the set of labels and the respectively associated search value vectors. The system generates a target search value vector based on an initial set of search values. The system then applies the trained machine learning model to the target search value vector to predict the target label. The target label is used as a search term, that replaces the initial set of search values, for executing the query.
Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs with respect to prompt variations. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt generating LLM. The prompt generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response generating LLM generated based on the prompt.
Technology is disclosed herein for generating a visualization of data based on an AI-generated data object. In an implementation, an application, such as a data analytics application, receives a natural language input from a user which relates to a table of data in the application. The table includes data organized according to table columns. The application generates a prompt for a large language model (LLM) service which includes the names of the table columns. The prompt tasks the LLM service with selecting columns for the visualization based on the natural language input and the names of the table columns. The prompt also tasks the LLM service with generating a response in a JSON format. The application populates the JSON object, which describes the visualization, according to the response. The application then creates the visualization based on the JSON object.
Techniques for configuring autosave triggers in a computing environment based on environment and data conditions are disclosed. A system trains a machine learning model based on data attributes and environmental attributes to generate autosave triggers for a computing environment. The autosave triggers are triggered by different conditions. For example, one autosave trigger may be triggered when an error condition is detected. Another may be triggered when a certain number of operations are performed. The machine learning model generates autosave trigger value scores for one or more autosave triggers. The system may implement the autosave triggers in the computing environment based on the autosave trigger value scores.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
Examples provide a computer system including an electronic processor configured to obtain a set of source code and a plurality of test scenarios. Each of the plurality of test scenarios specifies a respective build architecture. For each respective test scenario of the plurality of test scenarios, the electronic processor is configured to instantiate a respective build environment according to the respective build architecture, compile the set of source code in the respective build environment to generate a respective binary file, and generate a respective set of one or more metrics for the respective binary file.
Techniques are disclosed herein for implementing digital assistants using generative artificial intelligence. An input prompt comprising a natural language utterance and candidate agents and associated actions can be constructed. An execution plan can be generated using a first generative artificial intelligence model based on the input prompt. The execution plan can be executed to perform actions included in the execution plan using agents indicated by the execution plan. A response to the natural language utterance can be generated by a second generative artificial intelligence model using one or more outputs from executing the execution plan.
The present disclosure relates to resource allocation among a plurality of clients for using a cloud-based service, e.g., a generative artificial intelligence (GenAI) service. A first target amount of resource can be allocated to a first client, and a second target amount of resource can be allocated to a second client for using the service. A request can be received from a third client for allocating resources. It can be estimated that (i) the first client is using a first subset of the first target amount and not using a second subset of the first target amount, and (ii) the second client is using a third subset of the second target amount and not using a fourth subset of the second target amount. It can be determined that the second subset is greater than the fourth subset. At least a portion of the second subset can be allocated as a third target amount of resource to the third client.
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence
H04L 47/74 - Measures in reaction to resource unavailability
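The estimate-and-lend decision in the resource-allocation abstract above reduces to comparing each client's unused subset and lending from the largest one. The sketch below generalizes the two-client comparison to any number of clients; amounts are abstract units and all names are illustrative.

```python
def lend_unused_capacity(targets, usage, requested):
    """targets/usage: {client: amount}. Pick the client with the largest
    unused subset of its target amount and lend up to `requested` units
    of that slack to the new client. Returns (donor, granted)."""
    slack = {client: targets[client] - usage.get(client, 0) for client in targets}
    donor = max(slack, key=slack.get)     # the "second subset vs fourth subset" comparison
    granted = min(requested, slack[donor])
    return donor, granted
```

With two clients this reproduces the abstract's flow exactly: the donor is the client whose unused subset is greater, and the grant to the third client comes out of that subset.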
33.
Processing Transaction Data At Different Levels Of Granularity
A system accesses transaction data associated with a plurality of transactions, and based on characteristics of the transaction data, determines a set of functions to be applied to the transaction data at different corresponding levels of granularity. Determining the set of functions includes determining parallel processing requirements corresponding to the set of functions and determining an execution order corresponding to the set of functions based on the parallel processing requirements. The system schedules parallel execution of (a) a first function on the transaction data at a first level of granularity to generate a first dataset having the first level of granularity, and (b) a second function on the transaction data at a second level of granularity to generate a second dataset having the second level of granularity.
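The scheduling described above can be sketched as two aggregation functions applied to the same transaction data at different granularities and submitted to a thread pool in parallel. The grouping keys and the use of `ThreadPoolExecutor` are assumptions for illustration.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def aggregate(transactions, key_fn):
    """Sum transaction amounts at the granularity defined by `key_fn`."""
    totals = defaultdict(float)
    for t in transactions:
        totals[key_fn(t)] += t["amount"]
    return dict(totals)

def run_in_parallel(transactions):
    """Schedule two functions over the same data at different granularities."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        by_day = pool.submit(aggregate, transactions, lambda t: t["day"])
        by_customer = pool.submit(aggregate, transactions, lambda t: t["customer"])
        return by_day.result(), by_customer.result()
```

Because the two aggregations read the same input and write independent outputs, they have no ordering dependency on each other, which is what makes the parallel schedule in the abstract valid.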
Techniques for managing secure virtual card number (VCN) transactions are disclosed. A POS terminal that processes payments receives an instruction in a secure digital communication over a network to process a payment from a customer to a supplier. Based on receiving a payment request via a network, the POS terminal identifies a VCN associated with the request. The POS terminal validates the VCN and processes the payment request. The POS terminal communicates the VCN to the supplier's bank to initiate a funds transfer between the supplier's bank and the customer's bank that issued the VCN. Upon completion of the transaction, the banks confirm the transaction to the customer and the POS terminal.
G06Q 20/34 - Payment architectures, schemes or protocols characterised by the use of specific devices using cards, e.g., smart cards or magnetic cards
G06Q 20/20 - Point-of-sale [POS] network systems
Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. The method then automatically signals that the fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Techniques are disclosed for implementing a digital assistant with copilot support to enhance application usage. In one aspect, a method includes receiving a message payload; invoking, using a thread, a flow based on the message payload; generating a context variable data structure associated with the thread responsive to invoking the flow; determining, using a machine learning model, an intent of a user; accessing, based on the intent, a prompt and an object schema; and revising the prompt based on the message payload, data in the context variable data structure, and the object schema. A generative artificial intelligence model then generates a list comprising one or more executable actions based on the prompt. The one or more executable actions are executed based on one or more parameters to obtain an output, and the output or a communication derived from the output is then sent to the user.
Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of a natural language example prompt by a querent and an example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. The method then automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g., e-mail, using automatic reactions or user delegation, e.g., automatic replies or chatbot-generated messages
H04L 51/04 - Real-time or near-real-time messaging, e.g., instant messaging [IM]
39.
SYSTEM AND METHOD FOR IMPROVING AN END-TO-END AUTOMATIC SPEECH RECOGNITION MODEL
Techniques are disclosed herein for improving the performance of an end-to-end (E2E) Automatic Speech Recognition (ASR) model in a target domain. A set of test examples is generated. The set of test examples comprises multiple subsets of test examples, and each subset of test examples corresponds to a particular test category. A machine learning model is then used to convert audio samples of each subset of test examples to text transcripts. A word error rate is determined for each subset of test examples. A test category is then selected based on the word error rates, and a set of training examples for training the ASR model in a particular target domain is generated from a selected subset of test examples. The training examples are used to fine-tune the model in the target domain. The trained model is then deployed in a cloud infrastructure of a cloud service provider.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g., adaptation to the characteristics of the speaker's voice
G10L 13/08 - Text analysis or generation of parameters for speech synthesis from text, e.g., grapheme-to-phoneme conversion, prosody generation or intonation or stress determination
G10L 15/26 - Speech-to-text systems
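A minimal sketch of the category-selection step (compute a word error rate per test category, then pick the worst-performing category to generate training examples for); the function names and the plain word-level Levenshtein WER are assumptions for illustration.

```python
def wer(ref, hyp):
    """Word error rate via word-level Levenshtein edit distance."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[-1][-1] / max(len(ref), 1)

def select_worst_category(results):
    """results: dict category -> list of (reference_words, hypothesis_words).
    Returns the category with the highest mean WER (the one to target
    with newly generated training examples) plus all per-category rates."""
    rates = {cat: sum(wer(r, h) for r, h in pairs) / len(pairs)
             for cat, pairs in results.items()}
    return max(rates, key=rates.get), rates
```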
40.
Generating Recommendations Based On Predicted Query Execution Plan Performance
Techniques for generating recommendations based on the predicted performance of an execution plan are disclosed. A system predicts the future characteristics of a set of data objects associated with a set of structured query language (SQL) statements. The system predicts how the changes to the set of data objects will result in changes to a query execution plan associated with the SQL statements. The system predicts a set of performance metrics for the changed query execution plan. Based on the predicted performance, the system generates recommendations for modifying data, applications, or database server operations to improve performance.
System and method for providing a natural language generator service for use with data analytics environments. A data analytics system or environment can be integrated with a digital assistant system or environment which provides natural language processing, for purposes of leveraging a user's text or speech input, within a data analytics or data visualization project, for example while generating, modifying, or interacting with data visualizations.
Techniques for managing the implementation of application-code scanning processes are disclosed. A system scans application code by analyzing metadata associated with the application code to identify a set of data needed to scan the application code with a scanning application. Based on the information obtained from the application metadata, the system identifies extraction processes that are needed to obtain the set of data. The system applies a set of one or more application-code scanners by implementing the extraction processes. The system presents in a graphical user interface (GUI) a set of results from scanning operations.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g., secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
43.
AUTOMATIC SOAP NOTE GENERATION USING TASK DECOMPOSITION
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for automatic SOAP note generation using task decomposition. A text transcript is accessed and segmented into portions. The text transcript can correspond to an interaction between a first entity and a second entity. Machine-learning model prompts are used to extract entities and facts for the respective portions and generate SOAP note sections based at least in-part on the facts. A SOAP note is generated by combining the SOAP note sections. The SOAP note can be stored in a database in association with at least one of the first entity and the second entity.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g., for electronic patient records
A resource manager tracks the amount of available memory for a cluster of machines and for each machine in the cluster. The resource manager receives a reservation request from a job for a graph processing operation. The reservation request specifies an identification of the job, a type of reservation, and an amount of memory requested. The resource manager determines whether to grant the reservation request based on the type of reservation, the amount of memory requested, and the amount of available memory in the cluster or in one or more machines in the cluster. In response to determining to grant the reservation request, the resource manager sends a response to the job indicating an amount of memory reserved and adjusts the amount of available cluster memory and the amount of available machine memory for at least one machine in the cluster based on the amount of memory reserved.
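The reservation flow described above could look roughly like this; the two reservation types and the first-fit grant policy shown are simplified assumptions, not the disclosed implementation.

```python
class ResourceManager:
    """Tracks available memory cluster-wide and per machine (illustrative)."""

    def __init__(self, machines):
        self.machine_free = dict(machines)        # machine -> free memory
        self.cluster_free = sum(machines.values())

    def reserve(self, job_id, kind, amount):
        """kind 'cluster': reserve against total cluster memory.
        kind 'machine': reserve on any single machine with enough room.
        Returns (granted_amount, machine_or_None); (0, None) if denied."""
        if kind == "cluster":
            if amount <= self.cluster_free:
                self.cluster_free -= amount
                return amount, None
        elif kind == "machine":
            for m, free in self.machine_free.items():
                if amount <= free:
                    self.machine_free[m] -= amount
                    self.cluster_free -= amount
                    return amount, m
        return 0, None
```

A machine-level reservation adjusts both the per-machine and the cluster-wide bookkeeping, mirroring the dual tracking in the abstract.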
In accordance with an embodiment, described herein is a system and method for providing a chat-to-visualization user interface for use with a data analytics workbook assistant. A data analytics system or environment can be integrated with a digital assistant system or environment which provides natural language processing, for purposes of leveraging a user's text or speech input while generating, modifying, or interacting with data visualizations. The user can interact with the system using a chat-like conversation. Based upon a received input from the user as part of the conversation, the system can generate data comprising a resolved intent and entities, and locate an appropriate dataset. The system supports complex follow-up interactions or questions that pertain to previous responses combined with the curated data. The user can use modifiers to further enhance their questioning or analysis of the data, and incorporate resulting insights into their visualization project.
Operations of a certificate bundle validation service may include receiving a first certificate bundle that includes a first set of one or more digital certificates, and a digital signature, associated with the first certificate bundle; determining, using a public key of an asymmetric key pair associated with a second set of one or more digital certificates, that the digital signature is generated using a private key of the asymmetric key pair; and responsive to determining that the digital signature is generated using the private key, storing the first certificate bundle in a certificate repository as a trusted certificate bundle.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Techniques are disclosed for automatically generating prompts. A method comprises accessing first prompts, wherein each of the first prompts is a prompt for generating a portion of a SOAP note using a machine-learning model. For each respective first prompt of the first prompts: (i) using the respective first prompt to obtain a first result from a first machine-learning model, (ii) using the respective first prompt and the first result to obtain a second result from a second machine-learning model, the second result including an assessment of the first result, (iii) using the second result to obtain a third result from a third machine-learning model, the third result including a second prompt, (iv) setting the second prompt as the respective first prompt, (v) repeating steps (i)-(iv) a number of times to obtain a production prompt, (vi) adding the production prompt to a collection of prompts; and storing the collection of prompts.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g., for electronic patient records
G16H 15/00 - ICT specially adapted for medical reports, e.g., their creation or transmission
Techniques are described for performing packet-level, data-centric protection enforcement. Instead of being restricted to perimeter-based security and defining and creating rules that are difficult to maintain, the techniques described herein allow users to create data-centric, intent-based policies that are enforced at different enforcement points within one or more networks. In some examples, a method comprises receiving a packet at an enforcement point (EP) within one or more networks that include a plurality of enforcement points (EPs); accessing enforcement data that indicates allowed communications between the EP and one or more other EPs, wherein the data are generated from a policy that specifies how traffic flows in the one or more networks and a determination of possible data movements between at least two EPs in the plurality of EPs; and enforcing the flow of the packet at the EP based on the data.
Techniques for preparing data for high-precision absolute localization of a moving object along a trajectory are provided. In one technique, a sequence of points is stored, where each point corresponds to a different set of Cartesian coordinates. A curve is generated that approximates a line that passes through the sequence of points. Based on the curve, a set of points is generated on the curve, where the set of points is different than the sequence of points. New Cartesian coordinates are generated for each point in the set of points. After generating the new Cartesian coordinates, Cartesian coordinates of a position of a moving object are determined.
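One way to realize the resampling step, using a piecewise-linear approximation of the curve through the stored points (the disclosure may fit a smoother curve); the function name and even-arc-length spacing are illustrative assumptions.

```python
import math

def resample_polyline(points, n):
    """Generate n new points spaced evenly along the arc length of the
    piecewise-linear curve through `points` (a list of (x, y) tuples)."""
    # Cumulative arc length at each original point.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out = []
    for k in range(n):
        target = total * k / (n - 1)
        # Find the segment containing this arc length and interpolate.
        i = 1
        while i < len(dists) - 1 and dists[i] < target:
            i += 1
        seg = dists[i] - dists[i - 1]
        t = 0.0 if seg == 0 else (target - dists[i - 1]) / seg
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out
```

The new Cartesian coordinates produced this way differ from the original sequence of points, matching the abstract's description.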
The present disclosure relates to secure deployment of model weights from a generative artificial intelligence (GenAI) platform to a cloud service. The method includes accessing model metadata and a set of weights of a GenAI model associated with a GenAI platform. These model weights may be encrypted using a first encryption key that may be provided in the model metadata. The encrypted model weights may be decrypted by utilizing the first encryption key from the model metadata. Each key may be associated with a specific type of GenAI model. Before the model weights are moved from the GenAI platform cloud tenancy to cloud storage in the GenAI home region, the model weights may be encrypted again by utilizing a second encryption key. This encryption by the cloud may enable independent control over the sensitive information during transit and storage.
Techniques for a unified relational database framework for hybrid vector search are provided. In one technique, multiple documents are accessed and a vector table and a text table are generated. For each accessed document, data within the document is converted to plaintext, multiple chunks are generated based on the plaintext, an embedding model generates a vector for each of the chunks, the vectors are stored in the vector table along with a document identifier that identifies the accessed document, tokens are generated based on the plaintext, the tokens are stored in the text table along with the document identifier. Such processing may be performed in a database system in response to a single database statement to create a hybrid index. In response to receiving a hybrid query, a vector query and a text query are generated and executed and the respective results may be combined.
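The abstract leaves open how the vector-query and text-query results "may be combined"; reciprocal rank fusion is one common choice and is sketched here with hypothetical names.

```python
def hybrid_rank(vector_hits, text_hits, k=60):
    """Merge two best-first lists of document ids with reciprocal rank
    fusion: each document scores 1/(k + rank + 1) per list it appears in,
    and documents found by both retrievers float to the top."""
    scores = {}
    for hits in (vector_hits, text_hits):
        for rank, doc in enumerate(hits):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# d2 appears in both result lists, so it outranks every single-list hit.
print(hybrid_rank(["d1", "d2", "d3"], ["d2", "d4"]))
# → ['d2', 'd1', 'd4', 'd3']
```

In the framework described, both input lists would come from executing the generated vector query and text query against the hybrid index.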
Techniques are provided for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
Techniques are disclosed herein for implementing digital assistants using generative artificial intelligence. An input prompt comprising a natural language utterance and candidate agents and associated actions can be constructed. An execution plan can be generated using a first generative artificial model based on the input prompt. The execution plan can be executed to perform actions included in the execution plan using agents indicated by the execution plan. A response to the natural language utterance can be generated by a second generative artificial intelligence model using one or more outputs from executing the execution plan.
A system is disclosed that includes capabilities by which a nested sub-resource residing in a service tenancy can access a customer-owned resource residing in a customer tenancy without the use of a cross-tenant policy. The disclosed system provides the ability for a nested sub-resource residing in a service tenancy to obtain the resource principal identity of a higher-level resource residing in the customer tenancy and use the identity of the higher-level resource to access a customer-owned resource residing in the customer tenancy. Using the resource principal identity of its higher-level resource, the sub-resource can access a customer-owned resource that resides in a customer tenancy in a seamless way without having to write a cross-tenancy policy statement that provides permission to the sub-resource to access the customer-owned resource.
G06F 21/10 - Protecting distributed programs or content, e.g., vending or licensing of copyrighted material
G06F 21/44 - Program or device authentication
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g., virtualisation or emulation of application or operating system execution engines
55.
STORAGE AND RETRIEVAL MECHANISMS FOR KNOWLEDGE ARTIFACTS ACQUIRED AND APPLICABLE ACROSS CONVERSATIONS
Techniques are disclosed for storage and retrieval mechanisms for knowledge artifacts acquired and applicable across conversations to enrich user interactions with a digital assistant. In one aspect, a method includes receiving a natural language utterance from a user during a session between the user and the digital assistant and obtaining a topic context instance for the utterance. The obtaining includes executing a search, determining whether the utterance satisfies a threshold of similarity with one or more topics, identifying the topic context instance associated with the topics, and associating the utterance with the topic context instance. A first generative artificial intelligence model can then be used to generate a list of executable actions. An execution plan is then created, and the topic context instance is updated with the execution plan. The execution plan is then executed, and an output or a communication derived from the output is sent to the user.
In an embodiment, a computer generates a respective original inference from each of many records. Permuted values are selected for a feature from original values of the feature. Based on the permuted values for the feature, a permuted inference is generated from each record. Fairness and accuracy of the original and permuted inferences are measured. For each of many features, the computer measures a respective impact on fairness of a machine learning model, and a respective impact on accuracy of the machine learning model. A global explanation of the machine learning model is generated and presented based on, for multiple features, the impacts on fairness and accuracy. Based on the global explanation, an interactive indication to exclude or include a particular feature is received. The machine learning model is (re-)trained based on the interactive indication to exclude or include the particular feature, which may increase the fairness of the model.
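The per-feature measurement described above can be sketched as classic permutation importance: shuffle one feature's values, regenerate inferences, and compare a metric before and after. The fairness or accuracy metric is passed in as a callable; all names are illustrative.

```python
import random

def permutation_impact(model, rows, labels, feature, metric, seed=0):
    """Measure how permuting one feature's values changes a metric
    (accuracy, or a fairness score computed the same way).

    model:  callable row -> prediction
    rows:   list of dicts mapping feature name -> value
    metric: callable (labels, predictions) -> float
    Returns (original_score, permuted_score)."""
    original = metric(labels, [model(r) for r in rows])
    values = [r[feature] for r in rows]
    random.Random(seed).shuffle(values)          # permuted values
    permuted_rows = [dict(r, **{feature: v})
                     for r, v in zip(rows, values)]
    permuted = metric(labels, [model(r) for r in permuted_rows])
    return original, permuted
```

A large drop in the permuted score marks a feature whose exclusion (or inclusion) materially affects the model, which is what the global explanation surfaces per feature.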
Techniques are disclosed herein for routing an utterance to action for a digital assistant with generative artificial intelligence. An input query comprising particular data can be received from a user. An action and a set of input argument slots within a schema associated with the action can be identified based on the input query. The input argument slots can be filled by determining whether one or more parameters are derivable from the particular data and filling each input argument slot with a version of the parameters that conforms to the schema. An execution plan that comprises the action that includes the set of filled input argument slots can be sent to an execution engine configured to execute the action for generating a response to the input query.
A system receives a configuration request comprising an infrastructure definition that defines a set of resources, to be selected from a set of tenant-managed resources implemented on a tenant's premises, for implementing the compute target entity. The system generates a compute target entity associated with an addressable identifier. The compute target entity corresponds to the set of resources selected from the set of tenant-managed resources. The system receives an execution request for execution of a set of operations, where the execution request specifies the addressable identifier associated with the compute target entity for execution of the set of operations. The system maps the addressable identifier of the compute target entity to the set of resources. The system causes execution of the set of operations on the set of resources on the tenant's premises via the compute target entity.
Techniques for time-bound hyperparameter tuning are disclosed. The techniques enable the determination of optimized hyperparameters for a machine learning (ML) model given a specified time bound using a three-stage approach. A series of trials is executed, during each of which the ML model is trained using a distinct set of hyperparameters. In the first stage, a small number of trials are executed to initialize the algorithm. In the second and third stages, a certain number of trials are executed in each stage. The number of trials to run in each stage is determined using one or more computer-implemented techniques. The computer-implemented techniques can also be used to narrow the hyperparameter search space and the feature space. Following the third stage, a set of optimized hyperparameters is adopted based on a predefined optimization criterion, such as minimization of an error function.
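A toy sketch of splitting a wall-clock budget across the three stages: a small fixed initialization stage, then two stages whose trial counts are derived from the remaining budget. The 60/40 split and the trial-count estimate are invented for illustration, not the patented technique.

```python
def stage_trial_counts(time_budget, avg_trial_time, init_trials=5,
                       stage2_frac=0.6):
    """Return (stage1, stage2, stage3) trial counts for a time bound.

    time_budget:    total time available for tuning
    avg_trial_time: estimated time per training trial
    """
    # Total trials the budget affords, never fewer than initialization.
    total_trials = max(init_trials, int(time_budget // avg_trial_time))
    remaining = total_trials - init_trials
    stage2 = int(remaining * stage2_frac)
    stage3 = remaining - stage2
    return init_trials, stage2, stage3

# 100 time units at ~2 units per trial → 50 trials: 5 + 27 + 18.
print(stage_trial_counts(100, 2))
# → (5, 27, 18)
```

In practice the stage sizes would also feed back into narrowing the hyperparameter search space between stages, as the abstract describes.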
Systems, methods, and other embodiments associated with automated fine-tuning of text summarization for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a body of text and an example summary. The method fine-tunes a large language model (LLM) based on a loss function that compares the example summary and a generated summary generated by the LLM. The example and generated summaries are compared at sentence, paragraph, and/or article levels. The method generates an evaluation score for performance of the tuned LLM as a text summarizer based on a further comparison of a reference summary and a summary generated by the tuned LLM. The method then automatically determines to deploy the tuned LLM to a text summarization task in response to the evaluation score satisfying a threshold.
Disclosed herein are various approaches for sharing knowledge within and between organizations while protecting sensitive data. A machine learning model may be trained using training prompts querying a vector store to prevent unauthorized user disclosure of data derived from the vector store. A prompt may be received and a response to the prompt may be generated using the machine learning model based at least in part on the vector store.
The present disclosure relates to secure deployment of model weights from a generative artificial intelligence (GenAI) platform to a cloud service. The method includes accessing model metadata and a set of weights of a GenAI model associated with a GenAI platform. These model weights may be encrypted using a first encryption key that may be provided in the model metadata. The encrypted model weights may be decrypted by utilizing the first encryption key from the model metadata. Each key may be associated with a specific type of GenAI model. Before the model weights are moved from the GenAI platform cloud tenancy to cloud storage in the GenAI home region, the model weights may be encrypted again by utilizing a second encryption key. This encryption by the cloud may enable independent control over the sensitive information during transit and storage.
The present disclosure relates to adaptively overlapping redo writes. A log writer, while operating in a thin mode, may assign a first log writer group of a plurality of log writer groups to write one or more first redo log records to an online redo log in response to determining that a pipelining parameter is satisfied. The thin mode may be associated with one or more target sizes that are less than one or more target sizes associated with a thick mode. The log writer may determine to operate in the thick mode based at least in part on at least a portion of the plurality of log writer groups being unavailable to write one or more second redo log records to the online redo log. The log writer, while operating in the thick mode, may assign a second log writer group of the plurality of log writer groups to write one or more second redo log records from the log buffer to the online redo log in response to determining that an amount of redo log records in the log buffer meets one of the one or more target sizes associated with the thick mode. The log writer, while operating in the thick mode, may assign a third log writer group of the plurality of log writer groups to write one or more second redo log records from the log buffer to the online redo log in response to determining that a highest busy group number meets or exceeds a core threshold.
Techniques for language model (LM) summarization using semantical clustering are provided. In one technique, a plurality of concepts reflected in text data is identified. A plurality of concept clusters is generated based on similarity among the plurality of concepts. Thus, some concept clusters may include multiple concepts. For each concept cluster of the plurality of concept clusters, an LM generates a summary of the text corresponding to that concept cluster. A summary response of the text data is generated by aggregating the summary of each concept cluster of the plurality of concept clusters. In another technique, an LM generates a summary based on text data. A first set of concepts reflected in the summary is identified and a second set of concepts reflected in the text data is identified. A difference between the two sets may indicate that the summary is missing one or more concepts.
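The cluster-wise summarization and the missing-concept check can be sketched as follows, with a stand-in callable in place of the language model; all names are hypothetical.

```python
def summarize_by_cluster(chunks_by_cluster, summarize):
    """Summarize each concept cluster's text independently, then
    aggregate the per-cluster summaries into one summary response.
    `summarize` stands in for the LM call."""
    parts = [summarize(text) for text in chunks_by_cluster.values()]
    return "\n".join(parts)

def missing_concepts(source_concepts, summary_concepts):
    """Concepts reflected in the text data but absent from its summary;
    a non-empty result flags possible omissions in the summary."""
    return sorted(set(source_concepts) - set(summary_concepts))
```

The second technique in the abstract reduces to exactly this set difference once concepts have been extracted from both the summary and the original text.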
Techniques for metadata-driven rapid adapter building (RAB) are disclosed, including: receiving, by an RAB framework, a function call from a third-party application; obtaining, by the RAB framework, a metadata document that defines an adapter between a server-side runtime and the third-party application; determining that the metadata document includes one or more metadata fields that map the function call to one or more software development kit (SDK) functions exposed by the server-side runtime; responsive to receiving the function call and based on the one or more metadata fields, executing the one or more SDK functions exposed by the server-side runtime.
Techniques are disclosed herein for managing date-time intervals in transforming natural language utterances to logical forms by providing an enhanced grammar, a natural language utterance comprising a date-time interval, and database schema information to a machine learning model that has been trained to convert natural language utterances to logical forms; and using the machine learning model to convert the natural language utterance to an output logical form, wherein the output logical form comprises at least one of the date-time interval and an extraction function for extracting date-time information corresponding to the date-time interval from at least one date-time attribute of the database schema information.
G06F 40/58 - Use of machine translation, e.g., for multilingual retrieval, for server-side translation for client devices or for real-time translation
G06F 40/166 - Editing, e.g., insertion or deletion
G06F 40/253 - Grammatical analysis; Style critique
Techniques are disclosed herein for configuring agents for use by digital assistants that use generative artificial intelligence. An agent may be in the form of a container that is configured to have one or more actions that can be executed by a digital assistant. The agent may be configured by initially defining specification parameters for the agent based on natural language input from a user. Configuration information for the one or more assets can be imported into the agent. One or more actions may then be defined for the agent based on importing of the configuration information, the natural language input from the user, or both. A specification document can be generated for the agent and can comprise various description metadata, such as agent, asset, or action metadata, or combinations thereof. The specification document may be stored in a data store that is communicatively coupled to the digital assistant.
Techniques are provided for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
Operations of a certificate bundle distribution service may include: detecting a trigger condition to distribute a certificate bundle that includes a set of certificate authority certificates; determining, for each of a plurality of network entities associated with a computer network, a fault domain representing at least one single point of failure; partitioning the plurality of network entities into a plurality of certificate distribution groups, based on a set of partitioning criteria that includes a fault domain of each particular network entity, in which each particular certificate distribution group includes a particular subset of network entities, and the particular subset of network entities are associated with a particular fault domain; selecting a particular certificate distribution group, of the plurality of certificate distribution groups, for distribution of the certificate bundle; and transmitting the certificate bundle to the particular subset of network entities in the particular certificate distribution group.
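The partitioning step (grouping network entities so each certificate distribution group shares a single fault domain, letting the rollout proceed one group at a time and limiting the blast radius of a bad bundle) might look like this minimal sketch; the data shapes are assumptions.

```python
from collections import defaultdict

def partition_by_fault_domain(entities):
    """Partition network entities into certificate distribution groups.

    entities: list of (entity_id, fault_domain) pairs, where the fault
    domain represents at least one single point of failure.
    Returns dict fault_domain -> list of entity ids (one group each)."""
    groups = defaultdict(list)
    for entity_id, domain in entities:
        groups[domain].append(entity_id)
    return dict(groups)

# Three entities across two fault domains yield two distribution groups.
print(partition_by_fault_domain([("a", "fd1"), ("b", "fd2"), ("c", "fd1")]))
# → {'fd1': ['a', 'c'], 'fd2': ['b']}
```

The distribution service would then select one group at a time and transmit the bundle only to that group's subset of entities.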
Systems, methods, and other embodiments associated with automated fine-tuning of software code generation by large language models are described herein. In one embodiment, a method accesses a collection of software code samples that intermix sample code and human language description. The method generates prompts to an LLM to write code that performs as described by the human language description of the sample code. The method fine-tunes a large language model to generate software code based on a code generation loss function that evaluates code generated by the LLM in response to the prompts. The method generates an evaluation score for performance of the tuned large language model as a code generator based on code generation loss for second generated code. And, the method automatically signals that fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Techniques are provided for determining access affinity between services in a database cluster, and for placing workload of those services based, at least in part, on the access affinity. The techniques involve generating access records that indicate when sessions that are associated with each service operate on data blocks that were accessed by another session that is associated with another service. Access affinity information is generated based on the access records, where the access affinity information indicates access affinity (e.g. conflict scores) between each pair of services. The cluster then selects which node is to perform the work of a given session based on the access affinity information and the service associated with the session.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
72.
ENSURING THAT LANGUAGE MODELS FOLLOW INSTRUCTIONS INDICATED IN PROMPTS
Techniques for ensuring that language models follow instructions indicated in prompts are provided. In one technique, a first language model generates a response based on a prompt. A set of instructions in the prompt is identified. For each instruction in the set, a second language model determines whether the response indicates that the first language model followed the instruction. In another technique, for each prompt of a plurality of prompts: (1) a first language model generates a response based on the prompt; (2) multiple instructions are identified based on the prompt; (3) a second language model generates, based on the multiple instructions, an output that indicates whether the first language model followed each instruction; and (4) the prompt, the response, and the multiple instructions are stored in a training instance. The first language model is then fine-tuned based on the training instances.
Techniques are disclosed for storage and retrieval mechanisms for knowledge artifacts acquired and applicable across conversations to enrich user interactions with a digital assistant. In one aspect, a method includes receiving a natural language utterance from a user during a session between the user and the digital assistant and obtaining a topic context instance for the utterance. The obtaining includes executing a search, determining whether the utterance satisfies a threshold of similarity with one or more topics, identifying the topic context instance associated with the topics, and associating the utterance with the topic context instance. A first generative artificial intelligence model can then be used to generate a list of executable actions. An execution plan is then created, and the topic context instance is updated with the execution plan. The execution plan is then executed, and an output or a communication derived from the output is sent to the user.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
74.
AUTOMATED SOAP NOTE EVALUATION USING MACHINE LEARNING MODELS
Techniques are disclosed for automatically evaluating SOAP notes. A method comprises accessing a Subjective, Objective, Assessment and Plan (SOAP) note and a checklist that includes checklist facts; using a first machine-learning model prompt to extract SOAP note facts from the SOAP note; using one or more second machine-learning model prompts to generate feedback for the SOAP note, the feedback indicating whether individual checklist facts are supported by at least one of the SOAP note facts, and whether individual SOAP note facts are supported by at least one of the checklist facts; and generating a score for the SOAP note based on the feedback.
Techniques for providing cross-cluster transaction risk assessment are disclosed herein. In one embodiment, the system: obtains customer transaction data including a number of transaction details; clusters the customer transaction data into clusters of transactions; calculates a centroid for each cluster of transactions, corresponding to a mean value within the corresponding cluster; determines, for each transaction, a relationship score indicating the distance of the transaction from the centroid of its cluster; clusters transactions across multiple customers within a posting period to determine a centroid for each customer; calculates a risk score for each transaction by evaluating the transaction's relationship scores against the centroid of the corresponding cluster and the centroids of the other clusters; assigns a risk flag to transactions having risk scores exceeding one or more predefined risk thresholds; and presents one or more notifications of the transactions with risk flags to one or more client devices associated with users.
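The centroid-and-distance scoring can be illustrated with toy transaction vectors; the clustering itself is assumed to have already happened, and the feature choice and the 0.4 threshold are illustrative assumptions.

```python
import math

def centroid(points):
    """Mean value of a cluster, component-wise."""
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dim))

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical clusters of transaction feature vectors (amount, day).
clusters = {
    "small": [(10.0, 1.0), (12.0, 2.0)],
    "large": [(900.0, 1.0), (1100.0, 3.0)],
}
centroids = {name: centroid(points) for name, points in clusters.items()}

def risk_score(txn, own_cluster):
    """Relative closeness to the own centroid versus the nearest other
    centroid; scores approaching 0.5 suggest an out-of-pattern posting."""
    own = distance(txn, centroids[own_cluster])
    other = min(distance(txn, c) for name, c in centroids.items()
                if name != own_cluster)
    return own / (own + other) if (own + other) else 0.0

RISK_THRESHOLD = 0.4  # illustrative; real thresholds would be configured
flagged = [t for t in clusters["small"] + [(500.0, 2.0)]
           if risk_score(t, "small") > RISK_THRESHOLD]
```

Only the transaction that sits far from its own centroid relative to the other clusters is flagged.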
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. checking of credit lines or negative lists
G06Q 20/38 - Payment protocols; Payment architectures, schemes or protocols; details thereof
76.
CENTRALIZED REMOTE CONTROL OF CLIENT APPLICATION USER INTERFACE STATE AND NAVIGATION
Techniques are disclosed for assisting healthcare providers with common clinical tasks by way of a clinical software application that can be installed on and utilized from various client computing devices. The clinical software application(s) can enable a healthcare provider to record conversations with patients, dictate in natural language, generate patient notes, populate patient records, schedule tasks and generate task notifications, and perform numerous other clinical functions. A state of the application executing on the client computing devices can be centrally and remotely controlled by a cloud service provider platform. When a user is logged in to both a mobile client computing device and a desktop client computing device, a state of both applications can be concurrently controlled by the cloud service provider platform, and the applications can be linked and synchronized to provide the end user with a seamless experience when moving between the applications.
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the operation of medical equipment or devices for remote operation
G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the management or administration of healthcare resources or facilities, e.g. for the management of hospital staff or surgery rooms
77.
PERFORMANCE OPTIMIZATION IN RAFT-BASED ASYNCHRONOUS DATABASE TRANSACTION REPLICATION
Replication is improved in a globally distributed database, such as a replicated sharded database, which uses Raft-based asynchronous database replication. Improvements include Raft log persistence, coordination of followers' processing speed, transaction outcome determination, and column name compression, as well as improved failover time through heartbeat consolidation and through keeping the apply processes of followers running across failovers.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
78.
STREAM ORCHESTRATION FOR VARIABLE-LENGTH MESSAGE STREAMS
Techniques are disclosed for stream orchestration for variable-length message streams, including routes specified using an implementation-independent stream orchestration language (SOL). In an example method, a computing system receives a variable-length message, the variable-length message including context information and a payload. The computing system determines, from the context information, routing information that identifies at least one consumer of the variable-length message. The computing system outputs the variable-length message to the consumer.
G16H 80/00 - ICT specially adapted for facilitating communication between healthcare practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
79.
SESSION MANAGEMENT FOR VARIABLE-LENGTH MESSAGE STREAMS
Techniques are disclosed for session management for variable-length message streams. In an example method, a computing system establishes a first session by receiving, from a first computer system, registration information including a first session identifier and a specification of a channel; determining a stream orchestration instance for the channel; and joining the first computer system to the first session for the stream orchestration instance based on the first session identifier. The computing system receives, from a second computer system, a message including context information, the context information including the first session identifier. The computing system identifies the first session based on the first session identifier, the first session having one or more member computer systems. The computing system outputs the message to at least one of the one or more member computer systems of the first session.
H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL encoding
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, by routing a service request depending on the content or context of the request
80.
EXECUTING AN EXECUTION PLAN WITH A DIGITAL ASSISTANT AND USING LARGE LANGUAGE MODELS
Techniques are disclosed herein for executing an execution plan for a digital assistant with generative artificial intelligence (genAI). A first genAI model can generate a list of executable actions based on an utterance provided by a user. An execution plan can be generated to include the executable actions. The execution plan can be executed by performing an iterative process for each of the executable actions. The iterative process can include identifying an action type, invoking one or more states, and executing, by the one or more states, the executable action using an asset to obtain an output. A second prompt can be generated based on the output obtained from executing each of the executable actions. A second genAI model can generate a response to the utterance based on the second prompt.
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
A summary generation and summary selection system is disclosed that is capable of automatically evaluating multiple summaries generated for content and selecting a single summary that is deemed to be the “best” among the multiple generated summaries. The system includes capabilities to use multiple different selection techniques to select the best summary from multiple generated summaries. A first selection technique involves identifying entities and entity relationships from the content to be summarized and selecting a summary from multiple summaries generated for the content based on the entities and entity relationships identified in the content. A second selection technique involves determining a set of questions that are answered by each summary. The technique then selects a summary based upon the set of questions answered by each summary. The system then outputs the selected summary as the summary for the content.
A system is disclosed that provides the ability for a resource residing in one tenancy of a cloud service provider infrastructure (CSPI) to use the identity of a higher-level resource upon which the resource is built to access other resources residing in another tenancy of the CSPI. The system obtains a first identity associated with the first resource that is provisioned in a first tenancy of the CSPI and obtains a first token for the first resource. The system executes instructions to obtain a second identity associated with a second resource upon which the first resource is built. The second resource resides in a second tenancy of the CSPI. The system obtains a second identity associated with the second resource and obtains a second token for the first resource. The first resource uses the second token to access resources that reside in the second tenancy of the CSPI.
In accordance with an embodiment, described herein are systems and methods for providing a data analytics workbook assistant and integration with data analytics environments. A data analytics system or environment can be integrated with a provider operating as an implementation of a provider framework which provides natural language processing, for purposes of leveraging a user's text or speech input within a data analytics or data visualization project, for example while generating, modifying, or interacting with data visualizations. The method can, upon receiving the input, process, by the selected provider, a text input or a speech input of the input, to generate, modify, or interact with data analytics information or a visualization.
Techniques are disclosed for assisting healthcare providers with common clinical tasks by way of a clinical software application that can be installed on and utilized from various client computing devices. The clinical software application(s) can enable a healthcare provider to record conversations with patients, dictate in natural language, generate patient notes, populate patient records, schedule tasks and generate task notifications, and perform numerous other clinical functions. Applications executing on a mobile computing device and a desktop computing device and concurrently associated with a same user session with a cloud service provider platform, can be paired with one another so that the mobile client application and the desktop client application can operate in concert, under the control of the cloud service provider platform, to provide an end user with a single seamless experience when the end user switches between client devices while performing a task.
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the management or administration of healthcare resources or facilities, e.g. for the management of hospital staff or surgery rooms
H04L 67/142 - Managing session states for stateless protocols; Signalling session states; State transfer; Keeping-alive mechanisms
Herein is an accelerated interface between a database server and a storage area network (SAN). Persistent storage being managed for a database is spread across a number of storage buckets. Global distributed storage metadata is used only for tracking the location of storage buckets on different storage servers. With this approach, a very small amount of memory is needed at a global distributed level to maintain the map. Each storage bucket can have any number of mirrored replicas for further increasing speed and reliability. A database server contains a storage bucket map in memory, and uses the map to do database online transaction processing (OLTP) I/O and smart (i.e. offloaded) database operations on storage. This allows for direct I/O between database server and storage server with lower latency and without using slow and remote middleware such as a logical unit number (LUN) metadata server on a separate network element.
The techniques described herein provide a novel clinical digital assistant (CDA) processing pipeline enabling medical entity detection and resolution that works against various EHRs and with different ontologies (e.g., medical coding systems). In some embodiments, the processing pipeline may involve two machine-learning models that can perform named entity recognition on the natural language utterance to identify medical entities that are associated with different medical entity types, and link the medical entities to medical codes of standard medical coding systems. A FHIR-compliant data structure may be generated using the identified medical codes, their associated medical coding systems, the identified medical entities, and their associated medical entity types.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
G16H 80/00 - ICT specially adapted for facilitating communication between healthcare practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
87.
AUTOMATIC DETECTION OF DESERIALIZATION ATTACKS WITH MARKOV CHAINS
A method for detecting a deserialization attack may include identifying, in a byte stream, a class name corresponding to a class, generating, for the class, a feature vector, generating, by applying a benign deserialization model to the feature vector, a benign probability window, generating, by applying a malicious deserialization model to the feature vector, a malicious probability window, comparing the benign probability window and the malicious probability window to obtain a comparison result, and determining, based on the comparison result, that the class is malicious.
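A toy version of the two-model comparison, with character-level Markov chains trained on a few hypothetical class names; the probability-window comparison of the claim is simplified here to a point likelihood comparison, and real models would be trained on large corpora of benign and malicious serialized payloads.

```python
import math

def train_char_markov(class_names):
    """Fit a first-order Markov chain over the characters of class names."""
    counts, totals = {}, {}
    for name in class_names:
        for a, b in zip(name, name[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
            totals[a] = totals.get(a, 0) + 1
    return counts, totals

def avg_log_likelihood(model, name, smoothing=1e-3):
    """Average smoothed transition log-probability of a class name."""
    counts, totals = model
    ll = 0.0
    for a, b in zip(name, name[1:]):
        p = (counts.get((a, b), 0) + smoothing) / (totals.get(a, 0) + smoothing)
        ll += math.log(p)
    return ll / max(len(name) - 1, 1)

# Tiny illustrative training sets (hypothetical).
benign = train_char_markov(["java.util.HashMap", "java.lang.String"])
malicious = train_char_markov(["ysoserial.payloads.CommonsCollections1"])

def is_malicious(class_name):
    """Compare the two models' likelihoods for the observed class name."""
    return (avg_log_likelihood(malicious, class_name)
            > avg_log_likelihood(benign, class_name))
```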
Techniques are disclosed for returning references associated with an answer to a query. The techniques include accessing a text portion and identifying a plurality of sentences in the text portion. Each of the sentences is embedded to generate a respective plurality of text sentence embeddings. The text portion or a derivative thereof and a query are provided to a language model and a response to the query based on the text portion is received from the language model. A plurality of sentences are identified in the response. The plurality of sentences in the response is embedded to generate a plurality of response embeddings. The response embeddings are compared to the sentence embeddings to generate a similarity score for each sentence embedding-response embedding pair. Based on the similarity scores, an indication of a subset of the plurality of sentences is output with the response to the query.
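The embedding-and-similarity matching can be sketched with a bag-of-words stand-in for the sentence embeddings; the sentences and the encoder are illustrative assumptions.

```python
import math
from collections import Counter

def embed(sentence):
    """Toy bag-of-words 'embedding'; a real system would use a neural
    sentence encoder."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

source_sentences = [
    "The cache holds at most 128 entries",
    "Entries are evicted in LRU order",
    "The service listens on port 8080",
]
response_sentences = ["Eviction follows LRU order"]

# For each response sentence, surface the most similar source sentence
# as its supporting reference.
references = [
    max(source_sentences, key=lambda s: cosine(embed(s), embed(r)))
    for r in response_sentences
]
```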
Techniques for enhanced chatbot interaction using various large language model providers are provided. In one aspect, a method may include generating a request payload having a common request body specification based on an utterance, where the common request body specification is a standardized data input format used by a generative artificial intelligence (GenAI) interface for interacting with GenAI model providers. In various embodiments, the method may include converting the common request body specification into a custom request body specification having a data input format associated with a GenAI model provider selected from a plurality of GenAI model providers; communicating, by the GenAI interface, the request payload with the custom request body specification to the GenAI model provider for processing by a GenAI model; and receiving, at the GenAI interface from the GenAI model provider, a response payload associated with: (i) an error, (ii) processing the request payload, or (iii) both.
Rolling maintenance involves partitioning the compute nodes of a host platform into multiple maintenance domains (MDs), and patching those MDs in a rolling fashion. Techniques are described herein for establishing the VM-to-compute-node placement in an “MD-aware” manner. Specifically, the VM-to-compute-node placement takes into account the MD-to-compute-node mapping, supports constraints and goals related to achieving the required levels of availability during rolling maintenance, and for any given customer, avoids having maintenance events (and corresponding notifications) at excessive frequencies.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Techniques are described for Transaction Guard to impose at-most-once execution by generating and using the database's native transaction identifier, DB XID. In an implementation, DB XID is unique within a (pluggable) database instance (with local undo) and uniquely identifies a transaction in the database. The Transaction Guard that is extended to use native transaction information determines the commit outcome using the native transaction identifier of the transaction instead of relying on the persistence of the Logical Transaction Identifier (LTXID) in a separate table. Using the native transaction identifier, the Transaction Guard significantly improves performance by eliminating the extra write(s) incurred during commit operations.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Techniques are disclosed for automatically generating prompts. A method comprises accessing first prompts, wherein each of the first prompts is a prompt for generating a portion of a SOAP note using a machine-learning model. For each respective first prompt of the first prompts: (i) using the respective first prompt to obtain a first result from a first machine-learning model, (ii) using the respective first prompt and the first result to obtain a second result from a second machine-learning model, the second result including an assessment of the first result, (iii) using the second result to obtain a third result from a third machine-learning model, the third result including a second prompt, (iv) setting the second prompt as the respective first prompt, (v) repeating steps (i)-(iv) a number of times to obtain a production prompt, (vi) adding the production prompt to a collection of prompts; and storing the collection of prompts.
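The iterative steps (i)-(v) reduce to a small refinement loop once the three machine-learning models are abstracted as callables; the marker-appending toy callables below exist only to make the data flow visible and are not real models.

```python
def refine_prompt(prompt, generate, assess, rewrite, rounds):
    """Iterative prompt refinement: draft a note section with one model,
    assess the draft with a second, and rewrite the prompt with a third.
    All three models are passed in as callables."""
    for _ in range(rounds):
        first_result = generate(prompt)            # step (i)
        assessment = assess(prompt, first_result)  # step (ii)
        prompt = rewrite(assessment)               # steps (iii)-(iv)
    return prompt                                  # the production prompt

# Toy callables that just append markers, to show the data flow.
production_prompt = refine_prompt(
    "v0",
    generate=lambda p: p + "+gen",
    assess=lambda p, r: r + "+crit",
    rewrite=lambda a: a + "+rw",
    rounds=2,
)
```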
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
G10L 15/26 - Speech to text systems
Systems, methods, and other embodiments associated with clustering of time series signals based on frequency domain analysis are described. In one embodiment, an example method includes accessing time series signals to be separated into clusters. The example method also includes determining similarity in the frequency domain among the time series signals. The example method further includes extracting a cluster of similar time series signals from the time series signals based on the similarity in the frequency domain. And, the example method includes training a machine learning model to detect anomalies based on the cluster.
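Frequency-domain similarity can be illustrated with a direct DFT on synthetic sine signals; a phase-shifted copy of a signal is spectrally near-identical even though it differs sample-by-sample, so the two would be extracted into the same cluster. The signals and the distance measure are illustrative assumptions.

```python
import cmath
import math

def magnitude_spectrum(signal):
    """Magnitude of a direct DFT (numpy.fft.rfft would normally be used)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(signal))) for k in range(n)]

def spectral_distance(a, b):
    """Euclidean distance between magnitude spectra: small for signals
    that share frequency content, regardless of phase."""
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(magnitude_spectrum(a),
                                         magnitude_spectrum(b))))

n = 32
fast = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
fast_shifted = [math.sin(2 * math.pi * 4 * t / n + 1.0) for t in range(n)]
slow = [math.sin(2 * math.pi * 1 * t / n) for t in range(n)]

# `fast` and `fast_shifted` cluster together; `slow` does not.
```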
Techniques are disclosed for assisting healthcare providers with common clinical tasks by way of a clinical software application that can be installed on and utilized from various client computing devices. The clinical software application(s) can enable a healthcare provider to record conversations with patients, dictate in natural language, generate patient notes, populate patient records, and perform numerous other clinical functions. Task entries to schedule such tasks may be generated at the express direction of an end user, or one or more machine-learning models may be used to analyze text transcribed from spoken conversations, to identify one or more tasks from dialogue within the text, and to create corresponding task entries. Notification configuration entries may be created and associated with task entries, and may be used to trigger sending of notifications for scheduled tasks at appropriate times. An end user interaction with a notification may initiate a conversation with a digital assistant.
G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the management or administration of healthcare resources or facilities, e.g. for the management of hospital staff or surgery rooms
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/22 - Procedures used during the speech recognition process, e.g. man-machine dialogue
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
95.
DOCUMENT PROCESSING AND RETRIEVAL FOR KNOWLEDGE-BASED QUESTION ANSWERING
Techniques are disclosed herein for generating and using a knowledge base of information extracted from documents. The techniques include accessing a document comprising text and dividing the document into a plurality of chunks of text. The chunks are indexed by storing each chunk mapped to respective identifying metadata including a chunk index for each chunk. A query is received and a chunk relevant to the query is identified. A prompt is formulated including the query, the identified relevant chunk, and a subsequent chunk. The prompt is provided to a language model and output is received from the language model based on the prompt. An answer to the query is returned based on the received output.
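The chunk-indexing and neighbouring-chunk prompt formulation can be sketched as follows, with fixed word-count chunking and the field names as illustrative assumptions.

```python
def chunk_text(text, size=40):
    """Split a document into fixed-size chunks, storing each chunk with
    identifying metadata that includes its chunk index."""
    words = text.split()
    return [
        {"chunk_index": i, "text": " ".join(words[start:start + size])}
        for i, start in enumerate(range(0, len(words), size))
    ]

def build_prompt(query, chunks, hit):
    """Formulate a prompt from the retrieved chunk plus its subsequent
    chunk, since an answer often straddles a chunk boundary."""
    parts = [hit["text"]]
    nxt = hit["chunk_index"] + 1
    if nxt < len(chunks):
        parts.append(chunks[nxt]["text"])
    return "Context:\n" + "\n".join(parts) + "\n\nQuestion: " + query

# Hypothetical 100-word document split into 40-word chunks.
document = " ".join(f"word{i}" for i in range(100))
chunks = chunk_text(document)
prompt = build_prompt("what follows word39?", chunks, chunks[0])
```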
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
G06F 16/31 - Indexing; Data structures therefor; Storage structures
Techniques for maintaining state and context of conversations between a user and digital assistant using threads. In one aspect, a method includes receiving a natural language utterance from a user during a session, obtaining a topic context instance for the natural language utterance, and generating, by a GenAI model, a list comprising an executable action based on candidate actions associated with the topic context instance. The executable action is then executed to produce an output. The executing includes determining there is no thread running within the session that is associated with the topic context instance, the executable action, or both, and responsive to determining there is no thread running, creating a thread associated with the topic context instance, the executable action, or both, and executing, using the thread, the executable action to obtain the output. The output or a communication derived from the output is then sent to the user.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
Techniques are disclosed herein for selecting document chunks that are most relevant to a query. The techniques include receiving a query and comparing a plurality of stored text passages to the query using a first similarity metric. Based on the comparison, a subset of the plurality of stored text passages that are most similar to the query are selected. A plurality of sentences from the subset of the plurality of stored text passages are identified. The identified sentences are ranked based on the query and a second similarity metric. A subset of the sentences are selected based on the ranking. The subset of the sentences or a derivative thereof are output in response to the query.
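The two-stage selection can be sketched with Jaccard word overlap standing in for both similarity metrics (a real system would likely use two different learned metrics); the passages and query are hypothetical.

```python
def jaccard(a, b):
    """Toy similarity: word-set overlap. The abstract's two metrics could
    be, e.g., a passage-level embedding metric and a sentence-level one."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

passages = [
    "The planner chooses an index scan. Cost estimates come from table statistics.",
    "Backups run nightly. Archives are kept for thirty days.",
]
query = "which scan does the planner choose"

# Stage 1: coarse passage-level selection against the stored passages.
best_passage = max(passages, key=lambda p: jaccard(p, query))

# Stage 2: finer sentence-level ranking within the selected passage.
sentences = [s.strip() for s in best_passage.split(".") if s.strip()]
best_sentence = max(sentences, key=lambda s: jaccard(s, query))
```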
A database-aware storage server provides instant creation of snapshots without the need to create an intermediate test master database. During the snapshot creation time, the source database stays read-write and completes ongoing reads and writes. The database-aware storage server allows creation of layers of writable snapshots in a hierarchy. All these multiple databases share common data blocks. Any new writes performed by the database post snapshot are stored in blocks of sparse files. This promotes space sharing and reduces the total amount of space used by all these related databases. The allocations for the source and all new snapshot databases share the same common pool of storage. The newly created snapshot databases can access the data store directly without going through an intermediate layer.
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
G06F 16/174 - Redundancy elimination performed by the file system
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
99.
MULTI-TASK FINE-TUNING FOR PLANNING PERFORMED BY LARGE LANGUAGE MODEL
Techniques are disclosed for fine-tuning a pre-trained machine learning model to be used by a digital assistant for supporting a user's interactions. In one aspect, a method includes accessing a set of training examples, generating a set of synthesized training examples using an iterative process including accessing a dialog script and corresponding prompt template and response template for a predefined scenario, generating one or more prompts based on the dialog script and corresponding prompt template, generating one or more responses associated with each of the one or more prompts based on the dialog script and the response template, and linking each of the responses with the associated prompts to generate one or more synthesized training examples in the set of synthesized training examples. The pre-trained machine learning model is then fine-tuned using the set of training examples and the set of synthesized training examples.
Techniques for providing a transactionally-consistent Hierarchical Navigable Small Worlds (HNSW) index are described. In one technique, an HNSW index for a plurality of vectors is stored. In response to receiving a set of changes to the plurality of vectors, the set of changes is stored in a shared journal table instead of being applied to the HNSW index. In response to receiving a vector query that includes a query vector, a subset of the set of changes in the shared journal table is identified based on the query vector. Also, based on the query vector and the HNSW index, a subset of the plurality of vectors is identified. A result of the vector query is generated based on the subset of the set of changes and the subset of the plurality of vectors.
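The journal-merge answering step can be sketched as follows, with a brute-force scan standing in for the HNSW search and a plain dict standing in for the shared journal table; all keys and vectors are hypothetical.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Stand-in for the HNSW index: vector contents as of the last index build.
indexed = {"a": (0.0, 0.0), "b": (5.0, 5.0), "c": (9.0, 9.0)}

# Changes accumulate in a journal table instead of mutating the index.
journal = {"b": ("update", (9.5, 9.5)), "d": ("insert", (0.5, 0.5))}

def query(q, k=2):
    """Merge index candidates with journal rows so the top-k result
    reflects committed changes (brute force stands in for HNSW search)."""
    candidates = dict(indexed)
    for key, (op, vec) in journal.items():
        if op == "delete":
            candidates.pop(key, None)
        else:  # an insert or update supersedes the indexed version
            candidates[key] = vec
    return sorted(candidates, key=lambda key: dist(candidates[key], q))[:k]
```

A query near the origin sees the journaled insert `d`, and a query near (10, 10) sees the journaled update to `b` rather than its stale indexed position.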