Techniques disclosed herein relate generally to text classification and include techniques for fusing word embeddings with word scores for text classification. In one particular aspect, a method for text classification is provided that includes obtaining an embedding vector for a textual unit, based on a plurality of word embedding vectors and a plurality of word scores. The plurality of word embedding vectors includes a corresponding word embedding vector for each of a plurality of words of the textual unit, and the plurality of word scores includes a corresponding word score for each of the plurality of words of the textual unit. The method also includes passing the embedding vector for the textual unit through at least one feed-forward layer to obtain a final layer output, and performing a classification on the final layer output.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
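The fuse-then-classify flow in the first abstract can be sketched as follows — a minimal illustration under assumed details (score-weighted averaging as the fusion rule, a single ReLU feed-forward layer, softmax classification), not the disclosed implementation:

```python
import math

def fuse_embeddings(word_vectors, word_scores):
    """Combine per-word embedding vectors into one embedding vector for
    the textual unit, weighting each word's embedding by its word score."""
    dim = len(word_vectors[0])
    total = sum(word_scores)
    fused = [0.0] * dim
    for vec, score in zip(word_vectors, word_scores):
        for i in range(dim):
            fused[i] += score * vec[i]
    return [x / total for x in fused]

def feed_forward(vec, weights, bias):
    """One dense layer with ReLU activation (the 'final layer output')."""
    out = []
    for row, b in zip(weights, bias):
        z = sum(w * x for w, x in zip(row, vec)) + b
        out.append(max(0.0, z))
    return out

def classify(final_layer_output):
    """Softmax over the final layer output; return argmax class and probs."""
    m = max(final_layer_output)
    exps = [math.exp(z - m) for z in final_layer_output]
    s = sum(exps)
    probs = [e / s for e in exps]
    return probs.index(max(probs)), probs
```

For example, two 3-dimensional word embeddings with scores 3 and 1 fuse into their score-weighted average before the feed-forward and classification steps.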
2.
SYSTEMS AND METHODS FOR CLINICAL DATA EXCHANGE FROM ELECTRONIC HEALTH RECORD SYSTEMS TO PARTICIPANTS
The present disclosure relates to cloud-based centralized clinical data exchange (CDeX) techniques leveraging a unified interoperability interface for selectively and/or dynamically sharing medical records of subjects with other participants. In some aspects, techniques may be provided to facilitate, support or perform notifying participants or external entities (e.g., regulators, payers, insurance companies or other entities) in real-time or near real-time when a subject encounter occurs and/or throughout the encounter by transmitting admission, discharge, and transfer (ADT) messages. The unified interoperability interface may enable data sharing by establishing subject-specific and/or participant-specific communication channels that can be initiated by either party, provided that the participants are on-boarded and registered within the CDeX system.
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Techniques for receiving, storing, analyzing, and utilizing raw content are disclosed. The system receives raw text from a user, including a first tag and second tags. The system identifies and removes formatting attributes in the raw text, including whitespace characters, to generate a normalized input. The system stores the normalized input in target dataset(s) and parses the normalized input for the first tag and the second tags. In this case, the first tag corresponds to one or more topics, and the second tags represent a computing system associated with corresponding portions of the normalized input. The system analyzes the first tag and the second tags to identify the topics. The system generates one or more topic maps for the target dataset(s) based on the first tag and the one or more second tags. The topic map(s) include one or more references to content items within the target dataset(s).
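The normalize-then-parse flow described in this abstract might look like the following sketch; the `#topic`/`@system` tag syntax and the helper names are assumptions for illustration only:

```python
import re

def normalize(raw_text):
    """Collapse whitespace runs (spaces, tabs, newlines) into single
    spaces and trim the ends, yielding the normalized input."""
    return re.sub(r"\s+", " ", raw_text).strip()

def parse_tags(normalized):
    """Pull out the first tag (topic, written '#topic' here) and the
    second tags (owning system, written '@system' here)."""
    topics = re.findall(r"#(\w+)", normalized)
    systems = re.findall(r"@(\w+)", normalized)
    return topics, systems

def build_topic_map(dataset_id, normalized, topics, systems):
    """Map each identified topic to references back into the target dataset."""
    return {
        topic: {"dataset": dataset_id,
                "systems": systems,
                "reference": normalized[:40]}
        for topic in topics
    }
```

For example, the raw text `"Invoice  bug\n#billing owned by @crm"` normalizes to `"Invoice bug #billing owned by @crm"` and produces a topic map keyed by `billing`.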
Systems, methods, and machine-readable media may facilitate programmable data trimming. Metadata associated with a collective operation may be determined by an application of a server. The metadata may specify a job identifier corresponding to a unit of work to be completed in conjunction with the collective operation, a collective type of the collective operation, and/or an ordering mode for packets corresponding to the collective operation. The metadata associated with the collective operation may be sent by the application to a network interface card (NIC). The NIC may be caused by the application to transmit a data packet with the metadata embedded in a cookie of the data packet to a switch of a network fabric to cause the switch to use a selected network path and/or selected load-balancing for the collective operation based on one or more of the job identifier, the collective type, and/or the ordering mode.
Techniques are disclosed for supporting heterogeneous arrays. A method comprises determining, from metadata generated from sample data, a first discriminator and a second discriminator, wherein the first discriminator identifies an occurrence of a heterogeneous array included within received data that follows an open-standard data interchange format, and the second discriminator identifies one or more resource types included within the heterogeneous array; receiving data that follows the open-standard data interchange format; determining, based on an occurrence of the first discriminator within the data, a heterogeneous array; determining, based on an occurrence of the second discriminator identified within the heterogeneous array, a resource type identified within the data; determining one or more attributes associated with the resource type from the data; generating a normalized resource from the resource type and the one or more attributes that conforms with an integration model; and performing one or more actions using the normalized resource.
In one embodiment, a method includes receiving from a service executing in a service tenancy and by an event broker, a request to modify a rule to deliver a set of events from a first tenancy to the service tenancy. Modify may include at least one of create or update. The method also includes receiving from the service and by the event broker, a proxy token for substantiating the request. The proxy token represents an authority of a user principal of the first tenancy. The method further includes determining, by the event broker, whether modification of the rule is authorized based at least on the authority of the user principal, and subsequent to determining that the modification of the rule is authorized, delivering, by the event broker, the set of events from the first tenancy to the service tenancy according to the rule.
Techniques are disclosed for simplifying extensions. A method comprises receiving data that comprises a standard resource that conforms to a Fast Healthcare Interoperability Resources (FHIR) standard, wherein the standard resource includes one or more standard data elements for representing and exchanging healthcare information; determining that the standard resource further includes an extension that incorporates a non-standard data element into the standard resource, wherein the non-standard data element is defined by a structure definition that is external from the data; converting the standard resource into a normalized resource that conforms with an integration model, wherein the converting includes normalizing the non-standard data element based on a schema of the integration model to generate a normalized extension element that includes an attribute that extends the standard resource, and wherein the normalized resource comprises the one or more standard data elements and the normalized extension element; and providing the normalized resource.
G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
G16H 70/20 - ICT specially adapted for the handling or processing of medical references relating to practices or guidelines
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Techniques are disclosed for restricting access to a computing resource in a manner that does not block the performance of other operations in a multi-thread computing environment. A software gate receives a request from a thread for permission to access a computing resource. Responsive to receiving the request, the software gate determines that a dynamic permit limit currently prevents the request from being granted. The software gate returns a data structure indicating that the request is incomplete, adds the request to a queue of pending requests, and releases the thread. Once released, the thread is free to perform other operations while the request is pending. If the request subsequently becomes allowable, the software gate grants the request, removes the request from the queue, and updates the data structure to indicate the request is complete.
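A minimal sketch of such a non-blocking software gate, assuming a ticket dictionary as the returned data structure and a simple integer permit limit (both illustrative choices, not details from the disclosure):

```python
from collections import deque

class SoftwareGate:
    """Guards a computing resource with a dynamic permit limit without
    blocking requesting threads: a denied request gets an 'incomplete'
    ticket and is queued, and the thread is free to do other work."""

    def __init__(self, permit_limit):
        self.permit_limit = permit_limit
        self.in_use = 0
        self.pending = deque()  # queue of pending request tickets

    def request(self):
        """Ask for permission; returns a ticket whose 'complete' flag
        tells the caller whether access was granted immediately."""
        ticket = {"complete": False}
        if self.in_use < self.permit_limit:
            self.in_use += 1
            ticket["complete"] = True
        else:
            self.pending.append(ticket)  # thread is released, not blocked
        return ticket

    def release(self):
        """Return a permit and grant any now-allowable pending requests."""
        self.in_use -= 1
        self._drain()

    def set_permit_limit(self, limit):
        """Dynamically raise or lower the permit limit."""
        self.permit_limit = limit
        self._drain()

    def _drain(self):
        while self.pending and self.in_use < self.permit_limit:
            ticket = self.pending.popleft()  # remove from queue
            ticket["complete"] = True        # update the data structure
            self.in_use += 1
```

A caller holding an incomplete ticket can poll its `complete` flag between other operations rather than blocking on the gate.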
Techniques for a container orchestration system are disclosed. A system executes a virtual agent in a cloud network on a virtual node of a container orchestration system. The virtual node hosts multiple container instances within the cloud network. The system executes a first container instance within the virtual node and connects the first container instance to a first subnet. The system executes a second container instance within the same virtual node and connects the second container instance to a second subnet distinct from the first subnet. The system enables access to the first container instance through the first subnet and enables access to the second container instance via the second subnet. This architecture allows for flexible network configurations within a single virtual node, enhancing resource utilization and network segmentation capabilities in containerized environments.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Techniques are disclosed for restricting access to a computing resource in a manner that does not block the performance of other operations in a multi-thread computing environment. A software gate receives a request from a thread for permission to access a computing resource. Responsive to receiving the request, the software gate determines that a dynamic permit limit currently prevents the request from being granted. The software gate returns a data structure indicating that the request is incomplete, adds the request to a queue of pending requests, and releases the thread. Once released, the thread is free to perform other operations while the request is pending. If the request subsequently becomes allowable, the software gate grants the request, removes the request from the queue, and updates the data structure to indicate the request is complete.
A system processes clinical guidance to enhance patient care management. The system receives clinical guidance in digital text form from authoritative medical sources. A large language model analyzes the received clinical guidance to extract key information and relationships. The system generates a structured pathway based on the analyzed clinical guidance. The generated pathway represents a comprehensive summary for a specific disease state, organized into logical steps. These steps encompass treatment goals, management strategies, and measures for preventing complications. The system integrates the generated pathway information into an electronic health record (EHR) system. Within the EHR, the system produces a patient chart incorporating the derived pathway information.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 70/20 - ICT specially adapted for the handling or processing of medical references relating to practices or guidelines
12.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR NETWORK ANALYTICS DATA DIRECTOR (NADD)-ASSISTED DYNAMIC CONFIGURATION OF HYPERTEXT TRANSFER PROTOCOL (HTTP) PARAMETER SETTINGS AT NETWORK FUNCTIONS (NFs)
A method for NADD-assisted dynamic configuration of HTTP parameter settings at NFs includes receiving, at the NADD, SBI message feeds from a plurality of producer NFs. The method further includes determining, by the NADD and from at least one of the SBI message feeds, an HTTP parameter setting for one of the producer NFs. The method further includes communicating, by the NADD, the HTTP parameter setting to the producer NF. The method further includes receiving, by the producer NF and from the NADD, the HTTP parameter setting. The method further includes using, by the producer NF, the HTTP parameter setting to control traffic flow from a consumer NF to the producer NF.
Techniques are disclosed for using a proxy service to generate resource principals corresponding to a cross-realm request. A request to perform an operation in a target realm (TR) may be received by the proxy service of a host realm (HR). The request may comprise identity data that indicates an identifier of the requestor in one or more identity realms (e.g., in at least the TR). The proxy service of the HR may establish a trusted connection with a proxy service of the TR. The proxy service of the HR may transmit request data that indicates the identity of the requestor within the TR, causing the proxy service in the TR to generate a resource principal object corresponding to the identity of the requestor in the TR, whereby the resource principal object is used to execute (or to attempt execution of) the requested operation in the TR.
A system and method for enhancing electronic health record (EHR) systems through integration of proprietary and standardized medical terminologies. Embodiments extract proprietary terminology concepts from electronic sources and identify matching standardized medical terminology concepts. A knowledge graph is generated, incorporating both proprietary and standardized concepts along with their relationships. The system receives patient data, including problem lists, medication lists, and lab results. This data is analyzed against the knowledge graph to generate suggestions for the EHR system. These suggestions are then provided to the EHR system for display, enhancing clinical decision support and improving patient care.
G06N 5/025 - Extracting rules from data
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 20/10 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, steering therapy or monitoring patient compliance relating to drugs or medications, e.g. for ensuring correct administration to patients
G16H 70/40 - ICT specially adapted for the handling or processing of medical references relating to drugs, e.g. their side effects or intended usage
15.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR NETWORK ANALYTICS DATA DIRECTOR (NADD)-ASSISTED DYNAMIC CONFIGURATION OF HYPERTEXT TRANSFER PROTOCOL (HTTP) PARAMETER SETTINGS AT NETWORK FUNCTIONS (NFs)
A method for NADD-assisted dynamic configuration of HTTP parameter settings at NFs includes receiving, at the NADD, SBI message feeds from a plurality of producer NFs. The method further includes determining, by the NADD and from at least one of the SBI message feeds, an HTTP parameter setting for one of the producer NFs. The method further includes communicating, by the NADD, the HTTP parameter setting to the producer NF. The method further includes receiving, by the producer NF and from the NADD, the HTTP parameter setting. The method further includes using, by the producer NF, the HTTP parameter setting to control traffic flow from a consumer NF to the producer NF.
Embodiments are directed to operating a cloud-based product configurator. Embodiments store, as vectorized data in a vector database, information corresponding to a first product to be configured. While configuring the first product, embodiments receive a query regarding the first product. Embodiments augment the query in response to a context-based semantic search of the vector database using the query. Embodiments prompt a large language model ("LLM") using the augmented query and receive an LLM response. Embodiments then provide the LLM response in response to the query.
A method generates static defect checkers using a language model. The method includes generating an example representation. The method further includes combining an explanation section, an instruction section, and a description section to generate a prompt. The explanation section includes the example representation, the instruction section includes instruction text, and the description section includes defect description text. The defect description text includes a natural language description of a defect corresponding to the example representation. The explanation section includes operations corresponding to the defect. The instruction text includes instructions in the natural language to generate defect checker code using the example representation and the defect description text. The method further includes executing a language model using the prompt to generate the defect checker code. The defect checker code is in a programming language.
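The prompt-assembly step described above can be sketched directly; the `###` section headers and their exact ordering are assumptions, not part of the disclosure:

```python
def build_prompt(example_representation, instruction_text, defect_description):
    """Combine the explanation section (containing the example
    representation), the instruction section, and the description
    section (natural-language defect text) into one prompt string."""
    explanation = "### Example\n" + example_representation
    instruction = "### Instructions\n" + instruction_text
    description = "### Defect description\n" + defect_description
    return "\n\n".join([explanation, instruction, description])
```

The resulting string would then be passed to a language model to generate the defect checker code in the target programming language.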
Systems, methods, and computer-readable media are provided for generating an interactive node graph showing process states. A data management system accesses a data structure that includes a set of candidate process records representing a plurality of candidate process states, where each candidate process record represents a candidate process state and one or more sequentially connected candidate process states. The data management system generates an interactive node graph that includes nodes, each node representing a candidate process state. The interactive node graph includes edges representing candidate connections between candidate process states. The data management system causes display of the interactive node graph. The data management system receives input modifying settings for displaying the interactive node graph and causes display of an updated interactive node graph using the settings as modified.
Systems and methods and computer-readable media are provided for generating an interactive node graph showing aggregate node and/or edge metrics. The interactive node graph includes nodes and edges, each representing a process state or a connection between process states. The data management system marks each node graphically based on a first metric type that is based on an aggregation of occurrences of a corresponding process state and marks each edge graphically based on a second metric type that is based on an aggregation of occurrences of a transition between corresponding process states. The data management system causes display of the interactive node graph, receives input modifying a particular metric type of the first metric type or the second metric type, and causes display of an updated interactive node graph based at least in part on the particular metric type as modified.
Systems, methods, and computer-readable media are provided for animating an interactive node graph. A data management system generates an interactive node graph having nodes that represent process states and edges representing connections between process states. The data management system uses a first live metric for determining aggregated node values and a second live metric to use for determining aggregated edge values. The data management system causes display of the interactive node graph according to a selected data slice of a plurality of data slices. Based at least in part on a selection of an option to play the interactive node graph through the plurality of data slices, the data management system updates the display of the interactive node graph to show a different data slice of the plurality of data slices, and, after an amount of time, another different data slice of the plurality of data slices.
Techniques for enabling a service to perform operations corresponding to a subject tenancy on behalf of a governing tenancy are disclosed. The system receives a request from a service for a resource principal token. The request includes a resource principal, a service identifier for the service, and a link identifier that identifies a governance link. The governance link is associated with a governing tenancy, a subject tenancy, and a service. The system evaluates the governance link to determine if the governance link is active. After determining that the governance link is active, the system responds to the request from the service, providing a resource principal token. The resource principal token that is provided to the service forms a basis for authorizing the service to perform operations corresponding to the subject tenancy on behalf of the governing tenancy.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
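The active-link check that gates token issuance in the preceding abstract might be sketched as follows; the field names on the request and the governance link, and the token contents, are illustrative assumptions:

```python
def issue_resource_principal_token(request, governance_links):
    """Respond to a resource principal token request only when the named
    governance link is active; the returned token forms the basis for
    authorizing the service to act on the subject tenancy on behalf of
    the governing tenancy."""
    link = governance_links.get(request["link_id"])
    if link is None or not link["active"]:
        return None  # unknown or inactive link: no token is issued
    return {
        "resource_principal": request["resource_principal"],
        "service": request["service_id"],
        "governing_tenancy": link["governing"],
        "subject_tenancy": link["subject"],
    }
```

A caller presenting a request against a deactivated link simply gets no token, so no operation on the subject tenancy can be authorized.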
Systems, methods, and other embodiments associated with LLM-based generation of computing infrastructure are described. In one embodiment, an example method includes accessing human-language infrastructure requirements for compute infrastructure. The example method includes translating the infrastructure requirements into a physical infrastructure topology using one or more large language models. The example method includes converting the physical infrastructure topology into an executable deployment specification. Finally, the example method includes executing the deployment specification to automatically configure a target computer system to have the compute infrastructure described by the infrastructure requirements.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to the design of buildings, bridges, landscapes, production plants or roads
23.
DETECT REDUNDANT INITIALIZATION CHECKS USING STATIC ANALYSIS
A method detects redundant initialization checks using static analysis. The method includes inlining source code to generate inlined code. The inlined code includes instructions to materialize an element within a scope. The method further includes consolidating the inlined code to form consolidated code by moving the instructions to materialize the element to a point where the element escapes the scope. The method further includes running a points-to analysis on the consolidated code. The method further includes reducing the consolidated code to generate reduced code by removing an initialization check from the consolidated code.
Improved network traffic flow processing techniques are described. In a network device providing multiple processing planes, different processing resources can be allocated to effect efficient and rapid packet processing. This allocation of resources can be upset by receipt of a configuration update. When a configuration update is received, a previously programmed flow can be provisionally invalidated. To prevent the overwhelming of slow path resources, a provisionally invalid flow can continue to be processed according to previous programming by a fast path.
H04L 45/00 - Routing or path finding of packets in data switching networks
25.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING ACCESS TO COMMUNICATION NETWORK HEALTH INFORMATION USING COMMUNICATION-NETWORK-AWARE GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RETRIEVAL AUGMENTED GENERATION (RAG) MODEL AND NETWORK FUNCTION (NF)
A method for providing access to communication network health information using a communication-network-aware generative AI RAG model includes receiving, as a first input to the RAG model, a query for communication network health information and receiving, as a second input to the RAG model, at least one feed of communication network health information regarding at least one NF. The method further includes using the query to extract, from the communication network health information regarding the at least one network function, context information for the query for communication network health information, providing the query and the context information as inputs to a base LLM component of the RAG model, and generating, as output, a query response including an indication of the communication network health information requested by the query and in a natural language format.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet-switching networks, using machine learning or artificial intelligence
Systems, methods, and computer-readable media are provided for storing a snapshot of an interactive node graph. A data management system generates an interactive node graph having nodes that represent process states and edges representing connections between process states.
The data management system uses a first live metric for determining aggregated node values and a second live metric to use for determining aggregated edge values. Based at least in part on a selection of an option to save the interactive node graph, the data management system stores a snapshot. The snapshot includes the aggregated node values based on the first live metric as of a particular time, the aggregated edge values based on the second live metric as of the particular time, a first mapping between the first live metric and the nodes, and a second mapping between the second live metric and the edges. The stored snapshot is loadable without access to the first live metric and the second live metric to display the interactive node graph.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
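The snapshot described in the preceding abstract might be structured like this sketch; the dictionary layout and function names are assumptions:

```python
def take_snapshot(nodes, edges, node_metric, edge_metric, timestamp):
    """Freeze the aggregated node values, aggregated edge values, and the
    metric-to-element mappings as of a particular time, so the graph can
    be redrawn later without access to either live metric."""
    return {
        "time": timestamp,
        "node_values": {n: node_metric(n) for n in nodes},
        "edge_values": {e: edge_metric(e) for e in edges},
        "node_metric_map": {"nodes": list(nodes)},   # first mapping
        "edge_metric_map": {"edges": list(edges)},   # second mapping
    }

def render_from_snapshot(snapshot):
    """A display path that reads only the stored snapshot — no live
    metric access is required to show the interactive node graph."""
    return snapshot["node_values"], snapshot["edge_values"]
```

Once the snapshot is taken, the live metrics can disappear entirely and the graph still renders from the stored values.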
Various embodiments of the present technology generally relate to systems and methods for optimizing resources within a customer data platform (CDP). In certain embodiments, a method may comprise operating a resource optimizer system to implement a CDP resource optimization process to remove obsolete metadata, the CDP resource optimization process including monitoring metadata usage within a CDP, generating metrics for a metadata element based on the metadata usage, defining a rule set for selecting the obsolete metadata for removal based on the metrics, and applying the rule set to remove the obsolete metadata.
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
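The rule-set step of the CDP resource optimization process above could look like the following sketch; the thresholds and the metric field names are illustrative defaults, not values from the disclosure:

```python
def find_obsolete(usage_metrics, today, max_idle_days=90, min_hits=1):
    """Apply a rule set over per-element usage metrics: a metadata
    element is selected as obsolete when it has been idle longer than
    max_idle_days or was used fewer than min_hits times."""
    obsolete = []
    for element, stats in sorted(usage_metrics.items()):
        idle_days = today - stats["last_used_day"]
        if idle_days > max_idle_days or stats["hits"] < min_hits:
            obsolete.append(element)
    return obsolete
```

The returned list would then drive the removal step of the optimization process.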
A system for intelligent data flow control monitors an activity stream of an integration layer that processes events from an upstream system and transmits data to a downstream system. Upon detecting a recoverable error between the integration layer and the downstream system, the system instructs the upstream system to pause sending additional events. The system continues monitoring the activity stream to detect when the recoverable error is fixed. Once the error is resolved, the system instructs the upstream system to resume sending events to the integration layer. This approach prevents data loss during temporary system disruptions while maintaining efficient event processing flow.
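The pause/resume control loop above can be sketched with an assumed `pause()`/`resume()` upstream interface; the event shape is also an assumption for illustration:

```python
class FlowController:
    """Watches the integration layer's activity stream and signals the
    upstream system to pause on a recoverable error, then resume once
    the error is resolved, preventing data loss during the outage."""

    def __init__(self, upstream):
        self.upstream = upstream
        self.paused = False

    def observe(self, activity_event):
        error = activity_event.get("error")
        if error == "recoverable" and not self.paused:
            self.upstream.pause()    # stop new events during the outage
            self.paused = True
        elif error is None and self.paused:
            self.upstream.resume()   # downstream healthy again
            self.paused = False

class RecordingUpstream:
    """Stand-in upstream system that records the signals it receives."""
    def __init__(self):
        self.calls = []
    def pause(self):
        self.calls.append("pause")
    def resume(self):
        self.calls.append("resume")
```

Note that repeated recoverable-error events while already paused produce no duplicate pause signal.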
Techniques for a centralized vulnerability security scanning and distributed detection system are disclosed. Some techniques set forth a set of operations including receiving, in a first cloud environment of a cloud system, image scan results from a second cloud environment of the cloud system, receiving container identity data from a particular deployed container of a set of deployed containers in the first cloud environment, based on a comparison between the image scan results and the container identity data, determining that the particular deployed container of the set of deployed containers is running a vulnerable software product, and generating, for presentation on a graphic user interface (GUI), information associated with the vulnerable software product. The image scan results correspond to a vulnerability scan of a plurality of software products running in the set of deployed containers.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING ACCESS TO COMMUNICATION NETWORK HEALTH INFORMATION USING COMMUNICATION-NETWORK-AWARE GENERATIVE ARTIFICIAL INTELLIGENCE (AI) RETRIEVAL AUGMENTED GENERATION (RAG) MODEL AND NETWORK FUNCTION (NF)
A method for providing access to communication network health information using a communication-network-aware generative AI RAG model includes receiving, as a first input to the RAG model, a query for communication network health information and receiving, as a second input to the RAG model, at least one feed of communication network health information regarding at least one NF. The method further includes using the query to extract, from the communication network health information regarding the at least one network function, context information for the query for communication network health information, providing the query and the context information as inputs to a base LLM component of the RAG model, and generating, as output, a query response including an indication of the communication network health information requested by the query and in a natural language format.
Techniques are disclosed for detecting a drift experienced by computing system(s). The system generates multiple snapshots as part of a drift detection process. Each snapshot contains state information of a computing system. Based on the snapshots, the system generates metrics sets according to a general specification. The general specification defines metrics generally suitable for detecting drift in the computing system(s). Based on the circumstances of the drift detection process, the system generates a custom specification. The system optionally employs trained machine learning model(s) for custom specification generation. The custom specification defines modifications to the metric sets designed to make the metric sets more suitable for the circumstances of the drift detection process. The system modifies the metric sets according to the custom specification. Subsequently, the system generates flattened vectors based on the modified metric sets, and the system performs a cluster analysis on the flattened vectors to detect any drift.
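The flatten-and-cluster step of the drift detection process above might be approximated as below; distance-from-centroid outlier flagging stands in for the full cluster analysis, and the key order and threshold are assumptions:

```python
def flatten(metrics_set, keys):
    """Flatten one snapshot's metrics set into a vector using a fixed
    key order, filling missing metrics with 0.0."""
    return [float(metrics_set.get(k, 0.0)) for k in keys]

def detect_drift(flattened_vectors, threshold):
    """Flag snapshots whose flattened vectors lie far from the centroid
    of all snapshots — a simple stand-in for the cluster analysis."""
    dim = len(flattened_vectors[0])
    n = len(flattened_vectors)
    centroid = [sum(v[i] for v in flattened_vectors) / n for i in range(dim)]
    drifted = []
    for idx, vec in enumerate(flattened_vectors):
        dist = sum((a - b) ** 2 for a, b in zip(vec, centroid)) ** 0.5
        if dist > threshold:
            drifted.append(idx)  # this snapshot shows drift
    return drifted
```

Here three identical snapshots and one outlier would flag only the outlier's index as drifted.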
Various embodiments of the present technology generally relate to systems and methods for providing a query detection engine and its related functions. In an example, a method includes receiving, by a query detection engine, a plurality of queries and processing the queries to generate processed queries. For each of the processed queries, the query detection engine, generates an embedding and then groups the embeddings into clusters such that each cluster contains a subset of processed queries. The query detection engine then generates a cluster topic for each of the clusters. Once a new query is received, the query detection engine maps the new query to an appropriate cluster and generates a confidence score for the mapping of the new query to the appropriate cluster. Based on the confidence score, the query detection engine determines that the new query is an emergent query and generates an alert of the emergent query.
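The mapping and emergent-query decision in the abstract above could be sketched as follows, using cosine similarity over precomputed cluster centroids as the confidence score — both the similarity measure and the confidence threshold are assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def map_query(embedding, centroids, min_confidence=0.8):
    """Map a new query embedding to its nearest cluster centroid and
    flag it as emergent when the best similarity (used here as the
    confidence score) falls below min_confidence."""
    best, score = None, -1.0
    for name, centroid in centroids.items():
        s = cosine(embedding, centroid)
        if s > score:
            best, score = name, s
    return best, score, score < min_confidence
```

A query that sits between clusters maps to the nearest one but with low confidence, triggering the emergent-query alert path.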
Techniques for modifying a query based on a data index of nodes in a data set are disclosed. A system modifies queries based on query terms associated with indexed data. The system modifies queries to include query terms based on indexed data or to obtain values for query terms that are not associated with indexed node properties. The system adds query terms, that reference indexed data, to a query in response to determining that none of a query's terms reference indexed data. The system derives values for query terms that are not associated with indexed node properties using a logical or mathematical formula. The system traverses parent nodes of a child node to identify values for query terms that are not associated with the child node in a data index, but are inherited from a parent node.
Aspects of the disclosure include dynamic cloud workload reallocation based on an active ransomware attack. An example method includes receiving a first message that a computing instance is potentially infected by ransomware. The method further includes receiving a security state-based metric related to the computing instance based at least in part on the first message. The method further includes comparing the security state-based metric to a threshold metric. The method further includes determining a likelihood of a ransomware attack based at least in part on the comparison. The method further includes transmitting a second message to a job scheduler to reschedule workloads directed toward the computing instance based at least in part on the determination.
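The threshold comparison and scheduler notification above reduce to a short sketch; the metric scale, threshold, and message shape are all assumed for illustration:

```python
def evaluate_ransomware_signal(metric, threshold, scheduler_queue):
    """Compare a security state-based metric against a threshold; when an
    attack looks likely, send a second message to the job scheduler so
    workloads directed toward the suspect instance are rescheduled."""
    likely = metric >= threshold
    if likely:
        scheduler_queue.append({"action": "reschedule",
                                "reason": "ransomware_suspected"})
    return likely
```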
Disclosed is an improved approach to manage hierarchical metadata for a database system. A hierarchical metadata structure pertaining to a hierarchical object structure of a multitenant database architecture for a plurality of tenants may be maintained, where the multitenant database architecture comprises a container database (CDB) that includes a pluggable database (PDB). A request for access to a metadata object in the hierarchical metadata structure for one or more database objects in the container database may be received. In response to the request, access to at least a portion of the hierarchical metadata structure is provisioned.
Techniques for detecting stealing of principals in a cloud environment are disclosed. A request for a non-user principal to be used within a cloud environment is received. A log, which includes information associated with a receipt of the request for the non-user principal, is accessed. Based at least in part on the log, originating information of the request is determined. An anomaly associated with the originating information of the request is detected. In response to detecting the anomaly associated with the originating information of the request, information indicative of the detected anomaly associated with the originating information of the request is caused to be presented. In an example, the non-user principal is one of an instance principal, a resource principal, or a service principal to be assigned to a compute instance, a cloud resource, or a service, respectively, of the cloud environment.
A method of processing user queries includes receiving a request with natural language components. A logical dependency is determined between a first natural language component and a second natural language component, and a category is determined for each of the natural language components. Based on the determined categories, candidate natural language processing services are selected for processing each of the natural language components. The first natural language component is sent to a first natural language processing service, a response is received, and the response and the second natural language component are sent to a second natural language processing service. A response received from the second natural language processing service includes an option to trigger one or more actions in an application.
Described herein are systems and methods for use with a multidimensional database environment, for providing bottom-up multidimensional data analysis for complex formula types. Complex formula expressions with conditional branches executed over very sparse regions can result in large computational overheads when the iteration is performed in a top-down mode without knowledge of the target multidimensional cells that have real base data for evaluation. In accordance with an embodiment, the system employs a bottom-up approach that allows identification of those target cells having real data, and executes only those intersections for complex multidimensional expression evaluation. The bottom-up query path functionality can be activated in autonomous mode. The described approach minimizes the amount of redundant executions that have no base data, in some instances to zero.
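The bottom-up idea, that only intersections holding real base data are evaluated, can be illustrated with a toy sparse cube; the dict-of-coordinates representation and the formula are assumptions of the sketch:

```python
def bottom_up_eval(base_cells, formula):
    """Evaluate `formula` only at intersections that hold real base data,
    instead of iterating every cell of the (mostly empty) cube top-down."""
    return {coords: formula(value) for coords, value in base_cells.items()}

# A 4-quarter x 2-region cube has 8 intersections, but only 2 hold base
# data, so only 2 evaluations occur instead of 8.
sparse_cube = {("Q1", "East"): 100.0, ("Q3", "West"): 40.0}
```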
Described are improved systems, computer program products, and methods for implementing database VM placement. An approach is provided to implement an efficient distribution of VMs that maintains high availability and performance while efficiently reserving and utilizing the common backend resources.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
40.
METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR ADAPTIVE MEMORY MANAGEMENT FOR DATA INGESTION AND FLUSH
Described is an improved approach to implement memory management. A predicted ingest rate and a predicted flush rate is determined for a memory area of a global memory for a database. A memory management task is determined for the memory area based at least in part upon the predicted ingest rate and the predicted flush rate. A buffer accessibility map is modified based at least in part upon the memory management task; and the memory area is adaptively resized at least by executing the memory management task on the memory area.
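The rate-driven resizing decision above can be sketched as follows; the growth step and the two-rate comparison are assumptions, since the abstract does not fix how the predicted rates map to a task:

```python
def plan_memory_task(predicted_ingest, predicted_flush, current_size, step=0.25):
    """Pick a memory management task from the predicted ingest and flush
    rates, and return the adaptively resized memory-area size."""
    if predicted_ingest > predicted_flush:
        # Data arrives faster than it drains: grow the memory area.
        return "grow", current_size * (1 + step)
    if predicted_flush > predicted_ingest:
        # Flushing outpaces ingestion: the area can shrink.
        return "shrink", current_size * (1 - step)
    return "hold", current_size
```

Executing the returned task would then be accompanied by the buffer accessibility map update the abstract describes.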
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods and services
Network interface cards and data processing units for use in improving operational efficiency, security, and scalability of computer software architecture, computer hardware and software infrastructure, and network and storage virtualization. Cloud control computing services for use in improving operational efficiency, security, and scalability of computer software architecture, computer hardware and software infrastructure, and network and storage virtualization.
Various embodiments of the present technology generally relate to systems and methods for providing auto-upgrade functionality for cloud service customizations, a customization being custom programming code designed to alter functionality of a software product. In certain embodiments, a method may comprise retrieving an existing customization written in a first version of a programming language, evaluating the existing customization using a first analyzer to identify compatibility errors between the existing customization and a selected version of the programming language supported by the software product, evaluating the existing customization using a second analyzer configured to identify elements of the existing customization that violate proprietary constraints on the selected version of the programming language imposed by the software product, generating a large-language model (LLM) prompt based on the analyzer outputs and the existing customization, and writing an updated customization compatible with the selected version of the programming language based on the LLM prompt.
Disclosed is a method, a computer program product, and a computer system for remapping database sessions by first identifying a change in the cardinality of database instances. A plurality of co-located database sessions may be assigned to a database instance of the database based at least in part upon the change in the cardinality of the database instances. Multiple database instances of a plurality of database instances in the database that respectively have one or more co-located database sessions of the plurality of co-located database sessions may be identified based at least in part upon a co-location data structure. At least one co-located database session of at least one database instance of the multiple database instances may be terminated.
G06F 11/20 - Error detection or correction of data by redundancy in hardware using active fault-masking, e.g. by disconnecting failed elements or by switching in spare elements
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/28 - Databases characterised by their models, e.g. relational or object models
44.
DETECTING INTER-TENANCY EXFILTRATION IN A CLOUD ENVIRONMENT
Techniques for detecting inter-tenancy exfiltration of data in a cloud environment are disclosed. A log that includes information associated with receipt of a service message at a gateway within a cloud environment is accessed. Based on the log, (i) originating information of the service message (such as identification of an originating tenancy of the service message) and (ii) target information of the service message (such as identification of a target tenancy of the service message) are determined. The originating information and the target information are compared. A mismatch between the originating information of the service message and the target information of the service message is detected. In response to the detected mismatch between the originating information of the service message and the target information of the service message, information indicative of the detected mismatch is caused to be presented at a user interface.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/55 - Detecting local intrusion or implementing counter-measures
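The tenancy-mismatch check at the heart of the preceding abstract is simple to sketch; the log-entry field names are assumptions for illustration:

```python
def check_exfiltration(log_entry):
    """Compare the originating tenancy recorded in the gateway log with
    the target tenancy of the service message; a mismatch is surfaced
    for presentation at a user interface."""
    origin = log_entry["origin_tenancy"]
    target = log_entry["target_tenancy"]
    if origin != target:
        return {"mismatch": True,
                "detail": f"message from tenancy {origin!r} "
                          f"targeted tenancy {target!r}"}
    return {"mismatch": False}
```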
45.
NON-DISRUPTIVE FASTPATH FLOW INVALIDATION SCHEME TO ADDRESS NETWORKING CONFIGURATION CHANGES
Improved network traffic flow processing techniques are described. In a network device providing multiple processing planes, different processing resources can be allocated to effect efficient and rapid packet processing. This allocation of resources can be upset by receipt of a configuration update. When a configuration update is received, a previously programmed flow can be provisionally invalidated. To prevent overwhelming slow path resources, a provisionally invalid flow can continue to be processed according to previous programming by a fast path.
Techniques are provided for generation and iterative improvement of a global explanation of a machine learning (ML) model through refinement of a linguistic prompt. For each technical requirement, a respective reviewer large language model (LLM) may detect inaccuracies in a global explanation that characterizes the ML model. Based on the detected inaccuracies, a linguistic prompt that contains the global explanation is generated. From the linguistic prompt, corrective natural language (NL) that describes how the global explanation is inaccurate is inferentially generated by a critic LLM. In each iteration of a feedback loop, the corrective NL is the feedback from which an explainer LLM generatively infers a revised global explanation for the ML model, and this revised explanation is more accurate than the original global explanation.
Disclosed is an improved approach to implement more efficient upgrades and patches of nodes in a multi-tenant environment. Described is an improved approach to handle VM cluster maintenance through maintenance domains, which partition the maintenance interval and the pool of hardware nodes, where subsequent operations on these domains provide the required maintenance schedules satisfying any customer requirements.
A multi-cloud control plane of a source cloud environment receives, from a control plane of a target cloud environment, first information included in a first metadata instance and second information included in a second metadata instance. The first information represents a logical organization of resources, and the second information indicates one or more networking resources that are to be created in the source cloud environment. The multi-cloud control plane creates the logical organization of resources associated with the first metadata instance and the one or more networking resources associated with the second metadata instance in the source cloud environment. Responsive to receiving a request from a customer of the target cloud environment for accessing a service provided by the source cloud environment, one or more service-based resources are deployed based on at least one of the first metadata instance and the second metadata instance.
Disclosed is an improved approach to manage hierarchical metadata for a database system. A hierarchical metadata structure pertaining to a hierarchical object structure of a multitenant database architecture for a plurality of tenants may be maintained, where the multitenant database architecture comprises a container database (CDB) that includes a pluggable database (PDB). A request for access to a metadata object in the hierarchical metadata structure for one or more database objects in the container database may be received. In response to the request, access to at least a portion of the hierarchical metadata structure is provisioned.
Disclosed herein are techniques related to efficient database reads based on checkpoint avoidance. The techniques may include determining, by a database server instance in a multi-node database system, whether a database object is dirty. The multi-node database system may include a plurality of database server instances that share access to a database that stores the database object. The techniques may also include sending a request to one or more database server instances of the plurality of database server instances to perform a global checkpoint on the database object if the database object is dirty. The techniques may further include commencing an operation that requires reading the database object if the database object is not dirty.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
G06F 16/22 - Indexing; Data structures therefor; Storage structures
Techniques for filtering queries to a large language model (LLM) based on their relevance to an enterprise domain associated with the LLM involve training a machine learning model using historical LLM query data and associated relevance scores. These scores indicate how closely a query relates to the enterprise's operations. The trained model is then applied to new input queries, generating relevance scores for the input queries. Queries meeting a predetermined relevance threshold are passed to the LLM for processing. For queries falling below this threshold, remedial actions are taken instead of processing by the LLM. The techniques optimize computational resource allocation by prioritizing queries relevant to the enterprise while filtering out less pertinent ones. The techniques create a relevance-based gatekeeping mechanism for LLM query processing, enhancing efficiency and focusing the LLM's capabilities on enterprise-specific tasks.
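A sketch of the gatekeeping flow, with a toy keyword-overlap scorer standing in for the trained relevance model (the function names, domain terms, and threshold are all assumptions):

```python
def keyword_relevance(query, domain_terms=("invoice", "ledger", "payroll")):
    """Toy stand-in for the trained model: fraction of domain terms present."""
    q = query.lower()
    return sum(t in q for t in domain_terms) / len(domain_terms)

def gatekeep(query, relevance_model, threshold=0.5):
    """Score a query with the relevance model and either forward it to the
    LLM or route it to a remedial action instead."""
    score = relevance_model(query)
    if score >= threshold:
        return {"route": "llm", "score": score}
    return {"route": "remediate", "score": score}
```

In the disclosed techniques the scorer would be a model trained on historical LLM query data rather than a keyword overlap.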
Systems and methods for federating datasets hosted on separate servers are provided herein. An example data federation process includes receiving a federation request that contains a user-defined data domain distributed across two or more datasets hosted on separate servers. The federation request includes a request for first federation data and second federation data. The data federation process includes sending the federation request to a first server, which determines that it hosts the first federation data and determines call information associated with the first federation data. The first server then determines that a second server hosts the second federation data. The first server generates a model query including a procedure call for the first federation data and the second federation data. Upon fetching the first and second federation data based on the model query, the first server combines the first and second federation data together to generate a federated dataset.
Techniques are provided for customizing or fine-tuning a pre-trained version of a machine-learning model that includes multiple layers and is configured to process audio or textual language input. Each of the multiple layers is configured with a plurality of layer-specific pre-trained parameter values corresponding to a plurality of parameters, and each of the multiple layers is configured to implement multi-head attention. An incomplete subset of the multiple layers is identified for which corresponding layer-specific pre-trained parameter values are to be fine-tuned using a client data set. The machine-learning model is fine-tuned using the client data set to generate an updated version of the machine-learning model, where the layer-specific pre-trained parameter values configured for each layer of one or more of the multiple layers not included in the incomplete subset are frozen during the fine-tuning. Use of the updated version of the machine-learning model is facilitated.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
A computer program product, system, and computer implemented method for continuous database locking during database reconfiguration is provided herein. The present approach provides two different processing approaches to remaster locks that can execute in parallel. The first is an event-based lock state remastering process that executes a processing flow to ensure that all locks that need to be remastered are in fact remastered. The second is a request-based lock state remastering process that executes processing solely for the requested resource in order to quickly make the requested resource accessible. Additionally, each process is responsive to the other, in that the request-based lock state remastering process can continue from where the event-based lock state remastering process left the corresponding resource, and the event-based lock state remastering process avoids further processing for locks that are, or have been, processed using the request-based lock state remastering process.
G06F 11/14 - Error detection or correction of data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interrupts or of input/output operations
Embodiments described herein are generally directed to computer-based data analytics and the processing of enterprise data, including the generation and use of data models for determining inferred characteristics associated with candidates. In accordance with an embodiment, the system utilizes data-processing pipelines and machine learning models to process structured, semi-structured, and/or unstructured sets of data, received from various sources; generate a multi-dimensional ontology and a taxonomy associated with the characteristics of open positions or potential candidates; identify, based on the data models, one or more additional or inferred characteristics associated with the candidates; and present the output by way of an analytics dashboard, scorecard, or other data visualization.
The present disclosure relates to manufacturing training and testing data by leveraging data augmentation techniques to generate examples of long context database schemas. Aspects are directed to accessing a training dataset comprising training examples, where each training example includes (i) a prompt comprising a natural language utterance and a database schema having one or more tables, and (ii) a gold logical form corresponding to the natural language utterance; combining the tables from the database schemas in the training examples to generate a combined database schema set; generating a set of long context training examples based on the training dataset and the combined database schema set by incorporating a long context database schema into a selected training example; and training a generative artificial intelligence model with at least the set of long context training examples to generate a trained generative artificial intelligence model.
Techniques are disclosed for data streaming and aggregation with client-initiated recovery. In an example method, a client device provides, to a computing system, a first data message including a first sequence number, a first timestamp, and a first payload, the first sequence number indicating a first position within an audio data stream to which the first payload corresponds. The client device stores the first data message in a buffer of the client device, the buffer comprising one or more buffered data messages. The client device determines a first error condition for the first data message based on at least one of the one or more buffered data messages or a first acknowledgement status of the first data message. Responsive to determining the first error condition for the first data message, the client device re-provides, to the computing system, the first data message.
Various embodiments of the present technology generally relate to systems and methods for binding support function (BSF) capacity expansion. In some examples, a method may comprise operating a binding support function (BSF) of a mobile network, including determining whether an internet protocol (IP) address of a received message corresponds to a local IP address range, and when the IP address of the received message does not correspond to the local IP address range, accessing a first network function (NF) profile at a network repository function (NRF) to determine a second BSF corresponding to the IP address, and forwarding the received message to the second BSF.
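The routing decision above, local handling versus NRF-assisted forwarding, can be sketched with the standard-library `ipaddress` module; the NF-profile structure is an assumption for illustration:

```python
import ipaddress

def route_bsf_message(message_ip, local_range, nrf_profiles):
    """Handle a message at a BSF: serve it locally when the IP falls in
    the local range, otherwise consult NF profiles (as an NRF lookup
    would) to find the peer BSF responsible for that address."""
    if ipaddress.ip_address(message_ip) in ipaddress.ip_network(local_range):
        return "local"
    for profile in nrf_profiles:
        if ipaddress.ip_address(message_ip) in ipaddress.ip_network(profile["range"]):
            return profile["bsf"]  # forward to this peer BSF
    raise LookupError(f"no BSF registered for {message_ip}")
```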
Techniques for transforming code modules to different programming languages are disclosed. A system accesses a first non-code representation of a first code module expressed in a first programming language and parses the first non-code representation to identify a nested data element of the first non-code representation that represents a nested expression of the first code module. The system executes a transformation technique to transform the nested data element, in the first non-code representation, to a first non-nested data element in the first non-code representation. The system modifies the first non-code representation based on one or more attributes of a second programming language to generate a second non-code representation suitable for representing code modules in the second programming language. The system generates a second code module based at least on the second non-code representation.
A rack level cage and components thereof are disclosed herein. The rack level cage can be a physical security system. The physical security system can include a rack cage that can include at least one top opening. The system can also include a blocking plate secured to the rack cage to at least partially obstruct the top opening.
A rack level cage physical security system with magnetic sensor shield is described herein. The rack level can be a physical security system that can include a rack cage, a body defining an internal volume that can contain at least one server, a door coupled to the body and moveable between an open position and a closed position, and a magnetic securement system that can prevent an external magnetic field from affecting a magnetic switch. The internal volume of the body can be accessible via the door when the door is in the open position.
Techniques are described for dynamic cloud configuration changes based on a computing attack detection. An example method can include receiving an indication of a computing attack at a first processor, the first processor being at a first node of a network. The method can include transmitting control instructions to transition a workflow request from the first processor to a second processor at a second node of the network based at least in part on the indication. The method can include determining a transition of the first processor from a non-secure state to a secure state. The method can include determining whether the first processor is subject to a computing attack based at least in part on the transition of the first processor from the non-secure state to the secure state. The method can include transmitting a determination of whether the first processor is subject to the computing attack.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by adding security routines or objects to programs
63.
Access Control Systems And Methods For Logical Secure Elements Running On The Same Secure Hardware
Techniques are described herein for applying access controls to logical secure elements (LSEs) running on the same secure element hardware platform. Embodiments include a firmware component that determines whether a message targeting an LSE is authorized to trigger an operation. For example, the firmware component may verify a signature of the received message using a public key, shared secret, or other access control key. Additionally or alternatively, access control policies may be defined to constrain the load of the LSEs on the SE platform hardware and/or to prioritize LSE access. For example, the access control policies may define usage thresholds, such as maximum threshold memory and/or processor utilization rates. As another example, the access controls may restrict the active time for an LSE to a threshold duration. If access constraints are violated or the message cannot be verified, then the firmware component may delay or deny the operation.
Techniques for selectively aggregating records based on a downstream function to be applied to the records are disclosed. A system obtains an instruction corresponding to a set of records and a function to be applied to the set of records. The system determines whether the function meets particular criteria for aggregating records prior to transmitting the records to an application for executing the function on the records. If the system determines that the function does meet the records-aggregation criteria, the system stores a set of records in a buffer prior to sending the set of records to the function-executing application. The system sends the set of records to the application together as a group with an instruction to generate a set of function results that includes a separate value for each record in the set of records.
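The buffer-then-group decision can be sketched as below; which functions meet the aggregation criteria, and the call-counting harness, are assumptions for illustration:

```python
# Assumption: the set of functions that benefit from batched dispatch.
BATCH_FRIENDLY = {"vector_score", "classify"}

def dispatch(records, func_name, apply_fn, calls):
    """Send records to `apply_fn` as one buffered group when the function
    meets the aggregation criteria, otherwise one call per record; either
    way, one result per record comes back."""
    if func_name in BATCH_FRIENDLY:
        calls.append(len(records))   # one grouped call for all records
        return apply_fn(list(records))
    results = []
    for r in records:
        calls.append(1)              # one call per record
        results.extend(apply_fn([r]))
    return results
```

The `calls` list makes the saving visible: one downstream call instead of one per record.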
The present disclosure relates to automatic assignment of new entities in an existing hierarchical structure by leveraging retrieval-augmented generation (RAG), which improves the performance of a generative artificial intelligence (AI) model by generating a suggested hierarchical structure. For an automatic assignment, a user input may be received that may include a hierarchical structure and a query for adding one or more new entities to the existing hierarchical structure. For each new embedding vector associated with the query for adding a new entity, the retrieval-augmented generation may generate a context from the hierarchical structure based on a relevance to the query, and an additional context from a knowledge base. The retrieval-augmented generation may input a human-readable prompt, combining the query with the context, the additional context, and a system prompt, to the generative AI model, such as a large language model, for generating the suggested hierarchical structure.
Embodiments described herein are generally directed to computer-based data analytics and the processing of enterprise data, including the generation and use of data models for determining inferred characteristics associated with candidates. In accordance with an embodiment, the system utilizes data-processing pipelines and machine learning models to process structured, semi-structured, and/or unstructured sets of data, received from various sources; generate a multi-dimensional ontology and a taxonomy associated with the characteristics of open positions or potential candidates; identify, based on the data models, one or more additional or inferred characteristics associated with the candidates; and present the output by way of an analytics dashboard, scorecard, or other data visualization.
Various embodiments of the present technology generally relate to systems and methods for providing an inter-domain engine for masking communications exchanged between a visitor network and a home network. In an aspect, the inter-domain engine may be part of a home network and determine a service request from a visitor consumer NF. Based on the service request, the inter-domain engine may determine a service response containing NF topology information for furnishing the service request within the home network. Responsive to determining the service response, the inter-domain engine may generate a mask NF profile based on the service response and generate a mask service response based on the mask NF profile and the service request. Once generated, the inter-domain engine may provide the mask service response to the visitor network.
Techniques discussed herein relate to generating and utilizing snapshots (also referred to as “service images”) of a cloud-based service. A snapshot may be generated within a source environment (e.g., one compartment and/or region) and re-instantiated in a target environment (e.g., a different compartment and/or region, the same compartment/region as would be the case in a recovery scenario). The snapshot may include serialized data of any suitable combination of resource metadata, images, block/boot volume content, runtime state data, environmental variables, and the like of the service of the source environment, at a time at which the snapshot was generated. The snapshot may be deserialized in the target environment and used to perform infrastructure and/or artifact/software releases to bring the control plane and/or data plane resources of the target environment to a desired state corresponding to the state of the service in the source environment when the snapshot was generated.
Techniques for hard negative mining for ranking models are provided. In one technique, an input document that is associated with a query is received and input to an embedding model, which outputs a document embedding (DE). Based on the document embedding, multiple embeddings are identified. Clusters of embeddings are generated from the multiple embeddings. A cluster that includes the DE is identified. Based on the DE, two sets of embeddings are identified in the cluster. For each embedding in a first set of embeddings: (1) a particular embedding (PE) is selected from the second set of embeddings based on a similarity score between the embedding and the PE; (2) a first document that is associated with the PE is identified; and (3) a training instance that includes the first document is generated and added to training data. A model is trained based on the training data.
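The selection step, keeping candidates most similar to the query document that are not labeled relevant, can be sketched as follows; cosine similarity and the flat candidate list are assumptions of the sketch:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5)
    return dot / norm if norm else 0.0

def mine_hard_negatives(doc_emb, candidates, positive_ids, k=2):
    """Within a cluster of embeddings near the query document, keep the
    candidates most similar to it that are NOT labeled relevant: similar
    but irrelevant documents make the hardest negatives for training."""
    scored = sorted(((cosine(doc_emb, emb), idx)
                     for idx, emb in enumerate(candidates)
                     if idx not in positive_ids),
                    reverse=True)
    return [idx for _, idx in scored[:k]]
```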
Techniques for generating and managing sparse vector representations in a database system are provided. In one technique, an embedding that was generated by an embedding model is accessed. Based on one or more characteristics associated with the embedding, a particular storage format is selected from among multiple storage formats in which to store the embedding. A sparse vector representation is generated based on the embedding and the particular storage format. The sparse vector representation is stored. The sparse vector representation may be stored in the same VECTOR type column that stores sparse vector representations that are in different storage formats and/or dense vector representations.
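The format selection described above can be sketched as follows; the sparsity threshold and the format names are illustrative assumptions, not part of the disclosure:

```python
def choose_format(embedding, sparsity_threshold=0.5):
    """Pick a storage format from the embedding's characteristics:
    mostly-zero vectors go to an (index, value) sparse layout, the
    rest stay dense."""
    n = len(embedding)
    nonzero = [(i, v) for i, v in enumerate(embedding) if v != 0.0]
    if len(nonzero) / n <= sparsity_threshold:
        return ("SPARSE_COO", n, nonzero)   # dimension + nonzero pairs
    return ("DENSE", n, list(embedding))

fmt, dim, payload = choose_format([0.0, 0.0, 3.5, 0.0])
print(fmt, dim, payload)   # SPARSE_COO 4 [(2, 3.5)]
```

A VECTOR-type column as described could then hold rows in either returned layout side by side, with the format tag recorded per row.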
Techniques are disclosed for capturing information surrounding a user's interactions with an application for enriching application context to be used by a digital assistant for supporting the user's interactions. In one aspect, a method includes detecting an event from a client device, where the event is associated with a user's interaction with an application on the client device. In response to detecting the event, application context is obtained for the application. The obtaining includes accessing the application context from a data store based on an identifier. A first generative artificial intelligence model can then be used to generate a list having an executable action based on the event and the application context. An execution plan is then created and executed, which includes executing the executable action using an asset to obtain an output. The output or a communication derived from the output is sent to the client device.
Techniques for creating a custom endpoint for a cloud application instance are disclosed. In some embodiments, a system receives a user request to enable a custom endpoint to be used to access a cloud application instance. In response to the user request, the system validates the custom endpoint in a Domain Name System (DNS) zone of a customer tenancy of a cloud platform in which the cloud application instance is hosted, obtains a security token of the cloud application instance from the customer tenancy, creates a DNS record in the DNS zone using the security token, obtains a digital certificate for the custom endpoint using the DNS record, and creates an association between the digital certificate and the custom endpoint on the cloud platform, wherein the association between the digital certificate and the custom endpoint enables access to the cloud application instance via the custom endpoint.
Techniques for synchronizing the topic of messages in a chat interface with content in an information pane include concurrently displaying a chat interface and an information pane in a GUI and monitoring the content currently displayed within the information pane to determine a set of one or more topics. The system identifies and displays a subset of stored chat messages associated with the set of topics concurrently with the content. When the system detects that the content in the information pane is changed to different content, the system determines a second set of one or more topics corresponding to the changed content and identifies a second subset of the stored chat messages based on the second set of topics. The second subset is displayed in the chat interface concurrently with the changed content.
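A minimal sketch of the message-filtering step, assuming each stored message carries topic tags and the pane's topics are already detected (the tag-overlap criterion is an illustrative choice):

```python
def messages_for_pane(stored_messages, pane_topics):
    """Return the subset of stored chat messages whose topic tags
    overlap the topics detected in the currently displayed content."""
    pane_topics = set(pane_topics)
    return [m for m in stored_messages if pane_topics & set(m["topics"])]

messages = [
    {"text": "Q3 numbers look low", "topics": {"revenue"}},
    {"text": "New hire starts Monday", "topics": {"staffing"}},
]
# Pane switches from a revenue report to a staffing page:
print(messages_for_pane(messages, {"revenue"}))   # first message only
print(messages_for_pane(messages, {"staffing"}))  # second message only
```

Re-running the filter whenever the pane content changes yields the "second subset" behavior the abstract describes.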
H04L 51/04 - Messagerie en temps réel ou quasi en temps réel, p. ex. messagerie instantanée [IM]
H04L 51/02 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p. ex. courriel en utilisant des réactions automatiques ou la délégation par l’utilisateur, p. ex. des réponses automatiques ou des messages générés par un agent conversationnel
H04L 51/216 - Gestion de l'historique des conversations, p. ex. regroupement de messages dans des sessions ou des fils de conversation
74.
Dashboard Interface For Initiating Actions Determined As A Function Of A Message
Techniques for generating and executing candidate actions from a message include detecting a message and determining a particular set of message attributes corresponding to the message.
One or more target states are computed based on the particular set of message attributes, and a set of one or more candidate actions is determined for actions that are configured to produce the one or more target states. The candidate actions are concurrently displayed with the message in a messaging interface of a dashboard, where the dashboard is a component of a GUI presented by an application. Responsive to receiving a selection of a first candidate action of the set of candidate actions, the system initiates execution of the first candidate action by the application.
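The mapping from target states to candidate actions can be sketched as a lookup over action configurations; the action names, their configured effects, and the overlap test are all illustrative assumptions:

```python
# Hypothetical action registry: each action declares the states it produces.
ACTIONS = {
    "reply_with_eta": {"produces": {"customer_informed"}},
    "escalate_ticket": {"produces": {"ticket_escalated"}},
}

def candidate_actions(target_states):
    """Actions whose configured effects cover at least one target state."""
    return [name for name, cfg in ACTIONS.items()
            if cfg["produces"] & set(target_states)]

print(candidate_actions({"customer_informed"}))  # ['reply_with_eta']
```

The returned names would then be rendered alongside the message in the dashboard, with selection triggering execution.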
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
H04L 51/21 - Surveillance ou traitement des messages
75.
Generative Model For Creating And Presenting Medical Orders
Techniques for using machine learning models to create and present medical orders for patients are disclosed. These techniques facilitate the identification, selection, and fulfillment of an order, e.g., prescription or treatment, in response to updates to patient data for the patient, e.g., reporting of test results, receipt of messages or referrals, and addition of discussions. The system monitors, in real time, updates to the patient data. The patient data may be part of an EHR. When the system determines that content of an update satisfies a trigger for generating an order, the system applies a machine learning model to the patient data to determine an order corresponding to the patient data. The machine learning model generates the order for the patient and presents the order to medical professionals for review.
G06Q 10/087 - Gestion d’inventaires ou de stocks, p. ex. exécution des commandes, approvisionnement ou régularisation par rapport aux commandes
G16H 10/60 - TIC spécialement adaptées au maniement ou au traitement des données médicales ou de soins de santé relatives aux patients pour des données spécifiques de patients, p. ex. pour des dossiers électroniques de patients
H04L 51/02 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p. ex. courriel en utilisant des réactions automatiques ou la délégation par l’utilisateur, p. ex. des réponses automatiques ou des messages générés par un agent conversationnel
76.
Heterogeneous Content Management Engine And Related Systems And Methods
Techniques for heterogeneous content management are disclosed herein. Messages and other content items are managed by a heterogeneous content management engine. Content items and/or extracted features of content items are clustered by topic, source, and/or timestamp and arranged in navigable content item histories for clusters. Assets related to content items are generated and presented in accordance with the content items. Interaction with a content item of the history results in generation of a content item history or detailed view for the content item. When a user interacts with assets, interfaces for corresponding detailed views, item histories, or work items related to the asset are provided to the user.
H04L 51/216 - Gestion de l'historique des conversations, p. ex. regroupement de messages dans des sessions ou des fils de conversation
G06F 16/383 - Recherche caractérisée par l’utilisation de métadonnées, p. ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement utilisant des métadonnées provenant automatiquement du contenu
H04L 51/56 - Messagerie unifiée, p. ex. interactions entre courriel, messagerie instantanée ou messagerie IP convergente [CPM]
77.
Continually Evolving Subjects Using Machine Learning
Techniques for generating a subject that accurately describes the current content of an electronic conversation are disclosed. A system receives an electronic message associated with a customer and stores the message in association with an electronic conversation. Based on customer rules, context information, and/or conversation content, the system generates a prompt for a machine learning model trained to generate subject lines and/or tags appropriate for the particular customer. The system submits the prompt to the machine learning model to obtain a subject line and/or subject tags that accurately describe(s) the current content of the conversation within a timeframe.
Techniques for generating personalized satisfaction surveys for particular customers are disclosed. A system tracks interactions and events involved in a service request received from a customer. The tracking includes logging interactions between the customer, customer service agents, and service teams. Using the logged information, the system engineers prompts for a large language model to generate a satisfaction survey that includes a survey question tailored to the customer's particular service request. After the system receives a response to the survey, the system submits the content to a machine learning model trained to determine a satisfaction score for the survey.
Techniques for fine-tuning a machine-learned model for reliable retrieval augmented generation are provided. In one technique, a question for a large language model (LLM) is identified. A context data item that is in an incorrect context relative to the question is also identified. The question and the context data item are input into the LLM, resulting in the LLM generating a response. A training instance that comprises the question, the context data item, a deny response as a correct answer, and the response as a rejected answer is generated. A machine-learned model (e.g., the LLM) is fine-tuned based on the training instance.
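The training instance described above maps naturally onto a preference-style (chosen/rejected) record; the deny wording, field names, and example strings below are illustrative assumptions, not text from the disclosure:

```python
DENY = "I don't have enough relevant context to answer that."

def build_training_instance(question, wrong_context, model_response):
    """Preference-style instance: the deny response is the chosen
    (correct) answer, and the model's answer generated over the
    incorrect context is the rejected answer."""
    return {
        "prompt": f"Context: {wrong_context}\nQuestion: {question}",
        "chosen": DENY,
        "rejected": model_response,
    }

inst = build_training_instance(
    "What is the refund window?",
    "Shipping times are 3-5 business days.",   # unrelated context
    "The refund window is 3-5 days.",          # answer hallucinated from it
)
print(inst["chosen"] == DENY)  # True
```

Fine-tuning on such instances teaches the model to decline rather than answer from irrelevant retrieved context.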
The present disclosure relates to machine learning techniques for In-Context-Learning (ICL) with pattern-based retrieval for the task of converting Natural Language (NL) to Structured Query Language (SQL). Aspects are directed towards acquiring a natural language utterance and a database schema, searching, using at least a portion of the natural language utterance as a key, a memory bank for one or more in-context examples that are relevant to the key, generating a prompt comprising the natural language query, the database schema, and the one or more in-context examples, transmitting the prompt to a first pretrained generative artificial intelligence model, receiving, from the first pretrained generative artificial intelligence model, a logical form corresponding to the natural language utterance based at least in part on the prompt, executing the logical form on a database to obtain a query result, and providing the query result to a user.
Techniques persist and restore in-memory neighbor graph vector indexes that include an index of vertex identifiers between layers of a plurality of layers for a graph-based approximate nearest neighbor search in a vector database. The plurality of layers include a higher layer and a lower layer that includes more vertices than the higher layer. A checkpoint is generated based on the neighbor graph vector index. The checkpoint can include a plurality of unit entries. Each unit entry can include vertex data that identifies vertices in respective subsets of a plurality of subsets of vertices in a lower layer of the neighbor graph vector index.
Techniques persist and restore in-memory neighbor graph vector indexes that include a vertex identifier to vector mapping and include a neighbor graph of vector neighbor vertices. At least one neighbor graph vector index checkpoint factor can be identified. A determination can be made as to whether to generate a full neighbor graph vector index checkpoint or an incremental neighbor graph vector index checkpoint based on the checkpoint factor.
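The full-versus-incremental decision can be sketched as a simple policy over checkpoint factors; the specific factors (fraction of the index changed, age of the last full checkpoint) and the thresholds are illustrative assumptions, since the abstract leaves the factor open:

```python
def choose_checkpoint(change_fraction, since_last_full_s, max_age_s=3600.0):
    """Pick a full checkpoint when too much of the index has changed or
    the last full checkpoint is too old; otherwise checkpoint only the
    delta."""
    if change_fraction >= 0.3 or since_last_full_s >= max_age_s:
        return "full"
    return "incremental"

print(choose_checkpoint(0.05, 600.0))  # incremental
print(choose_checkpoint(0.50, 600.0))  # full
```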
Techniques for generating actionable links to Artificial Intelligence (AI)-generated content are disclosed. A system generates a prompt to a generative AI model to generate content, including a recommended action based on a set of analyzed data. The prompt further includes instructions to identify a resource used to generate the recommended action. The generative AI model identifies resources used to generate the recommended action based on a set of resources included in the prompt or based on fine-tuning the generative AI model with a dataset that includes system tools available in a system. A system analyzes content output from the generative AI model to identify a recommended action. The system matches the functionality of a resource used to generate the recommendation with the recommended action. The system generates software code to link the AI-generated content to the resource with functionality to perform the recommended action.
Techniques for data intake that prevent corruption of data repositories with faulty data are disclosed. A data load may include individual values that are erroneous and individual values that are non-erroneous. A system uses a machine learning (ML) model trained to classify the data load, as a whole, as erroneous or non-erroneous. In a data intake process, the system applies the ML model to the data load. In response to determining that the data load is erroneous, the system prevents the storage of the data load within a target data repository.
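A minimal sketch of the intake gate, with a rule-based stand-in for the trained classifier (the range check, threshold, and function names are illustrative; a real system would apply a fitted ML model to the whole load):

```python
def erroneous_load_score(data_load):
    """Stand-in for the trained classifier: fraction of values falling
    outside an assumed valid range."""
    bad = sum(1 for v in data_load if not (0 <= v <= 100))
    return bad / len(data_load)

def intake(data_load, repository, threshold=0.2):
    # Classify the load as a whole; store only non-erroneous loads.
    if erroneous_load_score(data_load) > threshold:
        return False          # load rejected, repository untouched
    repository.extend(data_load)
    return True

repo = []
print(intake([10, 20, 999, -5, 30], repo))  # False: 2/5 values out of range
print(intake([10, 20, 30], repo))           # True: load accepted
```

The key point matching the abstract: rejection happens at load granularity, so a partially faulty load never reaches the target repository.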
A rack level cage physical security system with magnetic sensor shield is described herein. The rack level cage can be a physical security system that can include a rack cage, a body defining an internal volume that can contain at least one server, a door coupled to the body and moveable between an open position and a closed position, and a magnetic securement system that can prevent an external magnetic field from affecting a magnetic switch. The internal volume of the body can be accessible via the door when the door is in the open position.
A rack level cage and components thereof are disclosed herein. The rack level cage can be a physical security system. The physical security system can include a rack cage that can include at least one top opening. The system can also include a blocking plate secured to the rack cage to at least partially obstruct the top opening.
The present disclosure relates to techniques for generating a ranked list of a set of subjects by predicting their potential health benefit from an intervention to prioritize subjects that may be at a risk of a negative outcome and likely to benefit from a proposed intervention. Additionally, the ranking may further account for potential cost-savings associated with early intervention to avoid acute-care utilization by applying a cost-modeling technique. The disclosed techniques may include analyzing subject-specific data, including demographic, clinical, and historical information, to compute a total net-benefit score by combining a predicted benefit probability with cost and revenue metrics. The benefit probability may be calculated using causal inference models to estimate a potential improvement in health outcomes from the proposed intervention or treatment. The disclosed techniques may further facilitate personalized subject care by dynamically updating rankings based on real-time data, enhancing clinical decision-making, and optimizing resource allocation.
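The scoring and ranking can be sketched as below; the linear combination of benefit probability with cost and revenue metrics is one plausible reading, since the abstract leaves the exact cost model open, and all names and figures are illustrative:

```python
def total_net_benefit(benefit_probability, avoided_cost, intervention_cost,
                      expected_revenue=0.0):
    """Combine a causal benefit probability with cost/revenue metrics
    into a single net-benefit score."""
    return benefit_probability * (avoided_cost + expected_revenue) - intervention_cost

def rank_subjects(subjects):
    # subjects: list of (subject_id, benefit_prob, avoided_cost, cost)
    scored = [(sid, total_net_benefit(p, a, c)) for sid, p, a, c in subjects]
    return sorted(scored, key=lambda t: t[1], reverse=True)

ranking = rank_subjects([
    ("s1", 0.8, 5000.0, 1200.0),   # net benefit 2800.0
    ("s2", 0.3, 9000.0, 1200.0),   # net benefit 1500.0
])
print(ranking[0][0])  # s1 ranked first
```

Re-running the ranking as subject data updates in real time gives the dynamic prioritization behavior described.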
G16H 50/30 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicalesTIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour le calcul des indices de santéTIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicalesTIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour l’évaluation des risques pour la santé d’une personne
G16H 40/20 - TIC spécialement adaptées à la gestion ou à l’administration de ressources ou d’établissements de santéTIC spécialement adaptées à la gestion ou au fonctionnement d’équipement ou de dispositifs médicaux pour la gestion ou l’administration de ressources ou d’établissements de soins de santé, p. ex. pour la gestion du personnel hospitalier ou de salles d’opération
88.
System And Method To Scale Out For Compute-Intensive Workloads
A method and apparatus for offloading compute-intensive workloads is provided. A database system compiles an execution plan to generate an offload-enabled plan by identifying a candidate offloading region in the execution plan, generating and adding an offloading branch in the offload-enabled plan, corresponding to the candidate offloading region, for execution by a compute offload runtime, wherein the compute offload runtime comprises a compute offload runtime library executing on the database system and on each node of a compute offload server, and adding the candidate offloading region as a fallback branch in the offload-enabled plan. The database system executes the offload-enabled plan by executing the offloading branch using one or more compute nodes in the database server or the compute offload server using the offload runtime or by executing the fallback branch using one or more compute nodes in the database server.
A database system compiles an execution plan to generate an offload-enabled plan for execution by a compute offload runtime. Compiling the execution plan comprises dividing the offload-enabled plan into one or more pipelines. Each pipeline comprises a pipeline template and a resource binding. Each pipeline template comprises one or more logical tasks, each comprising code for processing one or more data items. The database system executes the offload-enabled plan using a set of compute nodes in the database system or the compute offload server using the compute offload runtime, comprising, for each given logical task of each given pipeline, executing one or more microtasks, each being an instantiation of the given logical task processing a particular data item based on the resource binding of the given pipeline.
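The pipeline-template/microtask relationship can be sketched as follows, with every name and structure an illustrative assumption: each logical task is instantiated once per data item, and each such instantiation is a microtask bound to the pipeline's resources.

```python
def execute_pipeline(pipeline_template, resource_binding, data_items):
    """Instantiate each logical task of the template as one microtask
    per data item, passing the pipeline's resource binding along."""
    results = []
    for task in pipeline_template["logical_tasks"]:
        for item in data_items:
            # One microtask: a single instantiation of a logical task
            # processing a single data item.
            results.append(task["fn"](item, resource_binding))
    return results

template = {"logical_tasks": [{"name": "square", "fn": lambda x, rb: x * x}]}
print(execute_pipeline(template, {"node": "offload-0"}, [2, 3]))  # [4, 9]
```

In the described system the microtasks would fan out across compute nodes of the database system or the compute offload server rather than run in a local loop.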
A database system compiles an execution plan to generate a compute-offload plan for execution by a compute-offload runtime. The compute-offload plan specifies a set of tasks to be offloaded and metadata specifying resource binding parameter values, associating a set of data items stored in one or more storage nodes with the set of tasks. Executing the compute-offload plan comprises sending, using a first communication path, the set of tasks and the resource binding parameter values from the database system to a compute-offload cluster; transferring, using a second communication path, the set of data items from the storage nodes to offload execution nodes based on the resource binding parameter values; and executing the set of one or more tasks on the offload execution nodes to process the set of data items.
Systems, methods, and machine-readable media may facilitate programmable data trimming. One or more request instructions may be received from an application. The one or more request instructions may include a request length specifying a response size expected for a response from a responder. The one or more request instructions may further include a trim length specifying a portion of the response to be retained. A request may be configured based at least in part on the request length and the trim length. The request may be transmitted to the responder via a network. The response may be received from the responder. The response may be trimmed to retain only the portion of the response specified by the trim length. Storage of only the portion of the response in a memory location may be caused.
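A minimal sketch of the request/trim flow; the abstract specifies a request length and a trim length, while the trim offset and all field names here are illustrative assumptions:

```python
def make_request(request_length, trim_offset, trim_length):
    """Request carrying the expected response size plus which slice of
    the response should be retained."""
    return {"request_length": request_length,
            "trim_offset": trim_offset,
            "trim_length": trim_length}

def handle_response(request, response: bytes):
    # Trim before storing: only the requested slice reaches memory.
    start = request["trim_offset"]
    return response[start:start + request["trim_length"]]

req = make_request(request_length=16, trim_offset=4, trim_length=8)
stored = handle_response(req, b"HDR!payload8<crc>")
print(stored)  # b'payload8'
```

The design point is that the trim happens on receipt, so only the retained portion is ever written to the memory location.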
H04L 67/60 - Ordonnancement ou organisation du service des demandes d'application, p. ex. demandes de transmission de données d'application en utilisant l'analyse et l'optimisation des ressources réseau requises
H04L 67/1097 - Protocoles dans lesquels une application est distribuée parmi les nœuds du réseau pour le stockage distribué de données dans des réseaux, p. ex. dispositions de transport pour le système de fichiers réseau [NFS], réseaux de stockage [SAN] ou stockage en réseau [NAS]
Systems, methods, and computer-readable media are provided for generating natural language project summaries via large language models including deterministically derived data value narratives. A computer-implemented method includes processing a first input configuring data stored in association with a plurality of fields, generating a narrative for a project, and causing display of the narrative in a report for the project. The narrative is generated by applying one or more deterministic operations to derive one or more values for the project based at least in part on at least one field of the plurality of fields, based at least in part on the configured data, generating a prompt, prompting a large language model with the prompt to generate a result, and storing the result as the narrative for the project. The prompt includes the one or more derived values and a context comprising the project for which a narrative is being generated.
Systems, methods, and computer-readable media are provided for generating a prompt that specifies a plurality of fields and corresponding values of record(s). The prompt specifies a data structure to use for filling in components of a change order and includes a particular natural language description of a particular issue that caused the change order. A large language model is prompted with the prompt to generate a result based at least in part on the corresponding values of the record(s). The result from the large language model includes a particular data structure comprising particular values of a particular change order, which may then be displayed on a user interface along with an option to save the particular change order. Information from the record(s) and/or result(s) from the large language model may indicate whether or not manual labor, financial resources, and/or other resources are impacted by the change, and an impact may be stored in association with the change order reflecting a corresponding type of impact. The user interface may display another option to provide natural language input to modify the particular change order, causing the large language model to be re-prompted to generate another result to trigger change order creation.
G06Q 10/06 - Ressources, gestion de tâches, des ressources humaines ou de projetsPlanification d’entreprise ou d’organisationModélisation d’entreprise ou d’organisation
94.
LEVERAGING LARGE LANGUAGE MODELS TO CRAFT MEANINGFUL SYNTHESIS OF THE UNDERLYING TRENDS AND PATTERNS IN CERTAIN SEGMENTS
Systems, articles, and computer-implemented methods are provided for generating summaries of a plurality of insights in multi-dimensional data to describe underlying trends using a large language model. A data structure is generated describing the plurality of insights where the data structure encapsulates for each insight of the plurality of insights to be included: a member of a data hierarchy that fits a descendant dimension that includes the insight, a value of the descendant dimension that fits the insight, and a characteristic of the insight. The data structure is included within a prompt to a large language model to summarize the plurality of insights. The prompt may also include data representing a relationship between the plurality of insights, such as how a first insight of the plurality of insights contributes to a second insight of the plurality of insights.
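The per-insight encapsulation and the prompt assembly can be sketched as below; the JSON layout, key names, and example insights are illustrative assumptions:

```python
import json

def build_insight_structure(insights):
    """Encapsulate each insight as the abstract describes: a hierarchy
    member, the descendant-dimension value, and a characteristic."""
    return [{"member": m, "dimension_value": v, "characteristic": c}
            for m, v, c in insights]

def build_prompt(insights, relationship=None):
    payload = {"insights": build_insight_structure(insights)}
    if relationship:
        payload["relationship"] = relationship   # e.g. contribution links
    return ("Summarize the underlying trends in these insights:\n"
            + json.dumps(payload, indent=2))

prompt = build_prompt(
    [("EMEA", "Q3 revenue -12%", "anomaly"),
     ("Worldwide", "Q3 revenue -3%", "trend")],
    relationship="EMEA decline contributes to the worldwide dip",
)
print("anomaly" in prompt)  # True
```

The optional `relationship` field corresponds to the abstract's example of encoding how one insight contributes to another.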
Systems, methods, and computer-readable media are provided for using generative AI enriched with metadata about historical document characteristics to transform documents of various formats, including images, to the fields and values they represent. A prompt template may be selected in association with a type of document. The prompt template indicates field definition(s) of field(s) to be detected in the document and location(s) in which the field(s) have been detected in prior documents. A large language model is prompted with a prompt generated using the prompt template to generate a result that assigns value(s) to the field(s). Output from the language model is used for identifying the field-to-value mapping for the document, such that data detected from the document may be stored in appropriate database structures of a database. Metadata stored in association with the prompt template is updated based on location(s) in the document in which the field(s) were detected, and the value(s) of the field(s) are stored in a database. Outbound documents may be similarly translated to detect values of corresponding fields requested by third parties, even if those values are not stored in the database. In this scenario, values for fields may be detected in outbound documents using the prompt templates enriched with metadata as processed by the large language model before such information is prepared to be sent to a third party.
Techniques discussed herein relate to generating and utilizing snapshots (also referred to as "service images") of a cloud-based service. A snapshot may be generated within a source environment (e.g., one compartment and/or region) and re-instantiated in a target environment (e.g., a different compartment and/or region, the same compartment/region as would be the case in a recovery scenario). The snapshot may include serialized data of any suitable combination of resource metadata, images, block/boot volume content, runtime state data, environmental variables, and the like of the service of the source environment, at a time at which the snapshot was generated. The snapshot may be deserialized in the target environment and used to perform infrastructure and/or artifact/software releases to bring the control plane and/or data plane resources of the target environment to a desired state corresponding to the state of the service in the source environment when the snapshot was generated.
G06F 9/50 - Allocation de ressources, p. ex. de l'unité centrale de traitement [UCT]
H04L 41/084 - Configuration en utilisant des informations préexistantes, p. ex. en utilisant des gabarits ou en copiant à partir d’autres éléments
H04L 67/00 - Dispositions ou protocoles de réseau pour la prise en charge de services ou d'applications réseau
G06F 9/455 - ÉmulationInterprétationSimulation de logiciel, p. ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
H04L 41/08 - Gestion de la configuration des réseaux ou des éléments de réseau
97.
SYSTEM AND METHOD FOR USE WITH A DATA ANALYTICS ENVIRONMENT TO ENABLE USE OF AI IN PROVIDING CUSTOMER SUPPORT
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to enable use of AI in providing customer support. Machine learning AI models are trained based on one or more previous service request lifecycles of service requests of a customer to determine latent emotions of the customer based on determined customer problem data. A customer service prioritization signal related to a current service request of the customer is generated by a predictive analytics application that includes the models. The customer service prioritization signal is indicative of a need to prioritize the current service request based on the determined latent emotions of the customer, and is generated during the lifecycle of the current service request and prior to its end, whereby escalation of the current service request may be deferred or prevented.
G06Q 30/016 - Fourniture d’une assistance aux clients, p. ex. pour assister un client dans un lieu commercial ou par un service d’assistance après-vente
An interactive digital assistant action interface includes a computer including processors that provide access to a data analytics environment, a chat-assistance service or application, and a large language model (LLM). The chat-assistance service or application delivers to the LLM a prompt corresponding to a received query and a desired task is determined based on the LLM receiving the prompt. One or more processes, steps, and/or APIs of the determined desired task are executed at the data analytics environment, and results of the one or more processes, steps, and/or APIs of the determined desired task being executed at the data analytics environment are provided.
Embodiments described herein are generally related to data analytics environments, and are particularly directed to systems and methods for use with a data analytics environment to provide hi-query AI for use with the data analytics environment. Systems and methods disclosed can provide for query processing and semantic analysis. The system can take a user's natural language question and run a semantic search to discern the query's intent and find tables relevant to the question, and generate a query to run against a data store or data warehouse.
Embodiments described herein are generally related to data analytics environments, and to systems and methods for providing aggregated summaries and aspect scores associated with unstructured textual data. In accordance with an embodiment, the system uses a key-based or batch approach that assesses factors associated with an unstructured textual dataset, such as, for example, a total number of text entries per key, or the character length of each text entry. Based on a consideration of such factors, the system sends batches of text entries, and a prompt, to a large language model processor, to collect intermediate batch results. The intermediate batch results can be used first to develop a numerical score or summary for each key, directed to various aspects of interest within the data; and subsequently to generate aggregated summaries and/or aspect scores associated with the textual dataset, for use in displaying visualizations or returning additional analytical information.