The present disclosure involves systems, software, and computer implemented methods for user-specific access control for metadata tables. An example method includes receiving, from a user, a query that queries a metadata table. For each metadata table row, a determination is made as to whether the user owns the object represented by the metadata table row. If the user owns the object, the row is included in a result set for the query. If the user does not own the object, a determination is made as to whether the user has access permission to the object. If the user has access permission to the object, the row is included in the result set. If the user does not have access permission to the object, the row is excluded from the result set. After all metadata table rows are processed, the result set is provided in response to the query.
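The per-row decision described above reduces to an ownership-or-permission filter. A minimal sketch in Python (the function names, row layout, and callback signatures are illustrative assumptions, not from the disclosure):

```python
def filter_metadata_rows(rows, user, owns, has_permission):
    """Return only the metadata table rows the user may see.

    `owns(user, obj)` and `has_permission(user, obj)` are assumed
    callbacks standing in for the ownership and permission checks.
    """
    result = []
    for row in rows:
        obj = row["object"]
        # Include the row if the user owns the object, or failing
        # that, if the user has been granted access permission.
        if owns(user, obj) or has_permission(user, obj):
            result.append(row)
    return result

rows = [{"object": "t1"}, {"object": "t2"}, {"object": "t3"}]
owns = lambda user, obj: obj == "t1"   # user owns t1
perm = lambda user, obj: obj == "t2"   # user is granted access to t2
visible = filter_metadata_rows(rows, "alice", owns, perm)
```

Rows for `t1` and `t2` are returned; the row for `t3` is excluded.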
To predict hardware safety margins, historic records of hardware metrics indicating amounts of allocated and used resources for one or more software applications are obtained. Feedback metrics indicating performance issues for the software are determined based on the hardware metrics. A histogram is then generated plotting a frequency of a feedback metric using bins based on a difference between the allocated resources and the used resources. A threshold value is determined for the difference by iteratively determining, starting with a rightmost bin, whether data points in that bin indicate poor performance of the software based on the difference between the allocated resources and the used resources. The threshold value indicates a safety margin for operating the one or more software applications without performing poorly. Resources for the one or more software applications are then re-allocated according to the safety margin.
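The rightmost-bin walk described above can be sketched as follows. The bin count and the poor-performance rate cutoff are illustrative assumptions, not values from the disclosure:

```python
def safety_margin_threshold(margins, poor, bins=3, poor_rate=0.5):
    """Estimate the smallest allocated-minus-used margin that still
    avoids poor performance.

    `margins` are allocated-minus-used values; `poor` are matching
    booleans marking data points with performance issues.
    """
    lo, hi = min(margins), max(margins)
    width = (hi - lo) / bins
    # Assign each data point to a histogram bin over the margin axis.
    idx = [min(int((m - lo) / width), bins - 1) for m in margins]
    # Walk from the rightmost (largest-margin) bin leftward.
    for b in range(bins - 1, -1, -1):
        flags = [p for i, p in zip(idx, poor) if i == b]
        if flags and sum(flags) / len(flags) >= poor_rate:
            # First bin indicating poor performance: its right edge
            # is the safety-margin threshold.
            return lo + (b + 1) * width
    return lo
```

With small margins performing poorly and large margins performing well, the returned edge separates the two regimes.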
In an implementation, providing phased configuration changes with a fallback option includes creating, using a context manager (CM), a new context. Using the CM, a new configuration-variant is deployed. Using the CM, the new context is assigned for use by users connecting to a database schema of the new context. Using the CM, a determination is made to phase out use of the new configuration-variant. The new context is then cleaned up.
G06F 16/21 - Design, administration or maintenance of databases
G06F 8/71 - Version control; Configuration management
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
4.
User-Friendly Smart Contract Authoring on Distributed Ledger
A computer-implemented user-friendly system and method of designing and managing smart contracts on a distributed ledger (blockchain). The system creates a number of computer programs that correspond to a business user's model of the contract terms. In this manner, the business user can generate the smart contract without needing to understand programming or involve third parties like developers.
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
A computer-implemented method may comprise receiving a first search value and generating a first set of search results based on a first search of database tables for the first search value. The first set of search results may identify each row of each database table that has a corresponding cell that includes the first search value. Each row of one of the database tables that is identified in the first set of search results may be displayed along with a corresponding cell value that is stored in each one of the cells of the row. A second search value defined by a user selection of one of the displayed cell values may be received, and, in response to receiving the second search value, a second set of search results may be generated based on a second search of the database tables for the second search value.
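The two-step, cell-pivoting search can be sketched as follows; the dict-of-row-dicts table layout and the sample values are illustrative assumptions:

```python
def search_tables(tables, value):
    """First search: find every row, in any table, that has a cell
    equal to the search value."""
    return [(name, row) for name, rows in tables.items()
            for row in rows if value in row.values()]

tables = {
    "orders": [{"order_id": 7, "customer": "ACME"}],
    "customers": [{"name": "ACME", "city": "Berlin"}],
}
first = search_tables(tables, "ACME")   # rows from both tables
# The user selects a displayed cell value (here 7) as the second
# search value, pivoting the search to related rows.
second = search_tables(tables, 7)
```

The second search reuses the same mechanism, seeded with a value the user picked from the displayed results.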
Various embodiments for a disk-based merge for combining merged hash maps are described herein. An embodiment operates by identifying a first hash map and a second hash map, and comparing a first hash value from the first hash map with a second hash value from the second hash map, each taken at the lowest unconsumed index of its respective map. A lowest hash value is identified based on the comparison, and an entry corresponding to the lowest hash value is stored in a combined hash map. This process is repeated until all of the hash values from both the first hash map and the second hash map are stored in the combined hash map. A query is received, and processed based on the combined hash map.
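The repeated lowest-value comparison is the classic two-way merge of sorted runs. A sketch, assuming each map is represented as a list of `(hash_value, entry)` pairs sorted ascending (the representation is an assumption for illustration):

```python
def merge_hash_maps(first, second):
    """Merge two hash maps whose entries are sorted by hash value
    into a single combined hash map."""
    combined, i, j = [], 0, 0
    while i < len(first) and j < len(second):
        # Compare the entries at the lowest unconsumed index of each
        # map and store the one with the lowest hash value.
        if first[i][0] <= second[j][0]:
            combined.append(first[i]); i += 1
        else:
            combined.append(second[j]); j += 1
    # One input is exhausted; the remainder of the other is in order.
    combined.extend(first[i:])
    combined.extend(second[j:])
    return combined
```

Because only the current head of each run is needed in memory, the same loop works when the runs are streamed from disk.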
A61F 13/66 - Garments, holding devices or supports not integral with absorbent pads
A61F 13/72 - Garments, holding devices or supports not integral with absorbent pads, of the abdomen-covering type with an endless band encircling the waist, e.g. of the panty type
A61F 13/84 - Accessories, not otherwise provided for, for absorbent pads
A61L 15/18 - Bandages, dressings or absorbent pads for physiological fluids such as urine or blood, e.g. sanitary towels or tampons, containing inorganic materials
A61L 15/46 - Deodorants or products for neutralising malodours, e.g. for inhibiting the formation of ammonia or the growth of bacteria
7.
ENCODED IDENTIFIERS FOR CREDENTIAL ACCESS AND DISTRIBUTION
Systems and methods described herein relate to workforce credential management. A request is assigned to a first profile identified by a first identifier. The first profile includes worker credentials. An association between the first profile and a second profile identified by a second identifier in the request is stored. The first identifier is encoded into a digital code that is presentable by a first device associated with the first profile. Capturing of the digital code by a second device associated with the second profile is detected. In response to detecting the capturing of the digital code by the second device, the association between the first profile and the second profile is identified and at least a subset of the worker credentials is transmitted to the second device.
Arrangements for configuration changes using a wildcard engine are provided. A data pattern group associated with one or more configuration sets may be generated. The data pattern group may include data patterns with one or more wildcard characters. An input selection may be received. Values for the data pattern group associated with the one or more configuration sets may be selected based on the input selection. One or more templates associated with the data pattern group may be modified by applying the selected values to the templates. Old data patterns may be automatically replaced with new data patterns. The modified one or more templates associated with the data pattern group may be activated via a template engine and stored in a database.
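Applying selected values to templates through wildcard data patterns can be sketched with shell-style matching; the dict-of-lists template representation and the sample names are illustrative assumptions:

```python
from fnmatch import fnmatch

def modify_templates(templates, pattern_group, selected_values):
    """For each template, replace entries matching a wildcard data
    pattern in the group with the value selected for that pattern;
    non-matching entries are left unchanged."""
    out = {}
    for name, entries in templates.items():
        out[name] = [
            next((selected_values[p] for p in pattern_group
                  if fnmatch(entry, p)), entry)
            for entry in entries
        ]
    return out

templates = {"t1": ["host-dev-01", "keep-as-is"]}
modified = modify_templates(templates, ["host-*"],
                            {"host-*": "host-prod-01"})
```

The old data pattern matching `host-*` is automatically replaced with the newly selected value.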
A system may include a provider server that is associated with a provider of a cloud computing environment. The provider server may receive, from a tenant, customized enterprise configuration information associated with an onboarding process (e.g., a customized business configuration). The provider server may then create an original system in the cloud computing environment for the customized enterprise configuration information. The tenant may develop and test the customized enterprise configuration information via a project line of the original configuration before it is taken over into a main line. Moreover, the project line can be changed by being reset by the tenant (e.g., without action by the provider), while changes to the main line (e.g., to correct mistakes) require that the provider provision another system for the tenant.
Systems and methods include input of a description of a procedure to a large language model to determine a domain of the description, determination of modifiers to the description based on the domain, determination of example procedure models and descriptions corresponding to the example procedure models based on the domain, generation of a procedure model prompt based on the description, the modifiers, the example procedure models and corresponding descriptions, provision of the procedure model prompt to the large language model, reception, in response to the prompt, of a generated procedure model from the large language model, and storage of the generated procedure model for execution by a workflow automation system.
G06Q 10/06 - Resources, workflow, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
Systems and methods described herein relate to automated troubleshooting for application development tools. First data comprise a plurality of delta objects and a user query. The plurality of delta objects identify design-time changes made to an application using a development tool. The user query is received via a user interface and relates to at least one of the design-time changes. The first data is preprocessed to obtain second data. The preprocessing of the first data includes modifying a subset of the first data and adding one or more predetermined instructions to the first data. The second data is provided to a machine learning model to obtain a response to the user query. Output indicative of the response is caused to be presented in the user interface.
Methods, systems, and computer-readable storage media for receiving a request through a web services API, the request comprising a query to query a database system, retrieving a set of weights that is specific to the web services API, determining a factor score for each impact factor in a set of impact factors to provide a set of factor scores, providing a score total for the query based on the set of weights and the set of factor scores, returning a score response including the score total and at least one query suggestion, and receiving a modified request through the web services API, the modified request including the query modified to include at least a portion of the at least one query suggestion.
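The score total amounts to a weighted sum of per-factor scores under the service-specific weight set. A minimal sketch (the factor names and weight values are hypothetical):

```python
def score_query(weights, factor_scores):
    """Score total for a query: each impact factor's score weighted
    by the web-service-specific weight for that factor."""
    return sum(weights[f] * s for f, s in factor_scores.items())

# Hypothetical weight set retrieved for one web services API.
weights = {"join_count": 2.0, "scanned_rows": 1.0}
factor_scores = {"join_count": 3.0, "scanned_rows": 4.0}
total = score_query(weights, factor_scores)
```

A score response would pair this total with query suggestions that, if adopted, lower the dominant factor scores.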
In an example embodiment, a root cause of a CI/CD pipeline failure is identified automatically from event logs of the CI/CD pipeline. A solution is then suggested automatically using an artificial intelligence analysis. More particularly, the identified root cause (e.g., error) and contextual information about the system and/or application being examined may be passed to an AI engine to predict one or more solutions to the root cause.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
System, method, and various embodiments for data tagging and prompt generation are described herein. An embodiment operates by receiving input data, identifying metadata, generating one or more statistics based on the input data, calculating a sample size for the input data based on the one or more statistics, and extracting a sample of the input data of the sample size. A prompt is generated based on a prompt template, and the prompt is provided to a language model configured to tag the input data in accordance with the prompt. Output including the tagged input data is received, and a query is executed against the tagged input data.
In some implementations, there is provided a method that includes configuring a first threshold for page-loadable data at a buffer cache associated with a database; checking the buffer cache to determine usage of the buffer cache by the page-loadable data; in response to the usage of the buffer cache being more than the first threshold, causing a background job to release one or more buffers in the buffer cache; checking, after releasing at least one buffer of the buffer cache, the buffer cache to determine whether usage by the page-loadable data is below the first threshold; in response to the usage being below the first threshold, stopping the release of additional one or more buffers in the buffer cache; and in response to the usage being above the first threshold, continuing the release of the additional one or more buffers in the buffer cache.
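The threshold-driven release loop can be sketched as follows; the `BufferCache` class and its methods are illustrative assumptions standing in for the database buffer cache:

```python
class BufferCache:
    """Toy cache tracking sizes of page-loadable buffers."""
    def __init__(self, buffers):
        self.buffers = list(buffers)
    def page_loadable_usage(self):
        return sum(self.buffers)

def enforce_threshold(cache, threshold):
    """Background-job sketch: while page-loadable usage exceeds the
    configured threshold, release one buffer at a time, re-checking
    usage after each release; stop once usage falls below it."""
    released = 0
    while cache.page_loadable_usage() > threshold and cache.buffers:
        cache.buffers.pop()   # release one buffer
        released += 1
    return released

cache = BufferCache([4, 4, 4])
released = enforce_threshold(cache, 5)   # usage 12 -> 8 -> 4, then stop
```

Re-checking usage after every release mirrors the check-release-recheck cycle in the method above.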
G06F 12/1009 - Address translation with page tables, e.g. page table structures
G06F 12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
Disclosed herein are system, method, and computer program product embodiments for providing posting platform validation. An embodiment operates by receiving a first and a second URL associated with a posting platform, and a first and a second parameter associated with the first URL or the second URL. The embodiment compares a first segment of the first URL and a second segment of the second URL, and compares a third segment of the first URL and a fourth segment of the second URL. The embodiment then determines a pattern based on the comparisons. The embodiment then generates a third URL associated with the posting platform based on the pattern, the first parameter and the second parameter, issues a query to the posting platform that includes the third URL, receives a response from the posting platform; and provides an output indicating whether the third URL is being provided by the posting platform.
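The segment-by-segment comparison and pattern-based URL generation can be sketched as follows; the example platform URLs are hypothetical:

```python
from urllib.parse import urlsplit

def derive_url_pattern(url1, url2):
    """Compare path segments of two URLs from the same posting
    platform; segments that differ become a placeholder."""
    p1, p2 = urlsplit(url1), urlsplit(url2)
    segs1 = p1.path.strip("/").split("/")
    segs2 = p2.path.strip("/").split("/")
    pattern = [a if a == b else "{param}" for a, b in zip(segs1, segs2)]
    return f"{p1.scheme}://{p1.netloc}/" + "/".join(pattern)

def build_url(pattern, param):
    """Generate a third URL from the pattern and a new parameter."""
    return pattern.replace("{param}", param)

pattern = derive_url_pattern("https://jobs.example.com/listing/123",
                             "https://jobs.example.com/listing/456")
third = build_url(pattern, "789")
```

Querying the platform with the generated third URL then shows whether the platform actually serves it, i.e. whether the pattern holds.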
According to some embodiments, systems and methods are provided including receiving an artifact generating request; receiving selection of an artifact type; generating a user interface display including a modeling canvas, the modeling canvas including a client, a target and a build space; receiving one or more components associated with the artifact type on the modeling canvas, wherein the components are populated on the generated modeling canvas in response to received selection of the artifact type; receiving a flow for the artifact type, the flow including a first link of the client to a first component of the one or more components and a second link of the target to one of the one or more components; defining properties for each received component and each link; generating an artifact; and storing the artifact. Numerous other aspects are provided.
Technologies are described for correcting data, such as master data, in an unsupervised manner using supervised machine learning. Correction of master data can involve receiving a table containing unlabeled master data. Machine learning models are applied to the fields of one or more columns of the table to predict values of the fields, and the machine learning models use supervised learning without requiring manually labeled training data. For example, a machine learning model can be applied to a particular field of a particular column to predict the value of the particular field. The machine learning model uses the fields of other columns as features. Results of applying the machine learning models include indications of recommended values, indications of probabilities of the recommended values, and indications of which original values do not match their respective recommended values. The results can be used to perform manual and/or automatic correction of the master data.
G06F 12/0888 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using selective caching, e.g. cache purging
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
Provided is a system and method for filtering data records via user interaction on a user interface. During the filtering process, the user interface can provide insights into the next filtering step by displaying additional insight on the user interface. In one example, the method may include displaying a user interface comprising interactive controls, receiving a selection of a filtering condition based on input on the user interface, in response to the selection, filtering a plurality of data records based on the selected filtering condition to identify a subset of data records that satisfy the filtering condition from among the plurality of data records, identifying a subset of filtering conditions from among the plurality of filtering conditions that are available for the subset of data records, and displaying an identifier of the subset of data records and identifiers of the subset of filtering conditions on the user interface.
A computer-implemented method includes translating tenant-specific preferences for primary and secondary datacenter locations into a routing configuration. A service mesh is set up for communication between services within and across the primary and secondary datacenter locations. Service persistencies with endpoints in datacenter locations are used to configure replication agents between the service persistencies. Using service endpoints, Virtual Services that implement the service mesh are configured. An Ingress Gateway is configured to route end user requests into the service mesh to a first service instance in the tenant-selected primary datacenter. According to the tenant-specific preferences, data replication is configured to copy data to redundant storage. Persistent storage replication agents are configured, using their endpoints, for each service persistence in the tenant-selected primary datacenter.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A computer-implemented method may comprise: receiving, from a software application, a first request to retrieve archived data from an archive, the first request being associated with a user of the software application; obtaining the archived data in a first format from the archive; transforming the archived data from the first format into a second format; extracting access control data from the archived data in the second format, the access control data defining one or more criteria for accessing the archived data; injecting the access control data into an access management system that is configured to control access to non-archived data; subsequent to the injecting the access control data, sending, to the access management system, a second request to evaluate access rights of the user; and based on the access management system evaluating the access rights of the user, sending the archived data in the second format to the software application.
Various examples are directed to systems and methods for determining table data from a document image depicting a plurality of words and at least one table comprising at least a portion of the plurality of words. For example, Optical Character Recognition (OCR) data may be determined based on the document image. A table detection model may be executed based at least in part on the OCR data.
G06V 30/416 - Extraction of the logical structure, e.g. chapters, sections or page numbers; Identification of elements of the document, e.g. authors
G06V 30/412 - Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
Techniques and solutions are provided for improved query optimization, including for sub-portions of a query plan. A query is submitted to an inline query optimizer and at least one auxiliary query optimizer. The query is optimized by the inline query optimizer and the auxiliary query optimizer. A query processor can evaluate costs associated with query plans produced by the inline query optimizer and the auxiliary query optimizer and select a plan for execution that is most performant.
Properties for an ontology, such as one used for semantic query execution, automated analytical reasoning, or machine learning, are determined using instance graphs. A corpus of documents is received, representing a plurality of domain instances of a domain. Instance graphs are generated for the plurality of domain instances to provide a plurality of instance graphs. Properties represented in the plurality of instance graphs are determined. At least a portion of the properties are assigned to an ontology for the domain.
According to some embodiments, systems and methods are provided including executing a test, wherein the test evaluates a single object class; receiving a list of objects accessed by the test, wherein a test level classification is absent for each object included in the list; excluding one or more objects from the list of objects based on the object's inclusion in an exclusion object class; assigning the test level classification to the test based on the one or more non-excluded objects from the list; and storing the assigned test level classification with a test identifier. Numerous other aspects are provided.
Various examples are directed to systems and methods for generating a query of a database. An example method may comprise accessing a knowledge base data structure comprising a plurality of nodes and a plurality of edges and accessing a plurality of training samples comprising a plurality of positive training samples and a plurality of negative training samples. The example method may also comprise determining a subgraph comprising a subset of the plurality of nodes and a subset of the plurality of edges, the subset of the plurality of nodes comprising at least two nodes that are also part of the plurality of training samples, and executing a subgraph query of the knowledge base data structure, the subgraph query being based at least in part on the subgraph.
Aspects relate to a computer implemented method, computer-readable media and a computer system for detecting data leakage and/or detecting dangerous information. The method comprises receiving a knowledge graph and extracting data from at least one network service. The method further comprises identifying statements in the extracted data. For each identified statement, the method further comprises determining whether the identified statement is public or private using the knowledge graph, and/or determining whether the identified statement is true or false using the knowledge graph.
Systems and methods are provided for generating a combined list of attributes for at least one selected object by combining known attributes and a list of attributes for custom tables, determining a scrambling method for each attribute in the combined list of attributes for the at least one selected object, and scrambling each attribute of the combined list of attributes for the at least one selected object, according to the scrambling method for each attribute. The systems and methods further provide for generating a compliance report indicating what was changed in a system by the scrambling of each attribute and what scrambling methods were applied, and allowing release of production data comprising the scrambled attributes for the at least one selected object to a test system for use in testing functionality for an application or service.
Systems and processes for managing and applying data classification labels are provided. In a method for managing data classification labels, a request may be received from a client application to fetch data classification labels from a policy server. Authentication information may be retrieved and passed to the policy server, and an access token may be received from the policy server based on a rights check performed using the authentication information. The access token may be provided to an interface for the policy server for use in generating a request for a list of data classification labels accessible via the access token, and the list of data classification labels may be received. Output data usable by the client application to generate a presentation of the list of data classification labels for selection by a user of the client application to classify data managed by the client application may be generated.
Systems and methods described herein relate to an event handling platform for sensor installations. An event message is received from an identified sensor installation. The identified sensor installation is associated with a facility. The event message is indicative of an event detected by the identified sensor installation. The event is validated against a schema of an identified event type from among a plurality of event types by comparing the event message to metadata of the identified event type. The metadata may be stored in an event metadata repository. The event message is transmitted to a software application associated with the facility to trigger an event reaction. Based on the event reaction, status data for the facility is caused to be presented at a user device accessing the software application.
Class definitions for an ontology of a domain are determined using a materialized instance graph, where the ontology is used for semantic query execution, automated analytical reasoning, or for machine learning. A plurality of instance graphs for a respective plurality of domain instances is received. A materialized instance graph is generated from the plurality of instance graphs. One or more communities represented in the materialized instance graph are determined. Properties associated with respective communities of the one or more communities are determined. Class definitions are generated, where a class corresponds to a community of the one or more communities and at least a portion of properties associated with the community. Class definitions are assigned to the ontology for the domain.
In a computer-implemented method for a context-aware personal application memory (PAM), data related to user actions with one or more software applications is captured, using an Application Memory Interface (AMIF), to create captured data. Using the AMIF, the captured data is enhanced with metadata, data, and semantic relations to create enhanced data. Using the AMIF, the enhanced data is filtered to create filtered data. The filtered data is sent by the AMIF to the PAM.
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 40/40 - Processing or translation of natural language
33.
ANOMALOUS BEHAVIOR IDENTIFICATION FROM HOMOGENEOUS DYNAMIC DATA
Systems and methods include reception of time-series data of a metric for each of a plurality of computer servers, determination, for each computer server, of a representative value of the metric based on the time-series data of the metric for the computer server, determination, for each computer server, of a fluctuation value of the metric based on the time-series data of the metric for the computer server, determination of a standard value of the metric based on the determined representative values, determination of a standard fluctuation value based on the determined fluctuation values, determination, for each computer server, of a difference value based on a difference between the standard value and the representative value for the computer server and a difference between the standard fluctuation value and the fluctuation value for the computer server; and identification of one or more anomalous computer servers based on the difference values.
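The representative-value/fluctuation-value comparison can be sketched as follows. The choices of median as the representative value, standard deviation as the fluctuation value, and the cutoff multiplier `k` are illustrative assumptions:

```python
import statistics

def anomalous_servers(series_by_server, k=2.0):
    """Flag servers whose metric behaviour deviates from the fleet."""
    # Per-server representative and fluctuation values.
    rep = {s: statistics.median(v) for s, v in series_by_server.items()}
    fluct = {s: statistics.pstdev(v) for s, v in series_by_server.items()}
    # Fleet-wide standard values derived from the per-server values.
    std_rep = statistics.median(rep.values())
    std_fluct = statistics.median(fluct.values())
    # Difference value per server: distance from both standards.
    diffs = {s: abs(std_rep - rep[s]) + abs(std_fluct - fluct[s])
             for s in series_by_server}
    cutoff = k * statistics.median(diffs.values())
    return sorted(s for s, d in diffs.items() if d > cutoff)

series = {"a": [10, 10, 10], "b": [10, 10, 10],
          "c": [10, 10, 10], "d": [50, 90, 10]}
outliers = anomalous_servers(series)
```

Server `d` deviates in both level and fluctuation and is the only one flagged.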
Techniques for automating a software application subscription process are presented herein. In an example, in response to receiving a subscription request from a client device, a subscription automator extracts a plurality of parameters from the subscription request. Also, the subscription automator retrieves a script template from an automation framework, where the script template corresponds to a given cloud service application to which one or more users wish to subscribe. The subscription automator generates an executable script by populating the script template with the plurality of parameters. Next, the subscription automator causes the executable script to be executed to initiate a subscription process of the given cloud service application. Then, the subscription automator creates subscriptions to the given cloud service application for the one or more users as a result of initiating the subscription process.
Systems and methods include receipt of search terms, determination of an embedding for each of the search terms, generation of a composite embedding based on the determined embeddings, determination of similarities between the composite embedding and second composite embeddings associated with each of a plurality of hierarchical group codes, determination of a hierarchical group code of the plurality of hierarchical group codes based on the determined similarities, and generation of search results based on the search terms and the hierarchical group code.
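The composite-embedding similarity step can be sketched as follows; averaging as the composition rule and the toy two-dimensional vectors are illustrative assumptions:

```python
import math

def composite_embedding(embeddings):
    """Average per-term embeddings into one composite vector."""
    n = len(embeddings)
    return [sum(v) / n for v in zip(*embeddings)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_group_code(search_embeddings, code_composites):
    """Pick the hierarchical group code whose composite embedding is
    most similar to the composite of the search-term embeddings."""
    q = composite_embedding(search_embeddings)
    return max(code_composites, key=lambda c: cosine(q, code_composites[c]))

codes = {"A": [1.0, 0.0], "B": [0.0, 1.0]}
chosen = best_group_code([[1.0, 0.0], [0.8, 0.2]], codes)
```

Search results would then be generated from both the original terms and the chosen group code.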
A primary database system loads database objects into a primary in-memory store according to a given format. The primary database captures, in replay logs, the loading of the database objects according to the given format. The primary database sends the replay logs to a secondary database system. In response to receiving a replay log, the secondary database checks the value of a log replay configuration parameter. If the configuration parameter is a first value, the secondary database replays the replay log to load the corresponding database objects into a secondary in-memory store according to a first format. If the configuration parameter is a second value, the secondary database replays the log to load the objects according to a second format, and if the configuration parameter is a third value, the secondary database replays the log to load the objects in a same format which was used by the primary database.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/25 - Integrating or interfacing systems involving database management systems
37.
Automatic extension of a partially-editable dataset copy
The present disclosure involves systems, software, and computer implemented methods for automatically extending a partially-editable dataset copy. One example method includes identifying, for a data set, extension filter criteria that extends a current filter that defines an editable portion of the data set. An extended filter is automatically generated for the data set based on the extension filter criteria and the current filter. Additional data is copied into the partially-editable copy of the data set based on the extended filter and the current filter to generate an updated partially-editable copy of the data set. The current filter is replaced with the extended filter to create a new current filter. An updated exposed view is generated using the new current filter that exposes the updated partially-editable copy of the data set and an updated non-editable portion of the data set.
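Modeling filters as row predicates, the extension step can be sketched as follows (the predicate representation and sample data are illustrative assumptions):

```python
def extend_filter(current, extension):
    """Combine the current filter with extension filter criteria into
    a new, broader extended filter."""
    return lambda row: current(row) or extension(row)

def additional_rows(dataset, current, extended):
    """Rows newly covered by the extended filter - matching the
    extended filter but not the current one - to be copied into the
    partially-editable copy."""
    return [r for r in dataset if extended(r) and not current(r)]

dataset = [1, 2, 3, 4, 5, 6]
current = lambda r: r <= 2            # current editable portion
extended = extend_filter(current, lambda r: r in (3, 4))
to_copy = additional_rows(dataset, current, extended)
```

After the copy, the extended filter replaces the current one, so the updated exposed view covers the enlarged editable portion.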
Isolated environments for development of modules of a software system, such as an enterprise resource planning (ERP) system, can be generated using a container image generated from a copy of a central development environment. A graph-based machine learning model can be trained and applied to a graph of the software system to predict dependencies between the modules of the software system. An isolation forest machine learning model can be trained and applied to a selected module to verify its integrity. The container image can be modified based on the predicted dependencies and the integrity verification, among other factors. The modified container image can be executed to generate an isolated environment for the selected module. A version management utility and a transport system can be used during subsequent development in the isolated environment to manage and register repositories and objects associated with the isolated environment.
Various examples described herein are directed to systems and methods involving a database management system programmed to maintain a first database comprising data associated with a first organization of a business enterprise and a second database comprising data associated with a second organization of the business enterprise. An enterprise resource planning application may be programmed to access a source service order data structure from the first database. The source service order data structure describes a source service order to be completed by the first organization and comprises an identifier of the first organization and service item data describing at least one service item to be performed to complete the source service order. The enterprise resource planning application may generate a first inter-organization service order data structure describing a first inter-organization service order to be completed by the second organization.
This disclosure describes systems, software, and computer implemented methods for maintaining travel databases and providing improved search results from them. Implementations include querying a plurality of data sources using a plurality of extractors. The plurality of extractors can receive travel information from the plurality of data sources and populate a software object with the travel information to generate structured travel information, which can be submitted to an extraction queue. A data keeper can extract structured travel information of a particular category from the extraction queue and submit the structured travel information to a database queue. A canonical database manager (CDM) can extract the structured travel information of the particular category from the database queue.
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
Disclosed herein are system, method, and computer program product embodiments for a dynamic generation of a mesh service. An embodiment operates by receiving, by at least one processor, an input indicating a plurality of service containers, and retrieving a container image from a container repository responsive to the receiving of the input. The embodiment further operates by creating a new container image based on the container image and the plurality of service containers indicated in the input. In addition, the embodiment operates by creating a component by calling an application programming interface (API) of an orchestration platform, and receiving additional user attribute data from a remote server. Then the embodiment operates by creating the mesh service based on the new container image and the component.
The present disclosure involves systems, software, and computer implemented methods for automating handling of data subject requests for data privacy integration protocols. One example method includes receiving a ticket for performing a data privacy integration protocol for a data subject. A work package that includes a work package parameter that is based on a ticket parameter is provided to responder applications. Processing of the work package by responder applications includes determining, for at least one object associated with the data subject, purposes associated with the object. The responder application determines, for each purpose, a purpose setting that corresponds to the work package parameter. The responder application processes the work package based on the work package parameter and the purpose settings and provides feedback to a data privacy integration service, which processes the feedback, to continue the data privacy integration protocol for the ticket.
Embodiments of the present disclosure include techniques for recovering data. In one embodiment, data is copied to a buffer. A plurality of processing functions receive the data in the buffer as data pages and perform processing operations. The processed data pages are then stored in persistent memory. Main memory is monitored so that the main memory of the database is maintained in a state in which a consistent flow of data can be written to persistent memory during the recovery process.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 11/10 - Error detection or correction by redundancy in data representation, e.g. by using checking codes, by adding special digits or symbols to data expressed according to a code, e.g. parity check, casting out nines or elevens
44.
LANDSCAPE RECONFIGURATION BASED ON OBJECT ATTRIBUTE DIFFERENCES
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes receiving normalized and hashed object data for multiple landscape systems in a multi-system landscape. The normalized and hashed object data from different landscape systems is compared to identify at least one difference between normalized and hashed object data between landscape systems for at least one object. At least one misconfiguration in the multi-system landscape is identified based on the at least one difference between normalized and hashed object data between landscape systems. A reconfiguration of the multi-system landscape is identified for correcting the misconfiguration; and the reconfiguration is applied in the multi-system landscape to correct the misconfiguration.
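One way to picture the comparison step is to canonicalize each object before hashing, so that semantically equal objects always produce equal hashes, and then flag objects whose hashes differ across landscape systems. The sketch below is a hypothetical illustration (function names, JSON canonicalization, and SHA-256 are all assumptions).

```python
import hashlib
import json

# Hypothetical sketch: normalize and hash object data, then compare
# hashes across landscape systems to surface differences.
def normalize_and_hash(obj):
    # Normalization: sort keys and strip insignificant whitespace so that
    # semantically equal objects hash identically.
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_differences(system_objects):
    """system_objects: {system_name: {object_id: object_data}}.
    Returns (object_id, [systems]) pairs where hashed data differs."""
    diffs = []
    object_ids = set().union(*(objs.keys() for objs in system_objects.values()))
    for oid in object_ids:
        hashes = {name: normalize_and_hash(objs[oid])
                  for name, objs in system_objects.items() if oid in objs}
        if len(set(hashes.values())) > 1:      # at least one difference
            diffs.append((oid, sorted(hashes)))
    return diffs
```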
Various examples are directed to systems and methods of managing and integration platform. A cloud environment may execute an integration runtime that runs a plurality of integration flows, a first integration flow of the plurality of integration flows may be configured to interface at least one message between a first software component and a second software component. The cloud environment may also execute at least one agent associated with the integration runtime, the at least one agent being programmed to monitor usage of a first cloud environment resource by at least one integration flow of the plurality of integration flows. The cloud environment may also execute an integration inspect service to receive, from the at least one agent, resource usage data describing use of the first cloud environment resource by the first integration flow.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
46.
SYSTEMS AND METHODS FOR ODATA API PERFORMANCE TESTING
According to some embodiments, systems and methods are provided including an Application Programming Interface (API) source; a memory storing processor-executable program code; and a processing unit to execute the processor-executable program code to cause the system to: receive an API from the API source; insert one or more parameters into an endpoint of the API; execute, for a plurality of iterations, the API on a target system; receive performance data based on each of the plurality of executions of the API and the inserted one or more parameters; receive API information based on the inserted one or more parameters and an execution of the API; and display the performance information on a graphical user interface. Numerous other aspects are provided.
Provided is a system that can validate a permission of a user with respect to data based on a hash value generated from a permission object. The hash value may be hashed more than once during the validation process. In one example, the method may include storing application data in a data store, receiving a request to access the application data within the data store, the request comprising an identifier of a user and a hash value, retrieving a permissions object of the user and hashing fields of data within the permission object to generate a locally-generated hash value, determining whether or not the locally-generated hash value is a match to the hash value in the received request, and in response to the determination that the locally-generated hash value is the match, granting permission to the application data in the data store.
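The validation flow can be sketched as follows. The field layout, the fixed field ordering, and the use of SHA-256 applied twice are assumptions chosen for illustration, not details from the disclosure.

```python
import hashlib

# Minimal sketch: validate a request by re-hashing the stored permission
# object locally and comparing with the hash presented in the request.
def hash_permissions(permission_obj):
    # Serialize the permission fields in a fixed (sorted) order, then hash.
    fields = "|".join(f"{k}={permission_obj[k]}" for k in sorted(permission_obj))
    first = hashlib.sha256(fields.encode()).hexdigest()
    # The hash value may be hashed more than once during validation.
    return hashlib.sha256(first.encode()).hexdigest()

def validate_request(request_hash, stored_permission_obj):
    # Grant access only if the locally-generated hash matches the request.
    return request_hash == hash_permissions(stored_permission_obj)
```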
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
48.
ENCRYPTED HANDSHAKE FOR TRUST VALIDATION BETWEEN TWO APPLICATIONS
In an example embodiment, a framework is provided that provides a secure mechanism to limit misuse of licensed applications. Specifically, a mutual handshake is established, using existing properties of a requesting application, and wraps objects with dynamic parameters, such as a current timestamp, to perform masking, hashing, and encryption for the handshake.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Various embodiments for a user interface modification and recommendations system are described herein. An embodiment operates by displaying, in a first section of a user interface, a plurality of saved/favorite apps of a first user based on a list of saved/favorite apps of the first user. A list of authorized apps is generated from the first list of apps by comparing a user profile of the first user with permissions provided by the client system. A final list of recommended apps is generated by copying a preliminary list and removing all the saved/favorite apps of the first user. The first section of the user interface is updated by adding at least one app from the final list of recommended apps to the first section.
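The list derivation reduces to two filtering steps, roughly as in this hypothetical sketch: keep only authorized apps, then drop apps the user has already saved or favorited.

```python
# Illustrative sketch: deriving the final list of recommended apps.
def final_recommendations(preliminary, saved_favorites, authorized):
    saved = set(saved_favorites)
    # Keep only apps the user is authorized to see, then remove all
    # saved/favorite apps of the user from the recommendation list.
    return [app for app in preliminary if app in authorized and app not in saved]
```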
Embodiments of the present disclosure include techniques for generating code. Input code is received from a user. The input code may not conform to a particular policy. The input code may be used to retrieve corresponding policies relevant to the code. In some embodiments, the input code may have a particular version, and a schema corresponding to the code version may be retrieved. The input code, policy, and schema may be input to a large language model to generate modified code conforming to the policy and the schema, for example.
A system for migrating master data to a target system, comprising: at least one data processor; and at least one memory storing instructions that, when executed by the at least one data processor, result in operations comprising: extracting the master data from a source database; validating the extracted master data at a database layer; mapping the validated master data to target-database-specific datasets; and inserting the mapped master data into a target database, wherein the extraction, validation, mapping, and insertion are performed at the database layer.
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes receiving, from responder applications that participate in but do not initiate a data privacy integration protocol, end-of-purpose information for at least one object. The responders respond to protocol commands for executions of the protocol requested by a requester application. Identifying information for objects can be provided to each requester application in a message to the requester application that requests the requester application to determine whether the requester application currently stores the objects. At least one orphaned object can be identified from information in the responses received from the requester applications. An orphaned object is an object for which a responder application has provided end-of-purpose information but for which no requester application currently stores the object. Execution of the data privacy integration protocol can be triggered for each orphaned object.
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes providing an end-of-purpose query to applications in a landscape that requests each application to determine whether it is able to block an object. Votes are received from the applications, each being either a can-block vote that indicates that the application can block the object or a veto vote that indicates that the application cannot block the object. At least one relevant-application veto model is identified that models which applications can raise a relevant veto vote with respect to another application. Received end-of-purpose votes and the relevant-application veto models are evaluated to determine whether any applications should be block instruction recipients. If any block instruction recipients have been identified, a block instruction for the object is sent to each block instruction recipient.
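The evaluation of votes against relevant-application veto models might look like the following sketch. The data shapes (a vote map and a per-application set of relevant vetoers) are assumptions made for illustration.

```python
# Hypothetical sketch: evaluating end-of-purpose votes against
# relevant-application veto models to find block instruction recipients.
def block_recipients(votes, relevant_vetoers):
    """votes: {app: "can-block" | "veto"};
    relevant_vetoers: {app: set of apps whose veto is relevant for it}."""
    recipients = []
    for app, vote in votes.items():
        if vote != "can-block":
            continue  # an application that vetoed cannot be a recipient
        # The app receives a block instruction only if no application whose
        # veto is relevant for it has raised a veto vote.
        relevant = relevant_vetoers.get(app, set())
        if not any(votes.get(other) == "veto" for other in relevant):
            recipients.append(app)
    return recipients
```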
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
54.
LANDSCAPE RECONFIGURATION BASED ON CROSS-SYSTEM DATA STOCKTAKING
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes receiving, from multiple systems in a multi-system landscape, data stocktaking data regarding objects in respective systems. The data stocktaking data comprises, for each respective system, a list of objects under processing in the respective system and a list of objects not under processing in the respective system. The data stocktaking data is evaluated at a central monitoring system to determine at least one misconfiguration of a data privacy integration component that manages data privacy integration in the multi-system landscape. For each identified misconfiguration, a reconfiguration of the data privacy integration component is identified. The identified reconfiguration of the data privacy integration component is applied to correct the misconfiguration.
Embodiments of the present disclosure include techniques for backing up data. In one embodiment, a single buffer memory is allocated. Data pages are read from a datastore and loaded into a first portion of the single buffer memory. When the first portion of the single buffer memory is full, data from the datastore is loaded into a second portion of the single buffer memory while a plurality of jobs process data pages in parallel.
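The double-buffering idea can be sketched synchronously as below. In the disclosure, loading the second portion and the parallel job processing overlap in time; this simplified, single-threaded sketch only models that structure (alternating halves of one buffer), with names chosen for illustration.

```python
import itertools

# Illustrative sketch: one buffer split into two halves; while jobs process
# the pages in one half, the next pages are loaded into the other half.
def backup(datastore_pages, half_size, process_jobs):
    buffer = [None] * (2 * half_size)
    halves = [range(0, half_size), range(half_size, 2 * half_size)]
    active = 0
    processed = []
    it = iter(datastore_pages)
    chunk = list(itertools.islice(it, half_size))
    while chunk:
        for slot, page in zip(halves[active], chunk):
            buffer[slot] = page          # fill the active half
        # Load the next chunk destined for the other half; conceptually this
        # happens in parallel with the jobs processing the filled half.
        next_chunk = list(itertools.islice(it, half_size))
        processed.extend(process_jobs(chunk))
        active = 1 - active              # swap halves
        chunk = next_chunk
    return processed
```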
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
Embodiments of the present disclosure include techniques for backing up data. In one embodiment, a plurality of read requests are issued. In response to the read requests, a plurality of data pages are retrieved. The plurality of data pages are stored in a plurality of buffers. During said storing, for each data page, an indication is generated that storage of that data page has been completed. In response to an indication that storage of a particular data page has been completed, the data page is processed by one of a plurality of jobs, where a plurality of data pages are processed by the plurality of jobs in parallel.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
In an implementation of a computer-implemented method: to create extracted data records, an extract filter is instructed to extract relevant data records from log messages of two runs of a software pipeline. To create diff records using the extracted data records, a diff filter is instructed to compare and identify differences in messages between the two runs, where the diff records are amended with labeled data status information of the software pipeline run from which the extracted data records were taken. A recommendation engine is instructed to execute a machine-learning model training with the diff records. The recommendation engine is called to analyze the diff records for a failure-indicator. A determination is made that a failure causing the failure-indicator has been corrected in a later run of the software pipeline. A change is identified in a configuration or version of a software application associated with the correction. A failure-indicator-solution combination is generated.
Some embodiments provide a program that receives natural language input containing words. The words are associated with configurable user interface controls in a user interface comprising visual representations. The program further receives user input modifying a configuration of the visual representations. In response, visual representations are mapped to numeric values, which are then mapped to predefined natural language terms to generate a prompt consumable by a large language machine learning model. The prompt is sent to the large language machine learning model to produce content aligning with the prompt. In response, the large language machine learning model produces one or more output images and the program populates the user interface with a preview corresponding to the one or more output images.
A system associated with an enterprise cloud computing environment having an integration service may include an integration flow design guidelines data store that contains a plurality of electronic records, each record comprising an integration flow design guideline identifier and human-readable integration flow design guideline requirements. An integration flow design guideline validator may receive, from an integration developer, an integration flow model for the integration service defined in a standardized graphical notation protocol. The validator may then determine which integration flow design guideline requirements are applicable to the received integration flow model. The system automatically generates compliance results based on whether the received integration flow model complies with each applicable integration flow design guideline requirement using rule concept semantics. In addition, the validator may perform a non-compliance analysis, determine a non-compliance severity indication, generate at least one compliance recommendation, and/or provide an output to the integration developer.
A system for executing a multi-tenant application includes at least one processor and at least one memory storing program instructions. The multi-tenant application generates one or more page size recommendations and one or more sequential request count recommendations for one or more calls to an external database. The multi-tenant application performs a first call to the external database using a page size which is based on a first page size recommendation, where the page size specifies a number of records to retrieve from the external database. The multi-tenant application also performs, in a sequential manner by the multi-tenant application, the first call and a number of subsequent calls to the external database, where the number of subsequent calls is based on a first sequential request count recommendation. Related methods and computer program products are also provided.
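A minimal sketch of the paged, sequential call pattern follows, assuming a hypothetical `query_page(offset, limit)` call into the external database; the early-exit rule (stop when a short page comes back) is an illustrative assumption.

```python
# Illustrative sketch: a first call plus a recommended number of sequential
# follow-up calls, each fetching a recommended page size of records.
def fetch_records(query_page, page_size, sequential_count):
    """query_page(offset, limit) -> list of records for one call."""
    records = []
    # The first call and `sequential_count` subsequent calls, issued
    # sequentially rather than concurrently.
    for i in range(1 + sequential_count):
        page = query_page(offset=i * page_size, limit=page_size)
        records.extend(page)
        if len(page) < page_size:   # the external database has no more records
            break
    return records
```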
Briefly, embodiments of a system, method, and article for receiving a user input requesting that an initial application container compatible with a first programming model be modified to generate an adjusted application container compatible with a second programming model. The second programming model is different from the first programming model. A first set of development artifacts may be determined for the initial application container. A first subset of the first set of development artifacts may be automatically adjusted to enable compatibility with the second programming model. One or more behavior definitions associated with the automatically adjusted first subset may be automatically generated. The adjusted application container comprising the initial application container and the one or more behavior definitions may be generated.
A system and method include determination of an error in source program code associated with a source runtime environment, determination of a source code statement of the source program code associated with the error, determination of a similarity between the source code statement and each of a plurality of code statements associated with the source runtime environment, determination, based on the determined similarities, of one of the plurality of code statements which is similar to the source code statement, determination of a first name of a first code portion of the source program code which includes the determined one of the plurality of code statements, determination of a second code portion of program code associated with a target runtime environment and having the first name, and determination of a resolution to the error based on the first code portion and the second code portion.
The present disclosure involves systems, software, and computer implemented methods for integrated data privacy services. An example method includes receiving a request to initiate an aligned purpose disassociation protocol for a purpose for an object instance. A determination is made as to whether a timestamp is stored for the purpose and the object instance that indicates an earliest time that the purpose can be disassociated from the object instance. The request is accepted in response to determining that no timestamp is stored for the purpose and the object instance that is greater than the current time. A status request is sent to applications that requests a status response that indicates whether an application can disassociate the purpose from the object instance. Status responses are received from at least some of the applications. A disassociation decision for the purpose and the object instance is determined based on the received status responses.
Systems and methods described herein relate to the handling of resource-intensive computing jobs in a cloud-based job execution environment. An unexecuted computing job has a plurality of features. A resource intensity prediction is generated for the unexecuted computing job based on the features and on historical job data that classifies each of a plurality of executed computing jobs as either resource intensive or non-resource intensive. The resource intensity prediction indicates that the unexecuted computing job is predicted to be classified as resource intensive. A predicted resource intensity category of the unexecuted computing job is determined. Utilization data associated with one or more of a plurality of job execution destinations may be accessed. The unexecuted computing job may be assigned to a selected job execution destination from among the plurality of job execution destinations based on the predicted resource intensity category and the utilization data.
The present disclosure provides techniques and solutions for retrieving and presenting test analysis results. A central testing program includes connectors for connecting to one or more test management systems. Test data, such as test results in test logs, is retrieved from the one or more test management systems. For failed tests, failure reasons are extracted from the test data. Test results are presented to a user in a user interface, including presenting failure reasons. A link to a test log can also be provided. A user interface can provide functionality for causing a test to be reexecuted.
Systems and methods described herein relate to the real-time verification of data purges. A first subprocess of a data purging process is executed to purge a plurality of data items. A system accesses purge result data providing an indication of a result of the first subprocess. The system determines, based on the purge result data, that the first subprocess was not executed in accordance with a purge policy associated with the data purging process. In response to determining that the first subprocess was not executed in accordance with the purge policy, the system adjusts a state of the data purging process. A second subprocess of the data purging process is then executed according to the adjusted state.
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
67.
FINE-TUNABLE DISTILLED INTERMEDIATE REPRESENTATION FOR GENERATIVE ARTIFICIAL INTELLIGENCE
In an example embodiment, rather than using a large language model (LLM) to directly generate desired computer code, an intermediate representation is generated by the LLM. The LLM is used to generate the portion of the computer code that cannot be computed programmatically (which may be called the “creative” part for purposes of the present disclosure). The intermediate representation can then be fed into a separate programmatic component that compiles the intermediate representation into compilable computer code. The intermediate representation may also be fine-tuned; this fine-tuning may involve, for example, sanitizing the intermediate representation, enhancing the intermediate representation, and formatting the intermediate file, as well as modifying the intermediate representation based on a feature set.
Methods, systems, and computer-readable storage media for training a global matching ML model using a set of enterprise data associated with a set of enterprises, receiving a subset of enterprise data associated with an enterprise that is absent from the set of enterprises, fine-tuning the global matching ML model using the subset of enterprise data to provide a fine-tuned matching ML model, deploying the fine-tuned matching ML model for inference, receiving feedback on one or more inference results generated by the fine-tuned matching ML model, receiving synthetic data from an LLM system in response to at least a portion of the feedback, and fine-tuning one or more of the global matching ML model and the fine-tuned ML model using the synthetic data.
A computer implemented method can receive a condition expression for a query, parse the condition expression to identify parameter names and corresponding values, and evaluate validity of the parameter names and corresponding values. Responsive to finding that the parameter names and corresponding values are valid, the method can create a tree structure representing a logical relationship between the parameter names and corresponding values in a memory space, create a parameterized query comprising a modified condition expression which includes the parameter names and placeholders for the corresponding values, map the modified condition expression to a vector comprising values corresponding to the parameter names, and send the parameterized query and the vector to a query processing engine which pairs the parameter names in the modified condition expression with corresponding values contained in the vector when executing the parameterized query.
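The placeholder substitution at the heart of the parameterized query can be sketched as follows; the `?` placeholder syntax, the equality-only conditions, and the AND-joined expression are simplifying assumptions for illustration.

```python
# Simplified sketch: turning parsed (name, value) conditions into a
# modified condition expression with placeholders plus a value vector.
def parameterize(conditions):
    """conditions: list of (name, value) pairs already validated by the parser."""
    placeholders = []
    vector = []
    for name, value in conditions:
        # Replace each literal value with a positional placeholder; the
        # value itself goes into the vector at the matching position.
        placeholders.append(f"{name} = ?")
        vector.append(value)
    modified_expression = " AND ".join(placeholders)
    return modified_expression, vector
```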
Systems and processes for evaluating algorithms for aligning weakly-annotated data to recognized characters in a document are provided. In a method for evaluating an algorithm for aligning annotation data to recognized characters, strong annotations and weak-to-strong annotations, which are generated by applying a weak-to-strong annotation alignment algorithm, for a document are received and matched to generate respective pairs of matched annotations. For each pair of matched annotations, respective metrics are calculated including comparisons of aspects of the strong annotations to the weak-to-strong annotations. The respective metrics are aggregated, and an indication of the aggregated metrics is output to a graphical user interface or targeted application. Aggregated metrics determined for different weak-to-strong annotation alignment algorithms may be compared in order to select or adjust an algorithm to be used for Optical Character Recognition (OCR) operations.
A system and method include reception of first code associated with an on-premise runtime environment and a first programming language, identification of tokens associated with the first programming language in the first code, removal of state dependencies from the first code based on the identified tokens and on transformation data mapping one or more of the identified tokens to a respective code transformation, to generate second code, execution of performance transformations on the second code to generate third code, execution of functional tests on the third code, in response to a determination that the functional tests were passed, separation of the third code into functional units to generate fourth code, application of a security function to one or more of the functional units to generate fifth code, and deployment of the fifth code to a cloud-based runtime environment.
Some embodiments are directed to generating an executable data query. The query is configured for execution at a data source for the purpose of data retrieval therefrom. A machine learning model is applied to a query example to adjust the query example according to an input query, thus obtaining the executable data query.
A key protection framework for a platform includes a key protection engine for interfacing between an external key management system (KMS) and an external encryption service. A customer of the platform can select an existing external KMS and external encryption service to use with the framework. The key protection engine can onboard the external KMS with the platform by obtaining a configuration for the external KMS. Information extracted from the configuration can be used to establish a connection between the key protection engine and the external KMS, via which the key protection engine can interface with the external KMS to initiate rotation of a cryptographic key at the external KMS. Responsive to detection of a new version of a master key, the key protection engine can transmit a request to the external KMS to re-encrypt the cryptographic key with the new version of the master key.
The present disclosure relates to computer-implemented methods, software, and systems for extracting information from business documents based on training techniques to generate a document foundation model by pretraining. First training data based on a plurality of unlabeled documents is obtained for use in training a first model for document information extraction. The first model is pretrained according to a dynamic window adjustable to a word token count for each document of the plurality of unlabeled documents. The pretraining comprises evaluating word tokens in each of the plurality of unlabeled documents where masking is applied according to individual masking rates determined for the word tokens. The individual masking rates are indicative of respective informative relevance of the word tokens. The pretrained first model is provided for initializing a second document information extraction model to be trained based on labeled documents as second training data.
The present disclosure relates to computer-implemented methods, software, and systems for extracting information from documents based on training techniques to generate a document foundation model that is used to initialize a document information extraction model that is fine-tuned to business document specifics. A document information extraction model is initialized based on weights provided from a first pretrained model. Fine-tuning of the document information extraction model is performed based on labeled business documents as second training data. The labeled business documents are labeled and evaluated according to a virtual adversarial training (VAT). Based on the performed fine-tuning, a classifier for classification of information extraction is generated.
The present disclosure provides techniques and solutions for sorting data. In a particular implementation, a sorting technique is provided that places values in a sorted order by adding an offset value to values that are not in a sorted order. The resulting set of values is not truly sorted, in that the set of modified values is sorted but the underlying data itself is not. In another implementation, a sorting technique can use multiple streams or sets. When an out of order element is encountered, it can be added to a new stream, if such a stream is available. The sorting techniques can be used for a variety of purposes, including providing sorted data for use in generating summary data, or providing sorted data to be used in determining an intersection between two datasets.
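The offset idea in the first implementation can be sketched as follows. This is a minimal illustration, not the disclosed implementation; all names are hypothetical, and the assumption is that each out-of-order value is raised by just enough to keep the modified sequence nondecreasing while the offsets preserve recoverability of the original data.

```python
def offset_sort(values):
    """Return (modified, offsets), where `modified` is nondecreasing.

    An out-of-order value is bumped up by the minimal offset that
    restores order; subtracting each offset recovers the original value.
    """
    modified = []
    offsets = []
    for v in values:
        offset = 0
        if modified and v < modified[-1]:
            offset = modified[-1] - v  # minimal bump to keep order
        modified.append(v + offset)
        offsets.append(offset)
    return modified, offsets


mod, offs = offset_sort([3, 1, 4, 2, 5])
# mod is nondecreasing: [3, 3, 4, 4, 5]; originals recoverable as mod[i] - offs[i]
```

The modified values support order-dependent operations (such as merge-style intersection), even though, as the abstract notes, the underlying data itself is never reordered.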
The present disclosure provides techniques and solutions for determining whether a particular value is in a dataset using summary information. A sorted set of unique values is received. The sorted set of unique values includes gaps between at least certain values. The gaps are determined, and the set of unique values is represented as a gap filter. The gap filter includes a starting value of the set of unique values, a set of gap lengths, and identifiers indicating a number of unique values between respective gaps. The gap filter serves as summary information that can be used to determine whether a value may be present in the dataset. In at least some cases, the use of the summary information may provide false positive results. The representation of the gap filter can be modified to improve its compressibility, but this may increase the number of false positives produced by the gap filter.
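A gap filter along these lines can be sketched for integer values. This is an illustrative encoding under assumed conventions (runs of consecutive values paired with gap lengths), not the disclosed format; the exact encoding here gives no false positives, whereas a lossy, more compressible variant of the same structure would.

```python
def build_gap_filter(sorted_unique):
    """Encode sorted unique ints as (start, runs).

    `runs` is a list of (count, gap) pairs: `count` consecutive values,
    then a jump of `gap` from the run's last value to the next run's first.
    """
    start = sorted_unique[0]
    runs = []
    count, prev = 1, start
    for v in sorted_unique[1:]:
        if v == prev + 1:
            count += 1
        else:
            runs.append((count, v - prev))  # close run, record gap length
            count = 1
        prev = v
    runs.append((count, 0))  # final run has no trailing gap
    return start, runs


def might_contain(gap_filter, value):
    """Membership test against the summary; exact here, but a rounded-gap
    encoding would trade false positives for compressibility."""
    start, runs = gap_filter
    pos = start
    for count, gap in runs:
        if pos <= value < pos + count:
            return True
        pos = pos + (count - 1) + gap
    return False


f = build_gap_filter([1, 2, 3, 7, 8, 20])
# f == (1, [(3, 4), (2, 12), (1, 0)])
```

The filter stores only run counts and gap lengths, so it is much smaller than the dataset while still answering "could this value be present?" queries.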
Disclosed herein are system, method, and computer program product embodiments for compressing metadata in a Software-as-a-Service (SaaS) system. A metadata compression service operating on a computing device detects one or more global properties in entity metadata of each tenant in a plurality of tenants. The metadata compression service partitions the plurality of tenants into one or more groups and identifies one or more common properties in each group. The metadata compression service compiles the one or more global properties in a global-level list and the one or more common properties for each group in a group-level list. The metadata compression service obtains one or more tenant-specific properties in the entity metadata of each tenant in the plurality of tenants and defines a data structure of an entity object for the tenant using the global-level list, the group-level list for the group that contains the tenant, and the one or more tenant-specific properties.
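The three-level factoring (global, group, tenant-specific) can be sketched as set arithmetic. This is a simplified illustration assuming the group partition is already given; all names are hypothetical and not taken from the disclosure.

```python
def compress_metadata(tenant_props, groups):
    """Factor per-tenant property sets into three levels.

    tenant_props: {tenant_id: set of property names}
    groups: {group_id: list of tenant_ids}  (a partition of the tenants)
    Returns (global_props, group_props, tenant_specific).
    """
    # Properties shared by every tenant go in the global-level list.
    global_props = set.intersection(*tenant_props.values())
    group_props = {}
    tenant_specific = {}
    for g, members in groups.items():
        # Properties shared within the group, beyond the global ones.
        common = set.intersection(*(tenant_props[t] for t in members)) - global_props
        group_props[g] = common
        for t in members:
            # Whatever remains is stored per tenant.
            tenant_specific[t] = tenant_props[t] - global_props - common
    return global_props, group_props, tenant_specific
```

A tenant's entity object is then reconstructed as the union of the global list, its group's list, and its own residual set, so shared properties are stored once instead of once per tenant.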
Arrangements for an intelligent client copy tool are provided. In a client copy procedure, access to a target client may be locked and all target data associated with the target client may be deleted. A before trigger for execution before a modifying operation on a database table may be defined. The trigger may be executed and, based on the trigger identifying a query associated with the modifying operation, access to the database table may be locked and an insert operation may be executed. Then, the trigger may be deleted. Thereafter, the modifying operation on the target client may be performed and access to the database table unlocked. A database view of the database table, including pointers to the source client, may be generated. Nonstatic data may be copied from the source client to the target client using the insert operation. After the copying, the target client may be unlocked.
Embodiments are described for a code generating system comprising a memory and at least one processor coupled to the memory. The at least one processor is configured to receive an instruction to generate source code, the instruction including a user description and information of a code profile, and to retrieve the code profile from a profile manager. The at least one processor is further configured to generate a first command by combining the user description and the code profile and transmit the first command to an artificial intelligence (AI) proxy. Finally, the at least one processor is configured to receive the source code from the AI proxy and transmit the source code to a user device.
Provided is a system and method for evaluating the performance of a process using external process data, for example, from another similar process. In one example, the method may include generating a diagram of a process based on data from the process, where the diagram comprises a sequence of nodes that correspond to a sequence of events and edges between the sequence of nodes which indicate execution times between the events, displaying the diagram via a user interface of a software application, selecting a reference diagram of a reference process that includes a different sequence of nodes corresponding to a different sequence of events, identifying an improvement to the process based on the reference diagram, and modifying the diagram to include a different execution flow included in the reference diagram based on the identified improvement.
Methods and apparatus are disclosed for extracting structured content, as graphs, from text documents. Graph vertices and edges correspond to document tokens and pairwise relationships between tokens. Undirected peer relationships and directed relationships (e.g. key-value or composition) are supported. Vertices can be identified with predefined fields, and thence mapped to database columns for automated storage of document content in a database. A trained neural network classifier determines relationship classifications for all pairwise combinations of input tokens. The relationship classification can differentiate multiple relationship types. A multi-level classifier extracts multi-level graph structure from a document. Disclosed embodiments support arbitrary graph structures with hierarchical and planar relationships. Relationships are not restricted by spatial proximity or document layout. Composite tokens can be identified interspersed with other content. A single token can belong to multiple higher level structures according to its various relationships. Examples and variations are disclosed.
A large language model can be used to implement a service assistant. Natural language commands can be sent to the large language model, which identifies intents and responds with actions and API payloads. The command can then be implemented by an appropriate API call. The assistant can support actions that span a plurality of applications. A wide variety of human languages can be supported, and the large language model can maintain context between commands. Useful functionality such as prompting for missing parameters and the like can be supported.
The present disclosure relates to computer-implemented methods, software, and systems for implementing selection and distribution of tests to run over microservices executed on various infrastructure landscape types. A set of products that include microservices to be tested is determined. A set of infrastructure landscape types is determined for test executions for each respective product, so that each type is associated with a predefined probability of selection from the set corresponding to each product. For each iteration of a schedule of iterations for test executions for a respective product over a period of time, a respective infrastructure landscape type from a respective set of infrastructure landscape types for hosting each product from the set of products is selected, and a test from the set is executed over the respective product when the product is running on a selected infrastructure landscape type according to the selection.
The present disclosure relates to computer-implemented methods, software, and systems for test selection for execution over microservices in a cloud environment. Metadata of a set of changed files is obtained. The set of changed files is to be deployed in a software product and is stored at a source code repository. The metadata of the set of changed files and content of at least one of the changed files is analyzed, based on a rule set, to determine a subset of tests of a default test plan to be executed. The subset of tests is executed at a test landscape running a set of software components associated with the set of changed files.
Embodiments facilitate deployment of customized code at a local site, for reference by a service that is being called by a remote system. At a design time, a visual code editor (e.g., Blockly) is utilized to create and store customized code at the local site. During a subsequent runtime, in response to a dispatched service call initiated by the remote system, the customized code is retrieved and executed at the local site. By maintaining the customized code locally, embodiments confer security and avoid congestion associated with having the customized code stored remotely (with the remote system). This selective dispatch of a service call for handling by the local customized code can be implemented based upon an extension scheme.
Embodiments of the present disclosure include techniques for controlling access to electronic content. In one embodiment, a user generates content in an electronic document. The system retrieves the content and a profile for the user. A predictive engine determines an access control list comprising a plurality of entries based on the content and the profile. The access control list may be presented to the user, and the system receives a verification from the user of the plurality of entries in the access control list.
A computer-implemented method may comprise creating a first view of a first data source comprising a first online analytical processing (OLAP) cube based on a first user input, creating a second view of a second data source based on a second user input, combining the first view and the second view, and creating metadata objects for elements of the first view and the second view. The method may further comprise generating a query execution plan comprising a first native query and a second native query based on a user-defined query specification and the metadata objects, executing the first native query on the first data source to retrieve a first dataset from the first data source and the second native query on the second data source to retrieve a second dataset from the second data source, and generating a federated dataset using the first dataset and the second dataset.
Systems and methods described herein relate to the efficient handling of data purge requests in the context of a distributed storage system. A plurality of data purge requests is stored in a first data structure. The data purge requests may be grouped into batches that are processed at least partially in parallel. A first data purge request from the plurality of data purge requests is successfully processed, and is moved from the first data structure to a second data structure. Processing of a second data purge request from the plurality of data purge requests is unsuccessful. The second data purge request is retained in the first data structure. Purge status data is generated based on the first data purge request being in the second data structure and the second data purge request being in the first data structure. The purge status data may be presented at a user device.
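The two-structure bookkeeping described above can be sketched as follows. This is an illustrative model, not the disclosed system: the "first data structure" is a list of pending requests, the "second" a list of completed ones, and `purge_fn` is a hypothetical stand-in for the distributed purge operation.

```python
def process_purges(pending, purge_fn):
    """Attempt each purge request once.

    Successful requests move to `done` (the second data structure);
    failed requests remain pending (the first data structure) for retry.
    Returns (still_pending, done, status) where `status` summarizes
    each request for presentation at a user device.
    """
    done = []
    still_pending = []
    for req in pending:
        try:
            purge_fn(req)
            done.append(req)
        except Exception:
            still_pending.append(req)  # retained for a later retry
    status = {r: "purged" for r in done}
    status.update({r: "pending" for r in still_pending})
    return still_pending, done, status
```

Because membership in one structure or the other fully encodes each request's state, purge status reporting falls out of the bookkeeping with no extra tracking.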
The example embodiments are directed to systems and methods which can ship a viable software product to a customer in a very short amount of time and follow up the initial shipment with a more robust version of the software at a later time. In one example, a method may include storing multiple blueprints of a software application, wherein each blueprint comprises different code dependencies between code modules of the software application, receiving a request to run the software application from a computing device, identifying a most-recent blueprint, from among the multiple blueprints, which has fulfilled one or more prerequisites, and executing one or more code modules of the software application at the computing device based on dependencies between the one or more code modules included in the identified most-recent blueprint.
Methods, systems, and computer-readable storage media for receiving metric data of a cloud system periodically; transforming the metric data of each type into a byte array using mapping tables, wherein the byte array is an encoded format of the metric data, where each field of the metric data is encoded as a field ID and a field type ID that are short integer variables; merging and storing the byte arrays of multiple metric data into a binary file, wherein the binary file comprises multiple blocks with each block comprising multiple byte arrays; generating indexes for common fields of different metric data in the binary file; receiving a retrieval request requesting metric records including a common field of a particular value; determining storage locations of one or more metric records satisfying the retrieval request; and obtaining the one or more metric records from the binary file using the corresponding storage locations.
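The field-ID/type-ID encoding step can be sketched with Python's `struct` module. This is a minimal illustration under assumed conventions; the mapping tables, the 16-bit ID width, and the 8-byte value encodings are hypothetical choices, not the disclosed binary format.

```python
import struct

# Hypothetical mapping tables: field name -> field ID, value type -> type ID.
FIELD_IDS = {"cpu": 1, "mem": 2}
TYPE_IDS = {int: 1, float: 2}


def encode_metric(record):
    """Pack each field as (field_id: int16, type_id: int16, value)."""
    out = b""
    for name, value in record.items():
        fid = FIELD_IDS[name]
        tid = TYPE_IDS[type(value)]
        out += struct.pack("<hh", fid, tid)  # two short integers
        out += struct.pack("<q" if tid == 1 else "<d", value)
    return out


def decode_metric(data):
    """Invert encode_metric using the same mapping tables."""
    names = {v: k for k, v in FIELD_IDS.items()}
    rec, i = {}, 0
    while i < len(data):
        fid, tid = struct.unpack_from("<hh", data, i)
        i += 4
        fmt = "<q" if tid == 1 else "<d"
        (val,) = struct.unpack_from(fmt, data, i)
        i += 8
        rec[names[fid]] = val
    return rec
```

Replacing field names with short-integer IDs is what makes the byte arrays compact enough to merge into indexed binary blocks, since the string names live once in the mapping tables rather than in every record.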
Embodiments may be associated with a data source and a data service tool. A performance optimizer may determine a new type of data job to be executed based on a job execution parameter, perform a first execution of the new type of data job (such that data operations are performed at the data service tool), and collect first performance results. The performance optimizer then performs a second execution of the new type of data job (such that data operations are pushed down and performed at the data source) and collects second performance results. The first and second performance results are compared, and a result storage is updated with an indication of whether subsequent executions of the new type of data job will perform data operations at the data service tool or at the data source. The indication stored in the result storage may comprise, for example, a pushdown flag.
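The compare-and-store decision can be sketched as follows. This is an illustrative reduction, not the disclosed optimizer; it assumes the first and second performance results have already been collected (e.g., as elapsed seconds), and the names are hypothetical.

```python
def choose_execution_mode(job_type, tool_results, source_results, result_storage):
    """Compare the two trial executions and persist a pushdown flag.

    tool_results:   performance results with operations at the data service tool
    source_results: performance results with operations pushed down to the source
    Subsequent executions of `job_type` read the stored flag instead of re-measuring.
    """
    pushdown = source_results["elapsed"] < tool_results["elapsed"]
    result_storage[job_type] = {"pushdown": pushdown}
    return pushdown
```

Storing the flag per job *type* means the two-trial cost is paid once, after which every execution of that job type routes its data operations to whichever side measured faster.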
Systems and methods include acquisition of an asynchronous message from a message producer, the asynchronous message associated with a message consumer, determination that the asynchronous message matches a stored message, identification, in response to determining that the asynchronous message matches a stored message, of a stored error message associated with the stored message, and return of a return message based on the stored error message to the message producer.
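The match-and-return flow can be sketched in a few lines. This is a hypothetical illustration of the idea (short-circuiting known-bad messages back to the producer with the stored error), not the disclosed messaging system.

```python
def handle_async_message(message, stored_errors):
    """Return a stored error to the producer if the message matches.

    stored_errors: list of (stored_message, stored_error_message) pairs.
    Returns an error return message on a match, or None when the message
    should proceed to the consumer as usual.
    """
    for stored_msg, error in stored_errors:
        if message == stored_msg:
            return {"status": "error", "body": error}
    return None  # no match: deliver to the message consumer
```

The payoff is that a message known to fail is answered immediately from the stored error rather than being redelivered to a consumer that will reject it again.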
A system associated with data pipeline orchestration may include a data pipeline data store that contains, for each of a plurality of data pipelines, a series of data pipeline steps associated with a data pipeline use case. A data pipeline orchestration server may receive, from a data engineering operator, a selection of a data pipeline use case in the data pipeline data store. The data pipeline orchestration server may also receive first configuration information for the selected data pipeline use case and second configuration information, different than the first configuration information, for the selected data pipeline use case. The data pipeline orchestration server may then store representations of both the first configuration information and the second configuration information in connection with the selected data pipeline use case. Execution of the selected pipeline is then arranged in accordance with one of the first configuration information and the second configuration information.
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
SUPPLEMENTATION OF LARGE LANGUAGE MODEL KNOWLEDGE VIA PROMPT MODIFICATION
The present disclosure provides techniques and solutions for automatically and dynamically supplementing user prompts to large language models with information to be used by the large language model in formulating a response. In particular, entities are identified in the original prompt. A semantic framework is searched for information about such entities, and such information is added to the original user prompt to provide a modified user prompt. In a particular example, the identified entities comprise triples, and verbalized triples are added to provide the modified user prompt. The modified prompt may be hidden from the user, so that a response of the large language model appears to be in response to the original prompt.
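The entity-lookup-and-verbalize step can be sketched as follows. This is a minimal illustration under assumed conventions (the semantic framework modeled as a dict of triples, naive substring entity matching), not the disclosed system; the example fact is purely illustrative.

```python
def supplement_prompt(prompt, knowledge_graph):
    """Find known entities in the prompt, verbalize their triples,
    and prepend them as context hidden from the user.

    knowledge_graph: {entity: [(predicate, object), ...]}
    """
    facts = []
    for entity, triples in knowledge_graph.items():
        if entity.lower() in prompt.lower():  # naive entity identification
            for pred, obj in triples:
                facts.append(f"{entity} {pred} {obj}.")  # verbalized triple
    if not facts:
        return prompt  # nothing to supplement
    return "Context:\n" + "\n".join(facts) + "\n\nUser question: " + prompt
```

The modified prompt carries the verbalized triples to the model while the user still sees only the original question, so the model's answer appears to respond to the original prompt alone.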
A method includes receiving a message query from an entity identifier participating in a social network. The message query specifies one or more entities, one or more requirements, and one or more constraints. A set of message query parameters is generated based on the message query. A set of queries for a semantic graph of the social network is generated based on the set of message query parameters. The set of queries is applied to the semantic graph to obtain a set of query results. A message context of the entity identifier is determined based on the set of query results and the set of message query parameters. A set of messages from a message repository is determined based on the message context. The set of messages can be presented on a client computer associated with the entity identifier.
An ingress component may receive, from a client, an HTTP URL and HTTP header information for an incoming protocol message (e.g., an AS2 message). An endpoint selector may determine the HTTP header information along with an endpoint address associated with the incoming protocol message. Based on the incoming HTTP header information and the endpoint address of the incoming protocol message, the endpoint selector may dynamically resolve an appropriate deployed endpoint and output an indication of the dynamically resolved appropriate deployed endpoint. A runtime component of an integration platform can then execute the incoming protocol message and interface with the appropriate deployed endpoint.
Certain aspects of the disclosure concern a computer-implemented method for improved data security in large language models. The method includes receiving a prompt query entered through a user interface, extracting a plurality of named entities from the prompt query and classifying the plurality of named entities into respective entity classes, tagging the plurality of named entities to be security compliant or security noncompliant based on the respective entity classes, and responsive to finding that one or more named entities are tagged to be security noncompliant, generating an alert on the user interface.
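The class-based tagging step can be sketched as follows. This is an illustrative model assuming named entities have already been extracted and classified upstream; the class policy and all names are hypothetical, not taken from the disclosure.

```python
# Hypothetical policy: entity classes considered security sensitive.
NONCOMPLIANT_CLASSES = {"PERSON", "CREDENTIAL", "INTERNAL_HOST"}


def check_prompt_entities(entities):
    """Tag extracted entities and collect alerts for noncompliant ones.

    entities: list of (entity_text, entity_class) pairs.
    Returns (tagged, alerts), where tagged entries are
    (text, class, is_compliant) triples.
    """
    tagged, alerts = [], []
    for text, cls in entities:
        compliant = cls not in NONCOMPLIANT_CLASSES
        tagged.append((text, cls, compliant))
        if not compliant:
            alerts.append(f"Security alert: '{text}' ({cls}) is not compliant")
    return tagged, alerts
```

Blocking on entity *class* rather than on specific strings lets the policy catch any credential or personal name the extractor recognizes, without maintaining a blocklist of individual values.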
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program takes a first snapshot of a first set of data stores configured to store data associated with a database system. After taking the first snapshot of the first set of data stores, the program further takes a second snapshot of a second set of data stores configured to store a set of encryption keys for a set of tenants of the database system. The program also transmits data included in the first snapshot of the first set of data stores to a secondary system. The program further transmits data included in the second snapshot of the second set of data stores to the secondary system.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Arrangements for deployment of updates to configuration templates correlated to software upgrades are provided. In some aspects, a content configuration upgrade may be initiated within a system landscape including a development system, a test system, and a production system. A transport request including content configuration upgrade data may be received, in an inactive state, at the development system. The content configuration upgrade data may be released to the test system via the transport request. The test system may be restricted from user interaction. The test system may be set to enable customizing using the test system. The content configuration upgrade data may be activated in the test system. In addition, the activating may cause configuration changes to be added to one or more database tables and a new transport request to be generated. The test system may be restored for user interaction with upgraded content configuration data.