Various examples are directed to systems and methods for generating a query of a database. An example method may comprise accessing a knowledge base data structure comprising a plurality of nodes and a plurality of edges and accessing a plurality of training samples comprising a plurality of positive training samples and a plurality of negative training samples. The example method may also comprise determining a subgraph comprising a subset of the plurality of nodes and a subset of the plurality of edges, the subset of the plurality of nodes comprising at least two nodes that are also part of the plurality of training samples, and executing a subgraph query of the knowledge base data structure, the subgraph query being based at least in part on the subgraph.
According to some embodiments, systems and methods are provided including executing a test, wherein the test evaluates a single object class; receiving a list of objects accessed by the test, wherein a test level classification is absent for each object included in the list; excluding one or more objects from the list of objects based on the object's inclusion in an exclusion object class; assigning the test level classification to the test based on the one or more non-excluded objects from the list; and storing the assigned test level classification with a test identifier. Numerous other aspects are provided.
Aspects relate to a computer implemented method, computer-readable media and a computer system for detecting data leakage and/or detecting dangerous information. The method comprises receiving a knowledge graph and extracting data from at least one network service. The method further comprises identifying statements in the extracted data. For each identified statement, the method further comprises determining whether the identified statement is public or private using the knowledge graph, and/or determining whether the identified statement is true or false using the knowledge graph.
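The knowledge-graph checks described above can be illustrated with a toy sketch. This is not the patented method: the graphs are hand-made sets of subject-predicate-object triples, and the names `public_kg`, `private_kg`, and `classify` are hypothetical.

```python
# Illustrative sketch: classifying extracted statements against a
# knowledge graph held as subject-predicate-object triples.

public_kg = {
    ("acme_corp", "headquartered_in", "berlin"),
    ("acme_corp", "founded_in", "1999"),
}
private_kg = {
    ("acme_corp", "acquiring", "beta_gmbh"),
}

def classify(statement):
    """Return (visibility, veracity) for a triple.

    A statement found in the public graph is public and true; one found
    only in the private graph is private (a potential leak if observed on
    a network service); a statement contradicting a known triple for the
    same subject and predicate is treated as false.
    """
    subj, pred, obj = statement
    if statement in public_kg:
        return ("public", True)
    if statement in private_kg:
        return ("private", True)
    # Contradiction check: same subject/predicate, different object.
    known = {o for (s, p, o) in public_kg | private_kg if (s, p) == (subj, pred)}
    if known and obj not in known:
        return ("unknown", False)
    return ("unknown", None)

# A private acquisition detail surfacing in extracted web data would be
# flagged as both private and true, i.e. a likely leak.
leak = classify(("acme_corp", "acquiring", "beta_gmbh"))
falsehood = classify(("acme_corp", "founded_in", "2005"))
```

A real system would of course need entity linking and fuzzy matching to map free-text statements onto graph triples; exact tuple membership stands in for that step here.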
Various examples are directed to systems and methods for determining table data from a document image depicting a plurality of words and at least one table comprising at least a portion of the plurality of words. For example, Optical Character Recognition (OCR) data may be determined based on the document image. A table detection model may be executed based at least in part on the OCR data.
Properties for an ontology, such as one used for semantic query execution, automated analytical reasoning, or machine learning, are determined using instance graphs. A corpus of documents is received, representing a plurality of domain instances of a domain. An instance graph is generated for each of the plurality of domain instances to provide a plurality of instance graphs. Properties represented in the plurality of instance graphs are determined. At least a portion of the properties are assigned to an ontology for the domain.
A computer-implemented method may comprise: receiving, from a software application, a first request to retrieve archived data from an archive, the first request being associated with a user of the software application; obtaining the archived data in a first format from the archive; transforming the archived data from the first format into a second format; extracting access control data from the archived data in the second format, the access control data defining one or more criteria for accessing the archived data; injecting the access control data into an access management system that is configured to control access to non-archived data; subsequent to the injecting the access control data, sending, to the access management system, a second request to evaluate access rights of the user; and based on the access management system evaluating the access rights of the user, sending the archived data in the second format to the software application.
Techniques and solutions are provided for improved query optimization, including for sub-portions of a query plan. A query is submitted to an inline query optimizer and at least one auxiliary query optimizer. The query is optimized by the inline query optimizer and the auxiliary query optimizer. A query processor can evaluate costs associated with query plans produced by the inline query optimizer and the auxiliary query optimizer and select a plan for execution that is most performant.
Systems and methods are provided for generating a combined list of attributes for at least one selected object by combining known attributes and a list of attributes for custom tables, determining a scrambling method for each attribute in the combined list of attributes for the at least one selected object, and scrambling each attribute of the combined list of attributes for the at least one selected object, according to the scrambling method for each attribute. The systems and methods further provide for generating a compliance report indicating what was changed in a system by the scrambling of each attribute and what scrambling methods were applied and allowing release of production data comprising the scrambled attributes for the at least one selected object, to a test system for use in testing functionality for an application or service.
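A minimal sketch of per-attribute scrambling with a compliance report follows. The scrambling methods (`hash`, `mask`, `clear`) and the report fields are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical scrambling methods keyed by name; a production system
# would select these per attribute type and compliance requirement.
import hashlib

SCRAMBLERS = {
    "hash":  lambda v: hashlib.sha256(v.encode()).hexdigest()[:12],
    "mask":  lambda v: v[0] + "*" * (len(v) - 1),
    "clear": lambda v: "",
}

def scramble_object(obj, method_per_attribute):
    """Scramble each attribute per its assigned method; return the
    scrambled copy plus a compliance report of what changed and how."""
    scrambled, report = {}, []
    for attr, value in obj.items():
        method = method_per_attribute.get(attr, "hash")
        scrambled[attr] = SCRAMBLERS[method](value)
        report.append({"attribute": attr, "method": method,
                       "changed": scrambled[attr] != value})
    return scrambled, report

customer = {"name": "Alice Example", "iban": "DE8937040044053201300"}
methods = {"name": "mask", "iban": "hash"}
safe_copy, compliance = scramble_object(customer, methods)
```

The report records, per attribute, which method was applied and whether the value actually changed, which is what a release-to-test gate would inspect before allowing the scrambled production data out.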
In a computer-implemented method for a context-aware personal application memory (PAM), an Application Memory Interface (AMIF) is used to create captured data from one or more software applications by capturing data related to user actions with the one or more software applications. Using the AMIF, the captured data is enhanced with metadata, data, and semantic relations to create enhanced data. The enhanced data is filtered using the AMIF to create filtered data. The filtered data is sent by the AMIF to the PAM.
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 40/40 - Processing or translation of natural language
Class definitions for an ontology of a domain are determined using a materialized instance graph, where the ontology is used for semantic query execution, automated analytical reasoning, or for machine learning. A plurality of instance graphs for a respective plurality of domain instances are received. A materialized instance graph is generated from the plurality of instance graphs. One or more communities represented in the materialized instance graph are determined. Properties associated with respective communities of the one or more communities are determined. Class definitions are generated, where a class corresponds to a community of the one or more communities and at least a portion of properties associated with the community. Class definitions are assigned to the ontology for the domain.
Systems and methods described herein relate to an event handling platform for sensor installations. An event message is received from an identified sensor installation. The identified sensor installation is associated with a facility. The event message is indicative of an event detected by the identified sensor installation. The event is validated against a schema of an identified event type from among a plurality of event types by comparing the event message to metadata of the identified event type. The metadata may be stored in an event metadata repository. The event message is transmitted to a software application associated with the facility to trigger an event reaction. Based on the event reaction, status data for the facility is caused to be presented at a user device accessing the software application.
Techniques for automating a software application subscription process are presented herein. In an example, in response to receiving a subscription request from a client device, a subscription automator extracts a plurality of parameters from the subscription request. Also, the subscription automator retrieves a given script template from an automation framework, where the given script template corresponds to a given cloud service application to which one or more users wish to subscribe. The subscription automator generates an executable script by populating the given script template with the plurality of parameters. Next, the subscription automator causes the executable script to be executed to initiate a subscription process of the given cloud service application. Then, the subscription automator creates subscriptions to the given cloud service application for the one or more users as a result of initiating the subscription process.
Systems and processes for managing and applying data classification labels are provided. In a method for managing data classification labels, a request may be received from a client application to fetch data classification labels from a policy server. Authentication information may be retrieved and passed to the policy server, and an access token may be received from the policy server based on a rights check performed using the authentication information. The access token may be provided to an interface for the policy server for use in generating a request for a list of data classification labels accessible via the access token, and the list of data classification labels may be received. Output data may be generated that is usable by the client application to present the list of data classification labels for selection by a user of the client application to classify data managed by the client application.
Systems and methods include reception of time-series data of a metric for each of a plurality of computer servers, determination, for each computer server, of a representative value of the metric based on the time-series data of the metric for the computer server, determination, for each computer server, of a fluctuation value of the metric based on the time-series data of the metric for the computer server, determination of a standard value of the metric based on the determined representative values, determination of a standard fluctuation value based on the determined fluctuation values, determination, for each computer server, of a difference value based on a difference between the standard value and the representative value for the computer server and a difference between the standard fluctuation value and the fluctuation value for the computer server, and identification of one or more anomalous computer servers based on the difference values.
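The anomaly scoring above can be sketched in a few lines. The abstract does not fix the statistics involved, so this sketch assumes, hypothetically, the median as the representative value, the standard deviation as the fluctuation value, and a simple sum of absolute differences as the score.

```python
# Sketch of anomalous-server identification from per-server time series.
import statistics

def find_anomalous_servers(series_per_server, threshold=2.0):
    # Representative and fluctuation value per server.
    reps = {s: statistics.median(v) for s, v in series_per_server.items()}
    flucts = {s: statistics.pstdev(v) for s, v in series_per_server.items()}
    # Standard values across the fleet.
    standard_rep = statistics.median(reps.values())
    standard_fluct = statistics.median(flucts.values())
    # Difference value per server: distance from both standards.
    diffs = {s: abs(standard_rep - reps[s]) + abs(standard_fluct - flucts[s])
             for s in series_per_server}
    return [s for s, d in diffs.items() if d > threshold]

metrics = {
    "srv-a": [50, 52, 51, 49],
    "srv-b": [51, 50, 52, 50],
    "srv-c": [90, 10, 95, 5],   # wildly fluctuating outlier
}
anomalous = find_anomalous_servers(metrics)
```

Combining a level difference with a fluctuation difference lets the score catch both servers that run hot on average and servers whose metric merely swings abnormally around a normal mean, as `srv-c` does here.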
Systems and methods include receipt of search terms, determination of an embedding for each of the search terms, generation of a composite embedding based on the determined embeddings, determination of similarities between the composite embedding and second composite embeddings associated with each of a plurality of hierarchical group codes, determination of a hierarchical group code of the plurality of hierarchical group codes based on the determined similarities, and generation of search results based on the search terms and the hierarchical group code.
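A toy version of this embedding flow is sketched below. A real system would use a learned embedding model; the tiny hand-made vectors, the group codes, and mean pooling as the composition step are all illustrative assumptions.

```python
# Sketch: compose term embeddings into a query embedding, then rank
# hierarchical group codes by cosine similarity to it.
import math

TERM_EMBEDDINGS = {            # hypothetical per-term embeddings
    "steel":  [1.0, 0.1, 0.0],
    "bolt":   [0.9, 0.2, 0.1],
    "office": [0.0, 0.1, 1.0],
}
GROUP_CODE_EMBEDDINGS = {      # hypothetical composite embedding per code
    "31.16": [1.0, 0.15, 0.05],   # e.g. fasteners
    "44.10": [0.05, 0.1, 1.0],    # e.g. office supplies
}

def composite(terms):
    """Mean-pool the term embeddings into one composite embedding."""
    vecs = [TERM_EMBEDDINGS[t] for t in terms]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_group_code(terms):
    query = composite(terms)
    return max(GROUP_CODE_EMBEDDINGS,
               key=lambda code: cosine(query, GROUP_CODE_EMBEDDINGS[code]))

code = best_group_code(["steel", "bolt"])
```

The selected code would then constrain the final search, so results are generated from both the literal terms and the inferred category.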
A primary database system loads database objects into a primary in-memory store according to a given format. The primary database captures, in replay logs, the loading of the database objects according to the given format. The primary database sends the replay logs to a secondary database system. In response to receiving a replay log, the secondary database checks the value of a log replay configuration parameter. If the configuration parameter is a first value, the secondary database replays the replay log to load the corresponding database objects into a secondary in-memory store according to a first format. If the configuration parameter is a second value, the secondary database replays the log to load the objects according to a second format, and if the configuration parameter is a third value, the secondary database replays the log to load the objects in a same format which was used by the primary database.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/25 - Integrating or interfacing systems involving database management systems
17.
Automatic extension of a partially-editable dataset copy
The present disclosure involves systems, software, and computer implemented methods for automatically extending a partially-editable dataset copy. One example method includes identifying, for a data set, extension filter criteria that extends a current filter that defines an editable portion of the data set. An extended filter is automatically generated for the data set based on the extension filter criteria and the current filter. Additional data is copied into the partially-editable copy of the data set based on the extended filter and the current filter to generate an updated partially-editable copy of the data set. The current filter is replaced with the extended filter to create a new current filter. An updated exposed view is generated using the new current filter that exposes the updated partially-editable copy of the data set and an updated non-editable portion of the data set.
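The extension step above can be sketched compactly if filters are modeled as plain predicates over rows. The function and argument names (`extend_editable_copy`, `current_filter`, `extension_filter`) are illustrative, not taken from the disclosure.

```python
# Sketch: extend a partially-editable dataset copy by an extension filter.

dataset = [{"id": i, "year": 2020 + i % 4} for i in range(8)]

def extend_editable_copy(data, editable_copy, current_filter, extension_filter):
    """Copy rows matched by the extension (but not the current) filter into
    the editable copy, and return the copy plus the new combined filter."""
    added = [row for row in data
             if extension_filter(row) and not current_filter(row)]
    new_filter = lambda row: current_filter(row) or extension_filter(row)
    return editable_copy + added, new_filter

current = lambda row: row["year"] == 2023      # editable portion so far
extension = lambda row: row["year"] == 2022    # rows to become editable
copy_2023 = [row for row in dataset if current(row)]

updated_copy, new_current = extend_editable_copy(dataset, copy_2023,
                                                 current, extension)
```

The combined predicate replaces the current filter, so an exposed view regenerated from `new_current` would show the enlarged editable portion while everything outside it remains non-editable.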
In an example embodiment, a framework is provided that provides a secure mechanism to limit misuse of licensed applications. Specifically, a mutual handshake is established, using existing properties of a requesting application, and wraps objects with dynamic parameters, such as a current timestamp, to perform masking, hashing, and encryption for the handshake.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Isolated environments for development of modules of a software system, such as an enterprise resource planning (ERP) system, can be generated using a container image generated from a copy of a central development environment. A graph-based machine learning model can be trained and applied to a graph of the software system to predict dependencies between the modules of the software system. An isolation forest machine learning model can be trained and applied to a selected module to verify its integrity. The container image can be modified based on the predicted dependencies and the integrity verification, among other factors. The modified container image can be executed to generate an isolated environment for the selected module. A version management utility and a transport system can be used during subsequent development in the isolated environment to manage and register repositories and objects associated with the isolated environment.
Various examples described herein are directed to systems and methods involving a database management system programmed to maintain a first database comprising data associated with a first organization of a business enterprise and a second database comprising data associated with a second organization of the business enterprise. An enterprise resource planning application may be programmed to access a source service order data structure from the first database, the source service order data structure describing a source service order to be completed by the first organization and comprising an identifier of the first organization and service item data describing at least one service item to be performed to complete the source service order. The enterprise resource planning application may generate a first inter-organization service order data structure describing a first inter-organization service order to be completed by the second organization.
This disclosure describes systems, software, and computer implemented methods for maintaining travel databases and providing improved search results from them. Implementations include querying a plurality of data sources using a plurality of extractors. The plurality of extractors can receive travel information from the plurality of data sources and populate a software object with the travel information to generate structured travel information, which can be submitted to an extraction queue. A data keeper can extract structured travel information of a particular category from the extraction queue and submit the structured travel information to a database queue. A canonical database manager (CDM) can extract the structured travel information of the particular category from the database queue.
Disclosed herein are system, method, and computer program product embodiments for a dynamic generation of a mesh service. An embodiment operates by receiving, by at least one processor, an input indicating a plurality of service containers, and retrieving a container image from a container repository responsive to the receiving of the input. The embodiment further operates by creating a new container image based on the container image and the plurality of service containers indicated in the input. In addition, the embodiment operates by creating a component by calling an application programming interface (API) of an orchestration platform. Then the embodiment operates by creating the mesh service based on the new container image and the component.
The present disclosure involves systems, software, and computer implemented methods for automating handling of data subject requests for data privacy integration protocols. One example method includes receiving a ticket for performing a data privacy integration protocol for a data subject. A work package that includes a work package parameter that is based on a ticket parameter is provided to responder applications. Processing of the work package by responder applications includes determining, for at least one object associated with the data subject, purposes associated with the object. The responder application determines, for each purpose, a purpose setting that corresponds to the work package parameter. The responder application processes the work package based on the work package parameter and the purpose settings and provides feedback to a data privacy integration service, which processes the feedback, to continue the data privacy integration protocol for the ticket.
Embodiments of the present disclosure include techniques for recovering data. In one embodiment, data is copied to a buffer. A plurality of processing functions receive the data in the buffer as data pages and perform processing operations. The processed data pages are then stored in persistent memory. The main memory is monitored so that the main memory of the database is maintained in a state such that a consistent flow of data may be written to persistent memory during the recovery process.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
25.
LANDSCAPE RECONFIGURATION BASED ON OBJECT ATTRIBUTE DIFFERENCES
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes receiving normalized and hashed object data for multiple landscape systems in a multi-system landscape. The normalized and hashed object data from different landscape systems is compared to identify, for at least one object, at least one difference in the normalized and hashed object data between landscape systems. At least one misconfiguration in the multi-system landscape is identified based on the at least one difference. A reconfiguration of the multi-system landscape is identified for correcting the misconfiguration, and the reconfiguration is applied in the multi-system landscape to correct the misconfiguration.
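The compare step can be sketched as follows: each system canonicalizes an object's configuration (stable key order) and hashes it, so only short digests need to cross system boundaries. The function names and the sample configurations are illustrative assumptions.

```python
# Sketch: detect cross-system object differences via normalized hashes.
import hashlib
import json

def normalize_and_hash(obj_config):
    """Normalize a configuration dict to a canonical JSON string
    (sorted keys) and hash it."""
    canonical = json.dumps(obj_config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_mismatched(hashes_per_system):
    """Return object ids whose hash differs from the first system's."""
    mismatched = set()
    systems = list(hashes_per_system)
    reference = hashes_per_system[systems[0]]
    for system in systems[1:]:
        for obj_id, digest in hashes_per_system[system].items():
            if reference.get(obj_id) != digest:
                mismatched.add(obj_id)
    return mismatched

system_a = {"retention_policy": normalize_and_hash({"days": 30, "unit": "d"}),
            "blocking_rule": normalize_and_hash({"active": True})}
system_b = {"retention_policy": normalize_and_hash({"unit": "d", "days": 30}),
            "blocking_rule": normalize_and_hash({"active": False})}

diffs = find_mismatched({"A": system_a, "B": system_b})
```

Note that normalization matters: the two retention policies differ only in key order and hash identically, while the genuinely divergent blocking rule surfaces as the candidate misconfiguration.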
Various examples are directed to systems and methods of managing an integration platform. A cloud environment may execute an integration runtime that runs a plurality of integration flows. A first integration flow of the plurality of integration flows may be configured to interface at least one message between a first software component and a second software component. The cloud environment may also execute at least one agent associated with the integration runtime, the at least one agent being programmed to monitor usage of a first cloud environment resource by at least one integration flow of the plurality of integration flows. The cloud environment may also execute an integration inspect service to receive, from the at least one agent, resource usage data describing use of the first cloud environment resource by the first integration flow.
According to some embodiments, systems and methods are provided including an Application Programming Interface (API) source; a memory storing processor-executable program code; and a processing unit to execute the processor-executable program code to cause the system to: receive an API from the API source; insert one or more parameters into an endpoint of the API; execute, for a plurality of iterations, the API on a target system; receive performance data based on each of the plurality of executions of the API and the inserted one or more parameters; receive API information based on the inserted one or more parameters and an execution of the API; and display the performance data and API information on a graphical user interface. Numerous other aspects are provided.
Provided is a system that can validate a permission of a user with respect to data based on a hash value generated from a permission object. The hash value may be hashed more than once during the validation process. In one example, the method may include storing application data in a data store, receiving a request to access the application data within the data store, the request comprising an identifier of a user and a hash value, retrieving a permissions object of the user and hashing fields of data within the permission object to generate a locally-generated hash value, determining whether or not the locally-generated hash value is a match to the hash value in the received request, and in response to the determination that the locally-generated hash value is the match, granting access to the application data in the data store.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
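The hash-matching validation above can be sketched briefly. This assumes, hypothetically, that the permission object's fields are serialized in a fixed order before hashing; the abstract does not specify the exact scheme, and the data here is invented.

```python
# Sketch: validate an access request by recomputing a permission hash
# locally and comparing it to the hash presented in the request.
import hashlib

PERMISSIONS = {  # server-side permission objects, keyed by user id
    "u-100": {"role": "analyst", "scope": "sales", "can_read": "true"},
}

def hash_permission(perm):
    """Hash the permission object's fields in a fixed (sorted) order."""
    material = "|".join(f"{k}={perm[k]}" for k in sorted(perm))
    return hashlib.sha256(material.encode()).hexdigest()

def validate_request(user_id, presented_hash):
    """Grant access only if the locally-generated hash matches."""
    perm = PERMISSIONS.get(user_id)
    if perm is None:
        return False
    return hash_permission(perm) == presented_hash

granted = validate_request("u-100", hash_permission(PERMISSIONS["u-100"]))
denied = validate_request("u-100", "deadbeef")
```

Because only the digest travels with the request, a tampered or stale permission claim fails the comparison without the permission fields themselves ever being exposed in transit.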
29.
USER INTERFACE MODIFIER BASED ON APP RECOMMENDATIONS
Various embodiments for a user interface modification and recommendations system are described herein. An embodiment operates by displaying, in a first section of a user interface, a plurality of saved/favorite apps of a first user based on a list of saved/favorite apps of the first user. A list of authorized apps is generated from the first list of apps by comparing a user profile of the first user with permissions provided by the client system. A final list of recommended apps is generated by copying a preliminary list and removing all the saved/favorite apps of the first user. The first section of the user interface is updated by adding at least one app from the final list of recommended apps to the first section.
Embodiments of the present disclosure include techniques for generating code. Input code is received from a user. The code may not conform to a particular policy. The input code may be used to retrieve corresponding policies relevant to the code. In some embodiments, the input code may have a particular version, and a schema corresponding to the code version may be retrieved. The input code, policy, and schema may be input to a large language model to generate modified code conforming to the policy and the schema, for example.
A system for migrating master data to a target system, comprising: at least one data processor; and at least one memory storing instructions which, when executed by the at least one data processor, result in operations comprising: extracting the master data from a source database; validating the extracted master data at a database layer; mapping the validated master data to target-database-specific datasets; and inserting the mapped master data into a target database, wherein the extraction, validation, mapping, and insertion are performed at the database layer.
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes receiving, from responder applications that participate in but do not initiate a data privacy integration protocol, end-of-purpose information for at least one object. The responders respond to protocol commands for executions of the protocol requested by a requester application. Identifying information for objects can be provided to each requester application in a message to the requester application that requests the requester application to determine whether the requester application currently stores the objects. At least one orphaned object can be identified from information in the responses received from the requester applications. An orphaned object is an object for which a responder application has provided end-of-purpose information but for which no requester application currently stores the object. Execution of the data privacy integration protocol can be triggered for each orphaned object.
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes providing an end-of-purpose query to applications in a landscape that requests an application to determine whether the application is able to block an object. Votes are received from applications that are either a can-block vote that indicates that the application can block the object or a veto vote that indicates that the application cannot block the object. At least one relevant-application veto model is identified that models which applications can raise a relevant veto vote with respect to another application. Received end-of-purpose votes and the relevant-application veto models are evaluated to determine whether any applications should be block instruction recipients. If any block instruction recipients have been identified, a block instruction for the object is sent to each block instruction recipient.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
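One illustrative reading of the vote-evaluation step in the end-of-purpose protocol above: an application becomes a block-instruction recipient only if it voted can-block and no application whose veto is modeled as relevant to it voted veto. The application names and the model encoding are hypothetical.

```python
# Sketch: evaluate end-of-purpose votes against relevant-application
# veto models to select block-instruction recipients.

votes = {"crm": "can-block", "billing": "veto", "archive": "can-block"}

# relevant_vetoes[app] = applications whose veto matters for that app.
relevant_vetoes = {
    "crm":     {"billing"},   # billing still needs the object
    "billing": set(),
    "archive": set(),         # no other application's veto is relevant
}

def block_recipients(votes, relevant_vetoes):
    recipients = []
    for app, vote in votes.items():
        if vote != "can-block":
            continue                         # a vetoing app never blocks
        relevant = relevant_vetoes.get(app, set())
        if any(votes.get(other) == "veto" for other in relevant):
            continue                         # a relevant application vetoed
        recipients.append(app)
    return sorted(recipients)

recipients = block_recipients(votes, relevant_vetoes)
```

Under this model, `crm` is held back by billing's relevant veto, while `archive`, whose veto model names no relevant applications, still receives a block instruction for the object.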
34.
LANDSCAPE RECONFIGURATION BASED ON CROSS-SYSTEM DATA STOCKTAKING
The present disclosure involves systems, software, and computer implemented methods for data privacy. One example method includes receiving, from multiple systems in a multi-system landscape, data stocktaking data regarding objects in respective systems. The data stocktaking data comprises, for each respective system, a list of objects under processing in the respective system and a list of objects not under processing in the respective system. The data stocktaking data is evaluated at a central monitoring system to determine at least one misconfiguration of a data privacy integration component that manages data privacy integration in the multi-system landscape. For each identified misconfiguration, a reconfiguration of the data privacy integration component is identified. The identified reconfiguration of the data privacy integration component is applied to correct the misconfiguration.
Embodiments of the present disclosure include techniques for backing up data. In one embodiment, a single buffer memory is allocated. Data pages are read from a datastore and loaded into a first portion of the single buffer memory. When the first portion of the single buffer memory is full, data from the datastore is loaded into a second portion of the single buffer memory while a plurality of jobs process data pages in parallel.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
Embodiments of the present disclosure include techniques for backing up data. In one embodiment, a plurality of read requests are issued. In response to the read requests, a plurality of data pages are retrieved. The plurality of data pages are stored in a plurality of buffers. During said storing, for each data page, an indication that storage of a particular data page of the plurality of data pages has been completed is generated. In response to an indication that storage of a particular data page has been completed, the data page is processed with one of a plurality of jobs, where a plurality of data pages are processed by the plurality of jobs in parallel.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
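The completion-driven pattern in the preceding backup abstract can be sketched with a thread pool standing in for the "plurality of jobs". The page contents and the checksum "processing" are stand-ins; a real implementation would issue asynchronous I/O and write to backup media.

```python
# Sketch: pages become available one by one (completion signalled via a
# queue), and a pool of jobs processes them in parallel.
from concurrent.futures import ThreadPoolExecutor
import queue
import zlib

def read_pages(n_pages):
    """Simulate retrieving pages from a datastore; each completed
    'store into a buffer' is signalled by putting the page on a queue."""
    completed = queue.Queue()
    for page_no in range(n_pages):
        completed.put((page_no, bytes([page_no]) * 64))
    return completed

def process_page(item):
    page_no, payload = item
    return page_no, zlib.crc32(payload)  # stand-in for backup processing

completed = read_pages(4)
with ThreadPoolExecutor(max_workers=3) as pool:  # jobs run in parallel
    results = list(pool.map(process_page,
                            (completed.get() for _ in range(4))))

checksums = dict(results)
```

The point of the indication-per-page design is that a job can start on any page the moment its buffer store completes, rather than waiting for a whole batch of reads to finish.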
In an implementation of a computer-implemented method: to create extracted data records, an extract filter is instructed to extract relevant data records from log messages of two runs of a software pipeline. To create diff records using the extracted data records, a diff filter is instructed to compare and identify differences in messages between the two runs, where the diff records are amended with labeled data status information of the software pipeline run from which the extracted data records were taken. A recommendation engine is instructed to execute a machine-learning model training with the diff records. The recommendation engine is called to analyze the diff records for a failure-indicator. A determination is made that a failure causing the failure-indicator has been corrected in a later run of the software pipeline. A change is identified in a configuration or version of a software application associated with a correction. A failure-indicator-solution combination is generated.
Some embodiments provide a program that receives natural language input containing words. The words are associated with configurable user interface controls in a user interface comprising visual representations. The program further receives user input modifying a configuration of the visual representations. In response, visual representations are mapped to numeric values, which are then mapped to predefined natural language terms to generate a prompt consumable by a large language machine learning model. The prompt is sent to the large language machine learning model to produce content aligning with the prompt. In response, the large language machine learning model produces one or more output images and the program populates the user interface with a preview corresponding to the one or more output images.
A system associated with an enterprise cloud computing environment having an integration service may include an integration flow design guidelines data store that contains a plurality of electronic records, each record comprising an integration flow design guideline identifier and human-readable integration flow design guideline requirements. An integration flow design guideline validator may receive, from an integration developer, an integration flow model for the integration service defined in a standardized graphical notation protocol. The validator may then determine which integration flow design guideline requirements are applicable to the received integration flow model. The system automatically generates compliance results based on whether the received integration flow model complies with each applicable integration flow design guideline requirement using rule concept semantics. In addition, the validator may perform a non-compliance analysis, determine a non-compliance severity indication, generate at least one compliance recommendation, and/or provide an output to the integration developer.
A system for executing a multi-tenant application includes at least one processor and at least one memory storing program instructions. The multi-tenant application generates one or more page size recommendations and one or more sequential request count recommendations for one or more calls to an external database. The multi-tenant application performs a first call to the external database using a page size which is based on a first page size recommendation, where the page size specifies a number of records to retrieve from the external database. The multi-tenant application also performs, in a sequential manner by the multi-tenant application, the first call and a number of subsequent calls to the external database, where the number of subsequent calls is based on a first sequential request count recommendation. Related methods and computer program products are also provided.
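The recommended paging behaviour can be sketched as below. The `call_db(offset, limit)` interface and the stand-in data are illustrative assumptions; the embodiment's external database calls are not specified at this level.

```python
# Minimal sketch of recommendation-driven sequential paging.

def fetch_records(call_db, page_size: int, request_count: int) -> list:
    """Perform a first call plus up to (request_count - 1) sequential
    follow-up calls, each retrieving at most page_size records."""
    records = []
    for i in range(request_count):
        batch = call_db(offset=i * page_size, limit=page_size)
        records.extend(batch)
        if len(batch) < page_size:  # external database exhausted
            break
    return records

# A stand-in "external database" of 25 records.
DATA = list(range(25))

def call_db(offset: int, limit: int) -> list:
    return DATA[offset : offset + limit]

# With a recommended page size of 10 and sequential request count of 4,
# the third call returns a short batch and the loop stops early.
print(len(fetch_records(call_db, page_size=10, request_count=4)))  # 25
```

Because the calls are issued sequentially by the multi-tenant application, the two recommendations directly trade latency (more round trips) against per-call load on the external database.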
Briefly, embodiments of a system, method, and article for receiving a user input requesting that an initial application container compatible with a first programming model be modified to generate an adjusted application container compatible with a second programming model. The second programming model is different from the first programming model. A first set of development artifacts may be determined for the initial application container. A first subset of the first set of development artifacts may be automatically adjusted to enable compatibility with the second programming model. One or more behavior definitions associated with the automatically adjusted first subset may be automatically generated. The adjusted application container comprising the initial application container and the one or more behavior definitions may be generated.
A system and method include determination of an error in source program code associated with a source runtime environment, determination of a source code statement of the source program code associated with the error, determination of a similarity between the source code statement and each of a plurality of code statements associated with the source runtime environment, determination, based on the determined similarities, of one of the plurality of code statements which is similar to the source code statement, determination of a first name of a first code portion of the source program code which includes the determined one of the plurality of code statements, determination of a second code portion of program code associated with a target runtime environment and having the first name, and determination of a resolution to the error based on the first code portion and the second code portion.
Systems and methods described herein relate to the handling of resource-intensive computing jobs in a cloud-based job execution environment. An unexecuted computing job has a plurality of features. A resource intensity prediction is generated for the unexecuted computing job based on the features and on historical job data that classifies each of a plurality of executed computing jobs as either resource intensive or non-resource intensive. The resource intensity prediction indicates that the unexecuted computing job is predicted to be classified as resource intensive. A predicted resource intensity category of the unexecuted computing job is determined. Utilization data associated with one or more of a plurality of job execution destinations may be accessed. The unexecuted computing job may be assigned to a selected job execution destination from among the plurality of job execution destinations based on the predicted resource intensity category and the utilization data.
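One way to read the prediction-and-assignment flow is sketched below. The nearest-neighbour prediction, the feature encoding, and the destination naming scheme are all illustrative assumptions; the embodiment does not commit to a particular model.

```python
# Hedged sketch: classify an unexecuted job from historical job data,
# then assign it to a destination using utilization data.

HISTORY = [  # (features, classified as resource intensive?)
    ({"rows": 1_000_000, "joins": 5}, True),
    ({"rows": 10_000, "joins": 1}, False),
    ({"rows": 500_000, "joins": 4}, True),
]

def predict_intensive(features: dict) -> bool:
    """Label the job like its closest historical neighbour."""
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a)
    nearest = min(HISTORY, key=lambda h: dist(features, h[0]))
    return nearest[1]

def assign_destination(intensive: bool, utilization: dict) -> str:
    """Send intensive jobs to the least-utilized heavy-duty destination,
    other jobs to the least-utilized lightweight destination."""
    prefix = "heavy" if intensive else "light"
    pool = [d for d in utilization if d.startswith(prefix)]
    return min(pool, key=utilization.get)

job = {"rows": 800_000, "joins": 6}
intensive = predict_intensive(job)
dest = assign_destination(intensive, {"heavy-1": 0.9, "heavy-2": 0.4, "light-1": 0.2})
print(intensive, dest)  # True heavy-2
```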
The present disclosure provides techniques and solutions for retrieving and presenting test analysis results. A central testing program includes connectors for connecting to one or more test management systems. Test data, such as test results in test logs, is retrieved from the one or more test management systems. For failed tests, failure reasons are extracted from the test data. Test results are presented to a user in a user interface, including presenting failure reasons. A link to a test log can also be provided. A user interface can provide functionality for causing a test to be reexecuted.
Systems and methods described herein relate to the real-time verification of data purges. A first subprocess of a data purging process is executed to purge a plurality of data items. A system accesses purge result data providing an indication of a result of the first subprocess. The system determines, based on the purge result data, that the first subprocess was not executed in accordance with a purge policy associated with the data purging process. In response to determining that the first subprocess was not executed in accordance with the purge policy, the system adjusts a state of the data purging process. A second subprocess of the data purging process is then executed according to the adjusted state.
In an example embodiment, rather than use a large language model (LLM) to directly generate the desired computer code, an intermediate representation is generated by the LLM. The LLM is used to generate the portion of the computer code that cannot be computed programmatically (which may be called the “creative” part for purposes of the present disclosure). The intermediate representation can then be fed into a separate programmatic component that compiles the intermediate representation into compilable computer code. This processing may involve, for example, sanitizing the intermediate representation, enhancing the intermediate representation, and formatting the intermediate representation, as well as modifying the intermediate representation based on a feature set.
Methods, systems, and computer-readable storage media for training a global matching ML model using a set of enterprise data associated with a set of enterprises, receiving a subset of enterprise data associated with an enterprise that is absent from the set of enterprises, fine tuning the global matching ML model using the subset of enterprise data to provide a fine-tuned matching ML model, deploying the fine-tuned matching ML model for inference, receiving feedback to one or more inference results generated by the fine-tuned matching ML model, receiving synthetic data from an LLM system in response to at least a portion of the feedback, and fine tuning one or more of the global matching ML model and the fine-tuned matching ML model using the synthetic data.
A computer implemented method can receive a condition expression for a query, parse the condition expression to identify parameter names and corresponding values, and evaluate validity of the parameter names and corresponding values. Responsive to finding that the parameter names and corresponding values are valid, the method can create a tree structure representing a logical relationship between the parameter names and corresponding values in a memory space, create a parameterized query comprising a modified condition expression which includes the parameter names and placeholders for the corresponding values, map the modified condition expression to a vector comprising values corresponding to the parameter names, and send the parameterized query and the vector to a query processing engine which pairs the parameter names in the modified condition expression with corresponding values contained in the vector when executing the parameterized query.
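The core transformation from a condition expression into a parameterized query plus a value vector can be sketched as follows. The grammar handled here (only `name = 'value'` pairs joined by AND) is an illustrative assumption; the embodiment supports richer logical relationships via the tree structure.

```python
# Hedged sketch: extract name/value pairs and replace literal values
# with placeholders, producing a value vector alongside.
import re

def parameterize(condition: str):
    """Split a condition into (modified_expression, names, value_vector)."""
    pairs = re.findall(r"(\w+)\s*=\s*'([^']*)'", condition)
    if not pairs:
        raise ValueError("no valid name/value pairs found")
    names = [name for name, _ in pairs]
    values = [value for _, value in pairs]
    # Replace each literal value with a placeholder, left to right.
    modified = condition
    for _, value in pairs:
        modified = modified.replace(f"'{value}'", "?", 1)
    return modified, names, values

expr = "city = 'Berlin' AND status = 'open'"
modified, names, values = parameterize(expr)
print(modified)  # city = ? AND status = ?
print(values)    # ['Berlin', 'open']
```

A query processing engine would then bind the vector's values back to the placeholders by position when executing the parameterized query.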
The present disclosure involves systems, software, and computer implemented methods for integrated data privacy services. An example method includes receiving a request to initiate an aligned purpose disassociation protocol for a purpose for an object instance. A determination is made as to whether a timestamp is stored for the purpose and the object instance that indicates an earliest time that the purpose can be disassociated from the object instance. The request is accepted in response to determining that no timestamp is stored for the purpose and the object instance that is greater than the current time. A status request is sent to applications that requests a status response that indicates whether an application can disassociate the purpose from the object instance. Status responses are received from at least some of the applications. A disassociation decision for the purpose and the object instance is determined based on the received status responses.
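The timestamp check and the status-response decision can be sketched as below. The in-memory timestamp store and the unanimous-consent decision rule are assumptions made for illustration; the embodiment leaves the decision logic open.

```python
# Sketch of accepting a disassociation request and deciding on it.
import time

EARLIEST_DISASSOCIATION = {}  # (purpose, object_id) -> earliest timestamp

def accept_request(purpose: str, object_id: str, now: float) -> bool:
    """Accept unless a stored timestamp lies in the future."""
    ts = EARLIEST_DISASSOCIATION.get((purpose, object_id))
    return ts is None or ts <= now

def disassociation_decision(status_responses: list) -> bool:
    """Disassociate only if every responding application agrees
    (an assumed rule; other policies are possible)."""
    return bool(status_responses) and all(status_responses)

# A purpose that cannot be disassociated for another hour.
EARLIEST_DISASSOCIATION[("marketing", "obj-1")] = time.time() + 3600
print(accept_request("marketing", "obj-1", time.time()))  # False
print(disassociation_decision([True, True, True]))        # True
```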
Systems and processes for evaluating algorithms for aligning weakly-annotated data to recognized characters in a document are provided. In a method for evaluating an algorithm for aligning annotation data to recognized characters, strong annotations and weak-to-strong annotations, which are generated by applying a weak-to-strong annotation alignment algorithm, for a document are received and matched to generate respective pairs of matched annotations. For each pair of matched annotations, respective metrics are calculated including comparisons of aspects of the strong annotations to the weak-to-strong annotations. The respective metrics are aggregated, and an indication of the aggregated metrics is output to a graphical user interface or targeted application. Aggregated metrics determined for different weak-to-strong annotation alignment algorithms may be compared in order to select or adjust an algorithm to be used for Optical Character Recognition (OCR) operations.
A system and method include reception of first code associated with an on-premise runtime environment and a first programming language, identification of tokens associated with the first programming language in the first code, removal of state dependencies from the first code based on the identified tokens and on transformation data mapping one or more of the identified tokens to a respective code transformation, to generate second code, execution of performance transformations on the second code to generate third code, execution of functional tests on the third code, in response to a determination that the functional tests were passed, separation of the third code into functional units to generate fourth code, application of a security function to one or more of the functional units to generate fifth code, and deployment of the fifth code to a cloud-based runtime environment.
Some embodiments are directed to generating an executable data query. The query is configured for execution at a data source for the purpose of data retrieval therefrom. A machine learning model is applied to a query example to adjust the query example according to an input query, thus obtaining the executable data query.
A key protection framework for a platform includes a key protection engine for interfacing between an external key management system (KMS) and an external encryption service. A customer of the platform can select an existing external KMS and external encryption service to use with the framework. The key protection engine can onboard the external KMS with the platform by obtaining a configuration for the external KMS. Information extracted from the configuration can be used to establish a connection between the key protection engine and the external KMS, via which the key protection engine can interface with the external KMS to initiate rotation of a cryptographic key at the external KMS. Responsive to detection of a new version of a master key, the key protection engine can transmit a request to the external KMS to re-encrypt the cryptographic key with the new version of the master key.
The present disclosure relates to computer-implemented methods, software, and systems for extracting information from business documents based on training techniques to generate a document foundation model by pretraining. First training data based on a plurality of unlabeled documents is obtained for use in training a first model for document information extraction. The first model is pretrained according to a dynamic window adjustable to a word token count for each document of the plurality of unlabeled documents. The pretraining comprises evaluating word tokens in each of the plurality of unlabeled documents where masking is applied according to individual masking rates determined for the word tokens. The individual masking rates are indicative of respective informative relevance of the word tokens. The pretrained first model is provided for initializing a second document information extraction model to be trained based on labeled documents as second training data.
The present disclosure relates to computer-implemented methods, software, and systems for extracting information from documents based on training techniques to generate a document foundation model that is used to initialize a document information extraction model that is fine-tuned to business document specifics. A document information extraction model is initialized based on weights provided from a first pretrained model. Fine-tuning of the document information extraction model is performed based on labeled business documents as second training data. The labeled business documents are labeled and evaluated according to a virtual adversarial training (VAT). Based on the performed fine-tuning, a classifier for classification of information extraction is generated.
The present disclosure provides techniques and solutions for sorting data. In a particular implementation, a sorting technique is provided that places values in a sorted order by adding an offset value to values that are not in a sorted order. The resulting sorted set of values is not truly sorted, in that the set of modified values is sorted, but the underlying data itself is not sorted. In another implementation, a sorting technique can use multiple streams or sets. When an out of order element is encountered, it can be added to a new stream, if such a stream is available. The sorting techniques can be used for a variety of purposes, including providing sorted data for use in generating summary data, or providing sorted data to be used in determining an intersection between two datasets.
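The multi-stream variant can be sketched as below: each out-of-order element is routed to another stream whose tail it does not violate, and the individually sorted streams can be merged on demand. The greedy placement rule and the stream budget are illustrative assumptions.

```python
# Sketch: split a sequence into sorted streams, then merge them.
import heapq

def split_into_streams(values, max_streams: int = 4):
    """Append each value to the first stream whose tail it does not
    violate; open a new stream when none fits."""
    streams: list[list] = []
    for v in values:
        for s in streams:
            if s[-1] <= v:
                s.append(v)
                break
        else:
            if len(streams) < max_streams:
                streams.append([v])
            else:
                raise ValueError("stream budget exhausted")
    return streams

def merged(streams):
    """Merge the individually sorted streams into one sorted sequence."""
    return list(heapq.merge(*streams))

streams = split_into_streams([1, 4, 2, 5, 3])
print(streams)          # [[1, 4, 5], [2, 3]]
print(merged(streams))  # [1, 2, 3, 4, 5]
```

Each stream stays sorted by construction, so downstream consumers such as intersection or summary routines can read any stream, or the lazily merged sequence, in order.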
The present disclosure provides techniques and solutions for determining whether a particular value is in a dataset using summary information. A sorted set of unique values is received. The sorted set of unique values includes gaps between at least certain values. The gaps are determined, and the set of unique values is represented as a gap filter. The gap filter includes a starting value of the set of unique values, a set of gap lengths, and identifiers indicating a number of unique values between respective gaps. The gap filter serves as summary information that can be used to determine whether a value may be present in the dataset. In at least some cases, the use of the summary information may provide false positive results. The representation of the gap filter can be modified to improve its compressibility, but may increase the number of false positives produced by the gap filter.
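A simplified gap filter can be sketched as below: a starting value plus, for each run of consecutive values, its length and the gap to the next run. Note this simplified encoding is exact; the embodiment's compressibility-oriented modifications are what introduce false positives.

```python
# Hypothetical sketch of a gap filter over a sorted set of unique ints.

def build_gap_filter(sorted_unique):
    """Encode as (start, [(run_length, gap_length), ...])."""
    start = sorted_unique[0]
    runs = []
    run_len, prev = 1, start
    for v in sorted_unique[1:]:
        if v == prev + 1:
            run_len += 1
        else:
            runs.append((run_len, v - prev - 1))  # gap between runs
            run_len = 1
        prev = v
    runs.append((run_len, 0))  # final run, no trailing gap
    return start, runs

def maybe_contains(gap_filter, value) -> bool:
    """True if value falls inside one of the encoded runs."""
    pos, runs = gap_filter
    for run_len, gap_len in runs:
        if pos <= value < pos + run_len:
            return True
        pos += run_len + gap_len
    return False

gf = build_gap_filter([3, 4, 5, 9, 10])
print(gf)                     # (3, [(3, 3), (2, 0)])
print(maybe_contains(gf, 4))  # True
print(maybe_contains(gf, 7))  # False
```

Widening runs or merging small gaps would shrink the encoding (improving compressibility) at the cost of accepting values inside the merged gaps, i.e. false positives.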
Disclosed herein are system, method, and computer program product embodiments for compressing metadata in a Software-as-a-Service (SaaS) system. A metadata compression service operating on a computing device detects one or more global properties in entity metadata of each tenant in a plurality of tenants. The metadata compression service partitions the plurality of tenants into one or more groups and identifies common properties in each group. The metadata compression service compiles the one or more global properties in a global-level list and the one or more common properties for each group in a group-level list. The metadata compression service obtains one or more tenant-specific properties in the entity metadata of each tenant in the plurality of tenants and defines a data structure of an entity object for the tenant using the global-level list, the group-level list for the group that contains the tenant, and the one or more tenant-specific properties.
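The three-level factoring of properties can be sketched with set operations. The tenant metadata and grouping below are made-up examples used only to show the partitioning.

```python
# Sketch: factor tenant entity metadata into global, group-common,
# and tenant-specific property lists.

TENANTS = {
    "t1": {"id", "name", "region", "vip_flag"},
    "t2": {"id", "name", "region"},
    "t3": {"id", "name", "locale"},
}
GROUPS = {"eu": ["t1", "t2"], "apac": ["t3"]}

# Global properties appear in every tenant's entity metadata.
global_props = set.intersection(*TENANTS.values())

# Group-common properties appear in every tenant of the group,
# beyond the global ones.
group_props = {
    g: set.intersection(*(TENANTS[t] for t in members)) - global_props
    for g, members in GROUPS.items()
}

# Whatever remains is tenant-specific.
def tenant_specific(tenant: str, group: str) -> set:
    return TENANTS[tenant] - global_props - group_props[group]

print(sorted(global_props))                 # ['id', 'name']
print(sorted(group_props["eu"]))            # ['region']
print(sorted(tenant_specific("t1", "eu")))  # ['vip_flag']
```

An entity object's data structure for a tenant is then reconstructed as the union of the global-level list, its group-level list, and its tenant-specific properties, so shared properties are stored once rather than per tenant.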
Arrangements for an intelligent client copy tool are provided. In a client copy procedure, access to a target client may be locked and all target data associated with the target client may be deleted. A before trigger for execution before a modifying operation on a database table may be defined. The trigger may be executed and, based on the trigger identifying a query associated with the modifying operation, access to the database table may be locked and an insert operation may be executed. Then, the trigger may be deleted. Thereafter, the modifying operation on the target client may be performed and access to the database table unlocked. A database view of the database table, including pointers to the source client, may be generated. Nonstatic data may be copied from the source client to the target client using the insert operation. After the copying, the target client may be unlocked.
Embodiments are described for a code generating system comprising a memory and at least one processor coupled to the memory. The at least one processor is configured to receive an instruction to generate source code, the instruction including a user description and information of a code profile, and to retrieve the code profile from a profile manager. The at least one processor is further configured to generate a first command by combining the user description and the code profile and transmit the first command to an artificial intelligence (AI) proxy. Finally, the at least one processor is configured to receive the source code from the AI proxy and transmit the source code to a user device.
Provided is a system and method for evaluating the performance of a process using external process data, for example, from another similar process. In one example, the method may include generating a diagram of a process based on data from the process, where the diagram comprises a sequence of nodes that correspond to a sequence of events and edges between the sequence of nodes which indicate execution times between the events, displaying the diagram via a user interface of a software application, selecting a reference diagram of a reference process that includes a different sequence of nodes corresponding to a different sequence of events, identifying an improvement to the process based on the reference diagram, and modifying the diagram to include a different execution flow included in the reference diagram based on the identified improvement.
A large language model can be used to implement a service assistant. Natural language commands can be sent to the large language model, which identifies intents and responds with actions and API payloads. The command can then be implemented by an appropriate API call. The assistant can support actions that span a plurality of applications. A wide variety of human languages can be supported, and the large language model can maintain context between commands. Useful functionality such as prompting for missing parameters and the like can be supported.
The present disclosure relates to computer-implemented methods, software, and systems for implementing selection and distribution of tests to run over microservices executed on various infrastructure landscape types. A set of products that include microservices to be tested is determined. A set of infrastructure landscape types are determined for test executions for each respective product so that each type is associated with a predefined probability of selection from each set corresponding to each product. For each iteration of a schedule of iterations for test executions for a respective product over a period of time, a respective infrastructure landscape type from a respective set of infrastructure landscape types for hosting each product from the set of products is selected, and a test from the set is executed over the respective product when the product is running on a selected infrastructure landscape type according to the selection.
The present disclosure relates to computer-implemented methods, software, and systems for test selection for execution over microservices in a cloud environment. Metadata of a set of changed files is obtained. The set of changed files is to be deployed in a software product and is stored at a source code repository. The metadata of the set of changed files and content of at least one of the changed files is analyzed, based on a rule set, to determine a subset of tests of a default test plan to be executed. The subset of tests is executed at a test landscape running a set of software components associated with the set of changed files.
Embodiments facilitate deployment of customized code at a local site, for reference by a service that is being called by a remote system. At a design time, a visual code editor (e.g., Blockly) is utilized to create and store customized code at the local site. During a subsequent runtime, in response to a dispatched service call initiated by the remote system, the customized code is retrieved and executed at the local site. By maintaining the customized code locally, embodiments confer security and avoid congestion associated with having the customized code stored remotely (with the remote system). This selective dispatch of a service call for handling by the local customized code can be implemented based upon an extension scheme.
Embodiments of the present disclosure include techniques for controlling access to electronic content. In one embodiment, a user generates content in an electronic document. The system retrieves the content and a profile for the user. A predictive engine determines an access control list comprising a plurality of entries based on the content and the profile. The access control list may be presented to the user, and the system receives a verification from the user of the plurality of entries in the access control list.
Methods and apparatus are disclosed for extracting structured content, as graphs, from text documents. Graph vertices and edges correspond to document tokens and pairwise relationships between tokens. Undirected peer relationships and directed relationships (e.g. key-value or composition) are supported. Vertices can be identified with predefined fields, and thence mapped to database columns for automated storage of document content in a database. A trained neural network classifier determines relationship classifications for all pairwise combinations of input tokens. The relationship classification can differentiate multiple relationship types. A multi-level classifier extracts multi-level graph structure from a document. Disclosed embodiments support arbitrary graph structures with hierarchical and planar relationships. Relationships are not restricted by spatial proximity or document layout. Composite tokens can be identified interspersed with other content. A single token can belong to multiple higher level structures according to its various relationships. Examples and variations are disclosed.
A computer-implemented method may comprise creating a first view of a first data source comprising a first online analytical processing (OLAP) cube based on a first user input, creating a second view of a second data source based on a second user input, combining the first view and the second view, and creating metadata objects for elements of the first view and the second view. The method may further comprise generating a query execution plan comprising a first native query and a second native query based on a user-defined query specification and the metadata objects, executing the first native query on the first data source to retrieve a first dataset from the first data source and the second native query on the second data source to retrieve a second dataset from the second data source, and generating a federated dataset using the first dataset and the second dataset.
Systems and methods described herein relate to the efficient handling of data purge requests in the context of a distributed storage system. A plurality of data purge requests is stored in a first data structure. The data purge requests may be grouped into batches that are processed at least partially in parallel. A first data purge request from the plurality of data purge requests is successfully processed, and is moved from the first data structure to a second data structure. Processing of a second data purge request from the plurality of data purge requests is unsuccessful. The second data purge request is retained in the first data structure. Purge status data is generated based on the first data purge request being in the second data structure and the second data purge request being in the first data structure. The purge status data may be presented at a user device.
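The two-structure bookkeeping can be sketched as follows, assuming simple in-memory lists stand in for the distributed data structures.

```python
# Sketch: successfully purged requests move to a second structure;
# failed requests are retained in the first for retry.

def process_purges(pending: list, purge_fn):
    """Attempt each pending purge; return (still_pending, done)."""
    done, still_pending = [], []
    for request in pending:
        if purge_fn(request):
            done.append(request)           # second data structure
        else:
            still_pending.append(request)  # stays in first structure
    return still_pending, done

def purge_status(pending: list, done: list) -> dict:
    """Summarize progress for presentation at a user device."""
    return {"purged": len(done), "outstanding": len(pending)}

# A stand-in purge function that fails for one request.
requests = ["r1", "r2", "r3"]
pending, done = process_purges(requests, lambda r: r != "r2")
print(purge_status(pending, done))  # {'purged': 2, 'outstanding': 1}
```

Because status is derived purely from which structure each request sits in, no separate progress log is needed, and retry batches can be formed directly from the first structure.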
The example embodiments are directed to systems and methods which can ship a viable software product to a customer in a very short amount of time and follow up the initial shipment with a more robust version of the software at a later time. In one example, a method may include storing multiple blueprints of a software application, wherein each blueprint comprises different code dependencies between code modules of the software application, receiving a request to run the software application from a computing device, identifying a most-recent blueprint, from among the multiple blueprints, which has fulfilled one or more prerequisites, and executing one or more code modules of the software application at the computing device based on dependencies between the one or more code modules included in the identified most-recent blueprint.
Methods, systems, and computer-readable storage media for receiving metric data of a cloud system periodically; transforming the metric data of each type into a byte array using mapping tables, wherein the byte array is an encoded format of the metric data, where each field of the metric data is encoded as a field ID and a field type ID that are short integer variables; merging and storing the byte arrays of multiple metric data into a binary file, wherein the binary file comprises multiple blocks with each block comprising multiple byte arrays; generating indexes for common fields of different metric data in the binary file; receiving a retrieval request requesting metric records including a common field of a particular value; determining storage locations of one or more metric records satisfying the retrieval request; and obtaining the one or more metric records from the binary file using the corresponding storage locations.
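The per-field encoding can be sketched with `struct`. The mapping tables and the choice of little-endian short integers plus 8-byte values are illustrative assumptions; the embodiment only specifies that field IDs and field type IDs are short integer variables.

```python
# Hedged sketch: pack one metric record into a byte array of
# (field ID, field type ID, value) entries.
import struct

FIELD_IDS = {"cpu": 1, "mem": 2}     # short-integer field IDs
TYPE_IDS = {"float": 10, "int": 11}  # short-integer field type IDs

def encode_metric(metric: dict) -> bytes:
    """Pack the metric's fields in sorted field-name order."""
    out = b""
    for name, value in sorted(metric.items()):
        if isinstance(value, float):
            out += struct.pack("<hhd", FIELD_IDS[name], TYPE_IDS["float"], value)
        else:
            out += struct.pack("<hhq", FIELD_IDS[name], TYPE_IDS["int"], value)
    return out

encoded = encode_metric({"cpu": 0.75, "mem": 2048})
print(len(encoded))  # 24: two fields, each 2 + 2 + 8 bytes
```

Multiple such byte arrays would then be concatenated into blocks of a binary file, with indexes over common fields enabling the storage-location lookup described above.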
Embodiments may be associated with a data source and a data service tool. A performance optimizer may determine a new type of data job to be executed based on a job execution parameter, perform a first execution of the new type of data job (such that data operations are performed at the data service tool), and collect first performance results. The performance optimizer then performs a second execution of the new type of data job (such that data operations are pushed down and performed at the data source) and collects second performance results. The first and second performance results are compared, and a result storage is updated with an indication of whether subsequent executions of the new type of data job will perform data operations at the data service tool or at the data source. The indication stored in the result storage may comprise, for example, a pushdown flag.
Systems and methods include acquisition of an asynchronous message from a message producer, the asynchronous message associated with a message consumer, determination that the asynchronous message matches a stored message, identification, in response to determining that the asynchronous message matches a stored message, of a stored error message associated with the stored message, and return of a return message based on the stored error message to the message producer.
A system associated with data pipeline orchestration may include a data pipeline data store that contains, for each of a plurality of data pipelines, a series of data pipeline steps associated with a data pipeline use case. A data pipeline orchestration server may receive, from a data engineering operator, a selection of a data pipeline use case in the data pipeline data store. The data pipeline orchestration server may also receive first configuration information for the selected data pipeline use case and second configuration information, different than the first configuration information, for the selected data pipeline use case. The data pipeline orchestration server may then store representations of both the first configuration information and the second configuration information in connection with the selected data pipeline use case. Execution of the selected pipeline is then arranged in accordance with one of the first configuration information and the second configuration information.
The present disclosure provides techniques and solutions for automatically and dynamically supplementing user prompts to large language models with information to be used by the large language model in formulating a response. In particular, entities are identified in the original prompt. A semantic framework is searched for information about such entities, and such information is added to the original user prompt to provide a modified user prompt. In a particular example, the identified entities comprise triples, and verbalized triples are added to provide the modified user prompt. The modified prompt may be hidden from the user, so that a response of the large language model appears to be in response to the original prompt.
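The supplementation step can be sketched as below. The triple store, the substring-based entity matcher, and the "Context:" formatting are all assumptions made for the example; the embodiment searches a semantic framework rather than a hard-coded list.

```python
# Sketch: verbalize knowledge-graph triples for entities found in the
# user prompt and append them as hidden context.

TRIPLES = [
    ("SalesOrder", "has_field", "delivery_date"),
    ("SalesOrder", "belongs_to", "Customer"),
    ("Invoice", "references", "SalesOrder"),
]

def verbalize(triple) -> str:
    subject, predicate, obj = triple
    return f"{subject} {predicate.replace('_', ' ')} {obj}."

def supplement_prompt(user_prompt: str) -> str:
    """Append verbalized triples whose subject appears in the prompt."""
    facts = [
        verbalize(t) for t in TRIPLES
        if t[0].lower() in user_prompt.lower()
    ]
    if not facts:
        return user_prompt
    return user_prompt + "\nContext: " + " ".join(facts)

modified = supplement_prompt("When can a SalesOrder be shipped?")
print(modified)
```

Only the original prompt would be shown to the user; the modified prompt, with its appended context, is what the large language model actually receives.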
Embodiments are described for a database management system comprising a memory and at least one processor coupled to the memory. The at least one processor is configured to receive a request to display a first data management process and identify a data query quality and a data transform quality for display. The first data management process comprises a first data source, a first data query, and a first data transform. The at least one processor is further configured to determine a first value of the data query quality for the first data query and a second value of the data transform quality for the first data transform and display the first data management process based on the first value and the second value.
An ingress component may receive, from a client, an HTTP URL and HTTP header information for an incoming protocol message (e.g., an AS2 message). An endpoint selector may determine the HTTP header information along with an endpoint address associated with the incoming protocol message. Based on the incoming HTTP header information and the endpoint address of the incoming protocol message, the endpoint selector may dynamically resolve an appropriate deployed endpoint and output an indication of the dynamically resolved appropriate deployed endpoint. A runtime component of an integration platform can then execute the incoming protocol message and interface with the appropriate deployed endpoint.
Certain aspects of the disclosure concern a computer-implemented method for improved data security in large language models. The method includes receiving a prompt query entered through a user interface, extracting a plurality of named entities from the prompt query and classifying the plurality of named entities into respective entity classes, tagging the plurality of named entities to be security compliant or security noncompliant based on the respective entity classes, and responsive to finding that one or more named entities are tagged to be security noncompliant, generating an alert on the user interface.
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program takes a first snapshot of a first set of data stores configured to store data associated with a database system. After taking the first snapshot of the first set of data stores, the program further takes a second snapshot of a second set of data stores configured to store a set of encryption keys for a set of tenants of the database system. The program also transmits data included in the first snapshot of the first set of data stores to a secondary system. The program further transmits data included in the second snapshot of the second set of data stores to the secondary system.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Arrangements for deployment of updates to configuration templates correlated to software upgrades are provided. In some aspects, a content configuration upgrade may be initiated within a system landscape including a development system, a test system, and a production system. A transport request including content configuration upgrade data may be received, in an inactive state, at the development system. The content configuration upgrade data may be released to the test system via the transport request. The test system may be restricted from user interaction. The test system may be set to enable customizing using the test system. The content configuration upgrade data may be activated in the test system. In addition, the activating may cause configuration changes to be added to one or more database tables and a new transport request to be generated. The test system may be restored for user interaction with upgraded content configuration data.
Arrangements for intelligent generation of unit tests are provided. A facade of a method of a class isolating code from a subsystem may be generated. The facade may include input variables and output variables. The facade of the method may be parsed to identify allowed input values for the input variables and expected output values for the output variables. User input specifying one or more parameters for the input variables may be received based on the identified allowed input values. Based on the received user input, a database table with every combination of the input variables and the corresponding output variables may be populated and stored in a data store. A unit test method may be executed on each row of the database table. Outputs of the unit test method may be compared to corresponding expected output values in the database table to determine whether there is a match.
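The combination table described above can be sketched as follows; the method under test (`clamp`) and its allowed input values are illustrative assumptions, not part of the disclosure:

```python
import itertools

def clamp(value, low, high):
    """Method isolated behind a facade: clamp value into [low, high]."""
    return max(low, min(high, value))

# Allowed input values identified by parsing the facade (assumed here).
allowed_inputs = {"value": [-1, 5, 99], "low": [0], "high": [10]}

# Populate one table row per combination of the input variables.
names = list(allowed_inputs)
rows = [dict(zip(names, combo))
        for combo in itertools.product(*allowed_inputs.values())]

# Expected output value per row, computed from the specification.
for row in rows:
    row["expected"] = max(row["low"], min(row["high"], row["value"]))

# Execute the unit test method on each row and compare outputs.
results = [clamp(r["value"], r["low"], r["high"]) == r["expected"] for r in rows]
```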
Arrangements for execution of programs after import for configuration changes are provided. One or more execution of programs after import objects may be generated based on one or more database table definitions. Metadata configured by a user via a configuration interface of a user device may be received. The metadata may be associated with an update to at least a portion of data included in one or more data structures stored in one or more database systems. An execution of programs after import object associated with the update may be executed by identifying a scenario associated with the update, generating a WHERE clause including one or more conditions associated with the identified scenario, executing the WHERE clause, and automatically replacing, based on the received metadata, an old data value with a new data value.
A method includes receiving a message query from an entity identifier participating in a social network. The message query specifies one or more entities, one or more requirements, and one or more constraints. A set of message query parameters is generated based on the message query. A set of queries for a semantic graph of the social network is generated based on the set of message query parameters. The set of queries is applied to the semantic graph to obtain a set of query results. A message context of the entity identifier is determined based on the set of query results and the set of message query parameters. A set of messages from a message repository is determined based on the message context. The set of messages can be presented on a client computer associated with the entity identifier.
In an example embodiment, a Language Server Protocol (LSP) is utilized to connect IDEs to test frameworks via a shared language server. More particularly, the shared language server is modified to permit feedback regarding test results from the test framework to be delivered to the IDEs, either directly via a code action that supports direct feedback, or indirectly by causing the language server to write test results to the IDEs as code comments within the software code itself. The result is that a single test framework can be utilized by developers using completely different IDEs, without requiring a separate test framework to be developed for each IDE.
In one embodiment, a first entry in a first database is modified to include data from a highest-ranked one of one or more available data tables that correspond to the first entry. Each of one or more character fields of the modified first entry is converted into a respective one or more first-entry tokens, and each of one or more character fields of each of a plurality of second entries in a second database is converted into a respective one or more second-entry tokens. The first-entry tokens are compared to the second-entry tokens, and, in response to the comparison, it is determined whether the first entry matches one of the second entries. In response to determining that the first entry matches one of the second entries, the first entry and the matching second entry are associated with one another in one or both of the first and second databases.
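The tokenization and comparison can be sketched as below; the field names, the word-level tokenizer, and the Jaccard-overlap matching rule are illustrative assumptions standing in for whatever comparison the embodiment actually uses:

```python
import re

def tokenize(entry: dict) -> set:
    """Convert each character field of an entry into lowercase word tokens."""
    tokens = set()
    for value in entry.values():
        if isinstance(value, str):
            tokens.update(re.findall(r"[a-z0-9]+", value.lower()))
    return tokens

def matches(first: dict, second: dict, threshold: float = 0.5) -> bool:
    """Compare first-entry tokens to second-entry tokens by set overlap."""
    a, b = tokenize(first), tokenize(second)
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold

first_entry = {"name": "ACME Corp", "city": "Berlin"}
second_entry = {"company": "Acme Corp.", "location": "Berlin"}
```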
A system and method include determination of a source tenant system, determination of a target tenant system, determination of source configuration data associated with the source tenant system, determination of target configuration data associated with the target tenant system, comparison of the source configuration data and the target configuration data to identify similar configuration data and dissimilar configuration data, and presentation of a first hierarchy of the source configuration data and a second hierarchy of the target configuration data along with indicators of the similar configuration data and the dissimilar configuration data.
Methods, systems, and computer-readable storage media for receiving a document provided as a computer-readable file, receiving a set of questions, for each question in the set of questions, generating an inference input including a question, at least a portion of text of the document, and multiple tokens, processing, by a PLM, the inference input to generate a set of text embeddings, processing, by a neural network, the set of text embeddings to provide sets of tokens, each set of tokens being specific to a segment of the document and including a start token and an end token respectively identifying a start position and an end position of the segment, determining, from the sets of tokens, a segment for display, and displaying at least a portion of the document in a UI and an annotation indicating the segment within the at least a portion of the document.
Embodiments of the present disclosure include techniques for securely connecting computer systems. In one embodiment, the system allows many users to connect with many different secure computer systems having many different connection types. A user selects an entity and is presented with connection types for the selected entity's target systems. The user selects a connection type and corresponding target, and a tunnel proxy server is configured to connect the user to the selected target. In some embodiments, the connection type is associated with other information. In one embodiment, an application associated with the connection type is automatically launched.
A computer implemented method can receive an incoming query statement, identify a database object specified in the incoming query statement, and search a query hint registry for a first hint record which includes the database object and a first hint paired with the database object. Responsive to finding the first hint record in the query hint registry, the method can generate a modified query statement by appending the first hint to the incoming query statement and obtain a query execution plan based on the modified query statement.
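A minimal sketch of the hint-registry lookup, assuming hints are appended as trailing clauses; the object name, hint syntax, and registry contents are illustrative assumptions:

```python
import re

# Hypothetical query hint registry: database object -> paired hint.
HINT_REGISTRY = {
    "sales_orders": "WITH HINT (USE_OLAP_PLAN)",
}

def apply_hints(statement: str) -> str:
    """Search the registry for a hint record whose database object is named
    in the incoming query statement; if found, append the paired hint."""
    for obj, hint in HINT_REGISTRY.items():
        if re.search(rf"\b{re.escape(obj)}\b", statement, re.IGNORECASE):
            return f"{statement} {hint}"  # modified query statement
    return statement  # no hint record found: statement unchanged

modified = apply_hints("SELECT * FROM sales_orders WHERE id = 1")
```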
Root causes of network anomalies can be identified as follows. A subset of network entities that have experienced network anomalies during a time period are determined based on historical network data. A set of root cause candidates are selected among the plurality of network entities by iterating through the network topology, each root cause candidate being directly upstream of two or more network entities in the subset of network entities that have experienced network anomalies according to the network topology. Network entities that are root causes of the network anomalies are identified by removing, from the set of root cause candidates, those candidates that have a common upstream network entity that is also a root cause candidate, leaving a set of remaining root cause candidates that are the root causes.
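The candidate-selection and pruning steps above can be sketched on a toy topology; the topology, anomaly set, and downstream-edge representation are illustrative assumptions:

```python
# Topology as node -> directly downstream nodes (assumed representation).
topology = {
    "core": ["sw1", "sw2"],
    "sw1": ["host1", "host2"],
    "sw2": ["host3", "host4"],
}
anomalous = {"host1", "host2", "host3", "host4", "sw1", "sw2"}

# Candidates: directly upstream of two or more anomalous entities.
candidates = {up for up, downs in topology.items()
              if sum(d in anomalous for d in downs) >= 2}

# Prune candidates whose upstream entity is itself a candidate,
# leaving the remaining candidates as the root causes.
root_causes = {c for c in candidates
               if not any(c in topology.get(up, []) and up in candidates
                          for up in topology)}
```

Here `sw1` and `sw2` are pruned because their common upstream entity `core` is itself a candidate, so `core` alone remains as the root cause.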
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
To predict network maintenance, historic records of hardware metrics are obtained for a plurality of network interfaces. An average of the metrics over a specified time span is determined for a plurality of time spans. Feedback metrics are determined for the network interfaces for each of the time spans. A histogram is generated that plots a frequency of the feedback metric for specified ranges of the hardware metric. A threshold value for the hardware metric is determined by iteratively determining whether a hardware metric bin of the histogram meets a specified non-zero value for the feedback metric, starting from the highest hardware metric bin of the histogram. Then new records of hardware metrics are obtained, and one or more network interfaces are determined to need maintenance based on an average of the hardware metrics in the new records meeting or exceeding the determined threshold value for the hardware metric.
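The threshold-derivation step above can be sketched as a downward scan over histogram bins; the bin edges, feedback frequencies, and required feedback value below are illustrative assumptions:

```python
# (bin_lower_edge, feedback_frequency) pairs, one per hardware-metric bin.
histogram = [(0, 0), (10, 0), (20, 1), (30, 4), (40, 9)]
required_feedback = 3  # specified non-zero value for the feedback metric

def find_threshold(bins, required):
    """Scan from the highest hardware-metric bin downward; return the lowest
    bin edge such that that bin (and every bin above it) meets the required
    feedback frequency."""
    threshold = None
    for edge, freq in sorted(bins, reverse=True):
        if freq >= required:
            threshold = edge
        else:
            break
    return threshold

threshold = find_threshold(histogram, required_feedback)

def needs_maintenance(avg_metric, threshold):
    """Flag an interface whose average hardware metric meets the threshold."""
    return threshold is not None and avg_metric >= threshold
```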
Automated digital assistant generation can be implemented via large language models. Domain-specific documents can be loaded into a large language model that is then prompted to generate intents, entities, exemplar utterances, and the like. Such configuration components can then be assembled into a digital assistant definition that can then be deployed as a digital assistant for the domain in question. Skills can be aggregated so that the digital assistant can address the domain along with other domains, whether closely related or not.
Data processing tools often use pipeline-based workflows, which consist of a sequence of operations. Each operation is configured according to configuration settings provided by a pipeline developer. The operations may use other software components such as frameworks that may also be configured. An application developer that defines a series of operations to be performed to achieve a desired result is able to provide the configuration settings for the operations. However, the application developer may not have the expertise to efficiently define configuration settings for the underlying frameworks. As discussed herein, a pipeline configuration system is used to generate configuration settings for frameworks used by a pipeline based on configuration settings for the operations of the pipeline. The operation configuration may include non-transformable properties, transformable properties, and internal properties. The pipeline configuration system may primarily modify the transformable properties.
In an example embodiment, a solution is provided that automatically adds a system message to natural language text provided by a user to generate a prompt to a Large Language Model (LLM) to automatically generate a code in a declarative language format, the code corresponding to the natural language text. Furthermore, retrieval augmented generation may be utilized to overcome the maximum number of contextual tokens permitted as input to an LLM. More particularly, the system message may be designed to include an instruction to the LLM to generate search calls for one or more entity definitions in a specified format from a database. The search calls may then be performed on the database via a similarity search to obtain the relevant information, which can then be passed back into the LLM for the generation of the code.
In an example, a software package is provided that provides an interface such that when the software is executed, a user is able to provide a natural language prompt using a scripting language that directs the interface to interact with a Large Language Model (LLM), adding contextual information to prompts sent to the LLM. The interface is then also able to extract and evaluate programming code generated by the LLM, so that the generated code may be used immediately in a software package, without the need for user edits and/or copy/pasting.
The present disclosure relates to computer-implemented methods, software, and systems for managing an in-memory cache to support auto-sizing features and to support faster reading of files. A request to read a file from a file system of the file service is received by the in-memory cache. In response to the received request, a file entry for the file that is stored in the in-memory cache is determined. The file entry comprises a counter defining a number of times the file has been requested to be read. The counter is evaluated according to a cache storage rule. In response to the evaluation, the file is obtained by the in-memory cache from the file system, and the file is stored in the in-memory cache to update the file entry. The file, as obtained from the file system, is then provided by the in-memory cache.
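A minimal sketch of this read path, assuming a simple cache storage rule (keep a file's content in memory once it has been requested twice); the rule, the backing store, and all names are illustrative assumptions:

```python
backing_store = {"config.json": b'{"a": 1}'}  # stands in for the file system

CACHE_AFTER = 2  # assumed cache storage rule
entries = {}     # path -> file entry: {"counter": int, "content": bytes | None}

def read_file(path: str) -> bytes:
    """Serve a read request via the in-memory cache."""
    entry = entries.setdefault(path, {"counter": 0, "content": None})
    entry["counter"] += 1                # track how often the file is requested
    if entry["content"] is not None:
        return entry["content"]          # fast path: served from memory
    data = backing_store[path]           # obtained from the file system
    if entry["counter"] >= CACHE_AFTER:  # evaluate the cache storage rule
        entry["content"] = data          # store the file, updating the entry
    return data

first = read_file("config.json")
second = read_file("config.json")
```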
Embodiments of the present disclosure include techniques for determining faults across multiple software applications. In one embodiment, a configuration table is loaded with information specifying relationships between software applications. Fault events from the software applications are received and stored in a fault log database. A query is received pertaining to a fault in one of the applications, and the information in the configuration table is used to determine details about faults in other software applications related to the fault specified in the query. The other software systems are accessed to retrieve fault information, and a fault relationship table is populated providing more insight into relationships between faults across the applications.
Methods, systems, and computer-readable storage media for receiving a request that requires a connection to a database, wherein the application server is initially allocated with a set of base connections by a central server; determining that there are available idle connections based on a number of in-use connections and a number of allocated connections; in response to determining that there are available idle connections, assigning an idle connection to the request and updating the number of in-use connections; determining an in-use percentage using the number of in-use connections and the number of allocated connections; and executing one of: requesting new connections from the central server in response to determining that the in-use percentage satisfies an upper percentage threshold, and returning idle connections to the central server in response to determining that the in-use percentage satisfies a lower percentage threshold.
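The elastic pool behavior above can be sketched as follows; the base allocation, percentage thresholds, and step size are illustrative assumptions, and the exchange with the central server is reduced to adjusting an allocation counter:

```python
class ConnectionPool:
    """Application-server view of a centrally allocated connection pool."""

    def __init__(self, base=10, upper=0.8, lower=0.2, step=5):
        self.base = base       # set of base connections from the central server
        self.allocated = base
        self.in_use = 0
        self.upper, self.lower, self.step = upper, lower, step

    def acquire(self):
        if self.in_use >= self.allocated:
            raise RuntimeError("no idle connections available")
        self.in_use += 1       # assign an idle connection to the request
        self._rebalance()

    def release(self):
        self.in_use -= 1
        self._rebalance()

    def _rebalance(self):
        pct = self.in_use / self.allocated  # in-use percentage
        if pct >= self.upper:               # request new connections
            self.allocated += self.step
        elif pct <= self.lower and self.allocated - self.step >= self.base:
            self.allocated -= self.step     # return idle connections

pool = ConnectionPool()
for _ in range(8):
    pool.acquire()  # 8/10 hits the upper threshold, growing the allocation
```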
A computing system configured to perform artificial intelligence-powered search and evaluation functionalities is disclosed. The computer system can receive a natural language search input entered through a user interface, request a search engine to perform a search based on the natural language search input, receive search results returned by the search engine, and submit, in runtime, structured data contained in the search results to a large language model. The structured data includes a table including one or more records and a plurality of fields. The computing system can prompt, in runtime, the large language model to generate scores for the one or more records and present the structured data and the scores on the user interface.
Methods, systems, and computer-readable storage media for generating an expected configuration checksum based on a configuration file associated with an ETL job, the ETL job being executable to provide a target entity for consumption by one or more consuming applications, the target entity including data of one or more source entities; retrieving an ETL timestamp indicating a last time that the ETL job was executed, and determining, based on one of the expected configuration checksum and the ETL timestamp, that a target data schema of the target entity is to be updated, and in response, providing target metadata for the target entity and updating the target data schema based on the target metadata to provide an updated target data schema.
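The change-detection step can be sketched as a checksum over the job configuration plus a timestamp comparison; the configuration fields, checksum algorithm, and decision rule below are illustrative assumptions:

```python
import hashlib
import json

def config_checksum(config: dict) -> str:
    """Deterministic checksum over an ETL job's configuration."""
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def schema_update_needed(config, stored_checksum, config_mtime, etl_timestamp):
    """Update the target data schema when the configuration no longer matches
    the expected checksum, or when the configuration changed after the last
    time the ETL job was executed."""
    return (config_checksum(config) != stored_checksum
            or config_mtime > etl_timestamp)

cfg = {"target": "sales_cube", "sources": ["orders", "items"]}
checksum = config_checksum(cfg)
```

Because the checksum is computed over a canonical (key-sorted) serialization, reordering keys in the configuration file does not spuriously trigger a schema update.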