A data management system receives updates to records of a source dimension. Some records of the source dimension reference target dimensions. The data management system identifies template records from existing records in the source dimension for modeling changes to connections with the target dimensions based on the updated records in the source dimension. The template records are discovered using rules-driven processes, AI-driven processes, or serial or parallel hybrid processes combining rules and AI. These processes use ancestor information from the updated records to find best-matching template records. The rules-driven processes additionally rely on matching fields, and the AI-driven processes additionally rely on vector embeddings and, optionally, clustering. Updates identified using the template records are made to the target records in the target dimensions, including any roll-up structures indicated for data propagation, and downstream applications using the target records may consume the updates.
Techniques for incrementally delivering stream data are disclosed. A system receives a portion of a unit of stream data and decodes metadata included in the portion of the unit of stream data. Based on the decoded metadata, the system determines that data included in the portion of the unit of stream data will be delivered without waiting for a remainder of the unit of stream data to be received by the system. The system generates a runtime object to track the incremental delivery of data in the unit of stream data, and decodes data included in the portion of the unit of stream data. The system delivers the decoded data to a stream recipient. When the remainder of the unit of stream data is received, that remainder of the unit of stream data is decoded and delivered to the stream recipient.
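The incremental-delivery flow above can be sketched as follows; the one-byte flag plus four-byte length header is a hypothetical wire format for illustration, not the disclosed encoding:

```python
import struct

class IncrementalUnit:
    """Runtime object tracking delivery of one unit of stream data."""
    def __init__(self, incremental: bool, total_len: int):
        self.incremental = incremental   # decoded from metadata
        self.total_len = total_len
        self.delivered = 0
        self.buffer = b""

def receive_portion(portion: bytes, unit, recipient: list):
    """Decode metadata on the first portion; when the metadata allows it,
    deliver payload without waiting for the remainder of the unit."""
    if unit is None:  # first portion carries the metadata header
        flag = portion[0]
        total_len = struct.unpack(">I", portion[1:5])[0]
        unit = IncrementalUnit(flag == 1, total_len)
        payload = portion[5:]
    else:
        payload = portion
    if unit.incremental:
        recipient.append(payload)        # deliver decoded data immediately
        unit.delivered += len(payload)
    else:
        unit.buffer += payload           # hold until the whole unit arrives
        if len(unit.buffer) >= unit.total_len:
            recipient.append(unit.buffer)
            unit.delivered = len(unit.buffer)
    return unit
```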
Operations may include receiving, from a first network entity, a first request for a first certificate revocation list (CRL) that identifies a first CRL distribution point (CDP) corresponding to the first CRL; mapping the first CDP to a first CRL identifier of a set of available CRL identifiers; locating, in a CRL repository, a first CRL based on the first CRL identifier; and transmitting the first CRL to the first network entity.
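The CDP-to-identifier mapping and repository lookup can be sketched in a few lines; the URLs, identifiers, and repository contents below are illustrative placeholders:

```python
# Map each CRL distribution point (CDP) to a CRL identifier from the
# set of available identifiers (hypothetical values).
CDP_TO_CRL_ID = {
    "http://pki.example.com/crl/issuing-ca-1.crl": "crl-001",
    "http://pki.example.com/crl/issuing-ca-2.crl": "crl-002",
}

# CRL repository keyed by CRL identifier (placeholder contents).
CRL_REPOSITORY = {
    "crl-001": b"-----BEGIN X509 CRL----- ca-1 ...",
    "crl-002": b"-----BEGIN X509 CRL----- ca-2 ...",
}

def handle_crl_request(cdp: str) -> bytes:
    """Map the requested CDP to a CRL identifier, locate the CRL in the
    repository, and return it for transmission to the network entity."""
    crl_id = CDP_TO_CRL_ID.get(cdp)
    if crl_id is None:
        raise KeyError(f"no CRL identifier mapped for CDP {cdp}")
    return CRL_REPOSITORY[crl_id]
```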
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
4.
SCALABLE ARCHITECTURE FOR AUTOMATIC GENERATION OF CONTENT DISTRIBUTION IMAGES
Methods and systems are disclosed for automatic generation of content distribution images. User input corresponding to a content-distribution operation is received and parsed to identify keywords. Image data corresponding to the keywords can be identified, and image-processing operations may be executed on the image data. A generative adversarial network may then be executed on the processed image data, which includes executing a first neural network on the processed image data to generate first images that correspond to the keywords, the first images being generated based on a likelihood that each of the first images would not be detected as having been generated by the first neural network. A user interface can display the first images with second images that include images that were previously part of content-distribution operations or images that were designated by an entity as being available for content-distribution operations.
Techniques are provided for augmenting training data using gazetteers and perturbations to facilitate training named entity recognition models. The training data can be augmented by generating additional utterances from original utterances in the training data and combining the generated additional utterances with the original utterances to form the augmented training data. The additional utterances can be generated by replacing the named entities in the original utterances with different named entities and/or perturbed versions of the named entities in the original utterances selected from a gazetteer. Gazetteers of named entities can be generated from the training data and expanded by searching a knowledge base and/or perturbing the named entities therein. The named entity recognition model can be trained using the augmented training data.
A system utilizes testing configurations for network entities to orchestrate a testing process that includes, in response to receiving a first configuration update, rolling forward a first testing configuration at least by configuring the first testing configuration to indicate that a certificate issuance process is to use a new CA certificate for issuing entity certificates for a first network entity. Additionally, the testing process includes, in response to receiving a second configuration update, rolling back the first testing configuration at least by configuring the first testing configuration to indicate that the certificate issuance process is to revert back to using a current CA certificate for issuing entity certificates for the first network entity.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
7.
Orchestrating Testing Of Digital Certificates In An Execution Environment Of A Computing Network
A system orchestrates a testing process for testing a new certificate authority (CA) certificate in an execution environment prior to the new CA certificate superseding a current CA certificate in the execution environment. Orchestrating the testing process includes issuing a first entity certificate based on the new CA certificate for a first network entity executing in the execution environment that is designated for performing testing operations and distributing the first entity certificate to the first network entity for performing the testing operations. While performing the testing operations, the system distributes a second entity certificate, issued based on the current CA certificate, to a second network entity executing in the execution environment that is not designated for performing testing operations. The system removes the current CA certificate from the execution environment responsive to determining that the testing operations are successful, and the new CA certificate supersedes the current CA certificate.
A computer-implemented method includes receiving an input for a model from a data stream, computing an output from the model, and storing the input and the output as an element of a cache. The method also includes using an algorithm to determine a set of parameters associated with the cache; the algorithm optimizes a function including a time taken by the model to generate outputs from a set of inputs sampled from the data stream. The method further includes calculating a caching score associated with each cache element, based on the set of parameters and the time taken by the model to generate the output, a usage of the element expressed as a number of iterations over which the element has been retained in the cache, and a frequency of usage of the element. The method also includes subsequently removing from the cache the element having the lowest caching score.
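One plausible reading of the caching score is sketched below; the weights and the logarithmic discount on retention time are assumptions, not the disclosed function, but they capture the stated inputs (generation time, retention iterations, and usage frequency):

```python
import math

def caching_score(gen_time, retained_iters, use_count,
                  alpha=1.0, beta=0.5, gamma=0.5):
    """Score a cache element: costly-to-recompute, frequently used entries
    score high; long-retained, rarely used entries score low. The weights
    alpha/beta/gamma stand in for the parameters the algorithm determines."""
    frequency = use_count / max(retained_iters, 1)
    return alpha * gen_time + gamma * frequency - beta * math.log1p(retained_iters)

def evict_lowest(cache):
    """Remove and return the key of the element with the lowest score.
    `cache` maps key -> (gen_time, retained_iters, use_count)."""
    victim = min(cache, key=lambda k: caching_score(*cache[k]))
    del cache[victim]
    return victim
```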
Techniques for parsing stream data asynchronously are disclosed. A system translates protocol frames that are delivered asynchronously into an ordered flow by reordering protocol frames that arrive out of order, dropping duplicate protocol frames, and eliminating overlaps between protocol frames. When a protocol frame is delivered to the system, the system compares a frame offset of the protocol frame to an expected offset that is maintained by the system. If the frame offset is lower than the expected offset, the system drops at least a part of that protocol frame. If the frame offset matches the expected offset, the system releases at least part of that protocol frame for further processing. If the frame offset is greater than the expected offset, the system adds at least part of that protocol frame to an ordered queue of frames that are being held in buffer memory by the system.
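The offset-comparison logic described above can be sketched as a small reorderer; byte offsets and payloads are illustrative:

```python
import heapq

class FrameReorderer:
    """Translate asynchronously delivered frames into an ordered flow:
    drop stale or duplicate data, release in-order data, and buffer
    frames that arrive ahead of the expected offset."""
    def __init__(self):
        self.expected = 0   # next byte offset expected for release
        self.held = []      # min-heap of (offset, payload) held in buffer

    def on_frame(self, offset: int, payload: bytes) -> bytes:
        released = b""
        if offset < self.expected:
            # drop the already-seen prefix (duplicate or overlap)
            payload = payload[self.expected - offset:]
            offset = self.expected
        if payload and offset == self.expected:
            released += payload               # release for further processing
            self.expected += len(payload)
        elif payload:
            heapq.heappush(self.held, (offset, payload))  # hold out-of-order
        # release any held frames that are now in order
        while self.held and self.held[0][0] <= self.expected:
            off, data = heapq.heappop(self.held)
            data = data[self.expected - off:]
            released += data
            self.expected += len(data)
        return released
```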
A query is received for stored data items that have a plurality of attributes that include a first attribute and a second attribute that has a hierarchical relationship with the first attribute. A first sort of the stored data items is performed based on a first set of ordering keys that include the first attribute and the second attribute. A second sort of the stored data items is performed based on a second set of one or more ordering keys, the second set being a proper subset of the first set and including the second attribute as an ordering key. First pointers to the stored data items are inserted into a first sorted structure of the second sort of the stored data items. Second pointers to the stored data items are inserted into a second sorted structure of the second sort of the stored data items.
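A minimal sketch of the two sorts, using assumed example attributes (a region attribute with city as its hierarchical child) and list indices as stand-ins for the pointers into the stored data items:

```python
# Hypothetical stored data items; "region" is the first attribute and
# "city" the second, hierarchically related attribute.
records = [
    {"region": "West", "city": "Seattle",  "sales": 10},
    {"region": "East", "city": "Boston",   "sales": 7},
    {"region": "West", "city": "Portland", "sales": 4},
    {"region": "West", "city": "Albany",   "sales": 2},
]

# First sort: the full set of ordering keys (region, city).
first_sort = sorted(range(len(records)),
                    key=lambda i: (records[i]["region"], records[i]["city"]))

# Second sort: a proper subset of the first key set that keeps the
# child attribute (city) as an ordering key.
second_sort = sorted(range(len(records)),
                     key=lambda i: records[i]["city"])

# Pointers (here, plain indices) inserted into sorted structures.
first_pointers = list(first_sort)
second_pointers = list(second_sort)
```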
Techniques for data synchronization using transaction identifications within objects are disclosed. In some embodiments, a method comprises the following: executing a first data synchronization process for synchronizing data objects comprising corresponding transaction identifications (IDs) from a source data repository to a destination data repository, wherein an interruption occurs in the first data synchronization process; identifying a first transaction ID for the first data synchronization process that was last processed prior to the interruption; identifying a second transaction ID that is subsequent to the first transaction ID in a sequence of transaction IDs; identifying a second set of one or more data objects that each comprise the second transaction ID; and executing a second data synchronization process for synchronizing the second set of one or more data objects by copying the second set of one or more data objects from the source data repository to the destination data repository.
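The resume logic can be sketched as follows; the object shapes and integer transaction IDs are illustrative:

```python
def resume_sync(objects, last_processed_txid, destination):
    """Resume an interrupted synchronization: identify the transaction ID
    that follows the last one processed, then copy every data object
    carrying that transaction ID to the destination."""
    txids = sorted({o["txid"] for o in objects})
    later = [t for t in txids if t > last_processed_txid]
    if not later:
        return []            # nothing left to synchronize
    next_txid = later[0]     # the second transaction ID in the sequence
    batch = [o for o in objects if o["txid"] == next_txid]
    destination.extend(batch)
    return batch
```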
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Techniques for automatically calibrating the accuracy of vector queries are provided. In one technique, a vector query that includes a query vector and that is associated with an accuracy value is received. The accuracy value may be a percentage value. In response to receiving the vector query, a value for a vector search parameter is determined based on the accuracy value and a plurality of past accuracy scores. For IVF vector indexes, the vector search parameter may be a number of centroid partitions to scan during the search. For HNSW vector indexes, the vector search parameter value may be a size of a results heap. A search of a vector index is performed based on the query vector and the value for the vector search parameter. A set of results is generated based on the search of the vector index.
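For an IVF index, the accuracy-to-parameter mapping could look like the sketch below; the history structure and the averaging rule are assumptions about how past accuracy scores are consulted:

```python
def calibrate_nprobe(target_accuracy, history, total_partitions):
    """Pick the smallest centroid-partition count whose past accuracy
    scores met the target on average; fall back to scanning every
    partition if no recorded setting did.
    `history` maps partition count -> list of past accuracy scores (%)."""
    candidates = [n for n, scores in history.items()
                  if scores and sum(scores) / len(scores) >= target_accuracy]
    return min(candidates) if candidates else total_partitions
```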
Techniques are disclosed for abstracting multiple fragments of a dataset into a single abstraction that can be used to manipulate the fragmented dataset. Fragments of the dataset are represented in memory by multiple runtime objects generated by the system. The system abstracts the runtime objects by generating a single runtime object to represent the runtime objects. While the dataset remains fragmented, the single runtime object presents the fragmented dataset as a continuous sequence of elements. The system subsequently reads the continuous sequence of elements to decode the fragmented dataset. While reading an element in the continuous sequence of elements, the system may advance a read position of the single runtime object, and the system may advance a read position of an individual runtime object that represents that element. Once an element has been read through the single runtime object, that element may be released from the continuous sequence of elements.
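The single-object abstraction over fragments might be sketched as follows; the byte-oriented API is an assumption for illustration:

```python
class FragmentReader:
    """Runtime object for one fragment, with its own read position."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

class FragmentedView:
    """Single runtime object presenting multiple fragments as one
    continuous sequence, releasing each fragment once fully read."""
    def __init__(self, fragments):
        self.readers = [FragmentReader(f) for f in fragments]
        self.pos = 0                      # read position of the single object

    def read(self, n: int) -> bytes:
        out = bytearray()
        while n > 0 and self.readers:
            r = self.readers[0]
            chunk = r.data[r.pos:r.pos + n]
            r.pos += len(chunk)           # advance the fragment's position
            self.pos += len(chunk)        # advance the abstraction's position
            out += chunk
            n -= len(chunk)
            if r.pos >= len(r.data):      # element fully read: release it
                self.readers.pop(0)
        return bytes(out)
```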
Techniques for deploying artifacts to a computing environment are disclosed. A system includes a deployment service for routing requests to destination addresses in a target computing environment. The deployment service detects a request from an artifact deployment tool to deploy an artifact to the target computing environment. The deployment service obtains a deployment token representing verification that a set of one or more customer designated conditions are satisfied to deploy the artifact to the target computing environment. The deployment service obtains validation of the deployment token. Responsive to successfully obtaining validation of the deployment token, the deployment service directs the artifact to a destination address in the target computing environment. The artifact is received at the destination address and deployed in the target computing environment.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06Q 30/015 - Providing customer assistance, e.g. assisting a customer within a business location or via helpdesk
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Techniques for optimizing generative AI summarization are provided. In one technique, a plurality of portions of text data is identified. For each portion of the plurality of portions, an embedding is generated based on that portion. Based on a plurality of embeddings that are generated for the plurality of portions, a plurality of clusters of embeddings is generated. For each cluster of embeddings of the plurality of clusters of embeddings, (1) a first language model generates a cluster summary based on portions, of the plurality of portions, that correspond to embeddings associated with that cluster of embeddings, and (2) the cluster summary is added to a set of cluster summaries. A second language model is used to generate a final summary based on the set of cluster summaries.
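The pipeline's data flow can be sketched with trivial stand-ins for the embedder, the clustering step, and both language models (all hypothetical placeholders, showing only the shape of the technique):

```python
def embed(portion: str) -> float:
    """Toy 1-D embedding: stands in for a real embedding model."""
    return float(len(portion))

def cluster(embeddings, k=2):
    """Naive 1-D clustering: split indices around the midpoint of the
    value range; stands in for a real clustering algorithm."""
    lo, hi = min(embeddings), max(embeddings)
    mid = (lo + hi) / 2
    groups = {0: [], 1: []}
    for i, e in enumerate(embeddings):
        groups[0 if e <= mid else 1].append(i)
    return [g for g in groups.values() if g]

def summarize(texts):
    """Stand-in for a language-model summarization call."""
    return " / ".join(t[:10] for t in texts)

def hierarchical_summary(portions):
    """Embed portions, cluster the embeddings, summarize each cluster
    with a first model, then produce a final summary with a second."""
    embeddings = [embed(p) for p in portions]
    cluster_summaries = [summarize([portions[i] for i in idxs])
                         for idxs in cluster(embeddings)]
    return summarize(cluster_summaries)
```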
Techniques for parsing stream data are disclosed. A system receives a first frame of a first set of frames. The first set of frames embeds a second set of frames. The system generates a first runtime object to represent at least part of the first frame that is stored in a first memory section. Based on determining that a first portion of the first frame does not need to be retained in memory, the system evaluates a size of a portion of the first frame relative to a threshold. Based on the evaluation, the system either (a) generates a second runtime object to represent a second portion of the first frame that is copied from the first memory section to a second memory section or (b) generates a third runtime object to represent the second portion of the first frame residing in the first memory section.
The techniques described herein relate to a first infrastructure provided by a first cloud service provider, wherein the first infrastructure is connected, using an overlay bridge, to a second infrastructure of a second cloud service provider that is different from the first cloud service provider, wherein the first infrastructure comprises a first set of compute resources and the second infrastructure comprises a second set of compute resources; the first infrastructure is configured to form a cloud network between the first set of compute resources and a second set of compute resources; and the cloud network is configured to provide a cloud service of the second cloud service provider to a customer of the first cloud service provider using the first set of compute resources and the second set of compute resources.
A method for selecting NF profiles of NF set mates for alternate routing includes receiving an NF discovery request, accessing an NF profiles database, and identifying NF profiles that match query parameters in the NF discovery request. The method further includes determining a value of an NF profiles limit parameter, selecting a first number of NF profiles that is less than the value of the NF profiles limit parameter, selecting a second number of NF profiles, wherein the NF profiles in the second number of NF profiles correspond to NF set mates of NFs corresponding to NF profiles in the first number of NF profiles and a sum of the first and second numbers is less than or equal to the value of the NF profiles limit parameter, and generating and transmitting an NF discovery response including the NF profiles in the first and second numbers of NF profiles.
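The two-stage selection can be sketched as follows; the profile names and the set-mate map are illustrative, not from the disclosure:

```python
def select_profiles(matching, set_mates, limit):
    """Select a first number of matching NF profiles strictly below the
    limit, then fill the remaining room with NF set mates of those
    profiles, keeping the total at or under the limit."""
    first = matching[:max(limit - 1, 0)]   # strictly less than the limit
    room = limit - len(first)
    second = []
    for p in first:
        for mate in set_mates.get(p, []):
            if len(second) < room and mate not in first and mate not in second:
                second.append(mate)
    return first, second
```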
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/302 - Route determination based on requested QoS
19.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING END-TO-END MESSAGE INTEGRITY CHECKING FOR SERVICE-BASED INTERFACE (SBI) MESSAGES COMMUNICATED VIA A SERVICE COMMUNICATION PROXY (SCP)
A method for checking end-to-end SBI message integrity includes receiving a first traffic feed from a first NF, the first traffic feed including copies of SBI messages transmitted from the first NF to an SCP. The method further includes receiving a second traffic feed from a second NF, the second traffic feed including copies of SBI messages received by the second NF from the SCP. The method further includes identifying, from the first traffic feed, a copy of a first SBI message transmitted by the first NF to the SCP. The method further includes identifying, from the second traffic feed, a copy of a second SBI message received by the second NF from the SCP and that is associated with the copy of the first SBI message. The method further includes performing, using the message copies, an end-to-end SBI message integrity check for the first SBI message.
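The end-to-end check reduces to correlating message copies from the two feeds and comparing digests; the message-ID correlation key below is an assumption for illustration:

```python
import hashlib

def integrity_check(sent_copy: bytes, received_copy: bytes) -> bool:
    """Compare digests of the SBI message copy captured before the SCP
    with the copy captured after it; a mismatch indicates alteration."""
    return (hashlib.sha256(sent_copy).digest()
            == hashlib.sha256(received_copy).digest())

def correlate_and_check(first_feed, second_feed):
    """Pair message copies from the two traffic feeds by an assumed
    message ID and run the integrity check on each pair."""
    received = {msg_id: body for msg_id, body in second_feed}
    return {msg_id: msg_id in received
                    and integrity_check(body, received[msg_id])
            for msg_id, body in first_feed}
```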
A data synchronization system accesses a plurality of data sets. The system receives a subscription from a third application to a first data set of a first application. When the system receives a notification of an update to a first item of the first data set, it determines, via a mapping within the subscription, a value to use as a version of a first field for a corresponding item of a third data set. When the system receives input overriding the mapping so that values are instead determined from a first field of a second data set, the system stores override metadata indicating that the second data set is a data source for the third data set. After the mapping is overridden, when the system receives a notification of an update to a corresponding item of the second data set, it triggers a modification to the corresponding item of the third data set based on that update.
Techniques are described for securing data stored on a non-volatile storage medium from unauthorized access using improved network-bound data security techniques. The data is secured using network-bound security techniques without the entities involved in the processing (e.g., clients and servers) having to exchange any client-specific or server-specific keys, secrets, or other secret data with each other. The techniques disclosed herein provide the network-bound data security functionality using a sequence of Message Authentication Codes (MACs) generated using Hash-based Message Authentication Code (HMAC) generation techniques.
G06F 21/79 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
22.
RECOVERING AN APPLICATION STACK FROM A PRIMARY REGION TO A STANDBY REGION USING A RECOVERY PROTECTION GROUP
Techniques for recovering an application stack from a primary region to a standby region using a recovery protection group are provided. In one technique, a first plurality of cloud resources, that reside in a first computing region, are identified to include in a recovery protection group. Each cloud resource of the first plurality of cloud resources is automatically analyzed to identify its characteristics. Based on the characteristics, a recovery plan is automatically generated that comprises multiple actions that includes two or more actions that are to be performed in a particular sequence relative to two or more types of cloud resources in the first plurality. The recovery plan is executed, which comprises performing the multiple actions, which results in allocation, in a second computing region that is different than the first computing region, of a second plurality of cloud resources that correspond to the first plurality of cloud resources.
Embodiments described herein are generally related to data analytics, and computer-based methods of providing business intelligence data, and are particularly related to systems and methods for evaluation, implementation, and refinement of key performance indicators, dashboards, or scorecards, for use in analytics-based decision-making. In accordance with an embodiment, a data analytics environment can join several data sets, including area of responsibility data, in order to determine one or more representatives responsible for particular organization units, during particular periods of time; and identify key measures or metrics under the purview of, or otherwise associated with, those representatives, for use in generating a key performance indicator scorecard reflecting such relationships.
Techniques for establishing trust between entities in a cross-domain solution (CDS) are disclosed. In some embodiments, a high-side entity in a CDS transmits, to an intermediate entity in the CDS, a first version of a control message that comprises (a) a first public key associated with the high-side entity and (b) a first signature generated using a first private key associated with the high-side entity. The intermediate entity validates the first signature using the first public key. Responsive to validating the first signature, the intermediate entity generates a second version of the control message that comprises a second signature generated using a second private key associated with the intermediate entity. The intermediate entity transmits the second version of the control message to a low-side entity in the CDS.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/14 - Arrangements for secret or secure communications; Network security protocols using a plurality of keys or algorithms
25.
METHOD AND SYSTEM FOR LEVERAGING LANGUAGE MODELS IN DESIGNING NO-CODE WORKFLOWS FOR MACHINE LEARNING WITHIN LOW-CODE/NO-CODE PLATFORMS
Natural language processing (NLP) techniques for workflow development are disclosed. A generative large language model (LLM) explains and modifies a workflow graph in an integrated development environment (IDE) that streamlines design, development, and deployment of machine learning (ML) workflows in a low-code/no-code (LC/NC) environment that is productive for users having a wide variety of engineering proficiency. A user is assisted in creating a sophisticated ML workflow through an intuitive and potentially no-code interface. Assistance spans a variety of activities, including generating code snippets, recommending best ML practices, automatically configuring workflow components, optimizing algorithmic parameters, and providing natural language explanations for each activity. The IDE generates a linguistic prompt that contains a definition of a workflow graph and natural language that specifies an interaction to apply to the workflow graph. The generative LLM accepts the linguistic prompt as input and inferentially generates a result of the interaction for the workflow graph.
Techniques are described herein for dynamically provisioning clusters in cloud environments based on specific node configurations. Systems and methods involve receiving a request to set up a cluster with multiple nodes, each requiring a particular configuration. The process begins with assessing whether it is feasible to provision all nodes as requested. If not, the method identifies which subset of nodes can be provisioned initially and proceeds accordingly. After the initial provisioning, a further assessment is made to determine which additional subset of nodes can be provisioned next. This approach enables the gradual setup of complex clusters in a flexible manner, adapting to available resources and configurations in real time.
Techniques for establishing trust between entities in a cross-domain solution (CDS) are disclosed. In some embodiments, a high-side entity in a CDS transmits, to an intermediate entity in the CDS, a first version of a control message that comprises (a) a first public key associated with the high-side entity and (b) a first signature generated using a first private key associated with the high-side entity. The intermediate entity validates the first signature using the first public key. Responsive to validating the first signature, the intermediate entity generates a second version of the control message that comprises a second signature generated using a second private key associated with the intermediate entity. The intermediate entity transmits the second version of the control message to a low-side entity in the CDS.
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network, or by a multi-tenancy service on behalf of the customer. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network) or by the multi-tenancy service. The customer network can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
A unified security agent manager plugin within a virtual machine compute instance manages at least one agent installed within the compute instance of a cloud environment. The plugin periodically receives agent inventory information, where the agent inventory information identifies (i) a plurality of platform types of the agent, and (ii) for each platform type, one or more deployable versions of the agent. The plugin selects a platform type from the plurality of platform types. The plugin compares the one or more deployable versions corresponding to the selected platform type with a version of the agent currently installed in the compute instance. If the version currently installed in the compute instance is older than the one or more deployable versions, the plugin fetches an agent object corresponding to a deployable version from an object storage repository, and updates the version currently installed in the compute instance to the fetched deployable version.
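The version comparison and update step might look like the sketch below; the dotted version strings and the fetch callback are assumptions standing in for the object-storage fetch:

```python
def maybe_update(installed: str, deployable: list, fetch):
    """Update the installed agent when a strictly newer deployable
    version exists for the selected platform type; dotted version
    strings are compared numerically, component by component."""
    key = lambda v: tuple(int(x) for x in v.split("."))
    newest = max(deployable, key=key)
    if key(newest) > key(installed):
        # fetch the agent object for the deployable version from
        # the object storage repository (stand-in callback)
        return fetch(newest)
    return installed
```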
H04L 41/046 - Network management architectures or arrangements comprising network management agents or mobile agents therefor
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
30.
POLICY TAGS FOR 5G NETWORK FUNCTION INCONSISTENCY DETECTION AND RECONCILIATION
Systems and methods for providing policy tags to detect and reconcile inconsistencies between a network function (NF) producer and an NF consumer are provided herein. In an example, a system includes instructions for an NF producer to establish a first state with an NF consumer, where the NF producer and the NF consumer are in a 5G network, generate a first policy tag corresponding to the first state, and store the first policy tag as the latest stored policy tag. The latest stored policy tag is then transmitted to the NF consumer in subsequent signaling, and signaling including the latest received policy tag is received from the NF consumer. A validation process is then performed on the latest received policy tag from the NF consumer, and the communication is processed based on the outcome of that validation.
Combining allowlist and blocklist support in data queries includes performing operations including obtaining a runtime query and extracting a set of runtime tuples from the runtime query. The operations further include processing the set of runtime tuples by an allowlist semantic comparator comparing the set of runtime tuples with an allowlist to obtain a first comparison result and by a blocklist semantic comparator comparing the set of runtime tuples with a blocklist to obtain a second comparison result. The blocklist semantic comparator performs an inverse comparison of the allowlist semantic comparator. The operations further include combining the first comparison result with the second comparison result to form an access determination and executing the runtime query according to the access determination.
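The combined determination can be sketched as follows; the tuple contents and simple set membership are illustrative stand-ins for the semantic comparators:

```python
def access_determination(runtime_tuples, allowlist, blocklist):
    """Allow the query only if every runtime tuple passes the allowlist
    comparison and none matches the blocklist, which performs the
    inverse comparison of the allowlist check."""
    allowed = all(t in allowlist for t in runtime_tuples)   # first result
    blocked = any(t in blocklist for t in runtime_tuples)   # second result
    return allowed and not blocked                          # combined
```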
Embodiments described herein are generally related to data analytics, and computer-based methods of providing business intelligence data, and are particularly related to systems and methods for evaluation, implementation, and refinement of key performance indicators, dashboards, or scorecards, for use in analytics-based decision-making. In accordance with an embodiment, a data analytics environment can join several data sets, including area of responsibility data, in order to determine one or more representatives responsible for particular organization units, during particular periods of time; and identify key measures or metrics under the purview of, or otherwise associated with, those representatives, for use in generating a key performance indicator scorecard reflecting such relationships.
Techniques are described for securing data stored on a non-volatile storage medium from unauthorized access using improved network-bound data security techniques. The data is secured using network-bound security techniques without the entities involved in the processing (e.g., clients and servers) having to exchange any client-specific or server-specific keys, secrets, or other secret data with each other. The techniques disclosed herein provide the network-bound data security functionality using a sequence of Message Authentication Codes (MACs) generated using Hash-based Message Authentication Code (HMAC) generation techniques.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Message compression using schema inference condenses semantic content by removing syntactic structure from multiple kinds of content. Topological structures of different trees are generalized to generate a merged tree. Because compression discards redundant content and often only semantic content is retained, the signal-to-noise ratio is increased, which increases the accuracy of downstream semantic analytics such as machine learning. Compression based on the merged tree removes redundant information from new messages, which, without obscuring semantic content, decreases the data volume for downstream analytics or archiving. This compression extracts semantic values that can be assembled into a sequence of lexical tokens suitable for natural language processing (NLP); the sequence does not contain tokens that represent syntax or structure. Thus, compression provides fewer tokens to be processed by a downstream language model, which is suitable for efficient processing of a live data stream.
Systems, methods, and apparatuses may implement a Semantic Model Inference Attack (SMIA) to determine whether a given input text was included in a training data set for a machine learning model, such as a Large Language Model (LLM), according to SMIA scores generated for the given input text and neighbors in a semantic space. An SMIA may generate SMIA scores by generating neighbors of input text in a semantic space, generating embedding vectors and loss values for the input text and neighbors and inputting the vectors and loss values to an attack model trained on loss values of member and non-member data. SMIA scores may then be compared to a threshold to determine whether the input text was used as part of training the machine learning model.
A computer assigns many threads to a hardware pipeline that contains a sequence of hardware stages that include a computing stage, a suspending stage, and a resuming stage. Each cycle of the hardware pipeline can concurrently execute a respective distinct stage of the sequence of hardware stages for a respective distinct thread. A read of random access memory (RAM) can be requested for a thread only during the suspending stage. While a previous state of a finite state machine (FSM) that implements a coroutine of the thread is in the suspending stage, a read of RAM is requested, and the thread is unconditionally suspended. While the coroutine of the thread is in the resuming stage, an asynchronous response from RAM is correlated to the thread and to a next state of the FSM. While in the computing stage, the next state of the FSM executes based on the asynchronous response from RAM.
Described herein are systems and methods for automatically enriching datasets in a data analytics environment, with system knowledge data. The system can operate, upon an analysis of a data set, to automatically enrich the data set. Users of data analytics environments, such as business users preparing data visualizations, may be unaware of additional data and system knowledge data that could be utilized to improve the data visualizations. The systems and methods described herein can provide an automatic enrichment of data from, for example, a knowledge repository, which can be delivered to a data analytics customer using various delivery means.
A method for generating multilingual aspect-based sentiment annotations in different languages includes, by a computing system, receiving first content in a first language and performing an inference of the first content for presence of a plurality of aspects, including identifying aspects within the first content, annotating the first content in accordance with the identified aspects within the first content, and generating an annotated first content. The method further includes receiving second content in a second language, including a translation of the first content, performing the inference of the second content for presence of the aspects to generate an annotated second content and producing a training set in the second language from the annotated second content. The training set is suitable for use, in the second language, in refining the inference in classifying portions of the second content into one of a plurality of polarities associated with the plurality of aspects.
A process management system, method, and article are provided for generating and configuring aggregate span graphs to analyze process monitoring data. The process management system receives process monitoring data reporting on different instances of same and different processes. The process management system uses the process monitoring data to generate a structured object that identifies spans of processing time corresponding to processes involved in handling requests. The structured object includes, for each span: a unique identity of the span, a name of a process corresponding to the span, if the process was initiated by a parent, an identity of the parent, and a time during which the process ran. Using the structured object, the process management system generates a graph including sections. Each section represents spans having a process initiation path corresponding to the section and has a section width determined using an aggregate metric of spans in the section. The graph shows child spans stacked on parent spans.
Techniques for evaluating the efficacy of large language models on classification tasks are disclosed. A prompt that includes an instruction and a content item to be classified is submitted multiple times to a large language model. For each submission of the prompt, a corresponding classification label from a set of two or more classification labels is returned. Each classification label is compared to the expected classification label for the content item using a label distance value metric. Using the label distance value metric, a confidence score is generated.
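One plausible reading of the label-distance scoring described above is an ordinal distance between the returned and expected labels, averaged over repeated submissions; the distance definition and normalization below are illustrative assumptions:

```python
def confidence_score(returned_labels, expected_label, label_order):
    """Average the normalized ordinal distance between each returned label
    and the expected label; confidence = 1 - mean distance."""
    idx = {label: i for i, label in enumerate(label_order)}
    max_dist = len(label_order) - 1
    distances = [abs(idx[lbl] - idx[expected_label]) / max_dist
                 for lbl in returned_labels]
    return 1.0 - sum(distances) / len(distances)

# Four submissions of the same prompt; one returned a near-miss label.
labels = ["positive", "positive", "neutral", "positive"]
score = confidence_score(labels, "positive",
                         ["negative", "neutral", "positive"])
```

Under this metric, a near-miss ("neutral" vs. "positive") penalizes confidence less than a full reversal ("negative") would.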
A method for implementing a proof-of-work challenge for transmission of data to a non-authenticated gateway is disclosed. The method includes receiving, by the gateway and from a device, a challenge request; and transmitting a proof-of-work challenge to the device. The method further includes receiving, from the device, a solution to the challenge, wherein the solution to the challenge accompanies data. The method further includes verifying a validity of the solution to the challenge; and storing and/or processing the data, responsive at least in part to the solution being valid for the challenge. In an example, the solution to the challenge is to be derived by the device, without an intervention by a user of the device. In an example, the challenge request and the solution to the challenge are received from a library that is packaged with a mobile application being executed within the device.
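A common proof-of-work scheme a gateway could use is a hash-preimage search with a difficulty prefix; the abstract does not specify the challenge format, so the sketch below is an assumption:

```python
import hashlib
from itertools import count

def make_challenge(data: bytes, difficulty: int) -> dict:
    """Gateway-side: issue a challenge bound to the accompanying data."""
    return {"data": data, "difficulty": difficulty}

def solve(challenge: dict) -> int:
    """Device-side: find a nonce whose hash over the data has the required
    number of leading zero hex digits (no user intervention needed)."""
    prefix = "0" * challenge["difficulty"]
    for nonce in count():
        digest = hashlib.sha256(
            challenge["data"] + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce

def verify(challenge: dict, nonce: int) -> bool:
    """Gateway-side: verification costs one hash, solving costs many."""
    prefix = "0" * challenge["difficulty"]
    digest = hashlib.sha256(
        challenge["data"] + str(nonce).encode()).hexdigest()
    return digest.startswith(prefix)

ch = make_challenge(b"payload-digest", 2)
nonce = solve(ch)
```

The asymmetry (cheap verification, expensive solving) is what lets a non-authenticated gateway rate-limit abusive senders without user accounts.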
Techniques discussed herein relate to enabling a hypervisor to self-recover. In particular, a watchdog daemon may be executed at the hypervisor to perform periodic write disk checks of the boot volume associated with the hypervisor. If an attempt to write to disk fails (e.g., an Error Input/Output (EIO) or Error Read Only File System (EROFS) return code is received), the daemon may determine that the boot volume is in read-only mode, post metrics to one or more logging services to indicate that the daemon has detected a read-only boot volume, and reboot the respective hypervisor.
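The periodic write-disk check could look like the following sketch; the probe path, error handling, and metrics hook are assumptions, not the disclosed daemon:

```python
import errno
import os
import tempfile

def check_boot_volume_writable(mount_point: str) -> bool:
    """Attempt a small write under the volume; EROFS or EIO indicates
    the volume has fallen into read-only mode and recovery is needed."""
    try:
        fd, path = tempfile.mkstemp(dir=mount_point)
        try:
            os.write(fd, b"watchdog-probe")
        finally:
            os.close(fd)
            os.unlink(path)
        return True
    except OSError as e:
        if e.errno in (errno.EROFS, errno.EIO):
            return False  # read-only boot volume detected
        raise  # unrelated failures should surface, not trigger reboot

# Probe a writable directory; a real daemon would probe the boot volume.
writable = check_boot_volume_writable(tempfile.gettempdir())
```

A real daemon would run this on a timer, emit metrics on a `False` result, and then initiate the hypervisor reboot.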
Cloud computing architecture is described for implementing test modules in a communication-controlled cloud environment with access to private data. The test modules perform synchronous tests on the private data and export test results to an analytic environment subject to data export policies. An analytic application is used to asynchronously analyze the test results in the analytic environment. The cloud computing architecture alternatively or additionally includes an interface for deploying investigation-bound cloud environments in restricted subnets. A collection of software is instantiated in the investigation-bound cloud environment, and the investigation-bound cloud environment may be accessed with remote access credentials using a remote access protocol for testing the collection of software. Information about the investigation-bound cloud environment is displayed in the analytic application, and the analytic application and the restricted subnet are forcibly deleted when the investigation is complete.
Systems, methods, and other embodiments associated with a centralized product catalog with standardization from independent data sources are described. In one embodiment, a computing system is configured to access and download product information for a plurality of products that is maintained in a non-standard format. Metadata is identified and extracted. Feature vectors may be generated from the metadata and used to predict and assign class codes to products. The product attributes are extracted from each product, which include a non-standard product description. The non-standard product description is converted into a standardized text description. Standardized records are generated for each product record and a centralized database is generated with the standardized records.
A method for implementing a proof-of-work challenge for transmission of data to a non-authenticated gateway is disclosed. The method includes receiving, by the gateway and from a device, a challenge request; and transmitting a proof-of-work challenge to the device. The method further includes receiving, from the device, a solution to the challenge, wherein the solution to the challenge accompanies data. The method further includes verifying a validity of the solution to the challenge; and storing and/or processing the data, responsive at least in part to the solution being valid for the challenge. In an example, the solution to the challenge is to be derived by the device, without an intervention by a user of the device. In an example, the challenge request and the solution to the challenge are received from a library that is packaged with a mobile application being executed within the device.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols, including means for verifying the identity or authority of a user of the system
46.
ARCHITECTURE AND COMPUTING ENVIRONMENT FOR ISOLATED AND CONTROLLED CODE REVIEW
Cloud computing architecture is described for implementing test modules in a communication-controlled cloud environment with access to private data. The test modules perform synchronous tests on the private data and export test results to an analytic environment subject to data export policies. An analytic application is used to asynchronously analyze the test results in the analytic environment. The cloud computing architecture alternatively or additionally includes an interface for deploying investigation-bound cloud environments in restricted subnets. A collection of software is instantiated in the investigation-bound cloud environment, and the investigation-bound cloud environment may be accessed with remote access credentials using a remote access protocol for testing the collection of software. Information about the investigation-bound cloud environment is displayed in the analytic application, and the analytic application and the restricted subnet are forcibly deleted when the investigation is complete.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 11/3604 - Analysis of software for verifying properties of programs
G06F 11/3698 - Environments for analysis, debugging or testing of software
Described herein is a system and method for providing a network-visualization browser extension in an analytics environment. An analytic applications environment can be provided by, or otherwise operate at, a computer system providing access to a data warehouse, or data warehouse instance. The system is adapted for generating a network graph from analytic artifacts, including display of one or more dataflows as dataflow constellation visualizations that illustrate the dataflows and lineage or other relationship information. The systems and methods disclosed herein can provide tools to provide insights for users of an analytics environment with regard to the users' analytic artifacts and relationships among the same.
Systems, methods, and other embodiments associated with quasi-supervised clustering for activity pattern characterization and anomalous activity detection are described. In one embodiment, a method generates a first sparse similarity matrix for nearest neighbors of a plurality of data points. The data points each characterize a pattern of activity associated with an account. The method generates a second sparse similarity matrix for random neighbors of the plurality of data points. The method recursively clusters the plurality of data points based on the first sparse similarity matrix. The method quasi-supervises the recursive clustering based on the second sparse similarity matrix to stop the recursive clustering when the data points are split into N clusters. The value of N is not pre-determined. The method detects that an individual data point has changed clusters, indicating anomalous activity. The method then generates an electronic alert that the anomalous activity is associated with the account.
H04L 67/1396 - Protocols specially adapted for monitoring users’ activity
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
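The first step of the quasi-supervised clustering entry above, building a sparse nearest-neighbor similarity matrix, can be sketched as follows; the Gaussian similarity kernel and the value of k are illustrative choices, not the disclosed method:

```python
import math

def knn_sparse_similarity(points, k):
    """Return {i: {j: similarity}} keeping only each point's k nearest
    neighbors, so the matrix stays sparse for large activity datasets.
    Similarity is a Gaussian kernel on Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matrix = {}
    for i, p in enumerate(points):
        neighbors = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: dist(p, points[j]),
        )[:k]
        matrix[i] = {j: math.exp(-dist(p, points[j]) ** 2)
                     for j in neighbors}
    return matrix

# Two nearby activity patterns and one outlier.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
sim = knn_sparse_similarity(pts, k=1)
```

The companion random-neighbor matrix would be built the same way but with neighbors sampled at random rather than chosen by distance.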
49.
ARCHITECTURE AND COMPUTING ENVIRONMENT FOR ISOLATED AND CONTROLLED CODE REVIEW
Cloud computing architecture is described for implementing test modules in a communication-controlled cloud environment with access to private data. The test modules perform synchronous tests on the private data and export test results to an analytic environment subject to data export policies. An analytic application is used to asynchronously analyze the test results in the analytic environment. The cloud computing architecture alternatively or additionally includes an interface for deploying investigation-bound cloud environments in restricted subnets. A collection of software is instantiated in the investigation-bound cloud environment, and the investigation-bound cloud environment may be accessed with remote access credentials using a remote access protocol for testing the collection of software. Information about the investigation-bound cloud environment is displayed in the analytic application, and the analytic application and the restricted subnet are forcibly deleted when the investigation is complete.
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
Various embodiments of the present technology generally relate to a VnicSet operator system containing instructions to implement a process to manage a virtual network interface controller (Vnic) on an application pod of a containerized software environment, the Vnic being directly reachable from a network external to the containerized software environment. In an aspect, the process includes monitoring a plurality of Vnics created for deployment within the containerized software environment, each of the plurality of Vnics being associated with a respective application pod within the containerized software environment, determining one or more non-active Vnics within the plurality of Vnics, and rectifying the one or more non-active Vnics.
Cloud computing architecture is described for implementing test modules in a communication-controlled cloud environment with access to private data. The test modules perform synchronous tests on the private data and export test results to an analytic environment subject to data export policies. An analytic application is used to asynchronously analyze the test results in the analytic environment. The cloud computing architecture alternatively or additionally includes an interface for deploying investigation-bound cloud environments in restricted subnets. A collection of software is instantiated in the investigation-bound cloud environment, and the investigation-bound cloud environment may be accessed with remote access credentials using a remote access protocol for testing the collection of software. Information about the investigation-bound cloud environment is displayed in the analytic application, and the analytic application and the restricted subnet are forcibly deleted when the investigation is complete.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Operations of a certificate authority (CA) service may include aggregating in a certificate repository, a plurality of sets of CA certificates, in which each set of CA certificates is issued by a particular CA that is associated with a particular trust zone and that is trusted by a particular set of network entities located in the particular trust zone. The operations may further include distributing for access by an additional set of network entities, an aggregate set of CA certificates that includes the plurality of sets of CA certificates. The additional set of network entities may utilize the plurality of sets of CA certificates to authenticate network entities located in different trust zones.
Parameter permutation is performed for federated learning to train a machine learning model. Parameter permutation is performed by client systems of a federated machine learning system on updated parameters of a machine learning model that have been updated as part of training using local training data. An intra-model shuffling technique is performed at the client systems according to a shuffling pattern. Then, the encoded parameters are provided to an aggregation server using Private Information Retrieval (PIR) queries generated according to the shuffling pattern.
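The intra-model shuffling step above can be sketched with a seeded permutation that the client derives and can later invert; the PIR query construction is omitted, and the pattern derivation shown is an assumption:

```python
import random

def shuffling_pattern(seed: int, n: int) -> list:
    """Deterministic permutation of parameter indices, reproducible
    from the shared seed."""
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def shuffle_params(params, pattern):
    """Apply the shuffling pattern before upload to the aggregator."""
    return [params[i] for i in pattern]

def unshuffle_params(shuffled, pattern):
    """Invert the pattern to restore the original parameter order."""
    out = [None] * len(pattern)
    for pos, i in enumerate(pattern):
        out[i] = shuffled[pos]
    return out

params = [0.1, 0.2, 0.3, 0.4]
pattern = shuffling_pattern(seed=42, n=4)
restored = unshuffle_params(shuffle_params(params, pattern), pattern)
```

Because the aggregation server never learns the pattern, it cannot map uploaded values back to their positions in the model.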
Systems and methods for single sign-on between two independent systems are disclosed herein. The method can include receiving a request to access a first application of a first system having a first login protocol. The method can include receiving user login credentials and authenticating the user login credentials. The method can include logging the user in to the first system and a second system based on the received login credentials. The second system can have a second login protocol independent of the first login protocol.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols, including means for verifying the identity or authority of a user of the system
A method implements static dataflow analysis for build pipelines. The method includes receiving a workflow file that includes an operation. The method further includes applying an extraction model to the workflow file to generate an extracted statement for the operation. The method further includes applying a statement model to the extracted statement to identify an unresolved parameter of the extracted statement. The method further includes applying the statement model to the unresolved parameter to generate a resolved parameter using a set of extracted statements including the extracted statement. The method further includes presenting an output statement including the extracted statement with the resolved parameter.
Techniques are disclosed herein for an automatic speech recognition (ASR) system. Tokens are selected for a particular language or script while other tokens not used by the particular language or script are removed from the ASR vocabulary. Numerical tokens and tokens that are special tokens for the underlying ASR model of the system are also selected. Tokens that have a reading direction different than the particular language or script are removed. Rows are removed from an embedding matrix for the ASR model corresponding to the removed tokens. Similarly, the final token classification layer is adjusted using the selected tokens. The subset of tokens, the embedding matrix with rows removed, and the adjusted classification layer are used to generate language-specific models from a multilingual speech model. The language-specific models are stored and used for generating a transcript in target languages or scripts.
Techniques are disclosed for projecting information between runtime objects and memory regions. To this end, the system maps elements of a reference type to constituent layout objects of a compound layout object. Based on the mappings, the system determines a first method for extracting data from a source memory region described by the compound layout object and generating a target runtime object of the reference type to represent the extracted data in a runtime memory area that is managed by a garbage collector. Furthermore, based on the mappings, the system determines a second method for writing data represented by a source runtime object of the reference type to a target memory region in accordance with the compound layout object. The system subsequently uses the first method and/or the second method to project information between runtime objects of the reference type and memory regions corresponding to the compound layout object.
A switch included in a compute fabric receives an authentication request message from a GPU associated with a customer. The switch transmits the authentication request message to an authentication server. Responsive to the GPU associated with the customer being successfully authenticated, the switch receives an authentication response message including metadata associated with the customer. The switch configures an address for the GPU associated with the customer by: (i) configuring a first portion of the address prior to receiving the authentication request message, and (ii) configuring a second portion of the address based on the authentication response message. The switch transmits the address including the first portion of the address and the second portion of the address to the GPU associated with the customer.
Techniques are described for the discovery of source range partitioning information. An example method includes a device determining a first partition boundary value for the data based at least in part on the following steps. The device can determine a first plurality of bounded value sets and a second plurality of bounded value sets. The device can calculate a first average value of a first value and a second average value. The device can determine a first deviation value of the first average value from the first value and a second deviation value of the second average value from a third value. The device can determine the first partition boundary value based at least in part on the first deviation value and the second deviation value, the first partition boundary value being the first candidate partition boundary value or the second candidate partition boundary value.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database systemDistributed database system architectures therefor
60.
Distributing Certificate Bundles According To Fault Domains
Operations of a certificate bundle distribution service may include: detecting a trigger condition to distribute a certificate bundle that includes a set of certificate authority certificates; determining, for each of a plurality of network entities associated with a computer network, a fault domain representing at least one single point of failure; partitioning the plurality of network entities into a plurality of certificate distribution groups, based on a set of partitioning criteria that includes a fault domain of each particular network entity, in which each particular certificate distribution group includes a particular subset of network entities, and the particular subset of network entities are associated with a particular fault domain; selecting a particular certificate distribution group, of the plurality of certificate distribution groups, for distribution of the certificate bundle; and transmitting the certificate bundle to the particular subset of network entities in the particular certificate distribution group.
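The partitioning step in the entry above can be sketched by grouping entities on their fault domain; the entity representation below is an assumption for illustration:

```python
from collections import defaultdict

def partition_by_fault_domain(entities):
    """Group network entities so each certificate distribution group
    shares a single point of failure; a bad bundle then affects at
    most one fault domain per rollout step."""
    groups = defaultdict(list)
    for entity in entities:
        groups[entity["fault_domain"]].append(entity["name"])
    return dict(groups)

entities = [
    {"name": "lb-1", "fault_domain": "rack-a"},
    {"name": "lb-2", "fault_domain": "rack-b"},
    {"name": "db-1", "fault_domain": "rack-a"},
]
groups = partition_by_fault_domain(entities)
```

The distribution service would then select one group at a time, transmit the bundle, and verify health before moving to the next fault domain.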
Method includes: accessing text, where spans are identified within the text and include one or more pairs of target spans and one or more mid-context spans; generating embedding representations of tokens associated with each target span, tokens associated with the entity types of each target span, and tokens associated with each mid-context span; generating, for each target span, entity-focused span embedding representation based on embedding representations of tokens associated with each target span and embedding representations of tokens associated with entity type of target span; generating, for each mid-context span, mid-context embedding representation based on the embedding representations of tokens associated with each mid-context span; and generating probability distribution of each relation of set of relations based on entity-focused span embedding representations of subject span and object span that are included in each target pair and mid-context embedding representation for mid-context span appearing between subject span and object span.
Techniques are disclosed for converting a dataset from an optimized format to a target format. The system determines a hierarchy of types that are included within the dataset. Based on the hierarchy of types, the system identifies occurrences of the same method in the dataset. The same name is assigned to occurrences of the same method. Optimized opcodes included in the bytecode instructions of methods are translated into target opcodes. For each method, the system simulates executing the target opcodes that replace the optimized opcodes in the method to determine if the target opcodes affect the configuration of local variables differently than the optimized opcodes. Based on simulating the execution of the target opcodes, the system alters local variable references in the method to reflect the differing configurations of local variables that result from replacing the optimized opcodes with the target opcodes.
Techniques are disclosed for implementing a self-learning cloud-based message broker. The message broker can receive an event trigger that includes information usable to identify a subscribing client of a publisher-subscriber messaging system. The message broker can determine message parameters for one or more messages by sampling a distribution. The message broker can determine the message parameters in response to receiving the event trigger. The message broker can send the one or more messages to the subscribing client. The one or more messages can be characterized by the message parameters. The message broker can receive a response status from the subscribing client and, based on the response status, update the distribution.
Techniques for scheduling system maintenance operations in a cloud environment are disclosed. A system receives a first request to execute a first system maintenance operation within a first time period. The system selects a first plurality of candidate execution times within the first time period for execution of the first system maintenance operation. The system executes a hash function on a first attribute associated with the first request to select a first execution time within the first plurality of candidate execution times. The system schedules the execution of the first system maintenance operation at the first execution time.
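Selecting an execution time by hashing a request attribute spreads tenants deterministically across the candidate slots; a sketch (the choice of attribute and hash truncation are assumptions):

```python
import hashlib

def pick_execution_time(attribute: str, candidate_times):
    """Hash the request attribute and index into the candidate slots:
    the same attribute always lands on the same slot, while different
    attributes spread roughly uniformly across the window."""
    digest = hashlib.sha256(attribute.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(candidate_times)
    return candidate_times[index]

slots = ["01:00", "02:00", "03:00", "04:00"]
slot = pick_execution_time("tenant-1234", slots)
```

Determinism matters here: rescheduling the same maintenance request reproduces the same slot without any coordination state.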
Method includes: accessing text, where spans are identified within the text and include one or more pairs of target spans and one or more mid-context spans; generating embedding representations of tokens associated with each target span, tokens associated with the entity types of each target span, and tokens associated with each mid-context span; generating, for each target span, entity-focused span embedding representation based on embedding representations of tokens associated with each target span and embedding representations of tokens associated with entity type of target span; generating, for each mid-context span, mid-context embedding representation based on the embedding representations of tokens associated with each mid-context span; and generating probability distribution of each relation of set of relations based on entity-focused span embedding representations of subject span and object span that are included in each target pair and mid-context embedding representation for mid-context span appearing between subject span and object span.
Techniques are disclosed for converting a dataset from an optimized format to a target format. The system determines a hierarchy of types that are included within the dataset. Based on the hierarchy of types, the system identifies occurrences of the same method in the dataset. The same name is assigned to occurrences of the same method. Optimized opcodes included in the bytecode instructions of methods are translated into target opcodes. For each method, the system simulates executing the target opcodes that replace the optimized opcodes in the method to determine if the target opcodes affect the configuration of local variables differently than the optimized opcodes. Based on simulating the execution of the target opcodes, the system alters local variable references in the method to reflect the differing configurations of local variables that result from replacing the optimized opcodes with the target opcodes.
Techniques for managing temporal dependencies between sets of foreign resources are disclosed, including: allocating, in a runtime environment, a segment of foreign memory to a first memory session, the runtime environment being configured to use a garbage collector to manage memory in a heap, and the foreign memory including off-heap memory that is not managed by the garbage collector; opening, in the runtime environment, a second memory session that descends from the first memory session; while the second memory session is open, encountering a request to close the first memory session; responsive to encountering the request to close the first memory session, determining that the first memory session has at least one open descendant memory session; responsive to determining that the first memory session has at least one open descendant memory session, declining the request to close the first memory session.
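The close-refusal rule for dependent memory sessions can be sketched as follows; the class and method names are illustrative, not the disclosed runtime's API:

```python
class MemorySession:
    """Tracks parent/child session lifetimes: a session with any open
    descendant declines to close, mirroring the temporal-dependency
    rule for foreign (off-heap) memory."""

    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.open = True
        if parent is not None:
            parent.children.append(self)

    def has_open_descendant(self):
        return any(c.open or c.has_open_descendant()
                   for c in self.children)

    def close(self):
        if self.has_open_descendant():
            return False  # decline: a descendant still depends on this memory
        self.open = False
        return True

root = MemorySession()                 # owns a foreign memory segment
child = MemorySession(parent=root)     # descends from root
declined = root.close()                # declined while child is open
child.close()
closed = root.close()                  # succeeds once descendants are closed
```

Declining the close, rather than failing later on a dangling access, keeps off-heap memory safety errors at the point where the lifetime rule is violated.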
Techniques are described for performing an automated region build with real time region data. Region data including region identifiers and execution target identifiers for the region may be maintained. When a modification of the region data is detected (or new region data is detected), configuration files corresponding to bootstrapping resources (e.g., at the execution targets) within the region may be obtained. Operations are executed to cause the configuration files to be updated. This may include recompiling or otherwise injecting region data into the configuration files. A region build may be executed to bootstrap resources within the region using the updated configuration files.
Various embodiments of the present technology generally relate to systems and methods for providing an SSH engine. In an example, a method includes receiving, by an SSH engine, a request for a Secured Shell (SSH) configuration file from a client device. The SSH engine may then determine access privileges associated with the client device and generate rules based on the access privileges. The access privileges may identify resources that the client device has authority to access. The SSH engine may then validate each rule of the rules based on the access privileges and generate the SSH configuration file including the rules for the client device.
Techniques for access governance are disclosed, including: computing, based on attributes associated with users having one or more access permissions in common, an aggregate similarity score of the users; computing, based at least on the attributes, an individual similarity score between the users and a target user, where the target user has the one or more access permissions or the target user requests to have the one or more access permissions; determining, based at least on the individual similarity score and the aggregate similarity score, a recommended action with respect to the one or more access permissions for the target user, where the recommended action includes one of (a) administrative review or (b) administrative approval of the one or more access permissions for the target user; and generating a notification of the recommended action with respect to the one or more access permissions for the target user.
An aspect of the present disclosure provides a technological collaboration tool facilitating peer help and in-person assistance in field services. A system (executing the tool) obtains training data containing characteristics of multiple technicians and characteristics of activities completed by each technician in resolving prior issues, and trains a machine learning (ML) model based on the training data, the ML model thereafter operable to determine a proficiency level of technicians in helping with a given activity. Upon receiving, from a technician, a request for help with an activity, the system determines, based on the ML model and the activity, a set of technicians capable of helping with the activity. The system then creates a group chat including the technician and the set of technicians as a response to the request.
Techniques for management of data storage in distributed storage systems are provided. A method may include receiving, by a computer system, a request to write data to a volume. The method may include identifying, by the computer system, a zone segment mapped to the volume. The zone segment may include a plurality of zones. The method may include identifying, by the computer system, a segment pointer indicating a write location in a zone of the zone segment. The method may include writing, by the computer system, the data to one or more zones of the plurality of zones of the zone segment, starting at the write location. The method may also include updating, by the computer system, the segment pointer according to a data endpoint of the data in the zone segment.
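The zone-segment write path above can be sketched as a small in-memory model. The class shape, the `(zone, offset)` pointer encoding, and the spill-over policy are illustrative assumptions, not the claimed implementation.

```python
class ZoneSegment:
    """A segment of fixed-size zones with a pointer to the next write location."""
    def __init__(self, num_zones: int, zone_size: int):
        self.zones = [bytearray() for _ in range(num_zones)]
        self.zone_size = zone_size
        self.pointer = (0, 0)              # (zone index, offset within zone)

    def write(self, data: bytes) -> tuple:
        """Append data starting at the pointer, spilling into later zones."""
        zone, off = self.pointer
        view = memoryview(data)
        while view:
            room = self.zone_size - off
            chunk, view = view[:room], view[room:]
            self.zones[zone].extend(chunk)
            off += len(chunk)
            if off == self.zone_size:      # zone full: advance to the next zone
                zone, off = zone + 1, 0
        self.pointer = (zone, off)         # pointer now marks the data endpoint
        return self.pointer
```

Writing 10 bytes into 8-byte zones fills zone 0 and leaves the pointer two bytes into zone 1, matching the abstract's "updating the segment pointer according to a data endpoint."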
The present disclosure provides a multiple factor authentication process using text pass codes. A process performs a first verification of a user using an authentication credential transmitted via a first communication channel. Based on successfully performing the first verification, the process performs a second verification using a textual phrase transmitted to the user via a different communication channel. The words included in the textual phrase can be selected to avoid ambiguous pronunciations and spellings.
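The word-selection step for the second factor might look like the following sketch. The word lists are entirely hypothetical stand-ins; the abstract does not specify how unambiguous words are curated.

```python
import secrets

# Hypothetical allow-list: words chosen for unambiguous spelling/pronunciation.
CLEAR_WORDS = ["apple", "river", "candle", "window", "garden", "rocket"]
# Hypothetical deny-list: homophones and commonly misspelled words.
AMBIGUOUS = {"there", "their", "to", "two", "too", "colonel"}

def make_passphrase(n_words: int = 3) -> str:
    """Build the second-factor phrase sent over the out-of-band channel,
    using a cryptographically strong random choice per word."""
    pool = [w for w in CLEAR_WORDS if w not in AMBIGUOUS]
    return " ".join(secrets.choice(pool) for _ in range(n_words))
```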
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/04 - Training, enrolment or model building
G10L 17/08 - Use of distortion metrics or a particular distance between probe pattern and reference templates
G10L 17/24 - the user being prompted to utter a password or a predefined phrase
74.
Audio Processing Engine Using Segmentation And Pruning
Techniques for diarization using embedding pruning are disclosed. A set of audio content segments and their associated tokens are accessed by a speaker enumeration module of a speech processing engine. The speaker enumeration module uses various pruning criteria to prune audio content segments from the set to result in a pruned set of audio content segments. The pruned set of audio content segments is analyzed using a clustering process to determine a number of speakers. The number of speakers is used in a second clustering process to identify speakers in the original set of audio content segments prior to pruning. A transcription of the original audio content with speaker labels is generated using the number of speakers identified for the pruned set of audio content segments.
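The prune-then-cluster-then-relabel flow above can be sketched as follows. The pruning criteria (duration and confidence), the greedy distance-threshold clustering, and the nearest-centroid relabeling are illustrative assumptions standing in for the engine's actual modules.

```python
def dist(a, b) -> float:
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def prune(segments, min_dur=0.5, min_conf=0.8):
    """Drop segments too short or too low-confidence to embed reliably."""
    return [s for s in segments if s["dur"] >= min_dur and s["conf"] >= min_conf]

def count_speakers(embeddings, threshold=0.5):
    """First clustering pass on the pruned set: open a new cluster for any
    embedding farther than `threshold` from every existing centroid."""
    centroids = []
    for e in embeddings:
        if all(dist(e, c) > threshold for c in centroids):
            centroids.append(e)
    return len(centroids), centroids

def label_all(segments, centroids):
    """Second pass: label every original segment (pruned or not) with the
    nearest of the discovered speaker centroids."""
    return [min(range(len(centroids)),
                key=lambda i: dist(s["emb"], centroids[i]))
            for s in segments]
```

The speaker count comes from the pruned set only, but the labels in the final transcription cover the full, unpruned segment set.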
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 25/78 - Detection of presence or absence of voice signals
75.
MACHINE LEARNING (ML) MODEL BASED PREDICTION OF DELAYS IN WORKFLOWS
According to an aspect, a system collects historical data indicating details of multiple closed workflows and trains an ML model based on the multiple closed workflows, the ML model thereafter operable to predict delays for open workflows. Upon receiving, after the training, details of an additional set of closed workflows, the system adds the received details to the historical data to form an updated historical data. The system checks whether the updated historical data has a data growth (in comparison to the historical data) exceeding a threshold. If the data growth exceeds the threshold, the system determines whether there exists a data drift in the updated historical data in comparison to the historical data. If the data drift exists, the system retrains the ML model based on the updated historical data, wherein the retrained ML model is thereafter operable to predict delays for open workflows.
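The growth-then-drift gating described above can be sketched as a single check. The record schema, the `duration` feature, and the mean-shift drift test are simplifying assumptions; the claimed system may use any drift statistic.

```python
def maybe_retrain(history, new_rows, growth_threshold=0.2, drift_threshold=0.1):
    """Append newly closed workflows to the historical data, and signal a
    retrain only when the data has both grown and drifted enough."""
    updated = history + new_rows
    growth = len(new_rows) / len(history)
    if growth <= growth_threshold:
        return updated, False                      # too little new data
    # Crude drift test: relative shift in the mean of a numeric feature.
    mean = lambda rows: sum(r["duration"] for r in rows) / len(rows)
    drift = abs(mean(updated) - mean(history)) / mean(history)
    return updated, drift > drift_threshold        # True => retrain the model
```

A small batch of similar workflows passes neither gate; a large batch with markedly longer durations trips both, triggering retraining.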
A central text repository may maintain, track, update, and modify text centrally that may then be distributed to applications to be used at runtime. The central text repository allows anyone involved in the software design process or lifecycle to edit, update, and/or correct text strings that are used in various applications. This allows updates to be rapidly pushed out to runtime applications without requiring the code bases of those applications to be accessed at all. Instead, a change may be made centrally, and new resource bundles of text strings may be made available for runtime download and usage by these applications. This effectively separates the storage and maintenance of text strings from the underlying applications. Hierarchies and modifiers may be used to override and inherit different text usages, languages, and so forth.
The technology disclosed herein enables continuity of a call recording when a recording system restarts. In a particular example, a method includes maintaining a counter value indicating a number of times the session recording system has restarted and establishing a recording session with a session recording client to record communications received from the session recording client. The communications are received by the session recording client during a communication session between at least two endpoints. After establishing the recording session, the method includes incrementing the counter value due to a restart of the session recording system. After incrementing the counter value, the method includes transmitting the counter value to the session recording client and receiving a request to establish a second recording session with the session recording client to record the communications.
A method may include transmitting a request for metadata associated with a compute instance and receiving, by a computing system, metadata associated with the compute instance signed with a private key. The private key may be associated with a public key. The method may include receiving a request to access a cloud resource and transmitting the request for the metadata. The method may also include receiving the metadata. The metadata may indicate that the compute instance is hosted on the computing system. The method may also include transmitting, to an instance principal service, a request for an instance principal certificate. The request may include the metadata signed with the private key and be cryptographically verified by the instance principal service using the public key. The method may also include receiving the instance principal certificate and providing access to the cloud resource based on the instance principal certificate.
Techniques are described for providing a multi-cloud gateway (MCG) in a first cloud infrastructure (included in a first cloud environment provided by a first cloud services provider). The MCG, implemented in the first cloud environment, receives a first request requesting a first operation to be performed in a second cloud environment. Responsive to receiving the first request, the MCG generates a first API call directed to the second cloud environment and causes the first API call to be communicated to the second cloud environment. The MCG receives a second request requesting a second operation to be performed in a third cloud environment. Responsive to receiving the second request, the MCG generates a second API call directed to the third cloud environment and causes the second API call to be communicated to the third cloud environment, wherein each of the cloud environments is provided by a unique cloud services provider.
Various embodiments of the present technology generally relate to systems and methods for providing a framework for optimizing configuration settings of application instances. In certain embodiments, a method may comprise operating an optimizer service to implement an application optimizer process to improve performance of an application instance. The process may include receiving a plurality of checks from an application development system for the application instance, the plurality of checks including scans and fixes for configuration settings of the application instance. The process may further include executing the scans on the application instance, determining a selection of fixes configured to improve the performance of the application instance in response to a result of the scans, providing a notification to the application instance recommending user implementation of the selection of fixes, and providing analytics data corresponding to the user implementation of the selection of fixes to the application development system.
Techniques are disclosed for enforcing isolation in a cluster of computing nodes configured for executing containerized applications. The system receives a request from a requesting entity for access to a target namespace. The request is accompanied by a token. Based on the token, the system identifies a namespace that corresponds to an isolation namespace. To determine if the request is attempting to breach isolation, the system compares the target namespace to the corresponding namespace. If the target namespace is not the corresponding namespace, the system concludes that the request is attempting to breach isolation, and, therefore, denies the request. If the request is not attempting to breach isolation, the system determines if the request is allowed by any permissions that have been granted to the requesting entity. If the request is not allowed by a permission granted to the requesting entity, the system denies the request.
Systems, media, and computer-implemented methods are provided for identifying similar chunks of text to tune a text similarity model, such as a text similarity model that is used to find content in response to queries. Using a masked language model, a machine learning model may be tuned on different content from that on which the machine learning model was trained. The machine learning model as tuned may be used to determine vector embeddings for terms in chunks of content. Chunks may be matched to each other by finding a term in one chunk having a highest similarity score with a corresponding term in another chunk. Aggregate similarity scores may be determined between the chunks based on the term-to-term similarity scores. If an aggregate similarity score for a pair of chunks satisfies one or more conditions, a text similarity model may be tuned to identify the pair as similar.
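The term-matching and aggregation steps above can be sketched as follows. Representing a chunk as a term-to-embedding mapping, cosine similarity as the term-level score, and a fixed threshold as the "one or more conditions" are all illustrative assumptions.

```python
def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def aggregate_similarity(chunk_a: dict, chunk_b: dict) -> float:
    """chunk_* map term -> embedding. For each term in chunk_a, take its
    best-matching term in chunk_b, then average those best scores."""
    best = [max(cosine(ea, eb) for eb in chunk_b.values())
            for ea in chunk_a.values()]
    return sum(best) / len(best)

def is_similar_pair(chunk_a: dict, chunk_b: dict, threshold=0.9) -> bool:
    """Pairs above the threshold would become positive tuning examples."""
    return aggregate_similarity(chunk_a, chunk_b) >= threshold
```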
Techniques are disclosed that enable a self-regulating process to meet a service level objective (SLO). In some embodiments, a self-regulating process is a background process comprising a regulator that receives background job requests and historical information related to the background process for evaluation to determine actions (e.g., speed up, slow down, or maintain the same speed), enabling the background process to adjust its pace gradually and smoothly even when encountering unexpected big changes in load.
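The regulator's speed-up / slow-down / maintain decision can be sketched as a small control step. Interpreting the SLO as a latency target and using fixed pace increments are illustrative assumptions; the abstract does not fix the SLO metric or the step size.

```python
def regulate(slo_latency_ms, recent_latencies_ms, pace, step=0.1):
    """Return an adjusted pace in (0, 1]: nudge down when violating the SLO,
    nudge up when comfortably within it, and hold otherwise. Small fixed
    steps keep the adjustment gradual even under sudden load changes."""
    observed = sum(recent_latencies_ms) / len(recent_latencies_ms)
    if observed > slo_latency_ms:            # behind the SLO: slow down
        pace = max(0.1, pace - step)
    elif observed < 0.5 * slo_latency_ms:    # well ahead of the SLO: speed up
        pace = min(1.0, pace + step)
    return pace                              # otherwise maintain the same speed
```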
Methods, systems, and computer readable media for mitigating network security attacks by linking network function (NF) discovery results with subsequent messages at proxy NF
A method for mitigating network security attacks by linking NF discovery results to subsequent messages includes receiving, at a proxy NF, NF discovery messages. The method further includes reading, by the proxy NF, producer NF- and consumer NF-identifying parameters from the NF discovery messages. The method further includes creating, by the proxy NF, records in an NF-discovery-linked security database maintained by the proxy NF, wherein the records include the producer NF- and consumer NF-identifying parameters read from the NF discovery messages. The method further includes receiving, by the proxy NF, a service-based interface (SBI) request message. The method further includes screening, by the proxy NF and using the records in the NF-discovery-linked security database, the SBI request message. The method further includes performing, by the proxy NF, a network security action for the SBI request message based on results of the screening.
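The linking and screening flow can be sketched as follows. The record schema (consumer/producer identifier pairs) and the block-on-miss security action are illustrative assumptions, not the claimed implementation.

```python
class NfDiscoveryLinkedScreen:
    """Security database at the proxy NF linking discovery results to
    subsequent SBI traffic."""
    def __init__(self):
        self.records = set()                  # (consumer_id, producer_id) pairs

    def on_discovery(self, consumer_id, producer_ids):
        """Record which producer NFs each consumer NF actually discovered."""
        for p in producer_ids:
            self.records.add((consumer_id, p))

    def screen_sbi_request(self, consumer_id, producer_id) -> str:
        """Allow an SBI request only if it targets a producer NF that the
        consumer previously discovered; otherwise block it as a possible
        attack that bypassed discovery."""
        if (consumer_id, producer_id) in self.records:
            return "forward"
        return "block"
```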
A computer program product, system, and computer-implemented method for adaptive throttling of storage service traffic within a client application. The approaches provided herein allow a client or user of a storage service to dynamically and adaptively determine the capability of a storage service to service requests, even when the client is not provided with a fixed reserved capacity and when other users may cause the unused capacity to vary. For instance, the approach may include maintaining a computing cluster that accesses a storage service, wherein the remote storage service has limited capacity to service requests and the limited capacity is shared among a plurality of clients that access the storage service; repeatedly determining, at the computing cluster, an available capacity of the storage service based on success or failure of requests; and adaptively throttling requests to the storage service based on at least a then-current available capacity.
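Inferring available capacity from request success or failure is the classic AIMD (additive-increase, multiplicative-decrease) pattern, sketched below. AIMD, the rate bounds, and the step sizes are illustrative choices; the abstract does not prescribe a specific control law.

```python
class AdaptiveThrottle:
    """Estimate a shared storage service's spare capacity from outcomes:
    creep the allowed request rate up on success, cut it multiplicatively
    when the service rejects a request (e.g. a throttling error)."""
    def __init__(self, rate=10.0):
        self.rate = rate                               # allowed requests/second

    def record(self, succeeded: bool) -> float:
        """Update the allowed rate from the outcome of one request."""
        if succeeded:
            self.rate = min(1000.0, self.rate + 1.0)   # additive increase
        else:
            self.rate = max(1.0, self.rate / 2.0)      # multiplicative decrease
        return self.rate
```

Because other tenants share the capacity, the rate never settles permanently: it probes upward until rejections reveal the then-current ceiling, then backs off quickly.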
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
H04L 67/1012 - Server selection for load balancing based on compliance of requirements or conditions with available server resources
Techniques are described for processing packets and enforcing network policies/rules across different network layers. Instead of having to create rules and policies for each of the different network layers and manually specifying where and what devices should enforce the rules/policies, techniques described herein are directed at allowing users to create a simple policy that integrates the different network layers. In some examples, the different network layers are defined by the Open Systems Interconnection (OSI) Model.
Techniques are disclosed for enforcing isolation in a cluster of computing nodes configured for executing containerized applications. The system receives a request from a requesting entity for access to a target namespace. The request is accompanied by a token. Based on the token, the system identifies a namespace that corresponds to an isolation namespace. To determine if the request is attempting to breach isolation, the system compares the target namespace to the corresponding namespace. If the target namespace is not the corresponding namespace, the system concludes that the request is attempting to breach isolation, and, therefore, denies the request. If the request is not attempting to breach isolation, the system determines if the request is allowed by any permissions that have been granted to the requesting entity. If the request is not allowed by a permission granted to the requesting entity, the system denies the request.
For database high availability and for accelerated recovery of a failed replica of a database, a storage computer is dynamically allocated and temporarily persists database content modifications until the database replica is ready to receive the modifications. The storage computer is not allocated storage that stores the database. The storage computer persists a recent portion of the database and later receives a request to synchronize the recovering replica. During recovery, the storage computer responsively sends the portion of the database to the recovering replica. For acceleration, recovery herein does not entail content interpretation such as replay of a redo log. For horizontally scaled acceleration involving two distinct storage computers per recovering replica, multiple replicas are concurrently recovered by respective storage computers that each receives recovered database content only from a respective distinct other storage computer.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
89.
PROCESS SECURITY CAPABILITY REQUIREMENTS IDENTIFICATION
A framework for determining capabilities for execution of a system call by a container and/or process within a computing system. For example, techniques for determining capabilities prerequisite for execution of a system call and determining whether the system call has been assigned the capabilities prerequisite for its execution.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR IDENTIFYING, TO NETWORK FUNCTION (NF) DISCOVERY SERVICE CONSUMERS, NF DISCOVERY RESULTS IMPACTED BY AN NF PROFILE UPDATE
A method for identifying cached NF discovery results impacted by an NF profile update to NF discovery service consumers includes receiving an NF discovery request including query parameters, generating an NF discovery response, communicating the NF discovery response to the NF discovery service consumer, and caching information associated with the NF discovery response. The method further includes receiving an NF update message for changing a value of at least one attribute of an NF profile of a producer NF, determining that the change in the value of at least one attribute of the NF profile of the producer NF would have impacted the NF discovery response, updating the cached information associated with the NF discovery response, and communicating, to the NF discovery service consumer, a notification message including the NF profile of the producer NF and identifying the NF discovery response as an impacted NF discovery response.
Techniques for operating a chatbot system for enterprise-level conversational agents are disclosed. These techniques are performed by an application or cloud service executing on one or more computing devices. An enterprise system can deploy conversational agents onto user devices to run as chat interfaces for logging analytics question-answering. One example application or cloud service may be a multi-model chat mechanism configured to support these chat interfaces with backend functionality. In response to an incoming question, the chat mechanism first consolidates the question with any conversation history and then classifies the user's question as a question regarding unstructured document data, a question regarding structured log data, or a hybrid question. Based on the classification, the chat mechanism can generate an appropriate large language model (LLM) response.
For database high availability and for accelerated recovery of a failed replica of a database, a storage computer is dynamically allocated and temporarily persists database content modifications until the database replica is ready to receive the modifications. The storage computer is not allocated storage that stores the database. The storage computer persists a recent portion of the database and later receives a request to synchronize the recovering replica. During recovery, the storage computer responsively sends the portion of the database to the recovering replica. For acceleration, recovery herein does not entail content interpretation such as replay of a redo log. For horizontally scaled acceleration involving two distinct storage computers per recovering replica, multiple replicas are concurrently recovered by respective storage computers that each receives recovered database content only from a respective distinct other storage computer.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
93.
Machine Learning Based Overbooking Limit Optimization
Embodiments optimize hotel room reservations for hotel rooms of a hotel. Embodiments receive pending hotel room reservations, the pending hotel room reservations including individual reservations and group reservations. Using a first trained machine learning (“ML”) model, embodiments predict a first cancellation probability for each of the individual reservations. Using a second trained ML model, embodiments predict a second cancellation probability for each of the group reservations. Based on the first cancellation probabilities and the second cancellation probabilities, embodiments build a probability distribution for the pending hotel room reservations and, based on an occupancy forecast for the hotel, embodiments determine an overbooking limit for one or more categories of the hotel rooms.
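The limit-setting step can be sketched with a heavy simplification: a single shared cancellation probability (standing in for the two ML models' per-reservation predictions) and a binomial show distribution. The 5% overflow-risk tolerance is likewise an illustrative assumption.

```python
from math import comb

def p_overflow(n: int, p_show: float, capacity: int) -> float:
    """P(shows > capacity) when each of n bookings independently shows
    with probability p_show (Binomial tail)."""
    return sum(comb(n, k) * p_show**k * (1 - p_show)**(n - k)
               for k in range(capacity + 1, n + 1))

def overbooking_limit(capacity: int, p_cancel: float,
                      risk: float = 0.05, max_extra: int = 50) -> int:
    """Largest booking limit that keeps the probability of more shows
    than rooms within the accepted risk."""
    limit = capacity
    for extra in range(1, max_extra + 1):
        if p_overflow(capacity + extra, 1.0 - p_cancel, capacity) > risk:
            break
        limit = capacity + extra
    return limit
```

With 10 rooms and a 30% cancellation rate, accepting an 11th booking keeps the overflow probability under 5%, while a 20% cancellation rate does not justify any overbooking at that risk level.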
Techniques for evaluating the output of a large language model are disclosed. A training data set that includes deterministic computational metrics that measure features of large language model output and qualitative metrics that provide a non-deterministic measure of large language model output quality may be used to train an ML model. The ML model may then be used to estimate the qualitative metrics of large language model output by using deterministic computational metrics as input.
Techniques for creating, managing, and using SSH certificates with one or more target-specific principals are disclosed. A certificate authority receives a certificate signing request that includes both a user identifier and a resource identifier. The user identifier identifies a user, and the resource identifier represents one or more target hosts. The certificate authority forms a target-specific principal for use in creating the certificate. The target-specific principal indicates both the user and the resource identifier representing the resource(s) for which access is requested. The resource identifier may represent a host class associated with more than one host. Once the certificate authority verifies that the user is entitled to access the requested resource(s), it generates the certificate, signs it, and returns it to the requesting device.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A central text repository may maintain, track, update, and modify text centrally that may then be distributed to applications to be used at runtime. The central text repository allows anyone involved in the software design process or lifecycle to edit, update, and/or correct text strings that are used in various applications. This allows updates to be rapidly pushed out to runtime applications without requiring the code bases of those applications to be accessed at all. Instead, a change may be made centrally, and new resource bundles of text strings may be made available for runtime download and usage by these applications. This effectively separates the storage and maintenance of text strings from the underlying applications. Hierarchies and modifiers may be used to override and inherit different text usages, languages, and so forth.
Apparatuses, systems, and other embodiments associated with a specialized antenna expansion card for EMI fingerprint characterization of target computing systems are described. In one embodiment, an antenna expansion card includes a nonconductive frame, a planar antenna printed in conductive material on a dielectric substrate and supported by the nonconductive frame, an I/O bracket affixed to the nonconductive frame, and a connector communicably coupled to the planar antenna and accessible from an exterior surface of the I/O bracket. In one embodiment, a planar antenna PCB includes a substrate conforming to dimensional specifications of an expansion card, a triangular antenna region on the substrate flanked by ground regions separated from the antenna region by gaps that progressively widen, and a connector communicably coupled to the antenna region. In one embodiment, a computer system includes a chassis, EMI-generating components, and an antenna expansion card installed within the chassis in an expansion slot.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
98.
EMI ANOMALY DETECTION IN COMPUTER SYSTEMS USING ANTENNA IN EXPANSION CARD FORM FACTOR
Systems, methods, and other embodiments associated with a specialized antenna expansion card for EMI fingerprint characterization of target computing systems are described. In one embodiment, a method for EMI scanning using a broadband antenna expansion card installed within a target computer includes causing the target computer to execute a test pattern of computer operations. The method includes taking readings of radiofrequency EMI generated by execution of the test pattern through the broadband antenna card that is installed within a chassis of the target computer. The method includes detecting that hardware of the target computer system is behaving anomalously based on a dissimilarity between the readings of radiofrequency EMI and machine learning estimates of radiofrequency EMI for nominal operation of a reference computer system. Finally, the method includes generating an electronic alert that the hardware of the target computer system is behaving anomalously.
G01R 29/08 - Measuring electromagnetic field characteristics
G06F 21/73 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by creating or determining hardware identification, e.g. serial numbers
99.
Graphical User Interface (GUI) For Triggering The Application Of A Generative Artificial Intelligence (AI) Model To Generate Insight-Based Content In A User-Selected Target Region Of The GUI
Techniques for generating content using generative artificial intelligence (AI) are disclosed. A system detects a user interaction with a graphical user interface (GUI) to drag-and-drop an insight from one region of the GUI into another region of the GUI. Based on detecting the drag-and-drop action, the system identifies a set of underlying data associated with the insight. The system generates a prompt for a generative AI model based on a portion of the underlying data. The system presents content generated by the generative AI model in the region of the GUI into which the user dragged-and-dropped the insight.
The present disclosure relates to efficiently constructing in-memory HNSW vector indexes in a database management system (DBMS). A DBMS may store in memory a segmented array, wherein the segmented array may store representations of a plurality of vectors from a database. A hierarchical navigable small world (HNSW) vector index may be constructed in the memory, wherein the HNSW vector index may index the plurality of vectors. A similarity search may be performed using the HNSW vector index.
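The role of the segmented array can be sketched as follows: appending a vector never reallocates or moves earlier segments, so HNSW graph nodes can hold stable `(segment, offset)` references to vectors. The class shape and reference encoding are illustrative assumptions, not the claimed DBMS structure.

```python
class SegmentedArray:
    """Append-only array of fixed-size segments: growth adds a new segment
    instead of reallocating, so existing vector references stay valid."""
    def __init__(self, segment_size: int = 1024):
        self.segment_size = segment_size
        self.segments = []

    def append(self, vector) -> tuple:
        """Store a vector; return a stable (segment, offset) reference."""
        if not self.segments or len(self.segments[-1]) == self.segment_size:
            self.segments.append([])       # open a new segment; old ones stay put
        self.segments[-1].append(vector)
        return (len(self.segments) - 1, len(self.segments[-1]) - 1)

    def __getitem__(self, ref):
        seg, off = ref
        return self.segments[seg][off]
```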