Systems and methods are disclosed for implementing a cloud based network function. In certain embodiments, a method may comprise operating a custom operator in a containerized software environment such as Kubernetes to manage a virtual network interface controller (Vnic) on an application pod, the Vnic being reachable directly from a network external to the containerized software environment. The method may include identifying the application pod to which to add the Vnic, determining a worker node in the containerized software environment on which the application pod is running, creating the Vnic on the worker node, and executing a job on the worker node to inject the Vnic into the application pod.
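As a rough sketch of the reconcile flow this abstract describes (identify the pod, find its worker node, create the Vnic, then run a node-pinned injection job), here is a minimal Python version using the kubernetes client. The example.com/vnics custom resource and the injector image are invented placeholders, not the patented implementation.

```python
# Hedged sketch of the operator's reconcile flow; CRD names and images are hypothetical.
from kubernetes import client, config

def reconcile_vnic(pod_name: str, namespace: str = "default") -> None:
    config.load_kube_config()  # load_incluster_config() when run inside the cluster
    core = client.CoreV1Api()

    # 1. Identify the application pod and the worker node it is running on.
    pod = core.read_namespaced_pod(pod_name, namespace)
    node = pod.spec.node_name

    # 2. Create the Vnic on that worker node, modeled here as a custom resource.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="example.com", version="v1", namespace=namespace, plural="vnics",
        body={
            "apiVersion": "example.com/v1",
            "kind": "Vnic",
            "metadata": {"name": f"{pod_name}-vnic"},
            "spec": {"node": node, "pod": pod_name},
        },
    )

    # 3. Execute a job pinned to the same node to inject the Vnic into the
    #    pod's network namespace (the injector image is a placeholder).
    job = client.V1Job(
        api_version="batch/v1", kind="Job",
        metadata=client.V1ObjectMeta(name=f"{pod_name}-vnic-inject"),
        spec=client.V1JobSpec(template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                node_name=node, restart_policy="Never",
                containers=[client.V1Container(
                    name="inject", image="example/vnic-injector:latest",
                    args=["--pod", pod_name, "--vnic", f"{pod_name}-vnic"],
                )],
            ),
        )),
    )
    client.BatchV1Api().create_namespaced_job(namespace, job)
```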
A secure, modular multi-tenant machine learning platform is configured to: receive first untrusted code supplied by a first tenant; perform a security scan of the first untrusted code to determine whether it satisfies a set of one or more security requirements; responsive to determining that the first untrusted code satisfies the security requirement(s): deploy the first untrusted code to a runtime execution environment, and deploy a machine learning model associated with the first tenant to the runtime execution environment, the first untrusted code being configured to perform one or more functions using the machine learning model; receive second untrusted code supplied by a second tenant; perform a security scan of the second untrusted code to determine whether it satisfies the security requirement(s); and responsive to determining that the second untrusted code does not satisfy the security requirement(s): refrain from deploying the second untrusted code to the runtime execution environment.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
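The scan-then-deploy gate above can be illustrated with a toy scanner. The banned-import check below is an invented stand-in for whatever security requirements the platform actually enforces; only the gating structure is the point.

```python
# Illustrative scan-then-deploy gate; the scanner policy is a placeholder.
import ast

BANNED_MODULES = {"os", "subprocess", "socket"}  # example policy, not exhaustive

def passes_security_scan(source: str) -> bool:
    """Return True only if the untrusted code imports no banned modules."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable code fails the scan outright
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] in BANNED_MODULES for alias in node.names):
                return False
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in BANNED_MODULES:
                return False
    return True

def handle_tenant_upload(tenant: str, source: str, deploy) -> bool:
    """Deploy tenant code (and its model) only when the scan passes."""
    if passes_security_scan(source):
        deploy(tenant, source)   # deploy code plus the tenant's ML model
        return True
    return False                 # refrain from deploying failing code

if __name__ == "__main__":
    ok = handle_tenant_upload("tenant-a", "import math\n", lambda t, s: None)
    bad = handle_tenant_upload("tenant-b", "import subprocess\n", lambda t, s: None)
    print(ok, bad)  # True False
```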
Systems and methods are disclosed for implementing cloud network service management. In certain embodiments, a method may comprise operating a cloud native application (CnApp) custom operator in a containerized software environment to dynamically manage cloud native network service on a target application pod via a persistent network interface to an external network. The method may include obtaining a first resource definition data, for a first custom resource, to define attributes for a bundle of resources used to implement the cloud native network service, and creating the first custom resource based on the first resource definition data, including initializing the target application pod. The method may include generating a second resource definition data, derived from the first resource definition data, to define attributes for a virtual network interface to associate with the target application pod, and applying the second resource definition data to initialize creation of a second custom resource.
The technology disclosed herein enables a service manager of a container orchestration platform to handle failovers of pods executing an application in a high availability mode. In a particular example, a method includes receiving pod information including unique application identifiers generated by the application and indications of which of the pods are active and which are standby. The method further includes configuring service objects provided by the container orchestration platform to each correspond to respective ones of the pods based on the unique application identifiers. The method also includes receiving updated pod information indicating that a first pod of the pods, which was on standby, is now active and has a first application identifier, of the unique application identifiers, previously assigned to a second pod that failed. Additionally, the method includes reconfiguring a service object associated with the first application identifier to correspond to the first pod instead of the second pod.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
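One hedged reading of the reconfiguration step (re-keying a service object from a failed pod to the newly active pod carrying the same application identifier) can be sketched with the Python kubernetes client. The app-id label key and the svc- naming scheme are assumptions for illustration.

```python
# Sketch: move the identifier label and let the Service select the new pod.
from kubernetes import client, config

def repoint_service(app_id: str, new_pod: str, failed_pod: str,
                    namespace: str = "default") -> None:
    config.load_kube_config()
    core = client.CoreV1Api()

    # Move the unique application identifier label from the failed pod to the
    # newly active one (patching a label to None removes it).
    core.patch_namespaced_pod(failed_pod, namespace,
                              {"metadata": {"labels": {"app-id": None}}})
    core.patch_namespaced_pod(new_pod, namespace,
                              {"metadata": {"labels": {"app-id": app_id}}})

    # The service object keyed by the identifier now selects the new pod, so
    # clients addressing the identifier are unaware of the failover.
    core.patch_namespaced_service(f"svc-{app_id}", namespace,
                                  {"spec": {"selector": {"app-id": app_id}}})
```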
A client cookie management system is disclosed that includes capabilities for securely managing a session between a web-based application and a user interacting with the web-based application using session cookies. The system receives a request from a user to access a resource provided by a web server and forwards the request to the web server. The web server generates a session cookie comprising a session identifier associated with a session created for the user. The system receives the session cookie from the web server and generates a new session cookie comprising a new session identifier and transmits the new session cookie to the client application. The system receives a second request to access a different resource from the client application. The second request comprises the new session cookie. Upon determining that the new session cookie is not modified, the system transmits the second request to the web server.
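A minimal sketch of the cookie-swapping behavior described above follows, with an HMAC signature standing in for whatever integrity check the system uses to decide a cookie is "not modified"; key handling and storage are deliberately simplified.

```python
# Sketch: the backend session ID is never exposed; clients get a signed new ID.
import hashlib, hmac, secrets

SECRET = secrets.token_bytes(32)        # signing key held by the cookie manager
_sessions: dict[str, str] = {}          # new session ID -> web server's session ID

def _sign(value: str) -> str:
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()

def wrap_backend_cookie(backend_session_id: str) -> str:
    """Called when the web server returns its session cookie: mint a new ID."""
    new_id = secrets.token_urlsafe(24)
    _sessions[new_id] = backend_session_id
    return f"{new_id}.{_sign(new_id)}"  # cookie value sent to the client

def unwrap_client_cookie(cookie: str) -> str | None:
    """Called on later requests: forward only if the cookie is unmodified."""
    new_id, _, sig = cookie.rpartition(".")
    if not new_id or not hmac.compare_digest(sig, _sign(new_id)):
        return None                     # modified cookie: do not forward
    return _sessions.get(new_id)

if __name__ == "__main__":
    cookie = wrap_backend_cookie("backend-abc123")
    assert unwrap_client_cookie(cookie) == "backend-abc123"
    assert unwrap_client_cookie("tampered." + cookie.split(".")[1]) is None
```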
Techniques are disclosed for generating a topology of components based on a set of components provided by a user. The system identifies, for each particular component of the first set of components, one or more characteristics. The characteristics may include at least one of: a rule associated with the particular component, a requirement associated with the particular component, a data input type corresponding to the particular component, and data output type corresponding to the particular component. Based on the characteristics, the system determines that an additional component not included in the first set of components is required for connecting the first set of components. The system selects the additional component and determines a topology of components that includes the first set of components and the additional component. The system also determines a dataflow between components in the topology of components.
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
H04L 45/76 - Routing in software-defined topologies, e.g. routing between virtual machines
7.
MINIMIZING TRANSACTION COORDINATOR INTERACTIONS FOR DISTRIBUTED TRANSACTIONS
Techniques are provided for reducing coordinator roundtrips during a distributed transaction while still ensuring that the transaction coordinator has all the information the coordinator needs to finalize the distributed transaction. The techniques involve inserting “finalization information” from the participants of a transaction in the return messages sent by the participants of the transaction. The finalization information that is included in the return messages sent by each participant is information that enables a coordinator to confirm/cancel or commit/rollback the changes made by the participant, and any participants that are downstream from the participant. When the participants are done, the finalization information obtained from the return messages is provided to the coordinator to allow the coordinator to finalize the distributed transaction.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
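The piggybacking idea in the preceding abstract can be made concrete with a small sketch: each participant's reply carries its own commit/rollback handles plus those collected from participants downstream of it, so the coordinator finalizes in a single pass. The message shapes here are illustrative, not the patented wire format.

```python
# Sketch: finalization information rides along in participant return messages.
from dataclasses import dataclass, field

@dataclass
class Reply:
    result: object
    finalization: list[dict] = field(default_factory=list)  # commit/rollback handles

def participant_work(name: str, downstream: list[Reply]) -> Reply:
    # Do the local change, then attach this participant's finalization info
    # plus everything collected from participants downstream of it.
    info = [{"participant": name, "commit": f"{name}:commit", "rollback": f"{name}:rollback"}]
    for reply in downstream:
        info.extend(reply.finalization)
    return Reply(result=f"{name}-done", finalization=info)

def coordinator_finalize(root_reply: Reply, commit: bool) -> list[str]:
    # One pass over the accumulated finalization info finalizes the whole
    # distributed transaction; no per-participant round trips are needed.
    key = "commit" if commit else "rollback"
    return [entry[key] for entry in root_reply.finalization]

if __name__ == "__main__":
    leaf = participant_work("storage", [])
    root = participant_work("orders", [leaf])
    print(coordinator_finalize(root, commit=True))  # ['orders:commit', 'storage:commit']
```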
Various embodiments of the present technology generally relate to systems and methods for managing configuration data in a virtual or containerized software environment. A configuration data management system may enable ConfigMaps to be added to an application pod of a virtual software environment without restarting the application pod, a ConfigMap including a data object containing configuration data. The configuration data management process may monitor for creation of a first ConfigMap in the virtual software environment, append a name of the first ConfigMap to a data element name from the first ConfigMap to produce an appended data element, and store the appended data element to a super ConfigMap, the super ConfigMap including a specialized ConfigMap configured to contain data elements from multiple ConfigMaps.
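The key-namespacing step is easy to show in miniature. The dotted cm-name.key scheme below is one possible way of appending a ConfigMap's name to a data element name, not necessarily the one claimed.

```python
# Toy illustration: merge many ConfigMaps into one "super ConfigMap" by
# namespacing each key with its source ConfigMap's name to avoid collisions.
def merge_into_super(super_cm: dict, cm_name: str, cm_data: dict) -> dict:
    for key, value in cm_data.items():
        super_cm[f"{cm_name}.{key}"] = value  # appended data element name
    return super_cm

if __name__ == "__main__":
    super_cm: dict = {}
    merge_into_super(super_cm, "db-config", {"host": "db1", "port": "5432"})
    merge_into_super(super_cm, "cache-config", {"host": "redis1"})
    print(super_cm)
    # {'db-config.host': 'db1', 'db-config.port': '5432', 'cache-config.host': 'redis1'}
```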
Systems and methods are disclosed for implementing a virtual IP for a container pod. In certain embodiments, a method may comprise operating a cloud based network system in a containerized software environment to assign a virtual internet protocol (VIP) address to an application pod of a containerized software environment, the VIP being directly reachable from a network external to the containerized software environment. The method may include reserving a range of internet protocol (IP) addresses for use as VIP addresses, assigning a first fixed IP address to a first application pod, assigning a first VIP address from the range of IP addresses to the first application pod, and routing traffic directed to the first VIP address to the first fixed IP address.
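A toy version of the VIP bookkeeping follows, using the standard ipaddress module: reserve a VIP range, hand out VIPs, and keep a VIP-to-fixed-IP routing map. The CIDR ranges and addresses are examples only.

```python
# Sketch: reserve a VIP range, assign VIPs to pods, route VIP traffic to fixed IPs.
import ipaddress

class VipManager:
    def __init__(self, vip_cidr: str):
        self._pool = iter(ipaddress.ip_network(vip_cidr).hosts())  # reserved VIP range
        self._routes: dict[str, str] = {}                          # VIP -> fixed pod IP

    def assign(self, pod_fixed_ip: str) -> str:
        vip = str(next(self._pool))
        self._routes[vip] = pod_fixed_ip
        return vip

    def route(self, dst_ip: str) -> str:
        # Traffic to a VIP is forwarded to the pod's fixed IP; other traffic passes through.
        return self._routes.get(dst_ip, dst_ip)

if __name__ == "__main__":
    mgr = VipManager("10.200.0.0/29")
    vip = mgr.assign("192.168.1.17")   # first application pod's fixed IP
    print(vip, "->", mgr.route(vip))   # e.g. 10.200.0.1 -> 192.168.1.17
```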
A system synchronizes a server-side DOM tree and a browser-side DOM tree with one another. The server may receive, from a browser, a hash value of the browser-side DOM tree and a server-side update instruction for applying a first server-side update to the server-side DOM tree to synchronize with a first browser-side update made by the browser to the browser-side DOM tree. The server may identify the server-side DOM tree based on the hash value. The server may execute, upon the server-side DOM tree, the first server-side update and a second server-side update that is triggered by the first server-side update. The server may compute a browser-side update instruction for applying a second browser-side update to the browser-side DOM tree to synchronize with the server-side DOM tree. The server may transmit the browser-side update instruction to the browser, and the browser may apply the second browser-side update to the browser-side DOM tree.
Techniques are described for providing session management functionalities using an access token (e.g., an Open Authorization (OAuth) access token). Upon successful user authentication, a session (e.g., a single sign-on session) is created for the user along with a user identity token that includes information identifying the session. The user identity token is presentable in an access token request sent to an access token issuer authority (e.g., an OAuth server). Upon receiving the access token request, the user identity token is parsed to identify and validate the session against information stored for the session. The validation can include various session management-related checks. If the validation is successful, the token issuer authority generates the access token. In this manner, the access token that is generated is linked to the session. The access token can then be used by an application to gain access to a protected resource.
Subject level privacy attack analysis for federated learning may be performed. A request that selects an analysis of one or more inference attacks may be received to determine a presence of data of a subject in a training set of a federated machine learning model. The selected inference attacks may be performed to determine the presence of the data of the subject in the training set of the federated machine learning model. Respective success measurements may be generated for the selected inference attacks based on the performance of the selected inference attacks, which may then be provided.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
A processor may implement self-relative memory addressing by providing load and store instructions that include self-relative addressing modes. A memory address may contain a self-relative pointer, where the memory address stores a memory offset that, when added to the memory address, defines another memory address. The self-relative addressing mode may also support invalid memory addresses using a reserved offset value, where a load instruction providing the self-relative addressing mode may return a NULL value or generate an exception when determining that the stored offset value is equal to the reserved offset value and where a store instruction providing the self-relative addressing mode may store the reserved offset value when determining that the pointer is an invalid or NULL memory address.
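Self-relative addressing can be emulated over a flat buffer to show the semantics the abstract describes: a pointer slot at address A stores target minus A, and a reserved offset value encodes NULL. The reserved value chosen below is an assumption.

```python
# Emulated self-relative load/store over a bytearray "memory".
import struct

RESERVED_NULL = 1  # assumed sentinel: odd offsets cannot occur for 8-byte-aligned targets
mem = bytearray(64)

def store_selfrel(addr: int, target: int | None) -> None:
    # Store the offset (target - addr); NULL stores the reserved offset value.
    off = RESERVED_NULL if target is None else target - addr
    struct.pack_into("<q", mem, addr, off)

def load_selfrel(addr: int) -> int | None:
    # Load the offset and rebase it on the slot's own address; the reserved
    # offset loads as NULL (a real ISA might instead raise an exception).
    off = struct.unpack_from("<q", mem, addr)[0]
    return None if off == RESERVED_NULL else addr + off

if __name__ == "__main__":
    store_selfrel(0, 48)            # slot at 0 points at offset 48
    store_selfrel(8, None)          # slot at 8 holds a NULL self-relative pointer
    print(load_selfrel(0), load_selfrel(8))  # 48 None
```

Because each slot stores a displacement from itself, the whole buffer can be relocated or memory-mapped at a different base without fixing up any pointers, which is the usual appeal of this addressing mode.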
Herein is natural language processing (NLP) to detect an anomalous log entry using a language model that infers an encoding of the log entry from novel generation of numeric lexical tokens. In an embodiment, a computer extracts an original numeric lexical token from a variable sized log entry. Substitute numeric lexical token(s) that represent the original numeric lexical token are generated, such as with a numeric exponent or by trigonometry. The log entry does not contain the substitute numeric lexical token. A novel sequence of lexical tokens that represents the log entry and contains the substitute numeric lexical token is generated. The novel sequence of lexical tokens does not contain the original numeric lexical token. The computer hosts and operates a machine learning model that generates, based on the novel sequence of lexical tokens that represents the log entry, an inference that characterizes the log entry with unprecedented accuracy.
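One plausible reading of the substitution step, replacing each numeric literal with sign/leading-digit/exponent tokens so the language model sees a small closed vocabulary instead of unbounded literals, is sketched below; the token scheme actually claimed may differ.

```python
# Sketch: substitute magnitude-style tokens for numeric lexical tokens in a log entry.
import math, re

def substitute_numbers(log_entry: str) -> list[str]:
    tokens = []
    for tok in log_entry.split():
        if re.fullmatch(r"-?\d+(\.\d+)?", tok):
            val = float(tok)
            if val == 0:
                tokens.append("<NUM_ZERO>")
            else:
                exp = math.floor(math.log10(abs(val)))      # decimal exponent
                lead = int(abs(val) / 10 ** exp)            # leading digit 1-9
                sign = "-" if val < 0 else "+"
                tokens.append(f"<NUM{sign}{lead}e{exp}>")   # substitute token
        else:
            tokens.append(tok)
    return tokens

if __name__ == "__main__":
    print(substitute_numbers("GET /api latency 1532 ms status 200"))
    # ['GET', '/api', 'latency', '<NUM+1e3>', 'ms', 'status', '<NUM+2e2>']
```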
Techniques for selecting medical items for presentation using an artificial intelligence architecture are provided. In one technique, summary note data that is composed by a physician for a patient is received. A machine-learned (ML) language model generates, based on the summary note data, a set of feature values. A profile of the patient and a profile of the physician are identified. An ML recommendation model determines, based on the profile of the patient, the profile of the physician, and the set of feature values, a plurality of candidate medical items. An ML reinforcement learning model generates a ranking of the plurality of candidate medical items. A subset of the plurality of candidate medical items is caused to be presented on a screen of a computing device based on the ranking.
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
G06F 40/40 - Processing or translation of natural language
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 20/10 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to drugs or medications, e.g. for ensuring correct administration to patients
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
18.
REPRESENTATION AND MANAGEMENT OF INFRASTRUCTURE STATE OF A DISTRIBUTED DATABASE CLOUD SERVICE
Techniques are provided for maintaining snapshots of state information for a plurality of resource entities, within a distributed cloud service. A first snapshot of state information for the plurality of resource entities is maintained at an endpoint in persistent storage. A request to modify resources allocated to resource entities is received. A virtual lock on the state information reflected in the first snapshot is obtained. Upon obtaining the virtual lock, a service determines, based on the first snapshot, that there are available resources to perform the request to modify resources. A second snapshot of the state information, reflecting the modification of resources allocated to resource entities, is then generated and stored at the endpoint in persistent storage. The virtual lock on the state information is released and the resources allocated to the resource entities are modified according to the request.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Systems and methods are disclosed for implementing a virtual IP for a container pod. In certain embodiments, a method may comprise operating a cloud based network system in a containerized software environment to assign a virtual internet protocol (VIP) address to an application pod of a containerized software environment, the VIP being directly reachable from a network external to the containerized software environment. The method may include reserving a range of internet protocol (IP) addresses for use as VIP addresses, assigning a first fixed IP address to a first application pod, assigning a first VIP address from the range of IP addresses to the first application pod, and routing traffic directed to the first VIP address to the first fixed IP address.
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
Systems and methods are disclosed for implementing cloud network service management. In certain embodiments, a method may comprise operating a cloud native application (CnApp) custom operator in a containerized software environment to dynamically manage cloud native network service on a target application pod via a persistent network interface to an external network. The method may include obtaining a first resource definition data, for a first custom resource, to define attributes for a bundle of resources used to implement the cloud native network service, and creating the first custom resource based on the first resource definition data, including initializing the target application pod. The method may include generating a second resource definition data, derived from the first resource definition data, to define attributes for a virtual network interface to associate with the target application pod, and applying the second resource definition data to initialize creation of a second custom resource.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
Techniques are described herein for performing thread-local garbage collection. The techniques include automatic profiling and separation of private and shared objects, allowing for efficient reclamation of memory local to threads. In some embodiments, threads are assigned speculatively-private heaps within memory. Unless there is a prior indication that an allocation site yields shared objects, a garbage collection system may assume and operate as if such allocations are private until proven otherwise. Object allocations in a private heap may violate the speculative state of the heap when reachable outside of the thread. When violations to the speculative state are detected, an indication may be generated to notify the garbage collection system, which may prevent thread-local memory reclamation operations until the speculative state is restored. The garbage collection system may learn from the violations to reduce the allocation of invalidly private objects and increase the efficiency of the garbage collection system.
Techniques are disclosed for extracting information from unstructured documents that enable an ML model to be trained such that the model can accurately distinguish in-distribution (“in-D”) elements and out-of-distribution (“OO-D”) elements within an unstructured document. Novel training techniques are used that train an ML model using a combination of a regular training dataset and an enhanced augmented training dataset. The regular training dataset is used to train an ML model to identify in-D elements, i.e., to classify an element extracted from a document as belonging to one of the in-D classes contained in the regular training dataset. The augmented training dataset, which is generated based upon the regular training dataset, may contain one or more augmented elements that are used to train the model to identify OO-D elements, i.e., to classify an augmented element extracted from a document as belonging to an OO-D class instead of to an in-D class.
Embodiments analyze a customer of an organization. Embodiments select the customer and receive historical data corresponding to a plurality of transactions of the customer with the organization, the historical data including, for each of the transactions, a target variable including a number of days of delayed payment for each transaction. Based on the historical data, embodiments determine a cost of a delayed payment from the customer and determine an average delay of payments of the customer. Embodiments convert the cost of delayed payments to a first Z-score and the average delay of payments to a second Z-score. Embodiments then determine a reliability score of the customer comprising determining a Euclidean distance of the first Z-score and the second Z-score.
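The arithmetic in this abstract is compact enough to work through directly: both metrics are standardized to Z-scores across customers, and the reliability score is the Euclidean distance of the two Z-scores. The customer figures below are invented.

```python
# Worked example: reliability score as the Euclidean distance of two Z-scores.
from statistics import mean, stdev
import math

def z(value: float, population: list[float]) -> float:
    return (value - mean(population)) / stdev(population)

# (cost of delayed payment, average delay in days) per customer; invented data
customers = {"acme": (1200.0, 18.0), "globex": (300.0, 4.0), "initech": (900.0, 30.0)}
costs = [c for c, _ in customers.values()]
delays = [d for _, d in customers.values()]

for name, (cost, delay) in customers.items():
    z1, z2 = z(cost, costs), z(delay, delays)
    reliability = math.hypot(z1, z2)   # Euclidean distance of the two Z-scores
    print(f"{name}: z_cost={z1:+.2f} z_delay={z2:+.2f} score={reliability:.2f}")
```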
In accordance with an embodiment, an analytic applications environment enables data analytics within the context of an organization's enterprise software application or data environment, or a software-as-a-service or other type of cloud environment; and supports the development of computer-executable software analytic applications. A data pipeline or process, such as, for example, an extract, transform, load process, can operate in accordance with an analytic applications schema adapted to address particular analytics use cases or best practices, to receive data from a customer's (tenant's) enterprise software application or data environment, for loading into a data warehouse instance. Each customer (tenant) can additionally be associated with a customer tenancy and a customer schema. The data pipeline or process populates their data warehouse instance and database tables with data as received from their enterprise software application or data environment, as defined by a combination of the analytic applications schema, and their customer schema.
A data hierarchy including individual data nodes may be used to represent a wide variety of data collections. Requests to change or add nodes in the data hierarchy may be received from many different sources over time. Instead of considering these change requests individually, an interface allows a plurality of change requests to be consolidated together into a single consolidated request. The consolidated request may be displayed in an interface such that changes from each of the original requests may be displayed together in an interface so that a cumulative effect of each of the change requests may be considered before the data changes are committed to the underlying data structure. The consolidated request may maintain links and update underlying data objects representing each of the original requests to provide a record of actions related to the consolidated request.
Systems, methods, and other embodiments associated with adaptive code scanning are described. In one example method, a valid configuration that specifies approved parameters of use for a software component is defined. Software code is scanned to detect that the software component exists in the software code. Where the component is detected, the software code is scanned to identify implemented parameters of use for the software component. The implemented parameters are compared to the approved parameters. Based on the comparison, it is determined whether the software component is implemented according to the valid configuration. Where the software component is implemented according to the valid configuration, the method automatically determines to proceed with an automated action based on the implemented parameters. The automated action is performed to adapt to the implemented parameters.
Embodiments generate a machine learning (“ML”) model. Embodiments receive training data, the training data including time dependent data and a plurality of dates corresponding to the time dependent data. Embodiments date split the training data by two or more of the plurality of dates to generate a plurality of date split training data. For each of the plurality of date split training data, embodiments split the date split training data into a training dataset and a corresponding testing dataset using one or more different ratios to generate a plurality of train/test splits. For each of the train/test splits, embodiments determine a difference of distribution between the training dataset and the corresponding testing dataset. Embodiments then select the train/test split with a smallest difference of distribution and train and test the ML model using the selected train/test split.
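A condensed sketch of the selection loop described above: try several split dates and several train/test ratios, then keep the split whose train and test target distributions differ least. A two-sample KS statistic stands in for the "difference of distribution" measure, which the abstract does not name; the data are synthetic.

```python
# Sketch: date-split selection by smallest train/test distribution difference.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
dates = np.sort(rng.integers(0, 365, size=500))          # day-of-year per record
target = rng.normal(loc=dates / 30.0, scale=1.0)         # time-dependent target

best = None
for split_day in (120, 180, 240):                        # candidate split dates
    for ratio in (0.7, 0.8):                             # candidate train/test ratios
        window = target[dates <= split_day]
        cut = int(len(window) * ratio)
        train, test = window[:cut], window[cut:]
        if len(test) < 10:
            continue
        diff = ks_2samp(train, test).statistic           # difference of distribution
        if best is None or diff < best[0]:
            best = (diff, split_day, ratio)

print("chosen split:", best)  # smallest train/test distribution difference wins
```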
Embodiments predict a target variable for accounts receivable in response to receiving historical data corresponding to a plurality of transactions corresponding to a plurality of customers, the historical data including, for each of the transactions, the target variable. Embodiments segment each of the customers based on the historical data corresponding to each of the customers, the segmenting including determining a variation of the target variable for each customer and, based on the variation, classifying each customer as having a low variation, a medium variation, or a high variation. For each low variation customer, embodiments create a regular ML model without a grace period that is trained and tested using the historical data. For each medium variation customer, embodiments create the regular ML model and create two or more grace period ML models, each grace period ML model adding a different grace period to the target variable.
Embodiments generate a machine learning ("ML") model. Embodiments receive training data, the training data including time dependent data and a plurality of dates corresponding to the time dependent data. Embodiments date split the training data by two or more of the plurality of dates to generate a plurality of date split training data. For each of the plurality of date split training data, embodiments split the date split training data into a training dataset and a corresponding testing dataset using one or more different ratios to generate a plurality of train/test splits. For each of the train/test splits, embodiments determine a difference of distribution between the training dataset and the corresponding testing dataset. Embodiments then select the train/test split with a smallest difference of distribution and train and test the ML model using the selected train/test split.
From many features and many multidimensional points, a computer generates exploratory training configurations. Each point contains a value for each of the features. Each exploratory training configuration identifies a random subset of the features and a random subset of the points. A performance score is generated for each of the exploratory training configurations. A feature weight is generated for each of the features that is based on the performance scores of the exploratory training configurations whose random subset of features contains the feature. A point weight is generated for each of the points that is based on the performance scores of the exploratory training configurations whose random subset of the many points contains the point. A machine learning model is trained using an optimized training corpus that consists of a subset of the many features based on feature weight and a subset of the many points based on point weight.
Embodiments predict a target variable for accounts receivable using a machine learning model. For a first customer, embodiments receive a plurality of trained ML models corresponding to the target variable, the plurality of trained ML models trained using the historical data and comprising a first trained model having no grace period for the target variable and two or more grace period trained models, each grace period trained model having a different grace period for the target variable. Embodiments determine a Matthews Correlation Coefficient (“MCC”) for the first trained model. When the MCC for the first trained model is low, embodiments determine the MCC for each of the grace period trained models, and when the MCC for one or more of the grace period trained models is higher than the MCC for the first trained model, embodiments select the grace period trained model having the highest MCC.
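The selection rule can be shown with scikit-learn's matthews_corrcoef. The "low" threshold, the model names, and the validation data are placeholders; each model is represented here only by its predictions on a held-out set.

```python
# Sketch: pick between a no-grace model and grace-period models by MCC.
from sklearn.metrics import matthews_corrcoef

MCC_THRESHOLD = 0.5  # assumed cutoff for treating the no-grace model's MCC as "low"

def select_model(y_true, predictions_by_model: dict[str, list[int]]) -> str:
    scores = {name: matthews_corrcoef(y_true, pred)
              for name, pred in predictions_by_model.items()}
    base = scores["no_grace"]
    if base >= MCC_THRESHOLD:
        return "no_grace"                       # regular model is good enough
    # Otherwise prefer the grace-period model with the highest MCC, if any beats it.
    best = max(scores, key=scores.get)
    return best if scores[best] > base else "no_grace"

if __name__ == "__main__":
    y = [1, 0, 1, 1, 0, 0, 1, 0]
    preds = {
        "no_grace":      [1, 1, 0, 1, 0, 1, 0, 0],   # MCC = 0.0 here: "low"
        "grace_7_days":  [1, 0, 1, 1, 0, 1, 1, 0],
        "grace_14_days": [1, 0, 1, 1, 0, 0, 1, 1],
    }
    print(select_model(y, preds))  # a grace-period model wins
```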
A key management service (KMS) in a cloud computing environment has an internal vault for cryptographic operations by an internal cryptographic key within the cloud environment and a proxy key vault communicatively coupled to an external key manager (EKM) that stores an external cryptographic key. The KMS uses a provider-agnostic application program interface (API) that permits the cloud service customer to use the same interface request and format for cryptographic operation requests regardless of whether the request is for an operation directed to an internal vault or to an external vault and regardless of the particular vendor of the external key management service operating on the external hardware device.
An identity service in a cloud environment is communicatively coupled to a proxy key vault in the cloud environment and to an external key manager (EKM) located outside of the cloud environment. The identity service receives a token request for a communication credential from the proxy key vault and verifies the request based on a client credential associated with the proxy key vault. The identity service generates the client credential and signs the communication credential with a private key associated with the EKM. The identity service transmits the signed communication credential to the proxy key vault. The communication credential can be used to substantiate cryptographic operation requests to the EKM.
The present disclosure relates to systems and methods for an adaptive pipelining composition service that can identify and incorporate one or more new models into the machine learning application. The machine learning application with the new model can be tested off-line with the results being compared with ground truth data. If the machine learning application with the new model outperforms the previously used model, the machine learning application can be upgraded and auto-promoted to production. One or more parameters may also be discovered. The new parameters may be incorporated into the existing model in an off-line mode. The machine learning application with the new parameters can be tested off-line and the results can be compared with previous results with existing parameters. If the new parameters outperform the existing parameters as compared with ground-truth data, the machine learning application can be auto-promoted to production.
A database manager is disclosed that retrieves database records having binary encoded data from a database and instantiates objects in an in-memory database. Binary encoding compresses data, allowing many subrecords to be stored in a single blob field of a database record. Retrieving chunks from storage reduces transfer time by reducing the size of data and the number of operations needed to retrieve all the subrecords.
The database manager receives database access requests from a database application. Changes made to the database objects and committed by the application are written back to the persistent database as versioned delta records. In a subsequent session, loading the database from storage includes first loading the most recent snapshot record, then applying changes to the data stored in delta records. The changes stored in the delta records are applied to the data in the snapshot record in the order in which they were made.
Fast modern interconnects may be exploited to control when garbage collection is performed on the nodes (e.g., virtual machines, such as JVMs) of a distributed system in which the individual processes communicate with each other and in which the heap memory is not shared. A garbage collection coordination mechanism (a coordinator implemented by a dedicated process on a single node or distributed across the nodes) may obtain or receive state information from each of the nodes and apply one of multiple supported garbage collection coordination policies to reduce the impact of garbage collection pauses, dependent on that information. For example, if the information indicates that a node is about to collect, the coordinator may trigger a collection on all of the other nodes (e.g., synchronizing collection pauses for batch-mode applications where throughput is important) or may steer requests to other nodes (e.g., for interactive applications where request latencies are important).
A key management service (KMS) in a cloud computing environment has an internal vault for cryptographic operations by an internal cryptographic key within the cloud environment and a proxy key vault communicatively coupled to an external key manager (EKM) that stores an external cryptographic key. The KMS uses a provider-agnostic application program interface (API) that permits the cloud service customer to use the same interface request and format for cryptographic operation requests regardless of whether the request is for an operation directed to an internal vault or to an external vault and regardless of the particular vendor of the external key management service operating on the external hardware device.
In accordance with various embodiments, described herein is a system and method for providing a traceable push-based pipeline and monitoring of data quality in extract, transform, load or other enterprise computing environments. The system can include a combination of features including one or more of: an end-to-end push-based pipeline, which uses task-based events to trigger downstream jobs; the application of a table-of-tables or control table, by which the system can trace with detail the performance of each task and corresponding data table changes; a decoupling of pipeline components across several dimensions, for example: task, data, role, and/or time; a user interface or dashboard for monitoring pipeline health or data quality over the pipeline components and dimensions; and an orchestrator that can learn from pipeline health data and task/table changes, and identify actual or potential issues involving the pipeline, including associated root causes.
A system may display a Graphical User Interface (GUI) including a source region presenting a plurality of source data-serialization elements and a destination region presenting a plurality of destination data-serialization elements. The system may receive a user input associating a first destination data-serialization element, of the plurality of destination data-serialization elements, and a first source data-serialization element of the plurality of source data-serialization elements. Responsive to receiving the user input, the system may generate and store a mapping expression that defines a mapping association between the first source data-serialization element and the first destination data-serialization element. The system may present in a mapping region of the GUI displayed concurrently with the source region and the destination region, a mapping element representing the mapping association between the first source data-serialization element and the first destination data-serialization element.
In an embodiment, a method may include accessing, by a computing system, a multi-node problem. The multi-node problem may include a plurality of nodes, each respective node having one or more node features. The method may include providing, by the computing system, each respective node with each respective node feature to a machine learning model. The method may include determining, by the computing system using the machine learning model, a subset of nodes of the plurality of nodes based at least in part on the respective node features. The method may include calculating, by the computing system, one or more solutions to the multi-node problem based at least in part on the subset of nodes. The method may include storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer tenancy, where the traffic is generated by a multi-tenancy service. The traffic can be destined to the target service. The traffic can be tagged by the multi-tenancy service with information indicating that the traffic is egressing therefrom on behalf of the customer tenancy. The customer tenancy can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network). The customer network can be associated with the egress policy. The customer can define the egress policy at different granularity levels by using different attributes. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy, based on the customer-defined attributes, on the traffic that the target service is receiving.
Techniques are disclosed for rotating network addresses following the installation of a prefab region network at a destination site. A manager service executing within a distributed computing system can allocate a rotation network address pool to a root allocator service that may be configured to provide network addresses from network address pools to dependent nodes within the distributed computing system, with each dependent node associated with a corresponding first network address of the network address pools. The manager service can receive an indication that a second network address of the rotation network address pool is associated with a dependent node. In response, the manager service can execute a migration operation for the dependent node to redirect network traffic within the distributed computing system from the first network address to the second network address.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network, or by a multi-tenancy service on behalf of the customer. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network) or by the multi-tenancy service. The customer network can be associated with the egress policy. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy on the traffic that the target service is receiving.
Techniques for enforcing an egress policy at a target service are described. In an example, traffic is generated for a customer, where the traffic is generated by a customer network of the customer, such as a customer tenancy or an on-premise network. The traffic can be destined to the target service. The traffic can be tagged by the customer network (e.g., by a gateway of the customer network). The customer network can be associated with the egress policy. The customer can define the egress policy at different granularity levels by using different attributes. The target service can determine the egress policy based on the information tagged to the traffic and can enforce the egress policy, based on the customer-defined attributes, on the traffic that the target service is receiving.
Techniques for smoothing a signal are disclosed. The system partitions a portion of a data sequence into a stable subsequence and an unstable subsequence of data points. The system applies a rate of change exhibited by the stable subsequence to the unstable subsequence to create a smoothed, more stable subsequence.
Embodiments onboard users of a cloud based application from an old authentication process to a new authentication process. Embodiments receive an onboarding time window and receive a first authentication request from a user via a browser. When the first authentication request is received during the onboarding time window, embodiments associate a cookie with the browser and use the new authentication process to authenticate the user in response to the first authentication request. When the first authentication request is received outside of the onboarding time window, embodiments use the old authentication process to authenticate the user in response to the first authentication request.
A training request including an identifier that is indicative of a type of a machine learning (ML) model that is to be trained is received. A plurality of workers are maintained in a training pool, and a plurality of jobs are maintained in a queue of training jobs. Each worker is configured to train a particular type of ML model. Upon the training request being validated, a training job is created for the request and submitted to the queue of training jobs. For each type of ML model, a first metric and a second metric is obtained. A target metric is computed based on the first and the second metrics. The number of workers included in the training pool is modified based on the target metric.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
49.
CONTROLLING ACTIONS IN A FILE SYSTEM ENVIRONMENT USING BUCKETS CORRESPONDING TO PRIORITY
A method can include receiving requests to perform actions in a file system environment. The method can include populating a first bucket with first tokens. The first bucket can be associated with actions in the file system environment. The method can include populating second buckets, which can correspond to different tenants, with corresponding second tokens based on priorities of the tenants. The second tokens may correspond to allowable actions on behalf of the tenants. Each token of the first tokens and the second tokens may be in one-to-one correspondence with a single action. The method can include prioritizing the second buckets. The method can include generating an execution list for executing the requests. The method can include executing the execution list based on the first tokens and the second tokens.
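A toy two-level scheduler matching the shape of the method above: one global bucket caps total file-system actions, per-tenant buckets sized by priority cap each tenant's share, and one token corresponds to one action. Bucket sizes and priorities are invented.

```python
# Sketch: global and per-tenant token buckets driving an execution list.
class Buckets:
    def __init__(self, global_tokens: int, tenant_tokens: dict[str, int],
                 priorities: dict[str, int]):
        self.global_tokens = global_tokens        # first bucket: all actions
        self.tenant_tokens = dict(tenant_tokens)  # second buckets, per tenant
        self.priorities = priorities

    def build_execution_list(self, requests: list[tuple[str, str]]):
        # Prioritize the second buckets: serve higher-priority tenants first,
        # spending one global token and one tenant token per admitted action.
        execution = []
        for tenant, action in sorted(requests,
                                     key=lambda r: -self.priorities.get(r[0], 0)):
            if self.global_tokens > 0 and self.tenant_tokens.get(tenant, 0) > 0:
                self.global_tokens -= 1
                self.tenant_tokens[tenant] -= 1
                execution.append((tenant, action))
        return execution

if __name__ == "__main__":
    b = Buckets(3, {"gold": 2, "bronze": 2}, {"gold": 2, "bronze": 1})
    reqs = [("bronze", "write"), ("gold", "read"), ("gold", "delete"), ("bronze", "read")]
    print(b.build_execution_list(reqs))
    # [('gold', 'read'), ('gold', 'delete'), ('bronze', 'write')] - global bucket empties
```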
Techniques for predicting a missing value in an image-type document are disclosed. A system predicts the identity of a supplier associated with an image-type document in which the supplier's identity may not be extracted by text recognition. When a system determines that the supplier identity cannot be identified using a text recognition application, the system generates a set of machine learning model input features from features extracted from the image-type document to predict the supplier's identity. One input feature is a data file bounds feature indicating whether the image-type document is a scanned document or a non-scanned document. The system predicts a value for the supplier's identity based on the data file bounds value and additional feature values, including color channel characteristics and spatial characteristics of regions-of-interest. The system generates a mapping of values to defined attributes based in part on the predicted value for the supplier's identity.
Techniques are disclosed for deploying a computing resource (e.g., a service) in response to user input. A computer-implemented method can include operations of receiving (e.g., by a gateway computer of a cloud-computing environment) a request comprising an identifier for a computing component of the cloud-computing environment. The computing device receiving the request may determine whether the identifier exists in a routing table that is accessible to the computing device. If so, the request may be forwarded to the computing component. If not, the device may transmit an error code (e.g., to the user device that initiated the request) indicating the computing component is unavailable and a bootstrap request to a deployment orchestrator that is configured to deploy the requested computing component. Once deployed, the computing component may be added to a routing table such that subsequent requests can be properly routed to and processed by the computing component.
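The route-or-bootstrap behavior described above reduces to a small lookup-with-fallback. The deployment orchestrator is stubbed below, and all identifiers, addresses, and status codes are illustrative.

```python
# Sketch: forward if routed; otherwise return an error and trigger deployment.
routing_table: dict[str, str] = {"billing": "10.0.0.5"}

def bootstrap(component_id: str) -> None:
    # Stand-in for the deployment orchestrator: deploy, then register the route.
    routing_table[component_id] = f"10.0.1.{len(routing_table)}"

def handle_request(component_id: str) -> tuple[int, str]:
    if component_id in routing_table:
        return 200, f"forwarded to {routing_table[component_id]}"
    bootstrap(component_id)           # kick off deployment of the missing component
    return 503, "component unavailable; deployment initiated"

if __name__ == "__main__":
    print(handle_request("billing"))  # (200, 'forwarded to 10.0.0.5')
    print(handle_request("reports"))  # (503, ...) - first request triggers bootstrap
    print(handle_request("reports"))  # (200, ...) - subsequent requests are routed
```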
Techniques are disclosed for providing a number of user interfaces. A computing system may execute a declarative infrastructure provisioner. The computing system may provide declarative instructions and instruct the declarative infrastructure provision to deploy a plurality of infrastructure resources and a plurality of artifacts. One example user interface may provide a global view of the plurality of infrastructure components and artifacts. Another example user interface may provide corresponding states and change activity of the plurality of infrastructure components and artifacts. Yet another user interface may be provided that presents similarities and/or differences between a locally-generated safety plan indicating first changes for a computing environment and a remotely-generated safety plan indicating second changes for the computing environment.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 9/451 - Execution arrangements for user interfaces
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/32 - Monitoring with visual indication of the functioning of the machine
G06F 11/36 - Preventing errors by testing or debugging of software
G06F 16/901 - Indexing; Data structures therefor; Storage structures
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
H04L 67/1031 - Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing
53.
MULTI-FEATURE BALANCING FOR NATURAL LANGUAGE PROCESSORS
A method includes receiving an indication of a first coverage value corresponding to a desired overlap between a dataset of natural language phrases and a training dataset for training a machine learning model; determining a second coverage value corresponding to a measured overlap between the dataset of natural language phrases and the training dataset; determining a coverage delta value based on a comparison between the first coverage value and the second coverage value; modifying, based on the coverage delta value, the dataset of natural language phrases; and processing, utilizing a machine learning model including the modified dataset of natural language phrases, an input dataset including a set of input features. The machine learning model processes the input dataset based at least in part on the dataset of natural language phrases to generate an output dataset.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
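One way to make the coverage values in the preceding abstract concrete: treat coverage as the fraction of phrases in the natural language dataset that overlap the training dataset, with the delta signaling whether phrases should be added or dropped. The exact-match, lowercased comparison is an assumption.

```python
# Sketch: measured coverage and coverage delta between two phrase datasets.
def coverage(phrases: set[str], training: set[str]) -> float:
    norm = {p.lower() for p in phrases}
    return len(norm & {t.lower() for t in training}) / len(norm)

def coverage_delta(desired: float, phrases: set[str], training: set[str]) -> float:
    return desired - coverage(phrases, training)   # >0: too little overlap; <0: too much

if __name__ == "__main__":
    phrases = {"book a flight", "cancel my order", "reset password", "track package"}
    training = {"Book a flight", "Reset password"}
    print(coverage(phrases, training))             # 0.5 measured overlap
    print(coverage_delta(0.75, phrases, training)) # +0.25 -> modify dataset to add overlap
```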
54.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR LOAD BALANCING SESSION INITIATION PROTOCOL (SIP) REGISTRATION REQUESTS USING Cx INTERFACE HEALTH STATUS
A method for load balancing session initiation protocol (SIP) registration requests using Cx health status includes monitoring, by an S-CSCF, at least one health parameter of a Cx interface associated with the S-CSCF. The method further includes determining, by the S-CSCF, a health category of the Cx interface associated with the S-CSCF based on the at least one health parameter. The method further includes sending, by the S-CSCF to an Interrogating Call Session Control Function (I-CSCF), an indication of the health category of the Cx interface associated with the S-CSCF. The method further includes load balancing, by the I-CSCF, SIP registration requests between the S-CSCF and at least one additional S-CSCF based on the received indication of the health category of the Cx interface associated with the S-CSCF.
Techniques for layout-aware multi-modal networks for document understanding are provided. In one technique, word data representations that were generated based on words that were extracted from an image of a document are identified. Based on the image, table features of one or more tables in the document are determined. One or more table data representations that were generated based on the table features are identified. The word data representations and the one or more table data representations are input into a machine-learned model to generate a document data representation for the document. A task is performed based on the document data representation. In a related technique, instead of the one or more table data representations, one or more layout data representations that were generated based on a set of layout features, of the document, that was determined based on the image are identified and input into the machine-learned model.
Techniques for generating high-precision localization of a moving object on a trajectory are provided. In one technique, a particular image that is associated with a moving object is identified. A set of candidate images is selected from a plurality of images that were used to train a neural network. For each candidate image in the set of candidate images: (1) output from the neural network is generated based on inputting the particular image and said each candidate image to the neural network; (2) a predicted position of the particular image is determined based on the output and a position that is associated with said each candidate image; and (3) the predicted position is added to a set of predicted positions. The set of predicted positions is aggregated to generate an aggregated position for the particular image.
A data corpus is partitioned into text strings for header classification. A group characteristic is computed for a text string, and whether the group characteristic satisfies a group characteristic criterion is determined. The text string may be disqualified from header classification if the group characteristic criterion is not satisfied, or one or more font characteristics may be determined for the text string if the group characteristic criterion is satisfied. A font characteristic that meets one or more prevalence criteria may be identified and evaluated to determine whether the font characteristic meets at least one font characteristic criterion. The text string may be disqualified from header classification if the font characteristic criterion is not satisfied, or if the font characteristic meets the font characteristic criterion, the text string is classified as a header, and tagged content is generated by applying a header tag to the text string.
Debiasing pre-trained sentence encoders with probabilistic dropouts may be performed by various systems, services, or applications. A sentence may be received, where the words of the sentence may be provided as tokens to an encoder of a machine learning model. A token-wise correlation using semantic orientation may be computed to determine a bias score for the tokens in the input sentence. A probability of dropout for tokens in the input sentence may be determined from the bias scores. The machine learning model may be trained or tuned based on the probabilities of dropout for the tokens in the input sentence.
A computing device may receive a first packet addressed to a destination node. The device may check a packet counter to determine if the counter exceeds a threshold, the counter recording a number of packets addressed to the destination node that have been received during a first time period. In response to the packet counter exceeding the threshold, the device may send a query to an intermediate node and generate a query flag indicating that a query has been sent to the intermediate node. A reply from the intermediate node can be received by the device. The reply can identify a set of processes that the intermediate node is configured to perform on the first packet. The set of processes can be applied by the device to the first packet.
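The counter, threshold, query flag, and process application might fit together as in this Python sketch; the threshold value, the single-period window, and the reply format are assumptions:

    # Sketch of the counter/threshold/query-flag flow; values are assumptions.
    THRESHOLD = 100

    class Device:
        def __init__(self):
            self.packet_counts = {}   # destination -> packets seen this period
            self.query_flags = set()  # destinations with an outstanding query

        def query_intermediate_node(self, dest):
            # Stand-in reply identifying processes the node performs on the packet.
            return {"processes": [lambda p: {**p, "compressed": True}]}

        def on_packet(self, packet, dest):
            self.packet_counts[dest] = self.packet_counts.get(dest, 0) + 1
            if self.packet_counts[dest] > THRESHOLD and dest not in self.query_flags:
                reply = self.query_intermediate_node(dest)  # send query
                self.query_flags.add(dest)                  # generate query flag
                for process in reply["processes"]:
                    packet = process(packet)                # apply the processes
            return packet

    d = Device()
    for _ in range(101):
        pkt = d.on_packet({"payload": "x"}, dest="10.0.0.7")
    print(pkt)  # the 101st packet crosses the threshold and gets processed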
A computer performs deduplication of an original training corpus for maintaining accuracy of accelerated training of a reconstructive or other machine learning (ML) model. Distinct multidimensional points are detected in the original training corpus that contains duplicates. Based on duplicates in the original training corpus, a respective observed frequency of each distinct multidimensional point is increased. In a reconstructive embodiment and based on a particular distinct multidimensional point as input, a reconstruction of the particular distinct multidimensional point is generated by a reconstructive ML model. Based on increasing the observed frequency of the particular distinct multidimensional point, a scaled error of the reconstruction of the particular distinct multidimensional point is increased. Based on the scaled error of the reconstruction of the particular distinct multidimensional point, accuracy of the reconstructive model is increased. In an embodiment, the reconstructive ML model is an artificial neural network that is a denoising autoencoder that detects anomalous database statements.
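A compact sketch of training on distinct points while scaling reconstruction error by observed frequency; the stubbed reconstructor and the squared-error metric are illustrative assumptions:

    # Sketch: deduplicate, then scale each distinct point's reconstruction
    # error by its observed frequency; the autoencoder is stubbed.
    from collections import Counter

    def reconstruct(point):
        # Stand-in for the reconstructive ML model (e.g. a denoising autoencoder).
        return tuple(x + 0.1 for x in point)

    def scaled_errors(training_corpus):
        freq = Counter(training_corpus)      # distinct points + observed frequencies
        errors = {}
        for point, count in freq.items():    # iterate distinct points only
            recon = reconstruct(point)
            err = sum((a - b) ** 2 for a, b in zip(point, recon))
            errors[point] = err * count      # scale error by frequency
        return errors

    corpus = [(1.0, 2.0), (1.0, 2.0), (1.0, 2.0), (5.0, 5.0)]
    print(scaled_errors(corpus))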
Some embodiments are directed to an improved approach to implement deployments where a client can receive application-level redirects to different servers when the service is running in a different cloud environment. Dynamic port mapping may be performed at runtime. Routes may be added to IP tables to implement redirects from a first cloud to a second cloud.
Techniques for layout-aware multi-modal networks for document understanding are provided. In one technique, word data representations that were generated based on words that were extracted from an image of a document are identified. Based on the image, table features of one or more tables in the document are determined. One or more table data representations that were generated based on the table features are identified. The word data representations and the one or more table data representations are input into a machine-learned model to generate a document data representation for the document. A task is performed based on the document data representation. In a related technique, instead of the one or more table data representations, one or more layout data representations that were generated based on a set of layout features, of the document, that was determined based on the image are identified and input into the machine-learned model.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
Techniques discussed herein relate to implementing a distributed computing cluster (the “cluster”) including a plurality of edge devices (e.g., devices individually configured to selectively execute within an isolated computing environment). Each of the edge devices of the cluster may be configured with a respective control plane computing component. A subset of the edge devices may be selected to operate in a distributed control plane of the computing cluster. Any suitable combination of the subset selected to operate in the distributed control plane can instruct remaining edge devices of the distributed computing cluster to disable at least a portion of their respective control planes. Using these techniques, the cluster's distributed control plane can be configured and modified to scale as the cluster grows. The edge devices of the control plane may selectively connect to a centralized cloud, alleviating the remaining edge devices from needless and costly connections.
Techniques for using user configurable reflection operations to access layered information are disclosed. A user may identify a method and type to be monitored in an application in a configuration file that is uploaded to the APM agent. The APM agent parses the configuration file to determine the method and type to be monitored and to identify any trace method. The APM agent may configure an execution environment to invoke a trace method upon the invocation of a target method. The trace method may execute a reflective operation and/or chained reflective operations to collect and report application information.
A method for providing a dedicated region cloud at customer is provided. A first physical port of a network virtualization device (NVD) included in a datacenter is communicatively coupled to a first top-of-rack (TOR) switch and a second TOR switch. A second physical port of the NVD is communicatively coupled with a network interface card (NIC) associated with a host machine. The second physical port provides a first logical port and a second logical port for communications between the NVD and the NIC. The NVD receives a packet from the host machine via the first logical port or the second logical port. Upon receiving the packet, the NVD determines a particular TOR, from a group including the first TOR and the second TOR, for communicating the packet. The NVD transmits the packet to the particular TOR to facilitate communication of the packet to a destination host machine.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR AUTOMATICALLY BINDING A SERVICE-BASED INTERFACE (SBI) COMMUNICATIONS DIGITAL CERTIFICATE LIFECYCLE TO A NETWORK FUNCTION (NF) LIFECYCLE
A method for automatically binding an SBI communications digital certificate lifecycle to an NF lifecycle includes receiving, at an NRF, an NF deregister request message for deregistering an NF. The method further includes generating, by the NRF and in response to the NF deregister request message or successful completion of deregistration of the NF, a certificate revocation request message for revoking at least one digital certificate used by the NF for SBI communications. The method further includes transmitting, by the NRF, the certificate revocation request message to a certificate authority. The method further includes receiving, by the NRF, an NF register request message identifying the NF. The method further includes determining, by the NRF, that the at least one digital certificate of the NF has been revoked. The method further includes, in response to determining that the at least one digital certificate of the NF has been revoked, performing, by the NRF, a network security action in response to the NF register request message.
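A schematic Python sketch of the binding between deregistration and certificate revocation; the message shapes and the choice of rejecting registration as the security action are assumptions:

    # Sketch: NF deregistration triggers certificate revocation, and a later
    # registration with a revoked certificate triggers a security action.
    class CA:
        def __init__(self):
            self.revoked = set()
        def revoke(self, nf_id):
            self.revoked.add(nf_id)
        def is_revoked(self, nf_id):
            return nf_id in self.revoked

    class NRF:
        def __init__(self, ca):
            self.ca = ca
            self.registered = {}

        def on_deregister(self, nf_id):
            self.registered.pop(nf_id, None)
            self.ca.revoke(nf_id)  # send certificate revocation request

        def on_register(self, nf_id):
            if self.ca.is_revoked(nf_id):
                # Network security action: refuse registration (assumed action).
                return {"status": 403, "detail": "certificate revoked"}
            self.registered[nf_id] = True
            return {"status": 201}

    nrf = NRF(CA())
    nrf.on_deregister("amf-1")
    print(nrf.on_register("amf-1"))  # -> {'status': 403, ...}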
A technique for creating a GraphQL Application Programming Interface (API) schema is disclosed. The technique includes generating a filter input object for an object defined in a GraphQL API schema. The filter input object provides the ability for a GraphQL API user (i.e., an API developer or an end user) to perform filtering operations in a query operation on schema objects defined in a GraphQL API schema. The filter input object comprises a set of object attributes and a set of custom attributes. The custom attributes provide the ability for an API developer (or an end user) to perform complex filtering operations in a query operation on schema objects defined in a GraphQL API schema. The technique further includes receiving a query operation to be performed and executing the query operation against a backend datasource to obtain a query result. The query result is transmitted to a GraphQL API user.
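To make the filter input object concrete, this Python sketch emits illustrative GraphQL SDL; the _eq/_in object attributes and the and/or/not custom attributes are assumptions about what the generated shape could look like:

    # Sketch: generate an illustrative filter input object as GraphQL SDL.
    OBJECT_FIELDS = {"name": "String", "age": "Int"}

    def filter_input_sdl(type_name, fields):
        lines = [f"input {type_name}Filter {{"]
        for field, gql_type in fields.items():      # object attributes
            lines.append(f"  {field}_eq: {gql_type}")
            lines.append(f"  {field}_in: [{gql_type}]")
        # Custom attributes enabling complex, composable filter expressions.
        lines.append(f"  and: [{type_name}Filter]")
        lines.append(f"  or: [{type_name}Filter]")
        lines.append(f"  not: {type_name}Filter")
        lines.append("}")
        return "\n".join(lines)

    print(filter_input_sdl("Person", OBJECT_FIELDS))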
Techniques are disclosed for managing aspects of identifying and/or deploying hardware of a dedicated cloud to be hosted at a customer location (a “DRCC”). A DRCC may comprise cloud infrastructure components provided by a cloud provider but hosted by computing devices located at the customer's (a “cloud owner's”) location. Services of the central cloud-computing environment may be similarly executed at the DRCC. A number of user interfaces may be hosted within the central cloud-computing environment. These interfaces may be used to track deployment and region data of the DRCC. A deployment state may be transitioned from a first state to a second state based at least in part on the tracking and the deployment state may be presented at one or more user interfaces. Using the disclosed user interfaces, a user may manage the entire lifecycle of a DRCC and its corresponding hardware components.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
H04L 67/53 - Network services using third party service providers
H04L 67/75 - Indicating network or usage conditions on the user display
70.
Machine Learning Model Selection For Forecasting Entity Energy Usage
Embodiments relate to generating time-series energy usage forecast predictions for energy consuming entities. Machine learning model(s) can be trained to forecast energy usage for different energy consuming entities. For example, a local coffee shop location and a large grocery store location are both considered retail locations; however, their energy usage over days or weeks may differ significantly. Embodiments organize energy consuming entities into different entity segments and store trained machine learning models that forecast energy usage for each of these individual entity segments. For example, a given machine learning model that corresponds to a given entity segment can be trained using energy usage data for entities that match the given entity segment. A forecast manager can generate a forecast prediction for an energy consuming entity by matching the entity to a given entity segment and generating the forecast prediction using the entity segment's trained machine learning model.
Embodiments visualize graph data including vertices interconnected by edges. Embodiments receive a selection of a source vertex and generate a filtered layer corresponding to a filter condition relative to the source vertex. Embodiments automatically place a plurality of vertices that correspond to the filter condition on the filtered layer.
A computing device may access a target code for implementing an application. The device may identify addresses for one or more functions or one or more variables associated with the target code. The device may generate an interval tree comprising a root node and one or more function nodes. In response to the target code invoking a function or variable, the device may generate an intercept function configured to intercept communication between the target code and a call address for the function or variable invoked by the target code. The device may intercept data communicated between the target code and the call address. The device may store the intercepted data as a function node in the interval tree. The device may transmit the interval tree to a user device.
Techniques for machine-learning of long-term seasonal patterns are disclosed. In some embodiments, a network service receives a set of time-series data that tracks metric values of at least one computing resource over time. Responsive to receiving the time-series data, the network service detects a subset of metric values that are outliers and associated with a plurality of timestamps. The network service maps the plurality of timestamps to one or more encodings of at least one encoding space that defines a plurality of encodings for different seasonal patterns. Based on the mapped encodings, the network service generates a representation of a seasonal pattern. Based on the representation of the seasonal pattern, the network service may perform one or more operations in association with the at least one computing resource.
Improving program execution using interprocedural escape analysis with inlining includes expanding a call graph of a target program to obtain an expanded call graph, performing, using the expanded call graph, an interprocedural escape analysis (IEA) to generate a materialization map, and calculating an inlining benefit value for a callee using the materialization map. Improving program execution further includes inlining, using the expanded call graph and in the target program, the callee according to the inlining benefit value, updating, after inlining the callee, an allocation in the target program, and completing, after updating the allocation, compilation of the target program.
In an embodiment, a computer infers, from an input (e.g. that represents a person) that contains a value of a sensitive feature that has a plurality of multipliers, a probability of a majority class (i.e. an outcome). Based on the value of the sensitive feature in the input, from the multipliers of the sensitive feature, a multiplier is selected that is specific to both of the sensitive feature and the value of the sensitive feature. The input is classified based on a multiplicative product of the probability of the majority class and the multiplier that is specific to both of the sensitive feature and the value of the sensitive feature. In an embodiment, a black-box bi-objective optimizer generates multipliers on a Pareto frontier from which a user may interactively select a combination of multipliers that provide a best tradeoff between fairness and accuracy.
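The classification step reduces to one multiplication and a threshold, as in this sketch; the multiplier table and the 0.5 decision threshold are illustrative assumptions:

    # Sketch: adjust the inferred majority-class probability by a multiplier
    # specific to the sensitive feature and its value, then classify.
    MULTIPLIERS = {("age_group", "18-25"): 1.15,   # one multiplier per value
                   ("age_group", "26-60"): 1.00,
                   ("age_group", "60+"):   0.90}

    def classify(record, p_majority, sensitive_feature="age_group"):
        value = record[sensitive_feature]
        m = MULTIPLIERS[(sensitive_feature, value)]  # value-specific multiplier
        adjusted = p_majority * m                    # multiplicative product
        return "majority" if adjusted >= 0.5 else "minority"

    print(classify({"age_group": "60+"}, p_majority=0.52))  # -> minority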
The present disclosure relates to techniques for using variant inconsistency attack (VIA) as a simple and effective adversarial attack method to create useful adversarial examples for adversarial training of machine-learning models. In one particular aspect, a method is provided that includes obtaining a set of input examples for attacking a machine-learning model (the examples do not have corresponding labels), modifying an example from the set of examples in a utility preserving manner to generate a pair of modified examples, attacking the machine-learning model with the pair of modified examples in order to generate a pair of predictions for the pair of modified examples, comparing the pair of predictions to determine whether the pair of predictions are the same or different, and in response to the pair of predictions being different, adding the pair of modified examples to a set of adversarial examples.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
77.
DISTRIBUTING ON CHIP INDUCTORS FOR MONOLITHIC VOLTAGE REGULATION
Distributions of on-chip inductors for monolithic voltage regulation are described. On-chip voltage regulation may be provided by integrated voltage regulators (IVRs), such as a buck converter with integrated inductors. On-chip inductors may be placed to ensure optimal voltage regulation for high power density applications. With this technology, integrated circuits may have many independent voltage domains for fine-grained dynamic voltage and frequency scaling that allows for higher overall power efficiency for the system.
H01L 23/00 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10 - Details of semiconductor or other solid state devices
H01L 23/522 - Arrangements for conducting electric current within the device in operation from one component to another including external interconnections consisting of a multilayer structure of conductive and insulating layers inseparably formed on the semiconductor body
H02M 1/00 - APPARATUS FOR CONVERSION BETWEEN AC AND AC, BETWEEN AC AND DC, OR BETWEEN DC AND DC, AND FOR USE WITH MAINS OR SIMILAR POWER SUPPLY SYSTEMS; CONVERSION OF DC OR AC INPUT POWER INTO SURGE OUTPUT POWER; CONTROL OR REGULATION THEREOF - Details of apparatus for conversion
H02M 1/14 - Arrangements for reducing ripples from dc input or output
H02M 3/158 - Conversion of dc power input into dc power output without intermediate conversion into ac by static converters using discharge tubes with control electrode or semiconductor devices with control electrode using devices of a triode or transistor type requiring continuous application of a control signal using semiconductor devices only with automatic control of output voltage or current, e.g. switching regulators including plural semiconductor devices as final control devices for a single load
78.
Scalable Low-Loss Disaster Recovery for Data Stores
Systems and methods are disclosed to improve disaster recovery by implementing a scalable low-loss disaster recovery for a data store. The disaster recovery system enables disaster recovery for a linearizable (e.g., externally consistent) distributed data store. The disaster recovery system also provides for a small lag on the backup site relative to the primary site, thereby reducing the data loss by providing a smaller data loss window compared to traditional disaster recovery techniques. The disaster recovery system implements a timestamp for log records based on a globally synchronized clock. The disaster recovery system also implements a watermark service that updates a global watermark timestamp that a backup node uses to apply log records.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 1/14 - Time supervision arrangements, e.g. real time clock
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Techniques for extracting data from conversations across different types of communication channels are disclosed. A system applies a set of rules to extract data from conversations based, at least in part, on a type of communication channel used for conducting the conversation. The system applies a machine learning model to recognize semantic content in conversations. The system divides conversations into conversation segments and classifies the conversation segments based on the semantic content. The system selects conversation segments to be extracted based on the semantic content and the type of communication channel over which a conversation is conducted. The system maps conversation segments from different conversations conducted on different types of communication channels to a same set of transactions.
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
80.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR USING NETWORK FUNCTION (NF) REPOSITORY FUNCTION (NRF) TO PROVIDE MAPPING OF SINGLE NETWORK SLICE SELECTION ASSISTANCE INFORMATION (S-NSSAI) FOR ROAMING AND INTER-PUBLIC LAND MOBILE NETWORK (INTER-PLMN) TRAFFIC
A method for using a network function (NF) repository function (NRF) for mapping single network slice selection assistance information (S-NSSAI) includes storing, at an NRF in a first network, an S-NSSAI mapping database including mappings between S-NSSAI values of the first network and S-NSSAI values of other networks. The method further includes receiving, from a requesting NF in a second network, a request message including a requester S-NSSAI attribute including an S-NSSAI value of the second network. The method further includes performing a lookup in the S-NSSAI mapping database and locating a mapping between the S-NSSAI value of the second network and the S-NSSAI value of the first network. The method further includes generating a response message including the S-NSSAI value of the first network. The method further includes transmitting the response message to the requesting NF in the second network.
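The NRF-side lookup could be as simple as this sketch; the mapping keys and the S-NSSAI string encoding are illustrative, not the 3GPP-defined formats:

    # Sketch of the S-NSSAI mapping lookup; table shape is an assumption.
    S_NSSAI_MAP = {  # (other-network PLMN, their S-NSSAI) -> our S-NSSAI
        ("plmn-B", "sst=1;sd=0000AB"): "sst=1;sd=0000CD",
    }

    def handle_request(requester_plmn, requester_snssai):
        local = S_NSSAI_MAP.get((requester_plmn, requester_snssai))
        if local is None:
            return {"status": 404, "detail": "no S-NSSAI mapping found"}
        return {"status": 200, "s-nssai": local}   # response message

    print(handle_request("plmn-B", "sst=1;sd=0000AB"))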
In some implementations, the techniques disclosed herein may include monitoring, by a robotic device, a physical space, the physical space having a portion of a datacenter. In addition, the techniques may include detecting, by the robotic device, a person within the physical space. The techniques may include attempting to authenticate the detected person by an authentication process that includes: prompting, by the robotic device, the detected person to authenticate themselves; receiving, by the robotic device, an authentication credential from the detected person; and determining, by the robotic device, whether the authentication of the detected person has passed. Moreover, the techniques may include in accordance with a determination that the authentication of the detected person has failed, performing an action commensurate with the determination that the authentication has failed.
Techniques for providing machine-learned (ML)-based artificial intelligence (AI) capabilities are described. In one technique, multiple AI capabilities are stored in a cloud environment. While the AI capabilities are stored, a request for a particular AI capability is received from a computing device of a user. Also, in response to receiving training data based on input from the user, the training data is stored in a tenancy, associated with the user, in the cloud environment. In response to receiving the request, the particular AI capability is accessed, a ML model is trained based on the particular AI capability and the training data to produce a trained ML model, and an endpoint, in the cloud environment, is generated that is associated with the trained ML model. The endpoint is provided to the tenancy associated with the user.
Systems, methods, and other embodiments associated with quadratic acceleration boost of compute performance for ML prognostics are described. In one embodiment, a prognostic acceleration method includes separating time series signals into a plurality of alternative configurations of clusters based on correlations between the time series signals. Machine learning models are trained for individual clusters in the alternative configurations of clusters. One or more of the alternative configurations of clusters is determined to be viable for use in a production environment based on whether the trained machine learning models for the individual clusters satisfy an accuracy threshold and a completion time threshold. Then, one configuration is selected from the alternative configurations of clusters that were determined to be viable configurations. Production machine learning models are deployed into the production environment to detect anomalies in the time series signals based on the selected configuration.
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
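A sketch of the viability screen over alternative cluster configurations described in the abstract above; the accuracy and completion-time thresholds, and choosing the viable configuration with the fewest clusters, are assumptions:

    # Sketch: keep configurations whose every per-cluster model satisfies both
    # thresholds, then select one viable configuration; values are assumptions.
    def viable_configs(configs, min_accuracy=0.95, max_seconds=3600):
        viable = []
        for cfg in configs:
            ok = all(m["accuracy"] >= min_accuracy and m["train_seconds"] <= max_seconds
                     for m in cfg["cluster_metrics"])   # every cluster must pass
            if ok:
                viable.append(cfg)
        # Selection rule (assumed): prefer the viable config with fewest clusters.
        return min(viable, key=lambda c: len(c["cluster_metrics"]), default=None)

    configs = [
        {"name": "8-clusters", "cluster_metrics": [{"accuracy": 0.97, "train_seconds": 1200}] * 8},
        {"name": "2-clusters", "cluster_metrics": [{"accuracy": 0.92, "train_seconds": 400}] * 2},
    ]
    print(viable_configs(configs)["name"])  # -> 8-clusters (2-clusters fails accuracy)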
Systems and methods implement partial evaluation of single execution methods. A native image is built from a build image compiled from source code. At image build time, a single execution method of the build image is executed to update an image heap of the native image. The single execution method is executed with a single execution for a native instance. The image heap is stored to the native image built from the build image.
In an embodiment, a computer generates a multi-sequence vector that contains a plurality of distinct sequences of distinct nodes of a parse tree of source logic. Based on the multi-sequence vector, the computer trains a logic encoder. After training and in a production environment, the logic encoder infers a fixed-size encoded logic from new source logic. Based on the fixed-size encoded logic, the new source logic is detected as anomalous by an anomaly detector. Both of the logic encoder and the anomaly detector are machine learning models and, herein, they may be separately trained. In an embodiment, the logic encoder is based on a natural language processing (NLP) language model architecture such as bidirectional encoder representations from transformers (BERT), or novel training herein may be self-supervised according to skip-gram for use with an unlabeled training corpus.
Embodiments relate to generating time-series energy usage forecast predictions for energy consuming entities. Machine learning model(s) can be trained to forecast energy usage for different energy consuming entities. For example, a local coffee shop location and a large grocery store location are both considered retail locations; however, their energy usage over days or weeks may differ significantly. Embodiments organize energy consuming entities into different entity segments and store trained machine learning models that forecast energy usage for each of these individual entity segments. For example, a given machine learning model that corresponds to a given entity segment can be trained using energy usage data for entities that match the given entity segment. A forecast manager can generate a forecast prediction for an energy consuming entity by matching the entity to a given entity segment and generating the forecast prediction using the entity segment's trained machine learning model.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
Techniques for providing machine-learned (ML)-based artificial intelligence (AI) capabilities are described. In one technique, multiple AI capabilities are stored in a cloud environment. While the AI capabilities are stored, a request for a particular AI capability is received from a computing device of a user. Also, in response to receiving training data based on input from the user, the training data is stored in a tenancy, associated with the user, in the cloud environment. In response to receiving the request, the particular AI capability is accessed, a ML model is trained based on the particular AI capability and the training data to produce a trained ML model, and an endpoint, in the cloud environment, is generated that is associated with the trained ML model. The endpoint is provided to the tenancy associated with the user.
When a coordinator of a sharded DBMS receives from a client a query that has an XML operator that references a column in a sharded table and returns an XML image having an XML image type, then the coordinator issues a remote query that uses a new operator to ensure that the shard returns a TBX BLOB having a TBX type. In response to receiving the remote query with the new operator, each shard extracts a binary large object (BLOB) out of the XML image at the shard and returns the TBX BLOB data to the coordinator. In addition, the sharded DBMS provides a make-XML operator that the coordinator uses to work with the TBX BLOB received from each shard and recreate an XML type image, which is the result that the client expects.
Systems that analyze the performance of a computing resource based on a usage information timeline are disclosed. A system detects peak activity periods occurring in the usage information of the computing resource and scores the individual peak activity periods. Based on the respective scores, the system identifies an anchor period from the peak activity periods. Using the anchor period, the system aggregates the peak activity periods around the anchor period. The aggregating includes incrementally sliding a window through the usage information around the anchor period, where each increment represents a candidate activity period. The system selects the candidate activity period that includes the peak activity periods with the greatest workload. The system allocates capacity to the computing resource based on characteristics of the selected candidate activity period.
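The anchor-and-slide aggregation might look like this sketch; peak scoring by raw usage, the window width, and the step size are assumptions:

    # Sketch: pick the highest-scoring peak as the anchor, slide a window
    # around it, and keep the candidate period with the greatest workload.
    def best_window(usage, peaks, width=3):
        anchor = max(peaks, key=lambda i: usage[i])        # anchor period
        best, best_load = None, -1.0
        lo = max(0, anchor - width + 1)
        hi = min(len(usage) - width, anchor)
        for start in range(lo, hi + 1):                    # slide around anchor
            load = sum(usage[start:start + width])         # candidate's workload
            if load > best_load:
                best, best_load = (start, start + width), load
        return best, best_load     # capacity would be sized from this window

    usage = [2, 9, 3, 8, 10, 7, 1]
    print(best_window(usage, peaks=[1, 4]))  # -> ((3, 6), 25)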
Embodiments optimize inventory assortment and allocation of a group of products, where the group of products are allocated from a plurality of different warehouses to a plurality of different retail stores. Embodiments receive historical sales data for the group of products and estimate demand model parameters of a demand model that models a demand of the group of products. Embodiments solve an optimization problem for the inventory assortment and allocation of the group of products, the optimization including a plurality of decision variables, an objective function, and a corresponding Lagrangian relaxation. The solving to generate an optimized solution includes determining a gradient of the objective function with respect to the decision variables, updating the decision variables based on a direction of the gradient and updating dual lambda variables of the Lagrangian relaxation.
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
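One projected-gradient step on a toy Lagrangian relaxation of the allocation problem described above; the concave revenue model, the single capacity constraint, and the step sizes are illustrative assumptions:

    # Sketch: ascend the primal decision variables along the gradient of the
    # Lagrangian, then update the dual (lambda) variable on the constraint.
    def step(x, lam, capacity, lr=0.1):
        # Toy objective: sum(x_i - 0.5*x_i^2) subject to sum(x) <= capacity.
        grad_x = [1.0 - xi - lam for xi in x]                    # d/dx of Lagrangian
        x = [max(0.0, xi + lr * g) for xi, g in zip(x, grad_x)]  # ascend, keep x >= 0
        lam = max(0.0, lam + lr * (sum(x) - capacity))           # dual update
        return x, lam

    x, lam = [0.0, 0.0, 0.0], 0.0
    for _ in range(200):
        x, lam = step(x, lam, capacity=1.5)
    print([round(v, 3) for v in x], round(lam, 3))  # converges near x=0.5 each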
91.
DATASOURCE INTROSPECTION USER INTERFACE FOR GRAPHQL API SCHEMA AND RESOLVER GENERATION
A technique is disclosed for creating a GraphQL Application Programming Interface (API) schema by introspecting various different types of backend datasources. The technique includes receiving a selection of a datasource type to create a GraphQL API schema and introspecting the selected datasource type to determine a set of individual elements associated with the selected datasource type. The technique involves outputting the set of individual elements associated with the selected datasource type via a User Interface (UI) screen of a computer system. The technique further includes receiving a selection of one or more individual elements associated with the selected datasource type via a UI screen of the computer system. The technique includes generating a GraphQL API schema comprising a set of objects and a set of resolver functions based on the selected elements and presenting the GraphQL API schema via a UI screen associated with the computer system.
Techniques are described that include receiving, by a computing system, a request to create a restored block volume using a first manifest, the first manifest comprising: (i) a block identifier for a block, (ii) a first block sequence number corresponding to the block identifier and associated with a first snapshot, and (iii) a manifest identifier. The techniques further include receiving, by the computing system, the request to create the restored block volume using a second manifest, the second manifest comprising: (i) the block identifier for the block, and (ii) a second block sequence number corresponding to the block identifier and associated with a second snapshot. The techniques further include determining, by the computing system, whether the second block sequence number is indicative of the block having been altered after the first manifest was generated, and responsive to the determination by the computing system, creating the restored block volume.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
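A sketch of how the block sequence numbers from the two manifests could drive the restore above; the manifest shape and the take-the-newer-block rule are assumptions:

    # Sketch: a higher sequence number in the second manifest means the block
    # was altered after the first manifest was generated, so take the newer one.
    def restore(first_manifest, second_manifest):
        volume = {}
        newer = {e["block"]: e["seq"] for e in second_manifest["entries"]}
        for entry in first_manifest["entries"]:
            block, seq = entry["block"], entry["seq"]
            source = second_manifest if newer.get(block, -1) > seq else first_manifest
            volume[block] = (source["id"], max(seq, newer.get(block, seq)))
        return volume

    m1 = {"id": "manifest-A", "entries": [{"block": "b0", "seq": 3},
                                          {"block": "b1", "seq": 5}]}
    m2 = {"id": "manifest-B", "entries": [{"block": "b0", "seq": 7},
                                          {"block": "b1", "seq": 5}]}
    print(restore(m1, m2))  # b0 comes from manifest-B, b1 from manifest-A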
93.
Dynamic User Interface Mode Selection Based On Physical Activity Detection
Techniques for improving the convenience of activating different computing applications on a mobile computing device are disclosed. Sensors associated with a mobile computing device (e.g., accelerometers, gyroscopes, light sensors, microphones, image capture sensors) may receive inputs of various physical conditions to which the mobile computing device is being subjected. Based on one or more of these inputs, the mobile computing device may automatically select a content presentation mode that is likely to improve the consumption of the content by the user. In other embodiments, image analysis may be used to access different mobile computing applications.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
Systems and methods provide tiered assessment of use of services in a cloud environment. The system includes an operator cloud environment running on computers including microprocessors, wherein the operator cloud environment is deployed within a first realm owned by an operator tenant of the realm; a set of software products provided to the first realm from a cloud infrastructure provider of the cloud environment for access via the first realm by a plurality of end users as vendor cloud services; and a metering service. Usage data that records usage of services in a realm, including identification data associating user entities with their usage of the services, is provided to the operator tenant associated with control of the realm. A second set of data is generated by processing the usage data to remove or convert the identification data and is provided to the cloud infrastructure provider associated with control of the cloud environment.
Techniques are disclosed for creating an attachment between two compute instances. An infrastructure and a generalized method are described for attaching two or more cloud resources (e.g., two compute instances) in spite of the compute resources being provisioned by two different services from different cloud tenancies. An automated process is described that is executed for wiring the compute instances. The automated process can be generally applied to attach any two compute instances providing two different services and provisioned from two different service tenancies.
In an embodiment, a computer combines first original hyperparameters and second original hyperparameters into combined hyperparameters. In each iteration of a binary search that selects hyperparameters, the following are selected: a) important hyperparameters from the combined hyperparameters and b) based on an estimated complexity decrease from including only important hyperparameters as compared to the combined hyperparameters, which one boundary of the binary search to adjust. For the important hyperparameters of a last iteration of the binary search, a pruned value range of a particular hyperparameter is generated based on a first original value range of the particular hyperparameter for the first original hyperparameters and a second original value range of the same particular hyperparameter for the second original hyperparameters. To accelerate hyperparameter optimization (HPO), the particular hyperparameter is tuned only within the pruned value range to discover an optimal value for configuring and training a machine learning model.
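The range-pruning step admits a very small sketch; reading the pruned range as the union of the two original ranges is an assumption about how the ranges combine:

    # Sketch: prune a hyperparameter's search range from the two original
    # value ranges; the union rule is an illustrative assumption.
    def pruned_range(range_a, range_b):
        (lo_a, hi_a), (lo_b, hi_b) = range_a, range_b
        # Tune only inside the span covered by either original range.
        return (min(lo_a, lo_b), max(hi_a, hi_b))

    # learning_rate ranges seen in the two original hyperparameter sets:
    print(pruned_range((1e-4, 1e-2), (1e-3, 1e-1)))  # -> (0.0001, 0.1)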
Systems, methods, and other embodiments associated with concurrently joining voice channels and web channels are described. In one embodiment, a method includes establishing a voice session to communicate over an audio channel, wherein a live agent communicates audio voice signals with a user. In response to identifying an issue from the user, a navigation link is transmitted that, when activated, navigates a browser to a web page associated with the issue. A web session is established to communicate between the browser and the web page. The voice session and the web session associated with the user are linked together. A call controller may then communicate simultaneously with both channels since they are linked, allowing a live agent to disconnect from the audio channel.
Federated training of a machine learning model with enforcement of subject-level privacy is implemented. Respective samples of data items from a training data set are generated at multiple nodes of a federated machine learning system. Noise values are determined for individual ones of the sampled data items according to respective counts of data items of particular subjects and the cumulative counts of the items of the subjects. Respective gradients for the data items are then determined. The gradients are then clipped and the noise values are applied. Each subject's noisy clipped gradients in the sample are then aggregated. The aggregated gradients for the entire sample are then used for determining machine learning model updates.
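A toy sketch of subject-level clipping, noising, and aggregation; the Gaussian noise scale, the clipping bound, and the plain averaging are illustrative assumptions rather than the claimed noise calibration:

    # Sketch: clip each gradient, add noise, aggregate per subject, then
    # aggregate across the sample; scales and bounds are assumptions.
    import random
    from collections import defaultdict

    def clip(grad, bound=1.0):
        norm = sum(g * g for g in grad) ** 0.5
        scale = min(1.0, bound / (norm or 1.0))
        return [g * scale for g in grad]

    def noisy_update(samples, sigma=0.1):
        per_subject = defaultdict(lambda: [0.0, 0.0])  # 2-dim toy gradients
        for subject, grad in samples:                  # (subject id, gradient)
            clipped = clip(grad)
            noisy = [g + random.gauss(0.0, sigma) for g in clipped]
            acc = per_subject[subject]                 # aggregate per subject
            per_subject[subject] = [a + n for a, n in zip(acc, noisy)]
        total = [0.0, 0.0]
        for agg in per_subject.values():               # then across the sample
            total = [t + a for t, a in zip(total, agg)]
        return [t / len(per_subject) for t in total]   # model update input

    batch = [("alice", [0.9, 2.0]), ("alice", [0.5, 0.1]), ("bob", [3.0, -1.0])]
    print(noisy_update(batch))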
Systems, methods, and other embodiments associated with generating aggregate data geospatial grid cells for encoding in vector tiles are described. In one embodiment, a method includes accepting an input to adjust a zoom level of a map. In response to the input to adjust the zoom level, the method automatically (i) identifies finest-resolution hexagons contained in a vector tile that appears in the map at the adjusted zoom level, (ii) selects a hexagon resolution level that allows for n hexagons to be placed along an axis of the vector tile, (iii) generates, from the finest resolution hexagons, new hexagons at the hexagon resolution level, wherein the new hexagons aggregate data values of the finest resolution hexagons, and (iv) encodes the new hexagons in the vector tile. The method then transmits the vector tile for display in the map with the new hexagons overlaid on the vector tile.
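A sketch of the resolution choice and child-to-parent aggregation; the √7 width ratio between levels mirrors common hexagonal hierarchies, and both it and the mean aggregation are assumptions:

    # Sketch: coarsen the hexagon resolution until about n hexagons span the
    # tile axis, then aggregate child values into each new parent hexagon.
    def pick_resolution(tile_axis_len, n, finest_hex_width):
        res_width = finest_hex_width
        level = 0
        while tile_axis_len / res_width > n:
            res_width *= 7 ** 0.5   # each level up, ~7 children per parent
            level += 1
        return level

    def aggregate(children_values):
        # A new hexagon's value aggregates its finest-resolution children.
        return sum(children_values) / len(children_values)

    print(pick_resolution(tile_axis_len=4096, n=16, finest_hex_width=8))
    print(aggregate([3.0, 5.0, 4.0, 6.0, 2.0, 4.0, 4.0]))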
Network entities associated with a virtual cloud network are transitioned through a certificate bundle distribution process for distributing new certificate authority certificates to the network entities. Operations may include executing, in relation to each of the network entities, a first operation associated with a first phase of the process; obtaining, for each particular network entity, individual entity information associated with a progress of a particular network entity in relation to the first phase; computing, based on the individual entity information, an aggregate metric indicative of an aggregate progress of the network entities in relation to the first phase; determining, based on the aggregate metric, that one or more transition criteria are satisfied for transitioning the network entities from the first phase to a second phase of the process; and executing, in relation to each of the network entities, a second operation associated with the second phase of the process.
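The phase gate over aggregate progress could be as small as this sketch; the 99% completion criterion and the entity state fields are assumptions:

    # Sketch: compute the aggregate progress metric for the current phase and
    # decide whether the transition criterion is satisfied; values assumed.
    def ready_to_advance(entities, phase, min_fraction=0.99):
        done = sum(1 for e in entities if e["completed_phase"] >= phase)
        aggregate = done / len(entities)          # aggregate progress metric
        return aggregate >= min_fraction          # transition criterion

    fleet = [{"id": i, "completed_phase": 1} for i in range(995)] + \
            [{"id": 995 + i, "completed_phase": 0} for i in range(5)]
    if ready_to_advance(fleet, phase=1):
        print("executing second-phase operation on all entities")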