An HTM-assisted Combining Framework (HCF) may enable multiple (combiner and non-combiner) threads to access a shared data structure concurrently using hardware transactional memory (HTM). As long as a combiner executes in a hardware transaction and ensures that the lock associated with the data structure is available, it may execute concurrently with other threads operating on the data structure. HCF may include attempting to apply operations to a concurrent data structure using HTM and, if the HTM attempt fails, using flat combining within HTM transactions. Publication lists may be used to announce operations to be applied to a concurrent data structure. A combiner thread may select a subset of the operations in the publication list and attempt to apply the selected operations using HTM. If the thread fails in these HTM attempts, it may acquire a lock associated with the data structure and apply the selected operations without HTM.
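Mainstream runtimes expose no HTM primitives, but the fallback path described above — announce an operation in a publication list, let whichever thread takes the lock act as combiner, and apply pending operations under the lock — can be sketched in plain Python. The class and field names are illustrative, not from the original, and the HTM fast path is omitted.

```python
import threading

class FlatCombiningStack:
    """Minimal flat-combining sketch (lock path only; real HCF would
    first attempt the announced operations inside hardware transactions)."""

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()
        self._publication = []  # announced operation records

    def announce(self, op, arg=None):
        record = {"op": op, "arg": arg, "done": threading.Event(), "result": None}
        self._publication.append(record)
        # Try to become the combiner; otherwise wait for one to serve us,
        # retrying in case the active combiner missed our record.
        while not record["done"].is_set():
            if self._lock.acquire(blocking=False):
                try:
                    self._combine()
                finally:
                    self._lock.release()
            record["done"].wait(timeout=0.01)
        return record["result"]

    def _combine(self):
        # The combiner drains the publication list and applies every
        # announced operation while holding the lock.
        while self._publication:
            rec = self._publication.pop(0)
            if rec["op"] == "push":
                self._items.append(rec["arg"])
            elif rec["op"] == "pop":
                rec["result"] = self._items.pop() if self._items else None
            rec["done"].set()
```

A thread that fails to acquire the lock never touches `_items` directly; it only waits for its record's `done` event, which is the combining idea in miniature.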
A network environment comprises a plurality of host machines that are communicatively coupled to each other via a network fabric comprising a plurality of switches that in turn include a plurality of ports. Each host machine comprises one or more GPUs. A first subset of ports is associated with a first virtual plane, wherein the first virtual plane identifies a first collection of resources to be used for communicating packets from and to host machines associated with the first virtual plane. A second subset of ports is associated with a second virtual plane that is different from the first virtual plane. A first host machine and a second host machine are associated with the first virtual plane. A packet originating at the first host machine and destined for the second host machine is communicated using only ports from the first subset of ports.
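The plane-restricted forwarding rule above can be modeled with a small lookup: a packet between two hosts on the same virtual plane is confined to that plane's port subset. Plane, host, and port names are hypothetical.

```python
# Hypothetical partition of switch ports into virtual planes.
PLANE_PORTS = {"plane-1": {"p0", "p1", "p2"}, "plane-2": {"p3", "p4", "p5"}}
HOST_PLANE = {"host-a": "plane-1", "host-b": "plane-1", "host-c": "plane-2"}

def ports_for_packet(src_host, dst_host):
    """Return the only ports a packet may use: the source plane's subset.
    Both hosts must be associated with the same virtual plane."""
    src_plane = HOST_PLANE[src_host]
    if HOST_PLANE[dst_host] != src_plane:
        raise ValueError("hosts are on different virtual planes")
    return PLANE_PORTS[src_plane]
```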
Techniques are described for deploying a fault tolerant data center by determining that the physical infrastructure deployment of the data center meets the fault tolerance levels and the fault domains specified for the data center. Techniques are described for obtaining configuration information related to various infrastructure resources deployed in a data center. A resource graph for the data center is generated based on the configuration information. The resource graph is a logical representation comprising a set of vertices, representing the physical and logical resources used to power the data center, and a set of edges that connect the set of vertices. The resource graph is used to determine whether a set of infrastructure nodes deployed in the data center meets the fault tolerance levels and fault domains specified for the data center. Results indicative of whether a deployed data center is fault tolerant are then transmitted to a user.
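The core check — do the deployed nodes span enough fault domains? — can be sketched over a trivially simplified resource graph in which each vertex carries its fault-domain assignment. The function and data shape are assumptions for illustration only.

```python
def meets_fault_tolerance(resource_graph, node_ids, required_domains):
    """resource_graph: vertex id -> fault domain (a flattened stand-in for
    the full vertex/edge graph). The deployment satisfies the specified
    fault tolerance level if the chosen nodes span enough distinct domains."""
    domains = {resource_graph[node] for node in node_ids}
    return len(domains) >= required_domains
```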
The present disclosure relates generally to establishing a connection between a client and an endpoint in a manner that reduces network latency. In an example, a network layer proxy receives a request of a client for an endpoint connection establishment, the request including endpoint information. The network layer proxy sends, to an application layer proxy, the endpoint information, the endpoint information sent using a connection-less protocol. Thereafter, the network layer proxy receives, from the application layer proxy, a network address of an endpoint selected by the application layer proxy based on the endpoint information and application layer information. The network layer proxy sends a response to the client such that a connection is established to the endpoint using a connection-based protocol and such that the connection bypasses the application layer proxy.
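The control flow above — the network layer proxy consults the application layer proxy for an endpoint choice, then steps out of the data path — can be sketched as two functions. Selecting the least-loaded endpoint is an invented example of "application layer information"; all names are hypothetical.

```python
def select_endpoint(endpoint_info, app_layer_state):
    """Application layer proxy: picks an endpoint using the request's
    endpoint information plus application layer information (here, load)."""
    candidates = app_layer_state[endpoint_info["service"]]
    return min(candidates, key=lambda ep: ep["load"])["address"]

def handle_connect_request(request, app_layer_state):
    """Network layer proxy: forwards only the endpoint information (in the
    disclosure, over a connection-less protocol) and returns the selected
    address so the client connects directly, bypassing the app proxy."""
    address = select_endpoint(request["endpoint_info"], app_layer_state)
    return {"connect_to": address}
```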
H04L 69/165 - Combined use of TCP and UDP protocols; selection criteria therefor
A network environment comprises a plurality of host machines that are communicatively coupled to each other via a network fabric comprising a plurality of switches that in turn include a plurality of ports. Each host machine comprises one or more GPUs that execute customer workloads. Described herein are different approaches for handling network overlay encapsulation without adversely impacting the performance of workloads executed on the GPU clusters.
H04L 47/43 - Assembling or disassembling of packets, e.g. segmentation and reassembly [SAR]
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 47/263 - Rate modification at the source after receiving feedback
Systems and methods described herein provide a customizable console for use with cloud environments. Cloud computing offerings enable access within the context of a cloud environment by third-party operators acting as resellers of products or services owned or managed by a cloud provider. An operator provides access to their customers via consoles that are customizable by the operators to enable greater control over their cloud-based products and services.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
G06F 3/04842 - Selection of displayed objects or displayed text elements
A network environment comprises a plurality of host machines that are coupled to each other via a network fabric comprising a plurality of switches that in turn include a plurality of ports. Each host machine comprises one or more GPUs. A first subset of ports is associated with a first virtual plane, wherein the first virtual plane identifies a first collection of resources to be used for communicating packets from/to host machines associated with the first virtual plane. A second subset of ports is associated with a second virtual plane that is different from the first virtual plane. A first host machine and a second host machine are associated with the first virtual plane and the second virtual plane, respectively. A packet is communicated from the first host machine to the second host machine using ports from the first subset of ports and the second subset of ports.
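In contrast to the same-plane case, a packet between hosts on different virtual planes may draw ports from both subsets. A minimal sketch, with hypothetical plane, host, and port names:

```python
PLANE_PORTS = {"plane-1": {"p0", "p1"}, "plane-2": {"p2", "p3"}}
HOST_PLANE = {"host-a": "plane-1", "host-b": "plane-2"}

def ports_for_cross_plane_packet(src_host, dst_host):
    """A packet between hosts on different planes may traverse ports
    drawn from the source plane's subset and the destination plane's."""
    return (PLANE_PORTS[HOST_PLANE[src_host]]
            | PLANE_PORTS[HOST_PLANE[dst_host]])
```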
Techniques for disintermediating a network path between a source and a destination are described. In an example, the source sends a first packet destined to a destination. A network node on the network path between the source and the destination performs a network operation on this packet and generates a set of instructions indicating the network operation and parameters used for performing the network operations. This set of instructions is sent to the source as a flow update. When the source needs to send a second packet to the destination, the source applies the instructions to the second packet. As such, a similar network operation is performed on the second packet at the source, thereby avoiding the need to send the second packet on the same network path that includes the network node. Accordingly, the second packet is sent on a different network path that bypasses the network node.
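The flow-update mechanism can be sketched as a source that caches, per destination, the instructions returned by the network node and replays them locally on later packets. The NAT-style rewrite is an invented example of a "network operation"; all names are hypothetical.

```python
class NatNode:
    """Hypothetical middlebox performing a source-NAT rewrite."""
    REWRITTEN_SRC = "198.51.100.1"

    def process(self, packet):
        def rewrite(p):
            return {**p, "src": NatNode.REWRITTEN_SRC}
        # Return the processed packet plus the instructions (the flow
        # update) that let the source repeat the operation itself.
        return rewrite(packet), [rewrite]

class Source:
    def __init__(self):
        self.flow_cache = {}  # destination -> cached network operations

    def send(self, packet, network_node):
        ops = self.flow_cache.get(packet["dst"])
        if ops is None:
            # First packet traverses the node, which returns a flow update.
            packet, ops = network_node.process(packet)
            self.flow_cache[packet["dst"]] = ops
        else:
            # Later packets apply the cached instructions at the source
            # and may take a path that bypasses the node entirely.
            for op in ops:
                packet = op(packet)
        return packet
```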
Techniques for generating and maintaining a student academic ledger are disclosed. In some embodiments, student data is received from a first set of one or more members of a blockchain network. In response, one or more distributed ledgers are updated in the blockchain network. The distributed ledgers are accessible to a student member of the blockchain network using a private key. The blockchain network receives requests from the student member to initiate a transaction with a second set of one or more members that requires access to at least a subset of the student data. Responsive to the request, the second set of one or more members are granted access to at least the subset of the student data from at least one distributed ledger.
Techniques for presenting a graphical user interface (GUI) for configuring a distributed resource instance are disclosed. The system presents an interactive GUI displaying a geographical map and displays a plurality of user interface (UI) elements overlaid on the geographical map, each UI element corresponding to a respective computing resource of a plurality of computing resources. The system displays the respective UI element at a position on the geographical map that corresponds to a geographical location of physical hardware being used to implement the respective computing resource. The system receives a first user input selecting a first UI element of the plurality of UI elements; and responsive to receiving the first user input selecting the first UI element: identifies a first computing resource corresponding to the first UI element; and presents a first resource configuration GUI that displays a set of configurable attributes associated with the first computing resource.
Techniques for determining an absolute longitudinal position of a moving object on non-linear sections of a trajectory are described. In one technique, an estimated track boundary segment is generated based on a digital image associated with a moving object. For each position of multiple positions in an actual track boundary segment pertaining to a track for one or more moving objects, an alignment of the estimated track boundary segment with the actual track boundary segment is made based on that position. Also, based on the alignment, a difference measurement between the estimated track boundary segment and a portion of the actual track boundary segment is generated. After each of the positions is considered, a particular alignment, of multiple alignments, that is associated with the lowest difference measurement among the multiple positions is selected. Based on the particular alignment, a longitudinal value of the moving object is determined.
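The search described above — try every candidate position, score the alignment, keep the best — is a sliding-window minimization. A minimal sketch over 1-D boundary samples, using sum of squared differences as an assumed difference measurement:

```python
def localize(estimated, actual):
    """Slide the estimated track-boundary segment along the actual
    boundary and return the position whose alignment yields the lowest
    difference measurement (sum of squared differences here)."""
    best_pos, best_diff = None, float("inf")
    for pos in range(len(actual) - len(estimated) + 1):
        window = actual[pos:pos + len(estimated)]
        diff = sum((a - b) ** 2 for a, b in zip(window, estimated))
        if diff < best_diff:
            best_pos, best_diff = pos, diff
    return best_pos
```

The returned position stands in for the longitudinal value derived from the winning alignment.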
Techniques for generating a unified user experience (UX) score using sentiment analysis and theme classification, training multiple layers of a machine learning environment to perform sentiment analysis and theme classification, and arranging layers of a machine learning environment based on noise from training data are provided. A unified UX score is generated from categories that are indicative of a user's journey in association with a cloud service provider. Machine learning environments are trained and used to perform sentiment analysis and theme classification on user feedback data. The layers of a machine learning environment can also be arranged based on noise generated from training data used to train the models of the machine learning environments.
Embodiments are directed to a cloud based rotation of a secret stored in a secrets storage and stored in a target system. Embodiments receive an identifier of a function for rotating the secret or an identifier of the target system when the target system includes a management Application Programming Interface (“API”) for rotating the secret. Embodiments determine that the secret needs to be rotated based on a rotating schedule. When the identifier of the function is received, embodiments rotate the secret using the function and when the identifier of the target system is received, embodiments rotate the secret using the management API. Rotating the secret includes updating the secret at the secrets storage and at the target system.
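The two rotation paths — a caller-supplied function versus the target system's management API — reduce to a simple dispatch that writes the new value to both places. The store and target shapes below are assumptions for illustration:

```python
def rotate_secret(secret_name, secrets_store, target, rotation_fn=None):
    """Rotate via a registered function when one exists, otherwise via the
    target system's management API; either way, the new value is written
    to the secrets store and to the target system."""
    if rotation_fn is not None:
        new_value = rotation_fn(secret_name)
    else:
        new_value = target.management_api_rotate(secret_name)
    secrets_store[secret_name] = new_value
    target.set_secret(secret_name, new_value)
    return new_value

class FakeTarget:
    """Hypothetical target system exposing a management API."""
    def __init__(self):
        self.secrets = {}
        self._version = 0

    def management_api_rotate(self, name):
        self._version += 1
        return f"{name}-v{self._version}"

    def set_secret(self, name, value):
        self.secrets[name] = value
```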
Operations include: presenting a Graphical User Interface (GUI) displaying a viewport of a scrollable container associated with a set of interface elements; receiving, by the GUI, user input to initiate a scrolling operation in relation to the scrollable container; responsive to determining that an interface element conversion criterion is met, converting a restricted-scroll interface element to a fully-scrollable interface element; executing the scrolling operation at least by: removing the converted fully-scrollable interface element from the viewport of the scrollable container and scrolling a second fully-scrollable interface element into the viewport.
In accordance with an embodiment, described herein is a system and method for providing extensibility in an analytic applications environment, including a semantic layer that enables the use of custom semantic extensions to extend a semantic data model (semantic model). In accordance with an embodiment, the system enables use of a fragmented query model—when customizations are made to the semantic model, the system can dynamically merge the changes from the various deltas when queries are generated at runtime, to dynamically surface appropriate data based on the extended semantic model.
Systems, devices, and methods are disclosed for enforcing relational database security policies with respect to database components stored in a data lake. The techniques may include receiving, by a data lake security service associated with a data lake, a file system call comprising a uniform resource identifier and a credential. The service may obtain relational database metadata and identify, from the metadata, a relational database component corresponding to the uniform resource identifier of the file system call. A relational security policy corresponding to that component may be obtained and access to a storage location at which the data associated with the relational database component may be authorized (e.g., based on the credential received).
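The authorization path described above is a two-step lookup: map the file-system URI back to the relational component it stores, then evaluate that component's relational security policy against the presented credential. A minimal sketch with invented URI, component, and principal names:

```python
def authorize_file_access(uri, credential, db_metadata, policies):
    """Data lake security service: resolve the URI to its relational
    database component, then check the component's relational policy."""
    component = db_metadata.get(uri)          # e.g. the "sales.orders" table
    if component is None:
        return False                          # no relational mapping: deny
    allowed = policies.get(component, set())  # principals granted access
    return credential in allowed
```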
Discussed herein is a mechanism of building/constructing a network fabric for a cluster of GPUs. A plurality of sets of GPUs are created, wherein each set of GPUs is created by selecting one GPU from each host machine in a plurality of host machines. Each set of GPUs is coupled to a different group of switches in a plurality of groups of switches. The coupling includes: (i) coupling each GPU in the set of GPUs to a unique ingress port of a first switch included in a corresponding group of switches that is associated with the set of GPUs, and (ii) virtually mapping each ingress port of the first switch to a unique egress port of a plurality of egress ports of the first switch. A packet originating at a source GPU and destined for a destination GPU is communicated via the network fabric.
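The construction can be sketched directly: form one GPU set per GPU index (one GPU from each host), assign each set its own switch group, and give each ingress port a dedicated virtual egress port on the group's first switch. The data shapes are assumptions for illustration:

```python
def build_fabric(hosts, gpus_per_host):
    """Return one entry per GPU set: its member (host, gpu-index) pairs,
    the switch group it attaches to, and the 1:1 virtual mapping from
    ingress ports to egress ports on the group's first switch."""
    fabric = []
    for gpu_idx in range(gpus_per_host):
        gpu_set = [(host, gpu_idx) for host in hosts]
        port_map = {i: i for i in range(len(gpu_set))}  # ingress -> egress
        fabric.append({"gpus": gpu_set,
                       "switch_group": gpu_idx,
                       "port_map": port_map})
    return fabric
```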
Various embodiments of the present technology generally relate to systems and methods for preventing malicious service access over long-lived connections. In certain embodiments, a network traffic analysis system may comprise one or more processors, and a memory having stored thereon instructions. The instructions, upon execution, may cause the one or more processors to receive, from a first network function (NF) on a 5G network, a copy of a message sent over a long-lived connection between the first NF and a second NF on the 5G network, the copy of the message including details for a transport layer security (TLS) certificate involved in the long-lived connection. The network traffic analysis system may compare the details against a list of revoked certificates to determine whether the TLS certificate has been revoked, and when the TLS certificate has been revoked, send a notification directing the first NF to close the long-lived connection.
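The analysis step reduces to matching the certificate details from the copied message against the revocation list and, on a hit, directing the originating NF to close the connection. The message shape and serial-based matching are assumptions for illustration:

```python
def check_message(message, revoked_serials):
    """Network traffic analysis sketch: compare the TLS certificate
    details carried in a copied message against a revocation list and,
    if revoked, direct the sending NF to close the long-lived connection."""
    serial = message["tls_certificate"]["serial"]
    if serial in revoked_serials:
        return {"action": "close_connection", "nf": message["source_nf"]}
    return {"action": "none"}
```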
Techniques for securely accessing a computer network are described. An access provider sends network access credentials to an access management device. Upon receiving the credentials, the access management device generates an image key that embeds the credentials. The access management device then presents the image key to a client device. The client device receives the image key and extracts the credentials from within the image key. The client device transmits the credentials to the access provider with an authentication request. Based on the credentials included with the authentication request, the access provider attempts to authenticate the client device. If authentication is successful, the access provider grants the client device access to the wireless network and resources accessible via the wireless network.
Techniques for recording submissions of user actions in relation to interface elements of a GUI for replay are disclosed. Data arguments that are generated in response to the user actions and required for executing a command associated with the user actions are recorded. A system monitors execution of an application. The system detects a command or action that corresponds to submission of a user action in relation to target interface elements displayed by a GUI. When the system detects the command, the system records the user action in relation to the target interface element and data arguments selected for executing the command. When the system receives a request to replay the submission of the user action, the system retrieves the data arguments for executing the command and causes execution of the command by submitting the user action to the GUI along with the data arguments for executing the command.
A method for service aggregation for alternate routing of callback messages includes registering, by an NF and with an NRF, an NF profile of the NF including an aggregated service name representing a plurality of different service types provided by the NF. The method further includes receiving, by the NF, requests from consumer NFs relating to the different service types. The method further includes obtaining, by the NF, and in response to the requests from the consumer NFs, resource information from a producer NF, re-using the resource information to respond to the requests, and communicating the aggregated service name to the producer NF as binding information. The method further includes maintaining, by the NF, a registered status of the aggregated service name with the NRF as long as any of the service types remain available at the NF.
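Registration and lifetime of the aggregated service name can be sketched against an NRF modeled as a plain dictionary. The naming convention and record layout are invented for illustration:

```python
def register_aggregated_profile(nrf, nf_id, service_types):
    """Register one aggregated service name covering several service
    types, so callback binding can reference the aggregate rather than
    each individual service type."""
    aggregated_name = "agg-" + "-".join(sorted(service_types))
    nrf[nf_id] = {"services": {aggregated_name: set(service_types)},
                  "status": "REGISTERED"}
    return aggregated_name

def refresh_registration(nrf, nf_id, available_types):
    """Keep the aggregated name registered as long as any of its service
    types remain available at the NF; deregister only when none do."""
    if not available_types:
        nrf[nf_id]["status"] = "DEREGISTERED"
```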
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for training data collection and evaluation for automatic SOAP note generation. Training data is accessed, and an evaluation process is performed on the training data to produce evaluated training data. A fine-tuned machine-learning model is generated using the evaluated training data. The fine-tuned machine-learning model can be used to perform a task associated with generating a SOAP note.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Herein are graph machine learning explainability (MLX) techniques for invalid traffic detection. In an embodiment, a computer generates a graph that contains: a) domain vertices that represent network domains that received requests and b) address vertices that respectively represent network addresses from which the requests originated. Based on the graph, domain embeddings are generated that respectively encode the domain vertices. Based on the domain embeddings, multidomain embeddings are generated that respectively encode the network addresses. The multidomain embeddings are organized into multiple clusters of multidomain embeddings. A particular cluster is detected as suspicious. In an embodiment, an unsupervised trained graph model generates the multidomain embeddings. Based on the clusters of multidomain embeddings, feature importances are unsupervised trained. Based on the feature importances, an explanation is automatically generated for why an object is or is not suspicious. The explained object may be a cluster or other batch of network addresses or a single network address.
In some aspects, a computing device may receive, at a data processing system, a set of utterances for training or inferencing with a named entity recognizer to assign a label to each token piece from the set of utterances. The computing device may determine a length of each utterance in the set and when the length of the utterance exceeds a pre-determined threshold of token pieces: dividing the utterance into a plurality of overlapping chunks of token pieces; assigning a label together with a confidence score for each token piece in a chunk; determining a final label and an associated confidence score for each chunk of token pieces by merging two confidence scores; determining a final annotated label for the utterance based at least on the merging the two confidence scores; and storing the final annotated label in a memory.
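A minimal sketch of the chunk-and-merge step described above, in Python; `chunk_tokens`, the fixed chunk size and overlap, and the keep-the-higher-confidence merge rule are illustrative assumptions rather than the claimed implementation:

```python
def chunk_tokens(tokens, chunk_size, overlap):
    """Divide a long token sequence into overlapping chunks.

    Assumes chunk_size > overlap; returns (start_offset, chunk) pairs.
    """
    step = chunk_size - overlap
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append((start, tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
        start += step
    return chunks

def merge_chunk_labels(n_tokens, chunk_predictions):
    """chunk_predictions: list of (start_offset, [(label, confidence), ...]).

    For token positions covered by two overlapping chunks, keep the
    (label, confidence) pair with the higher confidence score.
    """
    final = [None] * n_tokens
    for start, preds in chunk_predictions:
        for i, (label, conf) in enumerate(preds):
            pos = start + i
            if final[pos] is None or conf > final[pos][1]:
                final[pos] = (label, conf)
    return final
```

For an eight-token utterance with a chunk size of five and an overlap of two, tokens three and four are labeled twice and the higher-confidence label wins.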
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
Techniques for securely accessing a computer network are described. An access provider sends network access credentials to an access management device. Upon receiving the credentials, the access management device generates an image key that embeds the credentials. The access management device then presents the image key to a client device. The client device receives the image key and extracts the credentials from within the image key. The client device transmits the credentials to the access provider with an authentication request. Based on the credentials included with the authentication request, the access provider attempts to authenticate the client device. If authentication is successful, the access provider grants the client device access to the wireless network and resources accessible via the wireless network.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Embodiments couple a corresponding IoT gateway to each IoT device, each IoT gateway monitoring for operation events of a smart contract of a distributed ledger, each IoT device and IoT gateway coupled to the distributed ledger. In response to a client initiating an operation of a first IoT device, embodiments generate a corresponding event by the smart contract and transmit an authorization request to an authorization system and in response receive an access token corresponding to the operation. Embodiments transmit the access token to one or more of the IoT gateways, each IoT gateway monitoring for the event and determining whether it corresponds to the first IoT device and then implementing the operation at the first IoT device.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 12/66 - Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
43.
APPLICATION-LAYER CONNECTION REDISTRIBUTION AMONG SERVICE INSTANCES
The technology disclosed herein enables redistribution of connections among service instances by determining a subset of the connections and terminating the subset. In a particular example, a method includes identifying the application-layer connections established between service instances and peers and identifying a high-load service instance of the service instances. A number of the application-layer connections established with the high-load service instance satisfies load criteria. The method further includes determining a subset of connections from a portion of the application-layer connections connected to the high-load service instance and terminating the subset of connections.
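One hedged reading of the selection step, sketched in Python: the load criterion is a simple connection-count threshold, and the subset chosen for termination is the instance's excess over the mean per-instance load. Both rules, and all names, are assumptions for illustration:

```python
def select_connections_to_terminate(connections, load_threshold):
    """connections: {instance_id: [connection_id, ...]}.

    An instance whose connection count meets the load criterion is
    treated as high-load; the subset of its connections exceeding the
    fleet's mean per-instance count is returned for termination, after
    which the peers are expected to reconnect and land elsewhere.
    """
    total = sum(len(c) for c in connections.values())
    mean = total // len(connections)
    to_terminate = {}
    for instance, conns in connections.items():
        if len(conns) >= load_threshold:
            excess = len(conns) - mean
            to_terminate[instance] = conns[:excess]
    return to_terminate
```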
Techniques are disclosed for using logit values to classify utterances and messages input to chatbot systems in natural language processing. A method can include a chatbot system receiving an utterance generated by a user interacting with the chatbot system. The chatbot system can input the utterance into a machine-learning model including a set of binary classifiers. Each binary classifier of the set of binary classifiers can be associated with a modified logit function. The method can also include the machine-learning model using the modified logit function to generate a set of distance-based logit values for the utterance. The method can also include the machine-learning model applying an enhanced activation function to the set of distance-based logit values to generate a predicted output. The method can also include the chatbot system classifying, based on the predicted output, the utterance as being associated with a particular class.
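As a hedged illustration of distance-based logits, the sketch below treats each binary classifier's modified logit as the negative Euclidean distance to a class centroid and uses a plain sigmoid as the activation. The abstract does not specify its logit or activation functions, so every name and formula here is an assumption:

```python
import math

def distance_logit(utterance_vec, centroid, scale=1.0):
    """Hypothetical modified logit: negative scaled Euclidean distance
    from the utterance embedding to a class centroid."""
    d = math.sqrt(sum((u - c) ** 2 for u, c in zip(utterance_vec, centroid)))
    return -scale * d

def classify(utterance_vec, centroids, threshold):
    """One binary classifier per class: sigmoid over each distance-based
    logit, then pick the best-scoring class if it clears the threshold."""
    scores = {cls: 1.0 / (1.0 + math.exp(-distance_logit(utterance_vec, c)))
              for cls, c in centroids.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

Returning `None` when no score clears the threshold models rejecting out-of-scope utterances.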
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
45.
Machine Learning Based Spend Classification Using Hallucinations
Embodiments classify a product to one of a plurality of product classifications. Embodiments receive a description of the product and create a first prompt for a trained large language model (“LLM”), the first prompt including the description of the product and contextual information of the product. In response to the first prompt, embodiments use the trained LLM to generate a hallucinated product classification for the product. Embodiments word embed the hallucinated product classification and the plurality of product classifications and similarity match the embedded hallucinated product classification with one of the embedded plurality of product classifications. The matched one of the embedded plurality of product classifications is determined to be a predicted classification of the product.
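The similarity-match step can be sketched as below; the bag-of-words `embed` stands in for a real word-embedding model, and all identifiers are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would use learned
    word embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_classification(hallucinated, taxonomy):
    """Similarity-match the LLM's hallucinated classification against the
    fixed taxonomy; the closest entry is the predicted classification."""
    h = embed(hallucinated)
    return max(taxonomy, key=lambda c: cosine(h, embed(c)))
```

The point of the design is that the LLM's free-form "hallucinated" label never has to match a taxonomy entry exactly; embedding similarity snaps it to the nearest valid classification.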
Techniques are disclosed herein for onboarding users from a single-tenant cloud environment to a multi-tenant cloud environment. In one aspect, a method is provided that includes, in response to an eligibility status check indicating that a first cloud service instance running a first version of a cloud service in a first cloud environment is eligible for an upgrade, exporting a first copy of data from the first cloud service instance to a common storage device, provisioning a second cloud service instance running a second version of the cloud service in a second cloud environment, importing the first copy of the data from the common storage device to the second cloud service instance, and activating the second cloud service instance to run the second version of the cloud service. During the exporting, provisioning, and importing, the first cloud service instance continues to run the first version of the cloud service.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
47.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR SELECTING NETWORK FUNCTION (NF) PROFILES OF NF SET MATES TO ENABLE ALTERNATE ROUTING
A method for selecting NF profiles of NF set mates for alternate routing includes receiving an NF discovery request, accessing an NF profiles database, and identifying NF profiles that match query parameters in the NF discovery request. The method further includes determining a value of an NF profiles limit parameter, selecting a first number of NF profiles that is less than the value of the NF profiles limit parameter, selecting a second number of NF profiles, wherein the NF profiles in the second number of NF profiles correspond to NF set mates of NFs corresponding to NF profiles in the first number of NF profiles and a sum of the first and second numbers is less than or equal to the value of the NF profiles limit parameter, and generating and transmitting an NF discovery response including the NF profiles in the first and second numbers of NF profiles.
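The two-stage selection might be sketched as follows; the "keep one slot free for mates" split and the dict of set mates are assumptions for illustration, not the claimed selection rule:

```python
def select_profiles(matching_profiles, set_mates, limit):
    """matching_profiles: profile ids matching the discovery query, in
    preference order. set_mates: {profile_id: [mate profile ids]}.

    Select a first number of matching profiles below the limit, then add
    their NF-set mates (to enable alternate routing) until the total
    reaches the NF profiles limit parameter.
    """
    first = matching_profiles[:max(limit - 1, 1)]
    selected = list(first)
    for profile in first:
        for mate in set_mates.get(profile, []):
            if len(selected) >= limit:
                return selected
            if mate not in selected:
                selected.append(mate)
    return selected
```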
Techniques for defining and using reusable modules to generate form control code are disclosed, including: displaying a form control implementation interface for applying form control functions to forms; receiving, via the form control implementation interface: a first user input selecting a form control function of the form control functions; a second user input selecting one or more input parameters, for the form control function, that are to be extracted from a target form; a third user input selecting a target field of the target form and one or more attributes of the target field to be modified via execution of the form control function; and generating form control code that extracts the one or more input parameters from form data received for the target form and applies the form control function to the one or more input parameters to modify the one or more attributes of the target field.
Different sampling orders of random variables in a Bayesian model may be generated for Markov Chain Monte Carlo sampling techniques. Code may be received that causes a Markov Chain Monte Carlo sampling technique to be performed with respect to a Bayesian model that includes random variables representing different parameterized probability distributions and connected via edges in a Directed Acyclic Graph (DAG). Instructions to execute the code may be generated that cause the Markov Chain Monte Carlo sampling technique to be performed, the instructions including performing different orders for sampling different random variables in the DAG in different iterations of the Markov Chain Monte Carlo sampling technique.
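Varying the sampling order while still respecting the DAG's dependencies amounts to drawing a random topological order each iteration. A sketch using Kahn's algorithm with a randomized ready set — an illustrative choice, not necessarily the patented instruction-generation scheme:

```python
import random

def random_topological_order(dag, rng):
    """dag: {node: [children]}. Return a topological order chosen at
    random, so successive MCMC iterations can sample variables in
    different orders while respecting the DAG's dependencies."""
    indeg = {n: 0 for n in dag}
    for children in dag.values():
        for c in children:
            indeg[c] += 1
    ready = sorted(n for n, d in indeg.items() if d == 0)
    order = []
    while ready:
        n = ready.pop(rng.randrange(len(ready)))  # random choice among ready nodes
        order.append(n)
        for c in dag[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order
```

Calling this with a fresh seed per iteration yields a different, but always dependency-respecting, sampling order.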
Techniques for perspective-preserving seamless application switching are disclosed. A system may display a first interface using a first application. The first interface includes interface elements representing a plurality of objects. The system may detect a zoom-in command, received by the first application, requesting a particular zoom level for a first interface element, corresponding to a first object in the first plurality of objects. The system may determine that the requested zoom level exceeds a threshold. Responsive to determining that the requested zoom level exceeds the threshold, the system may display, using a second application, a second interface corresponding to the first object. The second interface may include one or more of: (a) characteristics associated with the first object that were not displayed by the first application, or (b) user input elements for executing operations associated with the first object that were not displayed by the first application.
Systems, methods, and machine-readable media to migrate data from source databases to target databases are disclosed. Data may be received relating to the source databases and the target databases. For each source database, a migration assessment may be generated based on analyzing the data, and a migration method may be selected. A migration plan may be created that specifies a parallel migration of a set of databases to the target databases, with a first migration method to migrate a first subset of the set of databases and a second migration method to migrate a second subset of the set of databases. Execution of the parallel migration according to the migration plan may then be caused, so that the first subset of the set of databases is migrated with the first migration method while the second subset of the set of databases is migrated with the second migration method.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Embodiments optimize hotel room reservations for a hotel. For a first day of a plurality of future days, embodiments automatically determine, based on an objective function, an overbooking limit for each category of hotel rooms for the hotel, where the hotel includes a plurality of different room categories. Embodiments receive a first reservation request for the first day for a first category room. When the determined overbooking limit for the first category room has not been reached, embodiments accept the first reservation request. When a guest holding the accepted first reservation request checks in to the hotel on the first day, embodiments automatically determine, based on the objective function, whether to reject the first reservation request, accept the first reservation request, or upgrade the first reservation request to a higher category room.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
Techniques are disclosed to establish trust in a cluster of edge devices. An edge device cloud service can associate a first cloud-computing edge device with a fleet of cloud-computing edge devices and provision the first cloud-computing edge device with a master encryption key. The edge device cloud service can associate a second cloud-computing edge device with the fleet and provision the second cloud-computing edge device with the master encryption key and the first public encryption key. The first cloud-computing edge device can receive from the second cloud-computing edge device encrypted message data comprising the second public encryption key. The first cloud-computing edge device can decrypt the encrypted message data using the master encryption key stored in the first key store and update the first key store with the second public encryption key.
09 - Scientific and electric apparatus and instruments
Goods & Services
Computer programs for virtualization of hardware and
operating systems, namely, for simulation, hiding and
abstracting the physical characteristics of hardware and
operating systems.
55.
CONTEXTUAL RE-RANKING BASED ON CURSOR POSITION FOR DOCUMENTATION RECOMMENDER SYSTEMS
Herein is dynamic and contextual ranking of reference documentation based on an interactively selected position in new source logic. A computer receives a vocabulary of lexical tokens, a sequence of references that contains a first reference to a first reference document before a second reference to a second reference document, respective subsets of the vocabulary that occur in the first and second reference documents, a new source logic that contains a sequence of lexical tokens, respective measurements of semantic distance between the new source logic and the first and second reference documents, and a selected position in the sequence of lexical tokens. Based on the selected position, the measurements of semantic distance are selectively increased. Based on the increased measurements of semantic distance, a relative ordering of the first and second references is reversed to generate and display a reordered sequence of references.
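A toy version of the position-sensitive re-ranking: documents that share no lexical token with the tokens around the selected cursor position get their semantic distance increased, which can reverse the original ordering. The flat penalty and all names are assumptions:

```python
def rerank(references, distances, tokens_near_cursor, doc_vocab):
    """references: ordered list of document ids.
    distances: {doc_id: semantic distance to the new source logic}.
    doc_vocab: {doc_id: set of vocabulary tokens occurring in that doc}.

    Selectively increase the distance of documents sharing no token with
    the cursor context, then reorder by the adjusted distances."""
    adjusted = {}
    for doc in references:
        penalty = 0.0 if doc_vocab[doc] & set(tokens_near_cursor) else 1.0
        adjusted[doc] = distances[doc] + penalty
    return sorted(references, key=lambda d: adjusted[d])
```

Moving the cursor from a socket-related token to a lock-related token flips which document ranks first, even though the base semantic distances are unchanged.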
The present disclosure relates to intelligent network encryption of traffic between a source and a destination. In an example, a network element receives, during a first session between the source and the destination, first traffic exchanged between the source and the destination. The network element determines whether a traffic exchange between the source and the destination is expected to be secured by at least one of the source or the destination at any of a network layer, a transport layer, or an application layer. The network element generates a decision whether to secure the first session at the network layer based on whether the traffic exchange is expected to be secured or unsecured. The network element implements the decision on at least one of the first traffic or second traffic exchanged between the source and the destination during the first session.
In an embodiment, a computer hosts and operates an input neural layer of an artificial neural network that generates, based on all of the features of a first vertex of a first vertex type in a graph, an embedding of the first vertex. The embedding of the first vertex has a predefined size that does not depend on the first vertex type. The input neural layer generates, based on all of the features of a first edge of a first edge type in the graph, an embedding of the first edge. A subsequent neural layer of the artificial neural network generates an embedding of a second vertex of a second vertex type in the graph, and this generating is based on: the embedding of the first vertex and all of the features of the second vertex, including a particular feature that is not a feature of the first vertex type.
A method includes disassembling a reference binary of a library to generate a control flow graph of the reference binary, normalizing the control flow graph to generate a normalized graph, traversing the normalized graph to generate execution traces from the normalized graph, and generating library vector embeddings. Generating the library vector embeddings includes, for each execution trace of at least a subset of the execution traces, processing the execution trace with a vector embedding model to generate a library vector embedding of the execution trace. The method further includes relating, in storage, a library identifier of the library to the library vector embeddings as a fingerprint of the library.
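A sketch of the fingerprinting step; hashing trace instructions into count buckets stands in for the learned vector embedding model, and all identifiers are hypothetical:

```python
import hashlib

def trace_embedding(trace, dim=8):
    """Map an execution trace (a sequence of normalized instructions) to
    a small count vector by hashing each instruction into a bucket.
    A stand-in for the patent's vector embedding model."""
    vec = [0] * dim
    for insn in trace:
        bucket = int(hashlib.sha256(insn.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1
    return vec

def fingerprint(library_id, traces, store):
    """Relate the library identifier, in storage, to the embeddings of
    its execution traces as the library's fingerprint."""
    store[library_id] = [trace_embedding(t) for t in traces]
    return store
```

Matching an unknown binary would then reduce to comparing its trace embeddings against the stored fingerprints.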
Various embodiments of the present technology generally relate to systems and methods for routing messages for 4G and 5G sessions. In certain embodiments, a Policy and Charging Rules Function (PCRF) system may comprise one or more processors, and a memory having stored thereon instructions. The instructions, upon execution, may cause the one or more processors to receive a 4G communication session initiation request for a 4G session, and in response to the 4G communication session initiation request, issue a session binding request directed to a Binding Support Function (BSF), the session binding request directing the BSF to create a binding record linking the 4G session to the PCRF to enable routing of Application Function (AF) messages to the PCRF via the BSF. The instructions may further cause the one or more processors to receive an AF message routed to the PCRF based on the binding record.
09 - Scientific and electric apparatus and instruments
16 - Paper, cardboard and goods made from these materials
35 - Advertising and business services
37 - Construction and mining; installation and repair services
38 - Telecommunications services
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Computers; computer hardware; computer software; computer peripherals; communications equipment; internet television hardware; telephones; televisions; streaming devices; video cameras; wireless data communications hardware; computer programs for testing compatibility of computer programs; computer programs for use in computer networking; computer programs for use in computer emulation; computer programs for use in electronic mail; computer programs for creating graphical interfaces; computer programs for use in database management; computer programs for document processing; computer programs for word processing; computer programs for preparing spreadsheets; computer programs for use in computer security; computer programs for use in the development of computer programs, programming languages, tool kits and compilers; computer programs for use in developing, compiling and executing other computer programs on computers, computer networks, and global communications networks; computer programs for use in navigating, browsing, transferring information, and distributing and viewing other computer programs on computers, computer networks and global communications networks; computer programs for recording, processing, receiving, reproducing, transmitting, modifying, compressing, decompressing, broadcasting, merging, and/or enhancing sound, video, images, graphics, and/or data; computer operating system programs; computer utility programs; computer programs for use with computer servers; computer programs for use in telephones; computer programs used in accessing databases; computer programs downloadable from global computer networks; and instructional manuals in electronic format sold therewith; downloadable electronic publications; parts and fittings for all the foregoing. 
Printed matter; calendars; magazines; notepads; instructional and teaching materials (except apparatus); publications concerning computer technology; operating and user instructions; manuals for computers and computer software. Business management; business administration; organizing, arranging and conducting trade shows and exhibitions for commercial or advertising purposes; all of the aforesaid in the fields of computers, computer software, the deployment of software development, Software-as-a-Service (SaaS), cloud computing, and information technology; organizing, arranging and conducting trade shows and exhibitions in the fields of computers, computer software, the deployment of software development, Software-as-a-Service (SaaS), cloud computing, and information technology; computer information storage and retrieval services in the computer, computer network, and global computer information fields; computerized data processing; collection, compilation and systemization of information and data into computer databases; retail services in relation to computer software, computer hardware and computer services; computerized database management services; data processing services. Repair and maintenance of computer systems and software; installation of computer systems and software; information and advice in relation to the foregoing. Electronic transmission of data over a global communications network, namely the Internet. 
Education; providing of training; arranging and conducting educational conferences; all of the aforesaid services in the field of computers, computer software and systems; organizing, arranging and conducting conferences and seminars in the field of computers, computer hardware, computer software, Software-as-a-Service (SaaS), cloud computing, and information technology; computer and computer software training courses; providing on-line electronic publications, not downloadable; publication of electronic books and journals on-line in the field of computers, computer software and system. Design and development of computer hardware and software; Software-as-a-Service (SaaS) services; Platform-as-a-Service (PaaS) services; Infrastructure-as-a-Service (IaaS) services; cloud computing services; computer services, namely, providing consultation services and advice in the fields of computers, computer hardware, computer software, computer peripherals, computer systems, computer networks, computer-related equipment, computer security, cloud computing, information technology, electronic commerce technology and global computer network technology; leasing services (long-time rental) in the fields of computer software, computer systems, and computer network; design for others in the fields of computer peripherals, computer systems, computer networks, cloud computing, computer-related equipment, computer security, information technology, electronic commerce technology and global computer network technology; installation, maintenance, and repair of computer software; computer programming; providing online information and news in the field of computers, computer hardware, computer software, cloud computing, and technology; application service provider services, namely, providing, hosting, managing, developing, and maintaining applications, software, websites, and databases in the fields of computers, computer hardware, computer software, computer peripherals, computer systems, 
computer networks, computer-related equipment, computer security, cloud computing, information technology, electronic commerce technology and global computer network technology, wireless communication, mobile information access, and remote data management; providing virtual computer systems and environments through cloud computing; consulting in the field of cloud computing; leasing of operating software for accessing and using a cloud computing network; development, design, and testing of new information technology products for others; providing temporary use of on-line non-downloadable operating software for accessing and using a cloud computing network; computer security services; consulting services in the field of maintaining the security and integrity of databases; security services, namely, providing security assessments of information systems. Licensing of computer software; licensing of intellectual property; computer security services and consultancy; consulting services in the field of maintaining the security and integrity of databases; security services, namely, providing security assessments of information systems.
61.
PROACTIVE PERFORMANCE SUPERVISION OF MULTITENANT CLOUD DB USING HIERARCHICAL TRIAGE & COMPARATIVE APPROACH MAINTAINING THE DATA PRIVACY AND ISOLATION
Herein are hierarchical and non-intrusive techniques to detect and diagnose incidental contention between database tenants. In an embodiment, a computer hosts a database server that operates a container database. The database server monitors level one performance metrics that characterize the performance of at least a first pluggable database (PDB) in the container database. The database server detects, in the level one performance metrics, a performance degradation of the first PDB. Responsively, the database server dynamically configures collection of level two performance metrics that characterize the performance of at least the first PDB and a second PDB in the container database. The database server detects, in the level two performance metrics, that the performance degradation is caused by the second PDB. The database server generates an alert that identifies the second PDB. The alert contains a particular metric of the level two performance metrics, and the particular metric characterizes the performance of the second PDB.
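The two-level structure can be sketched as follows: cheap level-one metrics run continuously, and level-two collection (a callable here) is configured only after a degradation is detected, keeping steady-state monitoring non-intrusive. The threshold rule and all names are assumptions:

```python
def triage(level1_metrics, collect_level2, threshold):
    """level1_metrics: {pdb: level-one metric for a monitored PDB}.
    collect_level2: callable, invoked only after a level-one degradation
    is detected, returning {pdb: contention contribution} for other PDBs
    in the same container database.

    Returns an alert identifying the PDB causing the degradation,
    or None when no degradation is detected."""
    degraded = [pdb for pdb, v in level1_metrics.items() if v > threshold]
    if not degraded:
        return None
    level2 = collect_level2(degraded)
    culprit = max(level2, key=level2.get)
    return {"degraded": degraded, "culprit": culprit,
            "metric": level2[culprit]}
```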
Various embodiments of the present technology generally relate to systems and methods for generating contextual recommendations for data visualizations. In certain embodiments, a method may comprise operating a data analytics system to implement a contextual data visualization recommendation process configured to generate proposed data visualizations contextually relevant to a user. The method may include evaluating columns of a dataset for contextual relevance, including scoring the columns based on a canvas disposition scoring factor corresponding to which of the columns appear most frequently in a canvas of previously generated visualizations of the user, and scoring the columns based on a user reactions scoring factor corresponding to user action event data reflecting approval of visualizations and associated columns. The method may further include ranking the columns based on the evaluation, generating the proposed data visualizations based on a selection of highest-ranking columns, and providing the proposed data visualizations to the user.
A system and computer-implemented method for a log analytics system that can configure, collect, parse, and analyze log records in an efficient manner. Log records are accessed, each of which is associated with a log source. A base parser is identified for parsing a log record based on a type of the log record indicated in the log source. The log record is parsed using the base parser to extract base field values corresponding to base fields, generating a base-parsed log record. Sub-parsers are identified using field mappings, which map base field values to corresponding sub-parsers. The base-parsed log record is parsed using the sub-parsers to extract sub-fields. The sub-fields are merged with the base fields to generate and present an output that includes the parsed log record, the base fields, the base field values, the sub-fields, and the sub-field values.
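The base-parser/sub-parser flow might look like the following, with regular expressions standing in for the parsers and a `(base_field, base_value) → sub-parser` dict as the field mappings (all illustrative):

```python
import re

def parse_log(record, base_pattern, field_mappings):
    """base_pattern: regex with named groups for the base fields.
    field_mappings: {(base_field, base_value): sub_parser_regex} that
    maps a base field value to a sub-parser extracting extra sub-fields
    from the record."""
    m = re.match(base_pattern, record)
    fields = m.groupdict()  # base field values keyed by base field name
    for (base_field, base_value), sub_pattern in field_mappings.items():
        if fields.get(base_field) == base_value:
            sm = re.search(sub_pattern, record)
            if sm:
                fields.update(sm.groupdict())  # merge sub-fields into base fields
    return fields
```

Because the sub-parser is chosen by a base field value rather than by the log source, one base parser can fan out to many record-specific sub-parsers.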
Techniques for script-based runtime assembly of object graphs using native instructions compiled by an ahead-of-time compiler are disclosed, including: generating, based on a data structure that defines a business process, a script including instructions for assembling an object graph that represents relationships between objects used by the business process; obtaining, at runtime by a business process execution engine compiled to native instructions by an ahead-of-time compiler, the script; assembling, at runtime by the business process execution engine, the object graph based at least on the instructions in the script.
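A minimal sketch of runtime assembly from a script: the instruction tuples and the adjacency-dict representation are assumptions for illustration. In the patent the execution engine itself is native code produced by an ahead-of-time compiler, which this Python sketch does not model:

```python
def assemble_object_graph(script):
    """script: list of hypothetical instructions derived from the
    business-process definition, e.g. ("create", object_id) or
    ("link", parent_id, child_id).

    Returns the assembled object graph as adjacency {id: [child ids]}."""
    graph = {}
    for instruction in script:
        if instruction[0] == "create":
            graph.setdefault(instruction[1], [])
        elif instruction[0] == "link":
            graph[instruction[1]].append(instruction[2])
    return graph
```

Driving assembly from a script sidesteps the reflection that an ahead-of-time-compiled engine typically cannot perform at runtime.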
In accordance with an embodiment, described herein are systems and methods for providing a supply chain command center for intelligent procurement assistance, based on an assessment of inventory trends, demand, or other inputs related to the procurement or management of an inventory of items. In accordance with an embodiment, the system can simultaneously optimize for a set of variables related to procurement, by creating time series forecasts of leaf-level independent variables, and performing a simulation within the boundary conditions of historical or expected distributions of each variable, to determine an optimal timing, quantity, location and/or vendor for each order of items that are to be placed in the inventory.
Embodiments predict errors using database validation rules. Validation rules can be defined that include business logic for validating transactions performed on a database with a data model. Transactions can be performed using the database, where the database is in a post-transaction state after performance of the transactions. The database can be validated in the post-transaction state by performing the defined business logic for a subset of validation rules, where at least one validation rule fails to validate. Using a trained machine learning model, one or more errors for one or more future transactions can be predicted, the predicted errors being based on the at least one failed validation rule.
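The validate-then-predict flow can be sketched as below; predicates stand in for the business-logic validation rules, and a lookup table stands in for the trained machine learning model (both assumptions):

```python
def validate(db_state, rules):
    """rules: {rule_name: predicate over the post-transaction state}.
    Returns the names of validation rules that fail to validate."""
    return [name for name, rule in rules.items() if not rule(db_state)]

def predict_errors(failed_rules, error_model):
    """error_model stands in for the trained machine learning model: it
    maps a failed validation rule to errors predicted for future
    transactions."""
    return sorted({e for r in failed_rules for e in error_model.get(r, [])})
```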
Systems, methods, and other embodiments associated with a visual guidance model are described. In one embodiment, a method includes generating, via a graphical user interface (GUI), a multi-process scenario by creating one or more process goals, where each process goal defines a stage in a lifecycle of the multi-process scenario, creating, within each process goal, one or more objectives that each define a task for accomplishing the corresponding process goal, and configuring at least one objective to display a description of the task performed by the objective. The example method may also include configuring, via the GUI, the objectives to include guided actions that partition the task to be performed into a sequence of guided actions for the corresponding objective, and, for a selected objective, configuring, via the GUI, each guided action to display an explanation for how to perform the guided action upon being selected.
Systems, methods, and other embodiments associated with clustering of time series signals based on frequency domain analysis are described. In one embodiment, an example method includes accessing time series signals to be separated into clusters. The example method also includes determining similarity in the frequency domain among the time series signals. The example method further includes extracting a cluster of similar time series signals from the time series signals based on the similarity in the frequency domain. And, the example method includes training a machine learning model to detect anomalies based on the cluster.
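The frequency-domain grouping step above can be sketched minimally as follows; the use of top spectral peaks as a similarity signature is an illustrative assumption, not the claimed method, and all names are invented:

```python
import numpy as np

def frequency_signature(signal, n_peaks=3):
    """Indices of the strongest frequency components of a detrended signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    return frozenset(np.argsort(spectrum)[-n_peaks:])

def cluster_by_frequency(signals, n_peaks=3):
    """Group time series whose dominant frequencies coincide."""
    clusters = {}
    for name, signal in signals.items():
        clusters.setdefault(frequency_signature(signal, n_peaks), []).append(name)
    return list(clusters.values())

t = np.linspace(0, 1, 256, endpoint=False)
signals = {
    "a": np.sin(2 * np.pi * 5 * t),
    "b": np.sin(2 * np.pi * 5 * t + 0.3),   # same frequency, phase-shifted
    "c": np.sin(2 * np.pi * 40 * t),        # different frequency
}
print(cluster_by_frequency(signals, n_peaks=1))  # → [['a', 'b'], ['c']]
```

Note that the phase-shifted signal "b" lands in the same cluster as "a", which is the usual motivation for comparing signals in the frequency domain rather than point-by-point in time. Each resulting cluster would then supply training data for an anomaly-detection model.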
Techniques are disclosed to establish trust in a cluster of edge devices. An edge device cloud service can associate a first cloud-computing edge device with a fleet of cloud-computing edge devices and provision the first cloud-computing edge device with a master encryption key. The edge device cloud service can associate a second cloud-computing edge device with the fleet and provision the second cloud-computing edge device with the master encryption key and a first public encryption key of the first cloud-computing edge device. The first cloud-computing edge device can receive, from the second cloud-computing edge device, encrypted message data comprising a second public encryption key. The first cloud-computing edge device can decrypt the encrypted message data using the master encryption key stored in a first key store and update the first key store with the second public encryption key.
Techniques for incremental stack walking are disclosed, including: performing a stack walk of a runtime stack, at least by traversing the runtime stack from a current frame to a root frame, to obtain a set of stack walking results; storing a cache of the set of stack walking results; and installing, on the runtime stack, a marker frame that marks a boundary of stack frames represented by the set of stack walking results.
G06F 11/36 - Prevention of errors by analysis, debugging or testing of software
G06F 12/0875 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
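The incremental stack walk described above can be illustrated with a plain list standing in for the runtime stack; the marker sentinel and frame names are illustrative assumptions only:

```python
def walk(stack, cache):
    """Walk from the top (current frame) toward the root, stopping at a
    previously installed marker frame and reusing the cached results
    for the unchanged frames below it."""
    results = []
    for frame in reversed(stack):
        if frame == "<marker>":
            results.extend(cache)   # frames below the marker are unchanged
            return results
        results.append(frame)
    return results

# First walk: traverse everything, cache the results, install a marker on top.
stack = ["root", "f1", "f2"]
cache = walk(stack, [])             # cache = ['f2', 'f1', 'root']
stack.append("<marker>")

# New frames are pushed later; the second walk only visits those.
stack += ["f3", "f4"]
print(walk(stack, cache))  # → ['f4', 'f3', 'f2', 'f1', 'root']
```

The saving is that the second walk touches only the two new frames instead of re-traversing the whole stack to the root.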
A technique may include receiving, by a management service, a plurality of instance configurations from a client device. The technique may then include receiving, by the management service, information identifying a launch request for a compute instance. The technique may include determining, by the management service, one or more candidate shapes for the compute instance based at least in part on the plurality of instance configurations. The technique may include selecting, by the management service and from the one or more candidate shapes, a launch shape for the compute instance and launching the compute instance using the launch shape. The technique may then include providing the client device access to the compute instance launched based on the launch shape.
In accordance with an embodiment, described herein are systems and methods for generating enterprise forecasts based on an analysis of input variables and direct forecasting. In accordance with an embodiment, the system can use linear regression or other mathematical models or modeling techniques to assess a set of variables related to an enterprise forecast, and their values and rate of change of such values, within a particular forecast window. Based on such assessment, the system can generate an enterprise forecast for that time period, or for a subsequent time period.
G06Q 40/06 - Asset managementFinancial planning or analysis
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Technology is disclosed herein for generating a visualization of data based on an AI-generated data object. In an implementation, an application, such as a data analytics application, receives a natural language input from a user which relates to a table of data in the application. The table includes data organized according to table columns. The application generates a prompt for a large language model (LLM) service which includes the names of the table columns. The prompt tasks the LLM service with selecting columns for the visualization based on the natural language input and the names of the table columns. The prompt tasks the LLM service with generating a response in a JSON format. The application populates a JSON object, which describes the visualization, according to the response. The application then creates the visualization based on the JSON object.
Techniques for standardizing text data are disclosed. The system may identify, within a content item, a target phrase that is to be standardized. A subset of characters of a verb in the target phrase may be selected for comparison to a list of nouns. The subset of characters may be compared to a list of nouns identified in a data corpus. A noun in the list of nouns may be added to a candidate subset of nouns to replace the verb if the noun includes a sequence of characters that matches the subset of characters. A particular noun to replace the verb may be selected from the candidate subset of nouns based on a frequency associated with the particular noun occurring within the data corpus. The system may convert the target phrase to generate a standard phrase at least by replacing the verb with the particular noun.
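A toy rendering of the verb-to-noun standardization above, under stated assumptions: the "subset of characters" is taken to be a leading stem of the verb, and the corpus, phrase, and stem length are all invented for the example.

```python
from collections import Counter

def standardize(phrase, verb, corpus_words):
    """Replace a verb in the phrase with the most frequent corpus noun
    whose leading characters match the verb's stem."""
    stem = verb[:4]                              # subset of the verb's characters
    counts = Counter(corpus_words)
    candidates = [n for n in counts if n.startswith(stem) and n != verb]
    if not candidates:
        return phrase
    # Select the candidate by its frequency of occurrence in the corpus.
    best = max(candidates, key=counts.__getitem__)
    return phrase.replace(verb, best)

corpus = ["payment", "payments", "payment", "payable"]
print(standardize("process pay invoices", "pay", corpus))  # → process payment invoices
```

Here "payment" wins over "payments" and "payable" because it occurs most often in the (hypothetical) corpus, mirroring the frequency-based selection in the abstract.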
Techniques for managing session lifecycles through custom scripts in a network environment are provided. In one technique, a container of a virtual machine receives a termination signal that is associated with a command to delete or deactivate a session of the container. In response and prior to terminating the session, the container identifies and executes a script that is associated with the command. After the script completes executing, the session is deleted or deactivated. In another technique, a cloud system receives reference data that identifies a storage location of a script. A virtual machine is created in the cloud system. Based on the reference data, the script is downloaded from the storage location into storage that is local to the virtual machine. The script is executed and a session within a container of the virtual machine is initiated.
Examples provide a computer system including an electronic processor configured to obtain a set of source code and a plurality of test scenarios. Each of the plurality of test scenarios specifies a respective build architecture. For each respective test scenario of the plurality of test scenarios, the electronic processor is configured to instantiate a respective build environment according to the respective build architecture, compile the set of source code in the respective build environment to generate a respective binary file, and generate a respective set of one or more metrics for the respective binary file.
Techniques for generating terms to replace an initial set of search terms for a query are disclosed. A system generates a training data set for training a machine learning model. Generating the training data set includes generating search value vectors for each of a set of labels based on sets of search values associated respectively with the labels in the set of labels. The system trains a machine learning model to predict a target label for a target search vector based on the set of labels and the respectively associated search value vectors. The system generates a target search value vector based on an initial set of search values. The system then applies the trained machine learning model to the target search value vector to predict the target label. The target label is used as a search term, that replaces the initial set of search values, for executing the query.
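The label-vector idea above can be sketched with a nearest-centroid stand-in for the trained model; the bag-of-words vectorization, vocabulary, and labels are illustrative assumptions rather than the claimed model:

```python
import numpy as np

def build_label_vectors(labeled_search_values, vocab):
    """One search-value vector per label: a normalized bag-of-words count
    over the search values historically associated with that label."""
    vectors = {}
    for label, values in labeled_search_values.items():
        v = np.zeros(len(vocab))
        for value in values:
            v[vocab.index(value)] += 1
        vectors[label] = v / np.linalg.norm(v)
    return vectors

def predict_label(search_values, label_vectors, vocab):
    """Predict the label whose vector is most similar to the vector
    built from the initial set of search values."""
    q = np.zeros(len(vocab))
    for value in search_values:
        q[vocab.index(value)] += 1
    return max(label_vectors, key=lambda label: float(q @ label_vectors[label]))

vocab = ["laptop", "notebook", "charger", "cable"]
label_vectors = build_label_vectors(
    {"computers": ["laptop", "notebook", "laptop"],
     "accessories": ["charger", "cable", "charger"]},
    vocab,
)
print(predict_label(["notebook", "laptop"], label_vectors, vocab))  # → computers
```

The predicted label ("computers") would then replace the initial search values ("notebook", "laptop") as the search term for executing the query.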
Technology is disclosed herein for generating a visualization of data based on an AI-generated data object. In an implementation, an application, such as a data analytics application, receives a natural language input from a user which relates to a table of data in the application. The table includes data organized according to table columns. The application generates a prompt for a large language model (LLM) service which includes the names of the table columns. The prompt tasks the LLM service with selecting columns for the visualization based on the natural language input and the names of the table columns. The prompt tasks the LLM service with generating a response in a JSON format. The application populates a JSON object, which describes the visualization, according to the response. The application then creates the visualization based on the JSON object.
In an embodiment, a computer generates a respective original inference from each of many records. Permuted values are selected for a feature from original values of the feature. Based on the permuted values for the feature, a permuted inference is generated from each record. Fairness and accuracy of the original and permuted inferences are measured. For each of many features, the computer measures a respective impact on fairness of a machine learning model, and a respective impact on accuracy of the machine learning model. A global explanation of the machine learning model is generated and presented based on, for multiple features, the impacts on fairness and accuracy. Based on the global explanation, an interactive indication to exclude or include a particular feature is received. The machine learning model is (re-)trained based on the interactive indication to exclude or include the particular feature, which may increase the fairness of the model.
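The permutation step above is the classical permutation-importance measurement; a minimal sketch for the accuracy impact of one feature follows (the model, records, and feature names are invented for the example, and the fairness metric is omitted):

```python
import random

def permutation_impact(model, records, labels, feature, seed=0):
    """Accuracy impact of permuting one feature: accuracy on the original
    records minus accuracy after shuffling that feature's values."""
    def accuracy(recs):
        return sum(model(r) == y for r, y in zip(recs, labels)) / len(recs)
    rng = random.Random(seed)
    permuted_values = [r[feature] for r in records]
    rng.shuffle(permuted_values)
    permuted = [dict(r, **{feature: v}) for r, v in zip(records, permuted_values)]
    return accuracy(records) - accuracy(permuted)

# Toy model that only looks at feature "x": permuting "x" can hurt
# accuracy, while permuting the constant feature "z" cannot.
model = lambda r: r["x"] > 0
records = [{"x": 1, "z": 5}, {"x": -1, "z": 5}, {"x": 2, "z": 5}, {"x": -2, "z": 5}]
labels = [True, False, True, False]
print(permutation_impact(model, records, labels, "z"))  # → 0.0
```

In the described embodiment, the same comparison is run per feature against both a fairness metric and an accuracy metric, and the per-feature impacts populate the global explanation shown to the user.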
Techniques for generating recommendations based on the predicted performance of an execution plan are disclosed. A system predicts the future characteristics of a set of data objects associated with a set of structured query language (SQL) statements. The system predicts how the changes to the set of data objects will result in changes to a query execution plan associated with the SQL statements. The system predicts a set of performance metrics for the changed query execution plan. Based on the predicted performance, the system generates recommendations for modifying data, applications, or database server operations to improve performance.
Techniques are disclosed herein for implementing digital assistants using generative artificial intelligence. An input prompt comprising a natural language utterance and candidate agents and associated actions can be constructed. An execution plan can be generated using a first generative artificial intelligence model based on the input prompt. The execution plan can be executed to perform actions included in the execution plan using agents indicated by the execution plan. A response to the natural language utterance can be generated by a second generative artificial intelligence model using one or more outputs from executing the execution plan.
Techniques for managing the implementation of application-code scanning processes are disclosed. A system scans application code by analyzing metadata associated with the application code to identify a set of data needed to scan the application code with a scanning application. Based on the information obtained from the application metadata, the system identifies extraction processes that are needed to obtain the set of data. The system applies a set of one or more application-code scanners by implementing the extraction processes. The system presents in a graphical user interface (GUI) a set of results from scanning operations.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
83.
Processing Transaction Data At Different Levels Of Granularity
A system accesses transaction data associated with a plurality of transactions, and based on characteristics of the transaction data, determines a set of functions to be applied to the transaction data at different corresponding levels of granularity. Determining the set of functions includes determining parallel processing requirements corresponding to the set of functions and determining an execution order corresponding to the set of functions based on the parallel processing requirements. The system schedules parallel execution of (a) a first function on the transaction data at a first level of granularity to generate a first dataset having the first level of granularity, and (b) a second function on the transaction data at a second level of granularity to generate a second dataset having the second level of granularity.
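The parallel scheduling above can be sketched with a simple aggregation function applied at two levels of granularity; the field names and the use of a thread pool are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def aggregate(transactions, key_fields):
    """Sum transaction amounts at the level of granularity given by key_fields."""
    totals = defaultdict(float)
    for t in transactions:
        totals[tuple(t[k] for k in key_fields)] += t["amount"]
    return dict(totals)

transactions = [
    {"region": "east", "store": "e1", "amount": 10.0},
    {"region": "east", "store": "e2", "amount": 5.0},
    {"region": "west", "store": "w1", "amount": 7.0},
]

# Neither aggregation depends on the other's output, so the execution
# order allows both to be scheduled in parallel.
with ThreadPoolExecutor() as pool:
    by_region = pool.submit(aggregate, transactions, ["region"])
    by_store = pool.submit(aggregate, transactions, ["region", "store"])
    print(by_region.result())  # → {('east',): 15.0, ('west',): 7.0}
    print(by_store.result())
```

A function whose input were the region-level dataset, by contrast, would have to be ordered after the first aggregation rather than alongside it, which is the dependency analysis the abstract alludes to.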
Systems, methods, and other embodiments associated with automated fine-tuning of text generation for large language models are described herein. In one embodiment, a method accesses a collection of text samples. The text samples include a natural language text prompt that combines content and instructions. The method extracts the instructions from the text prompt. The method fine-tunes a large language model to generate text in natural language based on a text generation loss function that penalizes non-compliance with the extracted instructions by a generated text response to the text prompt. The method generates an evaluation score for performance of the tuned large language model as a text generator based on a value of the text generation loss function for a second generated text response. And, the method automatically signals that the fine-tuning of the tuned large language model is complete in response to the evaluation score satisfying a threshold.
Techniques are disclosed herein for managing ambiguous date mentions in natural language utterances in transforming natural language utterances to logical forms by encoding the uncertainties of the ambiguous date mentions and including the encoded uncertainties in the logical forms. In a training phase, training examples including natural language utterances, logical forms, and database schema information are automatically augmented and used to train a machine learning model to convert natural language utterances to logical form. In an inference phase, input database schema information is augmented and used by the trained machine learning model to convert an input natural language utterance to logical form.
A system and computer-implemented method include accessing a request for allocating graphical processing unit (GPU) resources for performing an operation. The request includes metadata identifying a client identifier associated with a client, a throughput, and a latency of the operation. A predicted resource limit for performing the operation is determined based on the metadata. A parameter of GPU resources is obtained. The parameter includes a status indicating whether a GPU resource is occupied for performing another operation. A GPU resource utilization value is determined for each node based on the status. The GPU resource utilization value indicates the amount of utilization of GPU resources of the corresponding node. The GPU resource utilization value of each node is compared with a pre-defined resource utilization threshold value. The GPU resources are re-scheduled based on the predicted resource limit. Further, a set of GPU resources is selected from the re-scheduled GPU resources for performing the operation.
A system and computer-implemented method include receiving a request for allocating graphical processing unit (GPU) resources for performing an operation. The request includes metadata identifying a client identifier (ID) associated with a client, a throughput, and a latency of the operation. A resource limit is determined for performing the operation based on the metadata. Attributes associated with each GPU resource of a plurality of GPU resources available for assignment are obtained. The attributes associated with each GPU resource are analyzed with respect to the resource limit. A set of GPU resources is identified from the plurality of GPU resources based on the analysis. A dedicated AI cluster is generated by patching the set of GPU resources within a single cluster. The dedicated AI cluster reserves a portion of a computation capacity of a computing system for a period of time, and the dedicated AI cluster is allocated to the client associated with the client ID.
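The attribute analysis and cluster patching described above can be illustrated as follows; the attribute names, the memory-based resource limit, and the reservation flag are hypothetical stand-ins for whatever attributes and limits a real allocator would use:

```python
def build_dedicated_cluster(gpu_resources, resource_limit):
    """Pick unoccupied GPUs whose attributes meet the resource limit, up to
    the requested count, and patch them into one dedicated cluster."""
    eligible = [g for g in gpu_resources
                if not g["occupied"] and g["memory_gb"] >= resource_limit["min_memory_gb"]]
    cluster = eligible[:resource_limit["gpu_count"]]
    if len(cluster) < resource_limit["gpu_count"]:
        raise RuntimeError("not enough free GPUs for the requested cluster")
    for g in cluster:
        g["occupied"] = True          # reserve the capacity for the client
    return [g["id"] for g in cluster]

gpus = [
    {"id": "g0", "occupied": True,  "memory_gb": 80},
    {"id": "g1", "occupied": False, "memory_gb": 40},
    {"id": "g2", "occupied": False, "memory_gb": 80},
    {"id": "g3", "occupied": False, "memory_gb": 80},
]
print(build_dedicated_cluster(gpus, {"gpu_count": 2, "min_memory_gb": 64}))  # → ['g2', 'g3']
```

Marking the selected GPUs as occupied models the reservation of a portion of the system's computation capacity for the requesting client.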
The present disclosure relates to resource allocation among a plurality of clients for using a cloud-based service, e.g., a generative artificial intelligence (GenAI) service. A first target amount of resource can be allocated to a first client, and a second target amount of resource can be allocated to a second client, for using the service. A request can be received from a third client for allocating resources. It can be estimated that (i) the first client is using a first subset of the first target amount and not using a second subset of the first target amount, and (ii) the second client is using a third subset of the second target amount and not using a fourth subset of the second target amount. It can be determined that the second subset is greater than the fourth subset. At least a portion of the second subset can be allocated as a third target amount of resource to the third client.
H04L 47/76 - Admission controlResource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 47/74 - Admission controlResource allocation measures in reaction to resource unavailability
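The reallocation arithmetic in the abstract above reduces to comparing unused slices and moving the largest one; a toy sketch with invented client names and unit-free resource amounts:

```python
def reallocate(allocations, usage, new_client):
    """Give the new client the largest unused slice of an existing allocation."""
    unused = {c: allocations[c] - usage[c] for c in allocations}
    donor = max(unused, key=unused.get)   # client with the most idle capacity
    grant = unused[donor]
    allocations[donor] -= grant           # shrink the donor's target amount
    allocations[new_client] = grant       # the new client's target amount
    return donor, grant

allocations = {"client1": 100, "client2": 80}
usage = {"client1": 40, "client2": 70}    # client1 idles 60, client2 idles 10
print(reallocate(allocations, usage, "client3"))  # → ('client1', 60)
print(allocations)
```

Here client1's unused slice (60) exceeds client2's (10), so it becomes the third client's target amount, mirroring the "second subset is greater than the fourth subset" determination.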
89.
NARRATIVE POINT OF VIEW MODIFICATION FOR CONTENT GENERATED BY A MACHINE-LEARNED MODEL
Techniques for modifying a narrative point of view for content generated by a machine-learned model, such as a large language model (LLM), are provided. In one technique, a first textual content that was generated by an LLM is accessed. A narrative point of view (NPOV) detection operation is performed on a first portion of the first textual content to identify a first NPOV corresponding to the first portion of the first textual content. Based on an output, of the NPOV detection operation, that indicates that the first NPOV does not meet one or more NPOV criteria, the first portion of the first textual content is modified to generate a modified textual content. The modified textual content is submitted to the LLM, causing the LLM to generate a second textual content.
Techniques for generating repetition-free text using a large language model (LLM) are provided. In one technique, textual content that was generated by an LLM is accessed, where the textual content comprises a plurality of sub-components including a first sub-component and a second sub-component. A first embedding that represents the first sub-component is generated and a second embedding that represents the second sub-component is generated. Based on a similarity between the first embedding and the second embedding, it is determined whether the second sub-component is repetitious with respect to the first sub-component. In response to determining that the second sub-component is repetitious with respect to the first sub-component, at least a portion of the second sub-component is removed from the textual content.
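The embedding-similarity filter above can be sketched with a toy bag-of-words embedding in place of a learned one; the threshold value and all text are illustrative assumptions:

```python
import math

def embed(text, vocab):
    """Toy bag-of-words embedding; a real system would use a learned encoder."""
    v = [0.0] * len(vocab)
    for word in text.lower().split():
        v[vocab.index(word)] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def drop_repetitions(sub_components, threshold=0.9):
    """Keep a sub-component only if its embedding is not too similar
    (cosine >= threshold) to that of any already-kept sub-component."""
    vocab = sorted({w for s in sub_components for w in s.lower().split()})
    kept, kept_embeddings = [], []
    for text in sub_components:
        e = embed(text, vocab)
        if all(sum(a * b for a, b in zip(e, k)) < threshold for k in kept_embeddings):
            kept.append(text)
            kept_embeddings.append(e)
    return kept

parts = ["the sky is blue", "grass is green", "the sky is blue"]
print(drop_repetitions(parts))  # → ['the sky is blue', 'grass is green']
```

With embeddings rather than exact string matching, paraphrased repetitions would also score above the threshold and be removed, which plain deduplication cannot do.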
Systems, methods, and other embodiments associated with automated fine-tuning of chatbot performance for large language models are described herein. In one embodiment, a method accesses a collection of sample conversations between two entities. An individual sample conversation includes one or more rounds of a natural language example prompt by a querent and an example response by an agent. The method fine-tunes an LLM to generate responses in natural language based on a chatbot loss function that evaluates first responses generated by the LLM to the example prompts by the querent. The method generates an evaluation score for performance of the tuned LLM as a chatbot based on second responses generated by the tuned LLM to test prompts from a test conversation. And, the method automatically signals that the fine-tuning of the tuned LLM is complete in response to the evaluation score satisfying a threshold.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
A summary generation system is disclosed that is configured to generate a summary for content to be summarized by identifying relevant chunks of information from the content using a large language model (LLM) and a set of questions. The set of questions enables the system to identify and retrieve relevant chunks of information. Each question undergoes a translation or transformation process to generate multiple question variants. The multiple question variants are used by the system to optimize the search to obtain relevant chunks of information. Then, using the multiple question variants and an LLM, the system extracts information (i.e., answers) from the relevant chunks of information. The summary generation system then collates the answers to create an accurate and comprehensive summary for the content to be summarized.
G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data miningICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
93.
INTERACTIVE, APPLICATION ADJUSTABLE, REENTRANT DESIGN FOR USER GESTURE RECORDING
Techniques for user gesture recording are provided. In one technique, while recording user actions with respect to a website, it is detected that a user entered text within a text field of a webpage of the website. An action pane is presented that includes a value text field and a test value text field. In response to the detection, the text is inserted into the test value text field of the action pane. An association between the text, the text field, and the test value text field is stored as part of a workflow. In a related technique, user input is received through the action pane, where the user input selects a reference, to a source of input, to include in the text field during execution of the workflow.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
Techniques for managing secure virtual card number (VCN) transactions are disclosed. A POS terminal that processes payments receives an instruction in a secure digital communication over a network to process a payment from a customer to a supplier. Based on receiving a payment request via a network, the POS terminal identifies a VCN associated with the request. The POS terminal validates the VCN and processes the payment request. The POS terminal communicates the VCN to the supplier's bank to initiate a funds transfer between the supplier's bank and the customer's bank that issued the VCN. Upon completion of the transaction, the banks confirm the transaction to the customer and the POS terminal.
G06Q 20/34 - Payment architectures, schemes or protocols characterised by the use of specific devices using cards, e.g. integrated circuit [IC] cards or magnetic cards
The present disclosure relates to LLM orchestration with vector store generation. An embeddings model may be selected to generate an embedding for a digital artifact. Metadata for the digital artifact may also be generated and stored in a vector store in association with the embedding. A user query may be received and categorized. One of a plurality of machine learning models may be selected based on the categorization of the user query. A prompt may be generated based at least in part on the user query, and the selected machine learning model may generate a response to the user query based at least in part on the prompt.
Techniques are disclosed herein for improving the performance of an end-to-end (E2E) Automatic Speech Recognition (ASR) model in a target domain. A set of test examples is generated. The set of test examples comprises multiple subsets of test examples, and each subset of test examples corresponds to a particular test category. A machine learning model is then used to convert audio samples of the subset of test examples to text transcripts. A word error rate is determined for each subset of test examples. A test category is then selected based on the word error rates, and a set of training examples is generated, from a selected subset of test examples, for training the ASR model in a particular target domain. The training examples are used to fine-tune the model in the target domain. The trained model is then deployed in a cloud infrastructure of a cloud service provider.
G10L 15/06 - Creation of reference templatesTraining of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
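The word error rate used for category selection above is a standard metric: the word-level edit distance between the reference transcript and the hypothesis, divided by the reference length. A self-contained sketch (the example sentences are invented):

```python
def word_error_rate(reference, hypothesis):
    """WER via Levenshtein distance over words: (S + D + I) / len(reference)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion ("a") and one substitution ("two" → "too"): 2 / 5 words.
print(word_error_rate("book a table for two", "book table for too"))  # → 0.4
```

Categories with the highest WER would be the ones selected to source additional training examples for fine-tuning.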
Techniques are provided for fine-tuning large language models (LLMs) to reduce the instability of LLM outputs with respect to prompt variations. In one technique, a plurality of prompts is stored. For each prompt of the plurality of prompts, a plurality of variants of that prompt is generated. A prompt generating LLM is fine-tuned based on that prompt and the plurality of variants. Each variant-prompt association (where the variant is generated based on the prompt and has an identical or similar meaning) is a training sample that is used to train or fine-tune the prompt generating LLM. The prompt generating LLM is configured to generate standardized prompts based on input prompts. In another technique, a response generating LLM is fine-tuned based on sets of training samples, each training sample in a set comprising a different variant of a prompt and a response that the response generating LLM generated based on the prompt.
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for identifying entities for automatic SOAP note generation. A text transcript is accessed and segmented into portions. The text transcript can correspond to an interaction between a first entity and a second entity. One or more entities for the respective portions are identified using one or more machine-learning models. Facts are extracted from the respective portions using the one or more machine-learning models based at least in part on the context of the respective portions. A SOAP note is generated using the one or more machine-learning models and based at least in part on the facts. The SOAP note can be stored in a database in association with at least one of the first entity and the second entity.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Techniques are disclosed for automatically generating Subjective, Objective, Assessment and Plan (SOAP) notes. Particularly, techniques are disclosed for automatic SOAP note generation using task decomposition. A text transcript is accessed and segmented into portions. The text transcript can correspond to an interaction between a first entity and a second entity. Machine-learning model prompts are used to extract entities and facts for the respective portions and generate SOAP note sections based at least in-part on the facts. A SOAP note is generated by combining the SOAP note sections. The SOAP note can be stored in a database in association with at least one of the first entity and the second entity.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Techniques are described for performing packet level data centric protection enforcement. Instead of being restricted to perimeter-based security and defining and creating rules that are difficult to maintain, techniques described herein allow users to create data-centric, intent-based policies that are enforced at different enforcement points within one or more networks. In some examples, a method comprises receiving a packet at an enforcement point (EP) within one or more networks that include a plurality of enforcement points (EPs); accessing enforcement data that indicates allowed communications between the EP and one or more other EPs, wherein the enforcement data are generated from a policy that specifies how traffic flows through the one or more networks and a determination of possible data movements between at least two EPs in the plurality of EPs; and enforcing the flow of the packet at the EP based on the enforcement data.